Re: Efficient Tablet Merging [SEC=UNOFFICIAL]
Eric Newton 2013-10-04, 01:20
Any errors on those servers?  Each server should be checking periodically
for compactions; some crazy errors might escape error handling, though that
is rare these days.

Are you experiencing any table-level errors?  Unable to read or write files?

How full is HDFS?

If you scan the !METADATA table, are you seeing any trend in the tablets
that have problems?
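
For example, something like this in the shell will dump the per-tablet file
entries (the "file" column family in the !METADATA layout):

  scan -t !METADATA -c file -np

A tablet extent with an unusually long run of file entries is a likely
candidate.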

At this point, we're looking for logged anomalies, the earlier the better,
and for anything red or yellow on the monitor pages.
On Thu, Oct 3, 2013 at 8:43 PM, Dickson, Matt MR <
[EMAIL PROTECTED]> wrote:

> *UNOFFICIAL*
> We have restarted the tablet servers that host the tablets with large
> numbers of files and did not see any majcs run.
>
> Some more details:
> On 3 of our nodes we have 10-15 times the number of entries that are on
> the other nodes.  When I view the tablets for one of these nodes, there
> are 2 tablets with almost 10 times the number of entries of the others.
>
> When we query on the date rowids, the queries now hang. There are
> several scans running on the 3 nodes with the higher entry counts, and
> they are not completing. Can I cancel these?
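>
> (We can watch them from the shell, e.g.:
>
>   listscans
>
> which lists the active scans per tablet server, but I don't see a way to
> cancel an individual scan from there.)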
>
> In the logs we are getting "tablet ..... has too many files, batch lookup
> can not run"
>
> At this point I'm stuck for ideas, so any suggestions would be great.
>
>  ------------------------------
> *From:* Eric Newton [mailto:[EMAIL PROTECTED]]
> *Sent:* Thursday, 3 October 2013 23:52
>
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Efficient Tablet Merging [SEC=UNOFFICIAL]
>
>  You should have a major compaction running if your tablet has too many
> files.  If you don't, something is wrong. It does take some time to
> re-write 10G of data.
>
> If many merges occurred on a single tablet server, you may have these
> many-file tablets on the same server, and there are not enough major
> compaction threads to re-write those files right away.  If that's true, you
> may wish to restart the tablet server in order to get the tablets pushed to
> other idle servers.
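>
> (You can also raise the number of concurrent major compaction threads;
> I believe the property is the one below, but check it against your
> version, and note it may need a tablet server restart to take effect:
>
>   config -s tserver.compaction.major.concurrent.max=8
> )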
>
> Again, if you don't have major compactions running, you will want to start
> looking for other problems.
>
> -Eric
>
>
>
> On Thu, Oct 3, 2013 at 2:29 AM, Dickson, Matt MR <
> [EMAIL PROTECTED]> wrote:
>
>> *UNOFFICIAL*
>> Hi Eric,
>>
>> We have gone with the second, more conservative option. We changed our
>> split threshold to 10GB and then ran a merge over a week's worth of
>> tablets, which has resulted in one tablet with a massive number of files.
>> We then ran a query over that range and it returns a message saying:
>>
>> Tablet has too many files (3n;20130914;20130907...) retrying...
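>>
>> (For reference, the shell commands were along these lines; the table
>> name here is a placeholder:
>>
>>   config -t mytable -s table.split.threshold=10G
>>   merge -t mytable -b 20130907 -e 20130914
>> )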
>>
>> We assumed that when the merge was done a major compaction would start,
>> notice that the tablet is too large, and split it back into 10GB tablets.
>> We assumed we would not have to start any compaction manually, and that
>> one would instead be scheduled at some point after the merge finished.
>>
>> We have completed three separate merges of week-long ranges and have now
>> identified 3 tablet extents with too many files.
>>
>> Can you please explain what is supposed to happen, and whether the
>> compact command needs to be run for those ranges after the merge (or
>> whether it happens automatically, as we have not seen any start)?
>>
>> Cheers
>> Matt
>>
>>  ------------------------------
>>  *From:* Eric Newton [mailto:[EMAIL PROTECTED]]
>> *Sent:* Thursday, 3 October 2013 13:28
>>  *To:* [EMAIL PROTECTED]
>> *Subject:* Re: Efficient Tablet Merging [SEC=UNOFFICIAL]
>>
>>   I'll use ASCII graphics to demonstrate the size of a tablet.
>>
>> Small: []
>> Medium: [ ]
>> Large: [  ]
>>
>> Think of it like this... if you are running age-off... you probably have
>> lots of little buckets of rows at the beginning and larger buckets at the
>> end:
>>
>> [][][][][][][][][]...[ ][ ][ ][ ][ ][  ][  ][    ][    ][    ][    ][    ][    ]
>>
>> What you probably want is something like this: