Re: increase "running scans" in monitor?
Marc Reichman 2013-04-04, 14:47
I think I've concluded that my map task is somewhat CPU-bound, and that
scans run while I'm not in my map code; thus, if I'm running a lot of
scans, I'm not spending enough time in my map task. Sound reasonable? We
were pointed to this by a customer who suggested we might have a
setting wrong, since we were "not seeing enough scans" given the map task slot
Our use of Accumulo is fairly basic at this point. Our map task could work
fine with a SequenceFile or MapFile directly, but we went with HBase and
then Accumulo because we will need to add/remove single pieces of data
regularly, which doesn't really scale with the direct-HDFS approach. As
such, my MapReduce jobs are using AccumuloRowInputFormat and operating
on every row in sequence, with no other explicit iterators, no locality, no
I will run listscans the next time a job is running and see if it paints a better
picture.
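For reference, listscans is run from inside the Accumulo shell; a minimal sketch (the username, password, and instance name below are placeholders):

```
$ accumulo shell -u user -p pass
user@instance> listscans
```

Exact output columns vary by version, but each scan session's state is shown, and per the reply below, the sessions in the running state should line up with the monitor's "Running Scans" count.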
Thanks for writing back!
On Wed, Apr 3, 2013 at 8:15 PM, Keith Turner <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 2, 2013 at 11:35 AM, Marc Reichman <[EMAIL PROTECTED]>
> > I apologize, I neglected to include row counts. For the above split sizes
> > mentioned, there are roughly ~55K rows, ~300K rows, ~800K rows, and ~2M
> > rows.
> > I'm not necessarily hard-set on the idea that lower "running scans" are
> > affecting my overall job time negatively, and I realize that my jobs
> > themselves may simply be starving the tablet servers (CPU-wise). In my
> > experience thus far, running all 8 CPU cores per node leads to quicker
> > job completion than pulling one core out of the mix to give the tablet
> > server more breathing room.
> Scans in Accumulo fetch batches of key/values. When a scan is
> fetching one of these batches and storing it in a buffer on the tablet
> server, it's counted as running. While that batch is being serialized
> and sent to the client, it's not counted as running. In my experience,
> the speed at which a batch of key/values can be read from RFiles is
> much faster than the speed at which a batch can be serialized, sent to
> the client, and then deserialized. Maybe this explains what you are
> seeing.
> Have you tried running listscans in the shell while your map reduce
> job is running? This will show all of the mappers scan sessions. For
> each scan session you can see its state, the running state should
> correspond to the run count on the monitor page.
> I suspect if you ran a map reduce job that pushed a lot of work into
> iterators on the tablet servers, then you would see much higher
> running scans counts. For example, if your mappers set up a filter
> that only returned 1/20th of the data, then scans would spend a lot
> more time reading a batch of data relative to the time spent
> transmitting a batch of data.
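The accounting described above can be illustrated with some toy arithmetic. The millisecond figures below are invented for illustration, not measurements from a real cluster:

```java
// Sketch: a scan is counted as "running" only while a batch is being read
// into the tablet server's buffer, not while it is serialized/sent/deserialized.
// So the observed running fraction is roughly readTime / (readTime + transferTime).
public class RunningFraction {
    static double runningFraction(double readMs, double transferMs) {
        return readMs / (readMs + transferMs);
    }

    public static void main(String[] args) {
        // Fast RFile read, comparatively slow serialize/send/deserialize:
        // the scan rarely shows as running.
        System.out.println(runningFraction(5, 95));   // 0.05

        // A server-side filter keeping 1/20th of the data makes each
        // returned batch cost ~20x the read time, so the scan shows as
        // running roughly half the time.
        System.out.println(runningFraction(100, 95)); // ~0.51
    }
}
```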
> > On Tue, Apr 2, 2013 at 10:20 AM, Marc Reichman <[EMAIL PROTECTED]>
> > wrote:
> >> Hi Josh,
> >> Thanks for writing back. I am doing all explicit splits using addSplits
> >> in the Java API, since the keyspace is easy to divide evenly. Depending on
> >> table size for some of these experiments, I've had 128, 256, 512, or
> >> 1024 splits. My jobs are executing properly, MR-wise, in the sense that
> >> I do have a proper number of map tasks created (matching the split counts
> >> above, respectively). My concern is that the jobs may not be quite as busy
> >> as they can be, dataflow-wise, and I think the "Running Scans" per
> >> table/tablet server seem to be good indicators of that.
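Evenly dividing an md5 keyspace as described above can be sketched as follows. The split computation is plain Java; the table name and the commented-out TableOperations call are assumptions about how the splits would be applied:

```java
import java.math.BigInteger;
import java.util.TreeSet;

// Sketch: compute N-1 evenly spaced split points over the 128-bit md5
// keyspace, rendered as 32-char zero-padded hex so they sort like the
// hex row keys described in the thread.
public class EvenSplits {
    public static TreeSet<String> hexSplits(int numTablets) {
        TreeSet<String> splits = new TreeSet<>();
        BigInteger step = BigInteger.ONE.shiftLeft(128)       // md5 is 128 bits
                .divide(BigInteger.valueOf(numTablets));
        for (int i = 1; i < numTablets; i++) {
            // zero-pad to the full 32 hex chars so splits sort lexically
            splits.add(String.format("%032x", step.multiply(BigInteger.valueOf(i))));
        }
        return splits;
    }

    public static void main(String[] args) {
        TreeSet<String> splits = hexSplits(256);
        System.out.println(splits.size() + " splits, first: " + splits.first());
        // With a live Connector, the splits would then be applied roughly as:
        //   connector.tableOperations().addSplits("mytable", textSplits);
        // where textSplits is a SortedSet<org.apache.hadoop.io.Text> built
        // from the strings above.
    }
}
```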
> >> My data is a 32-byte key (an md5 value), and I have one column family
> >> with 3 columns containing "bigger" data, anywhere from 50-100k to an
> >> occasional 10M-15M piece.
> >> On Tue, Apr 2, 2013 at 10:06 AM, Josh Elser <[EMAIL PROTECTED]>
> >>> Hi Marc,
> >>> How many tablets are in the table you're running MR over (see the
> >>> monitor)? Might adding some more splits to your table (`addsplits` in