HBase, mail # user - Reading table sequentially...


Re: Reading table sequentially...
Something Something 2010-01-12, 22:01
Cool.  That works.  Just a couple of quick questions to confirm:

All the keys returned by this code are guaranteed to be in order by key value, correct?

Also, for some other table I am retrieving all column names for a particular
key, and those all seem to be in the correct order as well.  Is this always
guaranteed?
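
For what it's worth, a minimal sketch against the 0.20-era client API used in
this thread that walks a scan and checks that rows come back in ascending
row-key order (the table name "mytable" and the class name are placeholders,
not anything from the thread):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanOrderCheck {
  public static void main(String[] args) throws Exception {
    // "mytable" is a placeholder table name.
    HTable table = new HTable(new HBaseConfiguration(), "mytable");
    ResultScanner scanner = table.getScanner(new Scan());
    try {
      byte[] previousRow = null;
      for (Result result : scanner) {
        byte[] row = result.getRow();
        // Scans hand rows back in ascending byte order of the row key.
        if (previousRow != null && Bytes.compareTo(previousRow, row) > 0) {
          throw new IllegalStateException("rows out of order at "
              + Bytes.toString(row));
        }
        previousRow = row;
      }
    } finally {
      scanner.close();
    }
  }
}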

On Tue, Jan 12, 2010 at 11:12 AM, stack <[EMAIL PROTECTED]> wrote:

> Set up the scanner and next it as you did previously.  Then on the Result
> object, do something like:
>
> for (KeyValue kv : result.raw()) {
>   System.out.println(Bytes.toString(kv.getRow()) + " " +
>       Bytes.toString(kv.getValue()));
> }
>
> St.Ack
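
Spelled out end to end, the loop above becomes something like the following
(a sketch only; the table name "mytable" is a placeholder, while the
info:estimate column comes from the scan output quoted further down):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrintEstimates {
  public static void main(String[] args) throws Exception {
    // "mytable" is a placeholder; the info:estimate column comes from the thread.
    HTable table = new HTable(new HBaseConfiguration(), "mytable");
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("info"), Bytes.toBytes("estimate"));
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result result : scanner) {
        // Each Result holds the KeyValues for one row, already sorted.
        for (KeyValue kv : result.raw()) {
          System.out.println(Bytes.toString(kv.getRow()) + "  "
              + Bytes.toString(kv.getValue()));
        }
      }
    } finally {
      scanner.close();
    }
  }
}

Against the rows quoted below, this should print lines of the form
"ABC_111  179.59".
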
>
> On Tue, Jan 12, 2010 at 10:24 AM, Something Something <
> [EMAIL PROTECTED]> wrote:
>
> > Sorry.  That was a typo.  In any case, it seems like I am using the wrong
> > API.
> >
> > Here's what my table contains:
> >
> >
> >  ABC_111    column=info:estimate, timestamp=1263319888463, value=179.59
> >  ABC_222    column=info:estimate, timestamp=1263319888463, value=191.50
> >  ABC_333    column=info:estimate, timestamp=1263319888463, value=180.65
> >  ABC_444    column=info:estimate, timestamp=1263319888463, value=183.63
> >  & so on....
> >
> >
> > I want to retrieve:
> >
> > ABC_111  179.59
> > ABC_222  191.50
> > ABC_333  180.65
> > ABC_444  183.63
> > & so on...
> >
> > What API should I use?  Please let me know.  Thanks for your help.
> >
> >
> > On Tue, Jan 12, 2010 at 9:36 AM, stack <[EMAIL PROTECTED]> wrote:
> >
> > > See below:
> > >
> > > On Tue, Jan 12, 2010 at 9:12 AM, Something Something <
> > > [EMAIL PROTECTED]> wrote:
> > > >
> > > >            NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> map = result.getMap();
> > > >
> > >
> > > Above returns a map keyed by families:
> > >
> > >
> > > http://hadoop.apache.org/hbase/docs/current/api/org/apache/hadoop/hbase/client/Result.html#getMap%28%29
> > >
> > >
> > >
> > >
> > > >            for (Map.Entry<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> entry : map.entrySet()) {
> > > >            byte[] key = entry.getKey();
> > > >
> > >
> > > This is the family name, not the row key.
> > >
> > >
> > >
> > > >            *LOG.info("key = " + Bytes.toString(key));*
> > > >            NavigableMap<byte[], NavigableMap<Long, byte[]>> value = entry.getValue();
> > > >
> > >
> > > This is a map keyed by column qualifiers.
> > >
> > >
> > >
> > > >              for (Entry<byte[], NavigableMap<Long, byte[]>> entry1 : value.entrySet()) {
> > > >                  byte[] key1 = entry1.getKey();
> > > >                  *LOG.info("key1 = " + Bytes.toString(key1));*
> > > >
> > >
> > >
> > > This is the column qualifier.
> > >
> > >
> > >
> > > >                  NavigableMap<byte[], NavigableMap<Long, byte[]>> value1 = entry.getValue();
> > > >
> > >
> > >
> > > I do not think you intended to do this.  I think you meant entry1, not
> > > 'entry', and the map type should be NavigableMap<Long, byte[]> rather than
> > > the one above.
> > >
> > > St.Ack
> > >
> > >
> > >
> > > >                  for (Entry<byte[], NavigableMap<Long, byte[]>> entry2 : value1.entrySet()) {
> > > >                      String key2 = Bytes.toString(entry2.getKey());
> > > >                      *LOG.info("key2 = " + key2);*
> > > >
> > > >                  }
> > > >              }
> > > >            }
> > > >        }
> > > >
> > >
> >
>
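
Putting St.Ack's corrections together, a cleaned-up version of the getMap()
traversal discussed above might look like this (a sketch only; the class name
and the dump helper are made up for illustration):

import java.util.Map;
import java.util.NavigableMap;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class ResultMapDump {

  // Walks Result.getMap(): family -> qualifier -> timestamp -> value.
  static void dump(Result result) {
    String row = Bytes.toString(result.getRow());
    NavigableMap<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> map =
        result.getMap();
    for (Map.Entry<byte[], NavigableMap<byte[], NavigableMap<Long, byte[]>>> family
        : map.entrySet()) {
      for (Map.Entry<byte[], NavigableMap<Long, byte[]>> qualifier
          : family.getValue().entrySet()) {
        // The innermost map is keyed by timestamp, one entry per stored version.
        for (Map.Entry<Long, byte[]> version : qualifier.getValue().entrySet()) {
          System.out.println(row + " "
              + Bytes.toString(family.getKey()) + ":"
              + Bytes.toString(qualifier.getKey())
              + " @" + version.getKey()
              + " = " + Bytes.toString(version.getValue()));
        }
      }
    }
  }
}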