Re: large data and hbase
This is encouraging.

"Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by
running bin/start-hdfs.sh over in the HADOOP_HOME directory. You can ensure
it started properly by testing the *put* and *get* of files into the Hadoop
filesystem. HBase does not normally use the mapreduce daemons. These do not
need to be started."
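The put/get smoke test described in that quote can be sketched roughly as below. The file names and paths are illustrative assumptions, and this presumes HADOOP_HOME is set and the `hadoop` command is on the PATH:

```shell
# Start the HDFS daemons (NameNode and DataNodes). Note: on many Hadoop
# releases the script is named start-dfs.sh rather than start-hdfs.sh.
$HADOOP_HOME/bin/start-hdfs.sh

# Smoke-test the filesystem: write a small local file in, then read it back.
# /tmp/probe.txt is a hypothetical test file, not anything from the thread.
echo "hello hdfs" > /tmp/probe.txt
hadoop fs -put /tmp/probe.txt /tmp/probe.txt        # the *put* test
hadoop fs -get /tmp/probe.txt /tmp/probe-copy.txt   # the *get* test
hadoop fs -cat /tmp/probe.txt                       # read it back to stdout
```

If both commands succeed and the `-cat` echoes the file contents, HDFS is up; as the quote notes, the MapReduce daemons are not needed for this.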

On Mon, Jul 11, 2011 at 1:40 PM, Bharath Mundlapudi
<[EMAIL PROTECTED]> wrote:

> Another option to look at is Pig or Hive. These need MapReduce.
>
>
> -Bharath
>
>
>
> ________________________________
> From: Rita <[EMAIL PROTECTED]>
> To: "<[EMAIL PROTECTED]>" <[EMAIL PROTECTED]>
> Sent: Monday, July 11, 2011 4:31 AM
> Subject: large data and hbase
>
> I have a dataset which is several terabytes in size. I would like to query
> this data using hbase (sql). Would I need to setup mapreduce to use hbase?
> Currently the data is stored in hdfs and I am using `hdfs -cat ` to get the
> data and pipe it into stdin.
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>

--
--- Get your facts first, then you can distort them as you please.--
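For SQL-style queries over terabytes of files already sitting in HDFS, the Hive route Bharath suggests avoids HBase entirely: define an external table over the existing directory and query it with HiveQL, with each query running as MapReduce jobs. A rough sketch, where the table name, column layout, delimiter, and HDFS path are all hypothetical stand-ins for Rita's actual data:

```shell
# HiveQL sketch run through the Hive CLI; schema and location are assumptions.
hive -e "
CREATE EXTERNAL TABLE logs (ts STRING, host STRING, bytes BIGINT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/logs';    -- points at the existing HDFS directory, no copy

SELECT host, SUM(bytes)
FROM logs
GROUP BY host;            -- Hive compiles this into MapReduce jobs
"
```

Because the table is EXTERNAL, Hive only records metadata; the data stays where `hdfs -cat` already reads it, and dropping the table does not delete the files.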