Hadoop user mailing list: large data and hbase


Rita 2011-07-11, 11:31
Robert Evans 2011-07-11, 14:54
Bharath Mundlapudi 2011-07-11, 17:40

Re: large data and hbase
This is encouraging.

"Make sure HDFS is running first. Start and stop the Hadoop HDFS daemons by
running bin/start-hdfs.sh over in the HADOOP_HOME directory. You can ensure
it started properly by testing the *put* and *get* of files into the Hadoop
filesystem. HBase does not normally use the mapreduce daemons. These do not
need to be started."
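
As a minimal sketch of that check (assuming a single-node setup run from
HADOOP_HOME; note that stock Hadoop releases name the script
bin/start-dfs.sh, not start-hdfs.sh as the quoted passage has it):

  # start the HDFS daemons (NameNode and DataNodes)
  $ bin/start-dfs.sh

  # round-trip a small file through HDFS to confirm it is working
  $ bin/hadoop fs -put /etc/hosts /tmp/hosts-test
  $ bin/hadoop fs -get /tmp/hosts-test /tmp/hosts-copy
  $ bin/hadoop fs -rm /tmp/hosts-test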

On Mon, Jul 11, 2011 at 1:40 PM, Bharath Mundlapudi
<[EMAIL PROTECTED]> wrote:

> Another option to look at is Pig or Hive. These need MapReduce.
>
>
> -Bharath
>
>
>
> ________________________________
> From: Rita <[EMAIL PROTECTED]>
> To: "<[EMAIL PROTECTED]>" <[EMAIL PROTECTED]>
> Sent: Monday, July 11, 2011 4:31 AM
> Subject: large data and hbase
>
> I have a dataset which is several terabytes in size. I would like to query
> this data using hbase (sql). Would I need to set up mapreduce to use hbase?
> Currently the data is stored in hdfs and I am using `hdfs -cat` to get the
> data and pipe it into stdin.
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>

--
--- Get your facts first, then you can distort them as you please.--
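
For the SQL-over-HDFS use case described above, Bharath's Hive suggestion
would look roughly like the sketch below. The table name, columns, delimiter,
and HDFS path are all hypothetical; the point is that Hive layers SQL on top
of files already sitting in HDFS, with each query compiling to a MapReduce
job, which is why the MapReduce daemons must be running for this route:

  # hypothetical: expose existing HDFS files as a Hive external table
  # and query them with SQL (runs as a MapReduce job)
  $ bin/hive -e "
      CREATE EXTERNAL TABLE events (id STRING, ts STRING, value DOUBLE)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
      LOCATION '/data/events';
      SELECT id, COUNT(*) FROM events GROUP BY id;"

HBase itself does not speak SQL, so for ad-hoc queries over static files a
Hive table pointed at the existing data may be simpler than loading
everything into HBase first.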
Harsh J 2011-07-12, 13:01
Rita 2011-07-13, 10:29
Harsh J 2011-07-13, 12:26