Hive >> mail # user >> Hive + mongoDB


Sandeep Nemuri 2013-09-11, 17:36
Jérôme Verdier 2013-09-11, 17:48
Jitendra Yadav 2013-09-11, 18:11
Russell Jurney 2013-09-11, 18:20
Sandeep Nemuri 2013-09-12, 08:32
Nitin Pawar 2013-09-12, 10:19
Sandeep Nemuri 2013-09-12, 10:54
Nitin Pawar 2013-09-12, 11:32
Re: Hive + mongoDB
How will we get the Mongo data into the mongo table?

Using this we can only create the table:

create external table mongo_users(id int, name string, age int)
stored by "org.yong3.hive.mongo.MongoStorageHandler"
with serdeproperties ( "mongo.column.mapping" = "_id,name,age" )
tblproperties ( "mongo.host" = "192.168.0.5", "mongo.port" = "11211",
"mongo.db" = "test", "mongo.user" = "testUser", "mongo.passwd" = "testPasswd",
"mongo.collection" = "users" );
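[Editor's note: once such an external table is defined, the data can be copied into a Hive-managed table with a plain INSERT ... SELECT, which is what the later replies suggest. A sketch, assuming a native table hive_users with the same schema (the table name here is illustrative):]

    -- Hypothetical native Hive table to receive a copy of the Mongo data
    CREATE TABLE hive_users (id INT, name STRING, age INT);

    -- Read through the Mongo-backed external table, write into Hive storage
    INSERT OVERWRITE TABLE hive_users
    SELECT id, name, age FROM mongo_users;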
On Thu, Sep 12, 2013 at 5:02 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:

> If you are importing from hive to mongo, why can't you just select from
> mongo table and insert into hive table?
>
>
> On Thu, Sep 12, 2013 at 4:24 PM, Sandeep Nemuri <[EMAIL PROTECTED]> wrote:
>
>> Hi Nitin Pawar,
>>                         I have checked that link. There, data is imported
>> from Hive to Mongo, but my requirement is to import data from Mongo to Hive.
>>
>>
>> On Thu, Sep 12, 2013 at 3:49 PM, Nitin Pawar <[EMAIL PROTECTED]> wrote:
>>
>>> Sandip,
>>>
>>> Did you try using hive-mongo (https://github.com/yc-huang/Hive-mongo)?
>>>
>>> It's pretty easy to use as well, if you want to start with analytics
>>> directly.
>>>
>>>
>>> On Thu, Sep 12, 2013 at 2:02 PM, Sandeep Nemuri <[EMAIL PROTECTED]> wrote:
>>>
>>>> Thanks all,
>>>> I am trying to import the data with this program,
>>>> but when I compiled the code I got errors.
>>>>
>>>> Here is the code
>>>>
>>>> import java.io.*;
>>>> import org.apache.commons.logging.*;
>>>> import org.apache.hadoop.conf.*;
>>>> import org.apache.hadoop.fs.Path;
>>>> import org.apache.hadoop.io.*;
>>>> import org.apache.hadoop.mapreduce.*;
>>>> import org.apache.hadoop.mapreduce.lib.output.*;
>>>> import org.bson.*;
>>>> import com.mongodb.hadoop.*;
>>>> import com.mongodb.hadoop.util.*;
>>>>
>>>> public class ImportWeblogsFromMongo {
>>>>
>>>>     private static final Log log =
>>>>             LogFactory.getLog(ImportWeblogsFromMongo.class);
>>>>
>>>>     // Map-only job: each BSON document read from Mongo becomes one
>>>>     // tab-separated line keyed by its md5 field.
>>>>     public static class ReadWeblogsFromMongo
>>>>             extends Mapper<Object, BSONObject, Text, Text> {
>>>>
>>>>         @Override
>>>>         public void map(Object key, BSONObject value, Context context)
>>>>                 throws IOException, InterruptedException {
>>>>
>>>>             System.out.println("Key: " + key);
>>>>             System.out.println("Value: " + value);
>>>>
>>>>             String md5  = value.get("md5").toString();
>>>>             String url  = value.get("url").toString();
>>>>             String date = value.get("date").toString();
>>>>             String time = value.get("time").toString();
>>>>             String ip   = value.get("ip").toString();
>>>>
>>>>             // TextOutputFormat already puts a tab between key and value,
>>>>             // so the value string should not start with one.
>>>>             String output = url + "\t" + date + "\t" + time + "\t" + ip;
>>>>
>>>>             context.write(new Text(md5), new Text(output));
>>>>         }
>>>>     }
>>>>
>>>>     public static void main(String[] args) throws Exception {
>>>>
>>>>         final Configuration conf = new Configuration();
>>>>         MongoConfigUtil.setInputURI(conf,
>>>>                 "mongodb://localhost:27017/mongo_hadoop.example");
>>>>         MongoConfigUtil.setCreateInputSplits(conf, false);
>>>>         System.out.println("Configuration: " + conf);
>>>>
>>>>         final Job job = new Job(conf, "Mongo Import");
>>>>         Path out = new Path("/user/mongo_data");
>>>>         FileOutputFormat.setOutputPath(job, out);
>>>>         job.setJarByClass(ImportWeblogsFromMongo.class);
>>>>         job.setMapperClass(ReadWeblogsFromMongo.class);
>>>>         job.setOutputKeyClass(Text.class);
>>>>         job.setOutputValueClass(Text.class);
>>>>         job.setInputFormatClass(MongoInputFormat.class);
>>>>         job.setOutputFormatClass(TextOutputFormat.class);
>>>>         job.setNumReduceTasks(0);
>>>>         System.exit(job.waitForCompletion(true) ? 0 : 1);
>>>>     }
>>>> }
>>>>
>>>>
>>>>
>>>> On Wed, Sep 11, 2013 at 11:50 PM, Russell Jurney <
>>>> [EMAIL PROTECTED]> wrote:
>>>>
>>>>> The docs are at
>>>>> https://github.com/mongodb/mongo-hadoop/tree/master/hive
>>>>>
>>>>> You need to build mongo-hadoop, and then use the documented syntax to
>>>>> create BSON tables in Hive.
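[Editor's note: per the mongo-hadoop Hive documentation linked above, the project ships its own storage handler, com.mongodb.hadoop.hive.MongoStorageHandler. A hedged sketch of that documented syntax; the URI, database, collection, and column names here are illustrative assumptions:]

    -- Hive table backed directly by a MongoDB collection via mongo-hadoop
    CREATE EXTERNAL TABLE mongo_weblogs (id STRING, url STRING, ip STRING)
    STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
    WITH SERDEPROPERTIES ('mongo.columns.mapping' = '{"id":"_id"}')
    TBLPROPERTIES ('mongo.uri' = 'mongodb://localhost:27017/test.weblogs');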
>>>>>
>>>>>
>>>>> On Wed, Sep 11, 2013 at 11:11 AM, Jitendra Yadav <
>>>>> [EMAIL PROTECTED]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> 1. You may use the Hadoop-MongoDB connector and create a MapReduce program

  Sandeep Nemuri
Nitin Pawar 2013-09-12, 12:53
Sandeep Nemuri 2013-09-13, 10:54
Nitin Pawar 2013-09-13, 16:04
Sandeep Nemuri 2013-09-19, 09:09
Nitin Pawar 2013-09-19, 09:20