Accumulo >> mail # user >> importdirectory in accumulo


Re: importdirectory in accumulo
No go. I get the same error:
Exception in thread "main" java.lang.ClassNotFoundException:
/opt/accumulo/lib/examples-simple-1/4/2-sources/jar

Note how it says simple-1/4/2... I would expect simple-1.4.2.
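One plausible cause (an assumption on my part, not confirmed in this thread): the unquoted glob `examples-simple-*[^c].jar` matches both the main jar and the sources jar, so tool.sh receives the sources jar where it expects the main-class name, and Hadoop's RunJar then tries Class.forName() on a file path. A quick sketch of the expansion:

```shell
# Recreate the jar names from the ls output later in this thread and show
# how the pattern expands (the directory is a throwaway placeholder).
cd "$(mktemp -d)"
touch examples-simple-1.4.2.jar \
      examples-simple-1.4.2-sources.jar \
      examples-simple-1.4.2-javadoc.jar

# [^c] only excludes names whose character before ".jar" is "c", i.e. the
# javadoc jar; the sources jar still matches.
echo examples-simple-*[^c].jar
# -> examples-simple-1.4.2-sources.jar examples-simple-1.4.2.jar
```

If that is what is happening, naming the jar explicitly (/opt/accumulo/lib/examples-simple-1.4.2.jar) for both the jar argument and -libjars would avoid the ambiguity.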

On Wed, Apr 3, 2013 at 4:16 PM, Christopher <[EMAIL PROTECTED]> wrote:

> Try with -libjars:
>
> /opt/accumulo/bin/tool.sh /opt/accumulo/lib/examples-simple-*[^c].jar
> -libjars  /opt/accumulo/lib/examples-simple-*[^c].jar
> org.apache.accumulo.examples.simple.mapreduce.bulk.BulkIngestExample
> myinstance zookeepers user pswd tableName inputDir tmp/bulkWork
>
> --
> Christopher L Tubbs II
> http://gravatar.com/ctubbsii
>
>
> On Wed, Apr 3, 2013 at 4:11 PM, Aji Janis <[EMAIL PROTECTED]> wrote:
> > I am trying to run the BulkIngest example (on 1.4.2 accumulo), and I am
> > not able to run the following steps. Here is the error I get:
> >
> > [user@mynode bulk]$ /opt/accumulo/bin/tool.sh
> > /opt/accumulo/lib/examples-simple-*[^c].jar
> > org.apache.accumulo.examples.simple.mapreduce.bulk.BulkIngestExample
> > myinstance zookeepers user pswd tableName inputDir tmp/bulkWork
> > Exception in thread "main" java.lang.ClassNotFoundException:
> > /opt/accumulo/lib/examples-simple-1/4/2-sources/jar
> >         at java.lang.Class.forName0(Native Method)
> >         at java.lang.Class.forName(Class.java:264)
> >         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
> > [user@mynode bulk]$
> > [user@mynode bulk]$
> > [user@mynode bulk]$
> > [user@mynode bulk]$ ls /opt/accumulo/lib/
> > accumulo-core-1.4.2.jar
> > accumulo-start-1.4.2.jar
> > commons-collections-3.2.jar
> > commons-logging-1.0.4.jar
> > jline-0.9.94.jar
> > accumulo-core-1.4.2-javadoc.jar
> > accumulo-start-1.4.2-javadoc.jar
> > commons-configuration-1.5.jar
> > commons-logging-api-1.0.4.jar
> > libthrift-0.6.1.jar
> > accumulo-core-1.4.2-sources.jar
> > accumulo-start-1.4.2-sources.jar
> > commons-io-1.4.jar
> > examples-simple-1.4.2.jar
> > log4j-1.2.16.jar
> > accumulo-server-1.4.2.jar
> > cloudtrace-1.4.2.jar
> > commons-jci-core-1.0.jar
> > examples-simple-1.4.2-javadoc.jar
> > native
> > accumulo-server-1.4.2-javadoc.jar
> > cloudtrace-1.4.2-javadoc.jar
> > commons-jci-fam-1.0.jar
> > examples-simple-1.4.2-sources.jar
> > wikisearch-ingest-1.4.2-javadoc.jar
> > accumulo-server-1.4.2-sources.jar
> > cloudtrace-1.4.2-sources.jar
> > commons-lang-2.4.jar
> > ext
> > wikisearch-query-1.4.2-javadoc.jar
> >
> > [user@mynode bulk]$
> >
> >
> > Clearly, the libraries and source file exist, so I am not sure what's
> > going on. I tried putting in
> > /opt/accumulo/lib/examples-simple-1.4.2-sources.jar instead, and then it
> > complains that BulkIngestExample is not found.
> >
> > Suggestions?
> >
> >
> > On Wed, Apr 3, 2013 at 2:36 PM, Eric Newton <[EMAIL PROTECTED]>
> wrote:
> >>
> >> You will have to write your own InputFormat class which will parse your
> >> file and pass records to your reducer.
> >>
> >> -Eric
> >>
> >>
> >> On Wed, Apr 3, 2013 at 2:29 PM, Aji Janis <[EMAIL PROTECTED]> wrote:
> >>>
> >>> Looking at the BulkIngestExample, it uses GenerateTestData and creates
> >>> a .txt file which contains Key: Value pairs, and correct me if I am
> >>> wrong, but each new line is a new row, right?
> >>>
> >>> I need to know how to include families and qualifiers as well. In
> >>> other words:
> >>>
> >>> 1) Do I set up a .txt file that can be converted into an Accumulo
> >>> RFile using AccumuloFileOutputFormat, which can then be imported into
> >>> my table?
> >>>
> >>> 2) If yes, what is the format of the .txt file?
> >>>
> >>>
> >>>
> >>>
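There is no fixed .txt format here: with a custom MapReduce job, the input layout is whatever your own parser expects; Accumulo only requires that the final RFiles be sorted. A hypothetical tab-separated layout carrying family and qualifier (every field name and separator below is an invented convention, not an Accumulo requirement) might look like:

```shell
# One record per line, tab-separated as row <TAB> family <TAB> qualifier
# <TAB> value. The names (cf, cq, row_0001, ...) are made up for
# illustration.
printf 'row_0001\tcf\tcq\tvalue_a\n' >  /tmp/bulk-input.txt
printf 'row_0002\tcf\tcq\tvalue_b\n' >> /tmp/bulk-input.txt

# A custom InputFormat (or mapper) would split each line on the tab:
cut -f1 /tmp/bulk-input.txt
# -> row_0001
#    row_0002
```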
> >>> On Wed, Apr 3, 2013 at 2:19 PM, Eric Newton <[EMAIL PROTECTED]>
> >>> wrote:
> >>>>
> >>>> Your data needs to be in the RFile format, and more importantly it
> needs
> >>>> to be sorted.
> >>>>
> >>>> It's handy to use a Map/Reduce job to convert/sort your data.  See the
> >>>> BulkIngestExample.
> >>>>
> >>>> -Eric
> >>>>
> >>>>
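Putting the pieces of this advice together, the end-to-end flow being described is roughly the following sketch (paths, instance names, and the shell prompt are placeholders, and the exact importdirectory arguments should be checked against the 1.4.2 docs):

```shell
# 1. Put the raw text input into HDFS for the MapReduce job to read:
#      hadoop fs -put bulk-input.txt /user/me/bulk/input/
#
# 2. Run a MapReduce job (BulkIngestExample, or your own job with a custom
#    InputFormat) that writes *sorted* RFiles via AccumuloFileOutputFormat
#    into an HDFS directory such as /user/me/bulk/files.
#
# 3. From the Accumulo shell, bulk-import the generated files, giving a
#    directory for any files that fail to load:
#      root@myinstance tableName> importdirectory /user/me/bulk/files /user/me/bulk/failures false
```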
> >>>> On Wed, Apr 3, 2013 at 2:15 PM, Aji Janis <[EMAIL PROTECTED]> wrote:
> >>>>>
> >>>>> I have some data in a text file in the following format.