Sqoop >> mail # user >> WritableName can't load class


Re: WritableName can't load class
Hi YouPeng,
I think that using --class-name to specify the package name might cause issues, so I would suggest using the --class-name parameter only for the class name itself and --package-name for the desired package. You can find more details about these parameters in our user guide:

http://sqoop.apache.org/docs/1.4.3/SqoopUserGuide.html

Jarcec
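Following that advice, the import might be rewritten roughly as follows. This is only a sketch of the suggestion, reusing the connection details quoted below; the split between the two flags is an assumption, and the Sqoop user guide should be checked for how they interact:

```shell
# Sketch: keep --class-name to the bare class name and move the package
# into --package-name. (Check the Sqoop user guide: --class-name may
# override --package-name when both are given.)
sqoop import \
  --connect jdbc:oracle:thin:@10.167.14.225:1521:wxoss \
  --username XUJINGYU --password 123456 \
  --target-dir sqoop/NMS_CMTS_MEMORY_CDX3 \
  --query "select a.* from NMS_CMTS_MEMORY_CDX a where \$CONDITIONS" \
  --split-by a.CMTSID \
  --as-sequencefile \
  --class-name memoryseq \
  --package-name com.jhel
```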

On Mon, Apr 15, 2013 at 02:30:10PM +0800, YouPeng Yang wrote:
> Dear ALL
>
>   I have done an import job by running this:
>   $/home/sqoop-1.4.1-cdh4.1.2/bin/sqoop import --connect
> jdbc:oracle:thin:@10.167.14.225:1521:wxoss  -username XUJINGYU -password
> 123456  -target-dir sqoop/NMS_CMTS_MEMORY_CDX3 --query "select  a.* from
> NMS_CMTS_MEMORY_CDX a where \$CONDITIONS" --split-by a.CMTSID
> --as-sequencefile --class-name com.jhel.memoryseq
>
> Note: this is a SequenceFile import, and I set --class-name.
>
> After the import job succeeded, I intended to run an MRv2 job [1] (with
> Hadoop 2.0.0) to process the imported data files in HDFS. However, I got
> the exception [2].
>
> I have already copied memoryseq.class into the bin directory of my
> project:
> [root@Hadoop01 ~]# cp
>  /tmp/sqoop-hadoop/compile/893b75fc25d3ade0272ab8fa1db420ef/com/jhel/memoryseq.class
>  /home/hadoop/indigo_workspace/sqoopetl/bin/com/jhetl
>
> The exception still appears.
>
> Please help me.
>
> [1] The job settings are as follows:
> ====================================================
> public static void main(String[] args) throws IOException,
>         InterruptedException, ClassNotFoundException {
>     Configuration conf = new Configuration();
>     String[] otheArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
>     if (otheArgs.length != 2) {
>         System.err.println("Usage:aaaa");
>         System.exit(2);
>     }
>     @SuppressWarnings("deprecation")
>     Job job = new Job(conf, "Data test2");
>     job.setMapperClass(MEMMapper.class);
>     job.setReducerClass(MEMReducer.class);
>     job.setInputFormatClass(SequenceFileAsTextInputFormat.class);
>     job.setOutputKeyClass(Text.class);
>     job.setOutputValueClass(Text.class);
>     FileInputFormat.addInputPath(job, new Path(otheArgs[0]));
>     FileOutputFormat.setOutputPath(job, new Path(otheArgs[1]));
>     System.exit(job.waitForCompletion(true) ? 0 : 1);
> }
> ====================================================
>
> [2] ==================================================
> ...
> 2013-04-15 14:10:08,907 WARN  mapred.LocalJobRunner (LocalJobRunner.java:run(479)) - job_local_0001
> java.lang.Exception: java.lang.RuntimeException: java.io.IOException: WritableName can't load class: com.jhel.memoryseq
>     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:400)
> Caused by: java.lang.RuntimeException: java.io.IOException: WritableName can't load class: com.jhel.memoryseq
>     at org.apache.hadoop.io.SequenceFile$Reader.getValueClass(SequenceFile.java:1966)
>     at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1906)
>     at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1765)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1714)
>     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
>     at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:54)
>     at org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader.initialize(SequenceFileAsTextRecordReader.java:56)
>     at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:488)
>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:724)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:232)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
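A frequent cause of "WritableName can't load class" is that the Sqoop-generated value class is visible on the submitting machine but not on the task classpath. Note, incidentally, that the cp command in the question places the class under com/jhetl, while the exception looks for com.jhel.memoryseq. One possible workaround, not confirmed as the fix in this thread, is to package the generated class into a jar and ship it with the job via -libjars, which the GenericOptionsParser in the code above already supports. The driver class name below is a placeholder:

```shell
# Package the Sqoop-generated class into a jar
# (using the compile directory quoted in the question):
jar cf memoryseq.jar \
  -C /tmp/sqoop-hadoop/compile/893b75fc25d3ade0272ab8fa1db420ef \
  com/jhel/memoryseq.class

# Put it on the local classpath for job submission, and ship it to the
# tasks with -libjars (handled by GenericOptionsParser). "your.MainClass"
# is a placeholder for the actual driver class.
export HADOOP_CLASSPATH=memoryseq.jar
hadoop jar sqoopetl.jar your.MainClass -libjars memoryseq.jar \
  sqoop/NMS_CMTS_MEMORY_CDX3 output/
```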