Pig dev mailing list: Re: [tw-hadoop-users] ERROR 2017: Internal error creating job configuration.

Julien Le Dem 2012-11-15, 17:36
Gabor Szabo 2012-11-15, 17:53
Re: [tw-hadoop-users] ERROR 2017: Internal error creating job configuration.
Aha, thanks Gabor. I'll just wait till Twadoop is building and let it fill in.

On Thu, Nov 15, 2012 at 9:53 AM, Gabor Szabo <[EMAIL PROTECTED]> wrote:

> Kurt, there are a lot of dependencies that this job expects at different
> places, and if you run it as a normal user, it will get confused. Most
> likely some input files are not found.
>
>
> On 11/15/2012 09:36 AM, Julien Le Dem wrote:
>
>> Hi Kurt,
>> From the stack trace, it looks like it runs into an error while
>> estimating the size of the input.
>> Are all of the paths it's looking for there in hdfs:///user/kurt?
>> Does it work with pig_11? Add --pig_version pig_11 to the oink command.
>> Also send out the command line you are using.
>> Thanks,
>> Julien
>>
>> On Thu, Nov 15, 2012 at 9:15 AM, Kurt Smith <[EMAIL PROTECTED]> wrote:
>>
>>> I'm getting this error when doing a manual run of the search_simplified
>>> twadoop query. This query has run fine before. Any idea what the issue
>>> is?
>>>
>>> Pig Stack Trace
>>> ---------------
>>> ERROR 2017: Internal error creating job configuration.
>>>
>>> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:738)
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:264)
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:150)
>>>         at org.apache.pig.PigServer.launchPlan(PigServer.java:1267)
>>>         at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1252)
>>>         at org.apache.pig.PigServer.execute(PigServer.java:1242)
>>>         at org.apache.pig.PigServer.executeBatch(PigServer.java:356)
>>>         at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:132)
>>>         at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:452)
>>>         at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:752)
>>>         at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:423)
>>>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
>>>         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
>>>         at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
>>>         at org.apache.pig.Main.run(Main.java:561)
>>>         at org.apache.pig.Main.main(Main.java:111)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
>>> Caused by: java.lang.NullPointerException
>>>         at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:971)
>>>         at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:944)
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getInputSize(JobControlCompiler.java:840)
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.estimateNumberOfReducers(JobControlCompiler.java:810)
>>>         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.adjustNumReducers(JobControlCompiler.java:750)
Kurt Smith
Senior Data Scientist, Analytics | Twitter, Inc
@kurtosis0 <https://twitter.com/intent/user?screen_name=kurtosis0>
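
For reference, the Caused by section shows the NullPointerException coming from org.apache.hadoop.fs.FileSystem.globStatus while JobControlCompiler estimates the number of reducers from the total input size, which fits the suggestions above about a missing input path. Below is a minimal sketch of that check, assuming the cluster configuration is on the classpath; the class name and the idea of passing the job's input paths as arguments are illustrative only and are not part of the oink/twadoop tooling.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not from the thread: pass the job's input paths or
// globs as arguments and it reports which ones resolve in HDFS. It mirrors
// the FileSystem.globStatus call made by JobControlCompiler.getInputSize
// in the trace above.
public class CheckPigInputs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // reads core-site.xml / hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        for (String arg : args) {
            FileStatus[] matches = fs.globStatus(new Path(arg));
            if (matches == null || matches.length == 0) {
                // An input that resolves to nothing is the kind of path that
                // can break the input-size estimation.
                System.out.println("MISSING: " + arg);
            } else {
                long bytes = 0;
                for (FileStatus status : matches) {
                    bytes += fs.getContentSummary(status.getPath()).getLength();
                }
                System.out.println("OK: " + arg + " -> " + matches.length
                        + " match(es), " + bytes + " bytes");
            }
        }
    }
}

Compiled against the cluster's Hadoop jars and run with hadoop jar (or skipped entirely in favour of hadoop fs -ls on each input location), this should show quickly whether hdfs:///user/kurt, or wherever the query reads from, is missing something.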