Pig >> mail # user >> file system closed?


Lauren Blau 2012-10-21, 13:34
Richipal Singh 2012-10-21, 21:40

Re: file system closed?
I was not creating the filesystem myself. The script was like:
split into a and b
join a and b
filter join result.
That's where it fails. There is more after, but I tried storing after the
filter and it still fails. However, most of the time when it fails there
are no error messages anywhere I know to look.

Lauren

On Sun, Oct 21, 2012 at 5:40 PM, Richipal Singh <[EMAIL PROTECTED]> wrote:

> Lauren,
>      Can you post your Pig script?
> I don't know if this will help, but I have seen a similar error when I was
> creating a FileSystem through Java map reduce, for example:
>
> String uri = "localhost";
>
> Configuration conf = new Configuration();
>
> FileSystem fs = FileSystem.get(URI.create(uri), conf);
>
> This would produce a "Filesystem closed" error, whereas with
> String uri = "hdfs://localhost"; it would work fine.
>
> --
> Richipal Singh
>
>
>
> On Sun, Oct 21, 2012 at 9:34 AM, Lauren Blau <[EMAIL PROTECTED]> wrote:
>
> > I have a pig job that keeps failing near completion. After 3 runs (long
> > ones), I've finally found something out of the ordinary in a log:
> > Anyone have any ideas what could be causing this?
> >
> > Thanks
> >
> > $Proxy1.complete(Unknown Source)
> >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3566)
> >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3481)
> >         at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1133)
> >         at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:243)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:349)
> >         at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1616)
> >         at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:1586)
> > 2012-10-21 12:08:33,158 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> > 2012-10-21 12:08:33,159 WARN org.apache.hadoop.mapred.Child: Error running child
> > java.io.IOException: java.io.IOException: Filesystem closed
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:470)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.processOnePackageOutput(PigGenericMapReduce.java:433)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:413)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.reduce(PigGenericMapReduce.java:257)
> >         at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
> >         at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:572)
> >         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:414)
> >         at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Unknown Source)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> >         at org.apache.hadoop.mapred.Child.main(Child.java:264)
> > Caused by: java.io.IOException: Filesystem closed
> >         at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
> >         at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
> >         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:3236)
> >         at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:150)
> >         at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:132)
> >         at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:121)
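[Editor's note] The two clues in this thread fit together: the trace shows FileSystem$Cache$ClientFinalizer closing the cached filesystem while the reducer is still writing, and Richipal's point is that a URI without an "hdfs://" scheme makes FileSystem.get() resolve to the default filesystem rather than HDFS. The sketch below is a hypothetical, Hadoop-free mock of that caching behavior (MiniFs is not the real Hadoop API; class and method names are invented for illustration): get() returns one shared instance per scheme/host key, so close() through any reference breaks every other holder with the same "Filesystem closed" message.

```java
import java.io.IOException;
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Hadoop's FileSystem cache -- NOT the real API.
// Like FileSystem.get(uri, conf), get() hands back one shared instance per
// scheme/host key, so close() called through any reference (including a
// shutdown hook like the Cache$ClientFinalizer in the trace above)
// invalidates every other holder, which then sees "Filesystem closed".
public class MiniFs {
    private static final Map<String, MiniFs> CACHE = new HashMap<>();
    private boolean closed = false;

    private MiniFs() {}

    public static synchronized MiniFs get(String uriString) {
        URI uri = URI.create(uriString);
        // A bare "localhost" parses with a null scheme, so the real
        // FileSystem.get() would fall back to the configured default
        // filesystem instead of HDFS -- Richipal's observation above.
        String scheme = (uri.getScheme() == null) ? "default" : uri.getScheme();
        String key = scheme + "://" + uri.getHost();
        return CACHE.computeIfAbsent(key, k -> new MiniFs());
    }

    public void write(String data) throws IOException {
        if (closed) {
            // Same message DFSClient.checkOpen() throws in the trace above.
            throw new IOException("Filesystem closed");
        }
    }

    public void close() { closed = true; }
}
```

In Hadoop itself, the usual ways around a prematurely closed shared instance are to avoid calling close() on a filesystem obtained from FileSystem.get(), or (in later releases) to request an uncached instance via FileSystem.newInstance().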