Avro user mailing list — Re: Is it possible to append to an already existing avro file

Thread:
- Michael Malak 2013-02-01, 19:32
- Doug Cutting 2013-02-06, 00:08
- Michael Malak 2013-02-06, 00:10
- Doug Cutting 2013-02-06, 00:27
- Michael Malak 2013-02-06, 03:30
- Harsh J 2013-02-06, 18:17
- Michael Malak 2013-02-07, 00:42
- Harsh J 2013-02-07, 16:28
- Doug Cutting 2013-02-07, 16:51
- Harsh J 2013-02-07, 16:56
- Michael Malak 2013-02-07, 16:42
- Ken Krugler 2013-02-06, 18:03
- TrevniUser 2013-07-08, 16:29

Re: Is it possible to append to an already existing avro file
Since the exception is thrown from java.io.FileInputStream#open, it's
trying to append to a local file, not one in HDFS.

You're passing 'new File(...)' to appendTo, when you should probably
be passing 'new FsInput(...)'.

Doug

On Mon, Jul 8, 2013 at 9:29 AM, TrevniUser <[EMAIL PROTECTED]> wrote:
> I was following this thread for a problem I am facing while using
> SortedKeyValueFiles.
>
> Below is the piece of code that tries to obtain the appropriate writer based
> on whether I am appending or creating a new file:
>
> OutputStream dataOutputStream;
> if (!fileSystem.exists(dataFilePath)) {
>     dataOutputStream = fileSystem.create(dataFilePath);
>     mDataFileWriter = new DataFileWriter<GenericRecord>(datumWriter)
>         .setSyncInterval(1 << 20)
>         .create(mRecordSchema, dataOutputStream);
> } else {
>     dataOutputStream = fileSystem.append(dataFilePath);
>     mDataFileWriter = new DataFileWriter<GenericRecord>(datumWriter)
>         .setSyncInterval(1 << 20)
>         .appendTo(new File(options.getPath() + DATA_FILENAME));
> }
>
> but it fails with this:
>
> java.io.FileNotFoundException: /CHANGELOG/data (No such file or directory)
>         at java.io.FileInputStream.open(Native Method)
>         at java.io.FileInputStream.<init>(FileInputStream.java:120)
>         at org.apache.avro.file.SeekableFileInput.<init>(SeekableFileInput.java:29)
>         at org.apache.avro.file.DataFileWriter.appendTo(DataFileWriter.java:149)
>         at
> com.abc.kepler.datasink.hdfs.util.SortedKeyValueFile$Writer.<init>(SortedKeyValueFile.java:597)
>         at
> com.abc.kepler.datasink.hdfs.util.ChangeLogUtil.getChangeLogWriter(ChangeLogUtil.java:84)
>         at
> com.abc.kepler.datasink.hdfs.HDFSDataSinkChangeLog.append(HDFSDataSinkChangeLog.java:219)
>         at
> com.abc.kepler.datasink.hdfs.HDFSDataSinkChangesTest.writeDataSingleEntityKeyDefaultLocation(HDFSDataSinkChangesTest.java:1036)
>         at
> com.abc.kepler.datasink.hdfs.HDFSDataSinkChangesTest.javadocExampleTest(HDFSDataSinkChangesTest.java:645)
>
> So, is the avro writer not able to locate the file on HDFS? Could you
> please share some pointers on what could be leading to this?
>
>
>
> --
> View this message in context: http://apache-avro.679487.n3.nabble.com/Is-it-possible-to-append-to-an-already-existing-avro-file-tp3762049p4027785.html
> Sent from the Avro - Users mailing list archive at Nabble.com.
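
Doug's suggestion above can be sketched as follows. This is a minimal, hedged rewrite of the quoted branch, assuming Avro's `org.apache.avro.mapred.FsInput` (an HDFS-aware `SeekableInput`) and the `DataFileWriter.appendTo(SeekableInput, OutputStream)` overload; the names `dataFilePath`, `fileSystem`, and `mRecordSchema` are carried over from the quoted code, and `conf` is a hypothetical Hadoop `Configuration` the caller would supply:

```java
import java.io.OutputStream;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.mapred.FsInput;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AvroHdfsAppendSketch {

    // Open a DataFileWriter that either creates a new Avro file on the
    // given FileSystem or appends to an existing one. The key difference
    // from the quoted code: the append branch reads the existing file's
    // header through FsInput (which goes via the FileSystem API) instead
    // of java.io.File (which only works for local files).
    static DataFileWriter<GenericRecord> openWriter(
            FileSystem fileSystem, Path dataFilePath,
            Schema mRecordSchema, Configuration conf) throws Exception {

        DataFileWriter<GenericRecord> writer =
                new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(mRecordSchema))
                        .setSyncInterval(1 << 20);

        if (!fileSystem.exists(dataFilePath)) {
            OutputStream out = fileSystem.create(dataFilePath);
            return writer.create(mRecordSchema, out);
        }

        // FsInput lets appendTo read the existing header/schema from HDFS;
        // fileSystem.append() supplies the stream new records are written to.
        FsInput existing = new FsInput(dataFilePath, conf);
        OutputStream out = fileSystem.append(dataFilePath);
        return writer.appendTo(existing, out);
    }
}
```

Note that `fileSystem.append()` requires append support on the underlying file system (e.g. `dfs.support.append` on older HDFS versions), which is a separate concern from the `FileNotFoundException` above.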
- TrevniUser 2013-07-09, 17:24