Hadoop Streaming mangles Python-produced Avro
I've got a rather simple script that takes Twitter data in JSON and turns
it into an Avro file.

from avro import schema, datafile, io
import json, sys

def main():
    if len(sys.argv) < 2:
        print "Usage: cat input.json | python2.7 JSONtoAvro.py output"
        return

    # Parse the Avro schema and open the output container file.
    s = schema.parse(open("tweet.avsc").read())
    f = open(sys.argv[1], 'wb')

    writer = datafile.DataFileWriter(f, io.DatumWriter(), s, codec='deflate')

    failed = 0

    for line in sys.stdin:
        line = line.strip()

        # Skip lines that aren't valid JSON.
        try:
            data = json.loads(line)
        except ValueError:
            continue

        # Append the tweet; count records that don't match the schema.
        try:
            writer.append(data)
        except io.AvroTypeException:
            print line
            failed += 1

    writer.close()

    print str(failed) + " failed in schema"

if __name__ == '__main__':
    main()

From there, I use the resulting Avro file as input to a basic Hadoop
Streaming script (also in Python) that just pulls out certain elements
of each tweet. However, when I do this, the input the script receives
appears to be mangled JSON: parsing usually fails on a stray \u in the
middle of the tweet body or the user-defined description.

The Streaming script is rather basic -- it reads from sys.stdin and
attempts to parse the JSON string using the json package.
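
For reference, the mapper is essentially along the lines of the sketch
below (simplified: the real tweetMapper.py also handles the -a flag and
the keywords/follow lists shipped via -files, and the field names here
are just illustrative):

#!/usr/bin/env python2.7
# Minimal sketch of the streaming mapper: read JSON tweets from stdin,
# pull out a couple of fields, and emit them as tab-separated output.
import json, sys

def main():
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        try:
            tweet = json.loads(line)
        except ValueError:
            # This is where the mangled records show up.
            sys.stderr.write("bad JSON: " + line + "\n")
            continue
        user = tweet.get("user", {}).get("screen_name", "")
        text = tweet.get("text", "")
        print (user + "\t" + text).encode("utf-8")

if __name__ == '__main__':
    main()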

Here is the bash script I use to invoke Hadoop Streaming:

jars=/usr/lib/hadoop/lib/avro-1.7.1.cloudera.2.jar,/usr/lib/hive/lib/avro-mapred-1.7.1.cloudera.2.jar

hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -files $jars,$HOME/sandbox/hadoop/streaming/map/tweetMapper.py,$HOME/sandbox/hadoop/streaming/data/keywords.txt,$HOME/sandbox/hadoop/streaming/data/follow-r3.txt \
    -libjars $jars \
    -input /user/ahanna/avrotest/avrotest.json.avro \
    -output output \
    -mapper "tweetMapper.py -a" \
    -reducer org.apache.hadoop.mapred.lib.IdentityReducer \
    -inputformat org.apache.avro.mapred.AvroAsTextInputFormat \
    -numReduceTasks 1

I'm starting to think this might be a bug in
org.apache.avro.mapred.AvroAsTextInputFormat. Has anyone seen this before?
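
To rule out the writer side, a quick sanity check (a sketch, assuming the
Python avro 1.7 API and an output file named output.avro, which is
illustrative) is to read the container file back locally and confirm the
records survive the round trip before blaming the input format:

# Sketch: read the Avro container file back with the Python avro
# library and re-serialize a few records to JSON to eyeball the
# text fields.
from avro import datafile, io
import json

reader = datafile.DataFileReader(open("output.avro", "rb"), io.DatumReader())
for i, record in enumerate(reader):
    print json.dumps(record)[:200]
    if i >= 10:
        break
reader.close()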

--
Alexander Hanna
PhD Student, Department of Sociology
University of Wisconsin-Madison
http://alex-hanna.com
@alexhanna