avro-1.7.4 for python throws exception on Windows server 2008
Hello, friends!

I am a newbie with Avro and am trying to put my data into this container format, but it is failing with an exception.
I found that code like this:

# -*- coding: utf-8 -*-
from avro import schema
from avro import io
from avro import datafile

schema_object = schema.parse (u"""{
      "type": "record",
      "name": "unicode_values",
      "fields": [ { "name": "value", "type": "string"}]
    }""")

dfw = datafile.DataFileWriter (open ("test.avro", "wb"), io.DatumWriter (), schema_object)
dfw.append ({u"value" : u"95/44"})
dfw.append ({u"value" : u"238"})
dfw.append ({u"value" : u"103/20"})
dfw.append ({u"value" : u"117"})
dfw.append ({u"value" : u"936"})
dfw.append ({u"value" : u"226/407"})
dfw.append ({u"value" : u"217/43"})
dfw.append ({u"value" : u"45/4"})
dfw.append ({u"value" : u"654"})
dfw.append ({u"value" : u"217/23"})
dfw.append ({u"value" : u"1022200571470"})
dfw.close ()

# ======================
reader = datafile.DataFileReader (open ("test.avro", "r"), io.DatumReader())
for value in reader:
    print value
reader.close ()
fails with the following exception:
Traceback (most recent call last):
  File "sandbox/write_avro.py", line 28, in <module>
    for value in reader:
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\datafile.py", line 362, in next
    datum = self.datum_reader.read(self.datum_decoder)
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 445, in read
    return self.read_data(self.writers_schema, self.readers_schema, decoder)
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 490, in read_data
    return self.read_record(writers_schema, readers_schema, decoder)
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 690, in read_record
    field_val = self.read_data(field.type, readers_field.type, decoder)
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 468, in read_data
    return decoder.read_utf8()
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 233, in read_utf8
    return unicode(self.read_bytes(), "utf-8")
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 226, in read_bytes
    return self.read(self.read_long())
  File "C:\Anaconda\lib\site-packages\avro-1.7.4-py2.7.egg\avro\io.py", line 184, in read_long
    b = ord(self.read(1))
TypeError: ord() expected a character, but string of length 0 found
Can you tell me what I am doing wrong?
Is there any workaround to avoid this exception?
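
The only guess I have (I have not verified it) is the file mode: the writer opens the file with "wb", but the reader opens it with "r". On Windows, text mode translates line endings and treats the 0x1A byte as end of file, so the binary Avro data could be getting truncated, which would explain read(1) returning an empty string. A minimal sketch of the same reader with the file opened in binary mode:

# Reader sketch: identical to the reader above, except the container file is
# opened in binary mode ("rb") so Windows does not translate line endings or
# stop at a 0x1A byte.
from avro import io
from avro import datafile

reader = datafile.DataFileReader (open ("test.avro", "rb"), io.DatumReader ())
for value in reader:
    print value
reader.close ()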
Best regards,
Mezentsev Pavel