Thanks for the help. For others who happen upon this thread, the problem
was indeed on the consumer side: Spark (0.9.1) needs a bit of help setting
the Kafka consumer properties for big messages.

    // Set up Kafka with explicit consumer properties to allow big messages.
    // The "group.id" and "zookeeper.connection.timeout.ms" keys were blank
    // in the original message; those are the standard names for the values
    // being passed here.
    val kafkaParams = Map[String, String](
      "zookeeper.connect" -> zkQuorum,
      "group.id" -> group,
      "zookeeper.connection.timeout.ms" -> "10000",
      "fetch.message.max.bytes" -> "10485760",    // 10MB (broker default is 1MB)
      "fetch.size" -> "10485760")    // possibly not needed
    val lines = kafka.KafkaUtils.createStream[String, String,
      StringDecoder, StringDecoder](
        ssc, kafkaParams, topicpMap,
        StorageLevel.MEMORY_AND_DISK_SER_2)  // tail of the call was truncated;
                                             // this is the usual example storage level
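For anyone who wants to sanity-check the parameter map outside a running job, here is a minimal self-contained sketch. The zkQuorum and group values are placeholders, and the key names assume the Kafka 0.8 high-level consumer config:

```scala
object KafkaParamsSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder values; in the real job these come from your config.
    val zkQuorum = "zk1:2181,zk2:2181"
    val group = "big-message-consumer"

    // 10MB cap so the consumer can fetch messages larger than the 1MB default.
    val maxFetchBytes = 10 * 1024 * 1024

    val kafkaParams = Map[String, String](
      "zookeeper.connect" -> zkQuorum,
      "group.id" -> group,
      "zookeeper.connection.timeout.ms" -> "10000",
      "fetch.message.max.bytes" -> maxFetchBytes.toString)

    // fetch.message.max.bytes must be at least as large as the broker's
    // message.max.bytes, or oversized messages will stall the stream.
    println(kafkaParams("fetch.message.max.bytes"))
  }
}
```

Note that raising the consumer-side fetch limit only helps if the broker is also configured to accept messages that large in the first place.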

Sorry about all the messages on this topic for those of you who aren't
getting digests.
On Fri, Jun 27, 2014 at 10:43 AM, Louis Clark <[EMAIL PROTECTED]> wrote: