Hi everyone,

I'm currently running Zeppelin on the Spark master node using the
AWS-provided Zeppelin install. I'm trying to get the notebook set up
so multiple devs can use it (and the Spark cluster) concurrently. I
have the Spark interpreter set to instantiate 'Per Note' in 'isolated'
processes, and I also have 'spark.dynamicAllocation.enabled' set to
'true' so the multiple Spark contexts can share the cluster.
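
For context, here's a rough Scala sketch of what I understand each
isolated, per-note interpreter process to be doing. I'm assuming Spark
2.x's SparkSession here (with Spark 1.x it would be a HiveContext on
top of the SparkContext), and the real settings come from the Zeppelin
interpreter UI, so the app name and the explicit builder call are just
illustrative:

    import org.apache.spark.sql.SparkSession

    // One SparkSession/SparkContext per note, each in its own JVM.
    val spark = SparkSession.builder()
      .appName("zeppelin-note")                           // illustrative name
      .config("spark.dynamicAllocation.enabled", "true")  // share executors across contexts
      .config("spark.shuffle.service.enabled", "true")    // external shuffle service, needed for dynamic allocation on YARN
      .enableHiveSupport()                                // this is where the Derby metastore gets booted
      .getOrCreate()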

The problem I'm seeing is that when the second Spark context tries to
instantiate Hive, it starts throwing errors because the Derby
metastore database has already been booted (by the other context).
The full stack trace is available here [1]. How do I go about working
around this? Is there a way to have it use another database, or is
this a limitation?
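
In case it clarifies what I'm asking: by 'another database' I mean
something along the lines of pointing the metastore at a shared
external database instead of the embedded, single-process Derby one.
The sketch below is only what I'm imagining, not something I have
working. The MySQL host, credentials, and the idea of passing the
javax.jdo properties through spark.hadoop.* (rather than editing
hive-site.xml on the node) are all my assumptions:

    import org.apache.spark.sql.SparkSession

    // Hypothetical: point every interpreter process at one shared
    // metastore database instead of a per-process embedded Derby.
    val spark = SparkSession.builder()
      .config("spark.hadoop.javax.jdo.option.ConnectionURL",
              "jdbc:mysql://metastore-host:3306/hive_metastore?createDatabaseIfNotExist=true")
      .config("spark.hadoop.javax.jdo.option.ConnectionDriverName",
              "com.mysql.jdbc.Driver")
      .enableHiveSupport()
      .getOrCreate()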

Thanks for any help!

[1] https://gist.github.com/aheyne/8d84eaedefb997f248b6e88c1b9e1e34

Austin L. Heyne