I can work around it by using:

bin/pyspark --conf spark.sql.catalogImplementation=in-memory

for now, but I still wonder what's going on with HiveConf.

On Thu, Jun 14, 2018 at 11:37 AM, Li Jin <[EMAIL PROTECTED]> wrote: