


Re: Question about how to add the debug info into the hive core jar
Hi Yong,

Have you tried running the Hive query in debug mode? The Hive log level can be changed by passing the following conf when running the Hive client:
 
hive -hiveconf hive.root.logger=ALL,console -e "DDL statement;"
hive -hiveconf hive.root.logger=ALL,console -f ddl.sql
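
If the goal is to add more trace output inside the SerDe itself before rebuilding the jar, the usual pattern is a guarded debug call. A minimal, self-contained sketch of that pattern follows; note that Hive's own code uses commons-logging (LOG.debug(...)), so java.util.logging stands in here only to keep the example runnable, and the class and method names are illustrative, not Hive's:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of the kind of debug logging one might add inside
// a SerDe's deserialize path while chasing a deserialization exception.
// Hive itself uses commons-logging; java.util.logging is used here so
// the example compiles standalone.
public class SerDeDebugSketch {
    private static final Logger LOG =
            Logger.getLogger(SerDeDebugSketch.class.getName());

    // Pure helper: one-line summary of the raw row being deserialized,
    // the sort of detail worth logging before the failing parse.
    static String describeRow(byte[] rawRow, int expectedFields) {
        return "raw bytes=" + rawRow.length
                + ", expected fields=" + expectedFields;
    }

    // Guarded debug call: the string is only built when the level is on.
    static void logRow(byte[] rawRow, int expectedFields) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(describeRow(rawRow, expectedFields));
        }
    }

    public static void main(String[] args) {
        // \u0001 is Hive's default field delimiter for text rows.
        logRow("a\u0001b\u0001c".getBytes(), 3);
    }
}
```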
 
Hope this helps

 
Thanks
On Mar 20, 2013, at 1:45 PM, java8964 java8964 <[EMAIL PROTECTED]> wrote:

> Hi,
>
> I have Hadoop running in pseudo-distributed mode on my Linux box. Right now I am facing a problem with Hive, which throws an exception on a table whose data uses my custom SerDe and InputFormat classes.
>
> To help me trace the root cause, I need to modify the code of org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe to add more debug logging, to understand why the exception happens.
>
> After I modify the Hive code, I can compile it and generate a new hive-serde.jar with the same name as the release version; only the size has changed.
>
> Now I put my new hive-serde.jar under the $HIVE_HOME/lib folder, replacing the old one, and rerun the query that failed. After the failure, when I check $HADOOP_HOME/logs/user_logs/, the exception stack trace still looks as if it were generated by the original hive-serde classes: the line numbers shown in the log don't match the new code I changed to add the debug information.
>
> My question is: given this newly compiled hive-serde.jar, besides $HIVE_HOME/lib, where else should I put it?
>
> 1) This is a pseudo-distributed environment. Everything (namenode, datanode, jobtracker and tasktracker) is running in one box.
> 2) After I replaced hive-serde.jar with my new jar, I even stopped all the Hadoop Java processes and restarted them.
> 3) But when I run the query in the Hive session, I still see logs generated by the old hive-serde.jar classes. Why?
>
> Thanks for any help.
>
> Yong
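
One way to check which jar actually supplied LazySimpleSerDe at runtime is to ask the class for its code source. The sketch below uses the standard Class.getProtectionDomain().getCodeSource() API; the class name WhichJar is illustrative, and printing the location of the SerDe class from the running JVM would show whether the rebuilt jar or a stale copy elsewhere on the classpath is being used:

```java
import java.security.CodeSource;

// Minimal sketch: report where a class was loaded from, to confirm
// whether a rebuilt jar or a stale copy on the classpath is in use.
public class WhichJar {
    // Returns the code-source location of a class, or null for
    // bootstrap classes (which report no code source).
    static String codeSourceOf(Class<?> c) {
        CodeSource cs = c.getProtectionDomain().getCodeSource();
        return cs == null ? null : cs.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // In the Hive scenario one would pass the SerDe class name,
        // e.g. "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe".
        String name = args.length > 0 ? args[0] : WhichJar.class.getName();
        System.out.println(name + " loaded from: "
                + codeSourceOf(Class.forName(name)));
    }
}
```

Running this in the same JVM context that executes the query (rather than inspecting jar files on disk) is what distinguishes "the jar I replaced" from "the jar the class loader actually picked up".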
