Re: Run hive queries, and collect job information
Nitin Pawar 2013-01-30, 11:30
For all the queries you run as user1, the Hive CLI stores its command history
in the .hivehistory file in that user's home directory (please check the limits on how many queries it retains).
For all the jobs the Hive CLI runs, it keeps the details in /tmp/user.name/.
All of these locations are configurable in hive-site.xml.
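For reference, the property behind the /tmp/user.name location is hive.querylog.location (the value shown below is the stock default); a minimal hive-site.xml fragment might look like:

```xml
<!-- hive-site.xml: directory where the Hive CLI writes per-session
     query history/log files. Defaults to /tmp/${user.name}. -->
<property>
  <name>hive.querylog.location</name>
  <value>/tmp/${user.name}</value>
</property>
```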
On Wed, Jan 30, 2013 at 3:55 PM, Qiang Wang <[EMAIL PROTECTED]> wrote:
> Every Hive query has a history file, and you can get this info from the
> Hive history file.
> The following Java code can serve as an example:
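(The Java code itself did not survive in this archive. As a rough illustration of the same idea, here is a sketch in Python that pulls the Hadoop job ids out of a Hive history file. The line format and the key names, e.g. TASK_HADOOP_ID and QUERY_ID, are assumptions based on Hive 0.x logs and may differ across versions; the sample text is hypothetical.)

```python
import re

# Hypothetical sample of a Hive history file (format varies by Hive
# version; the TASK_HADOOP_ID / QUERY_ID keys are assumptions).
SAMPLE_HISTORY = '''\
QueryStart QUERY_STRING="select count(*) from t" QUERY_ID="user1_20130130113030" TIME="1359545400000"
TaskStart TASK_ID="Stage-1" QUERY_ID="user1_20130130113030" TIME="1359545401000"
TaskEnd TASK_RET_CODE="0" TASK_HADOOP_ID="job_201301301130_0001" QUERY_ID="user1_20130130113030" TASK_ID="Stage-1" TIME="1359545460000"
QueryEnd QUERY_ID="user1_20130130113030" QUERY_RET_CODE="0" TIME="1359545461000"
'''

def hadoop_job_ids(history_text):
    """Collect the MR job ids (TASK_HADOOP_ID entries) keyed by query id."""
    jobs = {}
    for line in history_text.splitlines():
        qid = re.search(r'QUERY_ID="([^"]+)"', line)
        jid = re.search(r'TASK_HADOOP_ID="(job_[^"]+)"', line)
        if qid and jid:
            jobs.setdefault(qid.group(1), []).append(jid.group(1))
    return jobs

print(hadoop_job_ids(SAMPLE_HISTORY))
```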
> 2013/1/30 Mathieu Despriee <[EMAIL PROTECTED]>
>> Hi folks,
>> I would like to run a list of generated HIVE queries. For each, I would
>> like to retrieve the MR job_id (or ids, in case of multiple stages). And
>> then, with this job_id, collect statistics from job tracker (cumulative
>> CPU, read bytes...)
>> How can I send HIVE queries from a bash or python script, and retrieve
>> the job_id(s) ?
>> For the 2nd part (collecting stats for the job), we're using an MRv1
>> Hadoop cluster, so I don't have the AppMaster REST API
>> <http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/MapredAppMasterRest.html>.
>> I'm about to collect data from the jobtracker web UI. Any better ideas?
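A complementary approach to the history-file parsing above: drive the Hive CLI from a script and scrape the job ids from its progress output, which the CLI prints on stderr. A minimal Python sketch, assuming `hive` is on PATH and that your Hive version prints lines of the form "Starting Job = job_..., Tracking URL = ..." (that wording is an assumption from Hive 0.x; adjust the pattern for your version). The sample stderr text is hypothetical.

```python
import re
import subprocess

def run_hive(query):
    """Run a query with the Hive CLI and return (stdout, stderr).
    Assumes 'hive' is on PATH; job progress goes to stderr."""
    p = subprocess.run(["hive", "-e", query], capture_output=True, text=True)
    return p.stdout, p.stderr

def extract_job_ids(stderr_text):
    """Pull MR job ids out of the CLI's progress output.  The
    'Starting Job = job_...' wording is version-dependent."""
    return re.findall(r"Starting Job = (job_\S+?),", stderr_text)

# Hypothetical stderr snippet from a two-stage query:
sample = (
    "Starting Job = job_201301301130_0001, Tracking URL = http://jt:50030/jobdetails.jsp?jobid=job_201301301130_0001\n"
    "Starting Job = job_201301301130_0002, Tracking URL = http://jt:50030/jobdetails.jsp?jobid=job_201301301130_0002\n"
)
print(extract_job_ids(sample))
```

With the ids in hand, on an MRv1 cluster you can also ask the framework directly with `hadoop job -status job_...`, which prints completion and counter information, rather than scraping the jobtracker web UI.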