Re: Hive query started map task being killed during execution
Dean Wampler 2013-03-08, 22:16
Do you have more than one Hive process running? It looks like you're using
embedded Derby, which only supports one connection at a time. Also, you have
to start Hive from the same directory every time, because that is where the
metastore "database" is written, unless you edit the JDBC connection property
in the Hive config file to point to a fixed path. Here's what I use:
<description>JDBC connect string for a JDBC metastore</description>
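That description line comes from the `javax.jdo.option.ConnectionURL` property in hive-site.xml. The full property block would look roughly like the following sketch; the databaseName path is a placeholder, not the actual value from the original message:

```xml
<!-- Sketch of the hive-site.xml property referred to above.
     The databaseName path is a placeholder -- point it at a fixed,
     absolute location so Hive finds the same Derby metastore
     regardless of the directory it is started from. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/absolute/path/to/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
```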
On Fri, Mar 8, 2013 at 4:09 PM, Dileep Kumar <[EMAIL PROTECTED]> wrote:
> Hi All,
> I am running a Hive query which does an insert into a table.
> From the symptoms it looks like it has to do with some setting, but I am
> not able to figure out which one.
> When I submit the query it starts 2130 map tasks in the job; 150 of them
> complete fine without any error, then the next batch of 75 gets killed,
> and all of the tasks after that get killed as well.
> When I submit a similar query on a smaller table, it starts only about
> 135 map tasks, runs to completion without any error, and does the insert
> into the appropriate table.
> I don't find any obvious error messages in any of the task logs, apart from:
> 08:54:06,910 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
> 08:41:06,060 INFO org.apache.hadoop.hive.ql.exec.MapOperator:
> 08:46:54,390 ERROR org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher:
> Error during instantiating JDBC driver org.apache.derby.jdbc.EmbeddedDriver.
> 08:46:54,394 ERROR o.apache.hadoop.hive.ql.exec.FileSinkOperator:
> StatsPublishing error: cannot connect to database
> Please suggest if I need to set anything in Hive when I invoke this query.
> The query that runs successfully has a lot fewer rows compared to the one
> that gets killed.
*Dean Wampler, Ph.D.*