Hive, mail # user - Creating Indexes


Re: Creating Indexes
Shreepadma Venugopalan 2012-11-03, 00:06
Hi Peter,

While it looks like the map-reduce task may have succeeded, the ALTER INDEX
itself failed. You should look at the execution log to see what the
exception is. Without knowing why the DDLTask failed, it's hard to
pinpoint the problem.
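(A minimal sketch for digging out that DDLTask stack trace. It assumes the
default Hive CLI execution-log location, /tmp/${USER}/hive.log, which is
set by hive.log.dir and hive.log.file in hive-log4j.properties; adjust the
path if your installation overrides them.)

```shell
# Assumed default Hive CLI log location; change if hive.log.dir is overridden.
LOG="/tmp/${USER}/hive.log"
if [ -f "$LOG" ]; then
  # Print the failing line plus the 20 lines after it (the stack trace).
  grep -n -A 20 "FAILED: Execution Error" "$LOG"
else
  echo "no execution log found at $LOG"
fi
```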

As for the original problem with the jar: as Dean pointed out, for some odd
reason the jar was not on the classpath prior to the ADD JAR.

Thanks,
Shreepadma

On Fri, Nov 2, 2012 at 4:59 PM, Peter Marron <
[EMAIL PROTECTED]> wrote:

> Hi Dean,
>
> At this stage I'm really not worried about this being a hack.
> I just want to get it to work, and I'm grateful for all your help.
> I did as you suggested and now, as far as I can see, the Map/Reduce
> has succeeded. When I look in the log for the last reduce I no longer
> find an error. However, this is the output from the hive command
> session:
>
> MapReduce Total cumulative CPU time: 0 days 1 hours 14 minutes 51 seconds 360 msec
> Ended Job = job_201211021743_0001
> Loading data to table default.default__score_bigindex__
> Deleted hdfs://localhost/data/warehouse/default__score_bigindex__
> Invalid alter operation: Unable to alter index.
> Table default.default__score_bigindex__ stats: [num_partitions: 0, num_files: 138, num_rows: 0, total_size: 446609024, raw_data_size: 0]
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
> MapReduce Jobs Launched:
> Job 0: Map: 511  Reduce: 138   Accumulative CPU: 4491.36 sec   HDFS Read: 137123460712 HDFS Write: 446609024 SUCESS
> Total MapReduce CPU Time Spent: 0 days 1 hours 14 minutes 51 seconds 360 msec
> hive>
>
> I find this very confusing. We have the bit where it says "Job 0: ...
> SUCCESS" and this seems to fit with the fact that I can't find errors in
> the Map/Reduce. On the other hand we have the bit where it says: "Invalid
> alter operation: Unable to alter index."
>
> So has it successfully created the index or not? And if not, then what
> do I do next? Is there somewhere else where it records Hive errors as
> opposed to Map/Reduce errors?
>
> Regards,
>
> Peter Marron
>
> *From:* Dean Wampler [mailto:[EMAIL PROTECTED]]
> *Sent:* 02 November 2012 14:03
> *To:* [EMAIL PROTECTED]
> *Subject:* Re: Creating Indexes
>
> Oh, I saw this line in your Hive output and just assumed you were running
> in a cluster:
>
> Hadoop job information for Stage-1: number of mappers: 511; number of
> reducers: 138
>
> I haven't tried running a job that big in pseudo-distributed mode either,
> but that's beside the point.
>
> So it seems to be an issue with indexing, but it still begs the question
> of why derby isn't on the classpath for the task. I would try using the
> ADD JAR command, which copies the jar around the "cluster" and puts it
> on the classpath. It's what you would use with UDFs, for example:
>
> ADD JAR /path/to/derby.jar;
> ALTER INDEX ...;
>
> It's a huge hack, but it just might work.
>
> dean
>
> On Fri, Nov 2, 2012 at 3:44 AM, Peter Marron <
> [EMAIL PROTECTED]> wrote:
>
> Hi Dean,
>
> I'm running everything on a single physical machine in pseudo-distributed
> mode.
>
> Well it certainly looks like the reducer is looking for a derby.jar,
> although I must confess I don't really understand why it would be doing
> that. In an effort to fix that I copied the derby.jar (derby-10.4.2.0.jar)
> into the Hadoop directory, where I assume that the reducer would be able
> to find it. However I get exactly the same problem as before.
> Is there some particular place that I should put the derby.jar to make this