I've recently picked up the work on CRUNCH-340 [1] to get a functioning
source and target for HCatalog. One of the issues I've run into is that the
named output is added to the JobID, which then makes its way into the
TaskAttemptID. The stack trace is below.

The issue is that the named output (e.g. 'out0') becomes part of the
TaskAttemptID, and when the HCat output committer tries to map between
o.a.h.mapreduce.TaskAttemptID and o.a.h.mapred.TaskAttemptID [2], it fails:
TaskAttemptID.forName expects the id to have exactly 6 parts separated by
underscores, and with the named output it has 7. If I remove the named
output from being set on the JobID, then everything works fine [3].
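
To make the parse failure concrete, here is a minimal standalone sketch.
The id strings are made up, and exactly where the named output lands in the
id is my reconstruction, but the 7-part string reproduces the same exception
that HCatMapRedUtil hits when it round-trips the id through forName:

  import org.apache.hadoop.mapred.TaskAttemptID;

  public class NamedOutputIdDemo {
    public static void main(String[] args) {
      // A well-formed attempt id has exactly 6 underscore-separated parts:
      // attempt_<jtIdentifier>_<jobId>_<m|r>_<taskId>_<attemptId>
      System.out.println(TaskAttemptID.forName("attempt_1234567890123_0001_m_000000_0"));

      // With the named output folded in, the string has 7 parts and forName
      // throws IllegalArgumentException("TaskAttemptId string : ... is not
      // properly formed").
      System.out.println(TaskAttemptID.forName("attempt_1234567890123_out0_0001_m_000000_0"));
    }
  }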

However, I am hesitant about that change. In the version of the code I am
working against (0.11.x at the moment) there is a comment stating that
certain output formats rely upon this behavior, but in the latest version of
the code in master that comment has been removed. I'm curious whether the
comment was removed because it is no longer true, and it is therefore safe
to stop adding the named output to the job id, or whether there is a
better/preferred way to handle the exception below.
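
In case it helps, here is a purely hypothetical sketch of what I mean by the
two options (the suffix value "out0" and its placement are assumptions on my
part; the real code is in CrunchOutputs [3] and may differ in detail):

  import org.apache.hadoop.mapreduce.JobID;
  import org.apache.hadoop.mapreduce.TaskAttemptID;
  import org.apache.hadoop.mapreduce.TaskID;
  import org.apache.hadoop.mapreduce.TaskType;

  public class JobIdNamedOutputSketch {
    public static void main(String[] args) {
      JobID base = new JobID("1234567890123", 1);

      // What I believe happens today: the named output is folded into the
      // job id, so every TaskAttemptID derived from it carries an extra
      // underscore-separated part that forName cannot parse.
      JobID withNamedOutput = new JobID(base.getJtIdentifier() + "_out0", base.getId());
      TaskAttemptID broken = new TaskAttemptID(new TaskID(withNamedOutput, TaskType.MAP, 0), 0);
      System.out.println(broken); // attempt_1234567890123_out0_0001_m_000000_0

      // The change I'm testing: keep the base job id, so the id round-trips
      // through TaskAttemptID.forName without complaint.
      TaskAttemptID clean = new TaskAttemptID(new TaskID(base, TaskType.MAP, 0), 0);
      System.out.println(clean); // attempt_1234567890123_0001_m_000000_0
    }
  }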

Error: java.lang.IllegalArgumentException: TaskAttemptId string :

[1] https://issues.apache.org/jira/browse/CRUNCH-340
[2] https://github.com/cloudera/hive/blob/cdh5.13.0-release/hcatalog/core/src/main/java/org/apache/hive/hcatalog/mapreduce/HCatMapRedUtil.java#L34
[3] https://github.com/apache/crunch/blob/master/crunch-core/src/main/java/org/apache/crunch/io/CrunchOutputs.java#L230