Re: Creating compressed data with Sqoop
Hi Santosh,
almost :-). When you click the "map" link on this page, you should get a page listing the map tasks. Clicking on any one task should bring up a table with all of its task attempts, and on that page you should see a link to the log.
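
On a stock MRv1 cluster that click path corresponds roughly to URLs like the following; the host names and the default web ports 50030 (JobTracker) and 50060 (TaskTracker) are placeholders for your setup, and for a job that has already finished the History Viewer serves the equivalent *history.jsp pages:

    http://<jobtracker-host>:50030/jobdetails.jsp?jobid=job_201301081859_0035              (job summary)
    http://<jobtracker-host>:50030/jobtasks.jsp?jobid=job_201301081859_0035&type=map       (the "map" link)
    http://<jobtracker-host>:50030/taskdetails.jsp?tipid=task_201301081859_0035_m_000000   (one task and its attempts)
    http://<tasktracker-host>:50060/tasklog?attemptid=attempt_201301081859_0035_m_000000_0&all=true   (log of one attempt)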

Jarcec

On Wed, Jan 09, 2013 at 07:45:44PM +0800, Santosh Achhra wrote:
> Hi Jarcec,
>
> Hopefully this is the log which you had requested.
>
> all MAP task list for 0035_1357731518922_saachhra
>
> Task Id                          Start Time     Finish Time            Error
> task_201301081859_0035_m_000000  9/01 11:38:46  9/01 11:38:52 (6sec)
>
>
> After I click on "all MAP task list for 0035_1357731518922_saachhra", this
> is what I get
>
> Hadoop Job 0114_1357651667175_saachhra on History Viewer
>
> User: saachhra
> JobName: TABLE.jar
> JobConf:
> hdfs://host:8020/user/saachhra/.staging/job_201301072354_0114/job.xml
> Job-ACLs: All users are allowed
> Submitted At: 8-Jan-2013 13:27:47
> Launched At: 8-Jan-2013 13:27:47 (0sec)
> Finished At: 8-Jan-2013 13:28:00 (12sec)
> Status: SUCCESS
> Analyse This Job
> Kind     Total tasks  Successful  Failed  Killed  Start Time           Finish Time
> Setup    1            1           0       0       8-Jan-2013 13:27:50  8-Jan-2013 13:27:52 (1sec)
> Map      1            1           0       0       8-Jan-2013 13:27:53  8-Jan-2013 13:27:58 (5sec)
> Reduce   0            0           0       0
> Cleanup  1            1           0       0       8-Jan-2013 13:27:58  8-Jan-2013 13:28:00 (2sec)
>
>
>
> Counter (Map / Reduce / Total)
> File System Counters
>   FILE: Number of bytes read  0 / 0 / 0
>   FILE: Number of bytes written  0 / 0 / 170,459
>   FILE: Number of read operations  0 / 0 / 0
>   FILE: Number of large read operations  0 / 0 / 0
>   FILE: Number of write operations  0 / 0 / 0
>   HDFS: Number of bytes read  0 / 0 / 87
>   HDFS: Number of bytes written  0 / 0 / 7,863,800
>   HDFS: Number of read operations  0 / 0 / 1
>   HDFS: Number of large read operations  0 / 0 / 0
>   HDFS: Number of write operations  0 / 0 / 1
> Job Counters
>   Launched map tasks  0 / 0 / 1
>   Total time spent by all maps in occupied slots (ms)  0 / 0 / 9,118
>   Total time spent by all reduces in occupied slots (ms)  0 / 0 / 0
>   Total time spent by all maps waiting after reserving slots (ms)  0 / 0 / 0
>   Total time spent by all reduces waiting after reserving slots (ms)  0 / 0 / 0
> Map-Reduce Framework
>   Map input records  0 / 0 / 14,363
>   Map output records  0 / 0 / 14,363
>   Input split bytes  0 / 0 / 87
>   Spilled Records  0 / 0 / 0
>   CPU time spent (ms)  0 / 0 / 4,530
>   Physical memory (bytes) snapshot  0 / 0 / 303,534,080
>   Virtual memory (bytes) snapshot  0 / 0 / 1,868,890,112
>   Total committed heap usage (bytes)  0 / 0 / 757,792,768
>
> Good wishes, always!
> Santosh
>
>
> On Wed, Jan 9, 2013 at 6:01 PM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:
>
> > your JobTracker web UI, you will see running/failed/history jobs. If you
> > follow the id link, you'll get to the job summary page where you previously
> > got the job XML file
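
Since the subject of this thread is creating compressed data, the Sqoop-side option that requests it is --compress (or -z); gzip is the default codec, and --compression-codec is only needed to pick a different one. A rough sketch, with the JDBC URL and credentials as placeholders and only the table name taken from the job above:

    # connection details here are placeholders, not taken from this thread
    sqoop import --connect jdbc:mysql://dbhost/dbname --username dbuser -P \
        --table TABLE --compress \
        --compression-codec org.apache.hadoop.io.compress.GzipCodec

Compressed map output files in the target directory (for example part-m-00000.gz) and an "HDFS: Number of bytes written" counter well below the 7,863,800 reported above are the usual signs that the option took effect.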