Sqoop >> mail # user >> Creating compressed data with scooop


Thread history:

Santosh Achhra        2013-01-07, 11:34
Jarek Jarcec Cecho    2013-01-07, 12:31
Santosh Achhra        2013-01-08, 05:21
Jarek Jarcec Cecho    2013-01-08, 07:36
Santosh Achhra        2013-01-08, 08:01
Jarek Jarcec Cecho    2013-01-08, 11:09
Santosh Achhra        2013-01-08, 14:33
Jarek Jarcec Cecho    2013-01-08, 15:43
Santosh Achhra        2013-01-08, 16:09
Jarek Jarcec Cecho    2013-01-09, 10:01
Re: Creating compressed data with scooop
Hi Jarcec,

Hopefully this is the log you had requested.

all MAP task list for 0035_1357731518922_saachhra

Task Id                            Start Time      Finish Time            Error
task_201301081859_0035_m_000000    9/01 11:38:46   9/01 11:38:52 (6sec)

After I click on "all MAP task list for 0035_1357731518922_saachhra", this
is what I get:

Hadoop Job 0114_1357651667175_saachhra on History Viewer

User: saachhra
JobName: TABLE.jar
JobConf:
hdfs://host:8020/user/saachhra/.staging/job_201301072354_0114/job.xml
Job-ACLs: All users are allowed
Submitted At: 8-Jan-2013 13:27:47
Launched At: 8-Jan-2013 13:27:47 (0sec)
Finished At: 8-Jan-2013 13:28:00 (12sec)
Status: SUCCESS
Analyse This Job
Kind      Total Tasks (successful+failed+killed)   Successful   Failed   Killed   Start Time            Finish Time
Setup     1                                        1            0        0        8-Jan-2013 13:27:50   8-Jan-2013 13:27:52 (1sec)
Map       1                                        1            0        0        8-Jan-2013 13:27:53   8-Jan-2013 13:27:58 (5sec)
Reduce    0                                        0            0        0
Cleanup   1                                        1            0        0        8-Jan-2013 13:27:58   8-Jan-2013 13:28:00 (2sec)

Counter                                                               Map   Reduce   Total
File System Counters
  FILE: Number of bytes read                                          0     0        0
  FILE: Number of bytes written                                       0     0        170,459
  FILE: Number of read operations                                     0     0        0
  FILE: Number of large read operations                               0     0        0
  FILE: Number of write operations                                    0     0        0
  HDFS: Number of bytes read                                          0     0        87
  HDFS: Number of bytes written                                       0     0        7,863,800
  HDFS: Number of read operations                                     0     0        1
  HDFS: Number of large read operations                               0     0        0
  HDFS: Number of write operations                                    0     0        1
Job Counters
  Launched map tasks                                                  0     0        1
  Total time spent by all maps in occupied slots (ms)                 0     0        9,118
  Total time spent by all reduces in occupied slots (ms)              0     0        0
  Total time spent by all maps waiting after reserving slots (ms)     0     0        0
  Total time spent by all reduces waiting after reserving slots (ms)  0     0        0
Map-Reduce Framework
  Map input records                                                   0     0        14,363
  Map output records                                                  0     0        14,363
  Input split bytes                                                   0     0        87
  Spilled Records                                                     0     0        0
  CPU time spent (ms)                                                 0     0        4,530
  Physical memory (bytes) snapshot                                    0     0        303,534,080
  Virtual memory (bytes) snapshot                                     0     0        1,868,890,112
  Total committed heap usage (bytes)                                  0     0        757,792,768
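[Editor's note: since the thread is about getting Sqoop to write compressed data, a quick way to cross-check a run like the one logged above is to list the import's output directory and confirm the part files carry a codec extension. A sketch, assuming a hypothetical target path /user/saachhra/TABLE (the real path is not shown in the log):

```shell
# List the import output (path is a placeholder, not taken from the log above)
hadoop fs -ls /user/saachhra/TABLE

# With --compress and the default GzipCodec, the map output files should
# appear as part-m-00000.gz rather than plain part-m-00000.
hadoop fs -du /user/saachhra/TABLE
```

If the part files have no codec extension and their total size matches the uncompressed HDFS bytes written, the compression options likely did not take effect.]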

Good wishes, always!
Santosh
On Wed, Jan 9, 2013 at 6:01 PM, Jarek Jarcec Cecho <[EMAIL PROTECTED]> wrote:

> your JobTracker web ui, you will see running/failed/history jobs. If you
> will follow the id link, you'll get to job page summary where you lastly
> got the job XML file
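[Editor's note: for readers landing on this thread, the compression switches under discussion are Sqoop's --compress flag (which enables compression with GzipCodec by default) and the optional --compression-codec. A minimal sketch of such an import, with the connection string, credentials, and table name as placeholders:

```shell
# Hypothetical Sqoop import producing gzip-compressed output;
# connection details and table name are placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost/dbname \
  --username saachhra -P \
  --table TABLE \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.GzipCodec
```

The resulting job XML (linked from the JobTracker page described in the quote above) records whether these options reached the MapReduce job.]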
Jarek Jarcec Cecho    2013-01-10, 09:40
Santosh Achhra        2013-01-10, 12:47