HBase user mailing list - Fwd: Bulk Loading DFS Space issue in HBase


Re: Fwd: Bulk Loading DFS Space issue in HBase
Vikas Jadhav 2013-01-23, 19:48
I checked the log files, but they had not consumed the space.
The issue was solved when I created the table with 6 regions instead of one.
I am able to load the table now; DFS is taking only 20 GB (6.5 GB table x 3
replication factor) of space, and non-DFS
space is also unchanged.
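
For reference, a minimal sketch of creating a pre-split table with the
0.94-era Java client API (the table name, column family, and split keys
below are hypothetical; real split keys should match the row-key
distribution of the CSV data):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PreSplitTable {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            // Hypothetical table and column family names.
            HTableDescriptor desc = new HTableDescriptor("mytable");
            desc.addFamily(new HColumnDescriptor("cf"));

            // Five split keys yield six regions, as in the fix above.
            byte[][] splitKeys = {
                Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
                Bytes.toBytes("4"), Bytes.toBytes("5")
            };
            admin.createTable(desc, splitKeys);
            admin.close();
        }
    }

Pre-splitting plausibly helps here because loading ~6.5 GB into a
single-region table forces HBase to split that region repeatedly during
the load, and each split and the compactions it triggers rewrite data on
disk.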

Thanks
On Wed, Jan 23, 2013 at 3:01 PM, Shashwat Shriparv <[EMAIL PROTECTED]> wrote:

>
>
> Please check how much memory your MapReduce job is using. It may
> be generating a lot of temporary files, which may be filling up your space.
> Please check your log and temp directories for the actual reason for the
> failure. Please do post the JobTracker and other logs.
>
>
> Regards
> Shashwat Shriparv
>
>
> Sent from Samsung Galaxy
>
> Vikas Jadhav <[EMAIL PROTECTED]> wrote:
>
> Hi,
> I am trying to bulk load a 700 MB CSV file with 31 columns into HBase.
>
> I have written a MapReduce program for it, but when I run the program
> it takes up the whole disk space and fails.
>
> Here is the status before running:
>
> Configured Capacity : 116.16 GB
> DFS Used : 13.28 GB
> Non DFS Used : 61.41 GB
> DFS Remaining : 41.47 GB
> DFS Used% : 11.43 %
> DFS Remaining% : 35.7 %
> Live Nodes : 1
> Dead Nodes : 0
> Decommissioning Nodes : 0
> Number of Under-Replicated Blocks : 68
>
>
>
> After running:
>
> Configured Capacity : 116.16 GB
> DFS Used : 52.07 GB
> Non DFS Used : 61.47 GB
> DFS Remaining : 2.62 GB
> DFS Used% : 44.83 %
> DFS Remaining% : 2.26 %
> Live Nodes : 1
> Dead Nodes : 0
> Decommissioning Nodes : 0
> Number of Under-Replicated Blocks : 455
>
>
>
>
>
> So what is taking so much DFS space? Has anybody come across this issue?
>
> Even though map and reduce complete 100%, the incremental loading of the
> HFiles keeps demanding space until the whole drive is full.
>
>
>
>
>
> That is 52 GB for a 700 MB CSV file.
>
>
>
>
>
> I am able to trace the problem to bulk loading.
>
> The 700 MB CSV file (31 columns) generates 6.5 GB of HFiles,
> but while loading them, the execution of the following lines takes so much
> space:
>
>   LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
>   loader.doBulkLoad(new Path(args[1]), hTable);
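
For context, a minimal sketch of what the surrounding bulk-load driver
typically looks like with the 0.94-era API; the mapper, the table name
"mytable", and the argument layout below are hypothetical:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CsvBulkLoadDriver {

        // Hypothetical mapper: uses the first CSV field as the row key and
        // stores the rest of the line in one cell (real code would parse
        // all 31 columns into separate qualifiers).
        public static class CsvMapper
                extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
            @Override
            protected void map(LongWritable offset, Text line, Context ctx)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split(",", 2);
                byte[] row = Bytes.toBytes(parts[0]);
                KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"),
                        Bytes.toBytes("raw"),
                        Bytes.toBytes(parts.length > 1 ? parts[1] : ""));
                ctx.write(new ImmutableBytesWritable(row), kv);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = new Job(conf, "csv-bulk-load");
            job.setJarByClass(CsvBulkLoadDriver.class);
            job.setMapperClass(CsvMapper.class);
            job.setMapOutputKeyClass(ImmutableBytesWritable.class);
            job.setMapOutputValueClass(KeyValue.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // CSV in
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HFiles out

            // Wires in the partitioner and sorting reducer so that each
            // reducer writes HFiles lying entirely inside one region.
            HTable hTable = new HTable(conf, "mytable");
            HFileOutputFormat.configureIncrementalLoad(job, hTable);

            if (job.waitForCompletion(true)) {
                LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
                loader.doBulkLoad(new Path(args[1]), hTable);
            }
            hTable.close();
        }
    }

One note on the space blow-up: doBulkLoad moves an HFile into a region
only if the file's key range fits inside that region. An HFile that spans
a region boundary is first split into two new copies, recursively, so if
the target region keeps splitting while 6.5 GB of HFiles are being
loaded, the same data can be rewritten several times; that is consistent
with a single-region table ballooning toward the 52 GB of DFS usage
reported above.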
>
>
>
>
>
> Thanks and Regards,
> Vikas Jadhav
>

--
Thanks and Regards,
Vikas Jadhav