Re: using hdfs shell commands in a pig script
Hi Panshul,
You can write "fs -getmerge source destination;" in your Pig script.
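For instance, here is a minimal sketch (the relation name, schema, and paths below are only placeholders for your own):

    -- load and store as usual; names and paths here are just examples
    results = LOAD 'input/data.csv' USING PigStorage(',')
              AS (id:int, value:chararray);
    STORE results INTO 'output/results' USING PigStorage(',');

    -- after the STORE, any hadoop fs command can be run via the "fs" keyword;
    -- -getmerge concatenates the part-* files into a single local file
    fs -getmerge output/results /tmp/results_merged.csv;

Note that -getmerge writes the merged file to the local filesystem of the machine running the script, not back to HDFS.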
Thanks,
Harsha
On Wednesday, March 6, 2013 at 3:33 AM, Panshul Whisper wrote:

> Hello,
>
> Is it possible to use hadoop fs commands in a pig script?
>
> What I want to do is this: at the end of my pig script, after the
> store-to-file command has executed, I want the script to merge the
> multiple output files it creates.
>
> So can I execute the hadoop fs -getmerge command from within the pig
> script, or is there some other way to do this in pig?
>
> I got this idea because grunt supports hadoop fs commands, so I thought it
> might be possible to execute them from within a pig script file as well.
>
> Any suggestions are welcome.
>
> Thank you,
>
> --
> Regards,
> Ouch Whisper
> 010101010101
>
>