using pipe in the command executed by slaves.sh

Re: using pipe in the command executed by slaves.sh
[hadoop@us01-ciqps1-name01 ~]$ slaves.sh '/usr/java/default/bin/jps | /bin/grep Child | /usr/bin/wc -l'
us01-ciqps1-grid01.carrieriq.com: bash: /usr/java/default/bin/jps : No such file or directory
us01-ciqps1-grid01.carrieriq.com: bash:  /bin/grep Child : No such file or directory
us01-ciqps1-grid01.carrieriq.com: bash:  /usr/bin/wc -l: No such file or directory

But I verified that /usr/java/default/bin/jps exists on each node.
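
For what it's worth, the stray spaces in those errors ("/usr/java/default/bin/jps " with a trailing space, " /bin/grep Child " treated as one word) match how slaves.sh hands the command to ssh. A rough sketch of its ssh loop, paraphrased from the stock Hadoop 0.20-era bin/slaves.sh (the cdh3b2 copy may differ in detail):

    # sketch, not the verbatim script: run the given command on every host in the
    # slaves file over ssh, prefixing each output line with the hostname
    for slave in `cat "$HOSTLIST" | sed "s/#.*$//;/^$/d"`; do
      # ${@// /\\ } backslash-escapes every space inside each argument, so a
      # single-quoted pipeline reaches the remote bash with its spaces glued onto
      # the surrounding words ("/usr/java/default/bin/jps " and so on), while the
      # unescaped | characters are still parsed as pipes, which is exactly what
      # the "No such file or directory" messages show
      ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }" \
        2>&1 | sed "s/^/$slave: /" &
    done
    wait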

On Thu, Nov 25, 2010 at 12:40 AM, Harsh J <[EMAIL PROTECTED]> wrote:

> Hi,
>
> On Thu, Nov 25, 2010 at 11:49 AM, Ted Yu <[EMAIL PROTECTED]> wrote:
> > Hi,
> > We use cdh3b2
> > I want to get per-tasktracker statistics, such as the count of map/reduce
> > tasks on each node.
> > The following returns total:
> > slaves.sh  /usr/java/default/bin/jps | /bin/grep Child | /usr/bin/wc
> >
> > How do I get a per-node count?
>
> You could get it by passing that entire command to the slaves?
> Something like slaves.sh  '/usr/java/default/bin/jps | /bin/grep Child
> | /usr/bin/wc -l' should work, I think, instead of aggregating in your
> shell.
>
>
>
> --
> Harsh J
> www.harshj.com
>
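
A follow-up sketch (untested, and assuming the escaping behavior sketched above): if every word is passed as its own argument and the pipes are only escaped from the local shell, no argument contains a space, so the pipeline should reach each node intact and print one count per tasktracker:

    # hypothetical invocation: \| stops the local shell from piping, and because no
    # single argument contains a space, slaves.sh forwards the words unchanged; the
    # remote shell then assembles and runs the pipeline
    slaves.sh /usr/java/default/bin/jps \| /bin/grep Child \| /usr/bin/wc -l
    # illustrative output, one line per node (the count here is made up):
    #   us01-ciqps1-grid01.carrieriq.com: 4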