Re: Error while starting the collector
Are you sure you started HDFS already? Are the namenode, datanode, and
tasktracker all started? Can you store and read files from HDFS before
starting Chukwa?
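For example, a quick round-trip check (assuming the hadoop command is on your PATH; the /tmp/probe.txt path is just an illustration):

    jps                            # should list NameNode, DataNode, JobTracker, TaskTracker
    hadoop dfsadmin -report        # datanode count/capacity; also prints a notice if safe mode is on
    hadoop fs -put /etc/hosts /tmp/probe.txt    # write a small file into HDFS
    hadoop fs -cat /tmp/probe.txt               # read it back
    hadoop fs -rm /tmp/probe.txt                # clean up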
On Mon, Nov 14, 2011 at 3:26 PM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:

> One more strange thing I have noticed is that if I remove
> "initial_adaptors", I am able to start the agent. But if the
> "initial_adaptors" file is present inside "conf", I get the
> following errors -
> tariq@ubuntu:~/chukwa-0.4.0$ bin/chukwa agent
> tariq@ubuntu:~/chukwa-0.4.0$ java.io.IOException: Cannot run program
> "/usr/bin/sar": java.io.IOException: error=2, No such file or
> directory
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>        at java.lang.Runtime.exec(Runtime.java:593)
>        at java.lang.Runtime.exec(Runtime.java:431)
>        at java.lang.Runtime.exec(Runtime.java:328)
>        at
> org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>        at
> org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>        at java.util.TimerThread.mainLoop(Timer.java:512)
>        at java.util.TimerThread.run(Timer.java:462)
> Caused by: java.io.IOException: java.io.IOException: error=2, No such
> file or directory
>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>        ... 7 more
> java.io.IOException: Cannot run program "/usr/bin/iostat":
> java.io.IOException: error=2, No such file or directory
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>        at java.lang.Runtime.exec(Runtime.java:593)
>        at java.lang.Runtime.exec(Runtime.java:431)
>        at java.lang.Runtime.exec(Runtime.java:328)
>        at
> org.apache.hadoop.chukwa.inputtools.plugin.ExecPlugin.execute(ExecPlugin.java:66)
>        at
> org.apache.hadoop.chukwa.datacollection.adaptor.ExecAdaptor$RunToolTask.run(ExecAdaptor.java:68)
>        at java.util.TimerThread.mainLoop(Timer.java:512)
>        at java.util.TimerThread.run(Timer.java:462)
> Caused by: java.io.IOException: java.io.IOException: error=2, No such
> file or directory
>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
>        ... 7 more
>
> Regards,
>     Mohammad Tariq
>
>
>
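The "error=2, No such file or directory" messages above mean the programs the agent is trying to exec do not exist: initial_adaptors evidently registers Exec adaptors for /usr/bin/sar and /usr/bin/iostat, and neither binary is installed on this machine. On Ubuntu both tools ship in the sysstat package, so one likely fix (assuming a Debian/Ubuntu host, as the shell prompt suggests) is:

    sudo apt-get install sysstat
    which sar iostat    # should now print /usr/bin/sar and /usr/bin/iostat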
> On Mon, Nov 14, 2011 at 6:49 PM, TARIQ <[EMAIL PROTECTED]> wrote:
> > Hello Ahmed,
> >    Thanks for your valuable reply. Actually, earlier it was
> > hdfs://localhost:9000, but that was not working, so I changed it to
> > 9999. But 9999 is not working either. Here is my core-site.xml file -
> > <configuration>
> >      <property>
> >          <name>dfs.replication</name>
> >          <value>1</value>
> >      </property>
> >
> >       <property>
> >          <name>dfs.data.dir</name>
> >          <value>/home/tariq/hdfs/data</value>
> >      </property>
> >
> >      <property>
> >          <name>dfs.name.dir</name>
> >          <value>/home/tariq/hdfs/name</value>
> >      </property>
> > </configuration>
> >
> > And hdfs-site.xml -
> > <configuration>
> >    <property>
> >          <name>fs.default.name</name>
> >          <value>hdfs://localhost:9000</value>
> >    </property>
> >    <property>
> >          <name>hadoop.tmp.dir</name>
> >          <value>file:///home/tariq/hadoop_tmp</value>
> >    </property>
> > </configuration>
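Note that the two files above look swapped: in Hadoop, fs.default.name and hadoop.tmp.dir belong in core-site.xml, while the dfs.* properties belong in hdfs-site.xml (and hadoop.tmp.dir normally takes a plain local path rather than a file:// URI). A corrected layout, keeping the same values, would be:

    core-site.xml:
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/tariq/hadoop_tmp</value>
      </property>
    </configuration>

    hdfs-site.xml:
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/home/tariq/hdfs/data</value>
      </property>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/tariq/hdfs/name</value>
      </property>
    </configuration>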
> >
> > Regards,
> >     Mohammad Tariq
> >
> >
> >
> > On Mon, Nov 14, 2011 at 5:21 PM, Ahmed Fathalla [via Apache Chukwa]
> > <[hidden email]> wrote:
> >> I think the problem you have is in these lines:
> >>  <property>
> >>    <name>writer.hdfs.filesystem</name>
> >>    <value>hdfs://localhost:9999/</value>
> >>    <description>HDFS to dump to</description>
> >>  </property>
> >>
> >>
> >> Are you sure you've got HDFS running on port 9999 on your local machine?
> >> On Mon, Nov 14, 2011 at 1:18 PM, Mohammad Tariq <[hidden email]> wrote:

Ahmed Fathalla
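
Ahmed's question is the key one: writer.hdfs.filesystem (in the collector config, conf/chukwa-collector-conf.xml in Chukwa 0.4) has to point at the URI the NameNode is actually serving, i.e. it should match fs.default.name. A quick way to check which port that is (a sketch for a Linux host; netstat flags may vary):

    sudo netstat -tlnp | grep java          # look for the NameNode's listening RPC port
    hadoop fs -ls hdfs://localhost:9000/    # succeeds only if the NameNode answers on 9000
    hadoop fs -ls hdfs://localhost:9999/    # ditto for 9999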