Re: New hadoop 1.2 single node installation giving problems
After starting the daemons, I would always suggest checking whether your
NameNode and JobTracker web UIs are up, and checking the number of live
nodes in both UIs.
Regards,
Som Shekhar Sharma
+91-8197243810
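For reference, those checks can be run from the shell. This is a sketch that assumes the Hadoop 1.x default web UI ports (50070 for the NameNode, 50030 for the JobTracker) and that `jps` and `curl` are on the PATH:

```shell
# Confirm the daemon processes are up; on a single-node setup,
# NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker
# should all appear in the jps listing.
jps

# Default Hadoop 1.x web UI addresses -- adjust if you changed the ports.
NAMENODE_UI="http://localhost:50070"
JOBTRACKER_UI="http://localhost:50030"

# A 200 response means the UI is reachable; the dfshealth page also
# shows the live-node count mentioned above.
curl -s -o /dev/null -w "%{http_code}\n" "$NAMENODE_UI/dfshealth.jsp"
curl -s -o /dev/null -w "%{http_code}\n" "$JOBTRACKER_UI/jobtracker.jsp"
```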
On Tue, Jul 23, 2013 at 10:41 PM, Ashish Umrani <[EMAIL PROTECTED]> wrote:

> Thanks,
>
> But the issue was that there was no directory and hence it was not showing
> anything.  Adding a directory cleared the warning.
>
> I appreciate your help.
>
> Regards
> ashish
>
>
> On Tue, Jul 23, 2013 at 10:08 AM, Mohammad Tariq <[EMAIL PROTECTED]> wrote:
>
>> Hello Ashish,
>>
>> Change the permissions of /app/hadoop/tmp to 755 and see if it helps.
>>
>> Warm Regards,
>> Tariq
>> cloudfront.blogspot.com
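A sketch of that permission change from the shell; the use of `sudo` and the `hduser:hadoop` owner/group pair are assumptions based on the usernames appearing in this thread:

```shell
# Make hadoop.tmp.dir traversable and writable by its owner.
sudo chmod 755 /app/hadoop/tmp

# The directory should also be owned by the user running the daemons
# (hduser here; the "hadoop" group is an assumed convention).
sudo chown hduser:hadoop /app/hadoop/tmp

# Verify the resulting mode and ownership.
ls -ld /app/hadoop/tmp
```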
>>
>>
>> On Tue, Jul 23, 2013 at 10:27 PM, Ashish Umrani <[EMAIL PROTECTED]> wrote:
>>
>>> Thanks Jitendra, Bejoy and Yexi,
>>>
>>> I got past that.  And now the ls command says it cannot access the
>>> directory.  I am sure this is a permissions issue.  I am just wondering
>>> which directory I am missing permissions on.
>>>
>>> Any pointers?
>>>
>>> And once again, thanks a lot
>>>
>>> Regards
>>> ashish
>>>
>>> *hduser@ashish-HP-Pavilion-dv6-Notebook-PC:/usr/local/hadoop/conf$
>>> hadoop fs -ls*
>>> *Warning: $HADOOP_HOME is deprecated.*
>>> *ls: Cannot access .: No such file or directory.*
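(With no path argument, `hadoop fs -ls` lists the current user's HDFS home directory, `/user/<username>`, which does not exist on a fresh install; creating it clears the error, which matches the resolution reported earlier in the thread. A sketch, assuming the hduser account from this thread:)

```shell
# Create the HDFS home directory for the current user, then verify.
hadoop fs -mkdir /user/hduser
hadoop fs -ls /user/hduser

# The "$HADOOP_HOME is deprecated" warning is harmless in Hadoop 1.x;
# it can be silenced by exporting this before starting Hadoop:
export HADOOP_HOME_WARN_SUPPRESS=1
```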
>>>
>>>
>>>
>>> On Tue, Jul 23, 2013 at 9:42 AM, Jitendra Yadav <
>>> [EMAIL PROTECTED]> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> Please check hdfs-site.xml: the <property></property> element is
>>>> missing around the name/value pair.
>>>>
>>>> Thanks.
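(For reference, the complete element being described would look like the sketch below. A replication value of 1 is the usual single-node setting and is an assumption here, since the original value is not visible in the quoted file.)

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- 1 is the usual single-node value; assumed, not from the thread. -->
    <value>1</value>
  </property>
</configuration>
```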
>>>> On Tue, Jul 23, 2013 at 9:58 PM, Ashish Umrani <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> Hey thanks for response.  I have changed 4 files during installation
>>>>>
>>>>> core-site.xml
>>>>> mapred-site.xml
>>>>> hdfs-site.xml   and
>>>>> hadoop-env.sh
>>>>>
>>>>>
>>>>> I could not find any issues except that all params in
>>>>> hadoop-env.sh are commented out.  Only JAVA_HOME is uncommented.
>>>>>
>>>>> If you have a quick minute, could you please browse through these files
>>>>> in the email and let me know where the issue could be?
>>>>>
>>>>> Regards
>>>>> ashish
>>>>>
>>>>>
>>>>>
>>>>> I am listing those files below.
>>>>> *core-site.xml*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>hadoop.tmp.dir</name>
>>>>>     <value>/app/hadoop/tmp</value>
>>>>>     <description>A base for other temporary directories.</description>
>>>>>   </property>
>>>>>
>>>>>   <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>hdfs://localhost:54310</value>
>>>>>     <description>The name of the default file system.  A URI whose
>>>>>     scheme and authority determine the FileSystem implementation.  The
>>>>>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>>>>>     the FileSystem implementation class.  The uri's authority is used
>>>>> to
>>>>>     determine the host, port, etc. for a filesystem.</description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *mapred-site.xml*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>mapred.job.tracker</name>
>>>>>     <value>localhost:54311</value>
>>>>>     <description>The host and port that the MapReduce job tracker runs
>>>>>     at.  If "local", then jobs are run in-process as a single map
>>>>>     and reduce task.
>>>>>     </description>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>>
>>>>>
>>>>> *hdfs-site.xml*
>>>>>  <?xml version="1.0"?>
>>>>> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>>>>
>>>>> <!-- Put site-specific property overrides in this file. -->
>>>>>
>>>>> <configuration>
>>>>>   <name>dfs.replication</name>