Re: setLocalResources() on ContainerLaunchContext
Hi Omkar,

  I will try that. I might have ended up with two '/' characters by mistake
while trying it in different ways to make it work. The file kishore/kk.ksh is
accessible to the same user that is running the AM container.

  My other question is about the exact benefits of using this resource
localization. Could you please explain briefly, or point me to some online
documentation that covers it?

Thanks,
Kishore
On Wed, Aug 7, 2013 at 11:49 PM, Omkar Joshi <[EMAIL PROTECTED]> wrote:

> Good that your timestamp worked... Now for HDFS, try this:
> hdfs://<hdfs-host-name>:<hdfs-host-port><absolute-path>
> Then verify that your absolute path is correct; I hope it will work:
> bin/hadoop fs -ls <absolute-path>
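A programmatic version of that check, as a minimal sketch; it assumes the
isredeng:8020 NameNode used elsewhere in this thread, and relies on
FileSystem.getFileStatus() throwing FileNotFoundException when the path does
not exist:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PathCheck {
        public static void main(String[] args) throws Exception {
            // Note the single '/' after the authority: absolute path /kishore/kk.ksh
            Path p = new Path("hdfs://isredeng:8020/kishore/kk.ksh");
            FileSystem fs = p.getFileSystem(new Configuration());
            // Throws FileNotFoundException if the absolute path is wrong
            System.out.println(fs.getFileStatus(p).getPath());
        }
    }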
>
>
> hdfs://isredeng:8020//kishore/kk.ksh... why the "//"? Do you have the HDFS
> file at the absolute location /kishore/kk.ksh? Are /kishore and
> /kishore/kk.ksh accessible to the user who is making the startContainer
> call, or to the one running the AM container?
>
> Thanks,
> Omkar Joshi
> Hortonworks Inc. <http://www.hortonworks.com>
>
>
> On Tue, Aug 6, 2013 at 10:43 PM, Krishna Kishore Bonagiri <
> [EMAIL PROTECTED]> wrote:
>
>> Hi Harsh, Hitesh & Omkar,
>>
>>   Thanks for the replies.
>>
>> I tried getting the last-modified timestamp like this, and it works. Is
>> this the right thing to do?
>>
>>       File file = new File("/home_/dsadm/kishore/kk.ksh");
>>       shellRsrc.setTimestamp(file.lastModified());
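The replies in this thread note that YARN expects the length as well as the
timestamp, so a sketch that fills in both from the same java.io.File
(assuming a LocalResource created via org.apache.hadoop.yarn.util.Records)
would be:

    import java.io.File;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.util.Records;

    public class ShellResourceSketch {
        public static LocalResource build() {
            File file = new File("/home_/dsadm/kishore/kk.ksh");
            LocalResource shellRsrc = Records.newRecord(LocalResource.class);
            shellRsrc.setTimestamp(file.lastModified()); // ms since epoch
            shellRsrc.setSize(file.length()); // bytes; checked at localization
            return shellRsrc;
        }
    }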
>>
>>
>> And when I tried using an HDFS file, qualifying it with both the node name
>> and port, it didn't work; I got a similar error to the earlier one.
>>
>>       String shellScriptPath = "hdfs://isredeng:8020//kishore/kk.ksh";
>>
>>
>> 13/08/07 01:36:28 INFO ApplicationMaster: Got container status for
>> containerID= container_1375853431091_0005_01_000002, state=COMPLETE,
>> exitStatus=-1000, diagnostics=File does not exist:
>> hdfs://isredeng:8020/kishore/kk.ksh
>>
>> 13/08/07 01:36:28 INFO ApplicationMaster: Got failure status for a
>> container : -1000
>>
>>
>>
>> On Wed, Aug 7, 2013 at 7:45 AM, Harsh J <[EMAIL PROTECTED]> wrote:
>>
>>> Thanks Hitesh!
>>>
>>> P.s. Port isn't a requirement (and with HA URIs, you shouldn't add a
>>> port), but "isredeng" has to be the authority component.
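A small sketch of Harsh's point: let the client qualify the path from
fs.defaultFS instead of hardcoding a host and port. It assumes the
Configuration on the classpath points at the isredeng cluster:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class QualifySketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration(); // reads fs.defaultFS
            FileSystem fs = FileSystem.get(conf);
            // Fills in scheme and authority, e.g. hdfs://isredeng/kishore/kk.ksh,
            // with no hardcoded port, so it also works with HA URIs
            Path qualified = fs.makeQualified(new Path("/kishore/kk.ksh"));
            System.out.println(qualified);
        }
    }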
>>>
>>> On Wed, Aug 7, 2013 at 7:37 AM, Hitesh Shah <[EMAIL PROTECTED]> wrote:
>>> > @Krishna, your logs showed the file error for
>>> > "hdfs://isredeng/kishore/kk.ksh"
>>> >
>>> > I am assuming you have tried dfs -ls /kishore/kk.ksh and confirmed
>>> > that the file exists? Also the qualified path seems to be missing the
>>> > namenode port. I need to go back and check if a path without the port
>>> > works by assuming the default namenode port.
>>> >
>>> > @Harsh, adding a helper function seems like a good idea. Let me file a
>>> > JIRA to have the above added to one of the helper/client libraries.
>>> >
>>> > thanks
>>> > -- Hitesh
>>> >
>>> > On Aug 6, 2013, at 6:47 PM, Harsh J wrote:
>>> >
>>> >> It is kind of unnecessary to ask developers to load in timestamps and
>>> >> lengths themselves. Why not provide a java.io.File- or perhaps a
>>> >> Path-accepting API that gets them automatically on their behalf, using
>>> >> the FileSystem API internally?
>>> >>
>>> >> P.s. An HDFS file gave him a FileNotFound error, while a local file
>>> >> gave him a proper timestamp/length error. I'm guessing there's a bug
>>> >> here w.r.t. handling HDFS paths.
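A sketch of the kind of helper Harsh describes, assuming the Hadoop 2.x YARN
client APIs (Records, ConverterUtils); the FileSystem lookup supplies the
timestamp and length so callers do not have to:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.util.ConverterUtils;
    import org.apache.hadoop.yarn.util.Records;

    public final class LocalResources {
        /** Hypothetical helper: builds a LocalResource from a fully
         *  qualified Path, reading the length and timestamp via the
         *  FileSystem API on the caller's behalf. */
        public static LocalResource fromPath(Path path, Configuration conf)
                throws IOException {
            FileStatus status = path.getFileSystem(conf).getFileStatus(path);
            LocalResource rsrc = Records.newRecord(LocalResource.class);
            rsrc.setResource(ConverterUtils.getYarnUrlFromPath(path));
            rsrc.setType(LocalResourceType.FILE);
            rsrc.setVisibility(LocalResourceVisibility.APPLICATION);
            rsrc.setTimestamp(status.getModificationTime());
            rsrc.setSize(status.getLen());
            return rsrc;
        }
    }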
>>> >>
>>> >> On Wed, Aug 7, 2013 at 12:35 AM, Hitesh Shah <[EMAIL PROTECTED]>
>>> >> wrote:
>>> >>> Hi Krishna,
>>> >>>
>>> >>> YARN downloads a specified local resource onto the container's node
>>> >>> from the URL specified. In all situations, the remote URL needs to be
>>> >>> a fully qualified path. To verify that the file at the remote URL is
>>> >>> still valid, YARN expects you to provide the length and last-modified
>>> >>> timestamp of that file.
>>> >>>
>>> >>> If you use an hdfs path such as hdfs://namenode:port/<absolute path
>>> >>> to file>, you will need to get the length and timestamp from HDFS.
>>> >>> If you use file:///, the file should exist on all nodes, and all
>>> >>> nodes need to have the same copy (same length and timestamp).
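Putting that together with the hypothetical LocalResources.fromPath helper
sketched above, wiring the file into setLocalResources() (the subject of this
thread) might look like this, where ctx is an existing ContainerLaunchContext:

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
    import org.apache.hadoop.yarn.api.records.LocalResource;

    public class LaunchSetupSketch {
        static void addScript(ContainerLaunchContext ctx, Configuration conf)
                throws IOException {
            Map<String, LocalResource> resources =
                    new HashMap<String, LocalResource>();
            // The key is the link name the container sees in its working dir
            resources.put("kk.ksh", LocalResources.fromPath(
                    new Path("hdfs://isredeng:8020/kishore/kk.ksh"), conf));
            ctx.setLocalResources(resources);
        }
    }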