Sai Sai 2013-02-24, 11:40
Nitin Pawar 2013-02-24, 11:42
Re: Trying to copy file to Hadoop file system from a program
Many thanks, Nitin, for your quick reply.

Here's what I have in my hosts file. I am running in a VM, and I am assuming it is pseudo-distributed mode:

*********************
127.0.0.1    localhost.localdomain    localhost
#::1    ubuntu    localhost6.localdomain6    localhost6
#127.0.1.1    ubuntu
127.0.0.1   ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
*********************
In my masters file I have:
ubuntu
In my slaves file I have:
localhost
***********************
My question is about the variable below:
public static String fsURI = "hdfs://master:9000";

What would be the right value so that I can connect to Hadoop successfully?
Please let me know if you need more info.
Thanks
Sai
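
For reference, this value generally has to mirror the fs.default.name property in core-site.xml. A sketch of that property for a pseudo-distributed setup (the hostname and port shown here are illustrative, not confirmed from this thread):

*********************
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
*********************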

________________________________
 From: Nitin Pawar <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]; Sai Sai <[EMAIL PROTECTED]>
Sent: Sunday, 24 February 2013 3:42 AM
Subject: Re: Trying to copy file to Hadoop file system from a program
 

If you want to use master as your hostname, then add an entry for it in your /etc/hosts file,

or change hdfs://master to hdfs://localhost.
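
For example, a minimal sketch of the two options (assuming the NameNode really is listening on port 9000 of this VM, as the original fsURI suggests): either add an /etc/hosts line such as

127.0.0.1    master

or point the code at the hostname you already have:

public static String fsURI = "hdfs://localhost:9000";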

On Sun, Feb 24, 2013 at 5:10 PM, Sai Sai <[EMAIL PROTECTED]> wrote:
>
>Greetings,
>
>
>Below is the program I am trying to run; I am getting this exception:
>***************************************
>
>Test Start.....
>java.net.UnknownHostException: unknown host: master
>    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
>    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>    at $Proxy1.getProtocolVersion(Unknown Source)
>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>    at kelly.hadoop.hive.test.HadoopTest.main(HadoopTest.java:54)
>
>
>
>
>********************
>
>
>
>public class HdpTest {
>   
>    public static String fsURI = "hdfs://master:9000";
>
>   
>    public static void copyFileToDFS(FileSystem fs, String srcFile,
>            String dstFile) throws IOException {
>        try {
>            System.out.println("Initialize copy...");
>            URI suri = new URI(srcFile);
>            URI duri = new URI(fsURI + "/" + dstFile);
>            Path dst = new Path(duri.toString());
>            Path src = new Path(suri.toString());
>            System.out.println("Start copy...");
>            fs.copyFromLocalFile(src, dst);
>            System.out.println("End copy...");
>        } catch (Exception e) {
>            e.printStackTrace();
>        }
>    }
>
>    public static void main(String[] args) {
>        try {
>            System.out.println("Test Start.....");
>            Configuration conf = new Configuration();
>            DistributedFileSystem fs = new DistributedFileSystem();
>            URI duri = new URI(fsURI);
>            fs.initialize(duri, conf); // Here is where the exception occurs
>            long start = 0, end = 0;
>            start = System.nanoTime();
>            //writing data from local to HDFS
>            copyFileToDFS(fs, "/home/kosmos/Work/input/wordpair.txt",
>                    "/input/raptor/trade1.txt");
>            //Writing data from HDFS to Local
>//             copyFileFromDFS(fs, "/input/raptor/trade1.txt", "/home/kosmos/Work/input/wordpair1.txt");
>            end = System.nanoTime();
>            System.out.println("Total Execution times: " + (end - start));
>            fs.close();
>        } catch (Throwable t) {
>            t.printStackTrace();
>        }
>    }
>}
Nitin Pawar
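
For reference, a common alternative to constructing DistributedFileSystem directly is to let Hadoop resolve the FileSystem implementation from the URI. A minimal, untested sketch (the class name is illustrative, and it assumes the hdfs://localhost:9000 URI suggested above plus the same file paths used in the program):

*********************
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopySketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolve the FileSystem from the URI scheme (hdfs://...) instead of
        // instantiating and initializing DistributedFileSystem by hand.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        // Copy the local file into HDFS, using the same paths as the program above.
        fs.copyFromLocalFile(new Path("/home/kosmos/Work/input/wordpair.txt"),
                new Path("/input/raptor/trade1.txt"));
        fs.close();
    }
}
*********************

The hostname and port in the URI still have to resolve and match where the NameNode is actually listening, so the /etc/hosts or hdfs://localhost change discussed above applies either way.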
sudhakara st 2013-02-24, 12:07
Nitin Pawar 2013-02-24, 12:17