Re: When applying a patch, which attachment should I use?
Dear Adarsh,

I have a single machine running Namenode/JobTracker/Hbase Master.
There are 17 machines running Datanode/TaskTracker.
Among those 17 machines, 14 are running Hbase Regionservers.
The other 3 machines are running Zookeeper.

And about Zookeeper:
HBase comes with its own Zookeeper, so you don't need to install a separate
one. (except in one special case, which I'll explain later)
I assigned 14 machines as regionservers using
"$HBASE_HOME/conf/regionservers".
I assigned 3 machines as Zookeepers using the "hbase.zookeeper.quorum" property
in "$HBASE_HOME/conf/hbase-site.xml".
Don't forget to set "export HBASE_MANAGES_ZK=true"
in "$HBASE_HOME/conf/hbase-env.sh". (This is where you announce that you
will be using the Zookeeper that comes with HBase.)
This way, when you execute "$HBASE_HOME/bin/start-hbase.sh", HBase will
automatically start Zookeeper first, then start HBase daemons.
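
To make that concrete, here is a minimal sketch of the three files involved.
The hostnames (rs01..rs14, zk01..zk03) are only placeholders for your own
machines, so adjust everything to your environment:

    # $HBASE_HOME/conf/regionservers -- one regionserver hostname per line
    rs01
    rs02
    # ... and so on, up to rs14

    <!-- $HBASE_HOME/conf/hbase-site.xml -->
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>zk01,zk02,zk03</value>
    </property>

    # $HBASE_HOME/conf/hbase-env.sh
    export HBASE_MANAGES_ZK=true

The regionservers file is read by the start scripts, so listing a host there
is all it takes for start-hbase.sh to launch a regionserver on it.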

Also, you can install your own Zookeeper and tell HBase to use it instead of
its own.
I read on the internet that the Zookeeper that comes with HBase does not work
properly on Windows 7 64-bit (
http://alans.se/blog/2010/hadoop-hbase-cygwin-windows-7-x64/).
So in that case you need to install your own Zookeeper, set it up properly,
and tell HBase to use it instead.
All you need to do is configure zoo.cfg and add it to the HBase CLASSPATH.
And don't forget to set "export HBASE_MANAGES_ZK=false"
in "$HBASE_HOME/conf/hbase-env.sh".
This way, HBase will not start Zookeeper automatically.
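
For reference, here is a rough sketch of that standalone setup. The hostnames
and paths are only examples, and the zoo.cfg values are the usual defaults:

    # zoo.cfg for your own Zookeeper ensemble
    tickTime=2000
    dataDir=/var/zookeeper
    clientPort=2181
    server.1=zk01:2888:3888
    server.2=zk02:2888:3888
    server.3=zk03:2888:3888

    # $HBASE_HOME/conf/hbase-env.sh
    export HBASE_MANAGES_ZK=false
    # add the directory that contains zoo.cfg to HBase's classpath
    export HBASE_CLASSPATH=/path/to/zookeeper/conf

Each Zookeeper host also needs a myid file in dataDir matching its server.N
number, and you start the ensemble yourself before running
"$HBASE_HOME/bin/start-hbase.sh".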

About separating Zookeepers from regionservers:
Yes, it is recommended to separate Zookeepers from regionservers,
but that won't be necessary unless your cluster is very heavily loaded.
They also suggest that you give Zookeeper its own hard disk. But I haven't
done that myself yet. (Hard disks cost money you know)
So I'd say your cluster seems fine.
But when you want to expand your cluster, you'd need some changes. I suggest
you take a look at "Hadoop: The Definitive Guide".

Regards,
Edward

2011/1/13 Adarsh Sharma <[EMAIL PROTECTED]>

> Thanks Edward,
>
> Can you describe the architecture used in your configuration?
>
> For example, I have a cluster of 10 servers and
>
> 1 node acts as ( Namenode, Jobtracker, Hmaster ).
> The remaining 9 nodes act as ( Slaves, Datanodes, Tasktracker, Hregionservers ).
> Among these 9 nodes I also set 3 nodes in zookeeper.quorum.property.
>
> I want to know whether it is necessary to configure zookeeper separately with
> the zookeeper-3.2.2 package, or whether just having some IPs listed in
> zookeeper.quorum.property is enough and HBase takes care of it.
>
> Can we specify the IPs of Hregionservers already in use as zookeeper servers
> ( HQuorumPeer ), or do we need separate servers for it?
>
> My problem arises in running zookeeper. My HBase is up and running in
> fully distributed mode too.
>
> With Best Regards
>
> Adarsh Sharma
>
> edward choi wrote:
>
>> Dear Adarsh,
>>
>> My situation is somewhat different from yours as I am only running Hadoop
>> and Hbase (as opposed to Hadoop/Hive/Hbase).
>>
>> But I hope my experience could be of help to you somehow.
>>
>> I applied the "hdfs-630-0.20-append.patch" to every single Hadoop node.
>> (including master and slaves)
>> Then I followed exactly what they told me to do on
>> http://hbase.apache.org/docs/current/api/overview-summary.html#overview_description
>>
>> I didn't get a single error message and successfully started HBase in a
>> fully distributed mode.
>>
>> I am not using Hive so I can't tell what caused the
>> MasterNotRunningException, but the patch above is meant to allow DFSClients
>> to pass the NameNode lists of known dead Datanodes.
>> I doubt that the patch has anything to do with MasterNotRunningException.
>>
>> Hope this helps.
>>
>> Regards,
>> Ed
>>
>> 2011/1/13 Adarsh Sharma <[EMAIL PROTECTED]>
>>
>>
>>
>>> I am also facing some issues and I think applying
>>>
>>> hdfs-630-0.20-append.patch<
>>>
>>> https://issues.apache.org/jira/secure/attachment/12446812/hdfs-630-0.20-append.patch