Re: When applying a patch, which attachment should I use?
Dear Sharma,

I am not a Zookeeper professional, since this is the first time I have
installed Zookeeper myself.
But looking at your log, I think the problem is either with your firewall
settings or your server connection settings.

About the patch installation:
download the patch you'd like to apply and place it in $HADOOP_HOME,
then apply the patch by typing:
patch -p0 < "patch_name"
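
For example, for the hdfs-630-0.20-append.patch mentioned later in this
thread (the file name is just the one from this thread; adjust paths to
your own installation), it would look roughly like this:

  cd $HADOOP_HOME
  # dry run first to check that the patch applies cleanly
  patch -p0 --dry-run < hdfs-630-0.20-append.patch
  # if there are no errors, apply it for real
  patch -p0 < hdfs-630-0.20-append.patch

Note that if the patch changes Java sources, you will also have to rebuild
Hadoop and redeploy it to every node.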

Good luck

Regards,
Ed

2011/1/21 Adarsh Sharma <[EMAIL PROTECTED]>

> Extremely sorry, I forgot to attach the logs.
> Here they are:
>
>
> Adarsh Sharma wrote:
>
>> Thanks Edward. Today I looked over your considerations and started working:
>>
>> edward choi wrote:
>>
>>> Dear Adarsh,
>>>
>>> I have a single machine running Namenode/JobTracker/Hbase Master.
>>> There are 17 machines running Datanode/TaskTracker.
>>> Among those 17 machines, 14 are running Hbase Regionservers.
>>> The other 3 machines are running Zookeeper.
>>>
>>>
>>
>> I have 10 servers: a single machine running Namenode/JobTracker/Hbase
>> Master.
>> There are 9 machines running Datanode/TaskTracker.
>> Among those 9 machines, 6 are running Hbase Regionservers.
>> The other 3 machines are running Zookeeper.
>> I'm using hadoop-0.20.2 and hbase-0.20.3.
>>
>>> And about the Zookeeper,
>>> HBase comes with its own Zookeeper, so you don't need to install a new
>>> Zookeeper. (except for a special occasion, which I'll explain later)
>>> I assigned 14 machines as regionservers using
>>> "$HBASE_HOME/conf/regionservers".
>>> I assigned 3 machines as Zookeepers using the "hbase.zookeeper.quorum"
>>> property in "$HBASE_HOME/conf/hbase-site.xml".
>>> Don't forget to set "export HBASE_MANAGES_ZK=true"
>>>
>>>
>>
>> I think it defaults to true anyway, but I set "export
>> HBASE_MANAGES_ZK=true" in hbase-env.sh.
>>
>>  in "$HBASE_HOME/conf/hbase-env.sh". (This is where you announce that you
>>> will be using Zookeeper that comes with HBase)
>>> This way, when you execute "$HBASE_HOME/bin/start-hbase.sh", HBase will
>>> automatically start Zookeeper first, then start HBase daemons.
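>>>
>>> As a rough sketch of the relevant pieces (the host names below are only
>>> placeholders for your own machines, not anything from your setup):
>>>
>>>   # $HBASE_HOME/conf/hbase-env.sh
>>>   export HBASE_MANAGES_ZK=true
>>>
>>>   # $HBASE_HOME/conf/regionservers -- one regionserver host per line
>>>   rs01
>>>   rs02
>>>   rs03
>>>
>>>   <!-- $HBASE_HOME/conf/hbase-site.xml -->
>>>   <property>
>>>     <name>hbase.zookeeper.quorum</name>
>>>     <value>zk01,zk02,zk03</value>
>>>   </property>
>>>
>>>   # then start everything with
>>>   $HBASE_HOME/bin/start-hbase.sh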
>>>
>>>
>> I did find through the Web UI that my Hbase Master is running. But there
>> are exceptions in my Zookeeper logs. I am also able to create a table in
>> Hbase and view it.
>>
>> The only thing I haven't done is apply the *hdfs-630-0.20-append.patch* to
>> the Hadoop package on each node, as I don't know how to apply it.
>>
>> If this is the problem, please guide me through the steps to apply it.
>>
>> I have also attached the Zookeeper logs from my Zookeeper servers.
>> Please find the attachment.
>>
>>> Also, you can install your own Zookeeper and tell HBase to use it instead
>>> of its own.
>>> I read on the internet that the Zookeeper that comes with HBase does not
>>> work properly on Windows 7 64-bit
>>> (http://alans.se/blog/2010/hadoop-hbase-cygwin-windows-7-x64/).
>>> So in that case you need to install your own Zookeeper, set it up properly,
>>> and tell HBase to use it instead of its own.
>>> All you need to do is configure zoo.cfg and add it to the HBase CLASSPATH.
>>> And don't forget to set "export HBASE_MANAGES_ZK=false"
>>> in "$HBASE_HOME/conf/hbase-env.sh".
>>> This way, HBase will not start Zookeeper automatically.
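>>>
>>> Roughly, assuming a three-node quorum like yours (host names, ports, and
>>> the data directory below are just the usual defaults and placeholders):
>>>
>>>   # zoo.cfg for the standalone Zookeeper
>>>   tickTime=2000
>>>   dataDir=/var/zookeeper
>>>   clientPort=2181
>>>   initLimit=5
>>>   syncLimit=2
>>>   server.1=zk01:2888:3888
>>>   server.2=zk02:2888:3888
>>>   server.3=zk03:2888:3888
>>>
>>>   # make zoo.cfg visible to HBase, e.g. in $HBASE_HOME/conf/hbase-env.sh
>>>   export HBASE_CLASSPATH=/path/to/zookeeper/conf
>>>
>>>   # and tell HBase not to manage Zookeeper itself
>>>   export HBASE_MANAGES_ZK=false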
>>>
>>> About the separation of Zookeepers from regionservers:
>>> Yes, it is recommended to separate Zookeepers from regionservers.
>>> But that won't be necessary unless your clusters are very heavily loaded.
>>> They also suggest that you give Zookeeper its own hard disk, but I haven't
>>> done that myself yet. (Hard disks cost money, you know.)
>>> So I'd say your cluster seems fine.
>>> But when you want to expand your cluster, you'll need some changes. I
>>> suggest you take a look at "Hadoop: The Definitive Guide".
>>>
>>>
>>>
>>
>> Thanks & Best Regards
>>
>> Adarsh Sharma
>>
>>> Regards,
>>> Edward
>>>
>>>
>>>
>>
>>
>>> 2011/1/13 Adarsh Sharma <[EMAIL PROTECTED]>
>>>
>>>
>>>
>>>> Thanks Edward,
>>>>
>>>> Can you describe the architecture used in your configuration?