Let's say I have a /home/me/foo.jar which contains a main that runs a Hadoop
job and, once it completes, launches another job (a pipeline of a couple of
jobs). The jar also contains all the Hadoop libs and everything else it needs.
I launch it with hadoop jar /home/me/foo.jar.
If, while the first job is running, I make some changes to the code (changes
that only affect the second job, or neither of them) and upload the newly
compiled jar to /home/me/foo.jar, then once the running job finishes and the
second one tries to start, everything breaks. If I launch everything from the
beginning again, it works with no problems.
The thing is that this execution is cronned, so every time I change something
I have to stop the cron, wait for the running execution to end, upload the new
jar, and re-enable the cron. That way I can avoid the crash.
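I was wondering whether having the cron entry snapshot the jar before
launching would avoid the problem, since the running pipeline would never see
the overwritten file. A rough sketch of what I mean (the /tmp/hadoop-runs
location and the wrapper itself are just illustrative, not something I use):

```shell
#!/bin/sh
# Hypothetical cron wrapper: copy the jar to a per-run directory first,
# so uploading a new /home/me/foo.jar mid-run cannot affect this pipeline.
set -u
JAR=/home/me/foo.jar
RUN_ROOT=/tmp/hadoop-runs                      # illustrative snapshot location
RUN_DIR="$RUN_ROOT/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$RUN_DIR"
if [ -f "$JAR" ]; then
    cp "$JAR" "$RUN_DIR/foo.jar"               # pipeline only ever reads this copy
    hadoop jar "$RUN_DIR/foo.jar"              # new uploads to $JAR are now harmless
fi
```

But I don't know if this is the usual way people handle it.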
Are there any good practices for doing this kind of upload?
Thanks in advance.
View this message in context: http://lucene.472066.n3.nabble.com/Good-practices-using-a-jar-with-hadoop-jobs-tp3085755p3085755.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.