A few questions, please.
1. When I tried to run the Oozie examples, I was told to copy the /examples folder into HDFS. However, when I then tried to run an Oozie job, I was told that the source file was not found. It only worked after I cd'ed into the local directory on Linux and re-ran the job.
What was the point of copying the examples into HDFS if they are started from the local Linux FS?
This opens up a similar question: when I create my own jobs, I run the .jar file from the local FS and not from HDFS. I had thought that was the way to do it. Can you actually tell Hadoop where to run an MR job from: local or HDFS?
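For context, the sequence I ran looked roughly like this (the Oozie server URL and example paths are from the stock examples tarball, so treat them as assumptions about my setup):

```shell
# Copy the bundled examples into HDFS; the workflow.xml and lib/ jars
# are resolved from HDFS at run time
hadoop fs -put examples examples

# The -config file (job.properties) is read from the LOCAL filesystem,
# while oozie.wf.application.path inside it points at the HDFS copy.
# This is why the job only worked after I cd'ed into the local directory.
cd examples
oozie job -oozie http://localhost:11000/oozie \
    -config apps/map-reduce/job.properties -run
```

If I understand correctly, only job.properties is local; everything the workflow actually executes comes from HDFS. Please correct me if that's wrong.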
2. I noticed that the installation created a Linux group "hadoop" and added the hdfs/mapreduce users to it. However, inside HDFS the group is called "supergroup". Can someone elaborate on that?
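From poking around hdfs-site.xml, I suspect the HDFS-side name is controlled by this property (called dfs.permissions.supergroup in older releases and dfs.permissions.superusergroup in newer ones), which defaults to "supergroup" independently of any Linux group; the value below is my guess at how you would align it with the OS group:

```xml
<!-- hdfs-site.xml: set the HDFS superuser group to match the Linux group
     (property name is dfs.permissions.superusergroup on newer releases) -->
<property>
  <name>dfs.permissions.supergroup</name>
  <value>hadoop</value>
</property>
```

Is that the intended way to reconcile the two, or is the mismatch deliberate?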