</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!-- Licensed under the Apache License, Version 2.0 (the "License"); -->
<!-- you may not use this file except in compliance with the License. -->
<!-- You may obtain a copy of the License at -->
<!-- -->
<!-- http://www.apache.org/licenses/LICENSE-2.0 -->
<!-- -->
<!-- Unless required by applicable law or agreed to in writing, software -->
<!-- distributed under the License is distributed on an "AS IS" BASIS, -->
<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
<!-- See the License for the specific language governing permissions and -->
<!-- limitations under the License. See accompanying LICENSE file. -->
<div class="section">
<h2>Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.<a name="Hadoop_MapReduce_Next_Generation_-_Setting_up_a_Single_Node_Cluster."></a></h2>
<ul>
<li><a href="#Hadoop_MapReduce_Next_Generation_-_Setting_up_a_Single_Node_Cluster.">Hadoop MapReduce Next Generation - Setting up a Single Node Cluster.</a>
<p><b>NOTE:</b> You will need <a class="externalLink" href="http://code.google.com/p/protobuf">protoc 2.5.0</a> installed.</p>
<p>To skip the native builds in mapreduce, omit the <tt>-Pnative</tt> argument when invoking Maven. The tarball will then be available in the <tt>target/</tt> directory.</p></div>
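<p>As a rough sketch, a build along the following lines produces the distribution tarball. The exact Maven invocation and module directory here are assumptions, not part of this document; consult <tt>BUILDING.txt</tt> in your source tree for the authoritative commands.</p>

```shell
# From the top of a Hadoop source checkout (illustrative; adjust to your tree).
mvn clean install -DskipTests

# Build the distribution tarball; drop -Pnative to skip the native builds.
cd hadoop-mapreduce-project
mvn clean package -Pdist -Dtar -DskipTests -Pnative
```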
<div class="section">
<h3>Setting up the environment.<a name="Setting_up_the_environment."></a></h3>
<p>Assuming you have installed hadoop-common and hadoop-hdfs and exported <b>$HADOOP_COMMON_HOME</b> and <b>$HADOOP_HDFS_HOME</b>, untar the hadoop mapreduce tarball and set the environment variable <b>$HADOOP_MAPRED_HOME</b> to the untarred directory. Set <b>$HADOOP_YARN_HOME</b> to the same value as <b>$HADOOP_MAPRED_HOME</b>.</p>
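<p>For example, the exports might look like the following. The install paths are hypothetical; substitute the directories where you actually untarred each component.</p>

```shell
# Hypothetical install locations - adjust to your own layout.
export HADOOP_COMMON_HOME=/opt/hadoop/hadoop-common
export HADOOP_HDFS_HOME=/opt/hadoop/hadoop-hdfs

# Point $HADOOP_MAPRED_HOME at the untarred mapreduce directory...
export HADOOP_MAPRED_HOME=/opt/hadoop/hadoop-mapreduce

# ...and set $HADOOP_YARN_HOME to the same value.
export HADOOP_YARN_HOME=$HADOOP_MAPRED_HOME
```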
<p><b>NOTE:</b> The following instructions assume you have HDFS running.</p></div>
<div class="section">
<h3>Setting up Configuration.<a name="Setting_up_Configuration."></a></h3>
<p>To start the ResourceManager and NodeManager, you will have to update the configs. Assume that <tt>$HADOOP_CONF_DIR</tt> is the configuration directory and already contains the installed configs for HDFS and <tt>core-site.xml</tt>. There are two config files you will have to set up: <tt>mapred-site.xml</tt> and <tt>yarn-site.xml</tt>.</p>
<div class="section">
<h4>Setting up <tt>mapred-site.xml</tt><a name="Setting_up_mapred-site.xml"></a></h4>
<p>Add the following configs to your <tt>mapred-site.xml</tt>.</p>
<p>This assumes that the environment variables <b>$HADOOP_COMMON_HOME</b>, <b>$HADOOP_HDFS_HOME</b>, <b>$HADOOP_MAPRED_HOME</b>, <b>$HADOOP_YARN_HOME</b>, <b>$JAVA_HOME</b> and <b>$HADOOP_CONF_DIR</b> have been set appropriately. Set <b>$YARN_CONF_DIR</b> to the same value as <b>$HADOOP_CONF_DIR</b>.</p>