<ahref="../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibilty between Hadoop 1.x and Hadoop 2.x</a>
<ahref="http://maven.apache.org/"title="Built by Maven"class="poweredBy">
<imgalt="Built by Maven"src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<divid="bodyColumn">
<divid="contentBox">
<div class="section">
<h2>Hadoop HDFS over HTTP 2.3.0 - Server Setup<a name="Hadoop_HDFS_over_HTTP_2.3.0_-_Server_Setup"></a></h2>
<p>[ <a href="./index.html">Go Back</a> ]</p>
<p>This page explains how to quickly set up HttpFS using Pseudo authentication against a Hadoop cluster that also uses Pseudo authentication.</p>
<p>By default, HttpFS assumes that the Hadoop configuration files (<tt>core-site.xml</tt> and <tt>hdfs-site.xml</tt>) are in the HttpFS configuration directory.</p>
<p>If this is not the case, set the <tt>httpfs.hadoop.config.dir</tt> property in the <tt>httpfs-site.xml</tt> file to the location of the Hadoop configuration directory.</p>
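<p>For example, a minimal <tt>httpfs-site.xml</tt> entry (the directory path is a placeholder; use wherever your cluster's configuration actually lives):</p>
<pre>
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;httpfs.hadoop.config.dir&lt;/name&gt;
    &lt;value&gt;/etc/hadoop/conf&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</pre>
</div>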
<div class="section">
<h3>Start/Stop HttpFS<a name="StartStop_HttpFS"></a></h3>
<p>To start/stop HttpFS, use HttpFS's <tt>bin/httpfs.sh</tt> script. The <tt>httpfs.sh</tt> script is a wrapper for Tomcat's <tt>catalina.sh</tt> script that sets the environment variables and Java System properties required to run the HttpFS server.</p>
<p>NOTE: Invoking the script without any parameters lists all possible parameters (start, stop, run, etc.).</p>
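<p>For example (a minimal sketch; the <tt>start</tt> and <tt>stop</tt> parameters come straight from the script's usage listing):</p>
<pre>
$ bin/httpfs.sh start   # launch the embedded Tomcat running HttpFS
$ bin/httpfs.sh stop    # shut it down
</pre>
</div>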
<divclass="section">
<h3>Test HttpFS is working<aname="Test_HttpFS_is_working"></a></h3>
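<p>A minimal smoke test (a sketch: the hostname is a placeholder, 14000 is HttpFS's default port, and <tt>user.name</tt> must be a user known to the cluster). The <tt>GETHOMEDIRECTORY</tt> WebHDFS operation returns the user's home directory as JSON:</p>
<pre>
$ curl -sS 'http://httpfs-host.example.com:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs'
{"Path":"\/user\/hdfs"}
</pre>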
</div>
<div class="section">
<h3>HttpFS Configuration<a name="HttpFS_Configuration"></a></h3>
<p>HttpFS supports the following <a href="./httpfs-default.html">configuration properties</a> in HttpFS's <tt>conf/httpfs-site.xml</tt> configuration file.</p></div>
<divclass="section">
<h3>HttpFS over HTTPS (SSL)<aname="HttpFS_over_HTTPS_SSL"></a></h3>
<p>To configure HttpFS to work over SSL, edit the <tt>httpfs-env.sh</tt> script in the configuration directory and set <tt>HTTPFS_SSL_ENABLED</tt> to <tt>true</tt>.</p>
<p>In addition, the following two properties may be defined (shown below with their default values):</p>
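<pre>
# httpfs-env.sh -- keystore location and password; the names are inferred
# from the keystore steps below, and the values shown are the defaults
HTTPFS_SSL_KEYSTORE_FILE=${HOME}/.keystore
HTTPFS_SSL_KEYSTORE_PASS=password
</pre>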
<p>In the HttpFS <tt>tomcat/conf</tt> directory, replace the <tt>server.xml</tt> file with the <tt>ssl-server.xml</tt> file.</p>
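<p>For example, a minimal sketch (the install path is a placeholder for wherever your HttpFS distribution lives):</p>
<pre>
$ cd /path/to/httpfs/tomcat/conf   # hypothetical install location
$ cp server.xml server.xml.http    # keep the original for reference
$ cp ssl-server.xml server.xml
</pre>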
<p>You need to create an SSL certificate for the HttpFS server. As the <tt>httpfs</tt> Unix user, use the Java <tt>keytool</tt> command to create the SSL certificate, for example (a typical invocation; the <tt>tomcat</tt> alias matches Tomcat's default SSL configuration):</p>
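<pre>
$ keytool -genkey -alias tomcat -keyalg RSA
</pre>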
<p>You will be asked a series of questions in an interactive prompt. The command creates the keystore file, named <b>.keystore</b>, in the <tt>httpfs</tt> user's home directory.</p>
<p>The password you enter for "keystore password" must match the value of the <tt>HTTPFS_SSL_KEYSTORE_PASS</tt> environment variable set in the <tt>httpfs-env.sh</tt> script in the configuration directory.</p>
<p>The answer to "What is your first and last name?" (i.e. "CN") must be the hostname of the machine where the HttpFS Server will be running.</p>
<p>Start HttpFS. It should work over HTTPS.</p>
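<p>A quick way to verify is to repeat the smoke test over HTTPS (hostname and certificate path are placeholders; with a self-signed certificate, pass the certificate to curl via <tt>--cacert</tt>, or use <tt>-k</tt> to skip verification while testing):</p>
<pre>
$ curl -sS --cacert ~/httpfs-cert.pem \
  'https://httpfs-host.example.com:14000/webhdfs/v1?op=gethomedirectory&user.name=hdfs'
</pre>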
<p>Using the Hadoop <tt>FileSystem</tt> API or the Hadoop FS shell, use the <tt>swebhdfs://</tt> scheme. If you are using a self-signed certificate, make sure the client JVM picks up a truststore containing the public key of the SSL certificate.</p>
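<p>For example, a minimal FS shell sketch (host, port, truststore path, and password are placeholders; <tt>javax.net.ssl.trustStore</tt> is the standard JSSE property for pointing the JVM at a truststore):</p>
<pre>
$ export HADOOP_CLIENT_OPTS="-Djavax.net.ssl.trustStore=${HOME}/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
$ hadoop fs -ls swebhdfs://httpfs-host.example.com:14000/
</pre>
</div>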