<ahref="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibilty between Hadoop 1.x and Hadoop 2.x</a>
<ahref="http://maven.apache.org/"title="Built by Maven"class="poweredBy">
<imgalt="Built by Maven"src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<divid="bodyColumn">
<divid="contentBox">
<div class="section">
<h2>Overview<a name="Overview"></a></h2>
<p>The File System (FS) shell includes various shell-like commands that directly interact with the Hadoop Distributed File System (HDFS) as well as other file systems that Hadoop supports, such as Local FS, HFTP FS, S3 FS, and others. The FS shell is invoked by:</p>
<divclass="source">
<pre>bin/hadoop fs <args></pre></div>
<p>All FS shell commands take path URIs as arguments. The URI format is <tt>scheme://authority/path</tt>. For HDFS the scheme is <tt>hdfs</tt>, and for the Local FS the scheme is <tt>file</tt>. The scheme and authority are optional. If not specified, the default scheme specified in the configuration is used. An HDFS file or directory such as /parent/child can be specified as <tt>hdfs://namenodehost/parent/child</tt> or simply as <tt>/parent/child</tt> (given that your configuration is set to point to <tt>hdfs://namenodehost</tt>).</p>
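<p>For example, if the configured default file system is <tt>hdfs://namenodehost</tt>, the following two commands are equivalent (the host and path below are illustrative):</p>
<ul>
<li><tt>hdfs dfs -ls hdfs://namenodehost/parent/child</tt></li>
<li><tt>hdfs dfs -ls /parent/child</tt></li></ul>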
<p>Most of the commands in FS shell behave like corresponding Unix commands. Differences are described with each of the commands. Error information is sent to stderr and the output is sent to stdout.</p></div>
<div class="section">
<h2>appendToFile<a name="appendToFile"></a></h2>
<p>Usage: <tt>hdfs dfs -appendToFile <localsrc> ... <dst></tt></p>
<p>Append a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and appends to the destination file system.</p>
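<p>Example (the local and remote file names below are placeholders):</p>
<ul>
<li><tt>hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile</tt></li>
<li><tt>hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile</tt> Reads the input from stdin.</li></ul>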
<p>Returns 0 on success and -1 on error.</p></div>
<divclass="section">
<h2>chgrp<aname="chgrp"></a></h2>
<p>Usage: <tt>hdfs dfs -chgrp [-R] GROUP URI [URI ...]</tt></p>
<p>Change group association of files. The user must be the owner of files, or else a super-user. Additional information is in the <a href="../hadoop-hdfs/HdfsPermissionsGuide.html">Permissions Guide</a>.</p>
<p>Options</p>
<ul>
<li>The -R option will make the change recursively through the directory structure.</li></ul>
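<p>Example (the group name and path below are illustrative placeholders):</p>
<ul>
<li><tt>hdfs dfs -chgrp -R hadoopgroup /user/hadoop/dir1</tt></li></ul></div>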
<div class="section">
<h2>chmod<a name="chmod"></a></h2>
<p>Usage: <tt>hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]</tt></p>
<p>Change the permissions of files. With -R, make the change recursively through the directory structure. The user must be the owner of the file, or else a super-user. Additional information is in the <a href="../hadoop-hdfs/HdfsPermissionsGuide.html">Permissions Guide</a>.</p>
<p>Options</p>
<ul>
<li>The -R option will make the change recursively through the directory structure.</li></ul>
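<p>Example (modes and paths below are illustrative placeholders):</p>
<ul>
<li><tt>hdfs dfs -chmod 644 /user/hadoop/file1</tt></li>
<li><tt>hdfs dfs -chmod -R 755 /user/hadoop/dir1</tt></li></ul></div>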
<divclass="section">
<h2>chown<aname="chown"></a></h2>
<p>Usage: <tt>hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]</tt></p>
<p>Change the owner of files. The user must be a super-user. Additional information is in the <a href="../hadoop-hdfs/HdfsPermissionsGuide.html">Permissions Guide</a>.</p>
<p>Options</p>
<ul>
<li>The -R option will make the change recursively through the directory structure.</li></ul>
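<p>Example (the owner, group, and path below are illustrative placeholders):</p>
<ul>
<li><tt>hdfs dfs -chown -R hadoopuser:hadoopgroup /user/hadoop/dir1</tt></li></ul></div>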
<div class="section">
<h2>count<a name="count"></a></h2>
<p>Usage: <tt>hdfs dfs -count [-q] <paths></tt></p>
<p>Count the number of directories, files, and bytes under the paths that match the specified file pattern. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME</p>
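<p>Example (the hosts and paths below are illustrative):</p>
<ul>
<li><tt>hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2</tt></li>
<li><tt>hdfs dfs -count -q hdfs://nn1.example.com/file1</tt></li></ul>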
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>
<divclass="section">
<h2>dus<aname="dus"></a></h2>
<p>Usage: <tt>hdfs dfs -dus <args></tt></p>
<p>Displays a summary of file lengths. This is an alternate form of <tt>hdfs dfs -du -s</tt>.</p>
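<p>Example (the path below is an illustrative placeholder):</p>
<ul>
<li><tt>hdfs dfs -dus /user/hadoop/dir1</tt></li></ul></div>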
<divclass="section">
<h2>expunge<aname="expunge"></a></h2>
<p>Usage: <tt>hdfs dfs -expunge</tt></p>
<p>Empty the Trash. Refer to the <a href="../hadoop-hdfs/HdfsDesign.html">HDFS Architecture Guide</a> for more information on the Trash feature.</p></div>
<div class="section">
<h2>get<a name="get"></a></h2>
<p>Usage: <tt>hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst></tt></p>
<p>Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.</p>
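<p>Example (file names and the NameNode host below are placeholders):</p>
<ul>
<li><tt>hdfs dfs -get /user/hadoop/file localfile</tt></li>
<li><tt>hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile</tt></li></ul>
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>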
<div class="section">
<h2>getmerge<a name="getmerge"></a></h2>
<p>Usage: <tt>hdfs dfs -getmerge <src> <localdst> [addnl]</tt></p>
<p>Takes a source directory and a destination file as input and concatenates the files in src into the destination local file. Optionally, addnl can be set to enable adding a newline character at the end of each file.</p>
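<p>Example (the directory and file names below are illustrative placeholders):</p>
<ul>
<li><tt>hdfs dfs -getmerge /user/hadoop/mydir ./merged.txt addnl</tt></li></ul></div>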
<divclass="section">
<h2>ls<aname="ls"></a></h2>
<p>Usage: <tt>hdfs dfs -ls <args></tt></p>
<p>For a file returns stat on the file with the following format:</p>
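<div class="source">
<pre>permissions number_of_replicas userid groupid filesize modification_date modification_time filename</pre></div>
<p>For a directory it returns a list of its direct children as in Unix. A directory is listed as:</p>
<div class="source">
<pre>permissions userid groupid modification_date modification_time dirname</pre></div>
<p>Example (the path below is an illustrative placeholder):</p>
<ul>
<li><tt>hdfs dfs -ls /user/hadoop/file1</tt></li></ul>
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>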
<p>Displays a "Not implemented yet" message.</p></div>
<divclass="section">
<h2>mv<aname="mv"></a></h2>
<p>Usage: <tt>hdfs dfs -mv URI [URI ...] <dest></tt></p>
<p>Moves files from source to destination. This command allows multiple sources as well, in which case the destination needs to be a directory. Moving files across file systems is not permitted.</p>
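<p>Example (the paths and NameNode host below are placeholders):</p>
<ul>
<li><tt>hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2</tt></li>
<li><tt>hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/dir1</tt></li></ul>
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>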
<div class="section">
<h2>put<a name="put"></a></h2>
<p>Usage: <tt>hdfs dfs -put <localsrc> ... <dst></tt></p>
<p>Copy single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes to the destination file system.</p>
<p>Example:</p>
<ul>
<li><tt>hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile</tt> Reads the input from stdin.</li></ul>
<p>Exit Code:</p>
<p>Returns 0 on success and -1 on error.</p></div>
<divclass="section">
<h2>rm<aname="rm"></a></h2>
<p>Usage: <tt>hdfs dfs -rm [-skipTrash] URI [URI ...]</tt></p>
<p>Delete files specified as args. Only deletes files and empty directories. If the -skipTrash option is specified, the trash, if enabled, will be bypassed and the specified file(s) deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory. Refer to rmr for recursive deletes.</p>
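<p>Example (the host and paths below are placeholders):</p>
<ul>
<li><tt>hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir</tt></li></ul>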
<p>Returns 0 on success and -1 on error.</p></div>
<divclass="section">
<h2>rmr<aname="rmr"></a></h2>
<p>Usage: <tt>hdfs dfs -rmr [-skipTrash] URI [URI ...]</tt></p>
<p>Recursive version of delete. If the -skipTrash option is specified, the trash, if enabled, will be bypassed and the specified file(s) deleted immediately. This can be useful when it is necessary to delete files from an over-quota directory.</p>
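<p>Example (the host and paths below are placeholders):</p>
<ul>
<li><tt>hdfs dfs -rmr /user/hadoop/dir</tt></li>
<li><tt>hdfs dfs -rmr hdfs://nn.example.com/user/hadoop/dir</tt></li></ul>
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>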
<div class="section">
<h2>setrep<a name="setrep"></a></h2>
<p>Usage: <tt>hdfs dfs -setrep [-R] [-w] <numReplicas> <path></tt></p>
<p>Changes the replication factor of a file. If <i>path</i> is a directory then the command recursively changes the replication factor of all files under the directory tree rooted at <i>path</i>.</p>
<p>Options:</p>
<ul>
<li>The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.</li>
<li>The -R flag is accepted for backwards compatibility. It has no effect.</li></ul>
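<p>Example (the replication count and path below are illustrative):</p>
<ul>
<li><tt>hdfs dfs -setrep -w 3 /user/hadoop/dir1</tt></li></ul>
<p>Exit Code: Returns 0 on success and -1 on error.</p></div>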