<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!-- Generated by Apache Maven Doxia at 2014-02-11 -->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 2.3.0 - Offline Image Viewer Guide</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20140211" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xleft">
<a href="http://www.apache.org/" class="externalLink">Apache</a>
&gt;
<a href="http://hadoop.apache.org/" class="externalLink">Hadoop</a>
&gt;
<a href="../">Apache Hadoop Project Dist POM</a>
&gt;
Apache Hadoop 2.3.0
</div>
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://svn.apache.org/repos/asf/hadoop/" class="externalLink">SVN</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2014-02-11
&nbsp;| Version: 2.3.0
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Hadoop Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">File System Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Hadoop Compatibility</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Superusers</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithQJM.html">High Availability With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html">High Availability With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">HDFS Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Hftp.html">HFTP</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">C API libhdfs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS REST API</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">HDFS NFS Gateway</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibilty between Hadoop 1.x and Hadoop 2.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">YARN Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">YARN Commands</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/HistoryServerRest.html">History Server</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/releasenotes.html">Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CHANGES.txt">Common CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CHANGES.txt">HDFS CHANGES.txt</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-mapreduce/CHANGES.txt">MapReduce CHANGES.txt</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!-- Licensed under the Apache License, Version 2.0 (the "License"); --><!-- you may not use this file except in compliance with the License. --><!-- You may obtain a copy of the License at --><!-- --><!-- http://www.apache.org/licenses/LICENSE-2.0 --><!-- --><!-- Unless required by applicable law or agreed to in writing, software --><!-- distributed under the License is distributed on an "AS IS" BASIS, --><!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. --><!-- See the License for the specific language governing permissions and --><!-- limitations under the License. See accompanying LICENSE file. --><div class="section">
<h2>Offline Image Viewer Guide<a name="Offline_Image_Viewer_Guide"></a></h2>
<ul>
<li><a href="#Offline_Image_Viewer_Guide">Offline Image Viewer Guide</a>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Usage">Usage</a>
<ul>
<li><a href="#Basic">Basic</a></li></ul></li></ul></li></ul>
<div class="section">
<h3>Overview<a name="Overview"></a></h3>
<p>The Offline Image Viewer is a tool to dump the contents of hdfs fsimage files to human-readable formats in order to allow offline analysis and examination of a Hadoop cluster's namespace. The tool is able to process very large image files relatively quickly, converting them to one of several output formats. The tool handles the layout formats that were included with Hadoop versions 16 and up. If the tool is not able to process an image file, it will exit cleanly. The Offline Image Viewer does not require a Hadoop cluster to be running; it is entirely offline in its operation.</p>
<p>The Offline Image Viewer provides several output processors:</p>
<ol style="list-style-type: decimal">
<li>Ls is the default output processor. It closely mimics the format of the lsr command. It includes the same fields, in the same order, as lsr: directory or file flag, permissions, replication, owner, group, file size, modification date, and full path. Unlike the lsr command, the root path is included. One important difference between the output of the lsr command and this processor is that this output is not sorted by directory name and contents. Rather, the files are listed in the order in which they are stored in the fsimage file. Therefore, it is not possible to directly compare the output of the lsr command and this tool. The Ls processor uses information contained within the Inode blocks to calculate file sizes and ignores the -skipBlocks option.</li>
<li>Indented provides a more complete view of the fsimage's contents, including all of the information in the image, such as image version, generation stamp and inode- and block-specific listings. This processor uses indentation to organize the output hierarchically. The format is suitable for easy human comprehension.</li>
<li>Delimited provides one file per line consisting of the path, replication, modification time, access time, block size, number of blocks, file size, namespace quota, diskspace quota, permissions, username and group name. If run against an fsimage that does not contain any of these fields, the field's column will be included, but no data recorded. The default record delimiter is a tab, but this may be changed via the -delimiter command line argument. This processor is designed to create output that is easily analyzed by other tools, such as <a class="externalLink" href="http://pig.apache.org">Apache Pig</a>. See the <a href="#Analyzing_Results">Analyzing Results</a> section for further information on using this processor to analyze the contents of fsimage files.</li>
<li>XML creates an XML document of the fsimage and includes all of the information within the fsimage, similar to the lsr processor. The output of this processor is amenable to automated processing and analysis with XML tools. Due to the verbosity of the XML syntax, this processor will also generate the largest amount of output.</li>
<li>FileDistribution is the tool for analyzing file sizes in the namespace image. In order to run the tool one should define a range of integers [0, maxSize] by specifying maxSize and a step. The range of integers is divided into segments of size step: [0, s[1], ..., s[n-1], maxSize], and the processor calculates how many files in the system fall into each segment [s[i-1], s[i]). Note that files larger than maxSize always fall into the very last segment. The output file is formatted as a tab-separated two-column table: Size and NumFiles, where Size represents the start of the segment and NumFiles is the number of files from the image whose size falls into this segment. An example invocation of this processor follows the list.</li></ol></div>
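<p>As a minimal sketch of a FileDistribution run (the -maxSize and -step argument names are assumed here; check the output of hdfs oiv -h on your installation), the following buckets files into 1 MB segments up to 1 GB:</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimage -o fsimage_histogram.txt -p FileDistribution -maxSize 1073741824 -step 1048576</pre></div>
<p>The resulting Size/NumFiles table can then be summed or plotted to see how storage is distributed across file sizes.</p>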
<div class="section">
<h3>Usage<a name="Usage"></a></h3>
<div class="section">
<h4>Basic<a name="Basic"></a></h4>
<p>The simplest usage of the Offline Image Viewer is to provide just an input and output file, via the -i and -o command-line switches:</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimage -o fsimage.txt</pre></div>
<p>This will create a file named fsimage.txt in the current directory using the Ls output processor. For very large image files, this process may take several minutes.</p>
<p>One can specify which output processor to use via the command-line switch -p. For instance:</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimage -o fsimage.xml -p XML</pre></div>
<p>or</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimage -o fsimage.txt -p Indented</pre></div>
<p>This will run the tool using either the XML or Indented output processor, respectively.</p>
<p>One command-line option worth considering is -skipBlocks, which prevents the tool from explicitly enumerating all of the blocks that make up a file in the namespace. This is useful for file systems that have very large files. Enabling this option can significantly decrease the size of the resulting output, as individual blocks are not included. Note, however, that the Ls processor needs to enumerate the blocks and so overrides this option.</p></div></div></div>
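<p>For example, an XML dump without per-block listings could be produced as follows (a sketch; pairing -skipBlocks with the XML processor here is only illustrative):</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimage -o fsimage.xml -p XML -skipBlocks</pre></div>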
<div class="section">
<h2>Example<a name="Example"></a></h2>
<p>Consider the following contrived namespace:</p>
<div>
<pre> drwxr-xr-x - theuser supergroup 0 2009-03-16 21:17 /anotherDir
-rw-r--r-- 3 theuser supergroup 286631664 2009-03-16 21:15 /anotherDir/biggerfile
-rw-r--r-- 3 theuser supergroup 8754 2009-03-16 21:17 /anotherDir/smallFile
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:11 /mapredsystem
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:11 /mapredsystem/theuser
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:11 /mapredsystem/theuser/mapredsystem
drwx-wx-wx - theuser supergroup 0 2009-03-16 21:11 /mapredsystem/theuser/mapredsystem/ip.redacted.com
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:12 /one
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:12 /one/two
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:16 /user
drwxr-xr-x - theuser supergroup 0 2009-03-16 21:19 /user/theuser</pre></div>
<p>Applying the Offline Image Viewer against this file with default options would result in the following output:</p>
<div>
<pre> machine:hadoop-0.21.0-dev theuser$ bin/hdfs oiv -i fsimagedemo -o fsimage.txt
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:16 /
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:17 /anotherDir
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:11 /mapredsystem
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:12 /one
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:16 /user
-rw-r--r-- 3 theuser supergroup 286631664 2009-03-16 14:15 /anotherDir/biggerfile
-rw-r--r-- 3 theuser supergroup 8754 2009-03-16 14:17 /anotherDir/smallFile
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:11 /mapredsystem/theuser
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:11 /mapredsystem/theuser/mapredsystem
drwx-wx-wx - theuser supergroup 0 2009-03-16 14:11 /mapredsystem/theuser/mapredsystem/ip.redacted.com
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:12 /one/two
drwxr-xr-x - theuser supergroup 0 2009-03-16 14:19 /user/theuser</pre></div>
<p>Similarly, applying the Indented processor would generate output that begins with:</p>
<div>
<pre> machine:hadoop-0.21.0-dev theuser$ bin/hdfs oiv -i fsimagedemo -p Indented -o fsimage.txt
FSImage
ImageVersion = -19
NamespaceID = 2109123098
GenerationStamp = 1003
INodes [NumInodes = 12]
Inode
INodePath =
Replication = 0
ModificationTime = 2009-03-16 14:16
AccessTime = 1969-12-31 16:00
BlockSize = 0
Blocks [NumBlocks = -1]
NSQuota = 2147483647
DSQuota = -1
Permissions
Username = theuser
GroupName = supergroup
PermString = rwxr-xr-x
...remaining output omitted...</pre></div>
<div class="section">
<h3>Options<a name="Options"></a></h3>
<table border="1" class="bodyTable">
<tr class="a">
<td align="right"><b>Flag</b></td>
<td align="left"><b>Description</b></td></tr>
<tr class="b">
<td align="right"><tt>-i</tt>|<tt>--inputFile</tt> <i>input file</i></td>
<td align="left">Specify the input fsimage file to process. Required.</td></tr>
<tr class="a">
<td align="right"><tt>-o</tt>|<tt>--outputFile</tt> <i>output file</i></td>
<td align="left">Specify the output filename, if the specified output processor generates one. If the specified file already exists, it is silently overwritten. Required.</td></tr>
<tr class="b">
<td align="right"><tt>-p</tt>|<tt>--processor</tt> <i>processor</i></td>
<td align="left">Specify the image processor to apply against the image file. Currently valid options are Ls (default), XML and Indented..</td></tr>
<tr class="a">
<td align="right"><tt>-skipBlocks</tt></td>
<td align="left">Do not enumerate individual blocks within files. This may save processing time and outfile file space on namespaces with very large files. The Ls processor reads the blocks to correctly determine file sizes and ignores this option.</td></tr>
<tr class="b">
<td align="right"><tt>-printToScreen</tt></td>
<td align="left">Pipe output of processor to console as well as specified file. On extremely large namespaces, this may increase processing time by an order of magnitude.</td></tr>
<tr class="a">
<td align="right"><tt>-delimiter</tt> <i>arg</i></td>
<td align="left">When used in conjunction with the Delimited processor, replaces the default tab delimiter with the string specified by arg.</td></tr>
<tr class="b">
<td align="right"><tt>-h</tt>|<tt>--help</tt></td>
<td align="left">Display the tool usage and help information and exit.</td></tr></table></div>
<div class="section">
<h3>Analyzing Results<a name="Analyzing_Results"></a></h3>
<p>The Offline Image Viewer makes it easy to gather large amounts of data about the hdfs namespace. This information can then be used to explore file system usage patterns or find specific files that match arbitrary criteria, along with other types of namespace analysis. The Delimited image processor in particular creates output that is amenable to further processing by tools such as <a class="externalLink" href="http://pig.apache.org">Apache Pig</a>. Pig is a particularly good choice for analyzing these data, as it can handle the output generated from a small fsimage but also scales up to consume data from extremely large file systems.</p>
<p>The Delimited image processor generates lines of text separated, by default, by tabs and includes all of the fields that are common between constructed files and files that were still under construction when the fsimage was generated. Example scripts are provided demonstrating how to use this output to accomplish three tasks: determine the number of files each user has created on the file system, find files that were created but have never been accessed, and find probable duplicates of large files by comparing the size of each file.</p>
<p>Each of the following scripts assumes you have generated an output file named foo using the Delimited processor, and that the results of the Pig analysis will be stored in a file named results.</p>
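<p>For instance, the file foo referenced below could be produced with a command along these lines (a sketch; the input image name fsimagedemo simply matches the earlier example):</p>
<div>
<pre> bash$ bin/hdfs oiv -i fsimagedemo -o foo -p Delimited</pre></div>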
<div class="section">
<h4>Total Number of Files for Each User<a name="Total_Number_of_Files_for_Each_User"></a></h4>
<p>This script processes each path within the namespace, groups them by the file owner and determines the total number of files each user owns.</p>
<div>
<pre> numFilesOfEachUser.pig:
-- This script determines the total number of files each user has in
-- the namespace. Its output is of the form:
-- username, totalNumFiles
-- Load all of the fields from the file
A = LOAD '$inputFile' USING PigStorage('\t') AS (path:chararray,
replication:int,
modTime:chararray,
accessTime:chararray,
blockSize:long,
numBlocks:int,
fileSize:long,
NamespaceQuota:int,
DiskspaceQuota:int,
perms:chararray,
username:chararray,
groupname:chararray);
-- Grab just the path and username
B = FOREACH A GENERATE path, username;
-- Generate the sum of the number of paths for each user
C = FOREACH (GROUP B BY username) GENERATE group, COUNT(B.path);
-- Save results
STORE C INTO '$outputFile';</pre></div>
<p>This script can be run with Pig using the following command:</p>
<div>
<pre> bin/pig -x local -param inputFile=../foo -param outputFile=../results ../numFilesOfEachUser.pig</pre></div>
<p>The output file's content will be similar to that below:</p>
<div>
<pre> bart 1
lisa 16
homer 28
marge 2456</pre></div></div>
<div class="section">
<h4>Files That Have Never Been Accessed<a name="Files_That_Have_Never_Been_Accessed"></a></h4>
<p>This script finds files that were created but whose access times were never changed, meaning they were never opened or viewed.</p>
<div>
<pre> neverAccessed.pig:
-- This script generates a list of files that were created but never
-- accessed, based on their AccessTime
-- Load all of the fields from the file
A = LOAD '$inputFile' USING PigStorage('\t') AS (path:chararray,
replication:int,
modTime:chararray,
accessTime:chararray,
blockSize:long,
numBlocks:int,
fileSize:long,
NamespaceQuota:int,
DiskspaceQuota:int,
perms:chararray,
username:chararray,
groupname:chararray);
-- Grab just the path and last time the file was accessed
B = FOREACH A GENERATE path, accessTime;
-- Drop all the paths that don't have the default assigned last-access time
C = FILTER B BY accessTime == '1969-12-31 16:00';
-- Drop the accessTimes, since they're all the same
D = FOREACH C GENERATE path;
-- Save results
STORE D INTO '$outputFile';</pre></div>
<p>This script can be run with Pig using the following command; the output file will contain a list of the files that were created but never viewed afterwards.</p>
<div>
<pre> bin/pig -x local -param inputFile=../foo -param outputFile=../results ../neverAccessed.pig</pre></div></div>
<div class="section">
<h4>Probable Duplicated Files Based on File Size<a name="Probable_Duplicated_Files_Based_on_File_Size"></a></h4>
<p>This script groups files together based on their size, drops any that are less than 100 MB, and returns a list of the file size, the number of files found, and a tuple of the file paths. This can be used to find likely duplicates within the filesystem namespace.</p>
<div>
<pre> probableDuplicates.pig:
-- This script finds probable duplicate files greater than 100 MB by
-- grouping together files based on their byte size. Files of this size
-- with exactly the same number of bytes can be considered probable
-- duplicates, but should be checked further, either by comparing the
-- contents directly or by another proxy, such as a hash of the contents.
-- The script's output is of the form:
-- fileSize numProbableDuplicates {(probableDup1), (probableDup2)}
-- Load all of the fields from the file
A = LOAD '$inputFile' USING PigStorage('\t') AS (path:chararray,
replication:int,
modTime:chararray,
accessTime:chararray,
blockSize:long,
numBlocks:int,
fileSize:long,
NamespaceQuota:int,
DiskspaceQuota:int,
perms:chararray,
username:chararray,
groupname:chararray);
-- Grab the pathname and filesize
B = FOREACH A generate path, fileSize;
-- Drop files smaller than 100 MB
C = FILTER B by fileSize &gt; 100L * 1024L * 1024L;
-- Gather all the files of the same byte size
D = GROUP C by fileSize;
-- Generate path, num of duplicates, list of duplicates
E = FOREACH D generate group AS fileSize, COUNT(C) as numDupes, C.path AS files;
-- Drop all the files where there are only one of them
F = FILTER E by numDupes &gt; 1L;
-- Sort by the size of the files
G = ORDER F by fileSize;
-- Save results
STORE G INTO '$outputFile';</pre></div>
<p>This script can be run with Pig using the following command:</p>
<div>
<pre> bin/pig -x local -param inputFile=../foo -param outputFile=../results ../probableDuplicates.pig</pre></div>
<p>The output file's content will be similar to that below:</p>
<div>
<pre> 1077288632 2 {(/user/tennant/work1/part-00501),(/user/tennant/work1/part-00993)}
1077288664 4 {(/user/tennant/work0/part-00567),(/user/tennant/work0/part-03980),(/user/tennant/work1/part-00725),(/user/eccelston/output/part-03395)}
1077288668 3 {(/user/tennant/work0/part-03705),(/user/tennant/work0/part-04242),(/user/tennant/work1/part-03839)}
1077288698 2 {(/user/tennant/work0/part-00435),(/user/eccelston/output/part-01382)}
1077288702 2 {(/user/tennant/work0/part-03864),(/user/eccelston/output/part-03234)}</pre></div>
<p>Each line includes the file size in bytes that was found to be duplicated, the number of duplicates found, and a list of the duplicated paths. Files smaller than 100 MB are ignored; above that threshold, files sharing an exact byte size are reasonably likely to be duplicates.</p></div></div></div>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">&#169; 2014
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a></div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>
