Linux Commands - Mkdir - Rmdir - Touch - RM - CP - More - Less - Head - Tail - Cat
mkdir:
Creates a new directory. (Detailed examples are given in the "Make directory" section later in this document.)
touch:
Creating a file:
The touch command creates a new, empty file; if the file already exists, it simply updates the file's timestamp.
Example:
hadoopguru@hadoop2:~$ cd hadoop_test/
hadoopguru@hadoop2:~/hadoop_test$ ls -l
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 hive.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 mahout
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 pig.txt
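The touch command that created the files above is not shown in the original; it would have been along these lines (file names taken from the listing):
hadoopguru@hadoop2:~/hadoop_test$ touch hive.txt mahout pig.txt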
1. touch -t :
A file can be created (or an existing file's timestamp changed) with a user-defined date and time using touch -t. The timestamp is given in the form [[CC]YY]MMDDhhmm[.ss].
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls -l
total 0
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 hive.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 mahout
-rw-rw-r-- 1 hadoop hadoop 0 Oct 28 00:13 pig.txt
-rw-rw-r-- 1 hadoop hadoop 0 Oct 27 23:50 testemp
-rw-rw-r-- 1 hadoop hadoop 0 Oct 27 23:50 testemp1
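The touch -t invocation itself is not shown above; a hedged illustration that would match the Oct 27 23:50 timestamps in the listing (MMDDhhmm format, current year assumed) is:
hadoopguru@hadoop2:~/hadoop_test$ touch -t 10272350 testemp testemp1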
rm:
Removing a file:
Note: A removed file cannot be recovered; once a file is removed, it is gone. Hence one should be careful before removing a file.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1 testemp2
hadoopguru@hadoop2:~/hadoop_test$ rm testemp2
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1
rm -i :
This command asks the user for confirmation before removing a file.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp testemp1
hadoopguru@hadoop2:~/hadoop_test$ rm -i testemp1
rm: remove regular empty file `testemp1'? y
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp
rm -r :
This command removes a directory and its contents recursively (it can also be used on ordinary files).
Example:
test_dir is a directory, which can be removed using the rm -r command:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt test_dir testemp test_empty
hadoopguru@hadoop2:~/hadoop_test$ rm -r test_dir
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt testemp test_empty
cp:
Copy a file:
To copy a file, the cp command is used: cp <source> <target>.
If the target is a directory, the source file is copied into that directory.
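The first example (copying to a new file name) is not shown in the original; a minimal sketch, where the target name hive_copy.txt is chosen purely for illustration:
hadoopguru@hadoop2:~/hadoop_test$ cp hive.txt hive_copy.txt
hadoopguru@hadoop2:~/hadoop_test$ ls hive_copy.txt
hive_copy.txt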
Example 2: Target is a directory
hadoopguru@hadoop2:~/hadoop_test$ cd Test_dir
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ ls
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ cd ..
hadoopguru@hadoop2:~/hadoop_test$ cd Test_dir
hadoopguru@hadoop2:~/hadoop_test/Test_dir$ ls
Test_file1.txt
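(The copy step itself is not shown above; it would presumably have been something like the following, run from ~/hadoop_test:)
hadoopguru@hadoop2:~/hadoop_test$ cp Test_file1.txt Test_dir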
Example 3: Copy a directory recursively (cp -r)
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir2 Test_file1.txt
mahout Test_dir testemp Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ cp -r Test_dir Test_dir1
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1/
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_dir1 Test_file2.txt
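(Again, the copy command is not shown; placing Test_dir1 inside Test_dir2 would presumably have been done with something like:)
hadoopguru@hadoop2:~/hadoop_test$ cp -r Test_dir1 Test_dir2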
Example 4: Copy multiple files to a directory
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir1
Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
Test_file1.txt Test_file2.txt
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt pig.txt Test_dir1 testemp Test_file2.txt
mahout Test_dir Test_dir2 Test_file1.txt
hadoopguru@hadoop2:~/hadoop_test$ cp hive.txt pig.txt Test_dir1/Test_file1.txt Test_dir2
hadoopguru@hadoop2:~/hadoop_test$ ls Test_dir2
hive.txt pig.txt Test_file1.txt Test_file2.txt
5: Interactive copy (cp -i):
The interactive option (cp -i) asks the user for confirmation before overwriting an existing file.
Example:
hadoopguru@hadoop2:~/hadoop_test$ ls
hive.txt mahout pig.txt Test_dir Test_dir1 Test_dir2
hadoopguru@hadoop2:~/hadoop_test$
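The cp -i invocation itself is not shown in the original. A hedged illustration, assuming Test_dir1 already contains a copy of Test_file1.txt as in the earlier examples (the exact wording of the overwrite prompt may vary between systems):
hadoopguru@hadoop2:~/hadoop_test$ cp -i Test_file1.txt Test_dir1
cp: overwrite 'Test_dir1/Test_file1.txt'? n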
more:
The more command is used to view the contents of a file that is longer than one page, one screen at a time.
You can scroll to the next page using the Space bar, or move around with the Page Up / Page Down keys.
Example:
hadoop@hadoop2:~$ ls
AboutHadoop.txt                  hadoop-1.2.1
apache-flume-1.4.0-bin.tar.gz.1  hadoop-1.2.1-bin.tar.gz
aveo                             hadoop_test
count.txt                        hive
data                             hive-0.11.0-bin.tar.gz
datanode                         mahout-distribution-0.8.tar.gz
derby.log                        metastore_db
flume                            namenode
hadoop@hadoop2:~$ more AboutHadoop.txt
1. Scalable fault tolerant distributed system for large data storage & processing.
2. Designed to solve problems that involve storing, processing & analyzing large data (Terabytes, petabytes, etc.)
less:
less is another command for viewing the contents of a file, one page at a time. After viewing the file, press q to quit.
Example:
hadoop@hadoop2:~$ less count.txt
(the file contents are displayed; press Enter or the Space bar to scroll)
one
two
three
four
five
six
seven
eight
nine
ten
eleven
twelve
count.txt (END)
Press q to quit less and return to the prompt.
hadoop@hadoop2:~$
head:
Displays the first ten lines of a file.
Example:
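The original output is not reproduced here; a minimal illustration using the count.txt file from the less example above (whose first ten lines are one through ten):
hadoop@hadoop2:~$ head count.txt
one
two
three
four
five
six
seven
eight
nine
ten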
tail:
Displays the last ten lines of a file.
Example:
hadoop@hadoop2:~$ tail count.txt
three
four
five
six
seven
eight
nine
ten
eleven
twelve
More Commands:
head -n <N> or tail -n <N> : displays the first or last N lines of a file.
head -c <N> or tail -c <N> : displays the first or last N bytes of a file.
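Hedged illustrations using the same count.txt file (twelve lines, one word per line, as shown in the less example):
hadoop@hadoop2:~$ head -n 3 count.txt
one
two
three
hadoop@hadoop2:~$ tail -n 2 count.txt
eleven
twelve
hadoop@hadoop2:~$ head -c 8 count.txt
one
two
(head -c 8 prints the first 8 bytes, i.e. "one" and "two" with their newlines, assuming no trailing spaces in the file)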
-----------------------------------------------------------------------------------------------------------------
pwd (Print working directory):
The pwd command prints the full path of the current working directory.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
cd (Change directory):
The change directory (cd) command is used to change your current working directory.
Example: Inside /home/hadoop there is a directory named hadoop-1.2.1.
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
1. cd ~ or cd:
The cd command without any target directory takes you to your home directory; cd ~ has the same effect.
I. hadoopguru@hadoop2:~/hadoop-1.2.1$ cd ~
hadoopguru@hadoop2:~$ pwd
/home/hadoop
2. cd .. :
The cd .. command takes you to the parent directory, i.e. the directory just above your current working directory.
Appending further ../ components (e.g. cd ../..) takes you one level higher each time, to the parent of the parent directory, and so on.
Example:
I. hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd ..
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
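A minimal illustration of going two levels up with cd ../.., consistent with the paths used above:
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd ../..
hadoopguru@hadoop2:~$ pwd
/home/hadoop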
cd - :
The cd - command takes you back to the previous working directory and prints its path.
Example:
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ pwd
/home/hadoop/hadoop-1.2.1/conf
hadoopguru@hadoop2:~/hadoop-1.2.1/conf$ cd -
/home/hadoop
Sol: Starting a directory path with a slash (/) always resolves it from the root of the file tree (an absolute path).
Example:
I. hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ cd /home
hadoopguru@hadoop2:/home$ pwd
/home
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
hadoopguru@hadoop2:~/hadoop-1.2.1$ pwd
/home/hadoop/hadoop-1.2.1
3. If the user wants to open a directory that sits directly under the root directory, and the current working directory is the root directory:
Sol: As in the two scenarios above; in this case using the leading slash (/) makes no difference.
Example:
I. Without slash (/)
hadoopguru@hadoop2:/$ pwd
/
hadoopguru@hadoop2:/$ cd home
hadoopguru@hadoop2:/home$ pwd
/home
II. With slash (/)
hadoopguru@hadoop2:/$ pwd
/
hadoopguru@hadoop2:/$ cd /home
hadoopguru@hadoop2:/home$ pwd
/home
ls (List):
The ls command lists the files and directories in a directory (the current working directory by default).
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoop-1.2.1
1. ls -a:
To list all files, including hidden files (names beginning with a dot), use the -a option with ls (i.e. ls -a).
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls -a
. flume
.. hadoop-1.2.1
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
.bash_history hive-0.11.0-bin.tar.gz
.bash_logout .hivehistory
.bash_profile mahout-distribution-0.8.tar.gz
.bashrc metastore_db
.cache namenode
data .profile
datanode .ssh
derby.log .viminfo
2. ls -l:
Lists files in long format, showing permissions, link count, owner, group, size and modification time for each entry.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ ls -l
total 261824
-rw-rw-r-- 1 hadoop hadoop 60965956 Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4096 Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4096 Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4096 Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4096 Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 38096663 Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4096 Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 59859572 Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 109137498 Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 23:30 namenode
3. ls -h (human-readable sizes):
Combined with -l, the -h option prints file sizes in human-readable units (K, M, G). The two options may be combined or given separately, in any order; all of the following forms produce the same listing.
Example:
A. ls -lh :
hadoopguru@hadoop2:~$ ls -lh
total 256M
-rw-rw-r-- 1 hadoop hadoop 59M Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 24 01:22 aveo
drwxr-xr-x 6 hadoop hadoop 4.0K Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4.0K Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4.0K Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4.0K Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 37M Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4.0K Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 58M Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 105M Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4.0K Oct 6 23:30 namenode
B. ls -l -h :
hadoopguru@hadoop2:~$ ls -l -h
(output identical to the ls -lh listing above)
C. ls -hl :
hadoopguru@hadoop2:~$ ls -hl
(output identical to the ls -lh listing above)
D. ls -h -l :
hadoopguru@hadoop2:~$ ls -h -l
(output identical to the ls -lh listing above)
Make directory (mkdir):
A user can create a new directory with the mkdir command.
Example:
hadoopguru@hadoop2:~$ pwd
/home/hadoop
hadoopguru@hadoop2:~$ mkdir aveo_hadoop
hadoopguru@hadoop2:~$ ls -l
total 261828
-rw-rw-r-- 1 hadoop hadoop 60965956 Jul 1 09:41 apache-flume-1.4.0-bin.tar.gz.1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 24 01:22 aveo
drwxrwxr-x 2 hadoop hadoop 4096 Oct 27 10:10 aveo_hadoop
drwxr-xr-x 6 hadoop hadoop 4096 Oct 6 23:30 data
drwxrwxr-x 2 hadoop hadoop 4096 Oct 6 17:37 datanode
-rw-rw-r-- 1 hadoop hadoop 343 Oct 6 17:47 derby.log
drwxrwxr-x 7 hadoop hadoop 4096 Oct 6 17:55 flume
drwxr-xr-x 15 hadoop hadoop 4096 Oct 6 16:32 hadoop-1.2.1
-rw-rw-r-- 1 hadoop hadoop 38096663 Oct 6 12:37 hadoop-1.2.1-bin.tar.gz
drwxrwxr-x 8 hadoop hadoop 4096 Oct 6 17:44 hive
-rw-rw-r-- 1 hadoop hadoop 59859572 Oct 6 12:08 hive-0.11.0-bin.tar.gz
-rw-rw-r-- 1 hadoop hadoop 109137498 Oct 6 12:29 mahout-distribution-0.8.tar.gz
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 17:47 metastore_db
drwxrwxr-x 5 hadoop hadoop 4096 Oct 6 23:30 namenode
1. mkdir -p:
Creates nested directories in a single command, creating any missing parent directories along the given path.
Example:
hadoopguru@hadoop2:~$ mkdir -p aveo_hadoop/aveo_hadoop1/aveo_hadoop2
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoopguru@hadoop2:~$ cd aveo_hadoop
hadoopguru@hadoop2:~/aveo_hadoop$ ls
aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop$ cd aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ ls
aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ cd aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1/aveo_hadoop2$ pwd
/home/hadoop/aveo_hadoop/aveo_hadoop1/aveo_hadoop2
Remove directory (rmdir):
One can delete an existing directory using the rmdir command, but only if the directory is empty.
Example:
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
aveo_hadoop hive-0.11.0-bin.tar.gz
data mahout-distribution-0.8.tar.gz
datanode metastore_db
derby.log mydir
flume namenode
hadoop-1.2.1
hadoopguru@hadoop2:~$ rmdir mydir
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
rmdir -p:
Removes the directory at the end of the given path, and then each parent directory along the path, as long as it becomes empty.
Example:
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1
aveo hadoop-1.2.1-bin.tar.gz
aveo_hadoop hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoopguru@hadoop2:~$ cd aveo_hadoop
hadoopguru@hadoop2:~/aveo_hadoop$ ls
aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop$ cd aveo_hadoop1
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ ls
aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1$ cd aveo_hadoop2
hadoopguru@hadoop2:~/aveo_hadoop/aveo_hadoop1/aveo_hadoop2$ cd
hadoopguru@hadoop2:~$ rmdir -p aveo_hadoop/aveo_hadoop1/aveo_hadoop2/
hadoopguru@hadoop2:~$ ls
apache-flume-1.4.0-bin.tar.gz.1 hadoop-1.2.1-bin.tar.gz
aveo hive
data hive-0.11.0-bin.tar.gz
datanode mahout-distribution-0.8.tar.gz
derby.log metastore_db
flume namenode
hadoop-1.2.1
pushd: pushd adds a directory to the top of the directory stack and makes it the new current directory.
popd: popd removes the top directory from the stack and changes to the directory that then becomes the top.
Example:
pushd :
hadoopguru@hadoop2:~$ cd hadoop-1.2.1/
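The pushd invocations themselves are not shown in the original; a reconstructed sequence, consistent with the stack contents printed by popd below (the /bin, /lib and /hadoop directories are taken from those prompts), would be:
hadoopguru@hadoop2:~/hadoop-1.2.1$ pushd /bin
/bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/bin$ pushd /lib
/lib /bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/lib$ pushd /hadoop
/hadoop /lib /bin ~/hadoop-1.2.1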
popd :
hadoopguru@hadoop2:/hadoop$ popd
/lib /bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/lib$ popd
/bin ~/hadoop-1.2.1
hadoopguru@hadoop2:/bin$ popd
~/hadoop-1.2.1