Hadoop 2.x
Commands
bin/hdfs dfs -ls /
bin/hdfs dfs -lsr /   (deprecated in 2.x; prefer bin/hdfs dfs -ls -R /)
bin/hdfs dfs -mkdir /data01
bin/hdfs dfs -put abc.txt /dir1
The -put operation does not overwrite an existing file: if the destination already exists, the command fails instead of replacing it.
If "/dir1" does not exist, abc.txt is uploaded as a single file named "/dir1", not into a directory.
bin/hdfs dfs -get /dir1/abc.txt ~/
bin/hdfs dfs -text /dir1/abc.txt
bin/hdfs dfs -rm /dir1/abc.txt
bin/hdfs dfs -rmr /dir1   (deprecated in 2.x; prefer bin/hdfs dfs -rm -r /dir1)
bin/hdfs dfs -cp /dir1/abc.txt /dir2/abc.txt
bin/hdfs dfs -help
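-help also accepts a command name, which is handy for checking which flags a subcommand supports (for example, that put takes -f):
bin/hdfs dfs -help put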
Hadoop's default user (home) directory is /user/<username>, for example /user/yushan.
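If it is missing, relative paths will fail, so create it up front (a sketch; -p also creates /user if needed, and on most clusters this has to be run by the HDFS superuser):
bin/hdfs dfs -mkdir -p /user/yushan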
check safe mode
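The getconf tool below only reads configuration files; the live safe-mode state is reported by dfsadmin:
bin/hdfs dfsadmin -safemode get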
[yushan@hadoop-yarn hadoop-2.6.4]$ bin/hdfs getconf
hdfs getconf is utility for getting configuration information from the config file.
hadoop getconf
[-namenodes] gets list of namenodes in the cluster.
[-secondaryNameNodes] gets list of secondary namenodes in the cluster.
[-backupNodes] gets list of backup nodes in the cluster.
[-includeFile] gets the include file path that defines the datanodes that can join the cluster.
[-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned.
[-nnRpcAddresses] gets the namenode rpc addresses
[-confKey [key]] gets a specific key from the configuration
An address of 0.0.0.0 does not literally mean localhost; it means the daemon binds to all network interfaces of the local machine.
bin/hdfs getconf -namenodes
bin/hdfs getconf -confKey hadoop.tmp.dir
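Here -namenodes prints the namenode host(s) (hadoop-yarn in this setup) and -confKey prints the resolved value of a key. Assuming hadoop.tmp.dir is not overridden in core-site.xml, the second command prints the core-default.xml default:
/tmp/hadoop-${user.name}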
Hadoop 1.x
Commands
hadoop fs -ls /
hadoop fs -lsr /
hadoop fs -mkdir /dir1
hadoop fs -put abc.txt /dir1
The -put operation does not overwrite an existing file: if the destination already exists, the command fails instead of replacing it.
If "/dir1" does not exist, abc.txt is uploaded as a single file named "/dir1", not into a directory.
hadoop fs -get /dir1/abc.txt ~/
hadoop fs -text /dir1/abc.txt
hadoop fs -rm /dir1/abc.txt
hadoop fs -rmr /dir1
hadoop fs -cp /dir1/abc.txt /dir2/abc.txt
hadoop fs -help
hadoop default user directory: /user/username
So if the default directory cannot be accessed, it most likely does not exist yet and has to be created manually.
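A minimal sketch, reusing yushan as the username from earlier (in 1.x, -mkdir creates parent directories automatically; run it as a user with write access to /user):
hadoop fs -mkdir /user/yushan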