How can I view how many blocks a file has been broken into in a Hadoop file system?
4 Answers
hadoop fsck filetopath

I used the above command in CDH 5 and got the error below:

hadoop-hdfs/bin/hdfs: line 262: exec: : not found

Use the command below instead; it works fine:

hdfs fsck filetopath
– yoga
It is always a good idea to use `hdfs` instead of `hadoop`, as the `hadoop fsck` form is deprecated. Here is the command with `hdfs`; to find the details of a file named 'test.txt' in the root, you would write:

hdfs fsck /test.txt -files -blocks -locations
– user1795667
This should work:

hadoop fs -stat "%o" /path/to/file
– user1514507
`%o` is the block size, not the number of blocks, per http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-common/FileSystemShell.html#stat – Nickolay May 25 '15 at 02:34
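Since `%o` returns only the block size, one way to get the block count without parsing fsck output is to combine it with `%b` (the file size in bytes, per the same FileSystemShell stat docs) and do a ceiling division. A minimal sketch: the `hadoop fs -stat` calls are shown as comments because they need a live cluster, and the path and byte values below are hypothetical stand-ins.

```shell
# Sketch: derive the block count from file size and block size.
# On a live cluster you would fetch these values with (path is hypothetical):
#   size=$(hadoop fs -stat "%b" /path/to/file)
#   bsize=$(hadoop fs -stat "%o" /path/to/file)
# Hypothetical stand-in values: a 300 MB file with a 128 MB block size.
size=$((300 * 1024 * 1024))
bsize=$((128 * 1024 * 1024))
# Ceiling division: number of blocks the file occupies.
blocks=$(( (size + bsize - 1) / bsize ))
echo "$blocks"
```

With these stand-in numbers the script prints 3, matching what `hdfs fsck -files -blocks` would report as the block count for such a file.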