Hi @akhtar,
By default, the block size is 64MB in a Hadoop 1.x cluster (128MB in Hadoop 2.x and later). But you can override the block size for an individual file at upload time using the dfs.blocksize property (the older name dfs.block.size is deprecated but still works), as shown below.
$ hadoop fs -D dfs.blocksize=32000000 -put abc.txt /
Here the block size is specified in bytes, and it must be a multiple of 512 (the checksum chunk size), otherwise the upload will fail. In Hadoop 2.x and later you can also use a size suffix instead of raw bytes, e.g. -D dfs.blocksize=32m.
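
If you want to verify the block size that was actually applied, one way (assuming the file landed at /abc.txt as in the example above) is to query it with hadoop fs -stat, where the %o format specifier prints the file's block size in bytes:

$ hadoop fs -stat %o /abc.txt
32000000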