Hi @siva,
Hadoop HDFS splits large files into smaller chunks known as blocks. A block is the minimum amount of data that HDFS can read or write, and HDFS stores each file as a sequence of blocks. An input split, on the other hand, represents the portion of data that an individual mapper processes. Thus the number of map tasks is equal to the number of input splits, not the number of blocks (although by default a split is the same size as a block, so the two counts often match).
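
If you want to verify this yourself, here is a minimal Java sketch (my own illustration, not from any official example; it assumes the Hadoop client libraries are on the classpath, HDFS is reachable, args[0] is a path to a file in HDFS, and SplitCount is just a name I picked). It prints the configured block size and the number of input splits, which is the number of map tasks the job would launch:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // dfs.blocksize is the HDFS block size; 128 MB is the default in
        // Hadoop 2.x and later. getLongBytes handles suffixed values like "128m".
        long blockSize = conf.getLongBytes("dfs.blocksize", 134217728L);
        System.out.println("HDFS block size: " + blockSize + " bytes");

        // Build a job over the given input path and ask the input format
        // how it would divide the data into splits.
        Job job = Job.getInstance(conf, "split-count");
        FileInputFormat.addInputPath(job, new Path(args[0]));
        List<InputSplit> splits = new TextInputFormat().getSplits(job);

        // One map task is launched per input split.
        System.out.println("Input splits (= map tasks): " + splits.size());
    }
}

Note that the split size defaults to the block size, but you can change it with the mapreduce.input.fileinputformat.split.minsize and mapreduce.input.fileinputformat.split.maxsize properties, in which case the number of mappers will no longer match the number of blocks.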