First of all, the MapReduce framework is not programmed to work on the physical blocks of a file; it is designed to work on logical input splits. Each file you write into HDFS is divided into blocks of a fixed default size, whereas the boundaries of an input split depend on where the records it contains actually begin and end. As a result, a single record can span two blocks.
HDFS is designed to divide files into blocks of 128 MB each by default and to replicate the data before storing it; the default replication factor is three. These blocks are then distributed across different nodes in the Hadoop cluster.
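To make the block layout concrete, here is a minimal sketch, assuming the Hadoop client libraries are on the classpath and a hypothetical file at /data/input/sample.txt, that asks HDFS for a file's block size, replication factor, and the datanodes holding each block:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInspector {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file = new Path("/data/input/sample.txt");
        FileStatus status = fs.getFileStatus(file);

        System.out.println("Block size:  " + status.getBlockSize());   // 134217728 (128 MB) by default
        System.out.println("Replication: " + status.getReplication()); // 3 by default

        // Each BlockLocation describes one physical block: its byte range
        // within the file and the datanodes holding its replicas.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
    }
}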
HDFS has no regard for the content of those files: it cuts them purely by size. A file larger than one block inevitably spans several blocks, and a record that sits near a block boundary can start in Block A and finish in Block B.
To solve this problem, Hadoop uses a logical representation of the data stored in the file blocks, known as input splits. When a client submits a MapReduce job, the framework calculates the total number of input splits; each split records where the first record in a block starts and where the last record in the block finishes.
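That split calculation can be observed directly. The sketch below, assuming a hypothetical input directory /data/input, calls TextInputFormat.getSplits(), the same computation the framework performs at job submission, and lists each logical split with its file, starting byte offset, and length:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitInspector {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-inspector");
        // Hypothetical input directory; point this at real data in your cluster.
        FileInputFormat.addInputPath(job, new Path("/data/input"));

        // One logical split is handed to one map task.
        List<InputSplit> splits = new TextInputFormat().getSplits(job);
        System.out.println("Total input splits (= number of map tasks): " + splits.size());

        for (InputSplit split : splits) {
            FileSplit fileSplit = (FileSplit) split;
            System.out.printf("%s start=%d length=%d%n",
                    fileSplit.getPath(), fileSplit.getStart(), fileSplit.getLength());
        }
    }
}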
In cases where the last record in a block is incomplete, the input split includes location information for the next block and the byte offset of the data needed to complete the record.
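The sketch below illustrates this boundary rule with a plain local file rather than HDFS. The two rules in the comments mirror the policy that Hadoop's LineRecordReader applies to line-oriented text, but the class name and file name here are illustrative only:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Simplified, local-file illustration of the boundary rule: a reader whose
// split starts mid-file skips its first (possibly partial) line, and every
// reader may read past its split's end to finish the last line it started.
// Together these rules ensure each record is read exactly once, even when
// it straddles a split (block) boundary.
public class BoundaryAwareReader {

    static List<String> readSplit(String path, long start, long length) throws IOException {
        List<String> records = new ArrayList<>();
        try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
            file.seek(start);
            // Rule 1: if we did not start at byte 0, the first line belongs to
            // the previous split, so discard it.
            if (start != 0) {
                file.readLine();
            }
            long end = start + length;
            String line;
            // Rule 2: keep reading while the next line starts at or before the
            // split's end, even if it finishes beyond it.
            while (file.getFilePointer() <= end && (line = file.readLine()) != null) {
                records.add(line);
            }
        }
        return records;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical 300-byte text file read as two 150-byte "splits".
        String path = "sample.txt";
        System.out.println(readSplit(path, 0, 150));
        System.out.println(readSplit(path, 150, 150));
    }
}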