There are two factors that determine the no. of Mappers.
(1) No. of Mappers per slave
(2) No. of Mappers per MapReduce job
(1) No. of Mappers per slave: There is no exact formula. It depends on how many cores and how much memory each slave has. Generally, one Mapper should get 1 to 1.5 cores. So if a slave has 15 cores, it can run about 10 Mappers per node, and with 100 data nodes in the Hadoop cluster, roughly 1000 Mappers can run across the cluster.
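The sizing arithmetic above can be sketched as a small helper. The method name and the 1.5-cores-per-Mapper ratio are illustrative assumptions, not a Hadoop API:

```java
public class MapperCapacity {
    // Rough rule of thumb from the text: each Mapper gets ~1 to 1.5 cores,
    // so a node's Mapper capacity is cores divided by cores-per-Mapper.
    static int mappersPerNode(int coresPerNode, double coresPerMapper) {
        return (int) (coresPerNode / coresPerMapper);
    }

    public static void main(String[] args) {
        int perNode = mappersPerNode(15, 1.5);  // 15-core slave -> 10 Mappers
        int cluster = perNode * 100;            // 100 data nodes -> 1000 Mappers
        System.out.println(perNode + " Mappers/node, " + cluster + " Mappers/cluster");
    }
}
```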
(2) No. of Mappers per MapReduce job: The number of Mappers depends on the number of InputSplits generated by the InputFormat (its getSplits method). If you have a 640 MB file and the data block size is 128 MB, then 5 Mappers run per MapReduce job.
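The split count above is just a ceiling division of file size by block size. A minimal sketch (the helper name is an assumption, not part of Hadoop):

```java
public class SplitCount {
    // Number of InputSplits (and hence Mappers) for a single file:
    // ceil(fileSize / blockSize), done here with integer arithmetic.
    static long numMappers(long fileSizeMB, long blockSizeMB) {
        return (fileSizeMB + blockSizeMB - 1) / blockSizeMB;
    }

    public static void main(String[] args) {
        // 640 MB file with 128 MB blocks -> 5 Mappers, as in the text
        System.out.println(numMappers(640, 128));
    }
}
```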
Reducers:
There are two factors that determine the no. of Reducers.
(1) No. of Reducers per slave
(2) No. of Reducers per MapReduce job
(1) No. of Reducers per slave: It is the same as the no. of Mappers per slave; it depends on the cores and memory available on each slave.
(2) No. of Reducers per MapReduce job:
The right no. of Reducers can be set with one of the following formulas:
0.95 * no. of nodes * mapred.tasktracker.reduce.tasks.maximum
or
1.75 * no. of nodes * mapred.tasktracker.reduce.tasks.maximum
With 0.95, all of the Reducers can launch immediately and start transferring map outputs as the Mappers finish.
With 1.75, the faster nodes finish their first round of Reducers and launch a second wave, which gives better load balancing.
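The two formulas can be checked with a small calculation. The node count (100) and the mapred.tasktracker.reduce.tasks.maximum value (2) below are assumed example numbers, not defaults from the text:

```java
public class ReducerCount {
    // Reducer count formula from the text:
    // factor * numNodes * mapred.tasktracker.reduce.tasks.maximum
    static int reducers(double factor, int numNodes, int maxReducePerNode) {
        return (int) (factor * numNodes * maxReducePerNode);
    }

    public static void main(String[] args) {
        int numNodes = 100;       // assumed cluster size
        int maxPerNode = 2;       // assumed mapred.tasktracker.reduce.tasks.maximum
        System.out.println(reducers(0.95, numNodes, maxPerNode)); // single-wave setting
        System.out.println(reducers(1.75, numNodes, maxPerNode)); // two-wave setting
    }
}
```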