Below is what happens when a MapReduce job jar is submitted:
Once the job is submitted, the Resource Manager assigns a new application ID to it and passes that ID back to the client.
The client then copies the job jar and other job resources to HDFS. In effect, the client submits the job through the Resource Manager.
The Resource Manager, as the master daemon, allocates the resources the job needs to run and keeps track of cluster utilization. It also launches an Application Master for each job, which is responsible for coordinating that job's execution.
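The three steps so far (application ID handed out, resources copied to HDFS, Application Master launched) can be sketched as a toy model. To be clear, none of these class or method names are real Hadoop/YARN APIs, and the staging path is illustrative; this only mirrors the sequence of the handshake described above:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

/** Toy model of the submission handshake described above (not real Hadoop APIs). */
public class SubmissionFlow {
    static class ResourceManager {
        private final AtomicInteger nextId = new AtomicInteger(1);
        final Map<Integer, String> running = new HashMap<>();

        // Step 1: the RM hands out a fresh application ID to the client.
        int newApplicationId() { return nextId.getAndIncrement(); }

        // Step 3: the RM accepts the submission and starts an Application
        // Master to coordinate this job (modeled here as a map entry).
        void submitApplication(int appId, String hdfsStagingDir) {
            running.put(appId, hdfsStagingDir);
        }
    }

    static class Client {
        int submit(ResourceManager rm, String jar) {
            int appId = rm.newApplicationId();
            // Step 2: the client copies the jar and other job resources to a
            // staging directory in HDFS (path is illustrative).
            String stagingDir = "/tmp/hadoop-yarn/staging/app_" + appId + "/" + jar;
            rm.submitApplication(appId, stagingDir);
            return appId;
        }
    }

    public static void main(String[] args) {
        ResourceManager rm = new ResourceManager();
        int id = new Client().submit(rm, "wordcount.jar");
        System.out.println("application id = " + id);
        System.out.println("staged at " + rm.running.get(id));
    }
}
```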
The Application Master gets metadata from the NameNode to determine where the blocks for the input (the input splits) are located, then asks the respective NodeManagers to launch the tasks and supervises their execution.
The Application Master creates one map task object per input split, plus a configurable number of reduce task objects.
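The split-to-map-task relationship can be illustrated with a simplified version of the default split calculation. This sketch assumes split size equals block size; real Hadoop's `FileInputFormat` also honors min/max split-size settings, but the 1.1 slack factor below is the actual `SPLIT_SLOP` it uses to avoid creating a tiny trailing split:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitPlanner {
    // Same slack factor FileInputFormat uses: a remainder up to 10% larger
    // than a split is folded into one final split rather than two.
    static final double SPLIT_SLOP = 1.1;

    static class Split {
        final long offset, length;
        Split(long offset, long length) { this.offset = offset; this.length = length; }
    }

    static List<Split> computeSplits(long fileLen, long blockSize) {
        List<Split> splits = new ArrayList<>();
        long splitSize = blockSize; // simplifying assumption: one split per HDFS block
        long remaining = fileLen;
        while ((double) remaining / splitSize > SPLIT_SLOP) {
            splits.add(new Split(fileLen - remaining, splitSize));
            remaining -= splitSize;
        }
        if (remaining > 0) {
            splits.add(new Split(fileLen - remaining, remaining)); // trailing split
        }
        return splits;
    }

    public static void main(String[] args) {
        long blockSize = 128L * 1024 * 1024;          // 128 MiB HDFS block
        long fileLen   = 300L * 1024 * 1024;          // 300 MiB input file
        List<Split> splits = computeSplits(fileLen, blockSize);
        // One map task object is created per split.
        System.out.println("map tasks = " + splits.size());  // prints: map tasks = 3
        for (Split s : splits) {
            System.out.println("split @" + s.offset + " len=" + s.length);
        }
    }
}
```

A 300 MiB file with 128 MiB blocks yields three splits (128 + 128 + 44 MiB), hence three map tasks; a 140 MiB file yields only one, because the remainder falls within the slack factor.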