I'm using Hadoop to process video with HVPI, an open-source video processing interface. However, in its InputFormat implementation the `isSplitable(JobContext context, Path file)` method returns `false`. By default this method returns `true`, but in the current implementation there is a reason for it to return `false`. If this method returns `false`, I get only one map task.

If I am not mistaken, Hadoop allocates a container for each input split, that is, the computational resources on some node of the cluster where a map task runs, and that node should preferably hold the data the task will process. So with `isSplitable` returning `false`, I end up with a single input split, hence a single map task, and that map task runs on only one cluster node.

The big question is: how can a single map task take advantage of the CPU resources of the whole cluster, rather than just a single container on a single node?
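For context, the pattern I'm describing is the standard way a non-splittable format is declared in the new MapReduce API, by overriding `FileInputFormat.isSplitable`. A minimal sketch (the class name `VideoInputFormat` and the key/value types are my own illustration, not necessarily what HVPI uses):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class VideoInputFormat extends FileInputFormat<LongWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Returning false forces one InputSplit per input file,
        // and therefore a single map task per video file.
        return false;
    }

    @Override
    public RecordReader<LongWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        // A real implementation would return a reader that decodes
        // frames from the (whole, unsplit) video file.
        throw new UnsupportedOperationException("sketch only");
    }
}
```

With this setup, parallelism across the cluster normally comes from having many input files (one map task each), not from splitting a single file.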