Let me explain the difference between the two.
A failed task attempt is a task attempt that completed, but with an unexpected status value. A killed task attempt is a duplicate copy of a task attempt that was started as part of speculative execution.
Hadoop uses "speculative execution": the same task may be started on multiple nodes, the first attempt to finish wins, and the other copies are killed.
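If you would rather not see these killed duplicate attempts, speculative execution can be turned off per job. Here is a minimal sketch using the org.apache.hadoop.mapreduce.Job API (the class and job names are placeholders, not anything from a real job):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SpeculationConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "example-job"); // job name is a placeholder

        // Disable speculative (duplicate) attempts for map and reduce tasks,
        // so no attempt is killed as the "loser" of a speculative race.
        job.setMapSpeculativeExecution(false);
        job.setReduceSpeculativeExecution(false);

        // ... set mapper, reducer, input and output paths here, then submit.
    }
}
```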
Failed tasks are tasks that error out.
There are a few reasons Hadoop may decide to kill a task on its own:
a) The task does not report progress within the task timeout (mapreduce.task.timeout, 10 minutes by default); see the mapper sketch after this list for how a task can report progress.
b) The FairScheduler or CapacityScheduler needs the slot for another pool (FairScheduler) or queue (CapacityScheduler).
c) Speculative execution makes the task's results unnecessary, because a duplicate attempt has already completed elsewhere.
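For case (a), a long-running task can avoid being killed by periodically telling the framework it is still alive. Below is a minimal sketch of a mapper that reports progress during slow per-record work; slowComputation is a hypothetical placeholder for whatever expensive processing your task performs:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SlowMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // slowComputation() stands in for any work that could exceed
        // mapreduce.task.timeout (10 minutes by default) between records.
        for (String chunk : value.toString().split("\\s+")) {
            slowComputation(chunk);
            // Signal liveness so the framework does not kill this attempt
            // for failing to report progress within the timeout.
            context.progress();
        }
        context.write(new Text("processed"), new LongWritable(1));
    }

    private void slowComputation(String chunk) {
        // Placeholder for expensive work.
    }
}
```

Note that writing output or incrementing a counter also counts as progress; an explicit context.progress() call is only needed when the task goes a long time without doing either.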
Hope this clears up the difference between a failed task attempt and a killed task attempt.