Hadoop is built around HDFS (the Hadoop Distributed File System) together with libraries that support MapReduce. HDFS is the core reason Hadoop scales: data is split into chunks (blocks) stored across multiple DataNodes, which allows MapReduce jobs to run in parallel over the data in HDFS.
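To illustrate the idea (not Hadoop's actual Java API), a minimal map/reduce word count can be sketched in plain Python, where each string stands in for one HDFS block processed by a mapper:

```python
from collections import defaultdict

# Hypothetical input: each string stands in for one HDFS block (chunk).
chunks = ["the quick brown fox", "the lazy dog", "the fox"]

def map_phase(chunk):
    # Emit (word, 1) pairs, as a MapReduce mapper would.
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Sum the counts for each key, as a reducer would.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# In Hadoop the map calls run in parallel, one per block; here we just loop.
all_pairs = [pair for chunk in chunks for pair in map_phase(chunk)]
word_counts = reduce_phase(all_pairs)
print(word_counts)  # → {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

Because each mapper only needs its own chunk, adding DataNodes adds parallelism, which is the scaling property described above.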
Cassandra, on the other hand, is similar to Hadoop in that it is distributed and scalable, but it does not use Hadoop's HDFS: each Cassandra node stores data on its own local disk, and rows are partitioned across nodes by hashing their keys. This hash-based, key-value model is what sets it apart from conventional data storage systems.
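As a rough sketch of that partitioning (the node names and MD5 hash are invented for illustration; Cassandra actually uses a token ring with the Murmur3 partitioner), hashing a key to pick its node looks like:

```python
import hashlib

# Hypothetical three-node cluster; real Cassandra assigns token ranges
# on a ring, but the placement principle is the same.
nodes = ["node-a", "node-b", "node-c"]

def node_for_key(key):
    # Hash the key deterministically and map it onto one of the nodes.
    digest = hashlib.md5(key.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

print(node_for_key("user:42"))
```

Because every node computes the same placement from the key alone, any node can route a read or write without a central master, which is a key part of Cassandra's scalability.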
Big Table:
This is what makes Cassandra different. The SSTable (Sorted String Table), a file format taken from Google's Bigtable design, is written to each node's local disk and stores data as key-value pairs, with an index of offsets alongside the data. SSTables are immutable, which means you cannot change data once it is written to disk. New key/value pairs are appended to the file. Updates and deletes are appends as well: an update appends a newer value for the key, and a delete appends the key with a special tombstone value. Duplicate keys are therefore allowed across SSTable files, and the most recent entry for a key wins at read time.
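A toy model of this append-only behavior (a Python list standing in for the on-disk file, and `TOMBSTONE` an invented sentinel rather than Cassandra's internal representation) might look like:

```python
# Sentinel marking a deletion; Cassandra stores a real tombstone record.
TOMBSTONE = object()

sstable = []  # append-only log of (key, value) entries; duplicates allowed

def put(key, value):
    sstable.append((key, value))      # inserts and updates are both appends

def delete(key):
    sstable.append((key, TOMBSTONE))  # a delete is an append too

def get(key):
    # The most recent entry for a key wins; scan from the end.
    for k, v in reversed(sstable):
        if k == key:
            return None if v is TOMBSTONE else v
    return None

put("user:1", "alice")
put("user:1", "alicia")   # update: the older entry stays in the file
delete("user:2")
print(get("user:1"))      # → alicia
print(len(sstable))       # → 3 entries, nothing rewritten in place
```

In real SSTables the entries are sorted by key, and a background compaction process periodically merges files to drop shadowed duplicates and expired tombstones.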
This append-only design is why Cassandra can be faster and more efficient than Hadoop for write-heavy, low-latency workloads: writes never rewrite data in place.