Hi,
RDD in Spark stands for Resilient Distributed Dataset, which is considered the backbone of Spark and is one of its fundamental data structures. It is also a schema-less structure, so it can handle both structured and unstructured data.
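For example, here is a minimal sketch (run in spark-shell, where the SparkContext `sc` is already defined; the values are made up) showing that the same RDD API holds raw text lines and structured key/value pairs without any declared schema:

```scala
// Unstructured data: plain lines of text
val lines = sc.parallelize(Seq("spark is fast", "rdds are resilient"))

// Structured data: (key, value) tuples, still with no declared schema
val pairs = sc.parallelize(Seq(("alice", 30), ("bob", 25)))

lines.count()                        // 2 -- same API for both
pairs.reduceByKey(_ + _).collect()   // Array((alice,30), (bob,25))
```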
In Spark, almost everything we do revolves around RDDs: when you read data into Spark, it is read into an RDD; when you transform the data, you apply transformations to the existing RDD and create a new one; and at the end you perform an action on the RDD and save the data it holds to persistent storage.
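As a rough sketch of that read -> transform -> action flow (again in spark-shell; the input and output paths here are just placeholders):

```scala
// Read: the file is loaded into an RDD of lines
val logs = sc.textFile("hdfs:///data/input/logs.txt")

// Transform: each transformation returns a NEW RDD; the old one is unchanged
val errors = logs.filter(_.contains("ERROR"))
val counts = errors.map(line => (line.split(" ")(0), 1)).reduceByKey(_ + _)

// Action: this triggers execution and writes the result to persistent storage
counts.saveAsTextFile("hdfs:///data/output/error_counts")
```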