Dimensionality reduction is used in machine learning to obtain better features for a classification or regression task. It can be understood as mapping the feature space from a high dimension to a lower dimension in such a way that the lower-dimensional representation retains most of the useful information.
Consider the following scenario: you have a list of 100 TV shows and 10,000 people, and you know whether each person likes or dislikes each of the 100 shows. So for each person you have a binary vector of length 100 [position i is 0 if that person dislikes the i-th TV show, 1 if they like it].
You could perform your machine learning task directly on these vectors. Alternatively, you could choose 5 TV show genres and, using the data you already have, determine whether each person likes or dislikes each genre as a whole, reducing your data from a vector of length 100 to a vector of length 5 [position i is 1 if the person likes genre i].
Because most people tend to like TV shows within their preferred genres, the vector of length 5 can be regarded as a good summary of the vector of length 100.
However, it will not be an exact representation because some people may dislike all TV shows in a genre except one.
The argument is that the smaller vector carries most of the information in the larger one while taking up far less space and making downstream computation much faster.
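Here is a minimal sketch of this genre-aggregation idea in NumPy. The random like/dislike data, the round-robin show-to-genre assignment, and the "likes a genre if they like more than half of its shows" rule are all illustrative assumptions, not part of the example above.

```python
import numpy as np

n_people, n_shows, n_genres = 10_000, 100, 5

# Synthetic binary data: likes[p, i] == 1 if person p likes show i.
rng = np.random.default_rng(0)
likes = rng.integers(0, 2, size=(n_people, n_shows))

# Hypothetical assignment of each of the 100 shows to one of 5 genres
# (round-robin here just so every genre is non-empty).
genre_of_show = np.arange(n_shows) % n_genres

# Reduce each length-100 vector to a length-5 vector: position g is 1
# if the person likes more than half of the shows in genre g.
reduced = np.zeros((n_people, n_genres), dtype=int)
for g in range(n_genres):
    in_genre = genre_of_show == g
    reduced[:, g] = (likes[:, in_genre].mean(axis=1) > 0.5).astype(int)

print(likes.shape)    # (10000, 100)
print(reduced.shape)  # (10000, 5)
```

Any reasonable aggregation rule would work in place of the majority vote; the point is only that the 5 columns of `reduced` stand in for the 100 columns of `likes` at a fraction of the size.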