
Dimensionality Reduction: Simplifying Complex Data

Dimensionality reduction is an unsupervised learning method whose purpose is to reduce the number of input features effectively.

Now, what do I mean by effective?

This technique lets us optimize features so that they have the maximum impact on the accuracy of mapping inputs to their desired outputs.

Sometimes we’re dealing with a massive dataset, and we need to optimize its features to avoid unnecessary computational cost when we train our model.

We use this process to simplify complex data, but not at the cost of reducing the performance of our trained model.

We can also use it in combination with other machine learning techniques, like clustering and classification.

Dimensionality Reduction Methods

Principal Component Analysis (PCA)

Principal component analysis, or PCA for short, is one of the most popular methods for this task.

This method performs a linear mapping of the data to a lower-dimensional space, chosen so that the variance of the data in that low-dimensional representation is maximized.

In other words, it identifies the directions in the data with the most variation and projects the data onto a new set of axes aligned with those directions.

Therefore, we end up with a new set of features that capture most of the variation in the data, but with fewer dimensions.
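As a minimal sketch of this idea, here is PCA applied to a small dataset. The use of scikit-learn and the Iris dataset are my illustrative assumptions, not something the article prescribes:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Iris: 150 samples, 4 features (illustrative example dataset)
X = load_iris().data

# Project the data onto the 2 directions of greatest variance
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                          # (150, 2)
print(pca.explained_variance_ratio_.sum())      # fraction of variance kept
```

For Iris, the first two principal components retain well over 90% of the variance, which is exactly the trade-off described above: far fewer dimensions, most of the variation preserved.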

Linear Discriminant Analysis (LDA)

Linear discriminant analysis, or LDA, is another popular choice. It is similar to PCA, but we use it for supervised learning: instead of maximizing variance, it finds the directions in the data that best separate the different classes.
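The key difference from PCA is that LDA needs the class labels. A minimal sketch, again assuming scikit-learn and the Iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Labels are required: LDA uses them to find class-separating directions
X, y = load_iris(return_X_y=True)

# n_components is at most n_classes - 1 (Iris has 3 classes)
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_lda.shape)  # (150, 2)
```

Note that `fit_transform` takes `y` here, whereas PCA’s does not; that is the supervised/unsupervised distinction in code form.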

t-Distributed Stochastic Neighbor Embedding (t-SNE)

Another method is t-distributed stochastic neighbor embedding, or t-SNE for short, which we use for non-linear dimensionality reduction.

We use it to visualize high-dimensional data in a two- or three-dimensional space.
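As a sketch of such a visualization embedding, here is t-SNE reducing 64-dimensional data to 2 dimensions. The scikit-learn API, the digits dataset, and the subsampling to keep the run fast are all my assumptions for illustration:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

# Handwritten digits: each sample is a 64-dimensional vector (8x8 image).
# Subsample to 500 points so the (iterative, non-linear) embedding runs quickly.
X = load_digits().data[:500]

# Embed into 2D; random_state pins the stochastic optimization for reproducibility
tsne = TSNE(n_components=2, random_state=0)
X_embedded = tsne.fit_transform(X)

print(X_embedded.shape)  # (500, 2)
```

Unlike PCA, the result is only a visualization: t-SNE has no `transform` for new points and its axes carry no meaning beyond neighborhood structure.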


In conclusion, dimensionality reduction is a powerful unsupervised learning technique for optimizing our datasets before training.

I hope this article helped you gain a better understanding of this technique, and perhaps even inspired you to learn more.
