There is no fixed rule for choosing between Linear Discriminant Analysis and Naive Bayes; in practice you fit both models and compare their accuracy, as sketched below. On some datasets LDA performs better, while on others Naive Bayes gives the better results.
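A minimal sketch of that comparison, assuming scikit-learn and the iris dataset purely for illustration, cross-validates both models on the same data:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Fit both candidates on identical folds and compare mean accuracy.
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:<12} mean accuracy: {scores.mean():.3f}")
```

Whichever model scores higher on held-out data is the one to keep for that particular dataset.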
Disadvantages of Naive Bayes:
- Not well suited to continuous features in its basic form: the features first have to be made discrete, for example by binning or encoding them, and techniques such as one-hot encoding can introduce the dummy-variable trap and multicollinearity, which hurts the results (a common workaround is sketched after this list).
- Naive Bayes assumes that all the features are independent of each other, which is rarely the case with real-life data.
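One common workaround for the continuous-feature issue, sketched below under the assumption that scikit-learn and the iris data are used purely for illustration, is to bin the continuous values into ordinal categories before fitting a categorical Naive Bayes model; scikit-learn's Gaussian variant, which models each feature with a per-class normal distribution, avoids the encoding step altogether.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB, GaussianNB
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Option 1: discretize each continuous feature into ordinal bins,
# then fit a Naive Bayes model meant for categorical inputs.
binner = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
Xb_train = binner.fit_transform(X_train)
Xb_test = binner.transform(X_test)
cat_nb = CategoricalNB().fit(Xb_train, y_train)
print("Binned + CategoricalNB:", cat_nb.score(Xb_test, y_test))

# Option 2: leave the features continuous and use the Gaussian variant.
gauss_nb = GaussianNB().fit(X_train, y_train)
print("GaussianNB:            ", gauss_nb.score(X_test, y_test))
```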
Advantages of Naive Bayes:
- Performs better on small datasets, provided the features are uncorrelated and independent of each other.
- Works well with categorical features.
- Can be used for multi-class classification tasks (see the short sketch after this list).
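As a quick illustration of the last two points, the sketch below (scikit-learn assumed, with a made-up toy dataset) fits a categorical Naive Bayes model on string-encoded features for a problem with four class labels:

```python
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

# Made-up categorical data: (weather, wind) -> activity, four class labels.
X_raw = [["sunny", "calm"], ["sunny", "windy"], ["rainy", "calm"],
         ["rainy", "windy"], ["snowy", "calm"], ["snowy", "windy"]]
y = ["hike", "sail", "read", "read", "ski", "ski"]

# Map the string categories to integer codes, then fit the model.
encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

clf = CategoricalNB().fit(X, y)
print(clf.predict(encoder.transform([["sunny", "windy"]])))  # expected: ['sail']
```

No special treatment is needed for the extra classes: Naive Bayes estimates per-class priors and per-class feature likelihoods, so any number of labels is handled the same way.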
Advantages of LDA:
- LDA reduces the number of features by projecting the data onto the directions that maximize between-class separation relative to within-class variance.
- It counters the curse of dimensionality by mapping high-dimensional data into a low-dimensional feature space, as the sketch after this list shows.
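A minimal sketch of LDA as a dimensionality-reduction step, again assuming scikit-learn and the iris data purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Project the 4 original features onto the (at most n_classes - 1 = 2)
# discriminant axes that best separate the 3 classes.
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)
print(X.shape, "->", X_reduced.shape)  # (150, 4) -> (150, 2)
```

The reduced data can then be passed to any downstream classifier, which is where the lower-dimensional feature space pays off.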
Disadvantages of LDA:
- Assumes that the features are normally distributed.
- Does not give good results on imbalanced datasets.
- Not suitable for non-linear problems, as the sketch after this list illustrates.
- Prone to overfitting, especially when the number of features is large relative to the number of training samples.
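The non-linearity limitation is easy to see on synthetic data. In the sketch below (scikit-learn assumed), the two classes form concentric circles, so no straight decision boundary can separate them and LDA's cross-validated accuracy stays close to the 50% chance level:

```python
from sklearn.datasets import make_circles
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Two concentric circles: a linearly inseparable two-class problem.
X, y = make_circles(n_samples=500, factor=0.5, noise=0.05, random_state=0)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"LDA mean accuracy on concentric circles: {scores.mean():.2f}")
```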