GaussianNB does provide a way to set prior probabilities: the priors parameter. The documentation describes it as follows: "priors : array-like of shape (n_classes,). Prior probabilities of the classes. If specified, the priors are not adjusted according to the data." As an example, consider the following:
from sklearn.naive_bayes import GaussianNB
# minimal dataset
X = [[1, 0], [1, 0], [0, 1]]
y = [0, 0, 1]
# use empirical prior, learned from y
gauss = GaussianNB()
print(gauss.fit(X, y).predict([[1, 1]]))
print(gauss.class_prior_)
>>>[0]
>>>[ 0.66666667 0.33333333]
However, if you adjust the prior probabilities, you get a different prediction, which I believe is what you're looking for.
# use custom prior to make 1 more likely
gauss = GaussianNB(priors=[0.1, 0.9])
gauss.fit(X, y).predict([[1, 1]])
>>>array([1])
Note that class_prior_ itself is not an argument you pass to GaussianNB(); if you look at the documentation, you'll see it is an attribute, not a parameter. You can only access class_prior_ after fitting the GaussianNB model, whereas the priors themselves are set through the priors parameter when the estimator is constructed.
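For instance, after fitting with custom priors, the fitted model's class_prior_ attribute simply echoes the values you supplied rather than the empirical class frequencies. A minimal sketch, reusing the toy data from above:
from sklearn.naive_bayes import GaussianNB
X = [[1, 0], [1, 0], [0, 1]]
y = [0, 0, 1]
# priors is passed to the constructor...
gauss = GaussianNB(priors=[0.1, 0.9])
gauss.fit(X, y)
# ...while class_prior_ only exists after fit(); here it reflects the custom priors
print(gauss.class_prior_)  # prints the supplied priors, e.g. [0.1 0.9]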