I am developing a model that generates text or images for a large-scale platform. During testing, I noticed that the model exhibits bias, producing output that favors one demographic group over another. How can I address this bias and ensure that the model generates fair and inclusive outputs during both training and inference?
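
For context, here is a minimal sketch of the kind of check I ran to spot the skew. The names `generate()` and `classify_demographic()` are hypothetical stand-ins for my actual model wrapper and attribute classifier, not real APIs:

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    # Placeholder: in my setup this calls the real text/image model.
    return random.choice(["group_a", "group_b"])

def classify_demographic(output: str) -> str:
    # Placeholder: in my setup this is a separate classifier that labels
    # which demographic group a generated output depicts.
    return output

def demographic_share(prompt: str, n_samples: int = 500) -> dict[str, float]:
    """Sample the model repeatedly and report the share of each group."""
    counts = Counter(classify_demographic(generate(prompt)) for _ in range(n_samples))
    return {group: count / n_samples for group, count in counts.items()}

if __name__ == "__main__":
    shares = demographic_share("a portrait of a software engineer")
    # Shares far from parity across groups are the kind of skew I'm seeing.
    print(shares)
```

Is this sampling-based check a reasonable way to quantify the problem, and what should I change in training (data, loss, fine-tuning) and at inference (prompting, filtering, re-ranking) to reduce the skew?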