To prevent data leakage in generative models while performing cross-validation on multi-modal datasets, consider the following strategies:
- Separate Data Splits: Split at the sample level so that every modality of a given sample (e.g., an image, its caption, and its audio track) lands in the same fold; if each modality is split independently, information about a sample can leak from one modality's training fold into another's validation fold.
- Cross-Validation Per Modality: When folds are evaluated per modality, keep each fold's validation set disjoint from the training sets of all modalities, since paired modalities carry information about the same underlying samples.
- Ensure No Overlap: When training the generative model or producing synthetic data, exclude (or mask) validation samples entirely, so no validation information enters the generative process.
- Data Augmentation & Normalization: Apply the same augmentation and normalization pipeline to training and validation data, but fit any statistics (means, scalers, tokenizers) on the training split of each fold only; fitting them on the full dataset leaks validation statistics into training.
Here is a minimal sketch you can adapt. It assumes paired, per-sample feature arrays for each modality and uses scikit-learn's KFold; the array names and the commented train_generative_model call are illustrative placeholders, not a specific library API:
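```python
import numpy as np
from sklearn.model_selection import KFold

# Placeholder multi-modal dataset: one row per sample in each modality
# (in practice these would be your image, text, and audio features).
rng = np.random.default_rng(0)
n_samples = 100
images = rng.normal(size=(n_samples, 64))
texts = rng.normal(size=(n_samples, 32))
audio = rng.normal(size=(n_samples, 16))

kf = KFold(n_splits=5, shuffle=True, random_state=42)

# Split by a shared sample index so every modality of a given sample
# stays in the same fold.
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(n_samples))):
    # Guard against leakage: the two index sets must be disjoint.
    assert set(train_idx).isdisjoint(val_idx)

    train = {"image": images[train_idx], "text": texts[train_idx], "audio": audio[train_idx]}
    val = {"image": images[val_idx], "text": texts[val_idx], "audio": audio[val_idx]}

    # Fit normalization statistics on the training split only, then apply
    # them unchanged to validation, so validation statistics never leak
    # into training.
    mu = train["image"].mean(axis=0)
    sigma = train["image"].std(axis=0) + 1e-8
    train["image"] = (train["image"] - mu) / sigma
    val["image"] = (val["image"] - mu) / sigma

    # train_generative_model(train)  # hypothetical trainer: sees train only;
    # val stays untouched until evaluation.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")
```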

The code above illustrates the following key points:
- Separate Data Splits: A single sample-level split is applied to every modality, so folds never overlap across modalities.
- Disjoint Training and Validation Sets: The assertion in each fold verifies that no sample index appears in both sets.
- Cross-Validation Per Modality: Each modality is evaluated on data unseen during training in that fold, which guards against overfitting.
- Validation-Only Usage: Normalization statistics are fit on the training split alone, so validation data contributes nothing to training, including any synthetic-data generation.
By following these practices, you can prevent data leakage in generative models while performing cross-validation on multi-modal datasets.