Incomplete Multimodal Federated Learning via Masking and Contrasting Prototypes
In real-world scenarios, random modality missingness in multimodal federated learning (mFL) poses a significant challenge and degrades the inference performance of the global model. However, existing mFL methods are largely limited to simple settings in which each participating client holds either a single modality or a complete set of modalities. These methods employ modality-specific encoders on each client and train modality fusion modules on the server, leading to se
