Keeping Secrets Safe While Learning Together
In the world of technology, vision-language models (VLMs) are getting remarkably good at understanding both images and text. But when these models are trained in a federated setting, where many users collaborate on a shared model without handing over their raw data, two big problems come up:
- Data Heterogeneity: Each user's data follows a different distribution (it is non-IID), which can drag down the shared model's performance (see the sketch after this list).
- Privacy Concerns: Sending learned prompts (which act like trainable hints for the model) to the server in plaintext can leak private information about a user's local data.
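To make "heterogeneity" concrete: federated learning experiments often simulate non-IID users by partitioning a dataset's class labels with a Dirichlet distribution, where a smaller concentration parameter means more skewed clients. The sketch below is a standard simulation recipe, not part of PPFPL itself; `dirichlet_partition` and `alpha` are illustrative names.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with label skew.

    Smaller alpha -> more heterogeneous (non-IID) clients.
    A common simulation trick in federated learning papers.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Draw per-client proportions for this class from a Dirichlet prior.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 10 clients over a toy 10-class label vector.
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=10, alpha=0.3)
```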
The Solution: PPFPL
To tackle these issues, a new method called Privacy-Preserving Personalized Federated Prompt Learning (PPFPL) has been developed. Instead of fine-tuning the whole model, each user learns lightweight prompts, and the server combines them with an aggregation algorithm that weighs the importance of different users' prompts. This way, each user extracts useful information from both images and text while the overall model's performance stays strong.
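The summary doesn't spell out PPFPL's exact weighting rule, so the following is a generic sketch of importance-weighted prompt aggregation; `aggregate_prompts` and the size-based weights are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def aggregate_prompts(client_prompts, weights):
    """Importance-weighted average of per-client prompt tensors.

    client_prompts: list of (prompt_len, dim) arrays, one per client.
    weights: raw importance scores (e.g., local data size or quality),
             normalized here so they sum to 1.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(client_prompts)       # (num_clients, prompt_len, dim)
    return np.tensordot(w, stacked, axes=1)  # (prompt_len, dim)

# Example: three clients, each with a 4-token, 8-dimensional prompt.
prompts = [np.random.randn(4, 8) for _ in range(3)]
global_prompt = aggregate_prompts(prompts, weights=[100, 250, 50])
```

Because the result is just a weighted average, a client whose prompts matter more (larger weight) pulls the global prompt closer to its own, which is one simple way to cope with heterogeneous data.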
Privacy Protection
Privacy is the other big concern. PPFPL protects sensitive information by splitting each user's prompt updates into shares held by two non-colluding servers. Neither server ever sees the complete data, so even semi-honest servers, which follow the protocol but might try to peek, learn nothing useful on their own.
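The summary doesn't name the exact cryptographic scheme, but splitting data between two non-colluding servers is the hallmark of additive secret sharing. Here is a minimal sketch under that assumption: a float update is encoded in fixed-point, split into two shares that each look uniformly random on their own, and recovered only when both shares are combined. `MOD`, `SCALE`, and the function names are illustrative, not PPFPL's actual protocol.

```python
import numpy as np

MOD = 2**31 - 1   # toy ring size; real deployments choose this carefully
SCALE = 2**16     # fixed-point scale for encoding float updates

def encode(update):
    """Quantize a float update into the integer ring (fixed-point)."""
    return np.round(update * SCALE).astype(np.int64) % MOD

def decode(encoded):
    """Map ring elements back to floats (values assumed small)."""
    signed = np.where(encoded > MOD // 2, encoded - MOD, encoded)
    return signed / SCALE

def share(encoded, rng):
    """Split into two additive shares; each alone is uniformly random."""
    mask = rng.integers(0, MOD, size=encoded.shape, dtype=np.int64)
    return (encoded - mask) % MOD, mask  # one share per server

def reconstruct(share_a, share_b):
    """Only the sum of both shares reveals the true update."""
    return (share_a + share_b) % MOD

rng = np.random.default_rng(0)
update = np.random.randn(4, 8) * 0.01  # a client's prompt update
a, b = share(encode(update), rng)
assert np.allclose(decode(reconstruct(a, b)), update, atol=1 / SCALE)
```

Because additive shares can be summed independently, each server could also aggregate the shares it holds across users, so that only the combined update, never any individual user's, is reconstructed.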
Test Results
Experiments show that PPFPL holds up well even when users' data distributions differ sharply. It:
- Improves the accuracy of local tasks by up to 9.12%.
- Boosts the model's performance on new tasks by an average of 4.32% compared to standard methods.
This shows that PPFPL is a promising approach for using VLMs in a federated setting while keeping user data private.