Preserve Privacy with Collaborative Machine Learning

Training a machine-learning model to perform a task requires thousands, millions, or even billions of examples, and these vast datasets pose a privacy challenge. Researchers from MIT and MIT startup DynamoFL are working with federated learning, and have developed a way to make it faster and more accurate.

Federated learning is a collaborative method for training a machine-learning model that keeps sensitive user data private. Users each train a copy of the model on their own data, on their own device, and send only the trained models to a central server. The server combines them into a better global model and sends that model back to all users.
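
To make the averaging step concrete, here is a minimal sketch of federated averaging in Python using NumPy. The names (`aggregate`, `client_weights`, `client_sizes`) are illustrative assumptions for this sketch, not details taken from FedLTN itself.

```python
# A minimal sketch of the server-side averaging step described above.
# Function and variable names are illustrative, not from the FedLTN paper.
import numpy as np

def aggregate(client_weights, client_sizes):
    """Combine per-client model weights into one global model.

    client_weights: list of dicts mapping layer name -> weight array,
                    one dict per client (trained locally on-device).
    client_sizes:   number of training examples each client used, so
                    clients with more data carry proportionally more weight.
    """
    total = sum(client_sizes)
    global_weights = {}
    for layer in client_weights[0]:
        # Weighted average of each layer across all clients.
        global_weights[layer] = sum(
            w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Each round: clients train locally, the server averages their models,
# and the averaged model is sent back to every client for the next round.
```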

It’s not perfect. Because the model must travel back and forth between users and the server dozens or even hundreds of times, moving it incurs high communication costs. Each user’s data doesn’t necessarily follow the same statistical patterns, which hurts the performance of the combined model. And because the final model is produced by averaging, it isn’t personalized for any individual user.

The researchers developed a technique that tackles these challenges of federated learning. Their solution boosts the accuracy of the combined model, shrinks its size, and ensures each user receives a model personalized to their own data, improving performance.

The system, called FedLTN, relies on an idea in machine learning known as the lottery ticket hypothesis: within very large neural network models, there exist much smaller subnetworks that can achieve the same performance. Finding one of these subnetworks is like winning a lottery ticket. (LTN stands for “lottery ticket network.”)
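
As a toy illustration of that idea, the sketch below treats a “ticket” as a binary mask that keeps only the largest-magnitude weights in a layer. The function name `make_ticket` and the 20 percent keep fraction are illustrative assumptions, not details from the FedLTN system.

```python
# Toy illustration of the lottery ticket idea: a "ticket" is a binary
# mask selecting a small fraction of a layer's weights. This version
# keeps the largest-magnitude weights; names are illustrative only.
import numpy as np

def make_ticket(weights, keep_fraction=0.2):
    """Return a 0/1 mask selecting the top `keep_fraction` of weights by magnitude."""
    flat = np.abs(weights).ravel()
    threshold = np.quantile(flat, 1.0 - keep_fraction)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

weights = np.random.randn(256, 128)
mask = make_ticket(weights, keep_fraction=0.2)
subnetwork = weights * mask   # the much smaller "winning ticket" subnetwork
print(f"kept {mask.mean():.0%} of the weights")
```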

Finding a winning lottery ticket network is no small task, so the team used iterative pruning: as long as the model’s accuracy stays above a set threshold, they remove nodes and the connections between them, then test the leaner neural network to check that its accuracy still clears the threshold. The team’s results to date are impressive.
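
A rough sketch of that pruning loop in Python might look like the following. Here `evaluate` and `prune_smallest` are hypothetical stand-ins for whatever evaluation and pruning routines are used in practice, and the 90 percent accuracy threshold is an arbitrary placeholder.

```python
# Sketch of the iterative pruning loop described above: prune a little,
# re-check accuracy, and stop once accuracy would fall below the threshold.
# `evaluate` and `prune_smallest` are hypothetical stand-ins, not FedLTN APIs.
def iterative_prune(model, evaluate, prune_smallest,
                    accuracy_threshold=0.90, prune_step=0.1):
    """Repeatedly remove the smallest-magnitude connections while the
    pruned model's accuracy stays above `accuracy_threshold`."""
    while True:
        candidate = prune_smallest(model, fraction=prune_step)
        if evaluate(candidate) < accuracy_threshold:
            return model           # keep the last model that met the bar
        model = candidate          # accept the leaner network and repeat
```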
