{"id":326302,"date":"2026-04-29T13:53:56","date_gmt":"2026-04-29T08:23:56","guid":{"rendered":"https:\/\/ebiztoday.news\/?p=326302"},"modified":"2026-04-29T13:53:56","modified_gmt":"2026-04-29T08:23:56","slug":"enabling-privacy-preserving-ai-training-on-on-a-regular-basis-devices-mit-news","status":"publish","type":"post","link":"https:\/\/ebiztoday.news\/index.php\/2026\/04\/29\/enabling-privacy-preserving-ai-training-on-on-a-regular-basis-devices-mit-news\/","title":{"rendered":"Enabling privacy-preserving AI training on on a regular basis devices | MIT News"},"content":{"rendered":"<div>\n<p>A brand new method developed by MIT researchers can speed up a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.<\/p>\n<p>The MIT researchers boosted the efficiency of a way often known as federated learning, which involves a network of connected devices that work together to coach a shared AI model.<\/p>\n<p>In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data after which transfers model updates back to the server. Data are kept secure because they continue to be on each device.<\/p>\n<p>But not all devices within the network have enough capability, computational capability, and connectivity to store, train, and transfer the model backwards and forwards with the server in a timely manner. This causes delays that worsen training performance.<\/p>\n<p>The MIT researchers developed a way to beat these memory constraints and communication bottlenecks. Their method is designed to handle a heterogenous network of wireless devices with varied limitations.<\/p>\n<p>This latest approach could make it more feasible for AI models to be utilized in high-stakes applications with strict security and privacy standards, like health care and finance.<\/p>\n<p>\u201cThis work is about bringing AI to small devices where it just isn&#8217;t currently possible to run these sorts of powerful models. We stock these devices around with us in our each day lives. We want AI to find a way to run on these devices, not only on giant servers and GPUs, and this work is a crucial step toward enabling that,\u201d says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead writer of a <a href=\"https:\/\/arxiv.org\/pdf\/2510.03165\" target=\"_blank\">paper on this method<\/a>.<\/p>\n<p>Her co-authors include Anna Murphy \u201925, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from\u00a0Ecole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior writer Lalana Kagal, a principal research scientist within the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research might be presented on the IEEE International Joint Conference on Neural Networks.\u00a0<\/p>\n<p><strong>Reducing lag time<\/strong><\/p>\n<p>Many federated learning approaches assume all devices within the network have enough memory to coach the complete AI model, and stable connectivity to transmit updates back to the server quickly.<\/p>\n<p>But these assumptions fall short with a network of heterogenous devices, like smartwatches, wireless sensors, and mobile phones. 
"This lag time can slow down the training procedure and even cause it to fail," Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead required of each device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model's accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don't contribute as much to the training process, since outdated updates can hold the model back, slowing training and reducing accuracy.

"We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so that they can contribute their data to the model, but we don't want the more powerful devices in the network to stay idle for a long time and waste resources," Tenison says.
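The sketch below is one plausible reading of how these three ideas fit together, not FTTE's published algorithm. The buffer size K, the exponential staleness decay, and the random parameter mask are all assumptions made for illustration; in particular, the paper uses a search procedure, not random selection, to choose the parameter subset:

```python
# A hedged sketch of semi-asynchronous, staleness-weighted aggregation over a
# parameter subset, in the spirit of the three innovations described above.
# K, the decay rate, and the mask construction are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def make_mask(n_params, memory_budget):
    """Innovation 1: train only a parameter subset small enough for the most
    memory-constrained device (random here; FTTE searches for the subset
    that best preserves accuracy)."""
    mask = np.zeros(n_params, dtype=bool)
    mask[rng.choice(n_params, size=memory_budget, replace=False)] = True
    return mask

def staleness_weight(current_version, update_version, decay=0.5):
    """Innovation 3: updates computed against an older model version count
    less, so slow devices cannot drag the model backward."""
    return decay ** (current_version - update_version)

def aggregate(server_params, buffer, current_version, mask, lr=1.0):
    """Innovation 2: the server proceeds as soon as the buffer holds K updates
    instead of waiting for all devices. `buffer` holds (delta, version) pairs."""
    weights = np.array([staleness_weight(current_version, v) for _, v in buffer])
    weights /= weights.sum()
    avg_delta = sum(w * d for w, (d, _) in zip(weights, buffer))
    new_params = server_params.copy()
    new_params[mask] += lr * avg_delta[mask]  # only the subset is ever updated
    return new_params

# Example: K = 3 of 5 devices have reported; one update is two versions stale.
n = 10
mask = make_mask(n, memory_budget=4)
server = np.zeros(n)
buffer = [(rng.normal(size=n), 7),
          (rng.normal(size=n), 7),
          (rng.normal(size=n), 5)]  # stale: computed against version 5
server = aggregate(server, buffer, current_version=7, mask=mask)
```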
Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while achieving accuracy close to that of other techniques.

"Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy can be acceptable in some applications, especially since our method performs so much faster," she says.

FTTE also demonstrated effective scalability, delivering greater performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

"Not everyone has the latest Apple iPhone. In many developing countries, for instance, users may have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to those settings," she says.

In the future, the researchers want to study how their method could be used to improve the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.