All posts tagged: MIT

MIT’s self-organizing laser revolutionizes 3D imaging of the brain’s protective barrier

At high power, laser light inside a multimode optical fiber is supposed to misbehave: the beam usually breaks into a noisy, scattered pattern as the light ricochets through many paths at once. But MIT researchers found a case where that expectation fails. Push the system close to its limit, line the beam up just right, and the optical mess can collapse into a tightly focused, self-organized “pencil beam.” That beam, the team reports, can do more than tidy up a physics problem: in experiments on a human model of the blood-brain barrier, it produced 3D images about 25 times faster than a standard approach while preserving similar cellular-level detail. The work appears in Nature Methods. “The common belief in the field is that if you crank up the power in this type of laser, the light will inevitably become chaotic. But we proved that this is not the case. We followed the evidence, embraced the uncertainty, and found a way to let the light organize itself into a novel solution for bioimaging,” …

Telepathy Machine? Here’s What MIT’s AlterEgo Wearable Actually Does

There’s a new wearable in town, and it looks like telepathy in action. It’s called AlterEgo, and its secret isn’t magic or mind-reading. Its makers say the hardware detects “silent speech,” which covers everything from mouthing words to vocalizing internally (think subtle internal movements). AlterEgo’s demo features projections of the “silent speech” detected by the wearable. (Image: AlterEgo) This means many tasks normally done with your voice could now be done silently, including conversation, live translation between languages, and controlling digital devices. The device itself is worn on your ears. There’s a privacy benefit to not having to vocalize sensitive information in public, but there are also privacy concerns that naturally arise whenever a computer comes between two people communicating.

How it works

AlterEgo, a spinoff company from the original MIT Media Lab project of the same name, says its technology builds on what researchers call “silent speech,” a method for interpreting the subtle neuromuscular signals involved in speech before words are spoken aloud. But AlterEgo’s system goes a step further. Its creators developed a …
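AlterEgo has not published its full pipeline, so the following is only a minimal sketch of the general “silent speech” idea: classifying short windows of neuromuscular (EMG-style) sensor signals into words. The channel count, vocabulary, features, and data here are invented for illustration; the company’s actual sensors and models may look nothing like this.

```python
# Minimal sketch of the generic "silent speech" idea: classify short
# windows of neuromuscular (EMG-style) sensor signals into words.
# All data is synthetic and the setup is hypothetical; AlterEgo's real
# sensors, features, and models are not public.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_CHANNELS = 4      # hypothetical number of skin-surface electrodes
WINDOW = 256        # samples per classification window
VOCAB = ["yes", "no", "call", "translate"]  # toy command vocabulary

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel features: RMS energy and zero-crossing rate."""
    rms = np.sqrt((window ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(window), axis=1) != 0).mean(axis=1)
    return np.concatenate([rms, zcr])

# Synthesize labeled windows: each "word" gets a different channel bias.
X, y = [], []
for label, word in enumerate(VOCAB):
    for _ in range(50):
        w = rng.normal(0, 1, (N_CHANNELS, WINDOW))
        w[label % N_CHANNELS] *= 3.0  # fake word-specific muscle activity
        X.append(features(w))
        y.append(label)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(np.array(X), np.array(y))

# At inference time, a new window of sensor data maps to a silent command.
test = rng.normal(0, 1, (N_CHANNELS, WINDOW))
test[2] *= 3.0
print(VOCAB[int(clf.predict([features(test)])[0])])  # likely "call"
```

The point of the sketch is the shape of the problem, not the model: short signal windows go in, discrete silent commands come out, and everything downstream (conversation, translation, device control) is built on that mapping.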

MIT’s New 3D Printer Can Print a Working Motor, Complete With Moving Parts

The tech behind 3D printing has come an extremely long way. The additive manufacturing technique, which generally builds objects one deposited layer at a time, has gone from relatively crude rapid prototyping in industrial settings to high-end fabrication of detailed parts in a growing list of fields, from medical implants to the construction of entire neighborhoods and rocket engines. Now, MIT researchers have devised new tech that can 3D print entire complex machines with moving parts in a matter of hours. As Gizmodo half-jokingly points out, it brings us one small step closer to being able to “steal a car” by downloading it from the internet, as the slogan of the much-derided early-2000s anti-piracy ad suggested. The team came up with a retrofitted 3D printer featuring four separate extruders that can deposit a wide variety of printable materials, including magnetic and conductive ones, by squeezing them through a nozzle. The novel 3D printing platform …
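The excerpt doesn’t explain how the printer schedules its four extruders, so here is only a rough sketch of the general multi-material idea: interleaving tool changes (T0–T3) within a layer, expressed as generic G-code. The material assignments, commands, and layout are ordinary slicer conventions invented for illustration, not the MIT team’s actual toolchain.

```python
# Rough illustration of multi-material printing: interleave four
# extruders (T0-T3) within each layer by emitting generic G-code.
# Material names and commands are generic slicer conventions, not
# the MIT team's actual toolchain.
MATERIALS = {0: "structural polymer", 1: "conductive trace",
             2: "magnetic composite", 3: "dissolvable support"}

def layer_gcode(layer: int, z_step: float, paths: dict) -> list:
    """Emit one layer: raise Z once, then switch tools per material."""
    lines = [f"G1 Z{layer * z_step:.2f} ; layer {layer}"]
    for tool, pts in paths.items():
        lines.append(f"T{tool} ; switch extruder -> {MATERIALS[tool]}")
        for x, y in pts:
            lines.append(f"G1 X{x:.1f} Y{y:.1f} E1.0")
    return lines

# Toy layer: a structural outline, a conductive trace, a magnetic patch.
paths = {0: [(0, 0), (10, 0), (10, 10), (0, 10)],
         1: [(2, 5), (8, 5)],
         2: [(5, 2), (5, 8)]}
print("\n".join(layer_gcode(layer=1, z_step=0.2, paths=paths)))
```

Mixing materials within a single layer, rather than printing parts separately and assembling them, is what makes printing a working motor with moving parts plausible in one job.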

MIT’s new fine-tuning method lets LLMs learn new skills without losing old ones

When enterprises fine-tune LLMs for new tasks, they risk breaking everything the models already know, forcing companies to maintain separate models for every skill. Researchers at MIT, the Improbable AI Lab, and ETH Zurich have developed a new technique that enables large language models to learn new skills and knowledge without forgetting their past capabilities. Their technique, called self-distillation fine-tuning (SDFT), allows models to learn directly from demonstrations and their own experimentation by leveraging the inherent in-context learning abilities of modern LLMs. Experiments show that SDFT consistently outperforms traditional supervised fine-tuning (SFT) while addressing the limitations of reinforcement learning algorithms. For enterprise applications, the method enables a single model to accumulate multiple skills over time without performance regression on earlier tasks. This offers a potential pathway for building AI agents that can adapt to dynamic business environments, gathering new proprietary knowledge and skills as needed without requiring expensive retraining cycles or losing their general reasoning abilities.

The challenge of continual learning

Once an LLM is trained and deployed, it remains static. It …
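The excerpt only describes SDFT at a high level, so here is a minimal sketch of the self-distillation idea as stated: the same model, given a demonstration in context, produces target distributions that are then distilled into its weights for the bare prompt. The model name, prompts, single-token distillation target, and hyperparameters are all placeholders, not the paper’s actual setup.

```python
# Minimal sketch of the self-distillation idea: the model conditioned on
# a demonstration (the "teacher" view) produces a target distribution,
# and the same model WITHOUT the demonstration in context (the "student"
# view) is trained to match it. Model, prompts, and hyperparameters are
# placeholders, not the paper's actual recipe.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

demo = "Q: Convert 3 km to miles.\nA: 3 km is about 1.86 miles.\n\n"
prompt = "Q: Convert 5 km to miles.\nA:"

teacher_ids = tok(demo + prompt, return_tensors="pt").input_ids
student_ids = tok(prompt, return_tensors="pt").input_ids

# Teacher view: same model, demonstration in context, no gradients.
with torch.no_grad():
    teacher_logits = model(teacher_ids).logits[:, -1, :]

# Student view: bare prompt; distill the teacher's next-token
# distribution into the weights via a KL loss.
opt.zero_grad()
student_logits = model(student_ids).logits[:, -1, :]
loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean")
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```

A real implementation would distill over whole response sequences and interleave many skills; the sketch only shows the core signal, the in-context model teaching the bare model, which is why the skill can be absorbed without overwriting unrelated behavior the way hard-label SFT can.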

MIT’s new ‘recursive’ framework lets LLMs process 10 million tokens without context rot

Recursive language models (RLMs) are an inference technique, developed by researchers at MIT CSAIL, that treats long prompts as an external environment for the model. Instead of forcing the entire prompt into the model’s context window, the framework allows the LLM to programmatically examine, decompose, and recursively call itself over snippets of the text. Rather than expanding context windows or summarizing old information, the MIT team reframes long-context reasoning as a systems problem: by letting models treat prompts as something they can inspect with code, recursive language models allow LLMs to reason over millions of tokens without retraining. This offers enterprises a practical path to long-horizon tasks like codebase analysis, legal review, and multi-step reasoning that routinely break today’s models. Because the framework is designed as a wrapper around existing models, it can serve as a drop-in replacement for applications that make direct calls to LLMs.

The LLM context problem

While frontier models are becoming increasingly sophisticated at reasoning, their ability to process massive amounts of information is not scaling at the same rate. This …
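The excerpt describes the framework only at a high level, so here is a toy sketch of the recursive idea: a prompt too large for the context window is decomposed into snippets, the model is called on each snippet, and the partial answers are recursively combined. The `llm` function is a hypothetical stand-in for any chat-completion call, and the fixed halving strategy is ours; in the real framework the model itself writes the decomposition code rather than following a hard-coded split.

```python
# Toy sketch of the recursive idea: decompose an oversized prompt into
# snippets, query the model on each, and recursively combine the partial
# answers. `llm` is a hypothetical stand-in for a chat-completion API;
# the real framework lets the model write the decomposition code itself.
MAX_CHARS = 8_000  # stand-in for the model's context budget

def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion call."""
    return f"<answer to {len(prompt)}-char prompt>"

def recursive_query(question: str, text: str) -> str:
    if len(text) <= MAX_CHARS:
        # Base case: the snippet fits in context, ask the model directly.
        return llm(f"{question}\n\nTEXT:\n{text}")
    # Recursive case: split, answer each half, then combine the answers.
    mid = len(text) // 2
    left = recursive_query(question, text[:mid])
    right = recursive_query(question, text[mid:])
    return llm(f"{question}\n\nCombine these partial answers:\n"
               f"1) {left}\n2) {right}")

# Usage: the "prompt" stays outside the context window as plain data.
huge_text = "lorem ipsum " * 2_000  # ~24k chars, exceeds the toy budget
print(recursive_query("Summarize the text.", huge_text))
```

Because the long prompt lives outside the model as ordinary data, the approach sidesteps context rot: each call sees only a snippet that fits comfortably, and the aggregation step is just another model call.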