New wearable uses light and AI to turn silent throat movements into audible speech
Speech usually seems simple: air moves, vocal cords vibrate, sound comes out. But the act of speaking leaves behind another trace, one that never reaches the ear. Tiny muscles in the throat tense and shift, and the skin stretches by fractions so small they are easy to miss. Those motions, a team of researchers found, may carry enough information to rebuild spoken words, even when no sound is made at all.

That is the idea behind a new wearable system developed by researchers at POSTECH (Pohang University of Science and Technology). Led by Professor Sung-Min Park and Dr. Sunguk Hong, the team created a neck-mounted device that reads subtle throat movements with light, then uses artificial intelligence to decode those patterns and turn them back into speech in the user’s own synthesized voice.

The concept targets a stubborn problem: in loud places, clear communication breaks down fast. Factories, construction zones, battlefields, and even some clinical settings can make spoken words unreliable. Traditional silent speech interfaces have tried to solve that by measuring …
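To make the decoding idea concrete, here is a minimal toy sketch, not the POSTECH system or its model: it simulates word-specific throat-movement waveforms, summarizes each as a few simple features, and matches an incoming signal to the nearest stored template. Every signal, feature, and word label here is hypothetical.

```python
# Toy sketch (hypothetical, not the POSTECH pipeline): classify silent
# "words" from simulated 1-D throat-strain signals by nearest-centroid
# matching on hand-crafted features.
import math

def features(signal):
    """Summarize a 1-D strain signal as (mean, energy, zero-crossings)."""
    n = len(signal)
    mean = sum(signal) / n
    energy = sum(x * x for x in signal) / n
    zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (mean, energy, zero_crossings)

def make_signal(freq, n=64):
    """Simulated optical reading: a sine wave at a word-specific frequency."""
    return [math.sin(2 * math.pi * freq * i / n) for i in range(n)]

# "Calibration" step: store one feature template per silent word.
templates = {word: features(make_signal(f))
             for word, f in [("yes", 2), ("no", 5), ("stop", 9)]}

def decode(signal):
    """Return the word whose template is closest in feature space."""
    f = features(signal)
    return min(templates, key=lambda w: sum((a - b) ** 2
                                            for a, b in zip(f, templates[w])))

print(decode(make_signal(5)))  # a 5-cycle signal matches the "no" template
```

A real system would replace the hand-crafted features and template matching with a learned model trained on optical sensor recordings, but the overall shape, signal in, decoded word out, is the same.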







