Joint CS-ECE Assistant Professor Paris Smaragdis has been named an IEEE fellow, “a distinction reserved for select IEEE members whose extraordinary accomplishments in any of the IEEE fields of interest are deemed fitting of this prestigious grade elevation,” according to the IEEE website.
Smaragdis noted that IEEE fellows are selected based on a combination of recommendations from current members, the quality and originality of their research, and the service and leadership they’ve provided to their research communities.
Smaragdis has spent the majority of his career working on some of the most challenging problems in audio processing. The core of his work lies in making machines understand sound. One of his early breakthroughs was in source separation, the science of isolating distinct sounds in a recording. In a recording studio, for instance, each member of a band may be simultaneously playing a different line on a different instrument, and a sound engineer needs to process each instrument independently before remixing them into a coherent track. What if the engineer wants to isolate just the bass line to make it richer or deeper? What if they need to isolate the singer’s voice?
The prevailing solution when Smaragdis started working on the problem was the instantaneous mixing model, which relied on a number of basic assumptions. The model worked flawlessly, but only if the mixed sounds had no propagation delays, no echoes, and no filters applied. It was a perfect solution, but only to what Smaragdis notes is a very sterile problem.
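As an illustration only (not Smaragdis's own code), the instantaneous mixing model can be sketched in a few lines of Python with NumPy: each microphone records a weighted sum of the sources with no delays, echoes, or filtering, so separation reduces to inverting a mixing matrix. The signals and matrix below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sources: a 100 Hz sine standing in for a bass line,
# and uniform noise standing in for a second instrument.
t = np.linspace(0, 1, 8000, endpoint=False)
s = np.vstack([np.sin(2 * np.pi * 100 * t),
               rng.uniform(-1, 1, t.size)])

# Instantaneous mixing: each microphone hears a weighted sum of the
# sources with no propagation delay, echoes, or filtering.
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
x = A @ s  # the two observed mixtures

# Under these sterile assumptions, separation amounts to inverting
# the mixing matrix (which ICA must estimate blindly in practice).
s_hat = np.linalg.inv(A) @ x
print(np.allclose(s_hat, s))  # True: sources recovered exactly
```

The catch, as the article notes, is that real rooms add delays and echoes, so a single constant matrix like `A` no longer describes the mixture.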
Smaragdis’s solution: frequency-domain independent component analysis. His idea was to rely less on heuristics, preset rules telling computers how to interpret sound based on how humans hear it, and more on machine learning, algorithms that let computers learn from the data on their own. This solution set Smaragdis on the path of integrating machine learning and sound.
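The frequency-domain idea can be sketched as follows (an illustration under simplified assumptions, not Smaragdis's actual algorithm): a delayed mixture that breaks the instantaneous model in the time domain becomes, bin by bin in the short-time Fourier transform, an approximately instantaneous mixture again, since a delay turns into a per-frequency phase factor. Here the per-bin unmixing matrices are computed from the known mixing delays for clarity; a real frequency-domain ICA estimates them blindly from the data.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(1)
fs, n = 8000, 8000
s1 = np.sin(2 * np.pi * 220 * np.arange(n) / fs)  # toy "instrument"
s2 = rng.uniform(-1, 1, n)                         # toy broadband source

# Convolutive mixing: each source reaches the far microphone delayed
# and attenuated, which breaks the instantaneous model in time.
d = 8  # delay in samples
x1 = s1 + 0.6 * np.roll(s2, d)
x2 = 0.5 * np.roll(s1, d) + s2

# In the STFT domain a delay is (approximately) a phase factor, so each
# frequency bin k obeys its own instantaneous mixing matrix A(f_k).
nper = 256
f, _, X1 = stft(x1, fs, nperseg=nper)
_, _, X2 = stft(x2, fs, nperseg=nper)

S1 = np.empty_like(X1)
S2 = np.empty_like(X2)
for k, fk in enumerate(f):
    phase = np.exp(-2j * np.pi * fk * d / fs)
    A = np.array([[1.0, 0.6 * phase],
                  [0.5 * phase, 1.0]])
    W = np.linalg.inv(A)  # per-bin unmixing; ICA would estimate this blindly
    S1[k], S2[k] = W @ np.vstack([X1[k], X2[k]])

_, s1_hat = istft(S1, fs, nperseg=nper)  # separated source, back in time
_, s2_hat = istft(S2, fs, nperseg=nper)
```

The recovery is only approximate (the phase-factor model holds narrowband, per bin), and a blind version must also resolve which separated component in each bin belongs to which source, the well-known permutation problem that makes the real algorithm much harder than this sketch.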
“This set of algorithms was the first solution to the problem that was usable in real life, as opposed to theoretical, perfect environments,” Smaragdis said.
It’s not just his opinion, either: Smaragdis was named one of MIT Technology Review’s top innovators under 35 in 2006 for his work on making machines that listen to and learn from the world around them.
Along with streamlining music production, Smaragdis envisions his research eventually helping engineers make more efficient hearing aids that can sort speech from background noise, algorithms that can tell when a football game is getting exciting, or even software that interacts better with people by analyzing their voices, just a few applications in a world of possibilities.
“Making computers that understand their world around them is an incredibly hard problem,” Smaragdis noted in his research statement. “Fortunately it is also fascinating. My research explores the computational foundations for constructing systems that can understand sound (e.g., speech or music) the same way we do ... this results in constructing actual machines with hearing abilities such as TVs that can find when the football game gets interesting, stethoscopes that detect and analyze heartbeats, music players that automatically DJ for you and smart traffic lights that can hear accidents that happen in their intersection.”