New Northeastern University research by Francesco Restuccia, a professor of electrical and computer engineering, may bring us one step closer to true smart glasses.
For all the progress we have made in mixed reality — with commercial headsets available for purchase from makers like Samsung, Meta and Apple — technologists are still racing to produce a viable and affordable pair of smart glasses.
It’s easy to understand why. Traditional headsets are cumbersome, bulky and uncomfortable to wear for long periods of time. They are also not the most fashionable.
What if you could have all that tech in a traditional pair of glasses?
Many companies are experimenting in this space. Meta’s collaboration with Ray-Ban is perhaps the most well-known example, pairing its AI technology with Ray-Ban’s signature frames.
It’s still early days, and no company is offering a pair of shades as capable or feature-rich as Apple’s $3,500 Vision Pro or even Meta’s entry-level $300 Quest 3S. But new research by Francesco Restuccia, a Northeastern University professor of electrical and computer engineering, may bring us one step closer to getting there.
The key issue with current headsets is that they must process huge amounts of data to work properly, which means equipping them with bulky batteries. Alternatively, the processing could be offloaded to a separate computer wirelessly connected to the headset, but that remains a huge challenge with today’s wireless technologies.
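To get a sense of the scale of that challenge, here is a back-of-the-envelope calculation. The resolution, frame rate and throughput figures are illustrative assumptions, not numbers from the research:

```python
# Rough arithmetic (illustrative numbers, not from the article): streaming
# uncompressed frames from a headset to a nearby computer quickly outruns
# what today's wireless links can realistically carry.
width, height = 3840, 2160        # 4K resolution
bytes_per_pixel = 3               # 24-bit RGB
fps = 90                          # a typical headset refresh rate
bits_per_second = width * height * bytes_per_pixel * 8 * fps
print(f"Raw video stream: {bits_per_second / 1e9:.1f} Gbps")  # ~17.9 Gbps
# Real-world Wi-Fi throughput is typically on the order of 1 Gbps, so raw
# offloading would need a drastic reduction in the data sent over the air.
```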
He and a group of researchers at Northeastern, including doctoral students Foysal Haque and Mohammad Abdi, have developed a method that drastically decreases the communication cost of doing more of the AR/VR processing at nearby computers, reducing the need for a myriad of cables, batteries and convoluted setups.
To do this, the group created new AI technology based on deep neural networks executed directly at the physical layer of the wireless link, Restuccia explains. This way, the AI executes much faster than existing technologies allow while dramatically reducing the bandwidth needed to transfer the data.
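To get a feel for why moving computation into the communication pipeline saves bandwidth, here is a minimal sketch in the spirit of split computing: the first few layers of a deep neural network run on the device, and only their compact output is sent over the air. The network shape, resolution and numbers below are illustrative assumptions, not details from the PhyDNNs paper, which goes further by executing the network within the physical layer itself:

```python
# Illustrative split-computing sketch (NOT the PhyDNNs implementation):
# transmit a compact early-layer representation instead of raw frames.
import torch
import torch.nn as nn

# A toy "head" of a deep neural network that would run on the glasses.
# Channel counts and strides here are made up for illustration.
head = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 8, kernel_size=3, stride=4, padding=1),
)

frame = torch.randn(1, 3, 720, 1280)   # one raw RGB camera frame
with torch.no_grad():
    compact = head(frame)              # representation sent over the air

raw_bytes = frame.numel() * frame.element_size()
tx_bytes = compact.numel() * compact.element_size()
print(f"raw frame:       {raw_bytes / 1e6:.2f} MB")
print(f"transmitted rep: {tx_bytes / 1e6:.2f} MB "
      f"({raw_bytes / tx_bytes:.0f}x less data over the air)")
```

Shrinking what crosses the wireless link is what lets the heavy remainder of the processing run on a nearby computer instead of on the headset, which is the setup Restuccia’s group is targeting.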
“The technology we have developed will lay the foundation for better, faster and more realistic edge computing applications, including AR/VR, in the near future,” says Restuccia. “It’s not something that is going to happen today, but you need this foundational research to get there.”
The paper, “PhyDNNs: Bringing Deep Neural Networks to the Physical Layer,” will be presented in May at the IEEE International Conference on Computer Communications (INFOCOM) in London, England. Support for the research was provided in part by the National Science Foundation, the Office of Naval Research, and the Air Force Office of Scientific Research.