This brain-inspired hardware could one day make autonomous vehicles safer
The researchers tested their vision system component in several simulation scenarios — including autonomous vehicle driving and robotic arm operations — to study its capacity to predict motion and track objects.

Hop inside a robotaxi cruising the streets of San Francisco or Phoenix, and you may come away thinking these driverless cars operate on magic.
Not quite fairy dust, but close: the cars are backed by a suite of sophisticated technologies, including machine-learning software, high-definition cameras and state-of-the-art sensors. Even so, these vehicles are not foolproof; they can struggle to perceive other objects or people on the road.
A new biology-inspired component, developed in part by Northeastern University electrical and computer engineering professor Ravinder Dahiya, is designed to improve these vehicles’ vision and may make them even more responsive in the future.
In a newly published paper in Nature Communications, Dahiya and his co-authors outline new perception technologies that they designed and developed to mimic how a human retina analyzes visual changes.
The goal of the hardware is to reduce the delay between when an autonomous system sees an object and when it can analyze that object and take action, a particularly important issue for driverless vehicles, which operate in close proximity to pedestrians, Dahiya said.
Researchers say they were able to tackle this challenge by taking advantage of synaptic transistors, electrical devices designed to simulate the neural pathways of the brain. The work builds on previous research Dahiya has conducted that involved using these “neuromorphic” or brain-like sensing technologies.
“We call it neuromorphic behavior because that’s pretty much how processing takes place in our body,” Dahiya said.
The researchers’ system is designed to emulate a human’s ability to take in and make sense of visual information, even as that information is changing, such as when someone’s hair is blowing in the wind, or in the case of a robotaxi, a pedestrian walking in the car’s path.

This reflects humans’ ability to store information about a scene and update it only to account for changes over time, Dahiya said. These changes are what are known as “temporal motion cues,” the researchers write.
By focusing only on areas of change, an autonomous system lightens how much work it has to do in any given moment, which in turn decreases processing time.
“This research is about identifying those regions of interest,” said Dahiya. “We are doing that with the hardware, so that you are processing less data and enhancing speed.”
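The paper implements this idea in dedicated hardware, but the underlying principle of reprocessing only the pixels that change between frames can be sketched in ordinary software. The function name `changed_pixels`, the threshold value and the toy 8x8 frames below are illustrative assumptions, not details from the paper:

```python
def changed_pixels(prev_frame, curr_frame, threshold=30):
    """Return (row, col) coordinates of pixels whose intensity changed
    by more than `threshold` between two grayscale frames.

    Frames are lists of rows of integer intensities (0-255)."""
    return [
        (r, c)
        for r, row in enumerate(curr_frame)
        for c, value in enumerate(row)
        if abs(value - prev_frame[r][c]) > threshold
    ]

# Toy example: an 8x8 scene in which a bright 2x2 "object" shifts
# one pixel to the right between frames.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for r in range(2, 4):
    for c in range(2, 4):
        prev[r][c] = 255   # object at its old position
    for c in range(3, 5):
        curr[r][c] = 255   # object shifted right by one pixel

changed = changed_pixels(prev, curr)
# Only the pixels the object vacated or entered are flagged; the other
# 60 pixels of the frame can be skipped entirely.
print(len(changed), "of 64 pixels need reprocessing")
```

Running this prints `4 of 64 pixels need reprocessing`: downstream analysis only touches the regions of interest, which is the source of the speedup the researchers describe.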
The researchers tested their vision system in several simulation scenarios — including autonomous vehicle driving and robotic arm operations — to study its capacity to predict motion and track objects.
They found that, compared with more traditional image-processing systems, their hardware achieved a 400% increase in processing speed.
These synaptic transistors could one day be installed on a range of autonomous systems, aside from driverless vehicles, including smart glasses and industrial robot arms, which are often used in tasks that require correctly identifying objects or materials.
But it may take a while for industry partners to develop the infrastructure needed to support this form of visual learning, he said.
Companies like NVIDIA, the artificial intelligence chipmaker, for example, have developed chips that run specialized artificial-intelligence software, but they haven’t released similarly capable, analog hardware-based neuromorphic systems.
So for now this technology will likely remain in academia, he said.
“It’s going to take some extra effort to get the right kind of hardware that this approach needs,” he said.