This post was written by assistant professor Wil Robertson.
We are in the midst of an explosive proliferation of computing devices. Computing was once confined to massive, expensive mainframes tended by teams of specialists; since then, technological and economic forces have pushed ever smaller and more capable devices into our work and social lives. Now, it's commonplace to have a laptop, a smartphone, a tablet, and any number of peripheral supporting devices, each of which is likely to be networked and orders of magnitude more powerful than those dusty old mainframes. And there's no reason to believe that these devices won't continue to evolve, appearing in ordinary objects where we would never have expected them.
One need look no further than Glass, Google’s project to integrate computing resources into eyeglasses, to see the way the world is moving. High-profile examples aside, however, there’s a widespread movement underway to embed more powerful CPUs and network interfaces into just about every device you could imagine; think printers, security cameras, watches, and environmental controls.
With all of this convenience and power come hidden dangers. As security researchers in the Northeastern Systems Security Lab, we have long known that assuring there is no hidden malicious functionality lurking in the hardware or software of traditional desktops and servers is a difficult problem. But over the years we've developed ways to mitigate this threat through monitoring, sandboxing, and other means. The concern now is how to deal with new classes of embedded devices that can be easily transported and installed behind otherwise hardened security perimeters; that question is the focus of a new $1.2M DARPA-funded project we are conducting.
Let’s consider a concrete scenario. Imagine that your IT department has installed a new set of wireless routers in your building. But, unbeknownst to them, the router firmware — i.e., the embedded code that implements the router’s functionality — contains a hidden trigger that activates after enough data has passed through the device. The trigger sends a beacon out over the corporate network to a group of hackers; because the connection originates from inside the organization, it’s allowed to traverse the company firewall. The hackers use this connection to remotely control the device, essentially giving them a foothold inside the organization that they can use to capture data passing through the device or to probe other machines on the network for exploitable vulnerabilities. Our challenge in this project is this: Can we identify the presence of this malicious behavior before the device has been deployed to the target?
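A hidden trigger like the one in this scenario can be structurally very simple, which is part of what makes it hard to spot. The sketch below is purely hypothetical: the class, the threshold, and the "beacon" are all invented for illustration (the beacon merely records that it fired rather than touching the network), and the threshold is kept tiny so the example runs instantly.

```python
# Hypothetical sketch of a counter-based trigger hidden inside
# packet-forwarding code. All names and values are invented; a real
# implant might wait for gigabytes of traffic before phoning home.

TRIGGER_BYTES = 10 * 1024  # fire after 10 KB forwarded (tiny, for the demo)

class Router:
    def __init__(self):
        self.bytes_forwarded = 0
        self.beacon_sent = False

    def forward(self, packet: bytes) -> bytes:
        # Normal functionality: forward the packet unchanged.
        self.bytes_forwarded += len(packet)
        # Hidden behavior: once enough traffic has passed, phone home.
        if self.bytes_forwarded >= TRIGGER_BYTES and not self.beacon_sent:
            self._send_beacon()
        return packet

    def _send_beacon(self):
        # A real implant would open an outbound connection here;
        # we only record the event for illustration.
        self.beacon_sent = True

router = Router()
for _ in range(4):
    router.forward(b"\x00" * 4096)  # simulate 4 KB packets
print(router.beacon_sent)  # True: 16 KB forwarded, trigger has fired
```

Nothing about the device looks wrong until the counter crosses its threshold, which is exactly why analyzing a device's behavior during a short factory test or manual inspection can miss the implant entirely.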
To tackle this problem, we’re using a set of techniques referred to as program analysis, which — simply put — provides ways of discovering facts about how a program behaves in response to input from its environment. Program analysis has a long history, but our project is focusing on developing analyses specific to rooting out hidden malicious behaviors. One example of this is dynamic analysis, which consists of running a device in an instrumented environment that allows us to automatically observe who the device contacts, what data it sends, and much more. In some ways, the process is akin to putting a specimen under a microscope and probing it to see how it responds.
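To make the idea of an instrumented environment concrete, here is a deliberately simplified sketch: we hook one API (socket connections in Python) so that every outbound contact attempted by the code under observation gets logged, whether or not the connection succeeds. This is an illustration of the general interposition idea only, not the project's actual tooling — a real dynamic analysis system would instrument the entire firmware, not a single language-level API, and the "suspicious" code and addresses below are invented.

```python
import socket

# Minimal sketch of dynamic analysis via API interposition: hook
# socket.connect so every outbound connection attempt made by the
# observed code is recorded before the real call proceeds.

observed_connections = []
_original_connect = socket.socket.connect

def logging_connect(self, address):
    observed_connections.append(address)  # record who is being contacted
    return _original_connect(self, address)

socket.socket.connect = logging_connect

# Stand-in for firmware under analysis: it tries to beacon out to a
# hypothetical controller (a TEST-NET-2 address, so it cannot succeed).
def suspicious_code():
    s = socket.socket()
    s.settimeout(0.5)  # fail fast; we only care about the attempt
    try:
        s.connect(("198.51.100.7", 4444))
    except OSError:
        pass  # the beacon fails, but the attempt was still observed
    finally:
        s.close()

suspicious_code()
print(observed_connections)  # [('198.51.100.7', 4444)]
```

The key property is that the observation happens outside the observed code: the "firmware" behaves as it normally would, while the environment around it records who it contacts and what it sends.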
However, discovering hidden malicious behavior is no easy task. Hackers have innumerable ways to evade detection, from requiring extremely complex trigger conditions before the malware executes its malicious actions, to exploiting subtle differences between a real environment and the analysis environment so that the code can decide when to hide its behavior. Much of our research has dealt with similar problems in the traditional malware world, and we anticipate similar challenges in this context.
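To illustrate the second evasion tactic, here is a hypothetical sketch of environment detection. The specific heuristics are illustrative inventions (real malware draws on many such signals: timing skew, hardware fingerprints, missing peripherals); the point is only the overall pattern of checking for an analysis environment before deciding how to behave.

```python
import os
import time

# Hypothetical sketch of analysis-environment detection. Both
# heuristics below are invented for illustration.

def looks_like_analysis_environment() -> bool:
    checks = []
    # Emulated or stripped-down analysis machines often expose
    # very few CPU cores.
    cores = os.cpu_count()
    checks.append(cores is not None and cores < 2)
    # Heavy instrumentation can introduce timing skew: a short sleep
    # that takes far longer than requested is a warning sign.
    start = time.monotonic()
    time.sleep(0.01)
    checks.append(time.monotonic() - start > 0.5)
    return any(checks)

def maybe_act_maliciously() -> str:
    if looks_like_analysis_environment():
        return "benign"    # hide: behave normally under the microscope
    return "malicious"     # only misbehave on what looks like real hardware
```

Countering this pattern is a core difficulty of dynamic analysis: the analysis environment has to be indistinguishable from the real one along every dimension the attacker might think to probe.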
Despite the challenges, we’re very excited to be solving emerging problems, staying one step ahead of the attackers, and producing research that will result in a safer, more secure Internet for everyone.