What can we learn about cybersecurity from the Russian hacks?

On Inauguration Day, NBC News reported that the FBI—aided by the CIA, the National Security Agency, and the Treasury Department—was carrying out a counter-intelligence investigation to learn how, as NBC’s Ken Dilanian put it, “Russia’s efforts to manipulate public opinion in the U.S. presidential election…was paid for and whether any Americans were involved.” The month before, myriad news outlets reported Russia’s hacking of the Democratic National Committee and other political organizations to influence the election, with both the CIA and FBI agreeing about the source and aim of the hacks.

We asked two Northeastern faculty members and cybersecurity experts—associate professor Alina Oprea and professor Cristina Nita-Rotaru—to explain what these break-ins tell us about the state of U.S. cybersecurity, whether an impenetrable system is even possible, and how such attacks might be prevented in the future.

What do these break-ins tell us about the state of cybersecurity in the U.S.?

Oprea: Rather than telling us about the state of cybersecurity in the U.S. alone, these attacks provide a picture of the state of cybersecurity on a global scale. They demonstrate that attackers are becoming increasingly sophisticated in developing new ways to gain remote access to critical systems. For instance, using various sources of reconnaissance, such as social networks and news reports, attackers are able to craft so-called spear-phishing emails that impersonate legitimate senders and look credible to human users. In the recent Russian campaigns, the attackers sent emails that were very similar to the emails Google automatically sends when it detects suspicious activity in a user’s Gmail account. Users were asked to change their Gmail passwords and were redirected to a site controlled by the attackers.
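To make that mechanism concrete, here is a minimal sketch (not taken from the interview) of the kind of check a mail filter can apply: flag a password-reset message whose links do not resolve to the claimed sender’s real domain. The trusted-domain list and the sample message below are hypothetical.

```python
# Illustrative sketch: flag "reset your password" emails whose links point
# somewhere other than the claimed sender's domain. The allow-list and the
# sample message are hypothetical, not drawn from the actual campaign.
from urllib.parse import urlparse
import re

TRUSTED_DOMAINS = {"google.com", "accounts.google.com"}  # assumed allow-list

def suspicious_links(email_body: str) -> list[str]:
    """Return links whose host is neither a trusted domain nor a subdomain of one."""
    urls = re.findall(r"https?://[^\s\"'>]+", email_body)
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            flagged.append(url)
    return flagged

body = "Suspicious sign-in detected. Reset your password: http://gmail-security-check.example.ru/reset"
print(suspicious_links(body))  # -> ['http://gmail-security-check.example.ru/reset']
```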

The “watering-hole attack” is another infection vector hackers are increasingly adopting. Here, they silently inject lists of malicious commands, called “scripts,” or pieces of software called “exploits” that take advantage of a vulnerability, into legitimate websites. Similar to how predators in the natural world wait for their desired prey near watering holes, these attackers wait for their victims at “water-holed” websites.
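As a small illustration (an assumption-laden sketch, not a description of any actual compromised site), a site owner can watch for this kind of silent injection by auditing which domains a page’s scripts load from. The expected-domain set and the sample page below are hypothetical.

```python
# Illustrative sketch: detect <script> tags that load code from domains outside
# a site's expected set, the kind of silent change a watering-hole attack makes.
from html.parser import HTMLParser
from urllib.parse import urlparse

EXPECTED_SCRIPT_DOMAINS = {"example.org", "cdn.example.org"}  # assumed baseline

class ScriptAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.unexpected = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname or ""
        if host and host not in EXPECTED_SCRIPT_DOMAINS:
            self.unexpected.append(src)

page = '<html><script src="https://evil-exploit-kit.example.net/x.js"></script></html>'
auditor = ScriptAuditor()
auditor.feed(page)
print(auditor.unexpected)  # -> ['https://evil-exploit-kit.example.net/x.js']
```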

Why is it so difficult to protect against computer hacks and other cybercrimes?

Nita-Rotaru: One of the principles of computer and network security is that a system is only as secure as its weakest link. Most of the time, humans are the weakest link. This is not to say that computers do not have vulnerabilities, but even if all the technical problems are addressed, the human in the loop remains a crucial element. A simple example: we rely on identifying secrets and passwords that a person must remember and change properly, and many systems are broken because default passwords on accounts or devices are never changed.
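As a brief, hypothetical illustration of that last point (the account list and the set of factory defaults below are made up), an administrator can sweep for credentials that were never changed from their defaults:

```python
# Illustrative sketch: audit account credentials against a short list of
# well-known factory defaults. Accounts and default list are hypothetical.
KNOWN_DEFAULTS = {"admin", "password", "12345", "root", "changeme"}

accounts = [
    {"user": "router-admin", "password": "admin"},
    {"user": "alice", "password": "correct horse battery staple"},
]

for account in accounts:
    if account["password"].lower() in KNOWN_DEFAULTS:
        print(f"WARNING: {account['user']} is still using a default password")
```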

Cristina Nita-Rotaru, professor in the College of Computer and Information Science. Photo by Matthew Modoono/Northeastern University

Another comment you often hear about security is “Security is an add-on.” The beauty of computing systems and software is that the pace of innovation keeps up with the services we as customers like. Security is not one of those services; it is an add-on, and it is often perceived as a cost. Without legislation to enforce it, and without customers refusing to use services that are not secure, there is little incentive to provide it. It’s also not clear that users are necessarily ready to pay for security.

The American Enterprise Institute’s report, “An American Strategy for Cyberspace,” notes that cyberspace “permeates every element of modern societies.” How would you describe that all-encompassing network?

Nita-Rotaru: A joke in computer security is “If you want a secure system, lock it in a safe.” Today everything is connected: Even devices that you might not consider part of a system, such as appliances (refrigerators, coffee machines, etc.), are connected to the internet. We want them to be connected because then we can control them remotely, but that also makes them vulnerable.

Given the ubiquity of cyberspace, what can we do to prevent cyberattacks in the future?

Oprea: I believe that the challenges of securing cyberspace can only be addressed by collaborative efforts, including data sharing and joint research, among the government, states, public and private institutions, and academic researchers. The American Enterprise Institute’s report mentions the Cybersecurity Information Sharing Act, which creates a framework for organizations to share threat intelligence data. For instance, an organization that has been breached can share the details of the attack with other organizations, helping them increase their resilience against the same attack. I believe that more investment is needed in the near future to enable threat-sharing platforms to effectively disseminate breach information in a timely manner.
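Here is a minimal sketch of how shared breach data gets used in practice: one organization publishes indicators of compromise from an attack, and a peer checks its own logs against them. The indicators and log entries below are hypothetical, and real exchanges use richer standardized formats (for example, STIX).

```python
# Illustrative sketch: match local events against indicators of compromise
# (IOCs) shared by a breached peer. All values shown here are hypothetical.
shared_iocs = {
    "ips": {"203.0.113.45"},                 # attacker infrastructure (documentation range)
    "sender_domains": {"gmail-security-check.example.ru"},
}

local_events = [
    {"type": "connection", "dest_ip": "203.0.113.45"},
    {"type": "email", "sender_domain": "partner.example.org"},
]

for event in local_events:
    hit = (event.get("dest_ip") in shared_iocs["ips"]
           or event.get("sender_domain") in shared_iocs["sender_domains"])
    if hit:
        print(f"ALERT: event matches shared threat intelligence: {event}")
```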

Alina Oprea, associate professor in the College of Computer and Information Science. Photo by Adam Glanzman/Northeastern University

There is also huge potential for security researchers to explore new advances in machine learning and data analytics to create more intelligent defenses and predict adversaries’ next steps. As a longer-term goal, the U.S. should create additional infrastructure for cybersecurity research, in which academics get access to realistic datasets and testbeds provided by industry and governments, as well as realistic attack scenarios, transforming cybersecurity research into a more rigorous discipline.

Is it possible to design a truly impenetrable system? If not, why?

Oprea: Given all these factors—human elements and the existence of technical vulnerabilities in software—it is indeed impossible to design a truly impenetrable system. However, my view is that we should think about system security holistically, that is, as including multiple levels of defense. Each level can be defeated with certain resources by attackers and protected at some cost by defenders. In such a view, a password simply provides one level of defense, but the security of the whole system should not completely break if the password is compromised. I believe that machine-learning-based techniques applied to various security data, such as network traffic and system logs, have great potential to provide additional defenses.
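The layered view can be sketched in a few lines. In this hypothetical example (every helper function below is a placeholder, not a real authentication API), a stolen password alone is not enough to get in, because each check is an independent layer:

```python
# Illustrative sketch of defense in depth at login time. All helper functions
# are hypothetical placeholders standing in for real checks.
def password_ok(user: str, password: str) -> bool:
    return password == "hunter2"            # placeholder credential check

def second_factor_ok(user: str, code: str) -> bool:
    return code == "492713"                 # placeholder one-time-code check

def login_location_typical(user: str, ip: str) -> bool:
    return not ip.startswith("203.0.113.")  # placeholder anomaly check

def allow_login(user: str, password: str, code: str, ip: str) -> bool:
    layers = [
        password_ok(user, password),
        second_factor_ok(user, code),
        login_location_typical(user, ip),
    ]
    return all(layers)  # every layer must hold, so one stolen secret is not enough

print(allow_login("alice", "hunter2", "000000", "203.0.113.7"))  # -> False
```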

In the context of the Russian attacks, for example, it might have been impossible to prevent users from clicking on the spear-phishing emails they received. However, machine-learning techniques could have readily detected when a user’s machine established a connection to an external internet protocol (IP) address located in Russia and sent gigabytes of data (the DNC’s exfiltrated emails). The key challenges are to reduce false positives (legitimate activities that get flagged as anomalous), detect attacks early in their development, and enable rapid response to remediate a breach.
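A toy version of that kind of detector is sketched below. It is not the tooling used in the actual investigation; the traffic history, destinations, and threshold are invented, and it uses a simple statistical baseline rather than a trained model, but it shows the signal in question: an outbound transfer far above a host’s normal volume.

```python
# Illustrative sketch: flag outbound transfers whose volume is far above a
# host's historical baseline. All numbers and addresses are hypothetical.
import statistics

# Hypothetical history of daily outbound bytes for one workstation.
history = [120e6, 95e6, 140e6, 110e6, 130e6, 105e6, 125e6]
todays_transfers = [("198.51.100.23", 2.4e9), ("93.184.216.34", 80e6)]

mean = statistics.mean(history)
stdev = statistics.stdev(history)

for dest_ip, outbound_bytes in todays_transfers:
    z = (outbound_bytes - mean) / stdev
    # A high z-score marks an unusual volume; the threshold trades off early
    # detection against false positives from legitimate large uploads.
    if z > 3:
        print(f"ALERT: {outbound_bytes / 1e9:.1f} GB sent to {dest_ip} (z={z:.1f})")
```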