The most concerning revelation to come out of the security industry over the past couple of years isn’t the Mirai botnet, the hacks of Verizon, Yahoo! (before the acquisition), or the Democratic National Committee (DNC), or even the infamous Jeep hack. Instead, it came from security company FireEye’s 2016 Mandiant M-Trends Report, which revealed that the average time between compromise and detection of a cyberattack is 146 days.
While this number is unnerving for enterprises of any kind, it’s particularly disconcerting for industrial and Internet of Things (IoT) companies that deal in sensitive and/or safety-critical products. That’s nearly five months that advanced persistent threats have to siphon sensitive intellectual property (IP) or customer data, propagate into critical systems, and, potentially, do serious physical damage.
Assuming, for a moment, that all Internet-connected organizations are responsible enough to deploy standard anti-virus (AV) software, firewalls, and security information and event management (SIEM) systems, alert fatigue stands out as a major contributor to the extended dwell time of cyber threats.
Alert fatigue is a phenomenon in which the people responsible for managing an organization’s security infrastructure are bombarded with breach notifications so consistently that they eventually begin to disregard them. This usually occurs when the majority (or perceived majority) of notifications are false positives, or when alerts arrive from a range of different security tools with no context attached. The more systems and security tools deployed throughout an organization, the higher the likelihood of alert fatigue.
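The base-rate arithmetic behind alert fatigue is easy to sketch. The tool counts and false-positive rates below are hypothetical, chosen only to illustrate how quickly the signal drowns when many uncorrelated tools each raise their own alarms:

```python
# Illustrative sketch of how false positives drive alert fatigue.
# The numbers here are assumptions, not figures from the article.

def daily_alert_summary(tools):
    """Given (alerts_per_day, false_positive_rate) per tool,
    return total daily alerts and the fraction that are real."""
    total = sum(alerts for alerts, _ in tools)
    real = sum(alerts * (1 - fp_rate) for alerts, fp_rate in tools)
    return total, real / total

# Ten standalone tools, each raising 100 alerts/day at a 95% FP rate
tools = [(100, 0.95)] * 10
total, precision = daily_alert_summary(tools)
print(total, precision)  # 1000 alerts/day, only 5% of them real
```

A thousand alerts a day with one genuine incident hiding in every twenty is exactly the workload that trains analysts to stop looking.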
“Legacy security systems were designed to address complex security problems as they popped up in the wild,” says Alton Kizziah, vice president of global managed services at Kudelski Security. “We created signature-based AV to fight virus attacks, we created intrusion detection systems (IDS) to help defend the perimeter, we created firewalls to control packets, we created the Open Web Application Security Project (OWASP) to protect application weaknesses, and on and on.
“These systems were designed to detect specific security problems, like ‘I’m looking for malware that’s trying to reach out to a known-bad IP address,’” he continues. “That’s just one step in a chain of many events that are part of a breach, and there might be multiple different steps happening at the same time as an attacker moves.
“What happens is you’ve got all of these legacy devices looking for different problems, creating alerts, and creating a lot of false positive alerts, which results in alert fatigue with the guys who actually do the monitoring,” Kizziah says. “They’re unable to tell a coherent, relevant story that can give context to the administrators of the tools – is it really just this one particular event that’s the problem, or is it connected to a greater, larger, more complex attack that’s happening to our environment?
“The trouble is that long-lasting breaches are still outpacing all of these technologies that we designed to prevent them in the first place,” he says. “None of them really address the needed outcome, which is ‘We don’t want an attacker in our environment for months and months.’”
One alternative to traditional network security measures that has become popular in recent years is the honeypot, or a pseudo system with many of the trimmings of a real system that is actually deployed as bait. The idea behind a honeypot is that hackers, under the impression that the honeypot either contains valuable information or can be used to move laterally across a network of devices, will attempt to compromise a fake system that is actually isolated and heavily monitored. Once an attacker attempts to exploit the system, security professionals can take the necessary steps to expel them and protect the rest of the network.
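The core mechanics of a honeypot can be sketched in a few lines: listen on a port that advertises a plausible service, record every connection attempt, and never serve real data. The port handling, banner text, and log format below are arbitrary illustrative choices, not a production design (a real deployment would run continuously and feed its log into monitoring infrastructure):

```python
# Minimal honeypot sketch: accept one probe, log it, serve a fake banner.
import socket

def run_honeypot_once(srv):
    """Accept one connection on an already-listening socket,
    log the attempt, and send a bait service banner."""
    conn, addr = srv.accept()
    with conn:
        conn.sendall(b"220 FTP server ready\r\n")  # fake banner as bait
        return {"event": "connection_attempt", "source": addr[0]}

# Set the trap on an ephemeral localhost port
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# Simulate an attacker probing the port (same process, for illustration)
attacker = socket.create_connection(("127.0.0.1", port))
entry = run_honeypot_once(srv)
banner = attacker.recv(64)
attacker.close()
srv.close()
print(entry)  # {'event': 'connection_attempt', 'source': '127.0.0.1'}
```

Because nothing legitimate ever connects to the trap, every log entry is worth investigating, which is the honeypot's one real advantage over signature-based tooling.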
The problem with honeypots, especially against advanced threats, is that they simply aren’t real systems, which the savvy attacker will deduce in short order. For example, by its very nature a honeypot probably wouldn’t contain much activity or log data in the case of an embedded device, or browsing history in the case of an enterprise PC. Many of the little things that make a real system “real” simply won’t be there, raising warning flags for hackers who will just move on to another vector.
Although honeypots haven’t proven sweet enough, they do represent a step toward pushing alert fatigue back onto the hacker, who at least has to stop and assess the network landscape before proceeding with an attack. The security industry nevertheless requires a more sophisticated means of deceiving malicious actors, which it may have found in the form of deception networks.
Sweetening the pot with alternate realities
Like honeypots, deception networks are deployed as part of a real network. Unlike honeypots, however, they reside on real devices.
Deception networks take the honeypot concept to the extreme, creating fake administrator accounts, applications, and data that reside next to genuine components on the same machine. In effect, a deception network creates an alternate reality interspersed with the real network, so that hackers can’t be sure whether they are attempting to compromise a real component or a deceptive one. At worst, this severely limits an attacker’s ability to enter a network and propagate laterally; at best, an attacker uses data from a component that doesn’t truly exist to advance the attack, and the deception is triggered.
“For instance, if you’re on an end user’s laptop, the real network shares sit right next to the deceptive ones. There are real administrator accounts right next to deceptive ones. So they are in the list and look real,” says Kizziah. “Everything about the system is real, except the parts that aren’t.
“It becomes a really frustrating endeavor for an attacker because even if they recognize that deception is deployed, it doesn’t improve their attack – it actually slows them down. They will constantly second guess whether or not to try a pass-the-hash attack on a particular account because it might not be a real account. If they do and are caught, it’s over. When the technology sees an attempted lateral move, that’s a pretty clear indicator that something fishy is going on,” he explains. “What happens is you have a lot fewer false positive alerts. Nobody should have access to those accounts. They don’t exist. So why are they trying to log into a network share?
“Everything moves in slow motion, and there’s no obvious way to separate the reality from the alternate reality,” Kizziah continues. “Sometimes I describe it as trying to find a needle in a stack of needles. What’s really beautiful about that is that that needle in a stack of needles is the exact paradigm that threat analysts have been struggling with for years. We basically turn the alert fatigue back towards the attacker. Now they have to deal with too much information, where before we did.”
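The trigger logic Kizziah describes can be sketched simply: decoy artifacts sit alongside real ones, and any touch of a decoy is a high-confidence alert, because no legitimate user has a reason to reference something that doesn't exist. The artifact names and alert fields below are hypothetical, not taken from any vendor's product:

```python
# Sketch of a deception trigger: accessing a decoy artifact fires a
# high-fidelity alert. All names below are invented for illustration.

DECOYS = {"svc_backup_adm", r"\\fileserv02\finance_archive"}
REAL = {"jsmith", r"\\fileserv01\engineering"}  # genuine artifacts, never alerted on

def check_access(artifact, source_host):
    """Return an alert dict if a decoy was touched, else None."""
    if artifact in DECOYS:
        # Near-zero false-positive rate: the artifact does not really
        # exist, so the only way to reach it is via planted breadcrumbs.
        return {
            "severity": "critical",
            "artifact": artifact,
            "source": source_host,
            "reason": "access to deceptive artifact",
        }
    return None

# A pass-the-hash attempt against a decoy account trips the wire
print(check_access("svc_backup_adm", "laptop-042"))
# Normal use of a real share raises nothing
print(check_access(r"\\fileserv01\engineering", "laptop-042"))  # None
```

The contrast with the earlier false-positive flood is the point: a decoy touch carries its own context (which artifact, which host), so the analyst starts with a story rather than a raw event.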
To integrate deceptive technology with real systems on a real network, cybersecurity firms like Illusive Networks use machine learning to analyze network attack vectors and strategically place deceptions. Once the deceptions are in place, the technology integrates with endpoint protection and threat monitoring services such as those provided by Kudelski Security to provide real-time forensics and attack mitigation before hackers can move laterally (Figures 1A, 1B, and 1C).
Inquiring minds might assume that, because deception technology essentially creates an additional network that resides in and on top of the real one, double the infrastructure and resources are required. In fact, the deceptions are lightweight breadcrumbs that are pushed out to endpoints from a central deployment server and can be scaled up or down based on system requirements. No new infrastructure is needed.
Defense in depth: An inconvenient truth
Taking a step back, one obvious limitation of the deceptive strategy is that it only comes into play once a hacker has already gained access to the network. It’s designed to prevent lateral breaches that can persist indefinitely and result in catastrophic data loss, theft, or system damage, and to provide security analysts with actionable information about how to respond to attacks.
Traditional security measures still need to be applied to prevent attackers from infiltrating the network at all. If you’re not deploying a layered defense strategy, Kizziah says, “you’re making a mistake.”
“I would never recommend that we could supplant AV with deception,” he says. “There are too many variables in the corporate world around compliance, and reasons you have to have different technologies. Breaches are going to happen. They’re going to be advanced. If you don’t have a layered defense you’re going to be impacted more by these breaches.
“What we’ve found is things like advanced endpoint protection capability, deception capabilities, and endpoint response are all very complementary as long as you have a comprehensive threat monitoring and response strategy to handle the output,” Kizziah continues. “We look at this as layered defense for the endpoint. What you can prevent, you do. When you can deceive and slow down, you do. And [with deception technology,] when a breach is detected you should be able to detect it earlier in the lifecycle than with legacy tools. You can respond faster because you’re actually on the endpoints where the breach is happening, so you’re responding faster with containment and forensic collection.
“All of those things together strengthen defenses at the actual point of attack,” he adds.
A free trial of Illusive Networks’ deception technology can be found at www.illusivenetworks.com/product#dem.
About the Author
Brandon Lewis, Editor-in-Chief of Embedded Computing Design, is responsible for guiding the property's content strategy, editorial direction, and engineering community engagement, which includes IoT Design, Automotive Embedded Systems, the Power Page, Industrial AI & Machine Learning, and other publications. As an experienced technical journalist, editor, and reporter with an aptitude for identifying key technologies, products, and market trends in the embedded technology sector, he enjoys covering topics that range from development kits and tools to cyber security and technology business models. Brandon received a BA in English Literature from Arizona State University, where he graduated cum laude. He can be reached by email at email@example.com.