
Fighting fire with fire: The future of cybersecurity is artificial intelligence

It’s been a banner year for cyber criminals. International cybersecurity disasters such as the WannaCry and GoldenEye ransomware attacks affected thousands of people around the globe and illustrated just how tenuous a grasp most organizations hold on their security. Then there’s the Equifax debacle, which impacted about 1 in 3 Americans. And if the CIA can’t protect its own data from ending up on WikiLeaks, what chance do the rest of us stand against ever-more-sophisticated hackers?

In theory, artificial intelligence can provide new forms of protection against nefarious actors. The challenge is that those same nefarious actors will also have access to AI technologies. With each passing day, AI becomes more powerful in the hands of both white hat and black hat hackers. And as more and more data is collected and stored globally, the stakes keep rising.

Security is becoming an increasingly urgent concern, particularly as experts warn against the gaps in the “wildly insecure” Internet of Things. Smart home features, assisted driving systems, and convenient wearables are all perks of living in the 21st century. Yet these same advantages expose us to digital security violations and cybercrime. Our best hope of defending against AI-powered cyberattacks is to leverage the power of AI in cybersecurity. Fighting fire with fire.

The dangers of AI-powered cyberattacks

Before we dive into how AI helps combat cyberattacks, we first need to understand the challenges. Hackers have been spreading viruses and breaking into databases since networked computing emerged decades ago. Security analysts are always working hard to keep up with new threats. But the sheer amount of data that’s now generated makes it impossible for them to track every anomaly and red flag on their own. After all, humans create 2.5 exabytes of data each day – the equivalent of 250,000 Libraries of Congress. 

“IT security teams are struggling to see what is happening in and around their IT infrastructures,” wrote one business expert. “They struggle to understand where all corporate data lives and who has access to it, not to mention what users are doing with that access.” With hundreds of millions of new security logs created each week, security teams cannot possibly process that data without technological assistance, suggesting new applications for AI.

As AI has become a growing presence in people’s daily lives – think voice-activated assistants and car autopilots, for instance – our understanding of the opportunities and dangers has matured. 

“The rise of brain-computer interfaces, in particular, will create a dream target for human and AI-enabled hackers. And brain-computer interfaces are not so futuristic — they’re already being used in medical devices and gaming, for example,” one computer science professor wrote about near-future cybersecurity threats. “If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets.”

Our fears about AI are coming back down to earth: rather than a malevolent AI rising from the pages of a sci-fi novel to conquer us all, we should be more concerned about malicious or incompetent humans misusing AI and causing unprecedented security challenges. That shift doesn’t make the threat any less scary.

At least, that was President Barack Obama’s view of AI. He worried more about hostile actors using AI to commit cyberattacks that harm millions of people than about an autonomous system taking it upon itself to wipe out humanity. 

“There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,’” Obama told Wired. “If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems.” 

With that in mind, it’s no wonder that the Defense Advanced Research Projects Agency (DARPA), an arm of the Department of Defense, invited high-level hackers to develop proactive AI hacking systems that could detect and repair system vulnerabilities before the bad guys could find them. 

The cybersecurity risks associated with AI are real, and they’re less sci-fi than early hysteria made them out to be. Companies are struggling to recruit qualified cybersecurity professionals due to both a lack of supply and a lack of knowledge, according to an ISACA cybersecurity survey. Even if they can get candidates in the door, hiring managers may not know exactly which skills are needed to defend against attacks. Not only do cybersecurity workers need to understand existing threats, they must also be able to adapt to the ever-shifting, complicated nature of the field. 

Given the growing amount of data, the rising number of cyber threats, and the rapid pace of technological change, it’s clear that humans can’t battle cyberattacks alone. We need AI to defend our data against AI – or more specifically, against the humans who would wield AI for less-than-virtuous purposes.

Artificial intelligence (and humans) to the rescue  

Now that we’ve covered the worst-case scenario side of things, let’s look on the bright side. AI is one of the most powerful tools humans have ever invented, and researchers are already leveraging AI-powered programs to combat cyber threats. In addition to DARPA incentivizing white hat hackers to build autonomous threat-detection systems, experts are using machine learning to beat criminals at their own game. 

Right now, one of the major threats to our digital well-being is hackers manipulating computer systems with deliberately misleading inputs. As a senior editor at MIT Technology Review wrote recently, computer programs “are vulnerable, in part, because they lack actual intelligence.” Without the instinct to reject suspicious-sounding commands, programs may download malware or make incorrect route calculations for self-driving cars. Security experts are experimenting with machine learning to spot these fake patterns and signals so that programs learn to ignore them and sidestep the attack.
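
To make that concrete, here is a minimal sketch of the idea: train a classifier on labeled examples of clean and manipulated inputs so suspicious signals can be rejected before they reach the downstream system. The data below is synthetic and the feature layout is an assumption for illustration; a real system would featurize actual telemetry or commands.

```python
# A minimal sketch of using ML to reject manipulated inputs, assuming
# labeled examples of clean and spoofed signals are available. The data
# below is synthetic; a real system would featurize actual telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Clean signals: smooth readings. Spoofed signals: the same readings
# with sparse, high-magnitude perturbations injected by an attacker.
clean = rng.normal(0.0, 1.0, size=(1000, 20))
mask = rng.choice([0.0, 1.0], size=clean.shape, p=[0.9, 0.1])
spoofed = clean + mask * rng.normal(0.0, 3.0, size=clean.shape)

X = np.vstack([clean, spoofed])
y = np.array([0] * len(clean) + [1] * len(spoofed))  # 1 = suspicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X_train, y_train)
print(f"Held-out accuracy: {detector.score(X_test, y_test):.2f}")

# Inputs flagged as suspicious can be dropped or routed for review
# instead of being passed to the downstream program.
```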

Others are taking a hybrid human-AI approach to cyberattacks, developing programs that analyze millions of security logs, flag any suspicious activity, and refer the potential hacks to human analysts. The experts then conduct their own analyses to determine if there’s been a breach. They document the outcomes of these investigations, and the AI system integrates this data into its understanding of what constitutes a threat. In this way, it learns from its mistakes and successes, which leads to a higher accuracy rate and reduced vulnerability.  
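
A toy version of that feedback loop might look like the sketch below: an unsupervised anomaly detector surfaces the most suspicious log entries, a simulated analyst renders verdicts, and those verdicts become labeled training data for a supervised model. The `ask_analyst` function, the feature layout, and the cutoffs are all hypothetical placeholders, not a production design.

```python
# A toy version of the hybrid pipeline: an anomaly detector surfaces the
# most suspicious log entries, a (simulated) analyst renders verdicts,
# and those verdicts train a supervised model. `ask_analyst`, the
# feature layout, and the cutoffs are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
log_features = rng.normal(size=(5000, 10))  # stand-in for featurized logs
log_features[:25] += 4.0                    # a handful of true intrusions

# Step 1: the unsupervised model scores every entry; lower = stranger.
iso = IsolationForest(contamination=0.01, random_state=0).fit(log_features)
scores = iso.score_samples(log_features)
suspects = np.argsort(scores)[:50]          # refer the worst 50 to humans

# Step 2: analysts investigate each referral and record a verdict.
def ask_analyst(index: int) -> int:
    return 1 if index < 25 else 0           # 1 = confirmed breach

verdicts = {int(i): ask_analyst(int(i)) for i in suspects}

# Step 3: verdicts become labeled training data, so the system learns
# from both its mistakes and its successes over time.
X = log_features[list(verdicts)]
y = np.array(list(verdicts.values()))
triage_model = RandomForestClassifier(random_state=0).fit(X, y)
```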

AI can dramatically decrease the amount of time it takes to discover security breaches, with some organizations striving to lower alert times from months to minutes. As cyberattacks increase in number and sophistication, detection speed will be ever more important to mitigating the data loss and financial fallout from these attacks. Although many data breaches are caused by system breakdowns or human error, research indicates that criminal cyberattacks are on the rise. Detecting a hack within an hour of when it happens is vastly preferable to finding out months later, when millions of pieces of data have been exposed.

Becoming our own cybersecurity heroes 

Unfortunately, the cybersecurity war has no end in sight. As new technologies emerge, black hat hackers will find clever new ways to steal high-value data, and white hat security experts will leverage those same technologies to combat the assaults. Companies will likely need to create cybersecurity ecosystems that combine multiple forms of AI to protect their data, and their customers’ data, through fast detection and crisis mitigation protocols. 

There are ways for you and me to protect ourselves as well. Researchers are using AI to build password-generation tools that produce passwords far harder for attackers to crack, and it’s important that we all monitor such trends and take responsibility for our personal password maintenance. But keep in mind, too, that the more personal information you share online, the more AI-savvy hackers have to work with.
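
Good password hygiene doesn’t require an AI research lab, though. As a small illustration, Python’s standard-library `secrets` module draws from a cryptographically secure random source, unlike the general-purpose `random` module; the length and character set below are illustrative choices, not a universal standard.

```python
# A small illustration of password hygiene: Python's standard-library
# `secrets` module draws from a cryptographically secure source, unlike
# the general-purpose `random` module. Length and alphabet here are
# illustrative choices, not a universal standard.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh, hard-to-guess password each run
```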

Some experts predict that hackers will be able to use phishing and machine learning to generate scam emails that sound eerily similar to your own communication style, accurately guess your responses to security questions, or send fraudulent text messages to get you to reveal private information. To stay protected, always check that emails are coming from secured, trusted senders and avoid providing information over unsecured connections. Monitor your accounts for suspicious activity, and trust your instincts when a message or transaction feels off. Hackers are highly motivated and often quite skilled at what they do. But that doesn’t mean victimization is a foregone conclusion.  
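
One of those sender checks can even be automated. The sketch below flags lookalike domains – a common phishing trick – by fuzzy-matching a sender’s domain against a list of trusted ones; the trusted list and similarity threshold are assumptions for illustration, and this is a single heuristic, not a complete defense.

```python
# A hedged sketch of one sender check from the advice above: flag
# lookalike domains by fuzzy-matching against trusted ones. The trusted
# list and 0.8 cutoff are illustrative assumptions, and this heuristic
# is a single layer, not a complete phishing defense.
import difflib

TRUSTED_DOMAINS = ["example.com", "mybank.com", "github.com"]

def looks_like_spoof(sender: str) -> bool:
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain
    # A near-miss of a trusted domain (e.g. "rnybank.com") is a red flag.
    return bool(difflib.get_close_matches(domain, TRUSTED_DOMAINS, n=1, cutoff=0.8))

print(looks_like_spoof("support@mybank.com"))   # False -- exact match
print(looks_like_spoof("support@rnybank.com"))  # True  -- lookalike
print(looks_like_spoof("friend@gmail.com"))     # False -- merely unknown
```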

As is the case with so many facets of AI, the future of cybersecurity requires both artificial intelligence capabilities and proper human judgment. Humans will need to provide context to the output of AI systems, train both machines and themselves to better determine when there’s a true threat, and find new ways to beat criminals to system vulnerabilities. The battle goes on, but we can use AI to ensure that the bad guys don’t win.