- Cyber criminals and other bad actors can launch attacks faster and more easily with artificial intelligence, which makes it harder for CISOs and other cybersecurity experts to protect their organizations.
- One of the most effective things organizations can do to better defend against AI-assisted attacks is deploy tools and services that leverage AI.
- Update employee training programs so that workers learn to recognize AI-enabled phishing campaigns.
IT and security leaders are concerned about the impact of artificial intelligence on cybersecurity — and rightfully so. Cyber criminals and other bad actors can launch attacks faster and more easily with AI, which makes it harder for CISOs and other cybersecurity experts to protect their organizations.
One of the primary ways cyber criminals are leveraging AI is to increase the efficacy of phishing emails and other social engineering attacks, said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Winthrop Shaw Pittman.
"Historically, phishing emails have been somewhat easy to spot thanks to sloppy drafting," Finch said. "In particular, phishing emails created by a hacker unfamiliar with a certain language [have] tended to be easy to spot due to poor grammar, illogical vocabulary, and bad spelling." Such glaring errors were easy to pick up by automated defenses as well as reasonably careful people, he added.
But AI allows many more people — regardless of their language skills — to quickly and cheaply generate convincing text for use in phishing emails, Finch said. With AI tools, it is now far more likely that a phishing email will appear genuine, leading to more potential victims actually clicking on malicious links.
Here are three steps cybersecurity leaders can take to strengthen their organizations' cybersecurity programs in the age of AI.
Using AI to defend against AI
One of the most effective things organizations can do to better defend against the latest AI-assisted attacks is deploy tools and services that leverage AI. That means keeping up with the latest technology offerings and hiring people with the AI skills needed to use these products, or retraining existing staff so they can use them effectively.
"Advanced technologies such as machine learning algorithms, anomaly detection and real-time monitoring can play a pivotal role in identifying and responding to potential security breaches or malicious activities," said Kyle Kappel, U.S. leader of cyber at KPMG. "By leveraging these tools, organizations can enhance their ability to detect, prevent, and respond to security events effectively."
Executives need to prioritize the allocation of leadership and resources to ensure their AI security program remains current and proactive in the face of evolving threats, Kappel said. This means appointing people with expertise in AI security to leadership positions and empowering them with the necessary tools and authority to stay on top of the latest developments in the field, he said.
By having dedicated personnel overseeing AI security, organizations can effectively assess the risks posed by emerging threats and devise appropriate countermeasures, Kappel said.
Update employee training programs
If an organization is using older methods of training employees — including the C-suite — about how to avoid falling prey to attackers, it might be more likely to experience a data breach or other security incident. Training programs need to be updated just like tools need to be enhanced in this new security environment.
"AI is so new that most people, including business executives, don't really understand how it can be used for malicious purposes," Finch said. "Business leaders need to educate themselves and their employees about the emerging negative uses of AI."
For example, employees need to be taught about what to look for in AI-generated phishing emails, given that cybercriminals are using AI to make messages appear genuine. Training should be ongoing, because the threats are constantly evolving.
Build a new security framework
And finally, deploying new tools and training programs is all well and good, but given the severity of the latest AI-powered threats, organizations need to create strategies that bolster security throughout the enterprise and ensure that policies are actually enforced. This effort needs to be led by CISOs and strongly supported by other members of the C-suite.
"Executives should adopt a comprehensive framework that guides the implementation of suitable controls for the use of AI across the enterprise and within various business functions," Kappel said. This includes interactions with third-party entities. The framework should include risk indicators, the AI ecosystem and regulations, and should "address the unique security considerations associated with AI within their organizations, and reducing their AI security risk," he said.