NSA Cybersecurity Director Says ‘Buckle Up’ for Generative AI

At the RSA security conference in San Francisco this week, there has been a feeling of inevitability in the air. At talks and panels across the sprawling Moscone convention center, at every vendor booth on the show floor, and in casual conversations in the halls, you just know that someone is going to bring up generative AI and its potential impact on digital security and malicious hacking. NSA cybersecurity director Rob Joyce has been feeling it too.

“You can’t walk around RSA without talking about AI and malware,” he said on Wednesday afternoon during his now annual “State of the Hack” presentation. “I think we’ve all seen the explosion. I won’t say it’s delivered yet, but this truly is some game-changing technology.”

In recent months, chatbots powered by large language models, like OpenAI’s ChatGPT, have made years of machine-learning development and research feel more concrete and accessible to people all over the world. But there are practical questions about how these novel tools will be manipulated and abused by bad actors to develop and spread malware, fuel the creation of misinformation and inauthentic content, and expand attackers’ abilities to automate their hacks. At the same time, the security community is eager to harness generative AI to defend systems and gain a protective edge. In these early days, though, it’s difficult to break down exactly what will happen next.

Joyce said the National Security Agency expects generative AI to fuel already effective scams like phishing. Such attacks rely on convincing and compelling content to trick victims into unwittingly helping attackers, so generative AI has obvious uses for quickly creating tailored communications and materials.

“That Russian-native hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said. “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test … So that right there is here today, and we are seeing adversaries, both nation-state and criminals, starting to experiment with the ChatGPT-type generation to give them English language opportunities.”

Meanwhile, although AI chatbots may not be able to develop perfectly weaponized novel malware from scratch, Joyce noted that attackers can use the coding skills the platforms do have to make smaller changes that could have a big effect. The idea would be to modify existing malware with generative AI to change its characteristics and behavior enough that scanning tools like antivirus software may not recognize and flag the new iteration.

“It is going to help rewrite code and make it in ways that will change the signature and the attributes of it,” Joyce said. “That [is] going to be challenging for us in the near term.”

In terms of defense, Joyce seemed hopeful about the potential for generative AI to aid in big data analysis and automation. He cited three areas where the technology is “showing real promise” as an “accelerant for defense”: scanning digital logs, finding patterns in vulnerability exploitation, and helping organizations prioritize security issues. He cautioned, though, that before defenders and communities more broadly come to depend on these tools in daily life, they must first study how generative AI systems can be manipulated and exploited.

Mostly, Joyce emphasized the murky and unpredictable nature of the current moment for AI and security, cautioning the security community to “buckle up” for what’s likely yet to come.

“I don’t expect some magical technical capability that is AI-generated that will exploit all the things,” he said. But “next year, if we’re here talking a similar year in review, I think we’ll have a bunch of examples of where it’s been weaponized, where it’s been used, and where it’s succeeded.”
