As the US government embraces AI, security is a top concern

CIPHER BRIEF REPORT – US government leaders working to implement the Biden administration's new Executive Order on Artificial Intelligence (AI) are not only grappling with the global impact of a world-changing technology, but are doing so while facing a host of threats to American businesses and national security, threats originating largely from China.

“The breakneck speed at which AI is evolving and churning out innovation,” noted CISA Executive Director Eric Goldstein at this week's Cyber Initiatives Group Winter Summit, brings with it “some real risks of unlearning some of the lessons of the last few decades” on how to develop and deploy software securely.

The US is leading a global effort by government partners, including the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA), and agencies around the world, to lay the foundation for “developing, designing, maintaining and deploying AI systems,” Goldstein said, “in the safest way possible.” That means that rather than relaxing security standards for AI, as some have suggested, the technology requires “higher levels of scrutiny, security and control” to ensure that the risks of unauthorized access or use are effectively managed.

FBI Deputy Assistant Director for Cyber Cynthia Kaiser agreed, saying at the same summit that the bureau thinks about AI in two distinct ways. First is the familiar defensive task of fending off cyber threats, an area of risk that has grown more potent and sophisticated thanks to the capabilities AI brings. But AI raises another protective concern as well: shielding America's AI innovators from the ever-present dangers of cyber espionage and intellectual property theft.

The biggest threat, according to experts, is China, both as a competitor in AI research and development and as a thief of American technological secrets. As Kaiser put it, “we are worried about the next step coming and how to defend against it.”

Looking for a way to get ahead of the week in cyber and technology? Sign up for the Cyber Initiatives Group Sunday newsletter to quickly catch up on the biggest cyber and tech headlines and be prepared for the week ahead. Sign up today.

AI's unique ability to supercharge malicious uses and escalate current threats into even greater dangers clearly concerns the FBI. The prospect that “destructive attacks are getting better,” Kaiser noted, in the wake of already alarming incidents such as the Volt Typhoon network implants or, even more recently, the probing of US water utilities by Iranian threat actors, is a “proliferation” problem unique to AI.

Fortunately, experts note that deploying AI in cybersecurity carries a “defender advantage,” such as writing and testing protective code much faster or using AI to automate certain system-monitoring functions to free up time and reduce costs. But, as Goldstein said, that advantage is “very fragile,” given the expectation that adversaries will press ahead unabated with their own development of capabilities, often unfettered by the ethical safeguards observed in the West. Goldstein warned that today's advantage could be squandered “if we don't design and deploy AI systems ourselves safely.”

Kaiser addressed the executive order's requirement that all federal agencies develop internal guidelines and safeguards for the use of artificial intelligence, a topic uniquely sensitive for the FBI. In response to that mandate, Kaiser noted, the FBI has already created an AI ethics board to ensure that AI is used in ways “that protect due process, protect privacy among Americans…things that are really important for us at the FBI to defend.”

As for other elements of the executive order of particular interest to CISA, Goldstein pointed to three: first, CISA's role in coordinating with Sector Risk Management Agencies (SRMAs) to conduct risk assessments and provide guidance for each critical sector; second, a mandate to use AI systems to accelerate the detection of vulnerabilities on federal networks; and third, CISA's role in providing red-team guidance on AI systems, so that rigorous testing yields an understanding of weaknesses and how an adversary might exploit them.

Read more expert-driven national security insights, perspectives and analysis at The Cipher Brief.
