A new report cites a lack of regulations in U.S. chemical and biological security.
Emerging technologies in artificial intelligence will make it easier for bad actors to “conceptualize and conduct” chemical, biological, radiological or nuclear attacks, according to a report released by the Department of Homeland Security on Monday.
Selected excerpts of the report to President Joe Biden were made public after he signed an executive order three months ago on artificial intelligence.
Gaps in existing U.S. biological and chemical security regulations, combined with the increased use of AI tools, “could increase the likelihood of both intentional and unintentional dangerous research outcomes that pose a risk to public health, economic security, or national security,” according to the DHS report.
“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for Countering Weapons of Mass Destruction Mary Ellen Callahan.
“This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure and trustworthy development and use of AI,” she said.
DHS also said that the diverse approaches of AI developers make it crucial that the U.S. and international partners communicate and harness “AI’s potential for good.”
“The degree to which nation states or groups interested in pursuing these unconventional weapons capabilities will harness such AI tools remains unclear, however, since there are various technical and logistical hurdles that have to be met to develop fully functioning weapons systems that can be used,” Javed Ali, the former senior counterterrorism coordinator on the National Security Council, told ABC News. “That said, it is more likely that AI tools will be more helpful on the research and theoretical design end of the spectrum than the actual manufacture and deployment of such weapons, especially with respect to nuclear weapons.”
A separate DHS report produced by the Cybersecurity and Infrastructure Security Agency (CISA) last week highlighted that some attacks could be carried out or aided by using AI — including those targeting critical infrastructure.
“It is clear that foreign intelligence services, terrorist groups and criminal organizations have embraced the power of technology and incorporated the use of advanced computing capability into the tactics they use to achieve their illegal objectives,” said John Cohen, the former acting undersecretary for intelligence and analysis at DHS. “Terrorists, criminals and other threat actors can use AI to acquire the instructions on how to develop explosives and other weapons of mass destruction. They can also glean greater insights on potential targets, and on delivery methods to use to achieve the greatest possible disruptive result.”
Last year, the European Parliament approved landmark legislation that aimed to regulate the use of AI and promote “trustworthy” uses.
Last week, DHS announced the creation of a new AI board that includes 22 representatives from a range of sectors, including software and hardware companies, critical infrastructure operators, public officials, the civil rights community and academia.
Some notable members of the board include: Sam Altman, the CEO of OpenAI; Ed Bastian, the CEO of Delta Air Lines; Satya Nadella, the chairman and CEO of Microsoft; Sundar Pichai, the CEO of Alphabet; and Maryland Gov. Wes Moore.
The board, according to the agency, will help DHS stay ahead of evolving threats posed by hostile nation-state actors and reinforce national security by helping to deter and prevent those threats.
Cohen, now an ABC News contributor, said the board is a good step, but there is more to be done.
“In many respects, we are using investigative and threat mitigation strategies that were intended to address the threats of yesterday, while those engaged in illegal and threat-related activity are using the technologies of today and tomorrow to achieve their objectives,” he said.
Source: Emerging AI technologies make it easier for bad actors to ‘conceptualize and conduct’