By Ja'han Jones
The man who killed himself before exploding a Tesla Cybertruck outside of a Trump hotel in Las Vegas on New Year’s Day used ChatGPT to plan his attack, authorities say.
As NBC News reported:
Matthew Alan Livelsberger, 37, queried ChatGPT for information about how he could put together an explosive, how fast a round would need to be fired for the explosives found in the truck to go off — not just catch fire — and what laws he would need to get around to get the materials, law enforcement officials said.
“We know AI was going to change the game for all of us at some point or another, in really all of our lives,” Clark County/Las Vegas Metropolitan Police Sheriff Kevin McMahill said. “I think this is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device.”
Artificial intelligence seems to have unleashed a new era of terror. The FBI says another New Year’s Day assailant, who rammed a truck into revelers in a deadly terrorist attack in New Orleans, wore AI-enabled Meta glasses as he plotted and carried out the assault.
A spokesperson for OpenAI, the maker of ChatGPT, said the company was saddened by the Las Vegas incident and is “committed to seeing AI tools used responsibly.” A Meta spokesperson told NBC News that the company was in touch with authorities regarding the New Orleans attack.
With the rising popularity of AI tools, experts on technology and national security have sounded the alarm about potential opportunities that AI could provide to people looking to carry out terrorist attacks.
In 2021, a report by the United Nations’ Office of Counter-Terrorism, titled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” listed several ways that AI already has aided terrorist attacks and could do so again. The report cites AI-enabled cyberattacks, AI-assisted malware and ransomware deployed to disrupt systems or hold them hostage, and AI tools that allow terrorists to crack passwords and infiltrate vital programs.
The report also warned about autonomous vehicles that could be used as weapons unless preventive safety features are installed; drones with facial recognition capabilities that could be used to sow terror; and the potential use of AI to create deadly pathogens that could be deployed as bioweapons.
These nefarious uses of artificial intelligence are part of the reason why, over the past few years, I and others have been hesitant to hype the exciting and potentially positive uses of this technology. (You can read about some of those here.) But AI-assisted attacks also speak to a point that President Joe Biden made in a speech at the U.N. General Assembly in September, when he warned about artificial intelligence being used to place “shackles” on the human spirit.
Biden was talking mainly about AI being used maliciously by dictators. But such tools are more accessible than ever, meaning the threat of AI-fueled terror isn’t confined to illiberal rulers; it extends to lone-wolf attackers as well.
If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat live at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.
Ja’han Jones is The ReidOut Blog writer. He’s a futurist and multimedia producer focused on culture and politics. His previous projects include “Black Hair Defined” and the “Black Obituary Project.”
