“The Terminator” was released 40 years ago. Skynet was the product of humans creating software that gains artificial intelligence and becomes self-aware. Skynet blows up mankind. It was science fiction that I thought could happen, but not in my lifetime.
“I, Robot” was released 20 years ago. It also had the “good idea gone bad” theme, with man-made software gaining self-awareness. It didn’t want to destroy mankind; it just wanted to control it. By 2004, I thought artificial intelligence was moving way too fast and that Skynet might be closer than I had thought 20 years earlier.
Jack Carr’s fifth book, “Only the Dead,” introduced an “all-knowing,” self-aware, quantum-computing super-AI. “It” resides in a secret military base. Only a handful of people know “her” true abilities. She can instantly “see” everything connected to any computer, or anything near one. Carr said:
“The question now isn’t ‘could we’ or ‘should we,’ as AI is already here. The question now is about management of AI across industry. My hope is that AI can be used for the betterment of society — but as I learned in the SEAL Teams, hope is not a course of action.”
“I went deep ‘down the rabbit hole’ in my research of military and intelligence service AI, so much so that I received a few notes after the novel’s publication from people in that field telling me I got a bit close to our actual cyber capability in terms of the coupling of quantum speeds and AI.”
By 2024, AI was intertwined in everything humans use, from smartphones to personal computing to the military. Last month, the Secretary of the Air Force put on a flight suit and sat in the front seat of an F-16.
His F-16 spent an hour in the air, dogfighting with another Air Force fighter. His jet was piloted by AI.
With the midday sun blazing, an experimental orange and white F-16 fighter jet launched with a familiar roar that is a hallmark of U.S. airpower. But the aerial combat that followed was unlike any other: This F-16 was controlled by artificial intelligence, not a human pilot. And riding in the front seat was Air Force Secretary Frank Kendall.
AI marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s, and the Air Force has aggressively leaned in. Even though the technology is not fully developed, the service is planning for an AI-enabled fleet of more than 1,000 unmanned warplanes to be operating by 2028.
By 2028, in just four years, the Air Force plans to field a fleet of AI-piloted warplanes. Plenty of people are alarmed at the speed with which AI is learning to pilot and dogfight, and at how quickly it is getting better at it than humans. According to the DoD, China hasn’t caught up with us. Not yet.
Air Force pilots are concerned that soon they could be facing an enemy warplane flown by AI software.
But they also say they would not want to be up in the sky against an adversary that has AI-controlled aircraft if the U.S. does not also have its own fleet.
“We have to keep running. And we have to run fast,” Kendall said.
When I learned about AI piloting warplanes, software that could in theory one day decide that mankind isn’t smart enough to be trusted with weapons, or worse, that humans are a pestilence, I was terrified. I was reminded of HAL 9000 in “2001: A Space Odyssey”: a machine deciding for itself what is right and wrong.
Terrified yet?