Alan Zendell, May 3, 2023
If you’re not worried about Artificial Intelligence (AI), you probably haven’t thought about it enough. It’s been around for decades, but only lately has its potential for negative outcomes been widely publicized.
Half a century ago, we thought of AIs as self-learning knowledge bases. The theory was that if we collected huge amounts of data about a subject, we could write programs that would enable computer systems to become experts on it. The data would include starting conditions, context, and lists of possible actions. By statistically analyzing real outcomes in real situations, developers hoped computers would be able to infer the likelihood of every possible outcome for every future decision they might make.
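The core idea here, estimating how likely each outcome is by tallying historical results, can be sketched in a few lines of Python. The decisions, outcomes, and history below are invented purely for illustration:

```python
from collections import Counter

def outcome_likelihoods(records, action):
    """Estimate the probability of each outcome of an action
    from a history of (action, outcome) records."""
    outcomes = Counter(o for a, o in records if a == action)
    total = sum(outcomes.values())
    return {o: n / total for o, n in outcomes.items()}

# Toy history: each record pairs a past decision with its observed outcome.
history = [("advance", "success"), ("advance", "failure"),
           ("advance", "success"), ("hold", "success")]

print(outcome_likelihoods(history, "advance"))
# "advance" succeeded in roughly two thirds of past records
```

Real systems of the era were far more elaborate, but the principle was the same: the quality of the inference depends entirely on how much relevant history has been observed.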
I encountered AI twice in my early career as a scientist/engineer. During the three dark years I spent in the Pentagon in the heart of the Vietnam conflict, there was a lot of talk about the Defense Department building AI systems to help fight the war. Security requirements being what they were, most of us could only be certain something was true if we’d worked on it directly, and even then, we were continually threatened with all the ways we could wind up in prison if we shared anything with unauthorized persons. But since that occurred more than fifty years ago, what might once have been classified Top Secret is now just interesting folklore.
Rumor had it – I don’t know how much of what I heard back then was true – that computers were making decisions about where to send soldiers and what to attack. Rumor also had it that military AIs didn’t learn fast or well enough, and their decisions often resulted in tragic loss of life and equipment, but the generals were told that was to be expected, and the AIs would learn from their mistakes. That kind of application for an AI system was premature at best, equivalent to seeking a cure for a deadly disease by randomly injecting people with hundreds of drugs to see if one worked, rather than spending the time to do responsible research…
…which segues into my next encounter with AI. In the 1980s, I worked with a brilliant physician/epidemiologist named Henry Krakauer, one of the pioneers of medical AI diagnostic systems. This was one of the first and most beneficial uses of self-learning systems. They learned by being fed gigabytes of real patient data, including symptoms, diagnoses, comorbidities, treatments, and outcomes, stripped of personal identifying information. The result is that today, when you visit your doctor, they’re probably carrying around a laptop computer. The doctor enters your personal data and symptoms, and the computer instantly informs them of all the possible diagnoses and their likelihoods, including possible treatment options and probable outcomes. That is a very good thing. Sometimes AI is our friend.
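A toy version of that diagnostic lookup might rank diagnoses by how often they accompanied a given symptom in past de-identified cases. The case data and condition names below are invented for illustration; real diagnostic systems use far richer models and far more variables:

```python
from collections import Counter

def rank_diagnoses(cases, symptom):
    """Rank diagnoses by relative frequency among past cases
    that included the given symptom."""
    matches = Counter(dx for symptoms, dx in cases if symptom in symptoms)
    total = sum(matches.values())
    return [(dx, n / total) for dx, n in matches.most_common()]

# De-identified toy cases: (set of presenting symptoms, final diagnosis).
cases = [({"fever", "cough"}, "flu"),
         ({"fever", "rash"}, "measles"),
         ({"fever", "cough"}, "flu"),
         ({"cough"}, "cold")]

for dx, p in rank_diagnoses(cases, "fever"):
    print(f"{dx}: {p:.0%}")
```

The doctor’s laptop is doing something conceptually similar, only trained on enormous volumes of real outcomes, which is exactly why feeding such systems good data mattered so much.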
But like all powerful tools, it can be misused, and you can be sure that smart people who are ethically challenged will always find a way. Thus, Facebook recently acknowledged that as many as twenty percent of its accounts might be fake; that is, the people identified as their owners were really AIs, or bots, as they’re referred to in the vernacular. Like a rapidly expanding mushroom cloud, bots keep showing up where they shouldn’t. A well-programmed bot can masquerade as almost anything – a teacher, a subject expert, a politician, a lawyer, an advocate – and most of us cannot distinguish them from the real things.
AIs have been blamed for manipulating investment markets, causing financial crises, spreading false information, undermining governments, and fomenting insurrections. We don’t know how much they contributed to the wave of election deniers that sprang up in the wake of Joe Biden’s defeat of Donald Trump in 2020, but computer experts and government officials have testified that thousands of such bots are active throughout our social media, and predicted that they will pose a much larger threat in 2024.
We were raised on stories of Frankenstein monsters turning on their creators and automated systems evolving into indestructible Terminators. Should we be concerned? Consider all the ways our privacy has been compromised in recent decades, from nearly universal surveillance to identity theft. The popular television series NYPD Blue, which ran for twelve seasons from 1993 to 2005, featured good old-fashioned detective work, nary a video camera or DNA molecule in sight. Compare that to any modern police drama, in which it’s virtually impossible for a criminal to hide off the grid.
We should be concerned, because only a small fraction of the power of Artificial Intelligence has been realized or even imagined. If there’s a way to use it for evil, you can be sure someone will. The potential for chaos and destruction is limitless.
Amanpour has hosted several scientists and technologists who warn of the alarming misuse of AI, and of how poorly our legislatures understand it, which leaves it with little to no oversight.