Alan Zendell, October 6, 2021
The phrase artificial intelligence (AI) literally means intelligence displayed by inorganic entities (i.e., machines, computers) as opposed to humans and other animals. Beyond that simple definition lies much confusion and misunderstanding.
In practical terms, AI refers to any computer program with self-learning capability, but that phrase, too, is rife with ambiguity and misinterpretation. Self-learning systems operate on probabilities. They gather huge amounts of data from which statistically likely correlations are built. The more data we feed into an AI system, the better it becomes at predicting outcomes and guessing at optimal solutions.
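The correlation-counting idea described above can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how any production medical or ad system actually works: the "model" is nothing but tallies of how often features co-occur with outcomes, and feeding it more labeled cases sharpens its probability estimates. All names and data here are invented.

```python
from collections import Counter, defaultdict

class CorrelationModel:
    """A toy 'self-learning' system: learning is just bookkeeping of counts."""

    def __init__(self):
        self.outcome_counts = Counter()               # how often each outcome seen
        self.pair_counts = defaultdict(Counter)       # feature -> outcome -> count

    def observe(self, features, outcome):
        """Record one labeled example; this is the entire 'training' step."""
        self.outcome_counts[outcome] += 1
        for f in features:
            self.pair_counts[f][outcome] += 1

    def likelihoods(self, features):
        """Score each known outcome by naive multiplied frequencies."""
        total = sum(self.outcome_counts.values())
        scores = {}
        for outcome, n in self.outcome_counts.items():
            p = n / total  # prior: how common this outcome is overall
            for f in features:
                # frequency of this feature among cases with this outcome,
                # with add-one smoothing so unseen pairs aren't impossible
                p *= (self.pair_counts[f][outcome] + 1) / (n + 2)
            scores[outcome] = p
        return scores

# Invented example data: symptom sets paired with diagnoses.
model = CorrelationModel()
model.observe({"fever", "cough"}, "flu")
model.observe({"fever", "rash"}, "measles")
model.observe({"fever", "cough"}, "flu")

scores = model.likelihoods({"fever", "cough"})
best_guess = max(scores, key=scores.get)   # the statistically likeliest match
```

The point of the sketch is the one the paragraph makes: nothing here understands medicine. The system only gets "better" in the sense that more observations make the frequency ratios more stable.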
Your physician uses a medical AI system that processes the information collected during your examination, combines it with your entire medical history, and evaluates the result against a massive database that contains everything doctors and epidemiologists know about similar conditions. In the past, your doctor relied on training and memory, supplemented by looking up esoteric details in one of the massive medical texts on his or her bookshelf. Today, the laptop your doctor carries around instantly lists every condition consistent with your symptoms, along with its likelihood.
Your doctor is now armed with a tool that brings the entire body of medical knowledge and experience to bear to help diagnose what’s wrong. But, and this is the essential point, the final diagnosis is made by the physician, not the medical AI system. Your doctor applies years of knowledge and acquired wisdom, professional human judgment, and some undefinable quality – instinct? intuition? gut feeling? That’s very different from, say, an automated assembly line, in which all actions and decisions are pre-programmed. Everything about the manufacturing process is known in advance; it simply has to be accurately coded in the system’s decision-making algorithm.
The mid-twentieth century was the heyday of speculation and fantasy about AI and what it might one day be capable of. In the 1950s and ’60s, the most popular visualizations of AI took the form of robots that mimicked human behavior. The classic science fiction writer Isaac Asimov invented laws of robotics that are still invoked today whenever people envision robots performing activities traditionally thought to require human judgment. Asimov’s robots were so sophisticated, they were almost indistinguishable from humans. Some argued that AI-driven systems actually made better decisions than humans because they were entirely logic-based, unaffected by emotions. An entire new genre of speculation thus evolved based on machines with human-like intelligence, sometimes beneficent, but more often, like the Terminators, cold-bloodedly malignant in their attitude toward humanity.
The problem with all that speculation is that science and technology never successfully bridged the gap between collecting and correlating massive amounts of information and basic human judgment. It remains theoretically possible, as computer speeds and statistical algorithms evolve, that we might one day build a machine that closely simulates human thought and decision-making, but that time is not now, and may never be.
That’s the fundamental error Facebook and other social media platforms made when they decided to rely on AI-driven algorithms instead of humans. When Google directs an ad to you based on keywords in your emails or Amazon recommends a book based on your observed interests, whether you find them annoying or helpful, they’re relatively benign. But social media algorithms can’t distinguish between truth and lies, honesty and dishonesty, altruism and harmful intent. They can’t anticipate what may be harmful to the psyches of adolescent girls or gullible individuals too lazy or too busy to fact-check what they read.
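Why such algorithms are blind to truth can be shown with a hypothetical ranking sketch. This is not Facebook's actual code; it is an invented toy illustrating the structural problem: a feed ranker that scores posts purely on engagement signals has no input through which veracity could matter, and provocative falsehoods tend to generate the most shares.

```python
def rank_feed(posts):
    """Order posts by a crude engagement score; truthfulness never enters."""
    def score(post):
        # Shares and comments weighted more heavily than likes, since they
        # spread content further -- weights here are arbitrary placeholders.
        return post["likes"] + 3 * post["shares"] + 2 * post["comments"]
    return sorted(posts, key=score, reverse=True)

# Invented example: a sober report versus an outrage-bait hoax.
feed = rank_feed([
    {"id": "accurate-report", "likes": 120, "shares": 10, "comments": 15},
    {"id": "outrage-hoax",    "likes": 90,  "shares": 60, "comments": 40},
])
top_post = feed[0]["id"]   # the hoax wins on raw engagement
```

Nothing in `score` could be tuned to prefer truth, because the algorithm never sees it; fixing that requires the human review the article argues for.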
The term Artificial Intelligence is a misnomer. AIs are not intelligent. They’re actually incredibly stupid, dumber than the dullest person you’ve ever met, nothing but logic and instructions based on whatever knowledge was programmed into them. And that’s the problem. They can’t comprehend the intangibles humans use to make decisions, and they know nothing about the dark side of human nature. Because AIs are deterministic and predictable to anyone who understands the algorithms they use, they are dangerous unless humans constantly review and modify their decisions.
Did Mark Zuckerberg set out to create a malevolent monster? No, but neither did Doctor Frankenstein or the scientists who developed the atom bomb. Zuckerberg’s crime was hubris, the sin that the Old Testament suggests got humans thrown out of paradise. Zuckerberg and his people are in way over their heads, like cowboys trying to control herds of stampeding buffalo. They built a monstrosity that requires constant surveillance and checking by human intelligence, but that costs money, and if we’ve learned anything about Facebook, it’s that profit drives all its major decisions.
Clearly, they’re not about to voluntarily replace their algorithms with expensive, labor-intensive human tasks. Can we count on our government, which can’t even agree on how to pay its debts, to force them to? I’m not holding my breath.