Alan Zendell, October 5, 2021
My first career job was as an aerospace engineer at Grumman Aircraft (now Northrop Grumman) working on America’s moon missions. Fresh out of school with degrees in physics and engineering, very full of myself, and certain that my prestigious education had prepared me for the real world, I learned my first lesson in humility quickly. My new boss told me to leave everything I’d learned in theoretical science and engineering at the front door. Practical engineering wasn’t about theories and computing precise solutions; it was about approximations. It wasn’t about error-free solutions; it was about constant vigilance to keep the errors that inevitably creep into any project manageable.
That’s a pretty good description of how America grew to be a world power. In the late nineteenth century, science and technology were given free rein to grow and explore new possibilities. Engineers and business managers learned how to apply each new discovery to real life, but that process never was and never will be clean or free of problems. We mass produced cigarettes only to discover that they were killing us. We invented aerosol sprays without realizing they would destroy the ozone layer that protects us from deadly radiation. We built ever more powerful and reliable machines and were told that each new innovation heralded a new age of growth and prosperity, but each came with its own problems.
In most cases, engineering innovation and regulation were able to keep up with technological advances before they produced catastrophes. Science split the atom, releasing unfathomable amounts of energy, first in the form of weapons capable of destroying our planet, then as fuel for power plants. Two decades of living with nuclear weapons convinced our leaders that without strict controls in place, the next major war would likely destroy civilization. Accidents at nuclear power plants like Three Mile Island and Chernobyl, combined with the need to safely dispose of nuclear waste, put a quick end to the promise of unlimited, free power. Three different airliners (the DC-8, DC-10, and 737 MAX) had to be pulled from production and re-engineered when they started crashing and killing people.
This is how technological progress is supposed to work: invent a concept, develop it into something people need or want enough to pay for it, test it, verify that it’s safe, and bring it to market. When glitches occur (and they always do), use our vaunted American ingenuity to fix them before they kill too many people. Cars, planes, and trains get safer, food gets more nutritious, and with luck and perseverance, we may eventually figure out how to live without poisoning our planet. But information technology is different.
The development of computers and the Internet has enormously impacted our lives, mostly for the better. But IT has a built-in fatal flaw that’s more serious and far more dangerous than, say, faulty brakes on your new car. With IT, the most important of those steps, verifying that a new development is safe, is often impossible. Every computer-based system is hackable, and with the proper skills, any determined group bent on mayhem or criminal activity will eventually find a way around security measures.
Futurists and science fiction writers have warned for decades about the impossibility of verifying information available to the public, the danger of unregulated access to deliberately misleading or false information, and the risk to young people and others from predatory entities trying to manipulate them. Many people who understood these risks viewed Facebook, from its inception, as a dangerous abomination with the potential to destroy us as surely as adversaries with nuclear weapons.
We saw how easily people were seduced by lies and misinformation during the 2016 election, but until now, such allegations have been largely anecdotal and easy to ignore for a public that is emotionally engaged with the product. The revelations of former Facebook product manager Frances Haugen published by the Wall Street Journal and New York Times suggest Facebook is far more dangerous than we thought.
We always knew Facebook’s database of personal information on its more than two billion users was a prime target for scams, criminal activity, and marketers with no respect for individual privacy. We now appear to have clear evidence that data security and the vetting of individual users are only the tip of the iceberg. Facebook’s incentive structure and its need for exponential growth have repeatedly led it to prioritize profitability over the safety of its individual users and the stability of their societies. The unfortunate reality is that hate, fear, bigotry, greed, ignorance, and intellectual laziness make people vulnerable to manipulation by anyone with the talent and intent to exploit them.
We don’t need Facebook in its massive worldwide form any more than we need monopolistic cartels that operate independently of government regulation. It’s too late to shut Facebook down completely, even if its senior executives are found criminally negligent, but it’s not too late to force it to operate in a safe, regulated environment.