Disclosure: Except for
Microsoft spellcheck, the following commentary was written with 100% human
input.
It is difficult to keep up with the news concerning the
warp-speed rollout of generative artificial intelligence (AI). The term refers to the simulation of human intelligence by machines programmed to think like humans. By the time this commentary is published, my
impressions will probably be as stale as last week’s bread.
In early March, technology stocks involved in new AI
research caught the eye of investors and underwent a significant rally. This
was followed by one headline report after another proclaiming AI either as a miracle discovery that will advance human thought or as the harbinger of a dark, dystopian future.
Recently, one
of the newest machine-learning chatbot programs was asked to explain how
humankind could safely utilize AI. The program responded: “It is not
necessarily desirable or ethical to slow down the progress of AI as a field, as
it has the potential to bring about many positive advancements for society.”
This upbeat, machine-generated answer did little to calm the detractors of AI. The
response sounds like the prologue of a science fiction movie where events could
go terribly wrong.
Inside the insular world of AI experts, one group, which
includes Elon Musk, is certain that unmanaged AI could kill us all. These individuals are calling for a six-month
pause on creating new machine-learning projects. The fear is that without
proper controls on research, AI may soon begin to disobey its programmers'
instructions and operate independently with malicious intent.
On the other hand, many investors look forward to AI’s
unimpeded introduction as they try to stay ahead of the competition. They argue
that the upside is greater than any potential threat. Indeed, there are
immediate applications in medicine for precise diagnoses and surgical
procedures, AI-generated research and writing, and advanced learning methods in
education. Many other fields will be enhanced by the accumulated knowledge these machines generate.
For unschooled observers like me, it was difficult to
understand how the smartest thinkers in the world could take such opposing positions.
The answer, I learned, is that no one knows how AI works. The complexity of AI
models has been doubling every few months. Remarkably, the process by which
learning machines store, distill and retrieve knowledge is unknown. This
element of mystery is exciting for some insiders and a troubling risk for
others.
In my lifetime, similar questions of technology outdistancing the human capacity to absorb it have arisen on at least two occasions.
In the 1950s, the issue was nuclear energy. There were efforts to pause and
draw ethical boundaries to prevent nuclear war while expanding positive
applications of nuclear power.
The second, more recent catastrophic risk was gene splicing and genetic
engineering. Again, the international community got together and developed
guidelines to permit positive applications while attempting to prohibit the
bioengineering of dangerous pathogens. The unknown origins of COVID-19 call into question whether these efforts were successful.
Apart from the question of whether AI will be our salvation or our destruction lies the more benign inquiry of how AI will alter human learning.
Philosophers and social scientists have weighed in on this topic. Historians
point out that with the invention of the printing press, curious thinkers
throughout the known world could finally communicate discoveries and replicate
findings. They turned hypotheses into facts through the scientific method. The
Age of Enlightenment challenged the medieval interpretation of a world based on
religious faith and gave us knowledge that was built on proven facts.
With AI, the process works in reverse. The most
sophisticated AI models cost more than $1 billion each to become productive,
utilizing thousands of computers. Instead
of scientific certainties, we end up with new knowledge with no discernible
foundation. AI advances human insight without providing us with any understanding
of how the knowledge was uncovered. Advocates for pushing AI forward argue that
this lack of comprehension is not a deal breaker because new discoveries
generated by AI will improve society when coupled with human reason.
In addition to the dystopian
concern that scientists could lose control, another negative factor has
surfaced. ChatGPT, a form of generative AI, is the
most common form of machine learning in general circulation. It is a tool that lets users enter prompts and receive detailed answers generated from a model trained on much of the world's internet data.
Unfortunately, the technology occasionally makes up facts that sound real, producing a very believable but inaccurate response. Moreover, malicious actors are well-versed in injecting false information into the internet, making it difficult for AI to discern fact from fiction. To address these problems, researchers are already working on a model capable of questioning the veracity of a ChatGPT answer.
The “high priests” of technological
progress have mixed feelings about the effects of AI on society. It is important
for concerned citizens and elected officials to ask the difficult questions about
the deployment of AI. Leaving the future of AI to the scientific community and to
venture capitalists out to make a fortune, with no respect for controls, would
be a dangerous miscalculation.
One reasonable solution would be to continue research in AI
while withholding rollouts to the public until more is understood about how AI works
and processes data. The world may be dysfunctional, but we must not make the
mistake of surrendering our future to thinking machines in an effort to improve
society.