Artificial Intelligence – The Road to ChatGPT

What do you think was the first example of artificial intelligence? Was it the chatbot ELIZA in the 1960s? Was it when IBM’s supercomputer Deep Blue defeated chess grandmaster Garry Kasparov in 1997? Perhaps it was when the program Watson beat human champions on Jeopardy! in 2011, or when Boston Dynamics unveiled its dexterous dog-like robot Spot in 2016. Just this year, you might have engaged with the strikingly lucid writings of ChatGPT or the elaborate artistic creations of Midjourney. Or maybe you don’t buy the hype and haven’t seen anything from a machine that compares to the versatility of human intelligence.

The answer depends on how you define artificial intelligence in the first place. That’s not easy, especially when people can’t quite agree on what regular intelligence means either! A good starting point is Alan Turing’s 1950 article “Computing Machinery and Intelligence,” where he described an imitation game – a contest in which a human observer interacts via writing with two players, one human and one machine, each trying to convince the observer that they are the actual human. If the observer could not reliably distinguish between the human and the machine, then the machine had successfully given the appearance of intelligence. Since we can’t read other people’s minds to see their innate intelligence for ourselves, a computer that succeeds at the imitation game demonstrates the same kinds of capabilities we use to decide whether a regular person is intelligent. At that point, Turing suggested, we might as well speak of machines thinking in the same way we conclude that a human thinks.

ChatGPT is a prominent example of what computer scientists call a Large Language Model (LLM), and it is quickly making Turing’s test obsolete as a way of imagining intelligent machines. Drill down deep enough and LLMs are fantastically complicated math equations. They are trained on as much human-generated text as the model makers can find in order to learn the statistical relationships between words. An LLM might learn that the words “cute” and “puppy” often occur near each other and replicate that proximity in its own responses. A further innovation called “self-attention” allows LLMs to keep track of conceptually linked words even when they are far apart in a single document. A self-attentive chatbot might learn that Edgar Allan Poe and Stephen King are both American horror fiction authors despite living more than a century apart, or that George Washington the general and George Washington the president were the same person. Skeptics might describe LLMs as simply super-powered autocomplete engines. (And this is true! The central objective of ChatGPT is to repeatedly predict the next word that makes sense given the preceding context.) The optimist’s argument is that LLMs actually “learn” the concepts they were trained on and are capable of producing new or surprising connections. Skeptic or optimist, everyone agrees that generative AI is increasingly sophisticated and produces text that is difficult to distinguish from the writing of a regular human.
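To make the autocomplete intuition concrete, here is a minimal sketch in Python – my own toy illustration, not how ChatGPT is actually built. It counts which word tends to follow which in a tiny corpus, then “predicts” the most likely next word from those counts:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of documents, not one sentence.
corpus = "a cute puppy and a cute puppy played with a cute kitten".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most common word observed after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("cute"))  # 'puppy' – it follows 'cute' twice, vs. 'kitten' once
```

Real LLMs replace these raw counts with a neural network trained on enormous text collections, and self-attention lets that network weigh relevant context from across an entire passage rather than just the previous word.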

Artificial intelligence is more a marketing term than an actual description. We’ve been using machine learning techniques for decades to build things like spam filters for email, optical character recognition to scan and read text, recommendation algorithms on social media platforms, and industrial robots. We just don’t call those limited expert systems or domain-bounded applications artificial intelligence anymore. Explore the edges of a generative AI tool like ChatGPT and you will quickly find that it is not generally intelligent in the way a human is – nor can it reasonably compete with more specialized computer programs at specific tasks. But these limitations are the subject of intense research, and rapid progress is already being made. Companies have announced plans to release truly “multimodal” generative models that can take inputs not just as text but as audio or video, and even offer fluent outputs in multiple media types. It is almost beside the point whether adding more human-like capabilities will eventually produce artificial intelligence the way we’ve imagined it in science fiction stories. Each incremental advance produces a tool that can conversationally explore and explain the vast training data available to it.

Machines that can give the semblance of creativity and consistent personalities are more than just novelties. While it is undoubtedly delightful to interact with a large language model that can do whimsical things like explain scientific topics as if it were a pirate or brainstorm ideas for a botanically themed birthday party, ChatGPT is poised to make a significant impact on how we produce and interact with text. So much of our lives revolves around the production, communication, and understanding of information. Understanding how generative AI works is the first step toward getting the most out of a rapidly improving tool – and recognizing how a tool like ChatGPT differs from the way a human might act or think is essential to ensuring that it is developed and used responsibly.


Referenced:

Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433

