There are a variety of text messaging applications that make communicating with friends easy. These applications typically require users to sign up for a username as a form of identification. Friends add each other by sharing usernames and can then initiate and respond to chats. These platforms are convenient because a message can be sent to virtually anyone, as long as they have already signed up for that particular platform. In other words, people can message others they have never met in person. So, how does someone know that the person they are messaging is really a person and not a machine?
Language is special because people learn it early in life. Even infants as young as six months old can discriminate speech sounds. Moreover, infants use word boundaries to learn about grammar and speech. If even infants can pick up on these linguistic boundaries, can we also teach a machine to talk like a human?
A specific, well-known test examines this exact question. The Turing Test, proposed by Alan Turing in 1950, evaluates the “human likeness”, or natural language, of a computer program. Human judges, blind to the nature of their conversational partner, converse with both a machine and a person through typed responses in a terminal. By the commonly cited benchmark, a machine passes the test if it tricks at least 30% of human judges into thinking it is a human during a five-minute conversation. In other words, if the evaluators cannot reliably distinguish the machine from a person, the machine is deemed able to converse like a human.
Cleverbot is an example of a machine that was created to converse like a human. Cleverbot is an online chatterbot that forms responses based on keywords. Specifically, it draws on its past conversations with people to respond when asked a similar question or given a similar comment.
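The general idea of responding from remembered conversations can be sketched in a few lines of code. The toy bot below is only a minimal illustration of keyword-based retrieval, not Cleverbot’s actual algorithm: it stores past message–reply pairs and answers a new message with the reply whose stored prompt shares the most words with it.

```python
def tokenize(text):
    """Lowercase a message and split it into a set of word tokens."""
    return set(text.lower().split())

class KeywordBot:
    """A toy chatterbot that reuses replies from remembered exchanges."""

    def __init__(self):
        # Each entry pairs the keywords of a past message with the
        # reply a human once gave to it.
        self.memory = []

    def learn(self, message, reply):
        """Record a past exchange for later reuse."""
        self.memory.append((tokenize(message), reply))

    def respond(self, message):
        """Return the stored reply whose prompt shares the most
        keywords with the incoming message."""
        words = tokenize(message)
        best_reply, best_overlap = "I don't know what to say.", 0
        for keywords, reply in self.memory:
            overlap = len(words & keywords)
            if overlap > best_overlap:
                best_reply, best_overlap = reply, overlap
        return best_reply

bot = KeywordBot()
bot.learn("hello there", "Hi! How are you?")
bot.learn("what is your favorite color", "I like blue.")
print(bot.respond("hello friend"))                 # → Hi! How are you?
print(bot.respond("tell me your favorite color"))  # → I like blue.
```

A real system like Cleverbot works at a vastly larger scale and with more sophisticated matching, but the sketch shows why such a bot can sound eerily human: every reply it gives was originally typed by a person.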
While Cleverbot did not pass the Turing Test, a computer program named Eugene Goostman did in 2014, in what was hailed as a breakthrough in human-like computer language. Eugene Goostman, a program that simulates a 13-year-old Ukrainian boy, passed the Turing Test by surpassing the 30% benchmark: it convinced 33% of evaluators that they were conversing with a human when it was, in fact, Eugene. Thus, Eugene successfully captured some of the complexities of human conversation, and its responses were, in essence, indistinguishable from a person’s.
Of course, it is quite intriguing to simulate a human-like conversation with a machine that was programmed to respond according to a script. However, one may ask whether such a machine can truly think independently, even though it can respond like a human to virtually any question or comment. Nonetheless, these kinds of designs may give us insight into the mechanisms underlying “natural language.”
Edited by Lana Ruck and Riddhi Sood