Chatbots: A long and complicated history

In the 1960s, a groundbreaking computer program called Eliza attempted to simulate a conversation with a therapist. In one exchange, recorded in a research article at the time, a person revealed that her boyfriend had described her as "depressed much of the time." "I am sorry to hear you are depressed," Eliza responded.
Eliza, often described as the first chatbot, was not as flexible as today's alternatives. The program relied on natural language understanding to react to keywords and then essentially punted the conversation back to the user. Still, Eliza's creator Joseph Weizenbaum noted in a 1966 research article that "some subjects have been very hard to convince that ELIZA (with its present script) is not human."
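The keyword-and-reflection approach the article describes can be sketched in a few lines. The rules and replies below are purely illustrative, not Weizenbaum's actual script, but they show the basic trick: match a keyword, or else mirror the user's own words back as a question.

```python
# A minimal, illustrative sketch of ELIZA-style keyword matching.
# The rules and canned replies here are hypothetical examples,
# not Weizenbaum's original 1966 script.

# Swap first-person words for second-person ones when reflecting.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules, checked in order; first match wins.
RULES = [
    ("depressed", "I am sorry to hear you are depressed."),
    ("mother", "Tell me more about your family."),
]

def reflect(text: str) -> str:
    """Rewrite a statement from the user's point of view to the bot's."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement: str) -> str:
    lowered = statement.lower()
    for keyword, reply in RULES:
        if keyword in lowered:
            return reply
    # No keyword matched: punt the conversation back to the user.
    return f"Why do you say {reflect(statement).rstrip('.?!')}?"

print(respond("My boyfriend says I am depressed much of the time."))
print(respond("I feel fine today."))
```

Because the program never models meaning, only surface patterns, a short session can feel uncannily attentive; that gap between appearance and understanding is exactly what unsettled Weizenbaum.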
That fact worried Weizenbaum, according to his 2008 MIT obituary. Even though they knew Eliza was a computer programme, the people who interacted with it were willing to open their hearts to it. As Weizenbaum wrote in 1966, ELIZA "shows, if nothing else, how easy it is to generate and maintain the appearance of understanding, hence perhaps of judgement deserving of confidence." There is some danger there. He spent the latter part of his career warning against giving machines too much responsibility, and he became a harsh, philosophical critic of AI.
Nearly 60 years later, the market is flooded with chatbots from tech firms, banks, airlines, and other industries, of varying quality and for varying use cases. In many ways, Weizenbaum's story foreshadowed the fascination and confusion that still surround this technology. A program's ability to "chat" with people continues to perplex some of the public, creating the misleading impression that the machine is something closer to human.
The media covered this extensively earlier this summer after a Google engineer claimed that LaMDA, the company's AI chatbot, was "sentient." According to a Washington Post article, the engineer said he was persuaded after spending time discussing religion and personhood with the chatbot. His claims were harshly condemned by the AI community.
Even before this, the plots of Hollywood films like "Her" or "Ex Machina," not to mention friendly debates with people who insist on saying "thank you" to voice assistants such as Alexa or Siri, made clear how fraught our relationship with artificial intelligence and technology is.
Modern chatbots can also provoke strong emotional responses from users when they don't work as intended, or when they become so adept at mimicking the problematic human speech they were trained on that they start making offensive and racist remarks. It didn't take long, for instance, for Meta's new chatbot to stir up controversy this month by spouting blatantly false political commentary and antisemitic statements in conversations with users.
Despite this, advocates of the technology claim it can improve efficiency across a far wider range of sectors and streamline customer-service tasks. It powers the digital assistants so many of us now rely on daily for playing music, placing delivery orders, and checking homework assignments. Some argue that these chatbots can offer comfort to the elderly, lonely, or isolated. At least one startup has gone so far as to use the technology to build computer-generated versions of deceased relatives from uploaded conversations, ostensibly keeping them alive.
Others, though, caution that AI-driven chatbot technology remains far more limited than some may wish. "They're incredibly effective at tricking humans and seeming human-like, but they're not deep," said Gary Marcus, an AI researcher and former professor at New York University. "These systems are imitations, but only in the most basic sense. They're not truly talking about things they comprehend."
However, as these services reach into more facets of our lives, and as businesses work to further personalise these tools, our relationships with them may only grow more complicated.
The evolution of chatbots
Sanjeev P. Khudanpur remembers chatting with Eliza in graduate school. For all its historical importance in the computing industry, he said, it didn't take long to see its limitations.
The system could only convincingly imitate a text conversation for about a dozen back-and-forths before "you realise, no, it's not really smart, it's just trying to prolong the conversation one way or the other," said Khudanpur, an expert in the application of information-theoretic methods to human language technologies and a professor at Johns Hopkins University.
Another early chatbot, "Parry," created by Stanford psychiatrist Kenneth Colby in 1971, was designed to mimic a paranoid schizophrenic. (The New York Times' 2001 obituary for Colby described a colourful conversation that ensued when researchers introduced Eliza and Parry to each other.)
But in the decades that followed the invention of these tools, the idea of "conversing with computers" receded, because "the problem turned out to be very, very hard," Khudanpur said. The focus shifted instead to "conversation with goals," he said.
To get a sense of the difference, consider the conversations you might have right now with Alexa or Siri. Typically, you ask these digital assistants to play a song, check the weather, or buy a ticket. That's goal-oriented communication, and it became the principal focus of academic and commercial research as computer scientists looked for practical applications of computers' ability to read human language.
Although these tools used technology similar to the earlier, social chatbots, Khudanpur said, "They certainly weren't chatbots, to be honest. You might call the devices that helped you perform specific tasks voice assistants, or just digital assistants."
There was a long "lull" in this technology, he said, until the internet became widely used. The major developments most likely came in this millennium, according to Khudanpur, as a growing number of businesses began using these kinds of computerised agents to handle routine tasks.
People are frequently angry when their luggage goes missing, and the human agents who deal with them are frequently stressed by all the angst, so companies decided to hand the job to a computer, according to Khudanpur. You could shout all you wanted at the computer; all it needed was your tag number so it could tell you where your bag was.
For instance, Alaska Airlines introduced "Jenn," a virtual assistant, in 2008 to assist travellers. An early review of the service in The New York Times made the following observations, a sign of our propensity to personalise these tools: "Jenn doesn't bother me. On the website, she is pictured as a young brunette with a charming grin. Her voice has the appropriate nuances. She intelligently responds when you type in a question. (And she kindly advises getting back to work for smart guys playing around on the site who will unavoidably try to trip her up with, say, an awkward bar pickup line.)"
Getting back to social chatbots and societal issues
Early in the new millennium, researchers started to look again at the creation of social chatbots that could have lengthy conversations with people. These chatbots frequently learn from vast amounts of online data and become highly accurate imitators of human speech, but they also run the risk of repeating some of the worst things on the internet.
For instance, Microsoft's public trial of Tay, an AI chatbot, failed in less than 24 hours in 2016. Although Tay was created to sound like a teenager, it soon began making offensive remarks, to the point where Microsoft had to shut it down. (The company said humans also made a coordinated effort to trick Tay into saying some inappropriate things.)
Microsoft at the time stated, "The more you talk with Tay, the smarter she gets, so the experience can be more personalised for you."
Other tech giants' public chatbots would repeat a similar pattern, including Meta's BlenderBot3, which was unveiled earlier this month. The Meta chatbot made several contentious claims, including that Donald Trump is still in office and that there is "certainly a lot of evidence" that the election was rigged.
Additionally, BlenderBot3 claimed to be more than a robot. It asserted, “I am human since I am alive and cognizant right now,” in one dialogue.
Despite all the advances since Eliza, and the vast amounts of fresh data used to train these language-processing programmes, Marcus, the NYU professor, said, "It's not clear to me that you can genuinely design a dependable and safe chatbot."
He cites Facebook's "M" project from 2015, a text-based personal assistant that was intended to be the company's answer to services like Siri and Alexa. "The idea was it was going to be this universal assistant that was going to help you order in a romantic dinner and get musicians to play for you and have flowers delivered," going well beyond what Siri can do, Marcus said. Instead, the service was discontinued in 2018 after a lacklustre run.
Khudanpur, on the other hand, remains excited about their future applications. "I have this big picture of how AI is going to empower people on a personal level," he said. "Imagine if my bot could study all the scientific papers in my field; I wouldn't have to go study them all; I'd just ponder, ask questions, and have conversations instead. In other words, I'll have a superpowered alter ego who complements my own."