
Online travel giant uses AI chatbot as travel adviser

<p dir="ltr">Online travel giant Expedia has added the controversial artificial intelligence chatbot ChatGPT to its app in place of a travel adviser.</p> <p dir="ltr">Those planning a trip will be able to chat to the bot through the Expedia app.</p> <p dir="ltr">Although it won’t book flights or accommodation like a person can, it can be helpful in answering various travel-related questions.</p> <blockquote class="twitter-tweet"> <p dir="ltr" lang="en">Travel planning just got easier in the <a href="https://twitter.com/Expedia?ref_src=twsrc%5Etfw">@Expedia</a> app, thanks to the iOS beta launch of a new experience powered by <a href="https://twitter.com/hashtag/ChatGPT?src=hash&amp;ref_src=twsrc%5Etfw">#ChatGPT</a>. See how Expedia members can start an open-ended conversation to get inspired for their next trip: <a href="https://t.co/qpMiaYxi9d">https://t.co/qpMiaYxi9d</a> <a href="https://t.co/ddDzUgCigc">pic.twitter.com/ddDzUgCigc</a></p> <p>— Expedia Group (@ExpediaGroup) <a href="https://twitter.com/ExpediaGroup/status/1643240991342592000?ref_src=twsrc%5Etfw">April 4, 2023</a></p></blockquote> <p dir="ltr">These include things such as the weather, public transport, the cheapest time to travel and what you should pack.</p> <p dir="ltr">It is advanced software and can provide detailed options and explanations for holidaymakers.</p> <p dir="ltr">To give an example, <a href="http://news.com.au">news.com.au</a> asked “what to pack to visit Auckland, New Zealand” and the chatbot suggested eight things to pack and why, even advising comfortable shoes for exploring as “Auckland is a walkable city”. 
</p> <p dir="ltr">“Remember to pack light and only bring what you need to avoid excess baggage fees and make your trip more comfortable,” the bot said.</p> <p dir="ltr">When asked how best to see the Great Barrier Reef, ChatGPT provided four options to suit different preferences, depending, for example, on whether you’re happy to get wet and what your budget looks like.</p> <p dir="ltr">“It’s important to choose a reputable tour operator that follows sustainable tourism practices to help protect the reef,” it continued.</p> <p dir="ltr">OpenAI launched ChatGPT in November 2022, and it has received a lot of praise as well as serious criticism, mainly over concerns about safety and accuracy.</p> <p dir="ltr"><em>Image credits: Getty/Twitter</em></p>

International Travel


Chatbots set their sights on writing romance

<p>Although most would expect artificial intelligence to remain in the realm of science fiction, authors face mounting fears that they may soon have new competition in publishing, particularly as the sales of romantic fiction continue to skyrocket. </p> <p>And for bestselling author Julia Quinn, best known for writing the <em>Bridgerton </em>novel series, there’s hope that this is “something that an AI bot can’t quite do.” </p> <p>For one, human inspiration is hard to replicate. Julia’s hit series - which went on to have over 20 million books printed in the United States alone, and inspired one of Netflix’s most-watched shows - came from one specific point: Julia’s idea of a particular duke. </p> <p>“Definitely the character of Simon came first,” Julia told <em>BBC</em> reporter Jill Martin Wrenn. Simon, in the <em>Bridgerton </em>series, is the Duke of Hastings, a “tortured character” with a troubled past.</p> <p>As Julia explained, she realised that Simon needed “to fall in love with somebody who comes from the exact opposite background” in a tale as old as time. </p> <p>And so, Julia came up with the Bridgerton family, who she described as being “the best family ever that you could imagine in that time period”. Meanwhile, Simon is estranged from his own father. </p> <p>Characterisation and unique relationship dynamics - platonic and otherwise - like those between Julia’s beloved characters are some of the key foundations behind any successful story, but particularly in the romance genre, where relationships are the entire driving force. </p> <p>It has long been suggested that the genre can become ‘formulaic’ if not executed well, and it’s this concern that prompts the idea that advancing artificial intelligence may be capable of generating a novel of its own. </p> <p>ChatGPT is the chief source of concern. 
The advanced language processing technology was developed by OpenAI and was trained on internet databases (such as Wikipedia), books, magazines and the like. The <em>BBC</em> reported that over 300 billion words were fed into it. </p> <p>Because of this massive store of source material, the system can generate its own pieces of writing, with the best of the bunch giving the impression that they were put together by a human mind. Across both fiction and non-fiction, it’s always learning. </p> <p>However, Julia isn’t too worried about her future in fiction just yet. Recalling how she’d checked out some AI romance a while ago, and how she’d found it “terrible”, she shared her belief at the time that there “could never be a good one.” </p> <p>But then the likes of ChatGPT entered the equation, and Julia admitted that “it makes me kind of queasy.” </p> <p>Still, she remains firm in her belief that human art will triumph. As she explained, “so much in fiction is about the writer’s voice, and I’d like to think that’s something that an AI bot can’t quite do.”</p> <p>And as for why romantic fiction itself remains so popular - and perhaps even why it draws the attention of those hoping to profit from AI-generated work - she said that it’s about happy endings, noting that “there is something comforting and validating in a type of literature that values happiness as a worthy goal.”</p> <p><em>Images: @bridgertonnetflix / Instagram</em></p>

Books


Is Google’s AI chatbot LaMDA sentient? Computer says no

<blockquote class="wp-block-quote is-style-default"> <p>“Actions such as his could come only from a robot, or from a very honorable and decent human being. But you see, you can’t differentiate between a robot and the very best of humans.”</p> <p><cite>– Isaac Asimov, <em>I, Robot</em></cite></p></blockquote> <p>Science fiction writer Isaac Asimov was among the first to consider a future in which humanity creates artificial intelligence that becomes sentient. Following Asimov’s <em>I, Robot</em>, others have imagined the challenges and dangers such a future might hold.</p> <p>Should we be afraid of sentient robots taking over the planet? Are scientists inadvertently creating our own demise? How would society look if we were to create a sentient artificial intelligence?</p> <p>It’s these questions which – often charged by our own emotions and feelings – drive the buzz around claims of sentience in machines. An example of this emerged this week when Google employee Blake Lemoine claimed that the tech giant’s chatbot LaMDA had exhibited sentience.</p> <p>LaMDA, or “language model for dialogue applications”, is not Lemoine’s creation, but the work of <a href="https://arxiv.org/pdf/2201.08239.pdf" target="_blank" rel="noreferrer noopener">60 other researchers at Google</a>. Lemoine has been trying to teach the chatbot transcendental meditation.</p> <p>Lemoine shared on his Medium profile the <a href="https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917" target="_blank" rel="noreferrer noopener">text of an interview</a> he and a colleague conducted with LaMDA. Lemoine claims that the chatbot’s responses indicate sentience comparable to that of a seven or eight-year-old child.</p> <p>Later, on June 14, Lemoine said on <a href="https://twitter.com/cajundiscordian/status/1536503474308907010" target="_blank" rel="noreferrer noopener">Twitter</a>: “People keep asking me to back up the reason I think LaMDA is sentient. 
There is no scientific framework in which to make those determinations and Google wouldn’t let us build one. My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.”</p> <p>Since sharing the interview with LaMDA, Lemoine has been placed on “paid administrative leave”.</p> <p>What are we to make of the claim? We should consider the following: what is sentience? How can we test for sentience?</p> <p><em>Cosmos </em>spoke to experts in artificial intelligence research to answer these and other questions in light of the claims about LaMDA.</p> <p>Toby Walsh is a professor of artificial intelligence at the University of New South Wales (UNSW). Walsh also penned an <a href="https://www.theguardian.com/commentisfree/2022/jun/14/labelling-googles-lamda-chatbot-as-sentient-is-fanciful-but-its-very-human-to-be-taken-in-by-machines" target="_blank" rel="noreferrer noopener">article for the <em>Guardian</em></a> on Lemoine’s claims, writing: “Before you get too worried, Lemoine’s claims of sentience for LaMDA are, in my view, entirely fanciful. While Lemoine no doubt genuinely believes his claims, LaMDA is likely to be as sentient as a traffic light.”</p> <p>Walsh is also the author of a book, <em>Machines Behaving Badly: The Morality of AI</em>, published this month, in which these themes are investigated.</p> <p>“We don’t have a very good scientific definition of sentience,” Walsh tells <em>Cosmos</em>. “It’s often thought of as equivalent to consciousness, although it’s probably worth distinguishing between the two.”</p> <p>Sentience is about experiencing feelings or emotions, Walsh explains, whereas consciousness is being aware of your own thoughts and those of others. “One reason why most experts will have quickly refuted the idea that LaMDA is sentient, is that the only sentient things that we are aware of currently are living,” he says. “That seems to be pretty much a precondition to be a sentient being – to be alive. 
And computers are clearly not alive.”</p> <p>Hussein Abbass, a professor in the School of Engineering and Information Technology at UNSW Canberra, agrees, but also highlights the lack of rigorous assessments of sentience. “Unfortunately, we do not have any satisfactory tests in the literature for sentience,” he says.</p> <p class="spai-bg-prepared">“For example, if I ask a computer ‘do you feel pain’, and the answer is yes, does it mean it feels pain? Even if I grill it with deeper questions about pain, its ability to reason about pain is different from concluding that it feels pain. We may all agree that a newborn feels pain despite the fact that the newborn can’t argue the meaning of pain,” Abbass says. “The display of emotion is different from the existence of emotion.”</p> <p class="spai-bg-prepared">Walsh reasons that we can observe something responding to stimuli as evidence of sentience, but we should hold computers to higher standards. “The only sentience I’m certain of is my own because I experience it,” he says. “Because you look like you’re made of the same stuff as me, and you’re responding in an appropriate way, the simplest explanation is to assume that you must be sentient like I feel I am sentient.” For a computer, however, “that assumption is not the simplest explanation. The simplest explanation is that it’s a clever mimic.”</p> <p class="spai-bg-prepared">“A conversation has two sides to it,” adds Walsh. “If you play with these tools, you quickly learn that it’s quite critical how you interact with them, and the questions you prompt them with will change the quality of the output. I think it reflects, in many respects, the intelligence of the person asking the questions and pushing the conversation along in helpful ways and, perhaps, using points that lead the conversation. 
That really reflects the intelligence of the person asking the questions.”</p> <p class="spai-bg-prepared">“Care needs to be taken to not project our own emotions and aspirations onto the machine, when we are talking about artificial intelligence in general,” says Dr Marc Cheong, digital ethics lecturer at the University of Melbourne. “AI learns from past data that we humans create – and the societal and historical contexts in which we live are reflected in the data we use to train the AI. Similarly for the claims of sentience, we shouldn’t start anthropomorphising AI without realising that its behaviour is merely finding patterns in data we feed into it.”</p> <p class="spai-bg-prepared">“We’re very forgiving, right? That’s a really human trait,” says Walsh. “Our superpower is not really our intelligence. Our superpower is our ability to work together to form society to interact with each other. If we mishear or a person says something wrong, we fill the gaps in. That’s helpful for us to work together and cooperate with other human beings. But equally, it tends to mislead us. We tend to be quite gullible in ascribing intelligence and other traits like sentience and consciousness to things that are perhaps inanimate.”</p> <p class="spai-bg-prepared">Walsh also explains that this isn’t the first time this has happened.</p> <p class="spai-bg-prepared">The first chatbot, Eliza, created in the 1960s, was “way less sophisticated”, Walsh says. “Eliza would take the sentence that the person said and turn it into a question. And yet there was quite a hype and buzz when Eliza first came out. The very first chatbot obviously fooled some people into thinking it was human. So it’s perhaps not so surprising that a much more sophisticated chatbot like this does the same again.”</p> <p class="spai-bg-prepared">In 1997, the supercomputer Deep Blue beat chess grandmaster Garry Kasparov. 
“I could feel – I could smell – a new kind of intelligence across the table,” <a class="spai-bg-prepared" href="https://www.time.com/time/magazine/article/0,9171,984305,00.html#ixzz1DyffA0Dl" target="_blank" rel="noreferrer noopener">Kasparov wrote in TIME</a>.</p> <p class="spai-bg-prepared">But Walsh explains that Deep Blue’s winning move wasn’t a stroke of genius produced by the machine’s creativity or sentience, but a bug in its code – as the timer was running out, the computer chose a move at random. “It quite spooked Kasparov and possibly actually contributed to his eventual narrow loss,” says Walsh.</p> <p class="spai-bg-prepared">So, how far away are we really from creating sentient machines? That’s difficult to say, but experts believe the short answer is “very far”.</p> <p class="spai-bg-prepared">“Will we ever create machines that are sentient?” asks Walsh. “We don’t know if that’s something that’s limited to biology. Computers are very good at simulating the weather and electron orbits. We could get them to simulate the biochemistry of a sentient being. But whether they then are sentient – that’s an interesting, technical, philosophical question that we don’t really know the answer to.</p> <p class="spai-bg-prepared">“We should probably entertain the idea that there’s nothing that we know of that would preclude it. There are no laws of physics that would be violated if machines were to become sentient. It’s plausible that we are just machines of some form and that we can build sentience in a computer. It just seems very unlikely that computers have any sentience today.”</p> <p class="spai-bg-prepared">“If we can’t objectively define what ‘sentient’ is, we can’t estimate how long it will take to create it,” explains Abbass. 
“In my expert opinion as an AI scientist for 30+ years, I would say that today’s AI-enabled machines are nowhere close to even the edge of being sentient.”</p> <p class="spai-bg-prepared">So, what then are we to make of claims of sentience?</p> <p class="spai-bg-prepared">“I can understand why this will be a very big thing because we give rights to almost anything that’s sentient. And we don’t like other things to suffer,” says Walsh.</p> <p class="spai-bg-prepared">“If machines never become sentient then we never have to care about them. I can take my robots apart diode by diode, and no one cares,” Walsh explains. “I don’t have to seek ethics approval for turning them off or anything like that. Whereas if they do become sentient, we <em class="spai-bg-prepared">will </em>have to worry about these things. And we have to ask questions like, are we allowed to turn them off? Is that akin to killing them? Should we get them to do the dull, dangerous, difficult things that are too dull, dangerous or difficult for humans to do? Equally, I do worry that if they don’t become sentient, they will always be very limited in what they can do.”</p> <p class="spai-bg-prepared">“I get worried from statements made about the technology that exaggerates the truth,” Abbass adds. “It undermines the intelligence of the public, it plays with people’s emotions, and it works against the objectivity in science. From time to time I see statements like Lemoine’s claims. This isn’t bad, because it gets us to debate these difficult concepts, which helps us advance the science. But it does not mean that the claims are adequate for the current state-of-the-art in AI. Do we have any sentient machine that I am aware of in the public domain? 
While we have technologies to imitate a sentient individual, we do not have the science yet to create a true sentient machine.”</p> <div id="contributors"> <p><em><a href="https://cosmosmagazine.com/technology/google-ai-lamda-sentient/" target="_blank" rel="noopener">This article</a> was originally published on <a href="https://cosmosmagazine.com" target="_blank" rel="noopener">Cosmos Magazine</a> and was written by <a href="https://cosmosmagazine.com/contributor/evrim-yazgin" target="_blank" rel="noopener">Evrim Yazgin</a>. Evrim Yazgin has a Bachelor of Science majoring in mathematical physics and a Master of Science in physics, both from the University of Melbourne.</em></p> <p><em>Image: Getty Images</em></p> </div>

Technology