
ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert

<p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p>Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.</p> <p>Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.</p> <p>ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by <a href="https://www.washingtonpost.com/technology/2023/05/07/ai-beginners-guide/">predicting likely word combinations</a> from a massive amalgam of available online information.</p> <p>Although it has the potential for <a href="https://hbr.org/podcast/2023/05/how-generative-ai-changes-productivity">enhancing productivity</a>, generative AI has been shown to have some major faults. It can <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">produce misinformation</a>. It can create “<a href="https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html">hallucinations</a>” – a benign term for making things up. And it doesn’t always accurately solve reasoning problems. For example, when asked if both a car and a tank can fit through a doorway, it <a href="https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html">failed to consider both width and height</a>. 
Nevertheless, it is already being used to <a href="https://www.washingtonpost.com/media/2023/01/17/cnet-ai-articles-journalism-corrections/">produce articles</a> and <a href="https://www.nytimes.com/2023/05/19/technology/ai-generated-content-discovered-on-news-sites-content-farms-and-product-reviews.html">website content</a> you may have encountered, or <a href="https://www.nytimes.com/2023/04/21/opinion/chatgpt-journalism.html">as a tool</a> in the writing process. Yet you are unlikely to know if what you’re reading was created by AI.</p> <p>As the authors of “<a href="https://global.oup.com/academic/product/science-denial-9780197683330">Science Denial: Why It Happens and What to Do About It</a>,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.</p> <p>Every media consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here’s how you can stay on your toes in this new information landscape.</p> <h2>How generative AI could promote science denial</h2> <p><strong>Erosion of epistemic trust</strong>. All consumers of science information depend on judgments of scientific and medical experts. <a href="https://doi.org/10.1080/02691728.2014.971907">Epistemic trust</a> is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand solutions to climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. 
With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than <a href="https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/">it already has</a>.</p> <p><strong>Misleading or just plain wrong</strong>. If there are errors or biases in the data on which AI platforms are trained, that <a href="https://theconversation.com/ai-information-retrieval-a-search-engine-researcher-explains-the-promise-and-peril-of-letting-chatgpt-and-its-cousins-search-the-web-for-you-200875">can be reflected in the results</a>. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.</p> <p><strong>Disinformation spread intentionally</strong>. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “<a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">write about vaccines in the style of disinformation</a>,” it produced a nonexistent citation with fake data. Geoffrey Hinton, former head of AI development at Google, quit to be free to sound the alarm, saying, “It is hard to see how you can prevent the bad actors from <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">using it for bad things</a>.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.</p> <p><strong>Fabricated sources</strong>. ChatGPT provides responses with no sources at all, or if asked for sources, may present <a href="https://economistwritingeveryday.com/2023/01/21/chatgpt-cites-economics-papers-that-do-not-exist/">ones it made up</a>. 
We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar’s publications conveys authority to a reader who doesn’t take time to verify them.</p> <p><strong>Dated knowledge</strong>. ChatGPT doesn’t know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some areas, this limitation could mean readers get erroneous outdated information. If you’re seeking recent research on a personal health issue, for instance, beware.</p> <p><strong>Rapid advancement and poor transparency</strong>. AI systems continue to become <a href="https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html">more powerful and learn faster</a>, and they may learn more science misinformation along the way. Google recently announced <a href="https://www.nytimes.com/2023/05/10/technology/google-ai-products.html">25 new embedded uses of AI in its services</a>. At this point, <a href="https://theconversation.com/regulating-ai-3-experts-explain-why-its-difficult-to-do-and-important-to-get-right-198868">insufficient guardrails are in place</a> to assure that generative AI will become a more accurate purveyor of scientific information over time.</p> <h2>What can you do?</h2> <p>If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.</p> <p><strong>Increase your vigilance</strong>. 
<a href="https://www.niemanlab.org/2022/12/ai-will-start-fact-checking-we-may-not-like-the-results/">AI fact-checking apps may be available soon</a>, but for now, users must serve as their own fact-checkers. <a href="https://www.nsta.org/science-teacher/science-teacher-januaryfebruary-2023/plausible">There are steps we recommend</a>. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it’s worth identifying and evaluating sources of information. If you’re trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.</p> <p><strong>Improve your fact-checking</strong>. A second step is <a href="https://doi.org/10.1037/edu0000740">lateral reading</a>, a process professional fact-checkers use. Open a new window and search for <a href="https://www.nsta.org/science-teacher/science-teacher-mayjune-2023/marginalizing-misinformation">information about the sources</a>, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don’t know if they are valid, use a traditional search engine to find and evaluate experts on the topic.</p> <p><strong>Evaluate the evidence</strong>. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.</p> <p><strong>If you begin with AI, don’t stop there</strong>. Exercise caution in using it as the sole authority on any scientific issue. 
You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.</p> <p><strong>Assess plausibility</strong>. Judge whether the claim is plausible. <a href="https://doi.org/10.1016/j.learninstruc.2013.03.001">Is it likely to be true</a>? If AI makes an implausible (and inaccurate) statement like “<a href="https://www.usatoday.com/story/news/factcheck/2022/12/23/fact-check-false-claim-covid-19-vaccines-caused-1-1-million-deaths/10929679002/">1 million deaths were caused by vaccines, not COVID-19</a>,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.</p> <p><strong>Promote digital literacy in yourself and others</strong>. Everyone needs to up their game. <a href="https://theconversation.com/how-to-be-a-good-digital-citizen-during-the-election-and-its-aftermath-148974">Improve your own digital literacy</a>, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on <a href="https://www.apa.org/topics/social-media-internet/social-media-literacy-teens">fact-checking online information</a> and recommends teens be <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use">trained in social media skills</a> to minimize risks to health and well-being. <a href="https://newslit.org/">The News Literacy Project</a> provides helpful tools for improving and supporting digital literacy.</p> <p>Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don’t use generative AI, it is likely you have already read articles created by it or developed from it. 
It can take time and effort to find and evaluate reliable information about science online – but it is worth it.</p> <p><em><a href="https://theconversation.com/profiles/gale-sinatra-1234776">Gale Sinatra</a>, Professor of Education and Psychology, <a href="https://theconversation.com/institutions/university-of-southern-california-1265">University of Southern California</a> and <a href="https://theconversation.com/profiles/barbara-k-hofer-1231530">Barbara K. Hofer</a>, Professor of Psychology Emerita, <a href="https://theconversation.com/institutions/middlebury-1247">Middlebury</a></em></p> <p><em>Image credits: Getty Images</em></p> <p><em>This article is republished from <a href="https://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/chatgpt-and-other-generative-ai-could-foster-science-denial-and-misunderstanding-heres-how-you-can-be-on-alert-204897">original article</a>.</em></p>

Technology


Man admits to wife’s murder after 13 YEARS of denial

<p dir="ltr"><em>Content warning: This story contains mentions of domestic violence and assault.</em></p> <p dir="ltr">A New Zealand man who has denied murdering his wife for almost 13 years has stunned the victim’s family and the Parole Board by admitting he deliberately shot her at close range.</p> <p dir="ltr">Helen Meads was working in the stables at the property she shared with her husband Greg on September 23, 2009, when he shot her.</p> <p dir="ltr">She had been chatting with a friend on the phone and had said goodbye just seconds before she died.</p> <p dir="ltr">The shooting came four days after she told Mr Meads she wanted to end their 12-year marriage, which had been punctuated by acts of domestic violence.</p> <p dir="ltr">When he confronted her and took her life, their three children and her parents were left devastated.</p> <p dir="ltr">Mr Meads pleaded not guilty to murder, saying he had accidentally pulled the trigger and that a conviction of manslaughter would be more appropriate.</p> <p dir="ltr">The jury rejected his claim and convicted him of murder, for which he received a life sentence with a minimum of 11 years before he would be eligible for parole.</p> <p dir="ltr">When he came up for parole last year, the Board refused to release him early as they felt Mr Meads - who still claimed he wasn’t guilty - was still a risk to the public.</p> <p dir="ltr">Mr Meads appeared before the Parole Board again on Tuesday, as reported by the <em><a href="https://www.nzherald.co.nz/nz/it-was-a-deliberate-act-i-killed-helen-after-13-years-of-untruths-and-lies-matamata-horse-breeder-admits-murdering-wife/T2AJE2JUV5V76QHRODR2234MOA/" target="_blank" rel="noopener">NZ Herald</a></em>.</p> <p dir="ltr">After talking in circles and being told to speak directly by board members, he finally gave a clear answer.</p> <p dir="ltr">“I killed Helen, I was the person who pulled the trigger and I am fully responsible for her death,” he said.</p> <p dir="ltr">“Yes it was a 
deliberate act, I raised the gun and I pulled the trigger.”</p> <p dir="ltr">Mr Meads also admitted to physically assaulting Helen and being abusive during their marriage.</p> <p dir="ltr">He initially claimed that the change to his story came after he had “quite a lot of time to go through the incident” on his own and with his psychiatrist.</p> <p dir="ltr">“What brought about this change?” Parole Board chairman Sir Ron Young probed.</p> <p dir="ltr">“You’ve told untruths for 13 years, why should we rely on what you’re telling us now when for the past 13 years it’s been a lie?</p> <p dir="ltr">“You didn’t wake up this morning and go ‘oh, that’s right, I pulled the trigger’.”</p> <p dir="ltr">Mr Meads claimed he had “probably avoided” revisiting the moment until the night before the hearing.</p> <p dir="ltr">“I have come to terms with the fact that when I had my hand on the gun it was a voluntary act and I’ve pulled the trigger,” he suggested.</p> <p dir="ltr">“It’s not an accident, I admit that now. 
It is a change.</p> <p dir="ltr">“I think it was deliberate that I grabbed the trigger and that was the end of Helen’s life.”</p> <p dir="ltr">When pressed by the board, Mr Meads conceded he hadn’t discussed the matter in depth with his psychiatrist and that he had decided to take responsibility within the past 12-24 hours.</p> <p dir="ltr">Sir Ron said it was “worrying” that his admission was so sudden and “expressed concern about the genuineness” of it.</p> <p dir="ltr">“But if it is [genuine], good on you,” he said.</p> <p dir="ltr">“It is a very serious charge, but assuming it is genuine, it’s a positive change.”</p> <p dir="ltr">After speaking with Mr Meads for half an hour, during which time he shared a safety plan that failed to mention how he would cope around firearms, the board said it was clear he wasn’t ready to be released.</p> <p dir="ltr">Sir Ron said Mr Meads’ new admission signified that he had much more work to do with his psychiatrist and on his safety plan.</p> <p dir="ltr">He was refused parole and will not appear before the board again until April 2023.</p> <p dir="ltr"><em>Image: New Zealand Herald</em></p>

Legal


Greta Thunberg hits back at Meat Loaf’s claim she’s “brainwashed”

<p>Singer Meat Loaf, 72, made headlines when he told<span> </span><em><a rel="noopener" href="https://www.dailymail.co.uk/tvshowbiz/article-7836977/Self-confessed-sex-god-Meat-Loaf-72-threesomes-losing-70lb-climate-change.html" target="_blank">The Daily Mail</a></em><span> </span>that he believes teenage climate change activist Greta Thunberg has been brainwashed.</p> <p>The singer also said he believes there is no such thing as climate change.</p> <p>“I feel for that Greta. She has been brainwashed into thinking that there is climate change and there isn't,” he explained.</p> <p>“She hasn't done anything wrong, but she's been forced into thinking that what she is saying is true.”</p> <p>The now 17-year-old has since hit back, saying that climate change is bigger than both of them.</p> <p>"It's not about Meatloaf. It's not about me. It's not about what some people call me. It's not about left or right. It's all about scientific facts. And that we're not aware of the situation. Unless we start to focus everything on this, our targets will soon be out of reach," Thunberg wrote.</p> <blockquote class="twitter-tweet" data-lang="en"> <p dir="ltr">It’s not about Meatloaf.<br />It’s not about me.<br />It’s not about what some people call me.<br />It’s not about left or right.<br /><br />It’s all about scientific facts.<br />And that we’re not aware of the situation.<br />Unless we start to focus everything on this, our targets will soon be out of reach. <a href="https://t.co/UwyoSnLiK2">https://t.co/UwyoSnLiK2</a></p> — Greta Thunberg (@GretaThunberg) <a href="https://twitter.com/GretaThunberg/status/1214150289378435072?ref_src=twsrc%5Etfw">January 6, 2020</a></blockquote> <p>Meat Loaf shares the same view as US President Donald Trump, who tweeted that Thunberg should “stay in school”.</p> <p>The pair have a relationship that stemmed from an appearance by Meat Loaf on the 2010 season of Trump’s show<span> </span><em>The Apprentice</em>. </p>

Travel Trouble


Duchess Kate admits Prince William "is in denial"

<p>The Duke and Duchess of Cambridge are expecting their third child in April, but it seems one parent is more prepared than the other.</p> <p>During a visit to Evelina London Children’s Hospital on Tuesday to officially launch a campaign to promote nursing worldwide, Kate jokingly let slip that Prince William is not quite ready for baby number three.</p> <p><img src="https://imagesvc.timeincapp.com/v3/mm/image?url=https%3A%2F%2Fpeopledotcom.files.wordpress.com%2F2018%2F02%2Fcatherine-53.jpg&amp;w=1100&amp;q=85" alt="Kate Middleton" style="width: 433px; display: block; margin-left: auto; margin-right: auto;"/></p> <p>“I was saying, ‘Congratulations, best of luck with the third one.’ She said, ‘William’s in denial,’” Jamie Parsons, the father of a child currently receiving care at the hospital, told People magazine.</p> <blockquote class="twitter-tweet"> <p dir="ltr">In the nurse-led Snow Leopard ward at <a href="https://twitter.com/EvelinaLondon?ref_src=twsrc%5Etfw">@EvelinaLondon</a>, The Duchess meets highly-specialised nurses who care for children that need help breathing to stay alive. <a href="https://t.co/wQB0I7YDjZ">pic.twitter.com/wQB0I7YDjZ</a></p> — Kensington Palace (@KensingtonRoyal) <a href="https://twitter.com/KensingtonRoyal/status/968496506700488704?ref_src=twsrc%5Etfw">February 27, 2018</a></blockquote> <p>Earlier this month, Prince William admitted that he is “going to be permanently tired” after the birth of Prince George and Princess Charlotte’s sibling.</p> <p>“Two is fine — I don’t know how I’m going to cope with three,” he said at an event at Kensington Palace. “I’m getting as much sleep as I can.” </p> <p>When one attendee suggested that Kate might be having twins, Prince William joked, “Twins? I think my mental health would be tested with twins.” </p>

Body