ChatGPT and its siblings made a huge splash this year. I have writer friends and colleagues who fear AI will take their jobs. Sadly, some clients have tried turning to AI, but much of what I’ve seen isn’t all that great. The writing is generic and often wrong, inventing facts as it goes. Experts call it “hallucinations”; I call it lying.
Case in point: I asked ChatGPT (the paid version) to write a short piece, with references, on a health topic I know well. I wanted to see what it could do. It came back with a convincing piece (with some questionable facts), including the requested references. But when I went looking for those references, I couldn’t find them. So I repeated the request, this time asking for specifics about the references. I got back names and issue numbers/dates of articles that did not exist. In other words, the system made them up – it lied.
Artificial intelligence has its place. I don’t doubt that, and I welcome it where it can help us. AI can help radiologists spot the tiniest of tumors, for example. I think that’s wonderful. But where AI can’t help us is in answering our health questions.
But for simple things, surely AI can help?
Not if you want to stay safe – and alive.
A study from Long Island University, presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting earlier this month, showed that the free version of ChatGPT answered drug-related questions incorrectly almost 75% of the time. Even worse? According to a press release from the society, “When asked to cite references, the artificial intelligence program also generated fake citations to support some responses.” Remember how I mentioned that my request gave me “hallucinations”? Obviously, that was not a one-off.
According to this news article, “At first glance, the citations looked legitimate: They were often formatted appropriately, provided URLs and were listed under legitimate scientific journals. But when the team attempted to find the referenced articles, they realized that ChatGPT had given them fictional citations.” That’s precisely what happened to me.
I can’t express to you how dangerous this is. These weren’t “out there” questions that were tough to answer. One was whether there was any issue with combining Paxlovid, an antiviral medication given to people with COVID-19, and verapamil, a commonly used blood pressure medication. ChatGPT said there was no problem. And yet, according to the lead author of the study, “In reality, these medications have the potential to interact with one another, and combined use may result in excessive lowering of blood pressure…Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect.”
To test whether there was a difference between the free and paid versions, I used my paid subscription to ask ChatGPT 3.5 if there are any drug interactions with Paxlovid. This was the response – at least it was honest!
“I don't have real-time information, and my knowledge is based on information available up to January 2022. At that time, Paxlovid (nirmatrelvir/ritonavir) was an investigational antiviral medication used to treat COVID-19. It's crucial to note that drug interactions can be complex and may change over time as new information becomes available.
For the most accurate and up-to-date information on drug interactions with Paxlovid or any other medication, it's essential to consult with a healthcare professional or pharmacist. They can provide personalized advice based on your medical history, current medications, and specific health conditions.
If you have questions or concerns about a particular medication, including Paxlovid, contact your healthcare provider for the most reliable and current information.”
Not the first time
This isn’t the first time that errors were discovered in ChatGPT advice. In a study published in August this year, researchers tested the AI program’s responses to cancer treatment questions. ChatGPT 3.5 was asked to generate treatment plans for a variety of breast, prostate, and lung cancer cases. The results? One-third of the responses contained incorrect or incomplete information, and 12.5% of the chatbot’s recommendations were hallucinated. The authors wrote, “…the chatbot did not perform well at providing accurate cancer treatment recommendations. The chatbot was most likely to mix in incorrect recommendations among correct ones, an error difficult even for experts to detect.”
Let’s look at another study, this one published in September. The researchers posed 20 medical questions to ChatGPT, asking for references. According to the researchers, “This study found that more than two-thirds of the references provided by ChatGPT to a diverse set of medical questions were fabricated, although most seemed deceptively real. Moreover, domain experts identified major factual errors in a quarter of the responses. These findings are alarming, given that trustworthiness is a pillar of scientific communication.”
For sure, there are warnings – the company says that ChatGPT should not be used for diagnostic or treatment purposes – but seriously, how many people ignore that and try to use it that way anyway?
It wasn’t all that long ago when physicians started seeing patients who first consulted Dr. Google. Now we have to worry about Dr. Artificial Intelligence.
Use reliable sources only
I’ve written several times over the years about how to find accurate and reliable medical information online. I never thought I’d have to tell people that AI would purposely give incorrect information and lie to back it up! Or is that hallucinating?
So, how do you find accurate information? The most important thing is to check who is behind the website. I generally tell people to start with the URL. Sites ending in .edu are usually from academic institutions; .gov sites are from the U.S. government (Canada.ca for Canadians); and .org sites generally belong to patient awareness or advocacy organizations.
When you start looking at .com or .net sites, you need to find out who is paying for the site and who the advertisers are. If you’re looking for information on sleeping well and the site is sponsored by a company that makes sleeping pills or sells pillows, chances are some of that information is slanted. Ask who benefits from your visit and whether they are trying to sell you something. I go into more detail in this blog post I wrote for Decipher Your Health earlier this year.
Be careful. Intentional or not, the internet is only as good as the information people put on it. Protect yourself and don’t rely on artificial intelligence when it comes to your health. Please.