
February 29, 2024

How To Get Useful Answers with ChatGPT Plus

Chatbots powered by AI and large language models (LLMs) don’t always answer questions accurately. Neither, for that matter, do humans. We can’t blindly trust what ChatGPT tells us any more than we can blindly trust what we read on Facebook or X. Not everything we learned in school is correct. Your neighbor may be smart, but she’s not always right. Even mommies make mistakes. Reporters have both biases and blind spots. So how do we get “the truth”, or at least useful answers?

ChatGPT almost never gives links to support its answers because LLMs like ChatGPT don’t “know” how they know things. Neither do we, for most of what we know. Do you remember when you learned each word in your vocabulary? When and where did you learn that the sun rises in the East? LLMs are trained by reading lots of stuff: some contradictory, some nonsense, much of it biased. There is inevitably some bias in the selection of training material, just as there was bias in the education you got. LLMs often get “finishing school” training from humans who impart their own biases directly (witness Google’s recent embarrassment when its supposedly politically correct image generation decided that US history was so unracist that the “founding fathers” were actually mostly Indians and women and that Nazi soldiers and Vikings were mainly non-Caucasian). If the answer to a question is important to you, you want to know the source of the information from which the answer was generated.

However, because ChatGPT Plus, which costs $20/month, has access to Bing browsing (which the free version of ChatGPT does not. Update: The free version of GPT-4o now has browser access as well), it can find links to support its answers if you tell it to. Sometimes, if properly prompted, it’ll even find links which cast doubt on its assertions or reveal uncertainty. A simple way to gain confidence in what ChatGPT tells you is to ask for supporting links. For example, I asked:

“Is volcanic activity a significant contributor to climate change?”

The first answer I got (don’t feel you have to read it all) was:

[Screenshot: ChatGPT’s first answer, several paragraphs of assertions about volcanic activity and climate with no sources cited]

Lots of assertions but no backup for them. They could even be hallucinations. Moreover, the last paragraph is politically correct (and perhaps factually correct) and almost certainly a result of the “finishing school” training ChatGPT had.

So I said:

“Please provide links relevant to the assertions in the last answer and explain, for each link, how it supports, qualifies, or contradicts the assertions. Show the title of the page linked to.”

And got back an answer with references and links (the blue bracketed citations; they are not live in the picture below, but you can click here to open my shared chat in a browser, where the links will be live):

[Screenshot: ChatGPT’s answer with references and bracketed links supporting each assertion]

The references appear to be credible and, on following the links, I see that they do support what ChatGPT says they support. However, the evidence appears one-sided, and I already suspect ChatGPT of being partial to accenting the anthropogenic contribution to climate change. To be sure I’m seeing beyond this possible bias, I asked:

“What references are there including links which would appear to disagree with these conclusions?”

Interesting response (click here for live links):

[Screenshot: ChatGPT’s list of references that question or qualify its earlier conclusions]

Now we have pretty good information on volcanism and climate, and we also know some of the questions which remain to be definitively answered.
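If you’d rather script this three-step pattern than click through the chat window, here is a minimal sketch using OpenAI’s Python SDK. To be clear about what’s assumed: the model name “gpt-4o” is a placeholder, and a bare API call has no Bing browsing, so any links the model returns come from its training data and still need to be checked by hand.

```python
# A sketch of the three-step pattern from this post, scripted against the
# OpenAI Python SDK (openai>=1.0). Caveat: a bare API call has no Bing
# browsing, so any links returned come from training data and need checking.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

PROMPTS = [
    "Is volcanic activity a significant contributor to climate change?",
    "Please provide links relevant to the assertions in the last answer "
    "and explain, for each link, how it supports, qualifies, or "
    "contradicts the assertions. Show the title of the page linked to.",
    "What references are there including links which would appear to "
    "disagree with these conclusions?",
]

messages = []  # running transcript so follow-ups can see "the last answer"
for prompt in PROMPTS:
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; substitute your own
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer)
    print("-" * 72)
```

Keeping the growing messages list is what lets the second and third prompts refer back to “the last answer,” just as they do in the chat transcript above.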

This isn’t only a way to check what an LLM tells you. The first answer could have been something someone posted on Facebook, an essay somewhere, something someone told you in a bar, or a news article. You can send the text or a link or upload a document to ChatGPT Plus (Update: You can now do this with the free version of GPT-4o as well) and ask it to generate references supporting and questioning the assertions it contains. Used in this way, LLMs become a tool for detecting misinformation, no matter what the source, and for getting reasonably reliable answers rather than yet one more way to generate half-truths and propaganda.
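Here is a similar sketch for that fact-checking use, with the same assumptions as above (placeholder model name, no browsing on a bare API call); the claim text is a stand-in for whatever Facebook post, essay, or article you want checked:

```python
# A sketch of using the same pattern to fact-check arbitrary text. The
# claim below is a placeholder; paste in the post, essay, or article you
# want checked. Same caveat: no web browsing via the bare API.
from openai import OpenAI

client = OpenAI()

claim_text = "Paste the text whose assertions you want checked here."

prompt = (
    "For each factual assertion in the following text, provide references "
    "with links and page titles that support the assertion, and references "
    "that question or contradict it:\n\n" + claim_text
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```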

Practice in checking references is a crucial part of teaching critical thinking. Critical thinking and validating information are now much more valuable skills than a bunch of memorized facts. This use of LLMs must be taught to anyone old enough to go online. Within a year, use of LLMs and browsing will be indistinguishable because AI is being built into search at warp speed, and LLMs with search are far better sources than classic Googling (it’s an open question whether Google itself survives this transition). Every school and every organization must teach lessons in LLM use like the one in this post now. Avoiding LLMs because they can be a source of misinformation, rather than embracing their proper use, is educational and managerial malpractice.

See also:

AI Can Help Deal with Misinformation (for a GPT designed to help with fact checking)

Why You Want to Use Free ChatGPT-4o Instead of Search

More posts on AI


