Google's AI Overview Is Wrong About Life Insurance 57% of the Time, Says Study

You need more than a grain of salt when getting life insurance tips from Google's AI overview.


If you've used Google recently, you've noticed the AI overview popping up at the top of your searches. These quick and convenient answers offer to save you the time of scrolling through search results and clicking through links to find the answer yourself. But how much can you really trust those AI summaries?

According to a new study by Choice Mutual, not too much – at least, not when it comes to important financial topics. "AI Overviews offer convenience, but their current accuracy for complex topics like life insurance and Medicare is unreliable," said Choice Mutual CEO Anthony Martin in an article on the study's results.

Over half of the AI-generated responses to life insurance queries were deemed inaccurate by experts on the subject. Medicare-related responses were accurate more often, but the ones that missed the mark contained potentially harmful errors.


Here's a closer look at what the study found, why you shouldn't rely on AI no matter how convenient it is, and some tips on how to use AI without getting duped by misleading information.

Google’s AI overview made mistakes that could cost you money


The Choice Mutual analysis included 1,000 common queries: 500 on life insurance and 500 on Medicare. Subject-matter experts manually reviewed the AI overview for each query to evaluate the accuracy of the AI-generated response.

Overall, the study found that 57% of AI responses on life insurance contained errors. While Google's AI got more right when it came to Medicare, the 13% of responses that were wrong contained errors that could cost you money if you took them at face value.

On life insurance, for example, the search term "life insurance for seniors over 85 no medical exam" returned an AI-generated response that included information about guaranteed issue life insurance – a type of life insurance policy that offers coverage regardless of health conditions. However, the study's experts noted that this type of life insurance isn't offered to people over the age of 85.

Google's AI overview was accurate more often on Medicare queries, but the 13% of results that contained errors were dangerously inaccurate. For example, when searching "is it mandatory to sign up for Medicare at age 65," the AI overview stated that you can delay enrollment without penalty if you still have health insurance through your (or your spouse's) employer.

While that's partly true, it only applies to coverage from an employer with 20 or more employees. If you work for a smaller company or are self-employed, you could face penalties for delaying Medicare enrollment. Taking Google's AI overview at face value, then, could lead to financial losses.
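
To see how much that mistake could cost, here's a minimal sketch of the Part B late-enrollment math. The 10%-per-full-year rule is Medicare's actual formula, but the premium figure below is just a placeholder, since the standard Part B premium changes every year.

```python
# Minimal sketch: Medicare adds 10% of the standard Part B premium for
# each full 12-month period you were eligible but not enrolled, and
# the surcharge lasts for as long as you have Part B.

def part_b_penalty(standard_premium: float, months_delayed: int) -> float:
    """Extra monthly amount owed after delaying Part B enrollment."""
    full_years = months_delayed // 12  # only full 12-month periods count
    return standard_premium * 0.10 * full_years

premium = 185.00  # placeholder standard monthly premium, in dollars
for months in (12, 24, 36):
    extra = part_b_penalty(premium, months)
    print(f"Delayed {months} months: ${premium + extra:.2f}/mo "
          f"(${extra:.2f} penalty, paid indefinitely)")
```

Under those assumptions, a three-year delay adds 30% to every future premium payment, which is exactly the kind of consequence the AI overview's blanket answer glosses over.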

Why even intelligent people can be fooled by AI

In addition to reaffirming the importance of fact-checking AI, this study shows how easily an AI response can fool even an intelligent user. These responses look thorough, containing specific details and precise terminology that can make the answers feel complete if you don't happen to have industry expertise or firsthand knowledge of the topic you're searching.

In many cases, it's not even immediately clear which aspects of a response might be wrong or warrant further research. According to Martin, this "directly contributes to consumers making poor insurance decisions based on the false information provided by Google’s AI answers."

One of the reasons AI has taken off over the past couple of years is the large language model (LLM), like Google's Gemini, which powers the AI overviews you now see at the top of search pages. LLMs have a knack for sounding coherent and intelligent, even though they're technically just stringing words together based on mathematical probability.

There's no analytical thought or reasoning behind the output, but it's easy to assume there is because the words form complete, natural-sounding sentences. When it comes to topics you're less familiar with, or where the AI's errors aren't obvious, it's easy to be duped by the human-like prose it generates.
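
To make that concrete, here's a toy sketch of the next-word mechanic. The word-probability table is entirely made up for illustration (a real model like Gemini learns billions of parameters), but the core loop is the same: pick each word by probability, with no step anywhere that checks whether the resulting claim is true.

```python
import random

# Hypothetical word-to-word probabilities, invented for illustration.
NEXT_WORD_PROBS = {
    "guaranteed": {"issue": 0.9, "coverage": 0.1},
    "issue": {"life": 0.8, "policies": 0.2},
    "life": {"insurance": 1.0},
    "insurance": {"covers": 0.5, "for": 0.5},
    "covers": {"everyone": 0.7, "seniors": 0.3},
    "for": {"seniors": 1.0},
    "seniors": {"over": 1.0},
    "over": {"85": 0.6, "65": 0.4},
}

def generate(start: str, max_words: int = 8) -> str:
    """Chain words together purely by probability, LLM-style."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD_PROBS:
        options = NEXT_WORD_PROBS[words[-1]]
        # Sample the next word from the table; fluency is the only
        # criterion, and truth never enters the loop.
        next_word = random.choices(list(options),
                                   weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("guaranteed"))
# Might print "guaranteed issue life insurance covers everyone", which
# reads fluently but, as the study's experts noted, isn't true for
# applicants over 85.
```

The point isn't the toy table; it's that a sentence produced this way can sound authoritative while being flat wrong, which is how errors like the guaranteed issue example above slip past readers.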

When it comes to topics like life insurance or Medicare, many people searching Google for answers to their questions aren't experts on these topics. Moreover, insurance of any kind is a complex topic with lots of nuance that's easy for an LLM to brush over or miss entirely.

Tips for fact-checking AI


The biggest takeaway from the Choice Mutual study is that you shouldn't rely solely on AI to answer important questions. "There are many nuances between various types of life and Medicare plans, so you want to turn to a veteran agent who has deep experience in working with these plans," said Martin.

You need real, human expertise when it comes to questions that could impact your finances or health.

Here are a few tips for using AI without letting it mislead you:

  • Ask follow-up questions. Break down the AI response into key points, then do a new search on the key points that matter to you. In the Medicare response mentioned above, for example, you might use the AI overview as a starting point, but then search for employer coverage requirements to find more in-depth (and credible) information on the subject.
  • Check its sources. When you ask for sources, LLMs typically just provide links to pages for further reading, not necessarily the pages they actually referenced in generating the response. But you can use these as your next step in the fact-checking process.
  • Make sure more than one source can confirm the information. If you find a medical study confirming that coffee is good for you, for example, go back to your search and look for additional study results to make sure that wasn't just a one-off before you start gulping down coffee by the gallon.
  • For critical life decisions, find a human expert you can turn to with your questions. That might be a financial adviser, an insurance agent, a representative at your bank, your doctor or anyone else with verifiable credentials that you can trust.


Rachael Green
Personal finance eCommerce writer

Rachael Green is a personal finance eCommerce writer specializing in insurance, travel, and credit cards. Before joining Kiplinger in 2025, she wrote blogs and whitepapers for financial advisors and reported on everything from the latest business news and investing trends to the best shopping deals. Her bylines have appeared in Benzinga, CBS News, Travel + Leisure, Bustle, and numerous other publications. A former digital nomad, Rachael lived in Lund, Vienna, and New York before settling down in Atlanta. She’s eager to share her tips for finding the best travel deals and navigating the logistics of managing money while living abroad. When she’s not researching the latest insurance trends or sharing the best credit card reward hacks, Rachael can be found traveling or working in her garden.