Taxes

Why Businesses Shouldn’t Rely on AI-Powered Tax Prep Advice

A Washington Post investigation found TurboTax's and H&R Block's AI tools are not up to snuff.

By Kit Eaton, Inc. (TNS)

AI systems can advise you on the future of your business, generate PR-ready images, videos, and music, and thanks to Microsoft’s efforts, help you be more productive.

What AI cannot help your business with, it seems, is tax prep.

An investigation by The Washington Post tested new built-in AI systems offered by TurboTax and H&R Block, and found that they were frequently unhelpful and often just plain wrong when it came to offering advice.

While both companies include fine-print disclaimers that their AI systems are incomplete and that you should review their output, The Washington Post found the AIs were ineffective about half the time. In a series of tests asking the assistants for advice on specific details needed for a hypothetical tax filing, some of the results were flatly incorrect.

For example, when H&R Block’s system was asked about filing a child’s taxes when the child attends college out of state, it incorrectly said the child had to file in both places. TurboTax’s AI, meanwhile, returned irrelevant advice that didn’t address the hypothetical filing question. And in answer to a question about tax credits after installing a new air conditioner, TurboTax produced a lengthy response that the newspaper and the tax experts it consulted deemed irrelevant.

The answers that the tax software’s AI systems spewed out are, for now, only supposed to be guidelines, and both TurboTax and H&R Block told the newspaper that they would provide assistance if users were audited after following an AI’s bad advice. But what the investigation neatly underlines is that when an AI generates responses to questions, there is no guarantee that what it returns is correct, let alone useful.

The potential for AIs to generate misinformation is problematic, and it’s akin to the “hallucination” problem seen in some generative AI imagery, where a request to draw, say, a Greek-style statue returns an image with obvious impossibilities like three legs. Google’s Gemini AI was recently in the spotlight for a similar issue: its image-generation system was delivering culturally inappropriate and problematic results, such as racially diverse people in Nazi-like uniforms. Hallucination is simply part of how current-generation AIs behave, text-based chatbots and image generators alike, because they aren’t capable of truly understanding the requests you make of them.

But filing taxes is, of course, exactly the kind of nuanced, math-centric task you might expect an AI to be able to help with, which is why The Washington Post’s investigation may unsettle some AI proponents, such as OpenAI CEO Sam Altman, who has been pressing for greater AI regulation.

The practical takeaway of the report is that when you use AI for personal or business reasons, you should always check the truthfulness and usefulness of its output, lest it land you in legal trouble or simply embarrass you, as with the recent failed Willy Wonka-themed children’s event that seemed to rely rather too much on AI tech. And when it comes to filing taxes, an inherently anxiety-inducing task for many people as well as an important one, it may be wiser to join the ever-increasing number of people who use the IRS’s Free File system, if you’re eligible.

______

(c) 2024 Mansueto Ventures LLC; Distributed by Tribune Content Agency LLC.