Will AI Violate Taxpayer Rights?
When the IRS uses AI, will it violate your rights as a taxpayer (or your rights as a human being)?
When artificial intelligence (AI) relies on pre-existing data, and humans rely on AI, the consequences can be serious. For example, what if your economic status put you at risk of a false criminal accusation because AI misidentified you? You would probably be concerned about AI's impact on your human rights, especially since some AI founders are worried themselves.
And what about taxpayer rights and AI? Federal government algorithms have unfairly targeted taxpayers in the past. For example, IRS and Treasury Department data have shown that, due to an AI-based algorithm, Black taxpayers face a higher likelihood of an IRS audit. So, the question is, could or will AI violate your taxpayer rights, or your human rights, in the future?
AI IRS Audit Selection
A January 2023 study published by Stanford's Institute for Economic Policy Research, produced with input from other university researchers and the U.S. Treasury Department, found that the IRS audits Black taxpayers at roughly three to five times the rate of other taxpayers.
At the time, Kiplinger reported that, according to the study, the IRS's disproportionate targeting of some taxpayers through audits stemmed from an IRS algorithm. The study's authors suggested the IRS could lessen the racial disparity in its audit rates by changing that algorithm, which could result in fairer audit selection. But the bias in the IRS algorithm raises questions. For instance, how often will a third party step in and point out these kinds of violations? And how would you even know your rights were violated if no third party steps in to investigate?
Racial bias in IRS audit selection isn't the only time an algorithm has wreaked havoc on taxpayers. More than 10 years ago, an IRS algorithm failed to remove penalties from taxpayer returns. The Treasury Inspector General for Tax Administration (TIGTA) examined the algorithm and uncovered the problem. Had TIGTA never launched its investigation, taxpayers might have been none the wiser, especially since IRS staff reportedly didn't override the algorithm's errors.
These and other questions are being raised as AI use grows and as the IRS puts to work $80 billion in funding allotted over the next 10 years. The agency has said some of that funding will be used to hire new agents and to advance automated technologies. But will IRS algorithms improve? And will new, highly trained IRS agents catch AI errors in cases where the agency failed to do so in the past?
Problems With AI
The IRS is a frequent target of complaints, but AI bias and errors haven't been limited to IRS algorithms. Government use of artificial intelligence has disrupted people's lives across the globe, and it has been happening for at least a decade.
For example, over two years beginning in 2013, about 40,000 people collecting unemployment insurance in Michigan were accused of fraud. The accusations were reportedly based on determinations made by an automated system. Some of the accused Michiganders had their wages garnished, but most of the accusations, made essentially by the algorithm, were later found to be wrong. Although the software was privately developed, Michigan changed its policies to require human review.
On the other side of the world, half a million welfare recipients in Australia were reportedly sent debt notices in error when the government used an automated debt recovery system to identify their debts. The Netherlands faced a similar situation in 2020, when the District Court of The Hague (Rechtbank Den Haag) ruled that an automated fraud-detection system violated human rights. Most notably, according to the American Society of International Law, the court found that the system violated citizens' right to privacy.
Problems with AI also aren't limited to tax and benefits programs. Errors made by artificial intelligence continue to surface in the news. In 2019, a New Jersey man, Nijeer Parks, was accused of shoplifting after facial recognition software used by police misidentified him. Parks spent 10 days in jail as an innocent man, according to press reports. The charges were eventually dismissed, but the incident isn't the only one of its kind and likely won't be the last.
What Experts Say About AI
Many experts are worried about AI's impact on people's rights, both as taxpayers and as human beings. Even OpenAI CEO Sam Altman is worried about the dangers of AI. Altman has admitted that his worst fear is that AI technology could cause "significant harm to the world." On May 16, he reiterated that fear before a U.S. Senate Judiciary subcommittee. "If this technology goes wrong, it can go quite wrong," Altman said.
Google also appears to believe AI needs regulation. In a document outlining recommendations for regulating AI, Google writes, "Governments have a role to play in providing guidance [on] how to balance competing priorities and approaches to fairness." Google also notes in the document that AI can have "unfair impacts."
AI IRS Scams
Unfair impacts from AI go beyond internal algorithms. AI can also be used to trick taxpayers into handing over sensitive financial data through well-crafted scams, and some lawmakers are pushing the IRS to address this emerging problem.
"According to recent reporting, one cybersecurity expert demonstrated how ChatGPT can be used to generate scam messages from the IRS targeting families, older Americans, and small businesses." That statement came from a letter from a group of Senate Finance Committee members who recently asked the IRS to answer questions about how the agency will handle AI-generated tax scams.
The senators want the IRS to respond by the end of May with information about how it will address potential AI scams. To protect taxpayers' rights, they say, the IRS will need to ensure the fairness of its algorithms and understand the new threats AI can pose as it evolves.
Bottom Line
So, will the federal government succeed in regulating AI enough so that taxpayers aren’t unfairly discriminated against or accused? Or will AI violate taxpayer rights without anyone even knowing? Only time will tell.