Once upon a time, tax preparation was a paper-heavy practice that required clients to hand over their most sensitive documents to their CPAs. Those days are long gone. Today, things are distinctly more digital. The digitalization of the financial sector generally, and tax services in particular, has expanded access to high-quality tax services while making filing taxes more streamlined and convenient.
It also introduces unique challenges that CPAs and tax professionals can’t ignore: soaring instances of online fraud, cybersecurity incidents, and data compromise. This places an enormous responsibility on CPAs and tax professionals who maintain, verify, store, and transact their businesses with clients’ most sensitive information: their identities.
With more than two-thirds of the global population already online and 57 percent of consumers saying they created a new online account in the past year, platforms face unique challenges when balancing security and convenience during the onboarding process. Artificial intelligence (AI) is increasingly crucial in analyzing expansive data sets to identify suspicious activity and fraud signals. It’s a powerful (but imperfect) system.
Most notably, AI often lacks transparency, posing risks and challenges in identity verification and decision-making, particularly in regulated industries like tax, accounting, and other areas of financial services and management. The answer is a human-supervised, AI-powered fraud detection strategy that maximizes fraud detection and deterrence while mitigating the risks of bias posed by black box AI solutions.
Here are three principles for designing and implementing this human-supervised AI-powered fraud detection strategy at your organization.
AI is having a moment. Incredible breakthroughs in AI technologies have led to a proverbial gold rush as companies pour billions of dollars into its development and maturation while also touting its off-the-shelf ability to revolutionize how we work and interact with the world.
According to one analysis, it’s also at the peak of inflated expectations, and every vendor seems to be hawking their new AI capabilities to companies desperate for a technological breakthrough to solve their most pressing problems.
CPAs and tax professionals are right to leverage this technology. It brings flexibility and scalability to several critical workflows, including identity verification, allowing teams to quickly evaluate expansive data sets to uncover patterns of suspicious activity. However, it’s also a black box technology. As the AI scientist Sam Bowman recently told Vox, “We just don’t understand what’s going on here. We built it, we trained it, but we don’t know what it’s doing.”
This lack of visibility and explainability is especially problematic for high-stakes work like tax preparation. The technology’s inherent opacity about how decisions are reached makes it impossible for CPAs and tax professionals to explain to customers, regulators, or other stakeholders why a decision was made, or to produce an audit trail showing how policies were followed. In contrast, when implemented with human supervision and intelligent verification technology, AI can become invaluable.
For example, a fraud analyst can carefully evaluate an onboarding decision, determine how an error occurred, and teach the system how to perform better in the future, creating a positive feedback loop that produces iterative improvements over time.
An AI system without oversight will treat its uncorrected mistakes as accurate and will continue making the same flawed decisions. This self-perpetuating cycle can cause expansive compliance problems for tax professionals, drawing fines, sanctions, and reputational damage. Meanwhile, transparent, rule-based systems paired with human intelligence support regulatory compliance and sound data management.
By layering human intelligence onto AI, tax professionals can analyze large amounts of data at scale and leverage the intuition and expertise of their fraud analysts to detect known and novel forms of fraud, all while mitigating bias and maximizing transparency.
AI combined with human intelligence, working seamlessly in a multi-layered ecosystem, effectively balances the technology’s advantages and drawbacks. Of course, these outcomes are predicated on people knowing how to use this technology effectively. While 90 percent of Americans say they’ve heard “at least a little” about AI, the ability to actually use the technology ethically, responsibly, and proficiently is far less widespread.
In some ways, this is a reskilling initiative, making teaching, training, and preparation a strategic imperative for CPAs and tax professionals. With an AI-powered tax season just around the corner, now is the right time to introduce training programs in AI ethics, data analysis, and automation technologies, ensuring that everyone is ready to capitalize on its strengths, recognize its weaknesses, and perform their best work for their clients.
This year will undoubtedly bring big breakthroughs for AI, and CPAs and tax professionals stand to benefit from its burgeoning capabilities.
By understanding AI’s limitations, ensuring compliance, and educating users, it’s possible to embrace the best of the technology without ignoring its risks. It’s the key to mounting a robust defense against tax and identity fraud while streamlining operations and delivering better customer outcomes.
Crystal Blythe is the vice president of customer success and fraud at IDology, a GBG company and industry leader in identity verification, AML/KYC compliance, and fraud management solutions that help businesses establish trust, drive revenue, and deter fraud.