Artificial intelligence has evolved from a boardroom buzzword to an everyday workplace reality. Recent data shows that 91% of employees across global organizations now use at least one AI technology, with the World Economic Forum projecting that 75% of companies worldwide will adopt AI by 2027. While 65% of workers express optimism about AI’s potential, 77% simultaneously worry about job displacement. That anxiety may be warranted, but perhaps not for the reasons they think.
The real threat isn’t AI replacing jobs entirely, but rather creating a new professional hierarchy. Those who use AI as a thinking partner will pull ahead, while those who treat it as a shortcut machine will fall behind.
Aaron Conway, founder of Ronin Management PTE, a consultancy specializing in AI search optimization and workplace digital transformation, has observed this divide forming across industries.
“We’re witnessing the emergence of two distinct worker categories,” Conway explains. “On one side are people who use AI to enhance their decision-making and creativity. On the other are those who’ve essentially outsourced their thinking entirely, copying and pasting outputs without verification or critical analysis.”
This division has significant implications for career progression, with companies beginning to recognize the difference between employees who can think strategically and those who’ve become dependent on automated suggestions.
Conway elaborates below.
How AI is creating a new office skills gap
The workplace skills gap isn’t about who has access to AI tools, as nearly everyone does. It’s really about how people use them. Conway identifies three concerning patterns that separate high-performers from those at risk of stagnation.
1. Over-automation of thinking
The most insidious problem isn’t workers using AI, but workers letting AI do their thinking for them. When employees automate not only repetitive tasks but also judgment calls and strategic decisions, they gradually lose the ability to perform these functions independently.
“I’ve seen teams where people can’t draft an email without consulting ChatGPT first,” Conway notes. “They’ve automated away their ability to communicate authentically. The tool was meant to assist, not replace, human judgment.”
This creates a dangerous dependency loop. The more someone relies on AI for thinking, the less confident they become in their own capabilities, leading to even greater reliance on automated suggestions.
2. Decline in independent problem-solving
Problem-solving requires context, nuance, and the ability to connect seemingly unrelated information, which are skills that atrophy without regular use. Workers who immediately turn to AI when faced with challenges never develop the mental frameworks needed for complex reasoning.
Conway observes that this manifests in meetings and collaborative work. “You can spot it immediately. Some people contribute original analysis and connect dots in unexpected ways. Others regurgitate what their AI assistant told them that morning, often without fully understanding the underlying logic.”
3. Blind trust in AI outputs
Perhaps the most dangerous trend is the uncritical acceptance of AI-generated information. Too many workers treat AI outputs as gospel, failing to verify facts, check logic, or question recommendations.
“AI tools are phenomenal at sounding confident, even when they’re completely wrong,” Conway warns. “I’ve reviewed proposals where financial projections were mathematically impossible and technical recommendations that would never work, all confidently presented because that’s what the AI suggested.”
This blind trust extends beyond factual errors to strategic misjudgments. AI models lack industry-specific context, an understanding of organizational culture, and the ability to weigh human factors in decision-making.
How workers can stay valuable in an AI-heavy workplace
Rejecting AI tools is not the right solution. Instead, workers should be developing a sophisticated relationship with them, using them as collaborators rather than replacements for human intelligence. Conway outlines three ways in which this can be achieved.
1. Creative reasoning
The ability to think laterally, make unexpected connections, and generate novel solutions remains firmly in human territory. AI excels at pattern recognition but struggles with true innovation.
“Focus on developing your ability to ask better questions rather than finding faster answers,” Conway advises. “AI can help you explore possibilities, but you need to be the one imagining what’s possible in the first place.”
2. Verification habits
Treating every AI output as a first draft rather than a final product is essential. Conway recommends a systematic approach to verification that becomes second nature.
“Develop a healthy skepticism. Cross-reference important facts, stress-test logical arguments, and ask yourself whether recommendations align with your professional experience,” he suggests.
3. Hybrid human-AI workflows
The most effective approach combines human judgment with AI capabilities at each stage of work. This means understanding what AI does well (data processing, pattern identification, and generating alternatives), and what humans do better (contextual judgment, ethical reasoning, and relationship management).
Conway’s team has developed a framework they call “human-first automation.” “Let AI handle information gathering and initial analysis, but keep all strategic decisions, quality control, and final judgment in human hands,” he explains. “Use AI to expand your capacity, not replace your capability.”
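For readers who build internal tooling, the division of labor described above can be sketched in a few lines of code. This is a minimal illustration, not Conway’s actual framework: the function and variable names are hypothetical, and `ai_draft` is a stub standing in for any real model call. The key design choice is that no AI output becomes final without passing through an explicit human review step.

```python
# A minimal sketch of a "human-first automation" checkpoint, under the
# assumption that every AI draft must clear a mandatory human review.
# All names here (ai_draft, human_first_workflow) are hypothetical.

def ai_draft(task: str) -> str:
    """Stub for an AI call that gathers information or drafts a first pass."""
    return f"AI first draft for: {task}"

def human_first_workflow(task: str, human_review) -> str:
    """Route every AI output through a human judgment step before it ships."""
    draft = ai_draft(task)                 # AI handles the initial analysis
    approved, final = human_review(draft)  # a human keeps quality control
    if not approved:
        raise ValueError("Draft rejected; human judgment overrides AI output")
    return final                           # only human-approved work ships

# Example: the reviewer edits the draft rather than pasting it verbatim.
result = human_first_workflow(
    "Q3 client memo",
    lambda draft: (True, draft.replace("AI first draft", "Reviewed memo")),
)
print(result)  # -> "Reviewed memo for: Q3 client memo"
```

The point of the structure is that the approval step cannot be skipped: automation expands capacity at the drafting stage, while the final decision stays with a person.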
Workers who master this balance will find themselves increasingly valuable as AI adoption accelerates, while those who simply copy and paste will discover their roles becoming precarious.
Photo credit: tadamichi/iStock
Thanks for reading CPA Practice Advisor!