AI Regulation Update – The Accounting Technology Lab Podcast – August 2025

August 27, 2025


Brian Tankersley

Host


Randy Johnston

Host

Brian Tankersley, CPA, and Randy Johnston discuss how regulations are starting to evolve in response to AI implementation across many professions and industries. Watch the video, listen to the audio, or read the transcript. The Accounting Technology Lab is an ongoing series that explores the intersection of public accounting and technology.


===

Transcript (Note: There may be typos due to automated transcription errors.)

SPEAKERS

Randy Johnston, Brian F. Tankersley, CPA.CITP, CGMA

Randy Johnston  00:11

Welcome to the Accounting Technology Lab. I'm Randy Johnston, with co-host Brian Tankersley. We thought it was time to give you a bit of an AI regulation update. There have been some recent announcements there, and in particular we want to focus first on the earlier podcast, which we think was still correct on the AI announcements, but also on the new ones. So before we get too far into this, we just want to remind you that when you look at the safest AI tools right now, I'm ranking them in this order: Claude being the safest, Copilot the next safest, then Perplexity, which I just downgraded, and I may downgrade it further based on some strategy changes. Then ChatGPT, where even when you're on the $30 version, you have to get even higher into the enterprise version to get more safety, and then Gemini and Llama. But when it's popularity-rated, ChatGPT wins hands down. The most recent stats we have are that 61% of AI adoption is on ChatGPT, with 14% of those users in India. Copilot just moved ahead of Gemini with 14% market share, Gemini at 13% market share. Perplexity, which I prefer as my search engine, is at 6.2%, and Claude is running at 3.2%. But interestingly enough for Claude, 80% of their subscriptions are for businesses, and Anthropic actually just turned off support for OpenAI, who was using Claude's coding tools to write their OpenAI code. I mean, the serious programmers know that the Claude coding is serious.

Brian F. Tankersley, CPA.CITP, CGMA  01:53

And they also changed their plans some. I just got it because I pay 20 bucks a month to Claude, like I pay to a lot of other folks. And one of the things they actually did was put some limits on how much code you can generate with their tool, because evidently some people had really been wearing it out on the lower-end versions, more than they had planned for that use.

Randy Johnston  02:17

Yeah. But as of the day of recording this particular session with you, all other AI tools are below 1%. So the Llamas and the Groks and the DeepSeeks, and how many hundred do you want me to name? Because, you know, there are a lot of them out there competing, but that's the market share picture. Now, the reason I bring up the large language models first is that a lot of you are quite familiar with those, but we're going to encourage you to listen in on our agentic AI podcast here in the Accounting Technology Lab, because we believe that large language models can serve as the base, but we think a lot of the practical work is going to get done by agents. Now, driving all of this globally is the regulatory environment. So we just want to remind you of the podcast we recorded on the January 23 announcements. Brian, you want to take it away on that?

Brian F. Tankersley, CPA.CITP, CGMA  03:11

So basically, one of the first things that President Trump did was sign a whole lot of executive orders. You probably saw a lot of that on television, but one of the ones he rescinded was actually President Biden's executive order on Safe, Secure, and Trustworthy AI, and this is one that put a lot of guardrails around AI. The primary focus in the Biden executive order was on safety, and so there was much more government control in it, including sharing of AI models with the government and other things like that. The executive order that President Trump issued switched from this safety-first to a growth-first policy strategy and incorporated other things as well. The tech firms supported the deregulation; the civil rights and labor groups raised quite a bit of concern. There is a separate, kind of shadow-government version of this AI action plan called the People's AI Action Plan that is coming out of a lot of the, I don't know, the think tanks in DC, and that emphasizes more ethical safeguards. The concern here is that we're in an all-out sprint for this, and most of the folks in the current government seem to be more worried about losing and getting behind China, and Chinese leadership taking over in this area as opposed to American leadership. There were also some relaxations: Jensen Huang, the CEO of Nvidia, actually successfully lobbied for some relaxed chip export rules and some other things like this. They also provided some support to open-source AI. The key milestone in this January executive order was the delivery of this federal AI Action Plan, which actually came out, and I'll turn it over to Randy for that.

Randy Johnston  05:33

Yeah, so that was part of the motivation for today's session. This was released on July 23: America's AI Action Plan. And it really has three main pillars: to accelerate AI innovation, to build the American AI infrastructure, and to lead in international AI diplomacy and security. And each of those three pillars has driving forces behind them. Again, this is a framework, so there's a lot to come on that. But a collateral comment I'll make, Brian, is that The Economist, in late July, early August, produced an article, I guess we'll say, analyzing AI in lots of ways, privacy and the like. Now why do I call that out? Because the writers there emphasized the concerns about AI being dominated by a government, and whichever government has the dominant position will have a leg up for decades to come, they believe. Now, I don't know if I need to believe all that stuff or not, but I do know that, you know, the people with the best AI, whether it's industry, businesses, or governments, are going to be the winners. And in The Economist, they talked about the capital behind AI being a big deal. So I want to turn us back here to the AI Action Plan, but I wanted to cite The Economist; if you're a subscriber, that's certainly worth picking up and making sure that you spend the time to read those pieces. Now, the first pillar was accelerating AI innovation, and Brian, there are a number of things cited here. Are there some that you think are worth calling out for our listeners?

Brian F. Tankersley, CPA.CITP, CGMA  07:22

You know, I think, in general, this is a private-sector-first approach, as opposed to a government-regulation-first approach, and so it's just a different thing here. Let's step back and think about our experience with tax authorities and other things like this. You know, our tax authorities are miracle workers right now, given the technology they have, which is usually 10-plus years old. And so you think about the government side of getting notices resolved, and all of the other things that happen inside the Internal Revenue Service, you know, selecting audits, doing audits, analyzing transactions, that kind of stuff. If the government sector got AI, it could really be a huge catalyst for things becoming more productive. So again, there are significant issues in here: worries about protecting free speech, but also about synthetic media and AI-generated deepfakes. You know, you and I have seen a number of things, I think, in this year's Ripped from the Headlines presentation that we do as part of our tech conferences; I actually know a number of people that have been ripped off by deepfakes, and they are coming for you, too, with deepfake video and all kinds of other things like this. But there's also creating AI-ready scientific data sets, investing in AI-enabled science, supporting manufacturing, and making people more literate at this. And again, driving AI adoption in defense, but still protecting it from security risks. You know, one of the hardest things about technology is these ITAR regulations, the International Traffic in Arms Regulations, because there are certain things that are treated as munitions; encryption algorithms, for example, are treated as munitions. And so the question that I think is open right now is, to what extent are AI algorithms going to be shareable with people outside the country? And we don't know what the answer to that is. We know that we've had limits on the chips, and those limits are going to be rolled back a little bit, but, you know, we're not sure which way it's going to go here.

Randy Johnston  09:42

Yes, Brian. And as a matter of fact, as we're recording this, I think Jensen Huang from Nvidia is in the White House today, so there may be some adjustments that we will want to make on that. But the core here is the regulatory environment. And you've really called out the industries here: it's manufacturing, it's government, it's defense and so forth, getting the leverage and basically trying to keep the regulatory environment out of the way. And so there are some pieces around that. Now, the second pillar was about the American AI infrastructure. We're going to talk more about that toward the end of our time together today, but here they were trying to knock off permitting issues. We have an electric grid, frankly, that can't support AI, which is why we've been talking about nuclear power and other resources in Tech Update this year. They're trying to bring manufacturing back, witness the Apple announcement yesterday about adding another $100 billion, so they're up to a $600 billion investment in the US, but also doing high-security data centers. And Brian, I have been in some of those, and it's stunning how tight the security is. Then there's working with skilled workforces and cybersecurity and getting AI incident response. Some of these data centers are going to be massive job producers, but the amount of water and electricity it takes to get them built and running is pretty stunning, as we'll talk about a little later.

Brian F. Tankersley, CPA.CITP, CGMA  11:19

And the thing that I would throw out here at you, because I live in Tennessee, and even though Memphis is 375 miles away from me, it is in my own state, so I do hear a lot about Elon Musk's massive xAI data center in Memphis. And there's such a problem with getting electricity to it that they've been running methane-powered generators there, the same methane that they use for the Starship launches; they're using methane-based generators to generate the electricity out there. And there have been some complaints from the locals about that. Again, methane is just natural gas, but there have been some complaints and some concerns about it. You know, obviously this administration has a very different view of carbon than the previous administration did. And so it's interesting. Now, the good news about this AI infrastructure is that these small nuclear reactors seem to be just unstoppable at this point. There have been a number of designs that have been launched. Actually, one of the people that lives over the back fence from me actually works on a nuclear reactor in Oak Ridge, where a lot of this stuff comes from. And, you know, it's a very interesting time. Now, one of the major points here on this AI infrastructure is that they have significantly rolled back the regulations on permitting of AI data centers that exceed 100 megawatts, okay? And 100 megawatts is a lot of data center; it's basically the output of a pretty large power plant, maybe multiple large power plants. So it's a pretty healthy amount of electricity we're talking about here. And so trying to figure out where these things can be built effectively, and where we have all the infrastructure pieces needed, you know, the connectivity, the electricity, the water to cool the data centers, and all those things in the massive quantities required, is really a significant challenge, and so they're trying to work through that here.

Randy Johnston  13:47

Yeah. So that being said, notice this is largely ignoring cryptocurrency changes and requirements as well. And, you know, the good news is the AICPA will have their blockchain symposium on October 7, so we'll know more right then, I suspect. But this second pillar, I think, illustrates a lot of the changes being done on the infrastructure side. The third is in diplomacy and security. Now, Brian, I know you carry a lot of, you know, thinking about security; I like watching you present on it. But we both have kind of a cautious profile on AI security, just like we categorically make people aware of Intuit's position on AI and so forth. But what should we know about the third pillar here?

Brian F. Tankersley, CPA.CITP, CGMA  14:40

Well, realistically, we are going to have some limits on exporting of the technology in particular, as well as the chips here. There are significant concerns, because most of these chips are made by companies like TSMC that are in Taiwan. And, you know, China believes that Taiwan is a renegade province, an inseparable part of the whole of China, and so China wants to take over Taiwan, obviously, and if they did, it would have a huge impact on this. So America is trying to make significant investments in chips being built in the US, and I believe, in the initial announcement for the action plan, they actually talked about more of the chips being built stateside, as opposed to offshore, which is again a control thing here. They also came up with some new controls for some of these semiconductor manufacturing systems. Basically, they got the Dutch company that makes the chip manufacturing tools to agree not to ship any of their latest-generation technology to China, which is a pretty big deal. They do have some national security evaluation and some biosecurity tools in here, and they're particularly pursuing some tools to prevent the misuse of AI in biology to, again, create AI-based genetic engineering of bugs and things like that.

Randy Johnston  16:18

Yeah. And, you know, regardless of what you think of the origin of covid, you know, with the new variant, Stratus, which is officially called XFG, that's spreading very rapidly again. But also even things like the chikungunya, I can't say it right, the chikungunya mosquito virus. You know, these types of bio things are very confusing. And here's the real point as I see it: using AI to fabricate bioweapons is a real deal. And that, in the wrong hands, I think, could be a problem, whether it's done by the US or done by China or done by some terrorist.

Brian F. Tankersley, CPA.CITP, CGMA  17:04

But the good news for us is that all of those drugs that AI promised to create, that were going to solve all of our health maladies, they haven't come to market yet. So maybe this thing that AI was going to solve with biology, with biologics and with medications and with genetic engineering, maybe it's a little harder than everybody thinks it is.

Randy Johnston  17:26

But you know my old favorite saying from 2016: if you say it real fast, it sounds easy. Yeah. Now, any other key things, though, Brian, on these pillars? Because, again, I want to talk a little bit about data centers and security to wrap this session up.

Brian F. Tankersley, CPA.CITP, CGMA  17:49

Well, I mean, realistically, I think, you know, the current president is more of a deal maker, and so I think you're going to see more brokering of deals: more deal-driven policy, as opposed to policy-driven deals. So I think the strategy will evolve significantly as time goes on.

Randy Johnston  18:21

All right. Well, that said, let's talk AI investment for just a minute, because Meta has announced that they're going to build a super data center called Hyperion in Louisiana. It's going to be about the size of the island of Manhattan. The energy it consumes will be roughly the same as the country of New Zealand in a given year. So that's one hungry data center. They're also building a superintelligence lab, and they're hiring people with nine-figure salaries left and right.

Brian F. Tankersley, CPA.CITP, CGMA  18:51

Why, the salaries have absolutely blown my mind. So many acqui-hires going on, where Meta and others are hiring people away and paying just massive amounts of money to these people, because it's clear that they see an existential threat to themselves here, and they see an opportunity to drive their market cap enough that they can afford to pay somebody a billion dollars. You know, I don't know, it's an interesting time.

Randy Johnston  19:17

It is. And in fact, speaking of salaries, I think it's fascinating that, you know, AI engineers are rejecting compensation of one and a half billion dollars. Think about that for a minute. Hey, I'd like to pay you a billion and a half. Do you mind working on this? You know, I'm amazed how many people are turning down those offers.

Brian F. Tankersley, CPA.CITP, CGMA  19:42

offers. But that’s nice, nice work, if you can get it, I guess. I guess so.

Randy Johnston  19:46

Well, you know, the Wyoming AI data center is another great example. This data center is estimated to take the same amount of power as all the homes in Wyoming. And it's not only a power thing, though; Wyoming, as I recall, is a little dry, and to cool these data centers, a lot of times you need water, so you've got both a water and a power issue. And of course there's OpenAI's Stargate Project, which is, you know, a very extraordinary $500 billion deal, and there's already some contention around that. But according to Epoch AI, they believe that the optimal investment in AI is much bigger than the current investment, which is estimated at about $300 billion; Stargate would be next year. They think $25 trillion is needed to build these data centers. You know, we're talking massive, massive amounts of money here.

Brian F. Tankersley, CPA.CITP, CGMA  20:41

yeah, but you’re, you’re, you know, you’re trying to, you know, that’s, that’s, that’s, that’s something that’s just not, you can’t spin up that kind of spend that rapidly. I mean, there’s just too much you can’t build. You can’t, you know, you know you You’ve often said that, you know, you can’t, you can’t plant CIT, you know, you can’t plant 60 acres of corn and end up with it in one day, as opposed to an acre of corn in 60 days. You know, it’s, it’s the same kind of thing here that that again, it just takes some things. Just take a long time, and with the amount of infrastructure that’s required for water and electricity and and again, the data centers themselves and the security of those and the workers and all the construction costs, because these things have to be super hyper secure, you know, it’s just, it’s you just can’t spend that much money that quickly.

Randy Johnston  21:36

Yeah, understood. Well, we started this session with a little bit of a safety ranking on platforms, and I put Claude at the top of that list. And that's probably really a function of the constitution of Anthropic's Claude and the guidance of Dario Amodei and others in their group. And there is a fair bit of concern about this, but Amodei contends that the power that AI bestows will be safer in the hands of a democracy like America than an autocracy like China, and he has said that the relaxed exports of AI chips to China, in response to Nvidia's Jensen Huang's lobbying, may be an enormous geopolitical mistake. Now, bottom line, friends, is Brian and I don't know; we're just watching this stuff, trying to say, what the heck is going on? And I thought another interesting quote from Dario was that no bad person should ever profit from our success. That's a pretty difficult principle to run a business on, but that's what Anthropic is doing right now. So, I know you've watched a lot of this too, but any other comments here?

Brian F. Tankersley, CPA.CITP, CGMA  22:55

You know, the thing I would say here is Claude seems to be gaining a lot of mind share, especially, like you mentioned earlier, around coding and in the enterprise. But again, it's still a very distant competitor to the leaders; numbers one and two are still ChatGPT and Microsoft Copilot. And so, again, this is a market where there's going to be huge amounts of investment from huge amounts of different places, and a lot of people that want to get stupidly rich, and a lot of people will get stupidly rich, but it's going to take a while for the technology to evolve and then to consolidate into a number of offerings.

Randy Johnston  23:41

So with that said, we just want to call out four ways that I think powerful AIs could go wrong. And this is from The Economist; it actually originated with Google DeepMind's April paper by Shane Legg, so he's the guy that's really behind this, and he flagged four powerful ways that AI could go wrong. And we've already talked about things like bioweapons or bombs or pogroms that could all happen. But we also want to call out that both America and Britain have AI institutes, which formerly were called AI safety institutes, but they were renamed after JD Vance said earlier this year, we don't want safety. Okay, well, then we'll just be an AI institute, is really what happened there. But we are seeing some models, like Grok from xAI, which was out for a couple of weeks doing mischievous things, I would say, as well as OpenAI's ChatGPT, which was out for two weeks encouraging people to kill themselves, for example; both of those have happened in the 90 days prior to us recording this podcast. But the four key things that Shane Legg called out are misuse, misalignment, mistakes, and structural risks. And I think his view on this is actually pretty sane. You know, I try to be logical about these things, and I want everybody to profit and be successful, but I also don't want the bad guys to win. And so, you know, I'm going to let you comment here for a moment, Brian.

Brian F. Tankersley, CPA.CITP, CGMA  25:19

Brian. And I think it’s, I think it’s important here to also acknowledge that grok is the, you know, grok is the no boundaries version, you know, the ultimate, the the Ayn Rand, you know, objectivist, do whatever you need to do, kind of thing. And again, I mentioned that to you here because, you know, with, with many of these, when you try to create deep fakes, with with world leaders in them, they will say, I’m not allowed to do that. Okay? Grok, on the other hand, will do things all the time, you know, if you want to see, if you want to see a picture of, you know, Donald Trump and Justin Trudeau kissing, it’ll create it for you. Okay? And so this, this whole thing that we’re, we’re talking about in here, is that there are different approaches. Now, one thing I want to call out to you here is that we did not include grok in the list of AIS for accounting, okay? Because grok is, well, I don’t know grok is kind of the the speed shop grok is to grok is to AI as a speed shop, is to is to school busses. Okay? It’s not that you don’t want a school bus that can potentially go fast, but that’s not the application you’re solving for, okay? And so, so rock is, again, the X AI product in grok. I think there’s going to make a lot of interesting innovations in technology, maybe, but I’m grok is just not always suitable for work, and so we just have to acknowledge that and be aware that that’s one of the that’s one of the design it’s not a bug, it’s a feature that’s one of the design features for that product. And under no circumstances would I go near grok with anything sensitive

Randy Johnston  27:00

that’s kind of an interesting comment on that platform. Brian and I thought it was also interesting that anthropic is attempting to license Claude to the federal government for $1 I don’t know if the deal will get done or not, but that’s kind of the positioning that they’re taking. And in the big picture, this is an escalating war between these platforms, and they’re they’re kind of thinking it’s going to be a winner take all. And right now, we’ve got about 100 choices, and we’re talking routinely about four or five of the dominant choices, and we covered the percentages at the beginning of this podcast, so you’d have at least some statistical basis of where the platforms are at right now. So all I want is everybody to be successful and the bad guys not to have as much access as the good guys. And for all of us that are practicing accounting, doing the best we can for our clients without compromising their information. So, Brian, any other parting thoughts for our audience today?

Brian F. Tankersley, CPA.CITP, CGMA  28:06

You know, I think there is AI regulation coming, but I think you're going to see less federal AI regulation than you've had in the past. I think the other thing that we have to just kind of bear in mind here is that we're going to have fewer guardrails than we would have had, had there been a Harris administration, and Trump is, you know, they're going to go hard at trying to win this thing, okay? And we're going to have some bumps and problems that we have to deal with. But again, the difference in approach is that, in the previous approach, it looked like they were going to effectively anoint three or four different players, and then they were going to make it hard for everybody else to compete with them. With this, it's an all-out, no-holds-barred thing. It's not looking like the boxing match that the previous president wanted; it's going to look more like, you know, a tag-team octagon match to the death. So I guess it's going to be Fight Club, and I guess our chief regulator is going to be named Tyler Durden, I suppose.

Randy Johnston  29:18

suppose. Well, I hadn’t thought about this as a wire cage match, but I guess that is possibility here. So I would be remiss though, if I did not call out that a year ago, August one of 2024 was when the AI European Union AI Act went into force. And in countries around the world there are AI regulations in place. So if you’re doing business internationally, you may fall under those regulations. So for my CPA firms and businesses that deal internationally, I usually use the AI European Union Act as my guidance. And it’s pretty sane. You know, sometimes. EU goes over the top, but what they ask there isn’t over the top, as I’d see it, and so I would be remiss if I wouldn’t have called that out. So Brian, it’s always a pleasure to talk with you on the podcast here, and I hope you our listeners got to learn something new today. We appreciate you listening in and we’ll talk to you again soon on another accounting Technology Lab. Good day.

Brian F. Tankersley, CPA.CITP, CGMA  30:26

Thank you for sharing your time with us. We'll be back next Saturday with a new episode of the Technology Lab from CPA Practice Advisor. Have a great week.

= END =
