Mythos: The AI Strikes Back – Accounting Technology Lab Podcast – May 2026

May 6, 2026



Brian Tankersley

Host


Randy Johnston

Host

In this episode of the Accounting Technology Lab, hosts Randy Johnston and Brian Tankersley explore the alarming implications of Anthropic’s experimental AI security model known as “Mythos.” The discussion centers on how advanced AI systems are dramatically accelerating cyberattack capabilities: vulnerability discovery, exploit chaining, and zero-day attacks. The hosts compare Mythos to a “cybersecurity nuclear bomb,” referencing reports that the model identified vulnerabilities at far higher success rates than prior AI systems.

The Accounting Tech Lab is an ongoing series that explores the intersection of public accounting and technology.


==

Transcript

(Note: There may be typos due to automated transcription errors.)

SPEAKERS

Randy Johnston, Brian F. Tankersley, CPA.CITP, CGMA

Brian F. Tankersley, CPA.CITP, CGMA  00:00

Welcome to the Accounting Technology Lab, brought to you by CPA Practice Advisor, with your hosts, Randy Johnston and Brian Tankersley.

Randy Johnston  00:09

Welcome to the Accounting Technology Lab. I’m Randy Johnston with my co-host, Brian Tankersley. We are pleased to have you along today as we discuss Claude, Anthropic’s Mythos. Now, this new AI platform was held back by the Anthropic company, and I believe, in fact, it was properly done. So now, as it turns out, initially they said that they’d released this to 40 companies. The quantity at the time of our recording is 57, and the reports are really stunning in terms of the capabilities of the platform. So Brian, I know you’ve done a bit of research on this. I’ve looked at it too. What would you like our listeners to know about Mythos and what Anthropic is doing here?

Brian F. Tankersley, CPA.CITP, CGMA  00:57

Well, with the information revolution, when we had bulletin boards, we had people that had things like the Anarchist Cookbook that showed people how to make pipe bombs. We had script kiddies. We had all kinds of things like that, and everybody was worried about how that was going to affect our discourse. And we made it. We now have AI that has the capability of finding all kinds of security vulnerabilities, some known, some unknown, and looking for new ones, at a marginal cost that’s fairly low. You know, the thing about AI and its impact on the way you do things is, if I have a question that’s in the back of my mind, I simply will go get an agent, kick it off, and have it solve the question I was asking, at a marginal cost that’s much lower than me having to do it myself. And so the thing that we have to get used to here is that it’s going to cost the bad guys much less to develop new exploits, to write code for new exploits, and to penetrate new systems. So anyway, there are actually stories in a couple of national security blogs, because I think this does have national security implications. There was a big article in The New York Times in, I guess, the second week of April, and then right after April 15 there was a national security blog that referred to it as Anthropic’s nuclear bomb, and I don’t think that hyperbole is that far out, really. So here’s what’s happened. In April, Anthropic launched this Claude Mythos preview, a frontier model especially good at computer security tasks. You know, when Randy and I are out meeting with vendors and talking to software companies,
all the software companies are saying that Claude Code is really a revolution in the way you develop software, and so obviously Anthropic was trying to create a model that was more efficient at generating code, and something that they could scale to support the booming number of people trying to use Claude. The company, Anthropic, said the model was strikingly capable at computer security tasks and described it as a watershed moment for security. Okay, the public write-up that Anthropic put out said that the same capabilities that help patch bugs can also find, weaponize, and chain vulnerabilities faster. You’ll recall, in the past, we’ve talked about AI-generated malware, and a lot of the models have speed bumps in them, limitations to try to stop people from creating malware with applications. Unfortunately, when you get into computer security and looking for vulnerabilities, one man’s pen test is another man’s hacking. So again, what Anthropic did responsibly is they pulled the model and said they’re not going to release it immediately, but they set up a project called Project Glasswing, and they gave the model to a number of big hardware and software companies. So again, here’s The New York Times. They said that we’re entering a period, maybe entering a period, where elite exploit development becomes far more accessible. Just like with the script kiddies back in the 90s: people with strong computer skills wrote scripts, and then people that didn’t have great skills could run those scripts and get things done. Kind of like people do today when they’re doing pen testing with some Linux distributions, like Kali, which has built in some tools to make things a lot easier.

Brian F. Tankersley, CPA.CITP, CGMA  04:34

So

Randy Johnston  04:34

Yeah, to your point on all this, just to help our listeners get a sense of how strong this is: the prior Claude model, Claude Opus 4.6, had about a 0.8% chance of discovering a vulnerability and putting the parts together, chaining it, if you would, to actually create an attack product. Now, we knew, and have reported in prior labs, that the time to create these tools has dropped to less than two hours. So when a vulnerability is discovered, the hackers can actually attack you within a couple of hours, which is why we’ve converted our recommendation to patch immediately, even though it’s going to break some things upon occasion. This new Mythos product is reported to have a 72.4% success rate. In other words, it finds and takes advantage of these vulnerabilities at a very high rate compared to any prior model.

Brian F. Tankersley, CPA.CITP, CGMA  05:37

So roughly 100 times more likely to discover that than the previous one. Two orders of magnitude.

Randy Johnston  05:44

Yeah. And so this gets even more interesting, because since all this came out about Claude Mythos, of course, Google has released their next strategy and all their new TPU version eights. And of course, you’ve got Mark Zuckerberg back in the lab, coding with his friends on the Meta models, and OpenAI has released their product of this class. So all of a sudden, you’ve got at least the three major competitors all building more capable models with similar capabilities, not as strong as Mythos from what the reports are, but still strong. And of course, you’ve got, you know, Musk and Sam Altman fighting in court as we’re recording this. So you know the defense,

Brian F. Tankersley, CPA.CITP, CGMA  06:33

and that’s going to be a very interesting court case. I think,

Randy Johnston  06:36

I hope we’ll report on it in a future lab. And again, we’re not here to bring you the news, but I’m just watching all of this maneuvering, because we could talk about Grok and what some of its capabilities are too, or DeepSeek. But we want to stay with this Mythos reckoning, because, in fact, the nuclear bomb piece is not a far-fetched item. The other thing that I’ve been reading in the meantime, Brian, is about bioterrorism, and that with the model they’ve released to the 57 companies, they’ve been able to create bio-attack tools. And so, you know, I’m not trying to go back to COVID, but there’s a lot of people that thought that was an attack tool. I don’t know. But I do know that you can create bioweapons with the current AI products, and Mythos made it even greater.

Brian F. Tankersley, CPA.CITP, CGMA  07:35

Yeah, it really is a force multiplier for whoever’s using this. And so it really means that we’re going to have to get our head down, elbows out, and start really working more and more on security, and we’re going to have to be much more timely on the patching that we’re doing. You know, Mythos reportedly found zero-day exploits in every major browser and operating system, even a 27-year-old vulnerability in OpenBSD. Just a reminder that OpenBSD is considered to be one of the most secure Unix variants out there, and so to find a 27-year-old vulnerability means that this is really looking hard, and things are going to come from places we didn’t expect. And

Randy Johnston  08:17

one of the reports on that, Brian, and you’re the one, I think, that told me about this: something like 5 million attempts have been made to find bugs in OpenBSD, and this one had escaped detection until Mythos came along.

Brian F. Tankersley, CPA.CITP, CGMA  08:33

I have a childhood friend that does pen testing for really large enterprises, and a few years ago I was talking to him, and I said, you know, if I’m trying to run something that is hyper-secure, what should I run? He said, oh, OpenBSD. It’s the most secure product out there. It’s got the most people looking at it. I’d run BSD. So this is BSD, the Berkeley Software Distribution family of Unix. I’m just telling you this because to find something in that really says something. Anthropic also said Mythos identified a 16-year-old bug in the FFmpeg video framework, and Hacker News summarized the results as thousands of high-severity zero days across major systems. Now, we used to freak out when we had a single zero day in a month disclosed in the security community. The concept of dumping a thousand high-severity zero days onto the market all at once, and forcing patches to all that software, is really the cyber equivalent of nuclear war, because your systems are not going to be able to be reliable underneath that.

Randy Johnston  09:43

And you know, to that zero-day point, Brian, the Microsoft Patch Tuesday in February had record quantities of these zero days dumped. If I recall the number, it was six zero days with 58 vulnerabilities, and that was considered super extraordinary. And when you just look at what happened with Patch Tuesday in the first four months of the year, January through April, the zero days are increasing in almost all those months. And don’t quote me on this, but I think January had three zero days, and I think March had two, and I think April had two, so that kind of gives you the run rate.

Brian F. Tankersley, CPA.CITP, CGMA  10:33

yeah, and 10 years ago, when you had one zero day, it was a red letter day that everybody paid attention to.

Randy Johnston  10:40

Yeah, exactly. And so notice Brian is saying a thousand. Okay, whoa, wait a minute. And the Citibank reports on this were pretty stunning, because they thought almost all of their bank systems would be compromised if Anthropic’s Mythos went to market. We’ve talked about that in the past too. We’re not conspiracy theorists. We’re just, you know, accounting technicians that watch stuff, and it’s been clear to us that the banking systems could be compromised. And just consider what would happen if all banks and ATMs and credit cards couldn’t function. So just ask yourself today: how long could you function without any cash, without any credit, without any debit cards?

Brian F. Tankersley, CPA.CITP, CGMA  11:32

So it’s funny you say that. I’ll share a personal story. It was 1989, and they had the World Series earthquake in San Francisco, you know, when the A’s and the Giants were playing in the World Series. I was

Randy Johnston  11:45

Actually, I was there. I was in,

Brian F. Tankersley, CPA.CITP, CGMA  11:48

okay, and I’m on the way to an evening lab for an accounting class at UC Davis, where we were doing practice sets. Okay, so this is that far back. And I get there and I see the building shaking and everything, and I didn’t think anything of it. But it was the time of the month when my parents, who lived 100 miles away, put money in my bank account so that I could do what I needed to. So I was pretty much out of food, out of money, out of everything. And when the earthquake hit, Bank of America, Wells Fargo, all the big banks basically shut down, and you couldn’t use your ATM card or write checks for about three days. Well, folks, I had water chestnuts for lunch one day, because that’s what I had in the cabinet. Okay? And I’m just telling you this because this is something that we never thought could happen, but it got very strange in Davis, California, there, because there were a lot of people that had plenty of money in the bank account and just couldn’t access it. And my situation was, I didn’t have any money in the bank account, I couldn’t access money in the bank account, and I couldn’t get anybody to put any in there. So, kind of like this Iran thing has created a gas price and diesel price crisis because of the change in the system, it’s the same kind of thing: we may have some situations where the banking system has a few hiccups as a result of this. So

Randy Johnston  13:24

in all the times I’ve known you, Brian, I haven’t ever heard the water chestnut story, so I think I’ll put that on the menu with you, next to Kentucky Fried Chicken. There’s another story, friends, on that one, which we won’t tell on this episode, but it’s

Brian F. Tankersley, CPA.CITP, CGMA  13:38

and I definitely won’t eat

Randy Johnston  13:39

and it’s kind of like my partner that founded NMGI with me, who got stranded in a snowstorm and would never have cream soda and Oreos again. So I do understand, upon occasion, how that happens. But back on topic here: this can be catastrophic. So I am completely respectful of Dario Amodei, and actually his sister, Daniela, is the president of Anthropic, which is a little unusual, but that’s the structure. They started the business with a

Brian F. Tankersley, CPA.CITP, CGMA  14:13

any man that believes he can tell his sister to do anything is wrong.

Randy Johnston  14:20

She’s also, you know, helping run this company, I think, in a good way, with the constitution that they have. And generally, Claude is doing a lot of the AI controls correctly, in my mind, and there’s a lot of these other AI providers that, not so much. But that’s maybe a story for a different day, too. So this breakthrough is really interesting. Talk about the benchmark signals if you can, Brian.

Brian F. Tankersley, CPA.CITP, CGMA  14:49

So this is kind of by the numbers here. Anthropic reported 10 full control-flow hijacks against fully patched OSS-Fuzz targets in its internal benchmark. And what this means is that we had machines that were fully patched, that we thought were completely good to go, and it found 10 different hijacks where it could go in and do things. It generated 881 working JavaScript shell exploits for patched Firefox vulnerabilities, versus two for Opus. And Claude’s sense of what is a good exploit, its judgment, is actually pretty good: when they were looking at vulnerabilities, human reviewers matched Claude 90% of the time on the reports reviewed, and they were within one severity level 98% of the time. So it has a good feel for what can work and what can’t. Again, we’ve got lots of things by the numbers up here, but what this means is that discovery time is going to compress. Vulnerability discovery, proof-of-concept generation, and exploit refinement can all be almost outsourced to Claude. Again, just like we’ve seen with tax research: the real advantage of AI tax research is that you can consider so many more different scenarios and so many more different fact patterns to put against things, so you can get a better decision and assess the risks much better. Same kind of thing here. Instead of testing one exploit in a few hours, they can test dozens or hundreds of exploits. And so the likelihood of success is much higher, and the likelihood of surprises is much less if you’re the attacker, and much more if you’re the person being attacked. But what it also means, just like Randy said earlier, is that the default is going to have to be that you patch immediately.
I don’t know that I’m not going to try to find a way to patch every hour instead of once a day, kind of like we did with antivirus. We used to get antivirus updates once a week, then we went to once a day, then we went to once an hour. And, you know, I don’t know that I’m not going to try to check for those kinds of things. But again, War on the Rocks, the national security blog that is very influential in policy circles, said that it was a strategic inflection point, because it means that we don’t know how quickly these frontier-model cyber capabilities diffuse into state, criminal, and commercial ecosystems. So at what point do the North Koreans and the Iranians, or al Qaeda, or any other bad actor that wants to disrupt our civil society, at what point do they go in and deploy these things? Now, again, I’m not trying to scare you to death, but I want you to know that your defensive posture for your security needs to change. Okay? The arguments you’re having with people about two-factor authentication, about whether or not it’s necessary: that party’s over, okay?

Randy Johnston  17:55

And rather than just governmental espionage, think about corporate espionage: a competitor using this against you, or a client being attacked by a competitor. What do you do there? So, I mean, this is very far-reaching. Sorry to step on your comment there, Brian.

Brian F. Tankersley, CPA.CITP, CGMA  18:15

No, no, I completely agree. This is from a RAND Corporation publication on charting a course toward cybersecurity. And one of the things about this is the whole timing of a security vulnerability. What’s really happening in this case, and let me get this opened up here, what’s really happening is that instead of it taking days to do this, it’s now becoming hours, okay? And so that’s the thing that’s really happening: the timing of this is happening much faster, and you’re going to have to react much faster. The other thing that’s happening is that for a hack to really work out, we’ve got three different things that have kind of got to be daisy-chained together. Okay? First, we have to go out and find the vulnerabilities, the weaknesses in the systems that can be exploited. Then we have to find the ways to exploit them. It’s not just knowing that there’s a weakness; it’s finding the weakness and using it to your benefit. And then we have to move laterally and take over additional parts of the system, so we have to extend further. And the thing that AI is doing is breaking down the two walls in between these three steps, making it possible to try attempts at scale with a much lower marginal cost. Which means that we’re going to have to be much more careful, and things like pen tests may become much more important in the near future.

Randy Johnston  19:49

So many of you have advisory services for cybersecurity. In fact, I talked to a firm that is just launching into that area. And I want you to consider the risk to the firm in providing that particular service, because your cybersecurity team is going to have to be really on top of it in the very near term. I mean, they’ve always had to be on top of it. But, you know, as George Takei of Star Trek fame would say, oh my, this is going to really change things.

Brian F. Tankersley, CPA.CITP, CGMA  20:25

Well, and the thing about it is, you know, who’s going to work Christmas to watch the vulnerabilities? Something as innocuous as taking a weekend off could be catastrophic if you don’t have proper SIEM tools to monitor what’s going on in the environments and alert you if something strange happens.

Randy Johnston  20:48

And of course. You were correct, and I hadn’t even considered that, though we’ve talked about it before: when are you going to get attacked? Fridays before a holiday. That’s a real common timeframe, because nobody looks at it for three days or, you know, whatever. And yeah, we’re going to have to have tools that are working all the time. So Anthropic also rolled out a project that I thought was very wise, called Glasswing. Now, what they did here was they got a group of companies together. There are about 40 original players in this, including big names that you would respect: Apple and Cisco and CrowdStrike and Palo Alto, banks like JPMorgan, and Microsoft was in here, and Nvidia was in here. What I noted is that pretty much everybody I respected that was good at security was on the list, but notably absent were Meta and OpenAI. And so that was interesting too. Whether they were invited or not, I don’t know the politics behind it. That’s not the deal, but this is

Brian F. Tankersley, CPA.CITP, CGMA  21:58

I could imagine that Anthropic wouldn’t want two major AI competitors looking at their cutting-edge model.

Randy Johnston  22:06

I get it. So the collaboration here was really kind of a security collaboration, and Anthropic called this an urgent attempt to set up defenses. They said they would provide up to $100 million in usage credits, that would be tokens, if we want to say it that way, and $4 million in direct donations to open-source security organizations. One of the current ways of thinking about token usage in the Bay Area, and frankly in New York and other places where there’s a lot of AI in process, is: whatever you pay an employee, let’s say it’s a quarter million dollars, they’re expected to use at least double their salary in tokens. And that’s a bunch of tokens. You know, where Gemini’s tokens are about 75 cents-ish, to get $500,000 worth of tokens used in the same type of time frame, that’s a bunch.

Brian F. Tankersley, CPA.CITP, CGMA  23:06

But I will tell you that I have Facebook friends that have multiple Claude Max subscriptions. And the reason they do is because they hit their token limit around lunch on one of those Claude Max plans when they’re really cranking, and they have to get to a second one, and they say sometimes they have to go out and buy tokens on the market and pay more for them. You know, I talk to a founder of a software company most Mondays, and he actually said that one of the benefits of having the Max subscriptions is that you can get those API keys, and then you can run them through whatever engines you’re using to generate code or whatever. And he says that you can get $600 worth of tokens for a $100-a-month subscription. So, you know, that pricing will adjust. But I’m sharing this with you more than anything because this use of AI is changing, and it’s changing software and technology in ways we didn’t see coming.

Randy Johnston  24:11

Yeah, and you know, to that point, I’ve started claiming, possibly incorrectly, Brian, that instead of having a subscription at $30, you know, for Copilot, or $200 for ChatGPT, and so forth, in other words, a per-user subscription, what will actually happen is we will buy tokens and run agents, so we won’t need the chat interfaces for as many people. Because frankly, that’s not a very good use of money or tokens or time if you build the agents correctly. So that’s a little different way of thinking about it, because in prior labs we’ve talked about Model Context Protocol, and we’ve talked about agents. And if you can see a world where you have agents that go out and do your work, they will consume tokens. But a lot of this usage of AI becomes routine when you do it with agents.

Brian F. Tankersley, CPA.CITP, CGMA  25:07

Well, and I will tell you that I’m spending enough now, because I have a business version of OpenAI, I have Claude Pro, I have Copilot, I have a ton of different models, you know. And the thing about it is, I think I’m starting to get to the point where I may be better off just picking one engine and getting the tricked-out version that gives me tons and tons of tokens and going from there. I mean, I have Perplexity too, that I pay $20 a month for, and sometimes I throw some money and tokens at it to try to figure out some issues. So, you know, it’s


Randy Johnston  25:47

I’m really thinking, Brian, to that point, that providers like OpenRouter, which allow us to use a marketplace of tokens and buy tokens at volume and so forth, probably are a reasonable way to go. But, you know, we’ve talked about this starting back in 2019 between us, that we were getting what was at that time SaaS exhaustion. And in effect, you’re getting kind of AI subscription exhaustion, because we’ve got too many different platforms that we’re trying to use, you and I. I get that, and we do recommend that people run off of two or three different platforms, for reasons. You know, in Tech Update this year, we’ve got at least 10 different accounting tasks that you might do, and there’s no one engine that rules them all right now.

Brian F. Tankersley, CPA.CITP, CGMA  26:39

And we’ve not even talked about, you know, the stuff that is vertically targeted at the accounting industry. There could be 10 more of those that we have.

Randy Johnston  26:50

You know, sorry, in a separate lab we’re going to talk to you about new-generation document management. And it’s pretty clear right now that Gemini is doing a better job of document extraction, as an example. But things change over time, so we want you to keep your wits about you, is really the idea. So, Brian, I might ask you to bring it home on Mythos, because, you know, this has been kind of a doom-and-gloom session, and you and I are usually much more upbeat.

Brian F. Tankersley, CPA.CITP, CGMA  27:18

Well, what we’re really saying here is that, you know, I was reading about military strategy the other day, and one of the things that I think Eisenhower said was that a bad plan today is much better than a perfect plan tomorrow. And it really strikes me that cybersecurity has really become more of an engineering speed contest, okay? Your time to remediate and to solve security problems is much less than it used to be. And if you’re stuck on slow manual processes and multiple human review cycles, where everybody gets six weeks a year of vacation and you have a lot of technical debt, you’re going to have more and more risk associated with this. Okay?

Randy Johnston  28:01

Makes great sense. Well, you just brought me back to old home week on lots of things today, because now I can say I’m from Kansas, I like Ike. And it turns out, you know, that I’ve had the privilege of meeting Eisenhower, and had a lot of friends that were friends with Eisenhower, so I’ve talked to him many times in the past. Well, there it is, another Accounting Technology Lab. We’re so glad you listened in, and we’ll see you again in the near future. Good day.

Brian F. Tankersley, CPA.CITP, CGMA  28:29

Thank you for sharing your time with us. We’ll be back next Saturday with a new episode of the Technology Lab from CPA Practice Advisor. Have a great week.


= END =
