AI company’s CEO issues warning about mass unemployment

Well-known, respected tech CEO Dario Amodei, who heads a cutting-edge AI company called Anthropic, is raising alarms tonight about AI’s potential impact on employment, an impact that could soon be felt. While he says AI can lead to incredible advancements, like medical breakthroughs, and boost the economy, he believes it could also lead to half of entry-level white-collar jobs disappearing and 10 to 20% unemployment in the next one to five years. It’s something that’s been talked about within tech circles for a while, but rarely so starkly and so publicly. I spoke to him just before air about that and other implications he thinks AI may have on society.

Dario, you’ve said that AI could wipe out half of all entry-level white-collar jobs and spike unemployment to 10 to 20%. How soon might that happen? Well, first of all, thanks for having me on the show. But just to back up a little bit, you know, I’ve been building AI for over a decade, and I think maybe the most salient feature of the technology, and what is driving all of this, is how fast the technology is getting better.

A couple of years ago, you could say that AI models were maybe as good as a smart high school student.

I would say that now they’re as good as a smart college student, and sort of reaching past that. I really worry, particularly at the entry level, that the AI models are very much at the center of what an entry-level human worker would do. And so it’s hard to estimate exactly what the impact would be. And there’s always this question of adaptation, and these technology changes that have happened before. But I think what is striking to me about this AI boom is that it’s bigger and it’s broader and it’s moving faster than anything has before.

And so compared to previous technology changes, I’m a little bit worried about the labor impact, simply because it’s happening so fast that, yes, people will adapt, but they may not adapt fast enough. And so there may be an adjustment period. You are running an incredibly important company in AI.

And you know this better than anybody, or as well as all the names we know, Sam Altman, Elon Musk, and others who are working in AI. Now, why are you raising the alarm?

Because it’s not necessarily, I would think, in your best interest, because a lot of the messages we hear publicly from some AI CEOs are a little bit more calming, like, these agents are going to be great in your life, and yes, there may be problems, but, writ large, this is a fantastic thing. Yeah. I mean, I think the reason I’m raising the alarm is that I think others haven’t as much.

And I think someone needs to say it, to be clear. And I recognize it’s always a difficult balance, right? I’m aware of my position, that I’m building this technology while also expressing concerns about it. And the reason I’m doing both of those things is, one, I think the benefits are massive.

And we need to find a way to achieve the benefits and mitigate or prevent the harms. The second thing I would say is, look, there are, as you mentioned, six or seven companies in the US building this technology, right? If we stopped doing it tomorrow, the rest would continue. If all of us somehow stopped doing it tomorrow, then China would just beat us. And I don’t think China winning in this technology helps anyone or makes the situation any better.

I do want to read some of what Sam Altman, your former boss at OpenAI, said in September. He was saying that AI can cause, in his words, significant change in labor markets, good and bad. He went on to say that most jobs will change more slowly than most people think. And then he said this, and I want to read it word for word. He said many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past wishing they were a lamplighter.

If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable. For younger people, I should say, lamplighter was actually a job; people went around lighting lamps on the streets. I think a lot of people may not even know what a lamplighter is. But do you think Sam Altman is wrong?

Yeah.

So, you know, I think there are some things I agree with about that, and then there are some things that I think are too optimistic. I definitely agree that AI, if we succeed, can actually grow the pie, right? I wouldn’t be surprised if economic growth is much higher than it is today as a macroscopic phenomenon, that AI allows society as a whole to be more productive.

Well, let me just say, because you’ve previously described a future where cancer is cured, the economy grows at 10% a year, the budget is balanced, and 20% of people don’t have jobs.

That’s exactly what I was getting at, right, where I agree with all the positive potential. I think that part is wrong.

But, you know, I think the quote you just flashed is maybe too optimistic, maybe too sanguine, about the ability for people to adapt. People have adapted to past technological changes. But I’ll say again, everyone I’ve talked to has said this technological change looks different. It looks faster, it looks harder to adapt to.

It’s broader. The pace of progress keeps catching people off guard. And so I don’t know exactly how fast the job concerns are going to come.

I don’t know how fast people are going to adapt. It’s possible it’ll all be okay. But I think that’s too sanguine an approach. I think we do need to be raising the alarm. I think we do need to be concerned about it. I think policymakers do need to worry about it.

If they do worry and they do act, then maybe we can prevent it. We’re not going to prevent it just by saying everything’s going to be okay. Do we as a society even understand the potential inequalities that this may amplify? And also the impact just on, well, I have little kids, a five-year-old and a three-year-old.

What do they grow up aspiring to? If machines can do pretty much everything better, what does it do to initiative or drive or striving? I don’t know the answer to that, but it concerns me. Let’s take those questions one by one. Those things do concern me.

But I do have thoughts on them. In terms of inequality, I’m worried about this. There’s an inherent social contract in democracy where ultimately the ordinary person has a certain amount of leverage because they’re contributing to the economy.

If that leverage goes away, then it’s harder to make democracies work, and it’s harder to prevent concentration of power. And so we need to make sure that the ordinary person maintains economic leverage and has a way to make a living, for our society, for our social contract, to work.

And that’s why I think it’s important. During testing, your company’s latest chatbot, Claude 4, was capable of what was described as extreme blackmail. Specifically, I understand it threatened to reveal an engineer’s extramarital affair when it was told it would be taken offline and replaced. This was a simulation. Were you surprised by that?

That freaked me out when I read that.

What does that mean? Yeah. So just to be clear and to put this in context, this was, as you said, a behavior that emerged during kind of extreme testing of the model. So if you were to make an analogy to cars, this is like, well, you deliberately put the car on the iciest road possible, you mess with the brakes, and you do that so you can really stress test the thing, the way you would with a crash test dummy.

So this isn’t something that the model does in practice, in actual usage. But the reason we stress test the models this way is that it’s the best way to test for and prevent problems in the real world. Right. I see why you’re doing this, I think.

Yeah. Exactly. You want to turn every adversarial condition up to the max all at once. And it’s only by testing that way that you get the model to not do that in the real world.

So I wasn’t surprised by these behaviors.

But this is an example of how we have to be very careful in how we take control of AI systems. I mean, the obvious question is, does that portend AI becoming self-aware, and could that lead to dire consequences? I certainly don’t exclude the concept. We have a couple of people who are actually working on this topic. You know, as crazy as it sounds, do AI systems have morally significant feelings?

I would guess that they don’t right now. But when you run an AI company, these crazy-seeming questions are things that you study.

So I would say this is probably not the case now. But again, the field is advancing so fast that I don’t think you can exclude even crazy-sounding things like that. What are practical steps people should take to be prepared?

I mean, ordinary citizens, me, lawmakers. What do you advise?

Yeah. Well, let’s take it one by one. For ordinary citizens, I think it’s very important to learn to use AI, to learn to understand where the technology is going.

If you’re not blindsided, you have a much better chance of adapting. There’s some better world, at least in the short term, at least for now, where we take it bit by bit, where everyone learns to use AI better. And that speeds up the adaptation that is definitely going to happen eventually, but it will be less painful if it happens quickly. For lawmakers, I would say really, really be aware of these issues.

And we’re trying very hard to produce data on the economic impact, not taking off the table some fairly radical notions. Like, I wouldn’t exclude the notion of levying taxes on AI companies, right? If AI creates huge total wealth, a lot of it will by default go to the AI companies, and less to ordinary people.

And so it’s definitely not in my economic interest to say that, but I think this is something we should consider. And I think it shouldn’t be a partisan thing.

Dario, thank you so much. It’s really fascinating. Yeah, thank you for having me, Anderson.