
Diary of a CEO, (Part 3) We Are Being Gaslit By AI Companies

Advanced 2 English lesson for reading practice


2. AI Whistleblower: We Are Being Gaslit By AI Companies

Host:Which might be the truth. I mean, that is actually what they say. This is what Dario Amodei famously does. He's like

Karen Hao:He does that, but the others... Sam's not doing that as much anymore.

Host:Yes. And it's because, you know, it goes back to how each of them kind of distinguishes themselves a little bit, as the brand that they need to project. Do you think any of them have a stronger moral compass than the others? Cuz I think Dario often gets the credit for having more of a, you know, more of a backbone, and being more conscious of implications.

Karen Hao:He does get a lot of credit for that.

Host:He's from Anthropic, the company behind Claude, for anyone that doesn't know.

Karen Hao:I don't think the answer to that question truly matters, because to me,

even if you were to swap all the CEOs for someone that people would say is better at running these companies, it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies, and the people running these companies, get to make decisions that affect billions of people's lives around the world, and those billions of people do not get any say in how it goes.

Host:Those people, they can go to the polls, right? So, if the public are sufficiently educated, they can go to the polls and pick a leader that says they're going to legislate or pass laws or try and pass laws.

Karen Hao:Yes. But at the speed and pace at which these companies operate, and at their sheer scale and size, they're also able to spend extraordinary amounts of money, hundreds of millions in this upcoming midterms, to try and kill every possible piece of legislation that gets in their way and craft legislation that would codify their advantage. And so, to me, I think sometimes as a society we obsess a little bit over: are these leaders good or bad people? And to me the bigger question is: is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision-making power in the hands of the few? Because no person is perfect. I don't care who is at the top of these companies; they're not going to have the ability to make decisions on behalf of so many people around the world, who live and talk and have a culture and history that are fundamentally different from theirs, without things going wrong. And so that is why throughout history we've moved from empires to democracy. It's because empire as a structure is inherently unsound. It does not actually maximize the chances of most people in the world being able to live dignified lives.

Host:I'm going to try and take on their point of view. So, this is me playing devil's advocate. Okay. But Karen, if the US doesn't continue to accelerate its research with AI, at some point China's model is going to become so smart and intelligent that we're basically going to have to rent it off them, and, you know, they'll get the scientific discoveries. They'll discover the new era of autonomous weapons, and we will be their backyard. And logically, that argument does appear to be pretty true.

Karen Hao:No, it's not.

Host:If we scale up, if we just imagine any rate of change with this intelligence, at some point we're going to come to a weapon that could theoretically disable all of the United States' electricity, their weapons systems. It would know exactly how to disable the United States from a cyber perspective, because it would be that smart. All you've got to imagine is any rate of improvement over any sort of long period of time. So this is a theory that might be true, and if it's true

I mean yeah any theory might be true

but, you know, again, going back to this point of: even if it's a small percentage, it's worth paying attention to. On the other side of the foot, this is a theory that people talk about. It could be the case that the most intelligent civilization is going to be the superior civilization. Logically, that's a pretty sound thing to say. No.

Karen Hao:So, there's a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument. And let's knock them down one by one. So the first one is that these systems are intelligent, and that just scaling them is going to bring us more intelligence. So far, so true.

No, it's actually not, because, first of all, again, we don't actually know if these systems are intelligent. Intelligence is almost not the right analogy. It's sort of like a calculator: a calculator can do math problems faster than a human. Does that make it intelligent?

It has a narrow intelligence, because it's solving a narrow problem, which is, like, 1 plus 1 equals 2. But

and these systems, they actually also are quite narrowly intelligent, in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is, like, the jagged frontier of these AI models: some of the capabilities are quite good, other capabilities are not that good. You know why that happens? It's because the company can only focus on advancing certain types of capabilities. It can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability, by gathering the data that is needed for that capability, by getting a bunch of human contractors to annotate and train the model to do that exact thing. And so scaling these models is actually a perpendicular question to: are we actually getting more cyber capabilities specifically, and more military capabilities specifically?

Host:I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time. A lot of them do; Geoffrey Hinton does.

Karen Hao:And again, it's back to his hypothesis about how human intelligence works and what the appropriate model of the brain is. His hypothesis throughout his career has been that the brain is a statistical engine.

But that's his hypothesis, and it is not universally agreed upon, especially among people that are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. And so this is kind of one of the things: AI is already being used in the military, and has been used in the military for a long time. But specifically accelerating large language models isn't the only path for getting military capabilities. The companies would have to choose to specifically pick military capabilities to accelerate, not just general intelligence. You know what I'm saying? They create this myth that they are actually pushing the frontier of all of the capabilities of the model, but that's not what's actually happening internally. I had hundreds of pages of documents on how they were specifically training models. They pick what capabilities they want to advance, and you know how they pick them? It's based on which industries and countries would be able to pay them the most money for their services. So they pick finance, law, medicine, healthcare, commerce. It's not actually intelligent like a baby, where the more the baby grows up, they start having these general abilities.

Host:I think I have jagged intelligence, I'll be honest. I wasn't going to say it, but I think I know a little bit about... no, I know a lot about a little bit.

Karen Hao:Yeah, but you also have the capability to learn and acquire knowledge by yourself. And you also have the ability to choose what you're going to learn and acquire by yourself.

Host:It's not easy, and it takes a lot more time than these models. It seems like less compute, but

and you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location, it has to completely retrain on that location. It's like all the self-driving cars... I mean, we're sitting in Austin right now, and there's all these self-driving cars driving through Austin. But when one of them learns, they all learn

which is which

well, it's just because it's an operating system that has an AI model as part of it, and you're training the AI model and then you deploy that AI model across all the self-driving

a big advantage, because if one Optimus robot learns one thing in one factory, they all learn it. And imagine if humans, if we all learned what all the other humans learned, that would give us such an unbelievable competitive advantage. I mean, one of the ways we did that is through communication.

Karen Hao:They could not, because they could be learning the wrong thing, which has also happened again and again with these technologies: all of them learn the wrong thing, and they all have the same failure mode. I mean, part of the resilience of human society is that we do have different expertises, and we also have different failure modes.

Host:I think sometimes we hold AI models to a higher standard than we hold humans to. And in a weird way, because, we're in Austin at the moment, I'd hear people on stage go, "Ah, but you know, them AI models, they hallucinate sometimes." I'm like, "Have you met a human?" Like, I hallucinate all the time. I can barely spell or do math.

Karen Hao:So,

yes, but it's once again using this analogy that was specifically picked in the early days of the field as a way to market these technologies. We're repeatedly using the intelligence analogy and relating these machines to human intelligence as a way to try and gauge whether or not it is good or worthy or capable in society.

Host:I think the output is the thing that is the most consequential. Okay, it might have a different brain and a different system, but does it arrive at the same capability? Is it able to do surgery on someone's brain? Is it able to drive a car? Like, my car drives itself in Los Angeles. I don't touch the steering wheel, and I can drive for many, many hours. And here in Austin, I just saw the ones the other day where they've removed the steering wheel and the pedals, the new cyber cabs. So I go: it doesn't really matter if it's using a different system. If it's navigating through the world as a car, and it has a better safety record than human beings, then as far as I'm concerned, intelligence or not, it's like

yes, you know,

but that was not the original argument that you made, which was that these systems are just generally going to become more intelligent across different things. That is a prediction that you're making, right? And this is a prediction that all the AI leaders are making:

Ilya's making it, Dario's making it, Elon's making it, Zuckerberg's making it, Sam's making it, Demis is making it.

And do you know what the common feature of all of them is? They profit enormously off of this myth.

Host:Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, specifically to scale up their AI models faster than their competitors. It appears that they've all converged around this idea that you can brute force your way to greater, more generalized intelligence. They've converged around the idea that you can brute force your way into models that they can sell to people for automating certain tasks that are financially lucrative.

And I heard Elon say that if you're a surgeon, there's just no point. He was like, don't train to be a surgeon. He says in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived.

Yeah. You know,

do you think these things are true? Well, you know, I'm pretty sure it was Hinton that famously, slash infamously, said there would be no need for radiologists anymore.

Karen Hao:There would be no need for radiologists anymore. He set a deadline that we've already passed; I don't remember how many years. Radiology is doing great as a profession.

Host:Do you think it will be in 5 years?

Karen Hao:Okay. So, this once again goes back to this question of why we build technology, and why we should specifically be building AI. And for me, the whole project of technology development and advancement is not to advance technology for technology's sake.

It's to help people. And there has been lots of research showing that actually the best outcome for people in a healthcare setting is for the radiologist to have the AI model in their hands, and for the human expert to use the AI model as a tool, as an input into their judgment. And it is that combination that leads to the most accurate and early diagnoses of certain types of cancer, which then helps improve the prognosis of the patient.

Host:Do you believe that in the coming years all the cars pretty much all the cars on the road will be driving themselves?

Karen Hao:No.

Host:You don't you don't think so?

Karen Hao:Mm-m.

Host:How come?

Karen Hao:Because of the way the technology works.

Because these are statistical... I mean, currently, the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network, which is a piece of software that has a bunch of densely connected nodes and

like parameters. Is this what they call parameters?

Yeah, pretty much. And you're just pumping a bunch of data into it, and then it's analyzing the data, finding all these correlations, finding all these patterns, and it's through those patterns that the machine is then able to act autonomously, right? And so the way that they train a self-driving car is they record all this footage, and then they have tens of thousands or hundreds of thousands of human contractors that literally draw around every single vehicle in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such. That then gets fed into an AI model that can identify all of these different components, and then it's connected to another piece of software that is not AI, which says: okay, if the AI model recognizes a pedestrian, we do not run over the pedestrian. If the AI model recognizes a red traffic light, we stop. And the thing about statistical engines is that they're based on probabilities, not on deterministic logic. So the systems make errors all the time, and it is technically impossible to get them to stop making errors.
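The two-layer design she describes, a probabilistic perception model feeding a hand-written, non-AI rule layer, can be sketched roughly like this. Everything here (labels, confidence values, the threshold) is invented for illustration; it is not code from any real self-driving stack:

```python
def perceive(frame):
    """Stand-in for a trained neural network: returns detected objects
    with confidence scores (probabilities, never certainties)."""
    # A real system would run inference on camera footage whose training
    # data was hand-labeled by human contractors (pedestrians, lights, lanes).
    return [
        {"label": "pedestrian", "confidence": 0.97},
        {"label": "traffic_light_red", "confidence": 0.88},
    ]


def decide(detections, threshold=0.5):
    """Deterministic rule layer: hard-coded logic, not learned."""
    for d in detections:
        if d["confidence"] < threshold:
            continue  # ignore low-confidence detections
        if d["label"] == "pedestrian":
            return "BRAKE"  # rule: never run over a pedestrian
        if d["label"] == "traffic_light_red":
            return "STOP"   # rule: always stop at a red light
    return "PROCEED"


action = decide(perceive("frame_001"))
print(action)  # pedestrian detected with high confidence -> BRAKE
```

The rules themselves are deterministic, but their inputs are statistical, which is why the overall system can still fail: a pedestrian detected at confidence 0.2 is silently ignored by the threshold, exactly the kind of error that cannot be engineered away entirely.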

Host:Humans make errors way more than systems in this case. Like, the safety record... isn't it something like 10 times safer to be driven by a Tesla with autonomous driving than for a human to drive?

Karen Hao:It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving.

Get drunk

because if it's in Mumbai,

in some place in Vietnam, no, it would not be safer. I would much rather be driven

by someone that has been driving in that place their whole life. I'm not arguing against the fact that in certain places, where the car has been explicitly trained to drive, it has a better safety record than the humans that are driving in that place. But you specifically asked if I think that all of the

most cars

most cars in the world in the US

in the United States cuz we're here.

I don't actually think that it's like imminently on the horizon

10 years.

No, I don't think so.

Host:I sat with Dara from Uber, and he's pretty convinced that his 9 million couriers will be replaced by autonomous vehicles.

Karen Hao:I mean, how long have self-driving cars been invested in thus far? It's been more than 10 years. And what percentage of cars right now are autonomous

on the US roads? I mean, part of it is that it's actually not just a technical problem, right? Part of it is also a social problem: do people even trust getting into these vehicles? Part of it is also a legal problem, which is: if the self-driving car kills someone, which has happened,

Yeah, it has happened.

Who is responsible? So, in the case in LA, it was both Tesla and the driver because the driver dropped their phone, they looked down, and this was a couple of years ago, I believe. Um, and they went to grab their phone and they hit someone, and so it went to court, and they were held both responsible, both the driver and Tesla.

Um, in terms of Tesla, pretty much every car that people get now comes with autonomy, I believe.

Host:Partial autonomy. Yeah, it's called full self-driving at the moment where it's like

I mean, yes, it is called full self-driving.

Full self-driving supervised, where you kind of have to be looking in the right direction, but

Yeah. So, it's partial autonomy.

And here in Austin, it's full autonomy cuz there's no steering wheel.

Yeah.

On the new car. Um, so you can't drive it anyway. But, you know, the Model Y is the undisputed bestselling car in the world across all brands. Well, I guess my point here is: these predictions where they say AI is going to completely change transportation and driving, it's going to completely change law (lawyers aren't going to have jobs, accountants aren't going to have jobs), do you believe that they are true? Do you believe that there's going to be mass job displacement?

Karen Hao:Okay, so I do think that there are going to be huge impacts on employment, and we're already seeing those impacts. It is not simply because the AI models are just automating those jobs away. It is specifically because the models are improving in certain capabilities, based on what the companies that are developing them choose to improve them on, and executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker, irrespective of whether that might be true. And, you know, there have been cases like the Klarna CEO, who laid off a bunch of people thinking that he would replace everyone with AI, and then it didn't actually work, and he had to ask some people to come back.

Host:I actually DM'd him about this. If you're hearing this, this is because I've DM'd Sebastian and he's fine with me sharing this.

Because I've heard his name mentioned a lot, and when we talked about AI in the past people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the truth was.

He said, "It's great to hear from you. Um, I think sometimes people struggle with the idea that two things can be true at the same time. I think it might be time to come back on your podcast. To your point, this is the media misinterpreting my tweet. We are doubling down on AI more than ever. Klarna is shrinking by almost 100 employees per month due to AI. We used to be 7,400 at the peak. A year ago, 5,500. Now we're 3,300, and by the end of summer" (so this was last year) "we will be 3,000 people. AI handles 70% of our customer service conversations at this moment. This is because we have realized that with AI, the production cost of software comes down to almost zero. Just like manufacturing used to be all handcrafted and then the machines came, code used to be all handcrafted up until a few years ago, and now it is machine-produced. And ultimately we pay people more than ever for the unique, handcrafted, man-made stuff. Klarna is a bank. People will want to connect to humans, not only machines. They want us to be personable, relatable, even flawed. So we need to make sure that while we are automating and replacing with AI, in parallel we offer a super available human experience."

I'm really glad you read this, because I think it touches on some really important nuances to the AI conversation.

Karen Hao:Yeah. Like the impact that AI is going to have on employment. So I think there's often these binary narratives. It's like, AI is going to come for every job.

Mhm.

Or people say AI is not actually working, and it's not actually coming for jobs. And the reality is: it's coming for jobs. There are definitely jobs that are being automated away because of the capabilities of the models. And there are also jobs that are being lost because executives are deciding to lay off the workers even if the models don't match the capabilities, because it's good enough. Like, they would rather have the good-enough model for way cheaper,

or they made a mistake with hiring. They bloated their team, and it's a great, convenient thing to say.

Exactly. Like, there are many reasons. But clearly we're already seeing impacts on the job market. Like, the US jobs report that came out earlier this year showed that there has been a slowdown in hiring, especially across white-collar professional industries. And you saw Anthropic's new report this week. The TL;DR is it matches kind of what you were saying: Anthropic looked at exactly how people were using their models, and they looked at what people are saying.

Yeah.

And they said that there's been a 40% reduction in entry-level jobs in particular, and then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability, and based on how people are currently using the models, their prediction

extrapolated out that the blue part will be the disrupted parts. This is the things that they say AI can do right now, but people don't realize it yet. So, if you look at it, it's like it's kind of all the stuff you would expect.

Yeah.

It's the physical real world human stuff

which robots maybe can do someday, like construction or agriculture, that are untouched; but office and admin, like, say, finance stuff, math,

and notice that these are all the things that I just named that they purposely

finance, math, law,

media and arts. That's me cooked.

Yeah. Office and admin. I mean, they do focus a lot on assistant-type and managerial work.

So, but the other thing that the Klarna CEO said was that people also want human experiences. So it's not actually just about the capabilities of the models. It's also about what people want. Some things they would turn to AI for, and some things they wouldn't, irrespective of whether or not AI is capable of doing it, because of a preference: they want human-to-human interaction.

And so what we're seeing right now is, yeah, the thing that happens with every wave of automation, which is that there is a bunch of entry-level work that gets automated away. There are also new jobs created, but the jobs that are created fall into one of two categories. There are people that get even higher-skilled jobs, which is what he was saying: we pay people more for the handcrafted code now,

and there's also the people who get way worse jobs. And so there was this amazing article in New York magazine that was talking about how a lot of people are getting laid off, and then they end up working in data annotation, which is the labor that I've been referring to throughout this conversation, that companies need in order to teach their models the next thing that the companies are trying to automate. And so, like, a marketer gets laid off, and then they go and work for a data annotation firm to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs if that model develops that skill. And the article was talking about how this has become a huge catchall for a lot of people that are struggling with finding job opportunities right now, including award-winning directors in Hollywood that are secretly doing this data annotation work to put food on the table. And so when they talk about how there's going to be mass unemployment and then there's going to be some new jobs created that we can't even imagine, I think a lot of these narratives rarely talk about, first of all, why some jobs are going away. It's not just because of the model capabilities; it's also because of executive choices, and because of the rhetoric that they use if they want to downsize. But the other thing that is rarely talked about is that a lot of the jobs that are created are way worse than the jobs that were there,

and it breaks the career ladder. So, it's the entry level and the mid tier jobs that get gouged out. It's higher order jobs and then way more lower order jobs that get created. And so, how do people continue to progress in their careers? There's no more rungs on the ladder.

Host:I actually don't know the answer to this question. And I've been furiously trying to find a good answer to this question, because, you know, everything is theory. And for my audience, I would say most of my audience don't run businesses. A lot of them do, a lot of them aspire to, but most don't. So they're also in the land of theory. They're hearing lots of different things. Jack Dorsey does his tweet saying he's halving his headcount because of AI. They don't know what's true. They don't know the internal economics at Jack's company: did he bloat the company during the pandemic, and is he just using this as an excuse to make the share price spike seven points because his investors now think they're an AI company, or whatever?

Mh.

It's hard to parse through. So eventually I go, okay, what am I doing?

I have hundreds of team members, probably 70 companies I invest in, maybe five or six that I'm the lead shareholder in. What am I actually doing on a day-to-day basis right now? I also consider myself to be head of recruitment,

but in the last month in particular I have met extremely capable candidates, in terms of cultural alignment, hard work, those kinds of things, but I've had to take a great deal of pause, because when I run the experiment of "can I get an AI agent to do that exact same thing?", the answer is increasingly yes,

especially in a world of open clause

and so what I'm curious like

now you confront this decision where you're seeing, in this short-term period, that you could just choose the AI agent, and in the long-term period there is no career ladder. So who are you promoting into these senior roles? Like, how do you resolve it for your own company?

Host:Yeah, it's a good question. So, there's kind of two ways I'm thinking about it. I think really deep expertise is very, very valuable, because if you're now the orchestrator of, potentially, AI agents, it's really about having a deep understanding of the right question to ask, and that's someone who has deep expertise in something. So I need my CFO,

because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else, she needs to understand what to tell them to do in our company.

Mhm.

And an entry-level financial analyst can't do that. They need the 50-odd years of experience that, you know, she has. On the other end, I need Cass. Cass is 25. Cass knows everything about AI agents. He's a young Japanese kid who's highly, highly curious. You know, on the weekend he's building AI agents to solve problems in my life. I need those two kinds of thinking: highly proficient, agent-maxing young kids (they don't necessarily need to be young, but really leaned-in, high-curiosity people), which is creating a force multiplier in my business, and then I need deep expertise. Now, outside of those, another group I've thought of is people with extremely great IRL people skills,

because we do meet people in real life. We greet you when you arrive here. We schmooze when we go for lunch with the big clients that we have, whether it's Apple or LinkedIn or whoever it might be.

Mhm.

And we have teams who, you know, are in person in the office. So we do a lot of stuff IRL, and increasingly we're building communities, even for this show. We're doing community events all around the world. So we need people that are good at that as well: IRL, bringing people together in real life and organizing stuff. Those are the three groups of people that I'm like, you know, irreplaceable right now.

Karen Hao:And if you look at all the roles that could be done by AI agents, if we were to replace them with AI agents, do you think you would still have these three pools of people to hire and promote into the three critical things that you need in the long term?

If things carry on at the current rate of trajectory,

yeah,

one could assert that even those roles would experience pressure. If you just imagine like people think of things either statically or linearly or exponentially.

Yeah,

you imagine an exponential rate of improvement, which is kind of what I've seen. Even like a 10% compounding rate of improvement at some point,

at some point, I think what remains is actually the IRL, irreplaceably human stuff, human to human. Our Maslowian needs of being in person like we are now aren't going to change. We need connection. Humans get very sick when they don't have other human beings in their life and strong, deep relationships. 100% agree. So that stuff is going to matter a whole lot. I have this contrarian, weird take that maybe this is actually the first technology that's going to deliver on the promise of making us human and connected, because we're going to be rendered useless at everything else other than what humans are good at. Cuz all the other technology said, "Oh, we're going to make you more connected, connecting the world," and it disconnected the world and isolated the world. But maybe this is the one. It's so intelligent now that it doesn't need us to be in spreadsheets anymore.
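The static / linear / exponential distinction the host gestures at is easy to make concrete. A tiny sketch, where the 10% figure comes from the conversation and the starting "capability index" of 100 is an arbitrary baseline invented for illustration:

```python
start = 100.0  # arbitrary capability index (invented baseline)
years = 10

static_value = start                 # static thinking: nothing changes
linear = start + 10 * years          # linear thinking: +10 points per year
compounding = start * 1.10 ** years  # exponential thinking: +10% per year

print(round(linear, 1))       # 200.0
print(round(compounding, 1))  # 259.4 -- compounding pulls away from linear
```

The gap only widens with the horizon: at 25 years the linear path reaches 350, while 10% compounding exceeds 1,080, which is why "even a 10% compounding rate of improvement" leads somewhere very different from a linear extrapolation.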

Karen Hao:Do you see that actually happening in real time right now, that it's making us more able to be in person, connected with one another, having deeper social community engagements?

Host:Yes.

Yes.

And I'll give you some data points.

Okay.

Data point number one: the Financial Times released a report on social media usage, and what they saw is that 2022 was the peak, and it's plateaued ever since. The generation that's plateaued the fastest, and is heading down, is the younger generations. The boomers are still off to the races, right? So on Facebook and stuff. And then you look at the way Gen Alpha are using social media. They're not posting as much. They call it posting zero. They're scrolling sometimes, but they're in dark social environments like WhatsApp and Snapchat and iMessage. They're not, like, performing to the world. They also value IRL experiences much more than any other generation. They're, like, not getting smashed. We're seeing every brand has a run club. I mean, run clubs are exploding around the world. And we're seeing this real, almost innate realization that technology let us down at some fundamental level. Like, dating apps let us down. Social networking kind of has let us down. And we're seeing, I think, maybe a bifurcation of society, where a lot of people are going, "I want to go back to what it is to be a human,"

and I would imagine that in such a world, where intelligence is so sophisticated, we no longer need to sit at laptops. I think screen time is going to continue to fall. I think you go into an office, you're not going to see people sat at laptops. You're going to see something completely different. And then we talk about robots, and Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong with timing before. He's almost never been completely wrong on the big things; it's just that his timing has got a bad track record. So I think he's probably right. You know, I've got some people on the way from Boston Dynamics and these other big companies like Scale AI, and they're actually bringing the robots here to show things like folding laundry, doing the dishes. I'm not saying that's what I would want in my home, but I think factory work is going to completely change. I think a lot of manual labor is going to completely change, and I think we're going to be forced to do what only we can do.
