
Diary of a CEO, (Part 2) We Are Being Gaslit By AI Companies


AI Whistleblower: We Are Being Gaslit By AI Companies

And so then I told them, that's fine. I will still engage in the process where I'll give you extensive requests for comment. I'll keep you updated on all the things that I'm finding so that you can still choose to comment.

I gave them 40 pages of requests for comment. And I gave them over a month to respond to all of that.

So this was when the tweet came out. We were doing all this back and forth, and that's when Altman tweeted this.

Host:

And they never responded to a single one of the 40 pages.

Sam Altman does a lot of interviews.

Karen Hao:

Yeah.

Host:

He's doing a lot of interviews all the time. He's done every podcast. I've seen him on everything from Tucker Carlson to — I think he's done Theo, Joe Rogan, podcasts all over the world.

I wonder why he won't do mine.

Well, maybe. I don't know why. I don't know. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions — at least not when I meet people for the first time.

But I've heard through the grapevine that he doesn't want to do mine.

I mean, going back to what you were saying earlier, with the way that OpenAI and these companies control research — you asked, do they also do this with journalists? I mean, yes. The answer is yes.

And apparently they also do it with anyone who has a broad mass communication platform.

Karen Hao:

It's not just about the conversation that you're going to have with them. It's also about who you choose to platform.

And there's this huge problem in technology journalism where companies know that a really big carrot they can give to technology journalists is access.

Host:

Yeah. Yeah. Yeah.

Karen Hao:

And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone they didn't want you to speak to.

Host:

This is so true. And I don't think the average person really truly understands this.

This kind of sounds like theory as you say it, but I'm not going to name names here because I don't think it's important, but there is a particular person in AI whose team have basically dangled the carrot of them coming here for like 18 months.

And I'm like, you don't have to dangle the carrot. I'm going to speak to whoever I want to regardless of the carrot or not. And when this person comes, if they want to come, I'll give them a fair shot. I'll ask them all genuinely curious questions about what they're doing, their incentives. I won't gotcha them. I don't have a history of ever gotcha-ing anybody.

Even if I have a different opinion, I'll ask the question.

Karen Hao:

Yeah.

Host:

But they dangle carrots and they say, "Well, he's thinking about it, let's think about a date."

And what the strategy is — and I don't think people realize this — is: if we just dangle it for long enough, then they will perform in the way that we want them to. They'll be pleasant about us. They won't be critical. They won't give a platform to our critics.

And I think a lot of their game is just: dangle the carrot forever.

Karen Hao:

Yes. Yeah.

Host:

That's like the optimal outcome: if we just dangle it, if we just tell them, "Yeah, look, we're just looking at the schedule."

It just doesn't work. I think in the modern world, you just have to go there and give your opinion and allow the clash of ideas in the public forum. Let the viewers decide for themselves what they think.

Karen Hao:

Yeah.

This is such a huge part of their machinery, the way that they use these tactics to massage the public image of these companies and make sure that information they don't want out — and even opinions they don't want out there — don't go out there.

And so I feel very lucky now that OpenAI shut the door early on me.

At the time, I didn't feel lucky. I felt like I had screwed myself over.

Because access, to a journalist — you're supposed to report the truth, and you're always supposed to report in the interest of the public. That is the point of journalism.

And in that moment — I was relatively junior in my career — I was like, did I misunderstand what journalism is about? Should I have actually been playing the access game?

But it was too late. I had the door shut to me, and so I had to build my career understanding that the front door was never going to be open.

And that actually really strengthened my own ability to just tell it like it is — objectively — and just report what I see are the facts being presented to me, irrespective of whether the company likes it or not.

And most often the company really does not like it, but I can continue to do the work. They don't need to open the front door for me. I was still able to do more than 300 interviews.

Host:

So Sam Altman gets kicked off the OpenAI executive team. Did you find out why that happened?

Karen Hao:

Yeah, there's a scene-by-scene recounting.

Host:

From who?

Karen Hao:

I can't remember the exact number of sources, so I don't want to misquote myself, but it was around six or seven people that were directly involved or had spoken to people directly involved in the decision-making process.

So, Ilya Sutskever is seeing these serious concerns about the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company. He then approaches a board member, Helen Toner.

Host:

Ilya, for anyone that doesn't know, is the co-founder we mentioned earlier.

Karen Hao:

The co-founder of OpenAI we mentioned earlier, yes.

And he kind of does a bit of a sounding-board thing to Helen, because Ilya is freaking out. He's been sitting on these concerns for a while, and he's like, if I tell this to someone, this could also be really bad for me if Altman finds out.

And so he asks for a meeting with Toner, and in that first meeting he's really — he barely says a thing. He's kind of dancing around, trying to figure out: is this someone that I can maybe trust to divulge more information to?

Host:

And Toner's role and responsibilities at OpenAI were—

Karen Hao:

She was a board member.

Host:

Just a board member.

Karen Hao:

Yeah. And specifically an independent board member.

So when OpenAI was a nonprofit, the board was split between people who had a financial stake in the company and people who were fully independent.

And this structure was meant to balance decision-making in favor of the public interest rather than the for-profit entity that OpenAI later created.

Host:

And Ilya, as a non-independent board member, was approaching Toner as an independent board member to try and see whether or not she was potentially seeing or hearing the same things that he was about the effect that Altman was having on the company.

Karen Hao:

This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members.

Mira Murati was at that point the chief technology officer of OpenAI.

These two senior leaders essentially, through these conversations and through documentation that they're pulling together — like emails, Slack messages and so forth — convey to the three independent board members: we are very concerned about Altman's leadership. He is creating too much instability at the company. He is the root of the problem.

They were trying to say to these independent board members: the problem will not be fixed unless Altman is removed, because of the way that he's pitting teams against each other and creating this environment where people are unable to trust each other anymore, and they're competing rather than collaborating on what's supposed to be this really, really important technology.

Host:

When you say instability, that's quite a vague term. That could mean lots of things.

Like, instability could just mean pushing people to work harder.

What do you mean by instability in as specific terms as you can possibly say them?

Karen Hao:

When ChatGPT came out in the world, OpenAI was wholly unprepared.

They didn't think that they were launching a gangbusters product.

Host:

Yeah.

Karen Hao:

They thought they were releasing a research preview that would help them get the data flywheel going — collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4. ChatGPT was using GPT-3.5.

And because of that, there were servers crashing all the time, because they had to scale their infrastructure faster than any company in history.

And there were all of these outages.

They were also trying to hire faster than any company in history, and they were sometimes hiring people and then deciding, actually, we made a mistake, we shouldn't have hired you. So they were firing people left and right.

And people were just disappearing off Slack, and that's how their colleagues would learn that they were no longer at the company.

So it was, yes, like many fast-growing companies, a very chaotic environment — and a particularly chaotic environment because it was extra fast. They had to accelerate more than any other startup.

And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse. He was not actually effectively ameliorating the circumstances of the chaos. He was actually sowing more chaos, getting these teams to be more divided.

And this is where it's important to understand that the executives and the independent board members, they're all operating under this idea that they're building AGI, and that AGI could either be devastating or utopic to humanity.

So yes, it's like any other company — and no, it's not like any other company.

In their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world.

And so that is basically what the independent board members also begin to reflect on.

They have these conversations amongst themselves where they're like, well, based on what we're hearing about Altman's behavior, if this was an Instacart, would that warrant firing him?

And they concluded: maybe not, but this is not Instacart.

And that's why they were like, well, crap, maybe this actually does rise to the bar where we should consider replacing him, because we are ultimately building a technology that we think could have transformative impacts, either in the positive or negative direction.

And so that is what happens.

These two executives, and then the independent board members, were also hearing other feedback from their connections within the company and from other people in the industry.

At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, which is a tech startup in the Valley, is at a party in San Francisco, and he starts to hear some of these rumors that there's something weird about the way OpenAI has structured its OpenAI Startup Fund, which was this fund that the company had created to start investing in other startups.

And he realizes they'd never really seen documentation from Altman about how the startup fund had been set up.

And finally they get the documents, and it turns out that OpenAI Startup Fund is not OpenAI's startup fund. It's Altman's startup fund.

And this was one of several experiences that the independent board members were having where they're like, there's something not right about the fact that there continuously are inconsistencies between the way that Altman is portraying what is being done versus what is actually being done.

And so when these two executives approach the independent board members, then they're like, okay, this lines up with the experiences that we've also been having.

And at that point, they have this series of very intense discussions where they're meeting almost every day, talking about: should we actually really consider removing Altman?

And in the end they conclude, yes, we should.

And if we're going to do it, we need to do it quickly.

Because they were very concerned that the moment Altman found out, his persuasive abilities would make it impossible to do.

And so they end up firing Altman without telling anyone. They don't talk to any stakeholders to get them on the same page.

Microsoft gets a call right before they execute the action saying, "We're going to fire Altman."

Host:

And Microsoft, for anyone that doesn't know, are a lead investor in OpenAI at the time.

Karen Hao:

Yes. One of the only investors in OpenAI at the time.

And that is what then unravels the whole thing, because every single person that is affected by this decision is now extremely angry that they were not involved.

And that is what then creates this campaign to bring Altman back.

And then Altman is reinstalled as CEO days later.

Host:

This company that I've just invested in — it's grown like crazy. I want to be the one to tell you about it because I think it's going to create such a huge productivity advantage for you.

Wispr Flow is an app that you can get on your computer and on your phone, on all your devices, and it allows you to speak to your technology.

So instead of me writing out an email, I click one button on my phone and I can just speak the email into existence, and it uses AI to clean up what I was saying. And then when I'm done, I just hit this one button here and the whole email is written for me.

And it's saving me so much time in a day because Wispr Flow learns how I write. So on WhatsApp, it knows how I am a little bit more casual. On email, a little bit more professional.

And also there's this really interesting thing they've just done. I can create little phrases to automatically do the work for me. I can just say "Jack's LinkedIn" and it copies Jack's LinkedIn profile for me because it knows who Jack is in my life.

This is saving me a huge amount of time. This company is growing like absolute crazy. And this is why I invested in the business and why they're now a sponsor of this show.

And Wispr Flow is frankly becoming the worst-kept secret in business, productivity, and entrepreneurship.

Check it out now at wisprflow.ai/steven — that's Wispr Flow spelled w-i-s-p-r-f-l-o-w. It will be a game changer for you.

There's a phase a lot of companies hit where they're no longer doing the most important thing, which is selling. And they get really bogged down with admin.

And it's often something that creeps up slowly and you don't really notice until it's happened. Slowly momentum starts to leak out.

This happened to us, and our sponsor Pipedrive was a fix I came across 10 years ago. And ever since, my teams across my different companies have continued to use it.

Pipedrive is a simple but powerful sales CRM that gives you visibility on any deals in your pipeline. It also automates a lot of the tedious, repetitive, and time-consuming parts of the sales process, which in turn saves you so many hours every single month, which means you can get back to selling.

Making that early decision to switch to Pipedrive was a real game changer, and it's kept the right things front of mind.

My favorite feature is Pipedrive's ability to sync your CRM with multiple email inboxes so your entire team can work together from one platform.

And we aren't the only ones benefiting. Over 100,000 companies use Pipedrive to grow their business.

So if something I've said resonates, head over to pipedrive.com/ceo where you can get a 30-day free trial. No credit card or payment required.

How does a CEO of a major company get fired by the board?

Because board members — there's a quote in your book on page 357 where you quote Ilya saying, "I don't think Sam is the guy who should have the finger on the button for AGI."

I asked myself this question. I work with lots of people here. We have 150 people that work in this business, and those people know me best. They see me on camera. They see me off camera.

So if they said that we don't think Steven is the right person to host the diary — it would take a lot for them to say that. They must have seen some off-camera behavior for them to go, we don't think he's the right person to be on camera.

And in the case of AI, which is much more consequential than a podcast that is filmed in my old kitchen, it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said this isn't the guy to lead this.

Mira Murati then also said, I don't think Altman is the right guy.

And then they both left later.

Karen Hao:

So then Altman comes back, and lo and behold Ilya never comes back.

So his concern that it would be bad for him if Altman found out came true. He ended up not coming back. And Mira Murati then left shortly thereafter.

Host:

Quite a lot of these people leave, don't they? OpenAI.

Karen Hao:

They do.

So if you consider one of the origin stories of OpenAI is this dinner that happened at the Rosewood Hotel, which is a very swanky hotel right in the heart of Silicon Valley, that was one of Elon Musk's favorites whenever he was coming up from LA to the Bay Area.

And there was this dinner there where Altman was intending to recruit the OG team that would start OpenAI.

So he's kind of telling everyone you might have a chance to meet Musk, because Musk is going to come to this dinner.

And he cold-emails Ilya, who agrees to come specifically because he wants to meet Musk.

And he also emails all these other people, including Greg Brockman, Dario Amodei.

These are all people that ended up working at OpenAI.

Host:

And they all — almost all of them, not every one of them, but almost all of them — end up working at OpenAI and leaving.

Karen Hao:

Almost all of them end up leaving, specifically after they clash with Altman.

Host:

And Ilya, he left and launched a company called Safe Superintelligence.

Karen Hao:

Yeah.

Host:

Which is — I mean — that's an indirect dig if I've ever heard one. Do you know what I mean? If someone co-founded this podcast with me and then they left and started a podcast called Safe Podcasting, I'd take that as a slight. I'd have people knocking on their door asking for their texts.

Karen Hao:

One of the things that is happening here is it is not a coincidence that every single tech billionaire has their own AI company.

They want to create AI in their own image, and that's why they keep not getting along.

And in fact it's not just don't get along. They end up hating each other after working together, and then splinter off into their own organizations.

So after Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves, he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab.

They want to have control over their own vision of this technology.

And the best way that they have derived from their experiences of trying to put their vision into the arena is by creating a competitor and then competing with OpenAI and with all the other companies out there.

Host:

Do you think some of these AI CEOs realize that they are quite literally summoning the demon, as Elon said 10 years ago, but they don't really care because being the person that summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific?

Even if there's, like, a 20% chance of it being horrific.

I remember, I think it was Dario — he's the one that said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization.

Twenty-five percent is a one-in-four chance.

If you put a bullet in one chamber of a four-chamber revolver and said, Steven, the upside is you could become a multi-gazillionaire and be remembered forever, the downside is that there'd be a bullet in your head — there is no chance that I would take that bet with a 25% chance of things going catastrophically wrong.

Karen Hao:

So I have a very long answer to this, because: do they know if they're summoning the demon? It really depends on what we define as summoning the demon.

And in this particular case, to go back to what we were saying before, there's a mythology that the AI industry uses where summoning the demon is an integral part of convincing everyone that therefore they can be the only ones that are developing this technology.

Host:

I got it. So on one end, you've got to say if we don't, China will, and that's terrible.

Karen Hao:

Yeah.

Host:

But if we let anyone else do it other than me, then that's terrible as well.

Karen Hao:

Exactly.

Host:

So that means that I have to do it and you have to give me money and support.

Karen Hao:

Exactly.

So when they're saying these things, we should understand it not as a genuine prediction based on what they're seeing, because first of all, we don't predict the future. We make it.

We should understand this as an act of speech to persuade other people into believing that they should cede more power, more resources to these individuals.

And so, do they know that they're summoning the demon? They are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.

But if we define it another way — do they realize that the things they are doing are already having really harmful impacts all around the world on vulnerable people, vulnerable communities, vulnerable countries? That's where I'm like, maybe yes, maybe no, and they don't really care.

Because in the frame of mind — I sometimes use the analogy that the AI world is like Dune.

Host:

Dune, for anyone that doesn't know.

Karen Hao:

Science fiction epic written by Frank Herbert. And it's set in this intergalactic era where there are all these houses and they're fighting each other for spice.

So it's a callback to colonialism and empire, and they're all trying to control the spice.

But one of the features of this story is that there are these myths that are seeded on the different planets — a religious myth, basically, about the coming of the Messiah — that are used as ways to control the people.

And Paul Atreides, when he arrives at the planet Arrakis with the intention of trying to then fight against the empire and avenge his father's death, he steps into a myth that has been seeded on this planet that says that one day there will be a messiah that comes and saves the planet.

So he steps into the role of the messiah and leans into this idea in order to better control the people and rally them behind him as a leader to help with this quest.

He knows that it's a myth in the beginning, but because he lives and breathes and embodies it, it kind of starts to blur in his mind whether this is really a myth or whether he's really the messiah.

And this is what I think happens in the AI world.

On one hand, there are all these executives that actively engage in mythmaking because, you know, I have all these internal documents that I write about in the book where they are very keenly aware of how to bring the public along with them by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies.

So they know they're doing the mythmaking.

And also I think many of them lose themselves in the myth, because they have to live and breathe and embody it day in and day out.

And so when Dario says he thinks that 10% to 25% of the future could be catastrophic, or whatever the probability is, he is actively engaging in the mythmaking, but also he's losing himself in the myth.

I think if you were to ask him, do you genuinely believe that? he would be like, yes, I genuinely believe that.

Because there's been a blurring of when he's saying something just to say something, versus when he actually believes what he is required to believe in order to continue doing the things that he's doing.

Host:

And this is the whole psychology of cognitive dissonance, right? Where the brain struggles to hold two conflicting worldviews at the same time, so it's incentivized, or it endeavors, to dismiss one.

So if you wanted to be a healthy person but also a smoker, and I pointed out that smoking is bad for you, the first words out of your mouth are going to be, yes, but—

Karen Hao:

Smoking helps me with stress.

Host:

Yeah, but I only do it when I think—

I kind of see that at the moment because these companies have to raise extortionate, huge amounts of money to fund their AI research, and they're building out all of these data centers.

So when they're out in the public, they're always fundraising. All of these major companies are fundraising all the time at the moment.

So you can't be fundraising and saying, "I'm going to destroy your children's future potentially. There's a 25% chance that your children aren't going to have a great life."
