Diary of a CEO, (Part 1) We Are Being Gaslit By AI Companies

AI Whistleblower: We Are Being Gaslit By AI Companies

Host:

So much of what's happening today in the AI industry is extremely inhumane.

But this is me playing devil's advocate. And logically, it could be the case that the civilization that accelerates their research with AI is going to be the superior civilization.

Karen Hao:

No, it's not. This is a prediction that you're making, right?

Host:

Mark Zuckerberg's making.

Karen Hao:

And do you know what the common feature of all of them is? They profit enormously off of this myth.

You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.

Host:

So, what do we do about it?

Karen Hao:

We need to break up the empires of AI.

You know, I've been covering the tech industry for over eight years, interviewed over 250 people, including former or current OpenAI employees and executives. And I can tell you that there are many parallels between the empires of AI and the empires of old, right?

First, they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models. Second, they exploit an extraordinary amount of labor, which breaks the career ladder: someone gets laid off, and then they work to train the models on the very job that they were just laid off from, which will then perpetuate more layoffs if that model develops that skill.

And when they talk about there being some new jobs created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there.

And then there's the environmental and public health crisis that these companies have created, and how they're also able to spend hundreds of millions to try and kill every possible piece of legislation that gets in their way, and will censor researchers who are inconvenient to the empire's agenda.

But what I'm saying is not that these technologies don't have utility. It's that the production of these technologies right now is exacting a lot of harm on people.

But we have research that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences.

Host:

So let's talk about all of that. This is super interesting to me.

My team have given me this report to show me how many of you who watch this show subscribe. And some of you have told us, according to this, that you've been unsubscribed from the channel randomly.

So a favor to ask all of you: please could you check right now if you've hit the subscribe button, if you are a regular viewer of the show and you like what we do here.

We're approaching quite a significant landmark on this show in terms of subscriber numbers. So if there was one simple, free thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to hit that subscribe button and to double-check if you've hit it.

Only thing I'll ever ask of you. Do we have a deal?

If you do it, I'll tell you what I'll do. I'll make sure every single week, every single month, we fight harder and harder and harder and harder to bring you the guests and conversations that you want to hear.

I've stayed true to that promise since the very beginning of The Diary of a CEO and I will not let you down. Please help us. Really appreciate it. Let's get on with the show.

Karen, you've written this book in front of me here called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is: what was the research and the journey you went on in order to write this book, and the subjects within it, that we're going to talk about today?

Karen Hao:

I took a strange route into journalism.

I studied mechanical engineering at MIT, and so when I graduated I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley, and I basically received an education in what Silicon Valley is about.

Because a few months into joining a very mission-driven startup that was focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable.

And this was, in hindsight, a very pivotal moment for me because I thought: if this hub is ultimately geared towards building profitable technologies, and many of the problems in the world that I think need solving are not profitable problems, like climate change, then what are we actually doing here?

How did we get to a point where innovation is not actually necessarily working in the public benefit, and sometimes even undermining the public benefit in pursuit of profit?

In that moment, I had a bit of a crisis where I thought, well, I just spent four years trying to set myself up for this career that I now don't think I am cut out for.

And I thought, well, I might as well just try something totally different. I've always liked writing, and that's how after two years I landed in a role at MIT Technology Review covering AI full-time.

And that gave me a space to then explore all of these questions of who gets to decide what technologies we build, how money and ideology also drive the production of those technologies, and how do we ultimately make sure that we actually reimagine the innovation ecosystem to work for a broad base of people all around the world.

And so that is kind of how I then set off on this journey of ultimately writing a book. I didn't realize that I was working towards writing a book, but starting in 2018, when I took that job, was essentially the moment in which I began researching the story that I document in it.

Host:

A very timely moment to start working in artificial intelligence.

For anyone who doesn't know, this is before the OpenAI ChatGPT launch, the moment that shook the world.

But in writing this book, you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, etc.?

Karen Hao:

I interviewed over 250 people. So over 300 interviews. Over 90 of those people were former or current OpenAI employees and executives.

So the book covers the inside story of OpenAI's first decade and how it ultimately got to where it is today.

But I didn't want to write a corporate book. I felt very strongly that in order to help people understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley.

These companies tell us that AI is going to benefit everyone and that's their mission. But you really start to see that rhetoric break down when you go to places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well.

And that's where you start to really understand the true reality of how this industry is unfolding around us.

Host:

Karen, I often try and steer conversations, but in this situation I feel like it's probably my responsibility to follow.

So with that in mind, I'm going to ask you: where does this journey begin, and where should we be starting if we're talking about the subjects of Empire of AI, AI generally, artificial intelligence?

And also I'd say one thing I'm really keen to do in this conversation, which I often see left out, is let's assume that our viewers know nothing about AI.

So they don't know what scaling laws are or GPUs or compute or whatever, and let's try and keep this as simple as we possibly can in terms of language, or explain all the complicated language so that we can bring as many people with us as we possibly can.

Where should we start?

Karen Hao:

I think we should start with when AI started as a field.

So this was back in 1956, and there was a group of scientists who gathered at Dartmouth College to start a new discipline, a scientific discipline, to try and chase an ambition.

And specifically, an assistant professor at Dartmouth College, John McCarthy, decided to name this discipline artificial intelligence.

This was not the first name that he tried. The previous year he had tried to name it Automata Studies. And the reason why some of his colleagues were concerned about the name artificial intelligence was that it pegged the idea of this discipline to recreating human intelligence.

And back then, as is true today, we have no scientific consensus around what human intelligence is.

There's no definition from psychology, biology, neurology. And in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people.

There are no goalposts for this field, and there are no goalposts for the industry when they say that they are ultimately trying to create AI systems that would be as smart as humans.

How do we even define what that means? And when are we going to get there if we don't know how to define the destination?

And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term to refer to this ambitious goal to recreate human intelligence, however they want to. And they can define and redefine it based on what is convenient for them.

So in OpenAI's history, it has defined and redefined it many times.

When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty.

When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have.

When he was talking with Microsoft, in the deal that OpenAI and Microsoft struck where Microsoft invested in the company, it was defined as a system that will generate a hundred billion dollars of revenue.

And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work.

This is not a coherent vision of one technology. These are very different definitions that are spoken out loud to the audience that needs to be mobilized to ward off regulation, or get more consumer buy-in into the industry's quest, or to get more capital, more resources for continuing on this journey with ambiguous definitions.

I mean, speaking about different definitions through time, in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything in general."

When Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind. There are other people that he is trying to motivate or mobilize when he says these things.

And in that particular moment, Altman was trying to convince Elon Musk to join him in co-founding OpenAI. And Musk in particular was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose.

And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk was saying.

Host:

Identical.

I mean, 10 years ago, Musk was going on podcasts, tweeting, whatever, that the greatest existential risk to humanity was AI.

And what you're saying here is that Sam Altman was just mirroring the language that Elon was using to get Elon involved in OpenAI.

And later it appears — and again there's a legal case taking place now — that Sam might have muscled Elon out in some capacity.

Karen Hao:

Yeah.

So we know from the lawsuit, and the documents that have come out in the lawsuit, that when Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, chief technology officer at the time, were deciding whether or not to maintain OpenAI as a nonprofit (because it was originally founded as a nonprofit), they decided: okay, we need to create a for-profit entity.

But the question was: who should be the CEO of this for-profit entity? Should it be Musk or should it be Altman? Because they were the two co-chairmen of the nonprofit.

And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.

But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his, they had known each other for many years through the Silicon Valley scene, and said: "Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity, because he's a famous guy, he has a lot of pressures in the world, he could be threatened, he could act erratically, he could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man?"

And that convinced Greg, and Greg then convinced Ilya: "You know, I think there's a point here. Do we really want to give this much power to Musk?"

And that is why Musk then leaves, because the two switch their allegiances. They say, "Actually, we want Altman to be the CEO."

And then Musk is like, "If I'm not CEO, I'm out."

Host:

So it sounds like Sam again managed to persuade someone to do something.

I guess this begs the question: what do you think of Sam Altman?

Karen Hao:

I think he's a very controversial figure.

Host:

You did an interesting pause. It's a pause where someone tries to select their words.

Karen Hao:

Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him.

Either they think he's the greatest tech leader of this generation, the Steve Jobs of the modern era, or they think that he's really manipulative, an abuser, and a liar.

And what I realized, because I interviewed so many people, is it really comes down to what that person's vision of the future is and what their goals are.

So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side, because this man is really persuasive. He's incredible at telling stories. He's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen.

But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision, even if you fundamentally don't agree with it.

And this is the story especially of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI.

Host:

So for people that don't know, Dario now runs Anthropic, which is the maker of Claude. A lot of people probably are more familiar with Claude.

Karen Hao:

Yeah. And it's one of the biggest competitors to OpenAI.

And Amodei, at the time when he was an executive at OpenAI, thought that Altman was on the same page as him. Then over time he began to feel that Altman was actually on exactly the opposite page, and that Altman had used Amodei's intelligence, capabilities, and skills to build things and bring about a vision of the future that he fundamentally didn't agree with.

And so that's why people end up with this bad taste in their mouths.

I've been covering the tech industry for over eight years and covered many companies. I've covered Meta, Google, Microsoft in addition to OpenAI.

And of all of them, Altman is the only figure with whom I've seen this degree of polarization, where people cannot decide whether he's the greatest or the worst.

Host:

You mentioned Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives.

So I was looking at all the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time.

And Dario, who was the former VP of Research at OpenAI and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017 while he was still at OpenAI — and this is a quote — "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%."

And also you mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first question I'd ask is: why did Ilya leave?

Karen Hao:

It's a great question.

So he was instrumental in trying to get Sam Altman fired, and he's another one of the people who over time began to feel like he was being manipulated by Altman toward contributing to something that he didn't believe in.

And from interviewing a lot of people, I learned that Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely.

And he felt that Altman was actively undermining both things.

He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.

Host:

Have you ever spoken to him?

Karen Hao:

I have. So I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review.

Host:

And back in 2019, he has a quote where he says, "The future's going to be good for AIs regardless. It would be nice if it was also good for humans as well. It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful. And I think a good analogy would be the way that humans treat animals. It's not that we hate animals. I think humans love animals, and I have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important to us. And I think by default, that's the kind of relationship that's going to be between us and AI, which are truly autonomous and operating on their own behalf."

And that was in 2019, the year that you interviewed him.

Karen Hao:

One of the things that I feel like we should take a step back to examine is going back to this idea of what even is artificial intelligence, and what do we mean by intelligence.

A huge part of the views of the different people and the quotes that you're reading derives from a specific belief that they each have in this question of what is intelligence, what constitutes intelligence.

Ilya has, throughout his research career, felt that ultimately our brains are giant statistical models.

This is not something that we actually know, but this is his own hypothesis, and also the hypothesis of his mentor Geoffrey Hinton, who also was on this podcast.

This is why they have such a strong conviction in the idea of building AI systems that are statistical models, and that this particular approach is going to lead to intelligent systems as we are intelligent.

It's a hypothesis that they have. It's not one that has been proven by science. And some people vehemently disagree with them on this particular thing.

But if you step into their shoes and take on that hypothesis, and assume that it's true, that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger and bigger until they become the size of the human brain — that's why they make this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence.

And Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems. It's a mouthful, but he gave this keynote where he showed a chart of the size of brains against the intelligence of a species.

And it's roughly linear. The bigger the size of the brain, the more intelligent the species.

And so for him, he thinks he's building a digital brain, because he thinks brains are just statistical engines.

So from that logic it's like, okay, if we then build a bigger statistical engine than the human brain, then based on this chart it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to.

But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case.

And some of the biggest critics say it's very reductive to think of our brains as simply statistical engines.

Host:

Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's going to be able to make a video for me, or agents are going to be able to do the work that I do? Does it really matter for us to know the mechanism behind it?

Karen Hao:

Yes and no.

It matters because these companies are driving their future actions based on this hypothesis.

So they have decided: we think this hypothesis is true, we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence.

And that's then having global consequences. In order to continue doing that, they're hoovering up more and more data. They're building more and more data centers. They are exploiting more and more labor in order to continue on this path.

Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans?

We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing. Like they said that we should be building AGI, so we say that we should be building AGI.

I would like to ask: why are we doing that?

Why is it that we are building a technology that is ultimately designed to replace and automate people away?

That is not the enterprise of technology.

We should be building technology, and the purpose of technology throughout history has been to improve human flourishing, not to replace people.

And so this is a critical part of my critique of these companies and these scientists that have just adopted this goal and relentlessly pursued it and have had enormous capital and enormous resources to pursue it.

Is this the right goal? Why are we doing this?

Why can't we just build AI systems that do things like accelerate drug discovery and improve people's healthcare outcomes, which are systems that have nothing to do with the statistical engines that they're trying to build to duplicate the human brain?

Host:

So why are they doing it?

You've interviewed all these people. I think it's what, 300 people in total, 80 or 90 of them from OpenAI, the maker of ChatGPT.

Why do you think they're doing it?

Karen Hao:

I think it's because they're driven by an imperial agenda. And that is why I call these companies empires of AI.

Host:

What do you mean by an imperial agenda? What does that term mean?

Karen Hao:

Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do, the scale that they operate at, and what motivates them to do what they do.

And there are many parallels that you see between what I call the empires of AI and the empires of old.

They lay claim to resources that are not their own in the pursuit of training these models. That's the data of individuals, the intellectual property of artists, writers, and creators. They're land-grabbing in order to build these supercomputer facilities for training the next generation of models.

Second, they exploit an extraordinary amount of labor.

They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies. We can talk about that more.

And they also design their tools to be labor-automating, so that when the technologies are deployed, they erode labor rights. And this is a political choice that they have made.

Third, they monopolize knowledge production. And so they project this idea that they're the only ones that really understand how the technology works. And so if the public doesn't like it, it's because they don't actually know enough about this technology.

They do this to the public. They do this to policymakers. And they've also captured the majority of the scientists that are working on understanding the limitations and capabilities of AI.

Host:

You think they're gaslighting the public in a way?

Karen Hao:

They are, yeah.

If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?

Host:

No.

Karen Hao:

And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in soft ways, simply by funneling money to their priorities so that only certain types of AI research are produced.

But they also will censor researchers when they do not like what the researcher has found.

And so I talk about the case of Dr. Timnit Gebru in my book, who was the ethical AI team co-lead at Google; she was literally hired to critique the types of AI systems that Google was building. She then co-wrote a critical research paper showing how large language models specifically were leading to certain types of harmful outcomes.

And in an attempt to try and stop this research from being published, Google ended up firing Gebru and then fired her other co-lead, Margaret Mitchell.

And so they control and quash the research that is inconvenient to the empire's agenda.

Host:

Did you have an example of this happening to journalists as well, people who are asking questions of these companies? I think I was watching a video of yours where there was a young man saying he had someone show up at his door, knock on his door, and ask for information, emails, text messages, and this person was from one of the big AI companies.

Karen Hao:

This was OpenAI. It started subpoenaing some of its critics, yeah, as part of what appeared to be a campaign of intimidation, but also a campaign of fishing for more information to try to map out the network of critics further.

This was a man who runs a small watchdog nonprofit, and his group had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit.

Ultimately, OpenAI was successful in that conversion. But during the period where it was sort of existential for OpenAI to complete this conversion, there were a lot of civil society groups and watchdog groups like Midas who were trying to prevent the process from happening in the dead of night.

They were trying to get more transparency. They were trying to have more public debate about this because it's unprecedented.

And it was then that there was a knock on his door and he was served papers.

Host:

What did the papers say?

Karen Hao:

The papers asked him to produce every single piece of communication that he had had that might have involved Musk.

So there was this strange paranoia that OpenAI had, that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk.

So in this particular case, in response to their request, he simply answered: you know, I don't have any documents, because this doesn't exist.

Host:

So going back to this point of empires, you were saying that one of the factors of an empire is a land grab, and then the next one was—

Karen Hao:

Labor exploitation.

Host:

Labor exploitation. The third one, controlling knowledge production.

Karen Hao:

And one of the other ones that's really important to understand about the AI empires in particular is that empires always have this narrative that they say to the public, like: we're the good empire, and we need to be an empire in the first place because there are also bad empires in the world.

And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopian state, akin to an AI heaven.

But if the evil empire does it first, we will descend into a hell.

Host:

And the evil empire being, in this case—

Karen Hao:

In this case, most often it's China. But actually in the early days, OpenAI cast Google as the evil empire.

So all of their decisions were about: we need to do it first, because otherwise Google, this evil corporation driven by profit, will beat us, the benevolent nonprofit, to it. Like, this is a critical contest of who wins.

Host:

Do you think the people building these AI companies believe that the outcome is going to be all good now? Do you think they think that it's going to serve everyone? It's going to be the age of abundance? Everything's going to end well? What do you think they believe? What do you think Sam believes?

Karen Hao:

This is such a core part of the mythology that they create around the AI industry: it includes the belief that it could go very badly. The two go hand in hand.

They need that part of the myth in order to then say, and that's why we need to be in control of the technology, because that's the only way that it's going to go really, really well.

And Altman has said publicly, you know, the worst case: lights out for everyone. But best case, we cure cancer, we solve climate change, and there's abundance.

And Dario Amodei — same kind of rhetoric — worst case, catastrophic or existential harm for humanity; best case, mass human flourishing.

So this is two sides of the same coin. They have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development, in which there should not be broad participation in developing this technology. They must be the ones controlling it every step of the way.

Host:

Sam Altman did a tweet saying, "There are some books coming out about OpenAI and me. We only participated in two of them, one by Keach Hagey focused on me and one by Ashlee Vance on OpenAI."

He went on to say, "No book will get everything right, especially when some people are so intent on twisting things."

But these two authors are trying to— You quote-retweeted that tweet from Sam Altman and you said, "The unnamed book, Empire of AI, is mine."

Do you believe that tweet from Sam Altman was in reference to your book?

Karen Hao:

A hundred percent. Because there's only three books coming out about him.

Host:

And he had caught wind that your book was coming out and—

Karen Hao:

He knew my book was coming out because I had contacted OpenAI from the very beginning of my process and said, I'm working on a book now. Will you participate in it?

And actually initially they said yes, even though— So my history with OpenAI: I profiled the company for MIT Technology Review. I embedded within the office for three days in 2019. My profile comes out in 2020. The leadership are very unhappy.

And in my book, I actually quote an email that I received that Sam Altman sent to the company about my profile, saying, "Yeah, this is not great."

And from then on, the company's stance to me was: we are not going to participate in anything that you do. We are not going to respond to any of the questions that you send us. And this was explicitly articulated. It wasn't me inferring.

So I had a colleague at MIT Technology Review that also covered AI. And at one point OpenAI sent him this press release being like, "We would love for you to cover this story."

And he was like, "I'm really busy. Will you send it to Karen?"

And they were like, "Oh no. We have a history. You understand?"

And so for three years they refused to talk to me.

But then I ended up at the Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication.

And so I started having more dialogue with them. Every time I wrote a piece, I would always send them, here's my request for comment. I would always ask them, will you sit for interviews?

And we did get to a more productive relationship.

And then I embarked on the book. I left the Journal to focus on the book full-time. And I told them right away, I'm working on this book. I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book.

And so they were like, we can arrange interviews for you. You can come back to the office. We'll set up some conversations.

And then as we were going back and forth on this, the board fired Sam Altman. And that's when things started going kind of south, because the company started becoming very sensitive to scrutiny.

And so then they started kicking the can down the road, down the road, down the road. And I kept saying, hey, when are we rescheduling this? What's going on?

And then I get an email saying, "We are not going to participate at all. You are not coming to the office. You're not doing interviews."

And I had actually already booked my tickets. I was already going to fly to San Francisco to have the interviews.
