The Download: a conversation with Karen Hao, and how did life begin?
MIT Technology Review, Wed, 09 Jul 2025
https://www.technologyreview.com/2025/07/09/1119923/the-download-a-conversation-with-karen-hao-and-how-did-life-begin/

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside OpenAI’s empire: A conversation with Karen Hao

In a wide-ranging Roundtables conversation for MIT Technology Review subscribers, journalist and author Karen Hao recently spoke about her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.

She talked with executive editor Niall Firth about how she first covered the company in 2020 while on staff at MIT Technology Review. They discussed how the AI industry now functions like an empire and went on to examine what ethically made AI looks like.

Read the transcript of the conversation, which has been lightly edited and condensed. And if you’re already a subscriber, you can watch the on-demand recording of the event here.

MIT Technology Review Narrated: How did life begin?

How life begins is one of the biggest and hardest questions in science. All we know is that something happened on Earth more than 3.5 billion years ago, and it may well have occurred on many other worlds in the universe as well. Could AI help us to unpick the mysteries around the origins of life and detect signs of it on other worlds?

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 xAI’s Grok went on an anti-Semitic rant 
Days after Elon Musk said new updates would lessen its reliance on mainstream media. (WP $)
+ The chatbot started to call itself ‘MechaHitler.’ (WSJ $)
+ What Grok’s neo-Nazi turn tells us about xAI. (The Atlantic $)

2 Musk loyalists are fighting to keep DOGE running
As officials seek to diminish the department’s role. (WSJ $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

3 An imposter used AI to successfully impersonate Marco Rubio
They were able to send voice and text messages to fellow politicians. (WP $)
+ It’s not the first time Rubio has been targeted like this. (FT $)

4 Terrorist groups are using AI to recruit and plan
Counter-terror agencies are struggling to keep up. (The Guardian)

5 How the crypto faithful won over the President
The industry’s successful Trump courtship sparked a lobbying bonanza. (NYT $)

6 Wanted: 115,000 Nvidia chips for China’s data centers
But the US doesn’t seem to know how many restricted chips are already in the country. (Bloomberg $)

7 For startups, protecting companies from AI threats isn’t big business
Smaller firms are only making modest gains—for now. (The Information $)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

8 Inside Zimbabwe’s dangerous EV lithium mines
Many residents worry that China is exploiting them. (Rest of World)
+ How one mine could unlock billions in EV subsidies. (MIT Technology Review)

9 ‘The Milk Guy’ is delivering raw dairy around NYC
Mmm, delicious listeria, salmonella, and E. coli. (NY Mag $)
+ RFK Jr barred Democrats from being vaccine advisors. (Ars Technica)
+ The Department of Health and Human Services is searching for two new vaccines against deadly viruses. (Undark)

10 Take a look at these beautiful star clusters
Courtesy of the Hubble Space Telescope and the James Webb Space Telescope. (Ars Technica)
+ See the stunning first images from the Vera C. Rubin Observatory. (MIT Technology Review)

Quote of the day

“People are going to die.”

—Clement Nkubizi, the country director for the nonprofit Action Against Hunger in South Sudan, tells Wired that their food stock is running critically low in the wake of USAID cuts.

One more thing

The world is moving closer to a new cold war fought with authoritarian tech

Despite President Biden’s assurances that the US is not seeking a new cold war, one is brewing between the world’s autocracies and democracies—and technology is fueling it.

Authoritarian states are following China’s lead, trending toward more digital rights abuses: increasing mass digital surveillance of citizens, censorship, and controls on individual expression.

And while democracies also use massive amounts of surveillance technology, it’s the tech trade relationships between authoritarian countries that are enabling the rise of digitally enabled social control. Read the full story.

—Tate Ryan-Mosley

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The UK is deep in the grip of Oasis-mania right now.
+ Take a look back over the legacy of iconic Indian director and actor Guru Dutt.
+ These are the best foods to help keep you hydrated in this heat.
+ Artificial flowers are cool now? Hmm 🌷

Inside OpenAI’s empire: A conversation with Karen Hao
MIT Technology Review, Wed, 09 Jul 2025
https://www.technologyreview.com/2025/07/09/1119784/inside-openais-empire-a-conversation-with-karen-hao/

In a wide-ranging Roundtables conversation for MIT Technology Review subscribers, AI journalist and author Karen Hao spoke about her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. She talked with executive editor Niall Firth about how she first covered the company in 2020 while on staff at MIT Technology Review, and they discussed how the AI industry now functions like an empire and what ethically made AI looks like.

Read the transcript of the conversation, which has been lightly edited and condensed, below. Subscribers can watch the on-demand recording of the event here. 


Niall Firth: Hello, everyone, and welcome to this special edition of Roundtables. These are our subscriber-only events where you get to listen in to conversations between editors and reporters. Now, I’m delighted to say we’ve got an absolute cracker of an event today. I’m very happy to have our prodigal daughter, Karen Hao, a fabulous AI journalist, here with us to talk about her new book. Hello, Karen, how are you doing?

Karen Hao: Good. Thank you so much for having me back, Niall. 

Niall Firth: Lovely to have you. So I’m sure you all know Karen and that’s why you’re here. But to give you a quick, quick synopsis, Karen has a degree in mechanical engineering from MIT. She was MIT Technology Review’s senior editor for AI and has won countless awards, been cited in Congress, written for the Wall Street Journal and The Atlantic, and set up a series at the Pulitzer Center to teach journalists how to cover AI. 

But most important of all, she’s here to discuss her new book, which I’ve got a copy of here, Empire of AI. The UK version is subtitled “Inside the reckless race for total domination,” and the US one, I believe, is “Dreams and nightmares in Sam Altman’s OpenAI.”

It’s been an absolute sensation, a New York Times chart topper. An incredible feat of reporting—like 300 interviews, including 90 with people inside OpenAI. And it’s a brilliant look at not just OpenAI’s rise, and the character of Sam Altman, which is very interesting in its own right, but also a really astute look at what kind of AI we’re building and who holds the keys. 

Karen, the core of the book, the rise and rise of OpenAI, was one of your first big features at MIT Technology Review. It’s a brilliant story that lifted the lid for the first time on what was going on at OpenAI … and they really hated it, right?

Karen Hao: Yes, and first of all, thank you to everyone for being here. It’s always great to be home. I do still consider MIT Tech Review to be my journalistic home, and that story was—I only did it because Niall assigned it after I said, “Hey, it seems like OpenAI is kind of an interesting thing,” and he was like, you should profile them. And I had never written a profile about a company before, and I didn’t think that I would have it in me, and Niall believed that I would be able to do it. So it really didn’t happen other than because of you.

I went into the piece with an open mind about—let me understand what OpenAI is. Let me take what they say at face value. They were founded as a nonprofit. They have this mission to ensure artificial general intelligence benefits all of humanity. What do they mean by that? How are they trying to achieve that ultimately? How are they striking this balance between mission-driven AI development and the need to raise money and capital? 

And through the course of embedding within the company for three days, and then interviewing dozens of people outside the company or around the company … I came to realize that there was a fundamental disconnect between what they were publicly espousing and accumulating a lot of goodwill from and how they were operating. And that is what I ended up focusing my profile on, and that is why they were not very pleased.

Niall Firth: And how have you seen OpenAI change even since you did the profile? That sort of misalignment feels like it’s got messier and more confusing in the years since.

Karen Hao: Absolutely. I mean, it’s kind of remarkable that OpenAI, you could argue that they are now one of the most capitalistic corporations in Silicon Valley. They just raised $40 billion, in the largest-ever private fundraising round in tech industry history. They’re valued at $300 billion. And yet they still say that they are first and foremost a nonprofit. 

I think this really gets to the heart of how much OpenAI has tried to position and reposition itself throughout its decade-long history, to ultimately play into the narratives that they think are going to do best with the public and with policymakers, in spite of what they might actually be doing in terms of developing their technologies and commercializing them.

Niall Firth: You cite Sam Altman saying, you know, the race for AGI is what motivated a lot of this, and I’ll come back to that a bit before the end. But he talks about it as like the Manhattan Project for AI. You cite him quoting Oppenheimer (of course, you know, there’s no self-aggrandizing there): “Technology happens because it’s possible,” he says in the book. 

And it feels to me like this is one of the themes of the book: the idea that technology doesn’t just happen because it comes along. It comes because of choices that people make. It’s not an inevitability that things are the way they are and that people are who they are. What they think is important—that influences the direction of travel. So what does this mean, in practice, if that’s the case?

Karen Hao: With OpenAI in particular, they made a very key decision early on in their history that led to all of the AI technologies that we see dominating the marketplace and dominating headlines today. And that was a decision to try and advance AI progress through scaling the existing techniques that were available to them. At the time when OpenAI started, at the end of 2015, and then, when they made that decision, in roughly around 2017, this was a very unpopular perspective within the broader AI research field. 

There were kind of two competing ideas about how to advance AI progress, or rather a spectrum of ideas, bookended by two extremes. One extreme being, we have all the techniques we need, and we should just aggressively scale. And the other one being that we don’t actually have the techniques we need. We need to continue innovating and doing fundamental AI research to get more breakthroughs. And largely the field assumed that this side of the spectrum [focusing on fundamental AI research] was the most likely approach for getting advancements, but OpenAI was anomalously committed to the other extreme—this idea that we can just take neural networks and pump ever more data, and train on ever larger supercomputers, larger than have ever been built in history.

The reason why they made that decision was because they were competing against Google, which had a dominant monopoly on AI talent. And OpenAI knew that they didn’t necessarily have the ability to beat Google simply by trying to get research breakthroughs. That’s a very hard path. When you’re doing fundamental research, you never really know when the breakthrough might appear. It’s not a very linear line of progress, but scaling is sort of linear. As long as you just pump more data and more compute, you can get gains. And so they thought, we can just do this faster than anyone else. And that’s the way that we’re going to leap ahead of Google. And it particularly aligned with Sam Altman’s skillset, as well, because he is a once-in-a-generation fundraising talent, and when you’re going for scale to advance AI models, the primary bottleneck is capital.

And so it was kind of a great fit for what he had to offer, which is, he knows how to accumulate capital, and he knows how to accumulate it very quickly. So that is ultimately how you can see that technology is a product of human choices and human perspectives. And they’re the specific skills and strengths that that team had at the time for how they wanted to move forward.

Niall Firth: And to be fair, I mean, it works, right? It was amazing, fabulous. You know the breakthroughs that happened, GPT-2 to GPT-3, just from scale and data and compute, kind of were mind-blowing really, as we look back on it now.

Karen Hao: Yeah, it is remarkable how much it did work, because there was a lot of skepticism about the idea that scale could lead to the kind of technical progress that we’ve seen. But one of my biggest critiques of this particular approach is that there’s also an extraordinary amount of costs that come with this particular pathway to getting more advancements. And there are many different pathways to advancing AI, so we could have actually gotten all of these benefits, and moving forward, we could continue to get more benefits from AI, without actually engaging in a hugely consumptive, hugely costly approach to its development.

Niall Firth: Yeah, so in terms of consumptive, that’s something we’ve touched on here quite recently at MIT Technology Review, like the energy costs of AI. The data center costs are absolutely extraordinary, right? Like the data behind it is incredible. And it’s only gonna get worse in the next few years if we continue down this path, right? 

Karen Hao: Yeah … so first of all, everyone should read the series that Tech Review put out, if you haven’t already, on the energy question, because it really does break down everything from what is the energy consumption of the smallest unit of interacting with these models, all the way up until the highest level. 

The number that I have seen a lot, and that I’ve been repeating, is from a McKinsey report: if data centers and supercomputers continue to be built and scaled at the current pace, in the next five years we would have to add two to six times the amount of energy consumed by California onto the grid. And most of that will have to be serviced by fossil fuels, because these data centers and supercomputers have to run 24/7, so we cannot rely solely on renewable energy. We do not have enough nuclear power capacity to power these colossal pieces of infrastructure. And so we’re already accelerating the climate crisis.

And we’re also accelerating a public-health crisis, the pumping of thousands of tons of air pollutants into the air from coal plants that are having their lives extended and methane gas turbines that are being built in service of powering these data centers. And in addition to that, there’s also an acceleration of the freshwater crisis, because these pieces of infrastructure have to be cooled with freshwater resources. It has to be fresh water, because if it’s any other type of water, it corrodes the equipment, it leads to bacterial growth.

And Bloomberg recently had a story that showed that two-thirds of these data centers are actually going into water-scarce areas, into places where the communities already do not have enough fresh water at their disposal. So that is one dimension of many that I refer to when I say, the extraordinary costs of this particular pathway for AI development.

Niall Firth: So in terms of costs and the extractive process of making AI, I wanted to give you the chance to talk about the other theme of the book, apart from just OpenAI’s explosion. It’s the colonial way of looking at the way AI is made: the empire. I’m saying this obviously because we’re here, but this is an idea that came out of reporting you started at MIT Technology Review and then continued into the book. Tell us about how this framing helps us understand how AI is made now.

Karen Hao: Yeah, so this was a framing that I started thinking a lot about when I was working on the AI Colonialism series for Tech Review. It was a series of stories that looked at the way that, pre-ChatGPT, the commercialization of AI and its deployment into the world was already leading to entrenchment of historical inequities into the present day.

And one example was a story about how facial recognition companies were swarming into South Africa to harvest more data during a time when they were getting criticized for the fact that their technologies did not accurately recognize black faces. And the deployment of those facial recognition technologies into South Africa, into the streets of Johannesburg, was leading to what South African scholars were calling a recreation of a digital apartheid: the controlling of black bodies and the movement of black people.

And this idea really haunted me for a really long time. Through my reporting in that series, there were so many examples that I kept hitting upon of this thesis, that the AI industry was perpetuating. It felt like it was becoming this neocolonial force. And then, when ChatGPT came out, it became clear that this was just accelerating. 

When you accelerate the scale of these technologies, start training them on the entirety of the internet, and start using supercomputers the size of dozens, if not hundreds, of football fields, then you really start talking about an extraordinary global level of extraction and exploitation that is happening to produce these technologies. And then the historical power imbalances become even more obvious.

And so there are four parallels that I draw in my book between what I have now termed empires of AI versus empires of old. The first one is that empires lay claim to resources that are not their own. So these companies are scraping all this data that is not their own, taking all the intellectual property that is not their own.

The second is that empires exploit a lot of labor. So we see them moving to countries in the Global South or other economically vulnerable communities to contract workers to do some of the worst work in the development pipeline for producing these technologies—and also producing technologies that then inherently are labor-automating and engage in labor exploitation in and of themselves. 

And the third feature is that the empires monopolize knowledge production. So, in the last 10 years, we’ve seen the AI industry monopolize more and more of the AI researchers in the world. So AI researchers are no longer contributing to open science, working in universities or independent institutions, and the effect on the research is what you would imagine would happen if most of the climate scientists in the world were being bankrolled by oil and gas companies. You would not be getting a clear picture, and we are not getting a clear picture, of the limitations of these technologies, or if there are better ways to develop these technologies.

And the fourth and final feature is that empires always engage in this aggressive race rhetoric, where there are good empires and evil empires. And they, the good empire, have to be strong enough to beat back the evil empire, and that is why they should have unfettered license to consume all of these resources and exploit all of this labor. And if the evil empire gets the technology first, humanity goes to hell. But if the good empire gets the technology first, they’ll civilize the world, and humanity gets to go to heaven. So on many different levels, like the empire theme, I felt like it was the most comprehensive way to name exactly how these companies operate, and exactly what their impacts are on the world.

Niall Firth: Yeah, brilliant. I mean, you talk about the evil empire. What happens if the evil empire gets it first? And what I mentioned at the top is AGI. For me, it’s almost like the extra character in the book all the way through. It’s sort of looming over everything, like the ghost at the feast, sort of saying like, this is the thing that motivates everything at OpenAI. This is the thing we’ve got to get to before anyone else gets to it. 

There’s a bit in the book about how they’re talking internally at OpenAI, like, we’ve got to make sure that AGI is in US hands where it’s safe versus like anywhere else. And some of the international staff are openly like—that’s kind of a weird way to frame it, isn’t it? Why is the US version of AGI better than others? 

So tell us a bit about how it drives what they do. And AGI isn’t an inevitable fact that’s just happening anyway, is it? It’s not even a thing yet.

Karen Hao: There’s not even consensus around whether or not it’s even possible or what it even is. There was recently a New York Times story by Cade Metz that was citing a survey of long-standing AI researchers in the field, and 75% of them still think that we don’t have the techniques yet for reaching AGI, whatever that means. And the most classic definition or understanding of what AGI is, is being able to fully recreate human intelligence in software. But the problem is, we also don’t have scientific consensus around what human intelligence is. And so one of the aspects that I talk about a lot in the book is that, when there is a vacuum of shared meaning around this term, and what it would look like, when would we have arrived at it? What capabilities should we be evaluating these systems on to determine that we’ve gotten there? It can basically just be whatever OpenAI wants. 

So it’s kind of just this ever-present goalpost that keeps shifting, depending on where the company wants to go. You know, they have a full range, a variety of different definitions that they’ve used throughout the years. In fact, they even have a joke internally: If you ask 13 OpenAI researchers what AGI is, you’ll get 15 definitions. So they are kind of self-aware that this is not really a real term and it doesn’t really have that much meaning. 

But it does serve this purpose of creating a kind of quasi-religious fervor around what they’re doing, where people think that they have to keep driving towards this horizon, and that one day when they get there, it’s going to have a civilizationally transformative impact. And therefore, what else should you be working on in your life, but this? And who else should be working on it, but you? 

And so it is their justification not just for continuing to push and scale and consume all these resources—because none of that consumption, none of that harm matters anymore if you end up hitting this destination. But they also use it as a way to develop their technologies in a very deeply anti-democratic way, where they say, we are the only people that have the expertise, that have the right to carefully control the development of this technology and usher it into the world. And we cannot let anyone else participate because it’s just too powerful of a technology.

Niall Firth: You talk about the factions, particularly the religious framing. AGI has been around as a concept for a while—it was very niche, very kind of nerdy fun, really, to talk about—to suddenly become extremely mainstream. And they have the boomers versus doomers dichotomy. Where are you on that spectrum?

Karen Hao: So the boomers are people who think that AGI is going to bring us to utopia, and the doomers think AGI is going to devastate all of humanity. And to me these are actually two sides of the same coin. They both believe that AGI is possible, and it’s imminent, and it’s going to change everything. 

And I am not on this spectrum. I’m in a third space, which is the AI accountability space, which is rooted in the observation that these companies have accumulated an extraordinary amount of power, both economic and political power, to go back to the empire analogy. 

Ultimately, the thing that we need to do in order to not return to an age of empire and erode a lot of democratic norms is to hold these companies accountable with all the tools at our disposal, and to recognize all the harms that they are already perpetuating through a misguided approach to AI development.

Niall Firth: I’ve got a couple of questions from readers. I’m gonna try to pull them together a little bit because Abbas asks, what would post-imperial AI look like? And there was a question from Liam basically along the same lines. How do you make a more ethical version of AI that is not within this framework? 

Karen Hao: We sort of already touched a little bit upon this idea. But there are so many different ways to develop AI. There are myriads of techniques throughout the history of AI development, which is decades long. There have been various shifts in the winds of which techniques ultimately rise and fall. And it isn’t based solely on the scientific or technical merit of any particular technique. Oftentimes certain techniques become more popular because of business reasons or because of the funder’s ideologies. And that’s sort of what we’re seeing today with the complete indexing of AI development on large-scale AI model development.

And ultimately, these large-scale models … We talked about how it’s a remarkable technical leap, but in terms of social progress or economic progress, the benefits of these models have been kind of middling. And the way that I see us shifting to AI models that are going to be A) more beneficial and B) not so imperial is to refocus on task-specific AI systems that are tackling well-scoped challenges that inherently lend themselves to the strengths of AI systems that are inherently computational optimization problems. 

So I’m talking about things like using AI to integrate more renewable energy into the grid. This is something that we definitely need. We need to more quickly accelerate our electrification of the grid, and one of the challenges of using more renewable energy is the unpredictability of it. And this is a key strength of AI technologies, being able to have predictive capabilities and optimization capabilities where you can match the energy generation of different renewables with the energy demands of different people that are drawing from the grid.

Niall Firth: Quite a few people have been asking, in the chat, different versions of the same question. If you were an early-career AI scientist, or if you were involved in AI, what can you do yourself to bring about a more ethical version of AI? Do you have any power left, or is it too late? 

Karen Hao: No, I don’t think it’s too late at all. I mean, as I’ve been talking with a lot of people just in the lay public, one of the biggest challenges that they have is they don’t have any alternatives for AI. They want the benefits of AI, but they also do not want to participate in a supply chain that is really harmful. And so the first question is, always, is there an alternative? Which tools do I shift to? And unfortunately, there just aren’t that many alternatives right now. 

And so the first thing that I would say to early-career AI researchers and entrepreneurs is to build those alternatives, because there are plenty of people that are actually really excited about the possibility of switching to more ethical alternatives. And one of the analogies I often use is that we kind of need to do with the AI industry what happened with the fashion industry. There was also a lot of environmental exploitation, labor exploitation in the fashion industry, and there was enough consumer demand that it created new markets for ethical and sustainably sourced fashion. And so we kind of need to see just more options occupying that space.

Niall Firth: Do you feel optimistic about the future? Or where do you sit? You know, things aren’t great as you spell them out now. Where’s the hope for us?

Karen Hao: I am. I’m super optimistic. Part of the reason why I’m optimistic is because you know, a few years ago, when I started writing about AI at Tech Review, I remember people would say, wow, that’s a really niche beat. Do you have enough to write about? 

And now, I mean, everyone is talking about AI, and I think that’s the first step to actually getting to a better place with AI development. The amount of public awareness and attention and scrutiny that is now going into how we develop these technologies, how we use these technologies, is really, really important. Like, we need to be having this public debate and that in and of itself is a significant step change from what we had before. 

But the next step, and part of the reason why I wrote this book, is we need to convert the awareness into action, and people should take an active role. Every single person should feel that they have an active role in shaping the future of AI development. Think about all of the different ways you interface with the AI development and deployment supply chain: you give your data or withhold your data.

There are probably data centers that are being built around you right now. If you’re a parent, there’s some kind of AI policy being crafted at [your kid’s] school. There’s some kind of AI policy being crafted at your workplace. These are all what I consider sites of democratic contestation, where you can use those opportunities to assert your voice about how you want AI to be developed and deployed. If you do not want these companies to use certain kinds of data, push back when they just take the data. 

I closed all of my personal social media accounts because I just did not like the fact that they were scraping my personal photos to train their generative AI models. I’ve seen parents and students and teachers start forming committees within schools to talk about what their AI policy should be and to draft it collectively as a community. Same with businesses. They’re doing the same thing. If we all kind of step up to play that active role, I am super optimistic that we’ll get to a better place.

Niall Firth: Mark, in the chat, mentions the Māori story from New Zealand towards the end of your book, and that’s an example of sort of community-led AI in action, isn’t it?

Karen Hao: Yeah. There was a community in New Zealand that really wanted to help revitalize the Māori language by building a speech recognition tool that could recognize Māori, and therefore be able to transcribe a rich repository of archival audio of their ancestors speaking Māori. And the first thing that they did when engaging in that project was they asked the community, do you want this AI tool? 

Niall Firth: Imagine that.

Karen Hao: I know! It’s such a radical concept, this idea of consent at every stage. But they first asked that; the community wholeheartedly said yes. They then engaged in a public education campaign to explain to people, okay, what does it take to develop an AI tool? Well, we are going to need data. We’re going to need audio transcription pairs to train this AI model. So then they ran a public contest in which they were able to get dozens, if not hundreds, of people in their community to donate data to this project. And then they made sure that when they developed the model, they actively explained to the community at every step how their data was being used, how it would be stored, how it would continue to be protected. And any other project that would use the data has to get permission and consent from the community first. 

And so it was a completely democratic process, for whether they wanted the tool, how to develop the tool, and how the tool should continue to be used, and how their data should continue to be used over time.

Niall Firth: Great. I know we’ve gone a bit over time. I’ve got two more things I’m going to ask you, basically putting together lots of questions people have asked in the chat about your view on what role regulations should play. What are your thoughts on that?

Karen Hao: Yeah, I mean, in an ideal world where we actually had a functioning government, regulation should absolutely play a huge role. And it shouldn’t just be thinking about how to regulate an AI model once it’s built, but about the full supply chain of AI development: regulating the data and what these models are allowed to be trained on, and regulating the land use. Which pieces of land are allowed to host data centers? How much energy and water are the data centers allowed to consume? And also regulating the transparency. We don’t know what data is in these training data sets, and we don’t know the environmental costs of training these models. We don’t know how much water these data centers consume, and that is all information that these companies actively withhold to prevent democratic processes from happening. So if there were one major intervention regulators could make, it should be to dramatically increase the amount of transparency along the supply chain.

Niall Firth: Okay, great. So just to bring it back around to OpenAI and Sam Altman to finish with. He famously sent an email around, didn’t he? After your original Tech Review story, saying this is not great. We don’t like this. And he didn’t want to speak to you for your book, either, did he?

Karen Hao: No, he did not.

Niall Firth: No. But imagine Sam Altman is in the chat here. He’s subscribed to Technology Review and is watching this Roundtables because he wants to know what you’re saying about him. If you could talk to him directly, what would you like to ask him? 

Karen Hao: What degree of harm do you need to see in order to realize that you should take a different path? 

Niall Firth: Nice, blunt, to the point. All right, Karen, thank you so much for your time. 

Karen Hao: Thank you so much, everyone.

MIT Technology Review Roundtables is a subscriber-only online event series where experts discuss the latest developments and what’s next in emerging technologies. Sign up to get notified about upcoming sessions.

Why the AI moratorium’s defeat may signal a new political era
https://www.technologyreview.com/2025/07/09/1119867/why-the-ai-moratoriums-defeat-may-signal-a-new-political-era/ Wed, 09 Jul 2025 09:00:00 +0000

The “Big, Beautiful Bill” that President Donald Trump signed into law on July 4 was chock full of controversial policies—Medicaid work requirements, increased funding for ICE, and an end to tax credits for clean energy and vehicles, to name just a few. But one highly contested provision was missing. Just days earlier, during a late-night voting session, the Senate had killed the bill’s 10-year moratorium on state-level AI regulation.

“We really dodged a bullet,” says Scott Wiener, a California state senator and the author of SB 1047, a bill that would have made companies liable for harms caused by large AI models. It was vetoed by Governor Gavin Newsom last year, but Wiener is now working to pass SB 53, which establishes whistleblower protections for employees of AI companies. Had the federal AI regulation moratorium passed, he says, that bill likely would have been dead.

The moratorium could also have killed laws that have already been adopted around the country, including a Colorado law that targets algorithmic discrimination, laws in Utah and California aimed at making AI-generated content more identifiable, and other legislation focused on preserving data privacy and keeping children safe online. Proponents of the moratorium, such as OpenAI and Senator Ted Cruz, have said that a “patchwork” of state-level regulations would place an undue burden on technology companies and stymie innovation. Federal regulation, they argue, is a better approach—but there is currently no federal AI regulation in place.

Wiener and other state lawmakers can now get back to work writing and passing AI policy, at least for the time being—with the tailwind of a major moral victory at their backs. The movement to defeat the moratorium was impressively bipartisan: 40 state attorneys general signed a letter to Congress opposing the measure, as did a group of over 250 Republican and Democratic state lawmakers. And while congressional Democrats were united against the moratorium, the final nail in its coffin was hammered in by Senator Marsha Blackburn of Tennessee, a Tea Party conservative and Trump ally who backed out of a compromise with Cruz at the eleventh hour.

The moratorium fight may have signaled a bigger political shift. “In the last few months, we’ve seen a much broader and more diverse coalition form in support of AI regulation generally,” says Amba Kak, co–executive director of the AI Now Institute. After years of relative inaction, politicians are getting concerned about the risks of unregulated artificial intelligence. 

Granted, there’s an argument to be made that the moratorium’s defeat was highly contingent. Blackburn appears to have been motivated almost entirely by concerns about children’s online safety and the rights of country musicians to control their own likenesses; state lawmakers, meanwhile, were affronted by the federal government’s attempt to defang legislation that they had already passed.

And even though powerful technology firms such as Andreessen Horowitz and OpenAI reportedly lobbied in favor of the moratorium, continuing to push for it might not have been worth it to the Trump administration and its allies—at least not at the expense of tax breaks and entitlement cuts. Baobao Zhang, an associate professor of political science at Syracuse University, says that the administration may have been willing to give up on the moratorium in order to push through the rest of the bill by its self-imposed Independence Day deadline.

Andreessen Horowitz did not respond to a request for comment. OpenAI noted that the company was opposed to a state-by-state approach to AI regulation but did not respond to specific questions regarding the moratorium’s defeat. 

It’s almost certainly the case that the moratorium’s breadth, as well as its decade-long duration, helped opponents marshal a diverse coalition to their side. But that breadth isn’t incidental—it’s related to the very nature of AI. Blackburn, who represents country musicians in Nashville, and Wiener, who represents software developers in San Francisco, have a shared interest in AI regulation precisely because such a powerful and general-purpose tool has the potential to affect so many people’s well-being and livelihood. “There are real anxieties that are touching people of all classes,” Kak says. “It’s creating solidarities that maybe didn’t exist before.”

Faced with outspoken advocates, concerned constituents, and the constant buzz of AI discourse, politicians from both sides of the aisle are starting to argue for taking AI extremely seriously. One of the most prominent anti-moratorium voices was Marjorie Taylor Greene, who voted for the version of the bill containing the moratorium before admitting that she hadn’t read it thoroughly and committing to opposing the moratorium moving forward. “We have no idea what AI will be capable of in the next 10 years,” she posted last month.

And two weeks ago, Pete Buttigieg, President Biden’s transportation secretary, published a Substack post entitled “We Are Still Underreacting on AI.” “The terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly,” he wrote.

Wiener has noticed a shift among his peers. “More and more policymakers understand that we can’t just ignore this,” he says. But awareness is several steps short of effective legislation, and regulation opponents aren’t giving up the fight. The Trump administration is reportedly working on a slate of executive actions aimed at making more energy available for AI training and deployment, and Cruz says he is planning to introduce his own anti-regulation bill.

Meanwhile, proponents of regulation will need to figure out how to channel the broad opposition to the moratorium into support for specific policies. It won’t be a simple task. “It’s easy for all of us to agree on what we don’t want,” Kak says. “The harder question is: What is it that we do want?”

Building an innovation ecosystem for the next century
https://www.technologyreview.com/2025/07/08/1117473/building-an-innovation-ecosystem-for-the-next-century/ Tue, 08 Jul 2025 14:00:00 +0000

Michigan may be best known as the birthplace of the American auto industry, but its innovation legacy runs far deeper, and its future is poised to be even broader. From creating the world’s largest airport factory during World War II at Willow Run to establishing the first successful polio vaccine trials in Ann Arbor to the invention of the snowboard in Muskegon, Michigan has a long history of turning innovation into lasting impact.

Now, with the creation of a new role, chief innovation ecosystem officer, at the Michigan Economic Development Corporation (MEDC), the state is doubling down on its ambition to become a modern engine of innovation, one that is both rooted in its industrial past and designed for the evolving demands of the 21st century economy.  

“How do you knit together risk capital, founders, businesses, universities, and state government, all of the key stakeholders that need to be at the table together to build a more effective innovation ecosystem?” asks Ben Marchionna, the first to hold this groundbreaking new position.

Leaning on his background in hard tech startups and national security, Marchionna aims to bring a “builder’s thinking” to the state government. “I’m sort of wired for that—rapid prototyping, iterating, scaling, and driving that muscle into the state government ecosystem,” he explains.

But these efforts aren’t about creating a copycat Silicon Valley. Michigan’s approach is uniquely its own. “We want to develop the thing that makes the most sense for the ingredients that Michigan can bring to bear to this challenge,” says Marchionna. 

This includes cultivating both mom-and-pop businesses and tech unicorns, while tapping into the state’s talent, research, and manufacturing DNA. 

In an era where economic development often feels siloed, partisan, and reactive, Michigan is experimenting with a model centered on long-term value and community-oriented innovation. “You can lead by example in a lot of these ways, and that flywheel really can get going in a beautiful way when you step out of the prescriptive innovation culture mindset,” says Marchionna.

This episode of Business Lab is produced in partnership with the Michigan Economic Development Corporation.

Full Transcript 

Megan Tatum: From MIT Technology Review. I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. 

Today’s episode is brought to you in partnership with the Michigan Economic Development Corporation. 

Our topic today is building a statewide innovation economy. Now, the U.S. state of Michigan has long been recognized as a leader in vehicle and mobility innovation. Detroit put it on the map, but did you know it’s also the birthplace of the snowboard? Or that the University of Michigan filed more than 600 invention disclosures in 2024, second only to the Massachusetts Institute of Technology? Or that in the past five years, 40% of the largest global IPOs have been Michigan-built companies?

Two words for you: innovation ecosystem. 

My guest is Ben Marchionna, chief innovation ecosystem officer at the Michigan Economic Development Corporation, the MEDC. 

Ben, thank you ever so much for joining us.

Ben Marchionna: Thanks, Megan. Really pleased to be here.

Megan: Fantastic. And just to set some context to get us started, I wondered if we could take a kind of high-level look at the economic development landscape. I mean, you joined the MEDC team last year as Michigan’s first chief innovation ecosystem officer. In fact, you were the first to hold such a role in the country, I believe. I wondered if you could talk a bit about your unique mission and how this economic development approach differs from efforts in other states.

Ben: Yeah, sure, would love to. Probably worth pointing out that while I’ve been in this role for about a year now, it was indeed a first-of-its-kind role in the state of Michigan and first of its kind in the country. The terminology, chief innovation ecosystem officer, differs a little bit from what folks might think of as a chief innovation officer. I’m not all that focused on driving innovation within government, which is what some other chief innovation officers around the country would be focused on. Instead, you can think of my role as Michigan’s chief architect for innovation, if you will. So, how do you knit together risk capital, founders, businesses, universities, and state government, all of the key stakeholders that need to be at the table together to build a more effective innovation ecosystem? I talk a lot about building connective tissues that can achieve one-plus-one-equals-three outcomes.

Michigan’s got all kinds of really interesting ingredients and has the foundation to take advantage of the moment in a really interesting way over the next decades as we look to supercharge some of the growth of our innovation ecosystem development.

My charter is relatively simple. It’s to help make sure that Michigan wins in a now hyper-competitive global economy. And to do that, I end up being super focused on orienting us towards a growth- and innovation-driven economy. That can mean a lot of different things, but I ultimately came to the MEDC and the role within the state with a builder’s mindset. My background is not in traditional economic development; it’s not in government at all. I spent the last 10 years building hard tech startups, one in Ann Arbor, Michigan, and another one in the Northern Virginia area. Before that, I spent a number of years at what you can think of as an innovation factory, Lockheed Martin Skunk Works in the Mojave Desert, working on national security projects.

I’m sort of wired for that, builder’s thinking, rapid prototyping, iterating, scaling, and driving that muscle into the state government ecosystem. I think it’s important that the government also figure out how to pull out all the stops and be able to move at the speed that founders expect. A bias towards action, if you will. And so this is ultimately what my mission is. There are a lot of real interesting things that the state of Michigan can bring to bear to building our innovation ecosystem. And I think, tackling it with this sort of a mindset, I am absolutely optimistic for the future that we’ve got ahead of us.

Megan: Fantastic. It almost sounds like your role is sort of building a statewide startup incubator of sorts. As we mentioned in the opening, Michigan actually has a really interesting innovation history even in addition to the advances in the automotive industry. I wondered if you could talk a bit more about that history and why Michigan, in particular, is poised to support that sort of statewide startup ecosystem.

Ben: Yeah, absolutely. And I would even broaden it. Building the startup ecosystem is one of the essential layers, but to be able to successfully do that, we have to bring in the research universities, we have to bring in the corporate innovation ecosystem, we have to bring in the risk capital, et cetera. So yes, absolutely, startups are important. And equally as important are all of these other elements that are necessary for a startup ecosystem to thrive, but are also the levers that are just sitting there waiting for us to pull them.

And we can get into some of the details over the course of our chat today on the auto industry and how this fits into it, but Michigan does a lot more than just automotive stuff. And you noted, I think, the snowboard as an example in the intro. Absolutely correct. We have a reputation as Motor City, but Michigan’s innovation record is a lot weirder, in a fun way, and richer than just cars.

Early 20th century, mostly industrial moonshot innovation. So the first paved mile of concrete was in Detroit in 1909. A few years later, this is when the auto sector started to really come about with Henry Ford’s moving assembly line. Everyone tends to know about those details. But during World War II, Willow Run Airport, sort of smack between Detroit and Ann Arbor, Michigan, had the biggest airplane factory in the world. They were cranking out B-24 bombers once every 63 minutes, and I’ve actually been to the office that Henry Ford and Charles Lindbergh shared. It’s still at the airport. And it was pretty cool because Henry Ford had a window built into the office that looked sort of around the corner so that he could tick off airplanes as they rolled out of the hangar and make sure that they were following the same high-rate production mentality that the auto sector had developed over the decades prior.

And so they came in to help make sure that you could leverage that industrial sector to drive very rapid production, the at-scale mentality, which is also a really important part of the notion of re-industrialization that is taking hold across the country now. Happy to get into that a bit, but yeah, Willow Run, I don’t think most folks realize that that was the biggest airplane factory in the world sitting right here in Michigan.

And all of this provided the mass production DNA that helped build the statewide supplier base. And today, yes, we use that for automotive, EVs, space hardware, batteries, you name it. But this is the foundation, I think, that we’ve got to build on in the future. In the few decades since, you saw innovations in sports, space, advanced materials, roughly the sixties to the eighties. You said the snowboard. That was invented in Muskegon on the west side of the state in 1965.

Dow Chemical’s here in a really big way. They’ve pioneered silicone and advanced plastics in Michigan. The University of Michigan’s Dr. Thomas Francis led the world’s first successful polio vaccine trials, pioneered out of Ann Arbor. And there’s that Big Ten research horsepower that we’ve got in the state, between the University of Michigan and Michigan State University. We also have Wayne State University in Detroit, which is a powerhouse. And then Michigan Tech University in the Upper Peninsula just recently became an R1 research institution, which essentially means it joined those top-tier research powerhouses. That culture of tinkering matters a lot today.

I think in more recent history, you saw design and digital innovations emerge. I don’t think a lot of people appreciate that Herman Miller and Steelcase reinvented office ergonomics on the west side of the state, or that Stryker is based in Kalamazoo. They became a global medical device powerhouse over the last couple of decades, too. Michigan’s first unicorn, Duo Security, the two-factor authentication company, among many other things that they do, was sold to Cisco in 2018 for $2.35 billion.

Like I said, that was the first unicorn; in the few years since, we’ve had another 10 unicorns. And I think what would probably be surprising to a lot of people is that it’s in sectors well beyond mobility: it’s marketplaces like StockX, FinTech, logistics, cybersecurity, of course. It’s a little bit of everything, and I think that goes to show that some of the fabric that exists within Michigan is a lot richer than what people think of as Motor City. We can scale software, we can scale life sciences innovation. It’s not just metal bending, and I talked about re-industrialization earlier. So I think about where we are today: there’s a hard tech renaissance and a broad portfolio of other high-growth sectors that Michigan’s poised to do really well in, leveraging all of that industrial base that has been around for the last century. I’m just super excited about the future and where we can take things from here.

Megan: I mean, genuinely, a really rich and diverse history of innovation that you’ve described there.

Ben: That’s right.

Megan: And last year, when Michigan’s Governor Whitmer announced this new initiative and your position, she noted the need to foster this sort of culture of innovation. And we hear that term a lot in the context of company cultures. It’s interesting to hear it in the context of a U.S. state’s economy. I wonder what your strategy is for building out this ecosystem, and how do you foster a state’s innovation culture?

Ben: Yeah, it’s an awesome point, and I think I mentioned earlier that I came into the role with this builder’s mentality. For me, this is how I am wired to think. This is how a lot of the companies and other founders that I spend a lot of time with think. And so, bringing this to the state government, I think of Blue Origin, Jeff Bezos’ space company. Their motto, or at least the English translation of it, is “Step by Step, Ferociously.” And I think about that a lot as a proxy for how I do that within the state government. There’s a lot of iterative work that needs to happen, a lot of coaching and storytelling to help folks understand how to think with that builder’s mindset. The wonderful news is that when you start having that conversation, this is one of those things that, in these complicated political times, is pretty bipartisan, right?

The notion of how to build small businesses that create thriving main street communities while also supporting high-growth, high-tech startups that can drive prosperity for all, and population growth, while also being able to cover corporate innovation and technology transfer out of universities. All of these things touch every corner of the state.

And Michigan’s a surprisingly large and very geographically diverse state. Most of the things that we tend to be known for outside the state are in a pretty small corner of Southeast Michigan. That’s the Motor City part, but we do a lot, and we have a lot of really interesting hubs for innovation and hubs for entrepreneurship, like I said, from the small mom-and-pop manufacturing shop or clothing business all the way through to these insane life sciences innovations being spun out of the university. Being able to drive this culture of innovation ends up being applicable really across the board, and it just gets people really fired up when you start talking about this, fired up in a good way, which is, I think, what’s really fantastic.

There’s this notion of accelerating the talent flywheel and making sure that the state can invest in the cultivation of really rich communities and connections, and this founder culture. That stuff happens organically, generally, and when you talk about building startup ecosystems, it’s not like the state shows up and says, “Now you’re going to be more innovative and that works.” That is not the case.

And so to be able to develop those things, it’s much more about this notion of ecosystem building and getting the ingredients and puzzle pieces in the right place, applying a little bit of funding here and there, or loosening a restriction here or there, and then letting the founders do what they do best, which is build. And so this is what I think I end up being super passionate about within the state. You can lead by example in a lot of these ways, and that flywheel that I mentioned really can get going in a beautiful way when you step out of the prescriptive innovation culture mindset.

Megan: And given that role, I wonder what milestones the campaign has experienced in your first year? Could you share some highlights and some developing projects that you’re really excited about?

Ben: We had a recent one, I think, that was pretty tremendous. Just a couple of months ago, Governor Whitmer signed into law bipartisan legislation called the Michigan Innovation Fund. This was a multi-year effort that resulted in the state’s biggest investment in innovation ecosystem development in over two decades. A lot of this funding is going to early-stage venture capital firms that will be able to support the broad seeding of new companies and ideas, keep talent from some of those top-tier research institutions within the state, bring in really high-quality early-stage and growth-stage companies from out of state, and then develop or supercharge some of that innovation ecosystem fabric that ties those things together. So that connective tissue that I talked about. And that was an incredible win to launch the year with.

This was just back in January, and now we’re working to get some of those funds out over the course of the next month or two so we can put them to use. What was really interesting about it was that it wasn’t just a top-down thing. This was supported from the top, up to and including Governor Whitmer. I mentioned bipartisan support within Michigan’s legislature, and then bottom-up from all of the ecosystem partners, the founders, the investors advocating as a whole bloc, which I think is really powerful. Rather than trying to go for one-off things, this huge coalition of the willing got together organically and advocated for, hey, this is why this is such a great moment. This is the time to invest. And Governor Whitmer and the legislators heard that call, and we got something done, and so that happened relatively quickly. Like I said, it’s the biggest investment in the last two decades, and I think we’re poised to have some really great successes in the coming year as well.

Another really interesting one that I haven’t seen other states do yet, Governor Whitmer, around a year ago, signed an executive order called the Infrastructure for Innovation. Essentially, what that does is it opens up state department and agency assets to startups in the name of moving the ball forward on innovation projects. And so if you’re a startup and you need access to some very hard-to-find, very expensive, maybe like a test facility, you can use something that the state has, and all of the processes to get that done are streamlined so that you’re not beating your head against a wall. Similarly, the universities and even federal labs and corporate resources, while an executive order can’t compel those folks to do that, we’ve been finding tremendous buy-in from those stakeholders who want to volunteer access to their resources.

That does a lot of really good things, certainly for the founders, that provides them the launchpad that they need. But for those corporations and universities, and whatnot, a lot of them have these very expensive assets sitting around wildly underutilized, and they would be happy to have people come in and use them. That also gives them exposure to some of the bleeding-edge technology that a lot of these startups today are developing. I thought that was a really cool example of state government leadership using some of the tools that are available to a governor to get things moving. We’ve had a lot of early wins with startups here that have been able to leverage what that executive order was able to do for them.

Since we’re here talking with MIT Technology Review, to tie in an MIT piece: we also started a Team Michigan for MIT’s REAP program. It’s the Regional Entrepreneurship Acceleration Program, and it’s one of the global thought leaders on best practices for innovation ecosystem development. And so we’ve got a cohort of about a dozen key leaders from across all of those different stakeholders who need to have a seat at the table for this ecosystem development.

We go out to Cambridge twice a year for a multi-day workshop, and we get to talk about what we’ve learned as best practices, and then also learn from other cohorts from around the world on what they’ve done that is great. And then also get to hear some of the academic best practices that the MIT faculty have discovered as part of this area of expertise. And so that’s been a very interesting way for us to be able to connect outside of the state government boundaries, if you will. You sort of get out there and see where the leading edge is and then come back and be able to talk about the things that we learned from all of these other global cohorts. So always important to be focused on best practices when you’re trying to do new things, especially in government.

Megan: Sounds like there are some really fantastic initiatives going on. It sounds like a very busy first year.

Ben: It’s been a very busy first year. Couldn’t be more thrilled about it.

Megan: Fantastic. And in early 2023, I know that Newlab partnered with Michigan Central to establish a startup incubator too, which brought in more than a hundred startups just in its first 14 months. I wonder if you could talk a bit about how the incubator fits in with the statewide startup ecosystem and the importance of partnerships, too, for innovation.

Ben: Yeah, a key element, and I think the partnerships piece is essential here. Newlab is one of the larger components of the Southeast Michigan, and especially the Detroit, innovation ecosystem development. They will hit their two-year launch anniversary in just a couple of weeks here, I think. By mid-May it will be two years, and in that time, they’ve now got 140-plus startups all working out of their space. Newlab is actually headquartered in Brooklyn, New York, but they run this big startup accelerator and incubator out of Detroit as well, and so this is sort of their second flagship location. They’ve been a phenomenal partner. And so, speaking of the partnerships, what do those do?

They de-risk the technologies to help enable broader adoptions. Corporations can provide early revenues, the state can provide non-dilutive grant matching. Universities can bring IP and this renewable source of talent generation, and being able to stitch together all of those pieces can create some really interesting unlocks for startups to grow. But again, also this broader entrepreneurship and innovation ecosystem to really be able to thrive.

Newlab has been thrilled with their partnership in Southeast Michigan, and I think it’s a model that can be tailored across the state so that, depending on what assets are available in your backyard, you can make sure that you can best harness those for future growth.

Megan: Fantastic. What’s the long-term vision for the state’s innovation landscape when you think about it in five, 10 years from now? What do you envisage?

Ben: Amazing question. This is probably what I get most excited about. Earlier we talked about the Willow Run B-24 bomber plant; that is what made Michigan known as the arsenal of democracy back in the day. I want Michigan to be the arsenal of innovation. We’re not trying to recreate Silicon Valley, which does certain things; we’re not trying to recreate what El Segundo wants to do in hard tech, or New York City in FinTech, and all of these other things. We want to develop the thing that makes the most sense for the ingredients that Michigan can bring to bear to this challenge.

I think that becoming the Midwest arsenal of innovation is something that Michigan is very well poised to use as a springboard for the decades to come. I want us to be the default launch pad for building a hard tech company, a life sciences company, an agricultural tech company. You name it. If you’ve got a design prototype and want to mass produce something, you don’t want to hop coasts, and you want to be somewhere that has a tremendous quality of life, an affordable place, somewhere that government is at the table and willing to move fast, this is the place to do that.

That can be difficult to do in some of the more established ecosystems, especially post-covid, as a lot of them are going through really big transition periods. Michigan’s already a top 10 state for business. In the next 10 years, I want us to be a top 10 state for employment, a top 10 state for household median income, for post-secondary education attainment, and for net talent migration. Those are my four top tens that I want to see in the next 10 years. We covered a lot of topics today, and I think those are the reasons that I am super optimistic about being able to accomplish them.

Megan: Fantastic. Well, I’m tempted to move to Michigan, so I’m sure plenty of other people will be now, too. Thank you so much, Ben. That was really fascinating.

Ben: Thanks, Megan. Really delighted to be here.

Megan: That was Ben Marchionna, chief innovation ecosystem officer at the Michigan Economic Development Corporation, whom I spoke with from Brighton, England. 

That’s it for this episode of Business Lab. I’m your host, Megan Tatum, a contributing editor and host for Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts, and if you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks ever so much for listening.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

]]>
1117473
Battling next-gen financial fraud  https://www.technologyreview.com/2025/07/08/1119039/battling-next-gen-financial-fraud/ Tue, 08 Jul 2025 13:24:29 +0000 https://www.technologyreview.com/?p=1119039 From a cluster of call centers in Canada, a criminal network defrauded elderly victims in the US out of $21 million in total between 2021 and 2024. The fraudsters used voice over internet protocol technology to dupe victims into believing the calls came from their grandchildren in the US, customizing conversations using banks of personal data, including ages, addresses, and the estimated incomes of their victims. 

The proliferation of large language models (LLMs) has also made it possible to clone a voice with nothing more than an hour of YouTube footage and an $11 subscription. And fraudsters are using such tools to create increasingly sophisticated attacks that deceive victims with alarming success. But phone scams are just one way that bad actors are weaponizing technology to refine and scale attacks. 

Synthetic identity fraud now costs banks $6 billion a year, making it the fastest-growing financial crime in the US. Criminals are able to exploit personal data breaches to fabricate “Frankenstein IDs.” Cheap credential-stuffing software can be used to test thousands of stolen credentials across multiple platforms in a matter of minutes. And text-to-speech tools powered by AI can bypass voice authentication systems with ease. 

“Technology is both catalyzing and transformative,” says John Pitts, head of industry relations and digital trust at Plaid. “Catalyzing in that it has accelerated and made more intense longstanding types of fraud. And transformative in that it has created windows for new, scaled-up types of fraud.” 

Fraudsters can use AI tools to multiply many times over the number of attack vectors—the entry points or pathways that attackers can use to infiltrate a network or system. In advance-fee scams, for instance, where fraudsters pose as benefactors gifting large sums in exchange for an upfront fee, scammers can use AI to identify victims at a far greater rate and at a much lower cost than ever before. They can then use AI tools to carry out tens of thousands, if not millions, of simultaneous digital conversations. 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

]]>
1119039
The Download: hunting an asteroid, and unlocking the human mind https://www.technologyreview.com/2025/07/08/1119840/the-download-hunting-an-asteroid-and-unlocking-the-human-mind/ Tue, 08 Jul 2025 12:10:00 +0000 https://www.technologyreview.com/?p=1119840 This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Inside the most dangerous asteroid hunt ever

If you were told that the odds of something were 3.1%, it might not seem like much. But for the people charged with protecting our planet, it was huge.

On February 18, astronomers determined that a 130- to 300-foot-long asteroid had a 3.1% chance of crashing into Earth in 2032. Never had an asteroid of such dangerous dimensions stood such a high chance of striking the planet. Then, just days later on February 24, experts declared that the danger had passed. Earth would be spared.

How did they do it? What was it like to track the rising danger of this asteroid, and to ultimately determine that it’d miss us?

This is the inside story of how a sprawling network of astronomers found, followed, mapped, planned for, and finally dismissed the most dangerous asteroid ever found—all under the tightest of timelines and, for just a moment, with the highest of stakes. Read the full story.

—Robin George Andrews

This article is part of the Big Story series: MIT Technology Review’s most important, ambitious reporting. The stories in the series take a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.

How scientists are trying to use AI to unlock the human mind 

Today’s AI landscape is defined by the ways in which neural networks are unlike human brains. A toddler learns how to communicate effectively with only a thousand calories a day and regular conversation; meanwhile, tech companies are reopening nuclear power plants, polluting marginalized communities, and pirating terabytes of books in order to train and run their LLMs.

Despite that, it’s a common view among neuroscientists that building brainlike neural networks is one of the most promising paths for the field, and that attitude has started to spread to psychology. 

Last week, the prestigious journal Nature published a pair of studies showcasing the use of neural networks for predicting how humans and other animals behave in psychological experiments. However, predicting a behavior and explaining how it came about are two very different things. Read the full story.

—Grace Huckins

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get it in your inbox first every Monday, sign up here.

Why the US and Europe could lose the race for fusion energy

—Daniel F. Brunner, Edlyn V. Levine, Fiona E. Murray, & Rory Burke

Fusion energy holds the potential to shift a geopolitical landscape that is currently configured around fossil fuels. Harnessing fusion will deliver the energy resilience, security, and abundance needed for all modern industrial and service sectors.

But these benefits will be controlled by the nation that leads in both developing the complex supply chains required and building fusion power plants at scales large enough to drive down economic costs. 

Investing in supply chains and scaling up complex production processes has increasingly been a strength of China’s and a weakness of the West, resulting in the migration of many critical industries from the West to China. With fusion, we run the risk that history will repeat itself. But it does not have to go that way. Read the full story.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has announced a range of new tariffs  
Southeast Asia has been hit particularly hard. (Reuters)
+ Some tariffs on other countries have been delayed until next month. (Vox)
+ Investors are hoping to weather the storm. (Insider $)
+ Sweeping tariffs could threaten the US manufacturing rebound. (MIT Technology Review)

2 Ukraine’s fiber-optic drones are giving it the edge over Russia
The drones are impervious to electronic attacks. (WSJ $)
+ Trump is resuming sending arms to Ukraine. (CNN)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

3 OpenAI is seriously scared about spies
It’s upped its security dramatically amid fears of corporate espionage. (FT $)
+ Inside the story that enraged OpenAI. (MIT Technology Review)

4 Amazon is asking its corporate staff to volunteer in its warehouses
It’s in desperate need of extra hands to help during its Prime Day event. (The Guardian)

5 Google’s AI-created drugs are almost ready for human trials
Isomorphic Labs has been working on drugs to tackle cancer. (Fortune $)
+ An AI-driven “factory of drugs” claims to have hit a big milestone. (MIT Technology Review)

6 Apple’s AI ambitions have suffered yet another setback
Their executive in charge of AI models has been wooed by Meta. (Bloomberg $)
+ Ruoming Pang’s pay package is likely to be in the tens of millions. (WSJ $)

7 Waymo’s robotaxis are heading to NYC
But its “road trip” announcement is no guarantee it’ll launch there. (TechCrunch)

8 Brands don’t need influencers any more
They’re doing just fine producing their own in-house social media videos. (NYT $)

9 We may age in rapid bursts, rather than a steady decline
New research could shed light on how to slow the process down. (New Scientist $)
+ Aging hits us in our 40s and 60s. But well-being doesn’t have to fall off a cliff. (MIT Technology Review)

10 This open-source software fights back against AI bots
Anubis protects sites from scrapers. (404 Media)
+ Cloudflare will now, by default, block AI bots from crawling its clients’ websites. (MIT Technology Review)

Quote of the day

“I think we’ve all had enough of Elon’s political errors and political opinions.”

—Ross Gerber, an investor who was formerly an enthusiastic backer of Elon Musk, tells the Washington Post he wishes the billionaire would simply focus on Tesla.

One more thing

How Silicon Valley is disrupting democracy

The internet loves a good neologism, especially if it can capture a purported vibe shift or explain a new trend. In 2013, the columnist Adrian Wooldridge coined a word that eventually did both. Writing for the Economist, he warned of the coming “techlash,” a revolt against Silicon Valley’s rich and powerful, fueled by the public’s growing realization that these “sovereigns of cyberspace” weren’t the benevolent bright-future bringers they claimed to be.

While Wooldridge didn’t say precisely when this techlash would arrive, it’s clear today that a dramatic shift in public opinion toward Big Tech and its leaders did in fact ­happen—and is arguably still happening. It’s worth investigating why, and what we can do to start taking some of that power back. Read the full story.

—Bryan Gardiner

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Struggling to solve a problem? It’s time to take a nap.
+ If any TV show has better midcentury decor than Mad Men, I’ve yet to see it.
+ Sir Antony Gormley’s arresting iron men sculptures have been a fixture on Crosby Beach in the UK for 20 years.
+ Check out this definitive Planet of the Apes timeline.

]]>
1119840
Why the US and Europe could lose the race for fusion energy https://www.technologyreview.com/2025/07/08/1119630/why-the-us-and-the-west-could-lose-the-race-for-fusion-energy/ Tue, 08 Jul 2025 10:00:00 +0000 https://www.technologyreview.com/?p=1119630 Fusion energy holds the potential to shift a geopolitical landscape that is currently configured around fossil fuels. Harnessing fusion will deliver the energy resilience, security, and abundance needed for all modern industrial and service sectors. But these benefits will be controlled by the nation that leads in both developing the complex supply chains required and building fusion power plants at scales large enough to drive down economic costs.

The US and other Western countries will have to build strong supply chains across a range of technologies in addition to creating the fundamental technology behind practical fusion power plants. Investing in supply chains and scaling up complex production processes has increasingly been a strength of China’s and a weakness of the West, resulting in the migration of many critical industries from the West to China. With fusion, we run the risk that history will repeat itself. But it does not have to go that way.

The US and Europe were the dominant public funders of fusion energy research and are home to many of the world’s pioneering private fusion efforts. The West has consequently developed many of the basic technologies that will make fusion power work. But in the past five years China’s support of fusion energy has surged, threatening to allow the country to dominate the industry.

The industrial base available to support China’s nascent fusion energy industry could enable it to climb the learning curve much faster and more effectively than the West. Commercialization requires know-how, capabilities, and complementary assets, including supply chains and workforces in adjacent industries. And especially in comparison with China, the US and Europe have significantly under-supported the industrial assets needed for a fusion industry, such as thin-film processing and power electronics.

To compete, the US, allies, and partners must invest more heavily not only in fusion itself—which is already happening—but also in those adjacent technologies that are critical to the fusion industrial base. 

China’s trajectory to dominating fusion and the West’s potential route to competing can be understood by looking at today’s most promising scientific and engineering pathway to achieve grid-relevant fusion energy. That pathway relies on the tokamak, a technology that uses a magnetic field to confine ionized gas—called plasma—and ultimately fuse nuclei. This process releases energy that is converted from heat to electricity. Tokamaks consist of several critical systems, including plasma confinement and heating, fuel production and processing, blankets and heat flux management, and power conversion.

A close look at the adjacent industries needed to build these critical systems clearly shows China’s advantage while also providing a glimpse into the challenges of building a fusion industrial base in the US or Europe. China has leadership in three of these six key industries, and the West is at risk of losing leadership in two more. China’s industrial might in thin-film processing, large metal-alloy structures, and power electronics provides a strong foundation to establish the upstream supply chain for fusion.

The importance of thin-film processing is evident in the plasma confinement system. Tokamaks use strong electromagnets to keep the fusion plasma in place, and the magnetic coils must be made from superconducting materials. Rare-earth barium copper oxide (REBCO) superconductors are the highest-performing materials available in sufficient quantity to be viable for use in fusion.

The REBCO industry, which relies on thin-film processing technologies, currently has low production volumes spanning globally distributed manufacturers. However, as the fusion industry grows, the manufacturing base for REBCO will likely consolidate among the industry players who are able to rapidly take advantage of economies of scale. China is today’s world leader in thin-film, high-volume manufacturing for solar panels and flat-panel displays, with the associated expert workforce, tooling sector, infrastructure, and upstream materials supply chain. Without significant attention and investment on the part of the West, China is well positioned to dominate REBCO thin-film processing for fusion magnets.

The electromagnets in a full-scale tokamak are as tall as a three-story building. Structures made using strong metal alloys are needed to hold these electromagnets around the large vacuum vessel that physically contains the magnetically confined plasma. Similar large-scale, complex metal structures are required for shipbuilding, aerospace, oil and gas infrastructure, and turbines. But fusion plants will require new versions of the alloys that are radiation-tolerant, able to withstand cryogenic temperatures, and corrosion-resistant. China’s manufacturing capacity and its metallurgical research efforts position it well to outcompete other global suppliers in making the necessary specialty metal alloys and machining them into the complex structures needed for fusion.

A tokamak also requires large-scale power electronics. Here again China dominates. Similar systems are found in the high-speed rail (HSR) industry, renewable microgrids, and arc furnaces. As of 2024, China had deployed over 48,000 kilometers of HSR. That is three times the length of Europe’s HSR network and 55 times as long as the Acela network in the US, which is slower than HSR. While other nations have a presence, China’s expertise is more recent and is being applied on a larger scale.

But this is not the end of the story. The West still has an opportunity to lead the other three adjacent industries important to the fusion supply chain: cryo-plants, fuel processing, and blankets. 

The electromagnets in an operational tokamak need to be kept at cryogenic temperatures of around 20 Kelvin to remain superconducting. This requires large-scale, multi-megawatt cryogenic cooling plants. Here, the country best set up to lead the industry is less clear. The two major global suppliers of cryo-plants are Europe-based Linde Engineering and Air Liquide Engineering; the US has Air Products and Chemicals and Chart Industries. But they are not alone: China’s domestic champions in the cryogenic sector include Hangyang Group, SASPG, Kaifeng Air Separation, and SOPC. Each of these regions already has an industrial base that could scale up to meet the demands of fusion.

Fuel production for fusion is a nascent part of the industrial base requiring processing technologies for light-isotope gases—hydrogen, deuterium, and tritium. Some processing of light-isotope gases is already done at small scale in medicine, hydrogen weapons production, and scientific research in the US, Europe, and China. But the scale needed for the fusion industry does not exist in today’s industrial base, presenting a major opportunity to develop the needed capabilities.

Similarly, blankets and heat flux management are an opportunity for the West. The blanket is the medium used to absorb energy from the fusion reaction and to breed tritium. Commercial-scale blankets will require entirely novel technology. To date, no adjacent industries have relevant commercial expertise in liquid lithium, lead-lithium eutectic, or fusion-specific molten salts that are required for blanket technology. Some overlapping blanket technologies are in early-stage development by the nuclear fission industry. As the largest producer of beryllium in the world, the US has an opportunity to capture leadership because that element is a key material in leading fusion blanket concepts. But the use of beryllium must be coupled with technology development programs for the other specialty blanket components.

These six industries will prove critical to scaling fusion energy. In some, such as thin-film processing and large metal-alloy structures, China already has a sizable advantage. Crucially, China recognizes the importance of these adjacent industries and is actively harnessing them in its fusion efforts. For example, China launched a fusion consortium that consists of industrial giants spanning the steel, machine tooling, electric grid, power generation, and aerospace sectors. It will be extremely difficult for the West to catch up in these areas, but policymakers and business leaders must pay attention and try to create robust alternative supply chains.

As the industrial area of greatest strength, cryo-plants could continue to be an opportunity for leadership in the West. Bolstering Western cryo-plant production by creating demand for natural-gas liquefaction will be a major boon to the future cryo-plant supply chain that will support fusion energy.

The US and European countries also have an opportunity to lead in the emerging industrial areas of fuel processing and blanket technologies. Doing so will require policymakers to work with companies to ensure that public and private funding is allocated to these critical emerging supply chains. Governments may well need to serve as early customers and provide debt financing for significant capital investment. Governments can also do better to incentivize private capital and equity financing—for example, through favorable capital-gains taxation. In lagging areas of thin-film and alloy production, the US and Europe will likely need partners, such as South Korea and Japan, that have the industrial bases to compete globally with China.

The need to connect and capitalize multiple industries and supply chains will require long-term thinking and clear leadership. A focus on the demand side of these complementary industries is essential. Fusion is a decade away from maturation, so its supplier base must be derisked and made profitable in the near term by focusing on other primary demand markets that contribute to our economic vitality. To name a few, policymakers can support modernization of the grid to bolster domestic demand for power electronics and domestic semiconductor manufacturing to support thin-film processing.

The West must also focus on the demand for energy production itself. As the world’s largest energy consumer, China will leverage demand from its massive domestic market to climb the learning curve and bolster national champions. This is a strategy that China has wielded with tremendous success to dominate global manufacturing, most recently in the electric-vehicle industry. Taken together, supply- and demand-side investment have been a winning strategy for China.

The competition to lead the future of fusion energy is here. Now is the moment for the US and its Western allies to start investing in the foundational innovation ecosystem needed for a vibrant and resilient industrial base to support it.

Daniel F. Brunner is a co-founder of Commonwealth Fusion Systems and a Partner at Future Tech Partners.

Edlyn V. Levine is the co-founder of a stealth-mode technology startup and an affiliate of the MIT Sloan School of Management.

Fiona E. Murray is a professor of entrepreneurship at the MIT Sloan School of Management and Vice Chair of the NATO Innovation Fund.

Rory Burke is a graduate of MIT Sloan and a former summer scholar with ARPA-E.

]]>
1119630
How scientists are trying to use AI to unlock the human mind  https://www.technologyreview.com/2025/07/08/1119777/scientists-use-ai-unlock-human-mind/ Tue, 08 Jul 2025 09:30:00 +0000 https://www.technologyreview.com/?p=1119777 Today’s AI landscape is defined by the ways in which neural networks are unlike human brains. A toddler learns how to communicate effectively with only a thousand calories a day and regular conversation; meanwhile, tech companies are reopening nuclear power plants, polluting marginalized communities, and pirating terabytes of books in order to train and run their LLMs.

But neural networks are, after all, neural—they’re inspired by brains. Despite their vastly different appetites for energy and data, large language models and human brains do share a good deal in common. They’re both made up of millions of subcomponents: biological neurons in the case of the brain, simulated “neurons” in the case of networks. They’re the only two things on Earth that can fluently and flexibly produce language. And scientists barely understand how either of them works.

I can testify to those similarities: I came to journalism, and to AI, by way of six years of neuroscience graduate school. It’s a common view among neuroscientists that building brainlike neural networks is one of the most promising paths for the field, and that attitude has started to spread to psychology. Last week, the prestigious journal Nature published a pair of studies showcasing the use of neural networks for predicting how humans and other animals behave in psychological experiments. Both studies propose that these trained networks could help scientists advance their understanding of the human mind. But predicting a behavior and explaining how it came about are two very different things.

In one of the studies, researchers transformed a large language model into what they refer to as a “foundation model of human cognition.” Out of the box, large language models aren’t great at mimicking human behavior—they behave logically in settings where humans abandon reason, such as casinos. So the researchers fine-tuned Llama 3.1, one of Meta’s open-source LLMs, on data from a range of 160 psychology experiments, which involved tasks like choosing from a set of “slot machines” to get the maximum payout or remembering sequences of letters. They called the resulting model Centaur.

Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.

But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.

Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.

One alternative approach is to go small. The second of the two Nature studies focuses on minuscule neural networks—some containing only a single neuron—that nevertheless can predict behavior in mice, rats, monkeys, and even humans. Because the networks are so small, it’s possible to track the activity of each individual neuron and use that data to figure out how the network is producing its behavioral predictions. And while there’s no guarantee that these models function like the brains they were trained to mimic, they can, at the very least, generate testable hypotheses about human and animal cognition.

There’s a cost to comprehensibility. Unlike Centaur, which was trained to mimic human behavior in dozens of different tasks, each tiny network can only predict behavior in one specific task. One network, for example, is specialized for making predictions about how people choose among different slot machines. “If the behavior is really complex, you need a large network,” says Marcelo Mattar, an assistant professor of psychology and neural science at New York University who led the tiny-network study and also contributed to Centaur. “The compromise, of course, is that now understanding it is very, very difficult.”

This trade-off between prediction and understanding is a key feature of neural-network-driven science. (I also happen to be writing a book about it.) Studies like Mattar’s are making some progress toward closing that gap—as tiny as his networks are, they can predict behavior more accurately than traditional psychological models. So is the research into LLM interpretability happening at places like Anthropic. For now, however, our understanding of complex systems—from humans to climate systems to proteins—is lagging farther and farther behind our ability to make predictions about them.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

]]>
1119777
Inside the most dangerous asteroid hunt ever https://www.technologyreview.com/2025/07/08/1119757/asteroid-hunt-2024-yr4-earth-planet-protection/ Tue, 08 Jul 2025 08:30:00 +0000 https://www.technologyreview.com/?p=1119757 If you were told that the odds of something were 3.1%, it really wouldn’t seem like much. But for the people charged with protecting our planet, it was huge. 

On February 18, astronomers determined that a 130- to 300-foot-long asteroid had a 3.1% chance of crashing into Earth in 2032. Never had an asteroid of such dangerous dimensions stood such a high chance of striking the planet. For those following this developing story in the news, the revelation was unnerving. For many scientists and engineers, though, it turned out to be—despite its seriousness—a little bit exciting.

While possible impact locations included patches of empty ocean, the space rock, called 2024 YR4, also had several densely populated cities in its possible crosshairs, including Mumbai, Lagos, and Bogotá. If the asteroid did in fact hit such a metropolis, the best-case scenario was severe damage; the worst case was outright, total ruin. And for the first time, a group of United Nations–backed researchers began to have high-level discussions about the fate of the world: If this asteroid was going to hit the planet, what sort of spaceflight mission might be able to stop it? Would they ram a spacecraft into it to deflect it? Would they use nuclear weapons to try to swat it away or obliterate it completely? 

At the same time, planetary defenders all over the world crewed their battle stations to see if we could avoid that fate—and despite the sometimes taxing new demands on their psyches and schedules, they remained some of the coolest customers in the galaxy. “I’ve had to cancel an appointment saying, I cannot come—I have to save the planet,” says Olivier Hainaut, an astronomer at the European Southern Observatory and one of those who tracked down 2024 YR4. 

Then, just as quickly as history was made, experts declared that the danger had passed. On February 24, asteroid trackers issued the all-clear: Earth would be spared, just as many planetary defense researchers had felt assured it would. 

How did they do it? What was it like to track the rising (and rising and rising) danger of this asteroid, and to ultimately determine that it’d miss us?

This is the inside story of how, over a span of just two months, a sprawling network of global astronomers found, followed, mapped, planned for, and finally dismissed 2024 YR4, the most dangerous asteroid ever found—all under the tightest of timelines and, for just a moment, with the highest of stakes. 

“It was not an exercise,” says Hainaut. This was the real thing: “We really [had] to get it right.”


IN THE BEGINNING

December 27, 2024

THE ASTEROID TERRESTRIAL-IMPACT LAST ALERT SYSTEM, HAWAII

Long ago, an asteroid in the space-rock highway between Mars and Jupiter felt a disturbance in the force: the gravitational pull of Jupiter itself, king of the planets. After some wobbling back and forth, this asteroid was thrown out of the belt, skipped around the sun, and found itself on an orbit that overlapped with Earth’s own. 

“I was the first one to see the detections of it,” Larry Denneau, of the University of Hawai‘i, recalls. “A tiny white pixel on a black background.” 

Denneau is one of the principal investigators at the NASA-funded Asteroid Terrestrial-impact Last Alert System (ATLAS) telescopic network. It may have been just two days after Christmas, but he followed procedure as if it were any other day of the year and sent the observations of the tiny pixel onward to another NASA-funded facility, the Minor Planet Center (MPC) in Cambridge, Massachusetts. 

There’s an alternate reality in which none of this happened. Fortunately, in our timeline, various space agencies—chiefly NASA, but also the European Space Agency and the Japan Aerospace Exploration Agency—invest millions of dollars every year in asteroid-spotting efforts. 

And while multiple nations host observatories capable of performing this work, the US clearly leads the way: Its planetary defense program provides funding to a suite of telescopic facilities solely dedicated to identifying potentially hazardous space rocks. (At least, it leads the way for the moment. The White House’s proposal for draconian budget cuts to NASA and the National Science Foundation mean that several observatories and space missions linked to planetary defense are facing funding losses or outright terminations.) 

Astronomers working at these observatories are tasked with finding threatening asteroids before they find us—because you can’t fight what you can’t see. “They are the first line of planetary defense,” says Kelly Fast, the acting planetary defense officer at NASA’s Planetary Defense Coordination Office in Washington, DC.

ATLAS is one part of this skywatching project, and it consists of four telescopes: two in Hawaii, one in Chile, and another in South Africa. They don’t operate the way you’d think, with astronomers peering through them all night. Instead, they operate “completely robotically and automatically,” says Denneau. Driven by coding scripts that he and his colleagues have developed, these mechanical eyes work in harmony to watch out for any suspicious space rocks. Astronomers usually monitor their survey of the sky from a remote location.

ATLAS telescopes are small, so they can’t see particularly distant objects. But they have a wide field of view, allowing them to see large patches of space at any one moment. “As long as the weather is good, we’re constantly monitoring the night sky, from the North Pole to the South Pole,” says Denneau. 

Larry Denneau
Larry Denneau is a principal investigator at the Asteroid Terrestrial-impact Last Alert System telescopic network.
COURTESY PHOTO

If they detect the starlight reflecting off a moving object, an operator, such as Denneau, gets an alert and visually verifies that the object is real and not some sort of imaging artifact. When a suspected asteroid (or comet) is identified, the observations are sent to the MPC, which is home to a bulletin board featuring (among other things) orbital data on all known asteroids and comets. 

If the object isn’t already listed, a new discovery is announced, and other astronomers can perform follow-up observations. 

In just the past few years, ATLAS has detected more than 1,200 asteroids with near-Earth orbits. Finding ultimately harmless space rocks is routine work—so much so that when the new near-Earth asteroid was spotted by ATLAS’s Chilean telescope that December day, it didn’t even raise any eyebrows. 

Denneau had simply been sitting at home, doing some late-night work on his computer. At the time, of course, he didn’t know that his telescope had just spied what would soon become a history-making asteroid—one that could alter the future of the planet.

The MPC quickly confirmed the new space rock hadn’t already been “found,” and astronomers gave it a provisional designation: 2024 YR4.

CATALINA SKY SURVEY, ARIZONA

Around the same time, the discovery was shared with another NASA-funded facility: the Catalina Sky Survey, a nest of three telescopes in the Santa Catalina Mountains north of Tucson that works out of the University of Arizona. “We run a very tight operation,” says Kacper Wierzchoś, one of its comet and asteroid spotters. Unlike ATLAS, these telescopes (although aided by automation) often have an in-person astronomer available to quickly alter the surveys in real time.

“We run a very tight operation,” says Kacper Wierzchoś, one of the comet and asteroid spotters at the Catalina Sky Survey north of Tucson, Arizona.
COURTESY PHOTO

So when Catalina was alerted about what its peers at ATLAS had spotted, staff deployed its Schmidt telescope—a smaller one that excels at seeing bright objects moving extremely quickly. As they fed their own observations of 2024 YR4 to the MPC, Catalina engineer David Rankin looked back over imagery from the previous days and found the new asteroid lurking in a night-sky image taken on December 26. Around then, ATLAS also realized that it had caught sight of 2024 YR4 in a photograph from December 25. 

The combined observations confirmed it: The asteroid had made its closest approach to Earth on Christmas Day, meaning it was already heading back out into space. But where, exactly, was this space rock going? Where would it end up after it swung around the sun? 

CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA 

If the answer to that question was Earth, Davide Farnocchia would be one of the first to know. You could say he’s one of NASA’s watchers on the wall. 

And he’s remarkably calm about his duties. When he first heard about 2024 YR4, he barely flinched. It was just another asteroid drifting through space not terribly far from Earth. It was another box to be ticked.

Once it was logged by the MPC, it was Farnocchia’s job to try to plot out 2024 YR4’s possible paths through space, checking to see if any of them overlapped with our planet’s. He works at NASA’s Center for Near-Earth Object Studies (CNEOS) in California, where he’s partly responsible for keeping track of all the known asteroids and comets in the solar system. “We have 1.4 million objects to deal with,” he says, matter-of-factly. 

In the past, astronomers would have had to stitch together multiple images of this asteroid and plot out its possible trajectories. Today, fortunately, Farnocchia has some help: He oversees the digital brain Sentry, an autonomous system he helped code. (Two other facilities in Italy perform similar work: the European Space Agency’s Near-Earth Object Coordination Centre, or NEOCC, and the privately owned Near-Earth Objects Dynamics Site, or NEODyS.)

To chart their courses, Sentry uses every new observation of every known asteroid or comet listed on the MPC to continuously refine the orbits of all those objects, using the immutable laws of gravity and the gravitational influences of any planets, moons, or other sizable asteroids they pass. A recent update to the software means that even the ever-so-gentle push afforded by sunlight is accounted for. That allows Sentry to confidently project the motions of all these objects at least a century into the future. 
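Sentry’s core task described above—propagating an orbit forward under gravity—can be sketched in miniature. A real system integrates the pulls of every planet and moon plus the gentle push of sunlight; the toy below propagates a test body around the Sun alone, using a leapfrog integrator. The constants are standard values, but the setup is invented for illustration and is not Sentry’s actual algorithm.

```python
import math

GM_SUN = 1.327e20  # gravitational parameter of the Sun, m^3/s^2
AU = 1.496e11      # astronomical unit, m

def accel(pos):
    # Point-mass gravity toward the Sun at the origin.
    r = math.hypot(pos[0], pos[1])
    return (-GM_SUN * pos[0] / r**3, -GM_SUN * pos[1] / r**3)

def propagate(pos, vel, dt, steps):
    # Leapfrog (kick-drift-kick): a simple integrator that stays
    # stable over long orbital integrations.
    ax, ay = accel(pos)
    for _ in range(steps):
        vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)
        pos = (pos[0] + dt * vel[0], pos[1] + dt * vel[1])
        ax, ay = accel(pos)
        vel = (vel[0] + 0.5 * dt * ax, vel[1] + 0.5 * dt * ay)
    return pos, vel

# Start on a circular 1-AU orbit and integrate for one year in
# one-hour steps:
v_circ = math.sqrt(GM_SUN / AU)
pos, vel = propagate((AU, 0.0), (0.0, v_circ), dt=3600.0, steps=24 * 365)

# After roughly one orbital period the body ends up back near its
# starting point, still at about 1 AU from the Sun.
print(math.hypot(pos[0], pos[1]) / AU)
```

In practice the interesting part is running thousands of such propagations, one per plausible orbit consistent with the observations, and counting how many pass through Earth.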

Davide Farnocchia
Davide Farnocchia helps track all the known asteroids and comets in the solar system at NASA’s Center for Near-Earth Object Studies.
COURTESY PHOTO

Almost all newly discovered asteroids are quickly found to pose no impact risk. But those that stand even an infinitesimally small chance of smashing into our planet within the next 100 years are placed on the Sentry Risk List until additional observations can rule out those awful possibilities. Better safe than sorry. 

In late December, with just a limited set of data, Sentry concluded that there was a non-negligible chance 2024 YR4 would strike Earth in 2032. Aegis, the equivalent software at Europe’s NEOCC site, agreed. No bother. More observations would very likely remove 2024 YR4 from the Risk List. Just another day at the office for Farnocchia.

It’s worth noting that an asteroid heading toward Earth isn’t always a problem. Small rocks burn up in the planet’s atmosphere several times a day; you’ve probably seen one already this year, on a moonless night. But above a certain size, these rocks turn from innocuous shooting stars into nuclear-esque explosions. 

Reflected starlight is great for initially spotting asteroids, but it’s a terrible way to determine how big they are. A large, dull rock reflects as much light as a bright, tiny rock, making them appear the same to many telescopes. And that’s a problem, considering that a rock around 30 feet long will explode loudly but inconsequentially in Earth’s atmosphere, while a 3,000-foot-long asteroid would slam into the ground and cause devastation on a global scale, imperiling all of civilization. Roughly speaking, if you double the size of an asteroid, it becomes eight times more energetic upon impact—so finding out the size of an Earthbound asteroid is of paramount importance.
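The cube-law arithmetic above can be checked directly: impact energy scales with mass, and mass scales with the cube of the diameter. The density and speed below are typical round numbers chosen for illustration, not figures from the article; the ratio is what matters.

```python
import math

def impact_energy_joules(diameter_m, density_kg_m3=3000.0, speed_m_s=17000.0):
    """Kinetic energy of a spherical asteroid: E = (1/2) m v^2."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3
    return 0.5 * mass * speed_m_s**2

e_small = impact_energy_joules(100.0)   # 100 m across
e_double = impact_energy_joules(200.0)  # twice the diameter

# Doubling the diameter multiplies the energy by 2^3 = 8.
print(e_double / e_small)  # → 8.0
```

The same arithmetic explains why a 3,000-foot asteroid is not merely 100 times worse than a 30-foot one, but a million times more energetic.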

In those first few hours after it was discovered, and before anyone knew how shiny or dull its surface was, 2024 YR4 was estimated by astronomers to be as small as 65 feet across or as large as 500 feet. An object of the former size would blow up in mid-air, shattering windows over many miles and likely injuring thousands of people. At the latter size it would vaporize the heart of any city it struck, turning solid rock and metal into liquid and vapor, while its blast wave would devastate the rest of it, killing hundreds of thousands or even millions in the process. 

So now the question was: Just how big was 2024 YR4?


REFINING THE PICTURE

Mid-January 2025

VERY LARGE TELESCOPE, CHILE

Understandably dissatisfied with that level of imprecision, the European Southern Observatory’s Very Large Telescope (VLT), high up on the Cerro Paranal mountain in Chile’s Atacama Desert, entered the chat. As the name suggests, this flagship facility is vast, and it’s capable of really zooming in on distant objects. Or to put it another way: “The VLT is the largest, biggest, best telescope in the world,” says Hainaut, one of the facility’s operators, who usually commands it from half a world away in Germany.  

In reality, the VLT—which lends a hand to the European Space Agency in its asteroid-hunting duties—is actually made up of four massive telescopes, each fixed on four separate corners of the sky. They can be combined to act as a huge light bucket, allowing astronomers to see very faint asteroids. Four additional, smaller, movable telescopes can also team up with their bigger siblings to provide remarkably high-resolution images of even the stealthiest space rocks. 

In this sequence of infrared images taken by ESO’s VLT, the individual image frames have been aligned so that the asteroid remains in the center as other stars appear to move around it.
ESO/O. HAINAUT ET AL.

With so much tech to oversee, the control room of the VLT looks a bit like the inside of the Death Star. “You have eight consoles, each of them with a dozen screens. It’s big, it’s large, it’s spectacular,” says Hainaut. 

In mid-January, the European Space Agency asked the VLT to study several asteroids that had somewhat suspicious near-Earth orbits—including 2024 YR4. With just a few lines of code, the VLT could easily train its sharp eyes on an asteroid like 2024 YR4, allowing astronomers to narrow down its size range. It was found to be at least 130 feet long (big enough to cause major damage in a city) and as much as 300 feet (able to annihilate one).

January 29, 2025

INTERNATIONAL ASTEROID WARNING NETWORK
Marco Fenucci
Marco Fenucci is a near-Earth-object dynamicist at the European Space Agency’s Near-Earth Object Coordination Centre.
COURTESY PHOTO

By the end of the month, there was no mistaking it: 2024 YR4 stood a greater than 1% chance of impacting Earth on December 22, 2032. 

“It’s not something you see very often,” says Marco Fenucci, a near-Earth-object dynamicist at NEOCC. He admits that although it was “a serious thing,” this escalation was also “exciting to see”—something straight out of a sci-fi flick.

Sentry and Aegis, along with the systems at NEODyS, had been checking one another’s calculations. “There was a lot of care,” says Farnocchia, who explains that even though their programs worked wonders, their predictions were manually verified by multiple experts. When a rarity like 2024 YR4 comes along, he says, “you kind of switch gears, and you start being more cautious. You start screening everything that comes in.”

At this point, the klaxon emanating from these three data centers pushed the International Asteroid Warning Network (IAWN), a UN-backed planetary defense awareness group, to issue a public alert to the world’s governments: The planet may be in peril. For the most part, it was at this moment that the media—and the wider public—became aware of the threat. Earth, we may have a problem.

Denneau, along with plenty of other astronomers, received an urgent email from Fast at NASA’s Planetary Defense Coordination Office, requesting that all capable observatories track this hazardous asteroid. But there was one glaring problem. When 2024 YR4 was discovered on December 27, it was already two days after it had made its closest approach to Earth. And since it was heading back out into the shadows of space, it was quickly fading from sight.

Once it gets too faint, “there’s not much ATLAS can do,” Denneau says. By the time of IAWN’s warning, planetary defenders had just weeks to try to track 2024 YR4 and refine the odds of its hitting Earth before they’d lose it to the darkness. 

And if their scopes failed, the odds of an Earth impact would have stayed uncomfortably high until 2028, when the asteroid was due to make another flyby of the planet. That’d be just four short years before the space rock might actually hit.

“In that situation, we would have been … in trouble,” says NEOCC’s Fenucci.

The hunt was on.


PREPARING FOR THE WORST

February 5 and February 6, 2025

SPACE MISSION PLANNING ADVISORY GROUP, AUSTRIA

In early February, spaceflight mission specialists, including those at the UN-supported Space Mission Planning Advisory Group in Vienna, began high-level talks designed to sketch out ways in which 2024 YR4 could be either deflected away from Earth or obliterated—you know, just in case.

A range of options were available—including ramming it with several uncrewed spacecraft or assaulting it with nuclear weapons—but there was no silver bullet in this situation. Nobody had ever launched a nuclear explosive device into deep space before, and the geopolitical ramifications of any nuclear-armed nations doing so in the present day would prove deeply unwelcome. Asteroids are also extremely odd objects; some, perhaps including 2024 YR4, are less like single chunks of rock and more akin to multiple cliffs flying in formation. Hit an asteroid like that too hard and you could fail to deflect it—and instead turn an Earthbound cannonball into a spray of shotgun pellets. 

It’s safe to say that early on, experts were concerned about whether they could prevent a potential disaster. Crucially, eight years was not actually much time to plan something of this scale. So they were keen to better pinpoint how likely, or unlikely, it was that 2024 YR4 was going to collide with the planet before any complex space mission planning began in earnest. 

The people involved with these talks—from physicists at some of America’s most secretive nuclear weapons research laboratories to spaceflight researchers over in Europe—were not feeling close to anything resembling panic. But “the timeline was really short,” admits Hainaut. So there was an unprecedented tempo to their discussions. This wasn’t a drill. This was the real deal. What would they do to defend the planet if an asteroid impact couldn’t be ruled out?

Luckily, over the next few days, a handful of new observations came in. Each helped Sentry, Aegis, and the system at NEODyS rule out more of 2024 YR4’s possible future orbits. Unluckily, Earth remained a potential port of call for this pesky asteroid—and over time, our planet made up a higher proportion of those remaining possibilities. That meant that the odds of an Earth impact “started bubbling up,” says Denneau. 

a telescope in each of the four corners points to an asteroid
EVA REDAMONTI

By February 6, they jumped to 2.3%—a one-in-43 chance of an impact. 

“How much anxiety someone should feel over that—it’s hard to say,” Denneau says, with a slight shrug. 

In the past, several elephantine asteroids have been found to stand a small chance of careening unceremoniously into the planet. Such incidents tend to follow a pattern. As more observations come in and the asteroid’s orbit becomes better known, an Earth impact trajectory remains a possibility while other, outlying orbits are eliminated from the calculations—so for a time, the odds of an impact rise. Finally, with enough observations in hand, it becomes clear that the space rock will miss our world entirely, and the impact odds plummet to zero.
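This pattern falls out of simple geometry: while Earth sits inside the shrinking window of possible trajectories, the impact probability is roughly Earth’s size divided by the window’s size, so it climbs as observations narrow the window—until Earth drops out of it entirely. A one-dimensional toy sketch, with every window width invented for illustration:

```python
# Earth presents a fixed ~12,700 km "target" along the line of
# possible crossing points; each new batch of observations shrinks
# the uncertainty window around the predicted crossing point.
EARTH_WIDTH_KM = 12_700

# (window_width_km, earth_still_inside) after successive observation
# campaigns -- illustrative values only, not real 2024 YR4 data.
windows = [(1_000_000, True), (550_000, True), (410_000, True), (80_000, False)]

odds = [EARTH_WIDTH_KM / width if inside else 0.0
        for width, inside in windows]

for (width, _), p in zip(windows, odds):
    print(f"window {width:>9,} km -> impact odds ~{p:.1%}")
```

The odds rise with each shrinking window, then collapse to zero the moment the window no longer contains Earth.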

Astronomers expected this to repeat itself with 2024 YR4. But there was no guarantee. There’s no escaping the fact that one day, sooner or later, scientists will discover a dangerous asteroid that will punch Earth in the face—and raze a city in the process. 

After all, asteroids capable of trashing a city have found their way to Earth plenty of times before, and not just in the very distant past. In 1908, an 800-square-mile patch of forest in Siberia—one that was, fortunately, very sparsely populated—was flattened by a space rock just 180 feet long. It didn’t even hit the ground; it exploded in midair with the force of a 15-megaton blast.

But only one other asteroid had ever beaten 2024 YR4’s 2.3% figure: in 2004, Apophis—capable of causing continental-scale damage—briefly stood a 2.7% chance of impacting Earth in 2029.

Rapidly approaching uncharted waters, the powers that be at NASA decided to play a space-based wild card: the James Webb Space Telescope, or JWST.

THE JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH

A large dull asteroid reflects the same amount of light as a small shiny one, but that doesn’t mean astronomers sizing up an asteroid are helpless. If you view both asteroids in the infrared, the larger one glows brighter than the smaller one no matter the surface coating—making infrared, or the thermal part of the electromagnetic spectrum, a much better gauge of a space rock’s proportions. 
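The size–albedo degeneracy described above, and why infrared breaks it, can be sketched with two proportionality rules: reflected visible flux scales with albedo times the cross-sectional area, while absorbed—and therefore thermally re-emitted—flux scales with the *non*-reflected fraction times the area. The specific sizes and albedos below are invented to make the two rocks look identical in visible light; geometry and temperature factors are dropped.

```python
def reflected_flux(diameter, albedo):
    # Visible brightness: proportional to area times the fraction
    # of sunlight reflected (arbitrary units).
    return albedo * diameter**2

def thermal_flux(diameter, albedo):
    # Infrared brightness: proportional to the sunlight absorbed,
    # i.e. the non-reflected fraction, re-emitted as heat.
    return (1.0 - albedo) * diameter**2

# A big dark rock and a small shiny one, tuned to match in
# reflected light (illustrative numbers):
big_dark = (300.0, 0.05)     # 300 units across, 5% reflective
small_shiny = (106.1, 0.40)  # ~106 units across, 40% reflective

print(reflected_flux(*big_dark), reflected_flux(*small_shiny))  # nearly equal
print(thermal_flux(*big_dark) / thermal_flux(*small_shiny))     # big rock far brighter in IR
```

In visible light the two are indistinguishable; in the infrared the large dark rock outshines the small shiny one by more than an order of magnitude, which is exactly the lever a thermal telescope uses to pin down an asteroid’s true size.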

Observatories on Earth do have infrared capabilities, but our planet’s atmosphere gets in their way, making it hard for them to offer highly accurate readings of an asteroid’s size. 

But the James Webb Space Telescope (JWST), hanging out in space, doesn’t have that problem. 

A collage of three images showing the black expanse of space. Two-thirds of the collage is taken up by the black background sprinkled with small, blurry galaxies in orange, blue, and white. There are two images in a column at the right side of the collage. On the right side of the main image, not far from the top, a very faint dot is outlined with a white square. At the right, there are two zoomed in views of this area. The top box is labeled NIRCam and shows a fuzzy dot at the center of the inset. The bottom box is labeled MIRI and shows a fuzzy pinkish dot.
Asteroid 2024 YR4 is the smallest object targeted by JWST to date, and one of the smallest objects to have its size directly measured. Observations were taken using both its NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument) to study the thermal properties of the asteroid.
NASA, ESA, CSA, A. RIVKIN (APL), A. PAGAN (STSCI)

This observatory, which sits at a gravitationally stable point about a million miles from Earth, is polymathic. Its sniper-like scope can see in the infrared and allows it to peer at the edge of the observable universe, meaning it can study galaxies that formed not long after the Big Bang. It can even look at the light passing through the atmospheres of distant planets to ascertain their chemical makeups. And its remarkably sharp eye means it can also track the thermal glow of an asteroid long after all ground-based telescopes lose sight of it.

In a fortuitous bit of timing, by the time 2024 YR4 came along, planetary defenders had recently reasoned that JWST could theoretically be used to track ominous asteroids with its own infrared scope, should the need arise. So after IAWN’s warning went out, JWST’s operators ran an analysis: Though the asteroid would vanish from most scopes by late March, JWST might be able to see the rock until sometime in May, which would allow researchers to greatly refine their assessment of the asteroid’s orbit and its odds of making Earth impact.

Understanding 2024 YR4’s trajectory was important, but “the size was the main motivator,” says Andy Rivkin, an astronomer at Johns Hopkins University’s Applied Physics Laboratory, who led the proposal to use JWST to observe the asteroid. The hope was that even if the impact odds remained high until 2028, JWST would find that 2024 YR4 was on the smaller side of the 130-to-300-feet size range—meaning it would still be a danger, but a far less catastrophic one. 

The JWST proposal was accepted by NASA on February 5. But the earliest it could conduct its observations was early March. And time really wasn’t on Earth’s side.

February 7, 2025

GEMINI SOUTH TELESCOPE, CHILE

“At this point, [2024 YR4] was too faint for the Catalina telescopes,” says Catalina’s Wierzchoś. “In our opinion, this was a big deal.” 

So Wierzchoś and his colleagues put in a rare emergency request to commandeer the Gemini Observatory, an internationally funded and run facility featuring two large, eagle-eyed telescopes—one in Chile and one atop Hawaii’s Mauna Kea volcano. Their request was granted, and on February 7, they trained the Chile-based Gemini South telescope onto 2024 YR4. 

This composite image was captured by a team of astronomers using the Gemini Multi-Object Spectrograph (GMOS). The hazy dot at the center is asteroid 2024 YR4.
INTERNATIONAL GEMINI OBSERVATORY/NOIRLAB/NSF/AURA/M. ZAMANI

The odds of Earth impact dropped ever so slightly, to 2.2%—a minor, but still welcome, development. 

Mid-February 2025

MAGDALENA RIDGE OBSERVATORY, NEW MEXICO

By this point, the roster of 2024 YR4 hunters also included the tiny team operating the Magdalena Ridge Observatory (MRO), which sits atop a tranquil mountain in New Mexico.

“It’s myself and my husband,” says Eileen Ryan, the MRO director. “We’re the only two astronomers running the telescope. We have a daytime technician. It’s kind of a mom-and-pop organization.” 

Still, the scope shouldn’t be underestimated. “We can see maybe a cell-phone-size object that’s illuminated at geosynchronous orbit,” Ryan says, referring to objects 22,000 miles away. But its most impressive feature is its mobility. While other observatories have slowly swiveling telescopes, MRO’s scope can move like the wind. “We can track the fastest objects,” she says, with a grin—noting that the telescope was built in part to watch missiles for the US Air Force. Its agility and long-distance vision explain why the Space Force is one of MRO’s major clients: It can be used to spy on satellites and spacecraft anywhere from low Earth orbit right out to the lunar regions. And that meant spying on the super-speedy, super-sneaky 2024 YR4 wasn’t a problem for MRO, whose own observations were vital in refining the asteroid’s impact odds.

Dr Eileen Ryan
Eileen Ryan is the director of the Magdalena Ridge Observatory in New Mexico.
COURTESY PHOTO

Then, in mid-February, MRO and all ground-based observatories came up against an unsolvable problem: The full moon was out, shining so brightly that it blinded any telescope that dared point at the night sky. “During the full moon, the observatories couldn’t observe for a week or so,” says NEOCC’s Fenucci. To most of us, the moon is a beautiful silvery orb. But to astronomers, it’s a hostile actor. “We abhor the moon,” says Denneau. 

All any of them could do was wait. Those tracking 2024 YR4 vacillated between being animated and slightly trepidatious. The thought that the asteroid could still stand a decent chance of impacting Earth after it faded from view did weigh a little on their minds. 

Nevertheless, Farnocchia maintained his characteristic sangfroid throughout. “I try to stress about the things I can control,” he says. “All we can do is to explain what the situation is, and that we need new data to say more.”

February 18, 2025

CENTER FOR NEAR-EARTH OBJECT STUDIES, CALIFORNIA 

As the full moon finally faded into a crescent of light, the world’s largest telescopes sprang back into action for one last shot at glory. “The dark time came again,” says Hainaut, with a smile.

New observations finally began to trickle in, and Sentry, Aegis, and NEODyS readjusted their forecasts. It wasn’t great news: The odds of an Earth impact in 2032 jumped up to 3.1%, officially making 2024 YR4 the most dangerous asteroid ever discovered.

This declaration made headlines across the world—and certainly made the curious public sit up and wonder if they had yet another apocalyptic concern to fret about. But, as ever, the asteroid hunters held fast in their prediction that sooner or later—ideally sooner—more observations would cause those impact odds to drop. 

“We kept at it,” says Ryan. But time was running short; they started to “search for out-of-the-box ways to image this asteroid,” says Fenucci. 

Planetary defense researchers soon realized that 2024 YR4 wasn’t too far away from NASA’s Lucy spacecraft, a planetary science mission making a series of flybys of various asteroids. If Lucy could be redirected to catch up to 2024 YR4 instead, it would give humanity its best look at the rock, allowing Sentry and company to confirm or dispel our worst fears. 

Sadly, NASA ran the numbers, and it proved to be a nonstarter: 2024 YR4 was too speedy and too far for Lucy to pursue. 

It was really starting to look as if JWST would be the last, best hope to track 2024 YR4. 


A CHANGE OF FATE

February 19, 2025

VERY LARGE TELESCOPE, CHILE and MAGDALENA RIDGE OBSERVATORY, NEW MEXICO

Just one day after 2024 YR4 made history, the VLT, MRO, and others caught sight of it again, and Sentry, Aegis, and NEODyS voraciously consumed their new data. 

The odds of an Earth impact suddenly dropped to 1.5%.

Astronomers didn’t really have time to react to the possibility that this was a good sign—they just kept sending their observations onward.

February 20, 2025

SUBARU TELESCOPE, HAWAII

Yet another observatory had been itching to get into the game for weeks, but it wasn’t until February 20 that Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, sitting atop Mauna Kea, finally caught 2024 YR4 shifting between the stars. He added his data to the stream.

And all of a sudden, the asteroid lost its lethal luster. The odds of its hitting Earth were now just 0.3%. 

At this point, you might expect that all those tracking it would be in a single control room somewhere, eyes glued to their screens, watching the odds drop before bursting into cheers and rapturous applause. But no—the astronomers who had spent so long observing this asteroid remained scattered across the globe. And instead of erupting into cheers, they exchanged modestly worded emails of congratulations—the digital equivalent of a nod or a handshake.

Dr. Tsuyoshi Tera at a workstation with many monitors
In late February, data from Tsuyoshi Terai, an astronomer at Japan’s Subaru Telescope, which sits atop Mauna Kea, confirmed that 2024 YR4 was not so lethal after all.
NAOJ

“It was a relief,” says Terai. “I was very pleased that our data contributed to put an end to the risk of 2024 YR4.” 

February 24, 2025

INTERNATIONAL ASTEROID WARNING NETWORK

Just a few days later, and thanks to a litany of observations continuing to flood in, IAWN issued the all-clear. This once-ominous asteroid’s odds of inconveniencing our planet were at 0.004%—one in 25,000. Today, the odds of an Earth impact in 2032 are exactly zero.

“It was kinda fun while it lasted,” says Denneau. 

Planetary defenders may have had a blast defending the world, but these astronomers still took the cosmic threat deeply seriously—and never once took their eyes off the prize. “In my mind, the observers and orbit teams were the stars of this story,” says Fast, NASA’s acting planetary defense officer.

Farnocchia shrugs off the entire thing. “It was the expected outcome,” he says. “We just didn’t know when that would happen.”

Looking back on it now, though, some 2024 YR4 trackers are allowing themselves to dwell on just how close this asteroid came to being a major danger. “It’s wild to watch it all play out,” says Denneau. “We were weeks away from having to spin up some serious mitigation planning.” But there was no need to work out how to save the world. It turned out that 2024 YR4 was never a threat to begin with—it just took a while to check. 

And these experiences in handling a dicey space rock will only serve to make the world a safer place to live. One day, an asteroid very much like 2024 YR4 will be seen heading straight for Earth. And those tasked with tracking it will be officially battle-tested, and better prepared than ever to do what needs to be done.


A POSTSCRIPT

March 27, 2025

JAMES WEBB SPACE TELESCOPE, DEEP SPACE, ONE MILLION MILES FROM EARTH

But the story of 2024 YR4 is not quite over—in fact, if this were a movie, it would have an after-credits scene.

After the Earth-impact odds fell off a cliff, JWST went ahead with its observations in March anyway. It found that 2024 YR4 was 200 feet across—so large that a direct strike on a city would have proved horrifically lethal. Earth just didn’t have to worry about it anymore. 

But the moon might. Thanks in part to JWST, astronomers calculated a 3.8% chance that 2024 YR4 will impact the lunar surface in 2032. Additional JWST observations in May bumped those odds up slightly, to 4.3%, and the orbit can no longer be refined until the asteroid’s next Earth flyby in 2028. 

“It may hit the moon!” says Denneau. “Everybody’s still very excited about that.” 

A lunar collision would give astronomers a wonderful opportunity not only to study the physics of an asteroid impact, but also to demonstrate to the public just how good they are at precisely predicting the future motions of potentially lethal space rocks. “It’s a thing we can plan for without having to defend the Earth,” says Denneau.

If 2024 YR4 is truly going to smash into the moon, the impact—likely on the side facing Earth—would unleash an explosion equivalent to hundreds of nuclear bombs. An expansive crater would be carved out in the blink of an eye, and a shower of debris would erupt in all directions. 

None of this supersonic wreckage would pose any danger to Earth, but it would look spectacular: You’d be able to see the bright flash of the impact from terra firma with the naked eye.

“If that does happen, it’ll be amazing,” says Denneau. It will be a spectacular way to see the saga of 2024 YR4—once a mere speck on his computer screen—come to an explosive end, from a front-row seat.

Robin George Andrews is an award-winning science journalist and doctor of volcanoes based in London. He regularly writes about the Earth, space, and planetary sciences, and is the author of two critically acclaimed books: Super Volcanoes (2021) and How to Kill An Asteroid (2024).

Producing tangible business benefits from modern iPaaS solutions
https://www.technologyreview.com/2025/07/07/1119383/producing-tangible-business-benefits-from-modern-ipaas-solutions/
Mon, 07 Jul 2025

When a historic UK-based retailer set out to modernize its IT environment, it was wrestling with systems that had grown organically for more than 175 years. Prior digital transformation efforts had resulted in a patchwork of hundreds of integration flows spanning cloud, on-premises systems, and third-party vendors, all communicating across multiple protocols.

The company needed a way to bridge the invisible seams stitching together decades of technology decisions. So, rather than layering on yet another patch, it opted for a more cohesive approach: an integration platform as a service (iPaaS) solution—a cloud-based ecosystem that enables smooth connections across applications and data sources. By going this route, the company reduced the total cost of ownership of its integration landscape by 40%.

The scenario illustrates the power of iPaaS in action. For many enterprises, iPaaS turns what was once a costly, complex undertaking into a streamlined, strategic advantage. According to Forrester research commissioned by SAP, businesses modernizing with iPaaS solutions can see a 345% return on investment over three years, with a payback period of less than six months.

Agile integration for an AI-first world

In 2025, the business need for flexible and friction-free integration has new urgency. When core business systems can’t communicate easily, the impacts ripple across the organization: Customer support teams can’t access real-time order statuses, finance teams struggle to consolidate data for monthly closes, and marketers lack reliable insights to personalize campaigns or effectively measure ROI.

A lack of high-quality data access is particularly problematic in the AI era, which depends on current, consistent, and connected data flows to fuel everything from predictive analytics to bespoke AI copilots. To unleash the full potential of AI, enterprises must first solve for any bottlenecks that prevent information from flowing freely across their systems. They must also ensure data pipelines are reliable and well-governed; when AI models are trained on inconsistent or outdated data, the insights they generate can be misleading or incomplete—which can undermine everything from customer recommendations to financial forecasting.

iPaaS platforms are often well-suited for accomplishing this across dynamic, distributed environments. Built as cloud-native, microservices-based integration hubs, modern iPaaS platforms can scale rapidly, adapt to changing workloads, and support hybrid architectures without adding complexity. They also help simplify the user experience for everyday business users via low-code functionalities that allow both technical and non-technical employees to build workflows with simple drag-and-drop or click-to-configure interfaces.

This self-service model has practical, real-world applications across business functions: For instance, customer service agents can connect support ticketing systems with real-time inventory or shipping data, finance departments can link payment processors to accounting software, and marketing teams can sync CRM data with campaign platforms to trigger personalized outreach—all without waiting for IT to come to the rescue.
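The kind of self-service flow described above can be pictured as a simple join between two systems. The sketch below is illustrative only: every system, function, and field name is a hypothetical stand-in for the connectors a low-code iPaaS tool would generate, not any vendor's actual API.

```python
def fetch_ticket(ticket_id):
    # Hypothetical stand-in for a ticketing-system connector
    # (in practice, a REST call behind a drag-and-drop step).
    return {"id": ticket_id, "order_id": "ORD-1001", "status": "open"}

def fetch_shipping_status(order_id):
    # Hypothetical stand-in for a shipping-provider connector.
    return {"order_id": order_id, "carrier_status": "in transit"}

def enrich_ticket(ticket_id):
    """Join ticket data with live order data so an agent sees both at once."""
    ticket = fetch_ticket(ticket_id)
    shipping = fetch_shipping_status(ticket["order_id"])
    return {**ticket, "carrier_status": shipping["carrier_status"]}

print(enrich_ticket("TCK-42"))
```

The value of the low-code model is that a support agent can wire together the equivalent of `enrich_ticket` from prebuilt connectors, without writing this code by hand.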

Architectural foundations for fast, flexible integration

Several key architectural elements make the agility associated with iPaaS solutions possible:

  1. API-first design that treats every connection as a reusable service
  2. Event-driven capabilities that enable real-time responsiveness
  3. Modular components that can be mixed and matched to address specific business scenarios

These principles are central to making the transition from “spaghetti architecture” to “integration fabric”—a shift from brittle point-to-point connections to intelligent, policy-driven connectivity that spans multidimensional IT environments.
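To make the event-driven, modular idea concrete, here is a minimal in-process sketch (not any vendor's product) of the "integration fabric" pattern: events are published to a small broker, and reusable handlers subscribe by topic, so new integrations are added without touching existing ones.

```python
from collections import defaultdict

class EventBus:
    """A toy event broker illustrating event-driven, modular integration."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Each handler is a reusable integration component.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every interested handler.
        return [handler(event) for handler in self._subscribers[topic]]

bus = EventBus()
bus.subscribe("order.created", lambda e: f"sync {e['id']} to ERP")
bus.subscribe("order.created", lambda e: f"notify CRM about {e['id']}")

print(bus.publish("order.created", {"id": "ORD-7"}))
```

Because the handlers never reference each other, onboarding a new system is just another `subscribe()` call against an existing topic—the reuse-over-rebuild property the "integration fabric" shift describes.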

This approach means that when a company wants to add a new application, onboard a new partner, or create a new customer experience, it can tap into existing integration assets rather than start from scratch, which can lead to dramatically faster deployment cycles. It also helps enforce consistency and, in some cases, security and compliance across environments (role-based access controls and built-in monitoring capabilities, for example, can allow organizations to apply standards more uniformly).

Further, studies suggest that iPaaS solutions enable companies to unlock new revenue streams by integrating previously siloed data and processes. Forrester research found that organizations adopting iPaaS solutions stand to generate nearly $1 million in incremental profit over three years by creating new digital services, improving customer experiences, and automating revenue-generating processes that were previously manual.

Where iPaaS is headed: convergence and intelligence

All this momentum is perhaps one of the reasons why the global iPaaS market, valued at approximately $12.9 billion in 2024, is projected to reach more than $78 billion by 2032—with growth rates exceeding 25% annually.

This trajectory is contingent on two ongoing trends: the convergence of integration capabilities into broader application development platforms, and the infusion of AI into the integration lifecycle.

Today, the boundaries between iPaaS, automation platforms, and AI development environments are blurring as vendors create unified solutions that can handle everything from basic data synchronization to complex business processes. 

AI and machine learning capabilities are also being embedded directly into integration platforms. Soon, features like predictive maintenance of integration flows or intelligent routing of data based on current conditions are likely to become table stakes. Already, integration platforms are becoming smarter and more autonomous, capable of optimizing themselves and, in some cases, even initiating self-healing actions when problems arise.

At the same time, this shift is transforming how businesses think about integration as a dynamic enabler of AI strategy. In the near future, robust integration frameworks will be essential to operationalize AI at scale and feed these systems the rich, contextual data they need to deliver meaningful insights.

Building integration as competitive advantage

In addition to the retail modernization story detailed earlier, a few more real-world examples highlight the potential of iPaaS:

  • A chemicals manufacturer migrated 363 legacy interfaces to an iPaaS platform and now spins up new integrations 50% faster.
  • A North American bottling company reduced integration runtime costs by more than 50% while supporting 12 legal entities on a single cloud ERP instance through common APIs.
  • A global shipping-technology firm connected its CRM and third-party systems via cloud-based iPaaS solutions, enabling 100% touchless order fulfillment and a 95% cut in cost centers after a nine-month rollout in its first region.

Taken together, these examples make a compelling case for integration as strategy, not just infrastructure. They reflect a shift in mindset, where integration is democratized and embedded into how every team, not just IT, gets work done. Companies that treat integration as a core capability versus an IT afterthought are reaping tangible, enterprise-wide benefits, from faster go-to-market timelines and reduced operational costs to fully automated business processes.

As AI reshapes business processes and customer expectations continue to climb, enterprises are realizing that integration architecture determines not only what they can build today, but how quickly they can adapt to whatever comes tomorrow.

Learn more on the MIT Technology Review Insights and SAP Modern integration for business-critical initiatives content hub.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
