Nick Foster doesn’t call himself a futurist.
Yes, he has worked to envision the future at Nokia, Sony, Dyson and Apple. And as the head of design at Google X — the “moonshot factory” that helped conceive Alphabet’s self-driving car and smart glasses.
But he wants to distance himself from that job title’s connotation: “It feels like there’s a tendency to pigeonhole people who do this work,” as if their purpose were to “communicate an ambition or some marketing video,” he said.
In his view, spelled out in his upcoming book, “Could Should Might Don’t,” mapping a path amid uncertainty and rapid technological development calls for a more rigorous approach. Foster talked with DealBook about what that looks like.
What’s wrong with the way that companies typically handle the future?
It just feels like an underdeveloped skill in almost everybody I’ve ever worked with.
You can be sitting in a meeting with people with Ph.D.s or people who have led billion-dollar companies. And when they’re talking about the here and now, it’s very empirical and detail oriented. There has to be a well-reasoned answer for every question or opinion.
When you start to talk about the future, people just grab “The Matrix” or “The Jetsons” or flying cars, which to me points at a lack of learning and rigor and ability in that domain.
When you’re trying to convince a company that this kind of work is, as you write, “as commercially important as sales, partnerships, investments,” what do you say?
It’s very, very difficult, particularly in a metrics-driven organization, because what I’m essentially talking about is culture. A lot of the work that I’ve done, you can’t point at a product in the market and say, “That’s like that because of what Nick did seven years ago.”
But what I do know is companies that invest and spend time thinking at longer time frames tend to have fewer failings than companies that are just racing to keep up and get ahead.
Do you think the amount of uncertainty coming from Washington — companies not knowing what their input costs will be or what their regulations will be — has forced them to do more future thinking?
I would hope so. There are two ways of responding to uncertainty. One is to say: We don’t know anything, so there’s no point in trying to figure out the future. Let’s just roll with it. The other is to try to get as broad a spread of potential future scenarios out in front of you.
What I call “should” futurism is being certain: using historical trends to project dotted lines into the future, which is how we’ve treated business strategy work in the past.
But I think now, because our world is so volatile, uncertain, complex, saying that dot on a chart exists at 2027 is sort of nonsense — it feels less and less solid.
That dotted line isn’t data, it’s a story.
I understand why it’s tempting for companies to go that route — they spend their lives in spreadsheets. What does it miss?
The world is incredibly volatile. A ship could get stuck in the Suez Canal, Oprah Winfrey could say something awful about your company, and then all of your data is completely irrelevant and wrong.
And genuinely new things typically don’t follow previous trend lines.
In the book, you use Elon Musk’s plan to start a colony on Mars as an example.
Numbers have a sort of solidity to them that makes them feel very rational, even when the things they’re measuring perhaps aren’t.
So Elon Musk, I would assume, would stand up and say it’s not because he’s had some fantasy of living on Mars. If he looks at the statistics and the numbers, it’s numerically justified to do those things.
If you were at that pitch meeting, what might you bring to the table that would be missed by only looking at numbers?
I’m not saying it’s right or wrong, but it doesn’t look laterally enough. What are the other things that we might do, for example? We might colonize the seas. We might create defense systems that stop asteroids hitting the Earth.
All I would do is say: I don’t have any reason to doubt your data, but it is a story. And if that story is going to be useful, it needs to be richer.
You caution against a sort of whiz-bang approach to thinking about the future. That feels particularly hard to avoid when thinking about artificial intelligence.
We tend to look at the future as a place of extremes, typically of either utopian escapist technocratic fantasy futures or dystopian disaster collapse futures. And that’s been going on for a very long time.
So if we were to follow that through from the past to today, the world should be at one of those two endpoints, but we’re not. It’s all in the middle. It is all mundane. We take dogs for walks. We twist our knee. We wear a Band-Aid. Life is sort of ordinary. It’s just that the notion of ordinary shifts.
A.I. is a hugely disruptive technology. But if I were to pick us up and drop us five to 10 years in the future, the ways we actually live with it, the changes would just feel normal.
I find it difficult not to get stuck in thinking about all of the things that could go wrong in the future, what you call “don’t” futurism.
I think it is the role of companies to talk about the future in ways that aren’t just glibly positive. If you say, “This is the future — it’s going to be great,” no one believes you. No one thinks that you’re being serious. No one thinks you’re responsible.
I’d love to see a new breed of executives saying: We want to build a world that looks like this. We’re also aware that we don’t really know everything and there’ll be uncertainties that might trip us up. We could negatively affect these groups of people or these industries or these places, and we’re doing our best to figure that out.
Particularly the tech companies — but it’s not just them — are currently selling a version of the future that is just way too in one corner.
Even the richest companies in the world are just Mad Libbing words: “superintelligence,” “robotics.” OK, so let’s assume there are going to be robots in everyone’s house. Let’s talk about that for five hours. How much are they going to cost? Can I get them refitted? What do they do? What do we do about all those people who do those jobs?
But nobody seems interested in that because there’s a headline that says so-and-so C.E.O. predicts domestic robots by 2027.
IN CASE YOU MISSED IT
Jay Powell opens the door to cutting interest rates. In a closely watched speech at an economic symposium in Jackson, Wyo., the Federal Reserve chair emphasized the labor market’s vulnerabilities even as inflation accelerates.
President Trump threatens to fire a Fed governor. The president, who appears determined to remake the central bank, said he would remove Lisa Cook, a member of the Fed board, if she did not step down after allegations that she falsified records to obtain favorable mortgage terms.
Big retailers report mixed results. Target’s shares sank on Wednesday after it reported falling sales and announced Michael Fiddelke as its new C.E.O. Walmart fared better with its focus on low prices and a grocery business that makes up a large portion of sales, saying it had not seen a pullback by consumers because of tariffs.
The White House says the government will take a 10 percent stake in Intel. That equity share is worth $8.9 billion, making the deal one of the largest government interventions in a U.S. company since the rescue of the auto industry after the 2008 financial crisis.
Other big deals: Elon Musk tried to team up with Mark Zuckerberg to buy OpenAI, a court filing revealed. Meta restructured its A.I. division. Anthropic is said to be looking to raise as much as $10 billion at a roughly $170 billion valuation. And the law firms that struck deals to avoid punitive orders from Trump have committed to do free work for the Commerce Department.
Hot Take: What to teach in the A.I. era
In our series “Hot Take,” we explore out-of-the-box ideas.
Artificial intelligence has prompted corporations, business schools and parents to wonder if any human skills are future-proof. Angus Fletcher, a professor at Ohio State University who studies narrative, believes he has found one.
He calls it “primal intelligence,” the ability to think in stories (including by making plans, which are essentially plots) and the basis for human abilities like intuition, imagination and common sense.
“Machines can’t do it because these are all low-information processes,” Fletcher said. Narrative-driven skills “are all things that evolved in ancient periods of the world when there was a lot of volatility and uncertainty and we didn’t have the sort of large data that we feed into these computers now.”
The key to developing these abilities, he outlines in a book released this week, is getting a lot of practice failing.
How it works: Fletcher has developed a role-playing exercise, based on one he once helped the Army refine, that pushes students to navigate barriers without resorting to anger or giving up. It has been taught to executives, in elementary schools and at Ohio State’s business school.
A practitioner might ask a child in elementary school — in front of the class, to simulate real pressure — what he would do if his peers laughed at his swimsuit. If the student said he would change clothes, the practitioner would coach him that doing so would be giving up. How about telling the classmates why he likes the swimsuit? Turns out, the practitioner might say, his friends don’t care. And so on. (The Army uses quite different scenarios.)
The point is to strengthen the mental muscles that help build and rebuild plans as things are changing quickly.
Will it work? DealBook asked Armando Solar-Lezama, a professor at M.I.T. who studies computer-assisted programming. He said it’s not true that narrative thinking is a human-only skill. “Back in the 1980s, A.I. was based on logic and probability,” he said, but now, A.I. models are “very, very good at narrative. They’re so good at it that they’re actually able to ensnare vulnerable people and convince them that they are their friends.”
While generative A.I. models are trained to simply predict the next word in a sentence, Solar-Lezama said, there’s a tendency to conflate this training method with what is happening inside the model.
“It’s a little bit like saying, well, humans can’t possibly be smart or create art, because they’re just optimized by evolution to produce babies,” he said, adding that A.I. systems can build internal models of how the world works and reason about situations that they haven’t seen before. They can, in other words, come up with a new plan.
His preferred answer for how humans can stay relevant? “Learning new things.”
Teaching an A.I. system a new skill like algebra (which machines are still not very good at) requires hundreds of millions of dollars. Humans don’t always need to put in that level of investment.
Quiz: Race against the machine
The Humanoid Robot Games in Beijing last weekend aimed to test and showcase the latest advances in robotics, which have recently driven a revolution in Chinese manufacturing.
A robot from China’s Unitree Robotics won the gold medal in the 1,500-meter indoor track event. The human who holds the record for that distance indoors, Jakob Ingebrigtsen of Norway, ran it in 3 minutes, 29.63 seconds. What was the robot’s winning time?
A. 3:21.59
B. 6:34.40
C. 10:52.36
D. 24:53.51
Thanks for reading! We’ll see you Monday.
We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.
Quiz answer: B.