Daniel Pink has a habit of being right.
After a grinding tour as Al Gore’s speechwriter in the late '90s, Pink decided to go freelance. Soon, he spotted something percolating beneath the surface of the American economy: a movement of highly skilled “free agents” who were empowered to break free from the traditional 9-to-5 thanks to the rise of broadband, digital platforms, and new communication channels. He spent a year traveling the country to chronicle this phenomenon in his 2002 book, Free Agent Nation: The Future of Working for Yourself.
In many ways, Pink predicted the idea behind A.Team, the company where I work today. We bring together highly skilled “free agent” product builders into teams. Since launching in 2020, A.Team's growth has been propelled by a post-pandemic free-agent nation of top tech workers who want to work with autonomy, building products that interest them, as well as a growing number of companies hungry to work with them.
But Pink's predictive prowess didn't stop there. His 2005 book, A Whole New Mind, predicted that automation and outsourcing would lead the U.S. to value creative right-brain skills over left-brain skills, as left-brain tasks were increasingly handled by machines or outsourced overseas. Re-reading it recently, I was awestruck by how poignantly it foreshadows the impact AI is having on knowledge work today.
While I’ve found myself strangely obsessed with Pink’s early work, he’s produced a series of beloved best-sellers since: Drive (2009), To Sell Is Human (2012), When: The Scientific Secrets of Perfect Timing (2018), and The Power of Regret (2022). That run has made him one of the most iconic writers and thinkers of his generation.
And now, Pink is writing one of my favorite new columns, called “Why Not?”, for The Washington Post. In it, he explores questions like “Should we pay teachers $100,000 per year?” and “Why not require a civics test as a rite of passage for all Americans?”
Last month, I sat down with Pink to discuss his new column, how AI will change the future of work, the ways ChatGPT has impacted his creative process, and more. He's now on my Mount Rushmore of interview subjects; the conversation was so good I decided to break it into two parts.
(Be sure to subscribe to the Build Mode newsletter using the subscribe button at the top of the page, or in the menu bar if you're on mobile, to get part 2 in your inbox next week).
Tell me about the purpose of the “Why Not?” column.
We're always arguing over who's right and who's wrong, and that’s clouding the question of what’s possible. So we decided to do a series of columns for the next year that basically begin with: Why not? Why not pay teachers $100,000 a year? Why not make the citizenship test a rite of passage for all Americans? Why not have billionaires sell sports teams to their fans? Why not give presents on your birthday rather than receive them?
The idea here is to just put these ideas out there and see how people respond, so it's actually a genuine conversation between the writer and the reader. My job in this is to just be the first word. To be a catalyst for possibility. We call it “possibility journalism.”
I love the concept of possibility journalism, which I think you’ve described as emphasizing openness over cynicism. But it also seems like such a bold move on the internet in 2024. So what makes you think it can work?
The fact that I have very thick skin. (Laughs.) I've lost sleep over many things. The comment section is not one of those things.
When we think about opinion journalism, it's very sanctified. There are certain people who are blessed to give their opinion in a hallowed space, and their opinions are the ones that matter the most. I think that's bullshit.
We've gotten over 2,500 ideas from readers. Some of them are pretty interesting. (Holds up a giant stack of papers with reader ideas printed out.) The one that I highlighted here is: Congress should consist of ordinary citizens appointed at random for a specified single, relatively brief term. All right, I'm not sure that's a good idea. But it's interesting.
So jury duty for Congress.
Well put. And the question then becomes: Why not? That could be a worthy thing to explore.
So I've been rereading your 2005 book A Whole New Mind—
God, why?
It's a prescient book for the moment we're in right now! It’s honestly shocking that it was written 20 years ago. Between this and Free Agent Nation, you're starting to create a pattern of your stuff coming back in style, like the early 2000s jeans I see all the NYU students near our office wearing. In a lot of ways, what’s happening with AI right now feels like a hyper-accelerated version of the outsourcing and automation waves you identified in the book as the drivers of the increasing importance of right-brain thinking in a new economy. Would you agree with that?
Yes and no. So just to quickly recap, the argument of that book is that certain kinds of skills are metaphorically consistent with the left hemisphere of the brain: logical, linear analytical skills.
Those skills still mattered, but they were becoming less important because they're easier to outsource and automate, and a certain other set of skills [associated with the right hemisphere of the brain]—artistry, empathy, big picture thinking—were becoming more important because they were harder to outsource and automate.
I think that’s still sort of right, but AI can do some of the right-brain stuff. I did not expect that. The advances were far swifter than I would have imagined.
I’ll give you an example: There’s a line in the book where we’re talking about facial expressions, and I said, “Oh, computers can barely recognize people’s faces, let alone what their expression is.” And now, it can do both extremely well. That's the kind of holistic, emotional processing I thought would be harder to replicate.
What we want to do as human beings is bring skills and aptitudes and values and capacities that augment—rather than compete with—machine intelligence. Certain kinds of creativity. One of the things these large language models are very good at is, “Here’s a blog post that I just wrote; give me 25 clever headlines for it.” It does that reasonably well. Extremely quickly. Now, you have to know what’s a good headline or a bad headline when you select them. [AI is] good at generation; we’re good at taste. For now.
I think taste is important: the ability to evaluate what's good and what's not. And the creativity and AI question is really interesting overall. Because AI passes all these creativity tests, right? It scores in the 90th percentile. Which leads to the question: What is the true nature of human creativity?
AI can maybe tell a formulaic story with three acts. But can it really be a storyteller without shared connection and human experience? I wonder where you land on that: How do you think about what defines creativity? Is AI truly creative or not?
It's a fascinating question. I haven't thought that much about it. The way you put it was very astute and smart. There is something to be said for the accumulated experience of being a human being. I think that is probably going to be harder to replicate: the totality of our experience as human beings. What we’ve seen, what we’ve heard, what emotions we’ve experienced, what ideas we've had, what disappointments we've endured. Talk to me in 20 years, and I’ll probably end up being wrong, but I think that’s hard to replicate.
The question is: How can we use these tools to enhance human capacity? I do think that [AI] will provide a skill shift I didn't expect. That might be the next iteration of A Whole New Mind.
The argument is that we went from the Agricultural Age to the Industrial Age. Agricultural abilities are easy to outsource and automate. Abundance increases, which pushes us into the Industrial Age. The same thing happens again: Physical, repetitive work can be done faster by machines. And we need things other than mass-produced goods, which pushes us into the Information Age. The same thing happens again: We're outsourcing and automating left-brain information-processing skills, which pushes us into what I call the Conceptual Age: a more right-brain age of artistry, empathy, and big-picture thinking.
And so in response to that, 15 years ago, people would ask me, “Okay, what comes after the Conceptual Age?” And I said, “I dunno.” (Laughs.) But I think we might have an answer now: some of those [right-brain, Conceptual Age] capacities are now things we’re able to outsource or automate. That’s going to force us to do different things. I think that taste is a big factor now because these generative models are going to give you lots of options.
It’s just like what you were saying: It can write you a story, or it can write you a haiku. But you have to know what’s a good haiku and what’s not a good haiku. And I think that comes to your point about the accumulated human experience. It’s taste.
It’s also: How do you direct these things? One of the things I’m playing around with is: How do I become a good instructor for this partner that I have now? How do I help that partner help me? I have no idea what’s going to happen. But one of my projections is that things are going to be a little bit better.
Next week: Part 2 of our interview with Daniel Pink, as we explore how AI has changed his creative process, how to develop great taste, and the power of trust. Be sure to hit the subscribe button above to get it in your inbox.