
Former OpenAI Exec Zack Kass on How Companies Can Get AI Right

Too often, pilots are just “pet projects” in disguise.

Long before ChatGPT rocked the world, Zack Kass was bullish on AI. 

Kass started his career during the “AI winter.” Fresh out of Berkeley in 2009 during the global financial crisis, Kass landed at CrowdFlower (later Figure Eight), the first human data labeling company at scale, founded by former Yahoo search scientists.

After a decade in leadership roles at Silicon Valley AI companies, he found himself at an upstart research lab called OpenAI as one of its first 100 employees. He led the company’s go-to-market teams as it became the fastest-growing tech platform in history and spearheaded the first successful enterprise GenAI applications with partners like Morgan Stanley. 

Over the past few years, Kass has had a front-row seat to the challenges enterprise tech leaders face in adopting and adapting to this new technology. Since leaving OpenAI, he has traveled the globe speaking to corporate teams and helping Fortune 500 leaders design their AI strategies as Founder and CEO of ZKAI Advisory.

In late December, we sat down with Kass to learn how enterprise tech leaders can accelerate their AI transformation, how to avoid being surprised by the next big update, and why the companies with the most to gain from AI aren’t what you think.  

Q: Our recent research found that only one-third of companies have gotten past the prototyping phase in their AI development. What do you see as the main barriers to progress?

Kass: If you zoom in on these problems, you'll find that people are either overcomplicating or not understanding the actual technical challenge, which is no longer an ML challenge; it's an application challenge. Elegant software becomes critical in a world where you're adding a bunch of moving parts, and an LLM is like one massive moving part.

If you zoom out one layer, you'll find that people are building things and not aligning them to business value, so their manager can't make a case for their spending, and their manager's manager isn't even aware of what's going on. That kind of pilot should never have been called a pilot because, if anything, it was just a pet project. There was probably never a business case attached to it.

If you zoom out one whole other layer at the executive level, you discover that most companies aren't actually considering how they will fundamentally need to change. The corporate strategy at this point isn't actually accommodating these pilots. Most Fortune 1000s self-report that they set a five-year strategy plan, so depending on where you are in that cycle, you may have built a strategy plan before anyone had even heard of this technology — and you certainly built it before GPT-4 came out.

Q: Do you see this primarily as a people/team problem, as opposed to technical limitations or infrastructure challenges?

Kass: Given how good the technology has gotten and how inexpensive it's becoming on an inference basis now, it's really hard to argue that this is a technical problem anymore. Obviously, there are a bunch of antiquated, outdated systems that make it hard to install this—that's a system and engineering problem, and I don't want to neglect that.

But if you explore why most companies are out in front or not, it's not because the technology doesn't do what they need it to. It's because either a company leader has made a case that they shouldn't, or they haven't made sense of the policy, or they can't actually figure out who should own it.

Times of incredible change inspire a lot of fight or flight, and that's not a state that inspires a whole lot of clear thinking. If you work at a many-thousand-person company, especially in a more traditional sector, there's a high probability that many people in the company are just scared, and that is not a technology problem.

Q: Say I'm CEO of a Fortune 100. I’ve had some pet projects deployed, and they’re not really mapped to a business use case. So going into 2025, I’m wondering how I reset and get out of the AI mess that I feel like I'm in. What would you recommend?

Kass: So from an advisory, consultancy standpoint, we do an assessment. We'll show up at the business and try to talk to every stakeholder attached to AI, most of the tech teams, and figure out what is happening inside the company: what models are being used, what's being deployed, and what people are trying to solve. For a lot of CEOs, that audit alone is helpful, just to be able to say, "Okay, here's what's going on inside of this company."

And then we actually go and do a top-down strategy session with our clients. My job is not to say, "Look, I'm an expert in your industry." At this point, I'm pretty dangerous in most industries, but my job is to propose that the problems inside their company don't need to exist anymore. I just try to be a very, very ambitious mirror. 

The other thing we tell clients is that they need a way for the leadership team and middle management to stay abreast of the science. Don't be caught off guard when speech streaming arrives; the papers have been out there for six months. Let's not create a world of surprises in a company where people are jumping to and fro because some manager says, "Holy shit, this could break everything."

Q: Are there any larger companies that are getting it right? And if so, is there anything we can learn from them?

Kass: I still think Morgan Stanley is an incredible leader. Andy Saperstein came to OpenAI two and a half years ago and said, "Hey, I want to make my wealth managers 50% more productive, because the only way I can expand wealth management capacity is to hire a bunch more people, which dilutes the quality, or make the current team more productive."

And they've done that in some really interesting ways. They brought the regulators and the policymakers into the room early – so the FDIC and the SEC participated in these application builds. They inspired the team by showing people they could make more money and serve their clients better. They actually showed people that the error rates were lower when people worked with the machine versus not. And critically, they picked a big thing – they went after a big initiative that would have pretty meaningful consequences for the company.

My thesis is that the companies that are most rate-limited by intelligence today are actually the ones sprinting at this. They're saying, "Hey, untethering our dependency on human capital could be the great unlock." Look at bio and life sciences – we don't print oncological researchers on trees. The number of humans who actually qualify for this job is very small, the number in that cohort who want to do the job is smaller, and the number who can actually get the education and the job is smaller still.

Q: What's your take on the conversation around AI scaling limits and fears that we might be hitting a wall in advancement?

Kass: You know, it's funny, my take on this is: who cares? GPT-4 can get us really far. The inference costs have come down so far that even if we hit a wall on training, even if the frontier stops moving, we're fine. The thing I remind people of is: we have 5G, and I don't care if I'm on 5G or 4G. Today 5G is an overpowered network. We will have devices that need that much bandwidth at some point – we really don't today.

No one's fully utilizing the stuff that exists as is. We could be streaming in multiple languages right now. A bunch of stuff exists on a scientific basis that we just haven't even figured out how to commercialize, just because the science has moved too fast. If you told me GPT-5 is going to come out in three or four or five years, I'd be like, "Okay, that's fine."

Q: What's one bold prediction you have for 2025?

Kass: My pessimistic prediction is that we will have a Chernobyl-style event. I don't think it will be equivalent in loss of life, but there will be some terrible PR moment for AI that brings AI bans to the policy forefront. You could imagine people on both sides of the aisle saying, "Hey, you know what? Actually, we should walk this back," which is what we did with nuclear.

On a positive note, I expect that we will have a major bio/life sciences breakthrough because of AI. This is not a very hot take: in March of this year, we split HIV out of DNA because of AI, and last December we discovered an antibiotic with AI for the first time. But I would predict something even bolder: a major breakthrough in neurodegenerative disease or cancer because of AI. Whether it's perfectly attributed or not doesn't really matter to me. There will be some really incredible competing forces on a public perception basis.
