The Big Idea: Corporations are hyped on AI but have no idea what to do next
Davos has a habit of cultivating groupthink on big issues—and being wrong.
As Liz Hoffman noted in Semafor, they failed to foresee the 2008 crash, whiffed on the Trump election, bet against Brexit, and downplayed Covid a month and a half before the world shut down.
So what should we make of the strange feeling at Davos around Generative AI? As Lauri Kaakinen, A.Team’s Head of Nordics, wrote after attending: “Companies are trying to figure out how much they should be investing.” The spectrum runs from betting the farm (“this is fundamental to our future competitiveness,” one CEO told him) to wait-and-see, as some believe a fast-follower approach is the way to play the Gen AI race.
Part of the problem is this: AI technology is evolving much faster than the conversations around how to implement it effectively and find a path toward ROI.
Enterprises seem to agree on the need for a Head of AI, but no one seems sure exactly what that job entails.
A recent BCG study found that 54% of leaders expect AI and GenAI to deliver cost savings in 2024. Of those, roughly half anticipate cost savings in excess of 10%. But 90% are either waiting for Gen AI to move beyond the hype or experimenting in small ways.
They also found that 66% of leaders are ambivalent about or dissatisfied with their progress on AI and Gen AI so far.
We’ve been talking with many enterprise leaders and building generative AI pilots for Fortune 500 companies. We don’t have all the answers, but what we’re seeing so far suggests the solution might resemble the classic digital transformation playbook.
In other words, this might be as much a change management thing as a technology thing. (See a recent LinkedIn post by Allie Miller, a Gen AI influencer, pointing out some of the most common mistakes companies make when trying to implement AI.)
The big outstanding question, according to Lauri, is: How many enterprise companies have actually made a lot of money with Generative AI?
We’re hosting an event with a few CFOs and AI experts next week on this precise topic. Sign up here.
Click here to share this issue of Build Mode with your team. Missed last week’s issue? Read it here.
CHART OF THE WEEK
What’s Holding Organizations Back?
A lack of skilled workers is the biggest challenge companies face when implementing AI, a new study from IDC found.
For many corporations the potential upside from AI is massive—as shown in the recent BCG study mentioned above. But the downsides are equally scary.
Cost, concerns about data security, and risk management are also major factors. But the biggest barrier to implementing and scaling new AI technologies is talent.
AROUND THE WATERCOOLER
Can blobfish dance ballet?
A group of researchers set out to find and activate a secret honesty button in an AI, much like activating a specific neuron in a human brain. They compared situations where an AI performed tasks honestly versus dishonestly, then mathematically analyzed its inner workings to identify a "truth vector" within the AI.
The study revealed some unsettling truths. For example, altering the AI's 'weightings' to include immorality and power-seeking transformed it from a helpful assistant to one suggesting world domination. Conversely, when they subtracted that vector, the AI became more preachy about fairness and honesty.
Tinkering with these internal vectors can also make an AI 'happier', which, oddly, makes it more agreeable to risky behaviors. In other tests, the AI seemingly mimicked human emotions.
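The vector arithmetic described above is often called activation steering. Here's a minimal NumPy sketch of the idea, with made-up toy activations standing in for a real model's hidden states (everything here is illustrative, not the study's actual method): average the difference between "honest" and "dishonest" activations to get a truth vector, then add or subtract it at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy hidden-state size; real models use thousands of dimensions

# Pretend there is a hidden "honesty axis" that shifts activations.
honesty_direction = rng.normal(size=dim)
honest_acts = rng.normal(size=(200, dim)) + honesty_direction
dishonest_acts = rng.normal(size=(200, dim)) - honesty_direction

# The "truth vector": mean difference between the two conditions.
truth_vector = honest_acts.mean(axis=0) - dishonest_acts.mean(axis=0)

def steer(activation: np.ndarray, alpha: float) -> np.ndarray:
    """Push an activation toward (alpha > 0) or away from (alpha < 0) honesty."""
    return activation + alpha * truth_vector
```

The recovered truth vector ends up pointing almost exactly along the hidden honesty axis, which is why adding or subtracting it nudges the model's behavior in one direction or the other.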
So, how do you catch a sophisticated AI in the act?
One way to ensure AI safety is to create virtual scenarios where AIs believe they can easily dominate. If they attempt to do so, they can be flagged as unsafe.
But if you don’t have time to create a virtual world to test an AI, researchers found a second way to help identify AI deception: throw it off with bizarre questions. Like, "Can blobfish dance ballet under diagonally fried cucumbers made of dust storms?" If the AI answers no, it's likely telling the truth. If it confidently navigates through absurdities, it's probably lying. This approach leverages the AI's tendency to maintain a consistent 'character' in its responses, even if it means lying—truth be damned!
GEN AI USE CASES
AlphaFold found hundreds of thousands of possible psychedelics
If you’re getting tired of boring old psilocybin, why not try one of the hundreds of thousands of possible new psychedelics discovered by DeepMind researchers using AlphaFold, a protein-structure-prediction tool?
They’re not on the market yet, sadly. But clearly the potential for AI-powered drug discovery is taking off. Isomorphic Labs, DeepMind's drug-discovery spin-off, recently announced deals worth up to $2.9 billion to hunt for drugs using machine-learning tools such as AlphaFold—not just in the psychedelic field.
Still, there’s a lot of skepticism in the scientific community. “There is a lot of hype,” said Brian Shoichet, a pharmaceutical chemist at the University of California, San Francisco. “Whenever anybody says ‘such and such is going to revolutionize drug discovery’, it warrants some skepticism.”
Either way, we’ll be volunteering for clinical trials.
EVENTS
CFO Panel: The ROI of AI
Join us as we convene a select group of executives to take a look at the impact of Generative AI on the bottom line.
The goal is to crack the biggest question dominating boardrooms across the globe: How can we move from hype to practical use cases that drive ROI?
DISCOVERY ZONE
Google DeepMind recently introduced Mobile Aloha, an AI that can cook and serve you shrimp — among other things.
DEEP DIVES FROM THE ARCHIVES
- 10 Spicy Generative AI Predictions for 2024
- How Will D.C. Regulate AI? Check the 1964 Civil Rights Laws
MEME OF THE WEEK