At a Stanford laboratory, researchers discovered something startling about one of the most advanced AI systems: ChatGPT was bad at math.
In March, it correctly identified prime numbers with an almost impeccable 97.6% accuracy. Yet come June, it stumbled to a mere 2.4%. When the study landed, it felt as if a brilliant mathematician had inexplicably forgotten basic arithmetic.
But beyond these inconsistencies lies an even more profound puzzle: our relationship with AI may not really be about its accuracy or reliability at all. A different study—this one from MIT Sloan—found that our perception of AI has just as much to do with our own biases as with the chatbot's output. Researchers showed people content created by ChatGPT, by humans, and by a combination of both. When people knew the content was made by humans, they liked it more. When they didn't know whether it was machine-made or human-made, they tended to prefer the GPT version.
The researchers call this "human favoritism." We profess our love for human creativity, and yet, paradoxically, our hearts tilt toward AI-generated content the moment the labels come off. It says something interesting about how humans operate: We want answers, but we don't always want to confront the messy process behind how those answers were generated.
If the World Economic Forum's prediction of a 39% increase in AI-driven job creation comes to fruition, human and machine content will become harder and harder to distinguish. A recent paper in a top applied linguistics journal found that expert "reviewers were largely unsuccessful in identifying AI versus human writing, with an overall positive identification rate of only 38.9%."
In an age where data is the new oil and AI its refinery, the true challenge for businesses lies not just in harnessing this power, but in presenting it in a way that respects and understands the complex ways that audiences engage with content.
CHART OF THE WEEK
Here’s what workers want from employers when it comes to AI
Research from Charter found that 52% of workers are worried about losing their jobs or being replaced because of AI. But more than that, employees—a solid 62% of them—want clear communication about their company's AI plans as they relate to their roles.
As one respondent put it, “Our company could clearly state how AI will be used and for what purposes. They can also indicate how our team can utilize AI to make their roles more productive for the future.”
The audience is there. As the latest Edelman Trust Barometer found, employees trust employer-provided media more than any other source, including information from other corporations and their own social media feeds. That's a huge opportunity for companies to boldly communicate their vision for how AI will intersect with the future of work.
DEEP DIVE
Adobe's Scott Belsky on AI and the Storytelling Soul
In April, Casey Neistat—one of YouTube’s most prolific vloggers—made an unusual video.
Titled "A Day in Downtown Manhattan," the video featured Neistat riding his electric skateboard around lower Manhattan, unironically taking his audience to tourist traps below 14th Street: the Oculus, Battery Park, the Charging Bull on Wall Street.
"Let's take a quick look inside Brookfield Place, one of my favorite spots in downtown Manhattan," he says, walking into the Battery Park City shopping mall, looking more bewildered than a Swiftie at a Jets game.
For Neistat's fans, the video was disorientingly basic—the vlog version of a pumpkin spice latte.
Once it was over, Neistat addressed the camera and explained what he'd just made: every line of dialogue and every shot had been scripted by GPT-4.
“That was the worst video I ever made,” said Neistat. “That video sucked because it had no humanity. It had no soul.”
Scott Belsky—Adobe’s illustrious Chief Strategy Officer—told this story at the end of his keynote at the Propelify conference this past Thursday. In some ways, it was surprising. Belsky had just spent 15 minutes optimistically explaining how AI would usher in a new era of creativity, creating limitless personalized experiences.
But like many of us, Belsky has conflicting feelings about AI.
Belsky explained that Neistat’s story “resonated with a lot of the feedback that I'm getting from a lot of great creatives that I admire that are using this technology. They are realizing that as good as this technology is, it's really bad at counterintuitive, soulful things. It's bad at things that conjure up emotion. It's bad at things that make us find meaning in something that we didn't expect."
“And so that soulfulness, I think, is something we'll crave more than ever," Belsky continued. "We're going to crave these craft experiences. We’re going to crave storytelling … which goes against a lot of what I just said [earlier in this talk].”
UPCOMING EVENTS
- The Secret to Sourcing AI Talent (Oct 25)
The race for AI talent is fierce, and traditional hiring methods aren't cutting it. We'll dive into the game-changing approaches of companies that are building with AI.
Learn more →
- AI x Future of Work Summit (Nov 30)
We're gathering the brightest minds in AI to explore how generative AI will reshape the way we drive innovation, build teams, and create a more inclusive and humane future of work.
Learn more →
AI DISCOVERY ZONE
There’s An AI For That is a curated database of nearly 9,000 generative AI tools. Hold on a second and let that sink in—9,000 generative AI tools! Our recent favorites include GodMode, which just sounds cool, and the instant classic: Business Idea Generator AI.
MISSION MUST-READS
- Generative AI Can Write, Paint, Sing—But Can It Turn a Profit?
- What the Science of Creativity Tells Us About Writing with Generative AI
MEME OF THE WEEK
Missed last week’s issue of MISSION? Read it here.