

If you’re new here, this is the latest edition of the Build Mode newsletter, where we gather the collective wisdom of the people building with AI, designing the future of work, and leading the most important companies of the next decade. Subscribe here to get the top insights in your inbox every week.

THE BIG IDEA

The Dual Challenge of AI Scaling

Two colliding forces are driving the AI race: the relentless pursuit of more powerful models and the economic challenges of sustaining this growth.

Traditionally, the AI community has focused on scaling as the primary path to improved LLM performance. This approach led to the emergence of what Ethan Mollick terms "generational" models, each requiring an order of magnitude more resources than its predecessor. We've seen the progression from Gen1 models like GPT-3.5 to Gen2 models like GPT-4, with Gen3 models on the horizon, potentially requiring billions of dollars to train.

OpenAI's new o1 models, however, introduce another dimension to this scaling paradigm: the power of "thinking." These models demonstrate that allocating more computing power to inference — the process of generating responses — can lead to significant improvements in performance, especially for complex reasoning tasks. This "thinking" approach allows o1 to break down problems into steps and self-correct, mimicking human cognitive processes more closely than ever before.
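To make the "thinking" trade-off concrete, here is a minimal, purely illustrative sketch of one inference-time scaling strategy: best-of-n sampling with a verifier. The generate_reasoning sampler and score_answer verifier below are hypothetical stand-ins (no real model is called), and this is not a description of how o1 works internally; it simply shows why spending more compute at inference time can buy better answers.

```python
import random
from typing import Callable, Tuple

def solve_with_thinking(
    question: str,
    generate_reasoning: Callable[[str], Tuple[str, str]],  # hypothetical sampler: returns (reasoning chain, answer)
    score_answer: Callable[[str, str], float],             # hypothetical verifier: higher score = more plausible answer
    n_samples: int = 8,                                     # more samples = more inference-time compute
) -> str:
    """Toy best-of-n 'thinking': sample several reasoning chains, keep the best-scoring answer."""
    best_answer, best_score = "", float("-inf")
    for _ in range(n_samples):
        chain, answer = generate_reasoning(question)
        score = score_answer(question, answer)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

# Stub sampler and verifier so the sketch runs end to end without any real model.
def fake_sampler(question: str) -> Tuple[str, str]:
    answer = str(random.choice([40, 41, 42, 43]))
    return (f"step-by-step reasoning for {question!r}", answer)

def fake_verifier(question: str, answer: str) -> float:
    return 1.0 if answer == "42" else 0.0

print(solve_with_thinking("What is 6 x 7?", fake_sampler, fake_verifier))
```

With n_samples set to 1 this collapses to a single greedy answer; raising it trades extra inference compute for a better chance of landing on a correct one, which is exactly the cost-versus-quality tension discussed next.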

But o1 models come with their own set of challenges. They may struggle with simpler tasks, and the bigger concern is that they are significantly more expensive to run than their predecessors, with inference-heavy computational demands driving up costs.

This is a problem because the AI industry faces a significant investment gap — the disparity between massive investments in AI infrastructure and the actual revenue being generated by AI companies. This gap has grown from an estimated $200 billion in September 2023 to about $600 billion by mid-2024, tripling in less than a year.

What’s causing this widening gap?

  1. Oversupply of Infrastructure: Major tech companies have heavily invested in AI hardware, particularly GPUs, leading to growing stockpiles.
  2. Revenue Concentration: A small number of companies, notably OpenAI, dominate AI revenue generation, while many AI startups struggle to achieve significant revenue.
  3. Speculative Investments: Companies are building AI infrastructure in anticipation of future demand, rather than based on current customer needs.

These challenges raise concerns about the economic viability of current AI investments and the potential for a bursting AI bubble. On the bright side, an oversupply of GPUs could also drive down computing costs, potentially benefiting long-term innovation and startups.

As we move forward, success will depend not just on technological breakthroughs, but on the industry's ability to navigate these complex economic challenges. The industry will need to grapple with the escalating costs and computational demands of these advancements, as well as the challenge of bridging the revenue gap to justify massive investments.

And with AI models becoming more resource-intensive, it's more crucial than ever for CTOs and CPOs to align strategically, ensuring that today's investments in AI infrastructure can support the growing demands of tomorrow's generative models. Businesses should also plan for potential cost escalations, since AI companies may pass their rising operational expenses on to customers. That means evaluating not only the immediate benefits of adopting advanced AI, but also developing flexible AI strategies that can adapt to the very real possibility of rising AI service costs.
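As a back-of-the-envelope way to pressure-test that flexibility, here is a minimal sketch that projects annual inference spend under an assumed rate of price escalation. Every figure in it (token volume, starting price, growth rate) is a made-up placeholder for illustration, not published pricing from any provider.

```python
def project_inference_spend(
    monthly_tokens: float,        # assumed tokens consumed per month
    price_per_million: float,     # assumed starting price in USD per 1M tokens
    annual_price_growth: float,   # e.g. 0.25 means prices rise 25% per year
    years: int = 3,
) -> list:
    """Project yearly inference spend if providers pass rising compute costs on to customers."""
    spend = []
    price = price_per_million
    for _ in range(years):
        yearly_cost = monthly_tokens * 12 * price / 1_000_000
        spend.append(round(yearly_cost, 2))
        price *= 1 + annual_price_growth
    return spend

# Example: a hypothetical 500M tokens/month at $10 per 1M tokens, with prices rising 25% per year.
print(project_inference_spend(500_000_000, 10.0, 0.25))
# [60000.0, 75000.0, 93750.0]
```

Plugging in a negative growth rate (say, if a GPU oversupply pushes prices down) shows how wide the realistic planning range is, which is the whole point of keeping the strategy flexible.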

The race for more capable AI is far from over — in fact, it's entering a new, more nuanced phase that could reshape our world in new ways we're only beginning to imagine, provided the industry can navigate the economic hurdles ahead.

CHART OF THE WEEK

The Effects of Generative AI on High Skilled Work

One field where AI has already made significant inroads is software development. AI-based coding assistants, such as GitHub Copilot, have gained widespread adoption, offering a glimpse into the potential future of human-AI collaboration. These tools use advanced machine learning models to suggest code completions, serving as an intelligent partner in the development process. But for many companies, the real question is whether these tools deliver measurable productivity gains.

A recent study conducted by researchers from Microsoft, GitHub, and MIT set out to answer this question. They analyzed data from three randomized controlled trials at major companies: Microsoft, Accenture, and an anonymous Fortune 100 electronics manufacturer. These weren't simulated experiments. They were real-world trials conducted as part of normal business operations. The results?

  1. Programmers using GitHub Copilot completed 26.08% more tasks than those who did not.
  2. Notably, the quality of code produced with AI assistance was on par with that produced without it, addressing concerns about potential decreases in code quality from using AI tools.
  3. The use of AI tools improved productivity across different skill levels, potentially narrowing the gap between junior and senior developers — hinting at AI's potential to democratize high-skilled work.

And given that the study relied on GPT-3.5, today's more advanced models are likely to make the impact even more profound.

CLIENT SPOTLIGHT

How ianacare Built a Federally-Recognized Caregiving Platform with A.Team

More than 90% of care occurs at home, rather than in hospitals. Yet the vast majority of family caregivers (54 million+ in the US alone) receive zero support, training, or compensation. This stark reality drove ianacare to revolutionize at-home care support. But they soon faced an unexpected hurdle: their mobile app was leaving behind the very people they aimed to help.

ianacare's VP of Product, Faria Hassan, explained: "We started to see that if one member of a care team couldn't use the mobile app, the whole team left." They needed to create a solution accessible to both tech-savvy millennials and older adults with varying levels of technological proficiency.

Struggling to find a qualified product designer through traditional channels, ianacare turned to A.Team. They quickly found their "superstar" in an A.Team builder named JP, who not only created mind-blowing design concepts within two weeks but also introduced a unified design system that streamlined their entire process.

The results:

  1. User retention improved dramatically, with age or accessibility-related deactivation requests dropping to zero.
  2. The new web app enhanced engagement across generations and improved ADA compliance.
  3. Sales demos became more effective with the ability to showcase the product on larger screens.
  4. ianacare's expanded capabilities enabled them to participate in a significant federal program aimed at supporting Alzheimer's and dementia patients.

As Hassan put it, “We wouldn’t have attempted the CMS program with just a simple mobile app.” Now, Hassan and the ianacare team want to bring JP on full-time because of the initiative he took: “He didn’t just act like a contractor. He jumped in with both feet and went above and beyond.”

Read the Full Story

EVENTS

Mind the Talent Gap: How Blended Teams Are Driving AI Innovation

In survey after survey, business leaders say that a lack of talent and strategy are the top two impediments to deploying new GenAI initiatives. It’s a hard problem to solve: Hiring full-time is time-consuming and risky. Management consultants are often expensive and slow. And GenAI transformation isn’t a problem you can outsource.

But there’s a third way that the world’s top innovation leaders are quickly embracing: building "blended teams" of freelancers and full-timers that infuse their core team with specialized product and engineering talent.

Join us on October 10th for an exclusive webinar where we’ll reveal their secrets to success.

What to expect:

  1. Learn how to drive forward a high-ROI strategy and close the AI talent gap from innovation leaders like Jim Spare, SVP and GM of IDC, and AJ Thomas, former Google X Chaos Pilot.
  2. Hear tips for attracting top AI talent to your team — straight from the most sought-after AI talent in the world.
  3. Get first access to The Blended Team's Playbook, A.Team's expert guide to successfully bridging the AI talent gap, packed with real-world case studies and proven strategies.

Don’t miss out on the strategies that could transform the future of your AI initiatives.

Sign Up Now

DISCOVERY ZONE

Turn your academic papers into engaging podcasts using Google’s Illuminate.


