The Great Pivot: Why AI’s Profitability and Future Lie in Applications, Not Just Infrastructure

The global artificial intelligence industry stands at a defining inflection point, a moment that separates the era of speculative frenzy from the age of tangible value creation. For the better part of a decade, the narrative and capital have been dominated by an infrastructure arms race: a staggering $320 billion spent in 2025 alone on data centers, advanced semiconductor chips (GPUs), and the training of ever-larger foundation models. This foundational layer was necessary to prove that AI could work at a transformative scale. However, a stark reality has emerged: building the AI engine is not the same as building a profitable AI business. As losses mount at even the most prominent model developers—exemplified by OpenAI’s reported $5 billion loss in 2024 despite $13 billion in revenue—the industry is undergoing a profound and necessary shift. The next investment cycle, and the key to unlocking sustainable profitability, unequivocally belongs to AI applications. The market is moving decisively from wondering “Can it work?” to demanding “Does it solve my problem profitably?”

The Infrastructure Conundrum: A Capital-Intensive Path to Thin Margins

The initial phase of the modern AI boom was characterized by a brute-force approach to capability. Companies like OpenAI, Google (DeepMind), Anthropic, and Meta competed to build the most powerful Large Language Models (LLMs) and multimodal systems, requiring unprecedented computational resources. This created a gold rush for infrastructure providers like NVIDIA (chips), cloud hyperscalers like Microsoft Azure, Amazon AWS, and Google Cloud (compute), and a sprawling ecosystem of data center builders.

However, the business model at this foundational layer is fraught with challenges:

  • Exorbitant Costs: Training frontier models costs hundreds of millions to billions of dollars. More critically, inference costs—the expense of running a trained model to generate answers for users—are a persistent drain on revenue. Every query to ChatGPT or Claude incurs a real compute cost that chips away at margins.

  • Fierce Commoditizing Competition: As open-source models (like those from Meta) and a plethora of competitors emerge, the price for API access to powerful models is being driven down. Differentiation becomes harder, pushing companies towards a costly race for incremental performance gains.

  • Circular Financing: A significant portion of reported “AI revenue” at the infrastructure level is illusory. A prime example is the relationship between Microsoft and OpenAI: a substantial share of Microsoft’s Azure AI revenue comes from OpenAI itself, which spends heavily on compute at discounted rates that may only cover Microsoft’s costs. This creates a closed financial loop that obscures true, external market demand.

The result is an infrastructure layer with “thin profit margins” and a reliance on continuous venture capital and corporate subsidy. It is a necessary, but not sufficient, condition for a healthy AI economy.
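The inference-cost squeeze described above can be made concrete with a minimal unit-economics sketch. The function and every number below are hypothetical illustrations, not any company’s actual pricing or costs:

```python
# A minimal unit-economics sketch: a flat-rate subscription where every
# query incurs a real inference (compute) cost. All figures are invented.

def monthly_margin(price: float, queries: int, cost_per_query: float) -> float:
    """Gross margin fraction for one subscriber in one month."""
    inference_cost = queries * cost_per_query
    return (price - inference_cost) / price

# A $20/month subscriber making 500 queries at $0.01 of compute each
# leaves a 75% gross margin...
print(round(monthly_margin(20.0, 500, 0.01), 2))   # 0.75
# ...but a heavy user making 3,000 queries pushes the margin negative.
print(round(monthly_margin(20.0, 3000, 0.01), 2))  # -0.5
```

The same arithmetic explains why flat-rate AI products tend to drift toward usage caps, tiered pricing, and cheaper distilled models for routine queries.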

The Application Layer Awakens: Proof of Real Demand and Value

In stark contrast, the application layer is demonstrating vibrant, sustainable growth rooted in genuine utility. In 2025, businesses spent $19 billion on AI applications, accounting for over half of all generative AI spending. This represents over 6% of the total software market—a remarkable penetration rate achieved just three years post-ChatGPT.

This spending is not experimental. It is operational. Companies are moving beyond pilots to deploy AI tools that directly impact their workflows and bottom line. The metrics are compelling:

  • Scale of Success: At least 10 AI products now boast over $1 billion in annual recurring revenue (ARR), and 50 products exceed $100 million in ARR. These are not futuristic concepts; they are established software businesses.

  • The Manus Example: The acquisition narrative underscores the shift. In December 2025, Meta purchased the Singaporean startup Manus for $2 billion. Manus had launched its AI agent only nine months prior but had already reached $125 million in ARR. Its success was built on a “simple but effective” product that executed tasks, not just conversed. This deal signaled that investors value proven business success and integration capability as much as, if not more than, pure technological prowess.

  • Investment Reorientation: The capital markets are following the value. By Q3 2025, private equity activity in AI applications surged 65% year-over-year to 265 deals, with 78% being add-on acquisitions for existing portfolios. Strategic M&A in AI hit record highs, with deal values up 242%. Investors are seeking companies with “real customers, not just technology.”

Departmental AI: The Crucible of Near-Term Value

The most significant value creation is happening not in generic chatbots, but in departmental AI—deeply integrated tools designed for specific business functions. The largest segment here is AI coding tools, a $4 billion market within the $7.3 billion departmental AI space in 2025. Adoption is widespread: half of all developers use AI coding tools daily, a figure that rises to 65% in top-performing companies.

These tools, like GitHub Copilot (powered by OpenAI) and Amazon CodeWhisperer, demonstrate the application-layer thesis perfectly. They:

  1. Solve a Clear, Expensive Problem: They directly enhance developer productivity, reducing time spent on boilerplate code, debugging, and documentation.

  2. Integrate into Workflows: They live inside Integrated Development Environments (IDEs), becoming an essential part of the daily toolchain.

  3. Demonstrate ROI: The productivity gains are measurable, justifying their subscription cost.

This pattern repeats in other verticals. When ServiceNow acquired Moveworks (an AI-powered IT support platform) or NVIDIA purchased several AI startups, they were investing in applications that deliver concrete business outcomes—reducing IT ticket resolution times, optimizing supply chains, or personalizing customer service. The success of these applications, in turn, drives demand for the underlying infrastructure and models.

The Foundation Model Wars: Application-Driven Dominance

The competition among foundation model providers themselves is increasingly being decided at the application layer. The rise of Anthropic is a canonical case study. While OpenAI captured early mindshare with ChatGPT, Anthropic strategically focused on the enterprise sector, particularly coding applications. By 2025, Anthropic commanded a staggering 40% of enterprise LLM spending, up from 12% in 2023. In the coding-specific segment, its market share is 54%, compared to OpenAI’s 21%.

This illustrates a fundamental rule: Applications drive infrastructure adoption, not the other way around. Enterprises adopted Anthropic’s Claude not because its underlying model was abstractly “better,” but because its application in coding (Claude Code) delivered superior, tangible results for developers. The model became the preferred engine because of the superior vehicle built on top of it.

Profitability projections affirm this layered value capture. Morgan Stanley reports that generative AI reached a 34% contribution margin in 2025, its first profitable year, potentially rising to 67% by 2028. Crucially, these profits will accrue disproportionately to companies selling “complete solutions”—integrated applications that solve business problems—rather than those selling raw compute or undifferentiated API calls.
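Taking the Morgan Stanley figures at face value, one can back out what the projected margin expansion implies about variable (largely compute) costs per dollar of revenue. This is a minimal sketch, assuming contribution margin is simply revenue minus variable costs, divided by revenue:

```python
# If contribution margin is m, variable costs absorb (1 - m) of each
# revenue dollar. Comparing the reported 2025 figure with the 2028
# projection shows how much serving costs would need to fall.

def variable_cost_share(margin: float) -> float:
    """Fraction of revenue consumed by variable costs at a given margin."""
    return 1.0 - margin

cost_2025 = variable_cost_share(0.34)  # 66 cents of cost per revenue dollar
cost_2028 = variable_cost_share(0.67)  # 33 cents of cost per revenue dollar
print(round(cost_2025 / cost_2028, 2))  # 2.0 — costs per dollar would halve
```

In other words, reaching a 67% margin by 2028 implies roughly halving compute cost per revenue dollar, whether through cheaper inference, better pricing, or both.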

The Next Frontier: Vertical-Specific, Workflow-Integrated Solutions

For investors and entrepreneurs, the mandate is clear. The low-hanging fruit of building a simple chatbot wrapper around a generic LLM API has been picked. The next wave of “real value” will be built by companies that:

  • Target Specific Verticals: Deeply understand the unique pain points, jargon, regulations, and workflows of industries like healthcare, legal services, finance, and advanced manufacturing.

  • Leverage Proprietary Data: Integrate private, domain-specific data to train or fine-tune models, creating a “moat” that generic models cannot cross.

  • Become “Essential to Operations”: Move from being a helpful assistant to a core system of record or workflow engine. An AI tool that predicts machine failure on a factory floor and automatically orders parts is indispensable; a chatbot that summarizes legal documents is merely convenient.

Policy Imperatives in the Application Era

This shift raises new challenges for policymakers who must foster innovation while preventing anti-competitive practices and mitigating risks.

  • Competition Concerns: As foundation model giants like OpenAI and Anthropic build their own applications (e.g., coding assistants, enterprise analytics), they create a “conglomerate advantage.” Independent application developers may struggle to compete if they must pay their potential rival for API access while that rival uses its own infrastructure at cost. Vigilant antitrust review of acquisitions is essential to prevent big players from buying and shuttering nascent competitive threats through “acqui-hires.”

  • Copyright and Data Privacy: The application layer’s reliance on unique data intensifies legal battles over training data copyright. Furthermore, AI agents that access sensitive personal and corporate information demand a new generation of privacy-preserving technologies and regulations to ensure trust.

  • Regulatory Philosophy: The application layer needs room to experiment. Premature, overly restrictive regulation could stifle the innovation required to find true product-market fit. A balanced approach is needed: establishing guardrails for competition, safety, and privacy without dictating technological pathways.

Conclusion: Following the Internet’s Playbook

The trajectory of AI is following a familiar historical pattern, reminiscent of the commercialization of the internet. The internet was not monetized by companies selling TCP/IP protocols or bandwidth. It was monetized by applications—web browsers, search engines, e-commerce platforms, social networks, and SaaS tools—that made the underlying infrastructure invaluable to everyday life and business.

AI is on the same path. The $320 billion infrastructure investment has laid the digital railway. Now, the $19 billion (and rapidly growing) application ecosystem is building the trains, freight services, and passenger experiences that will make the journey worthwhile and profitable. The companies that thrive will be those that stop talking about the marvel of the railway and start delivering indispensable cargo to their customers’ doorsteps. The age of AI applications has begun, and with it, the true measure of the technology’s transformative—and profitable—potential.

Q&A on the Shift from AI Infrastructure to AI Applications

Q1: Why is the AI industry at a “crossroads,” and what is the central problem with the infrastructure-focused investment model?
A1: The industry is at a crossroads because the initial, massive investment in AI infrastructure (chips, data centers, foundation models) has proven necessary but not sufficient for profitability. The central problem is that infrastructure businesses, like leading foundation model companies, face thin profit margins due to exorbitant inference costs (the cost to run the model per query) and fierce competition that drives down prices. For example, OpenAI lost $5 billion in 2024 despite $13 billion in revenue. This model relies on continuous subsidy and is unsustainable without a path to real, external customer value.

Q2: What evidence demonstrates that the AI application layer is experiencing robust, real-world demand?
A2: Several key metrics demonstrate robust demand:

  • Spending: Businesses spent $19 billion on AI applications in 2025, representing over 6% of the total software market just three years after ChatGPT’s launch.

  • Commercial Scale: At least 10 AI products now have over $1 billion in annual recurring revenue, and 50 have over $100 million.

  • High-Value Acquisitions: Meta’s $2 billion acquisition of Manus, a startup that reached $125 million in revenue in just nine months, proves investors value proven business success in applications.

  • Investment Shift: Private equity deals for AI apps rose 65% in Q3 2025, and strategic M&A deal values were up 242%, showing capital is chasing companies with real customers.

Q3: What is “departmental AI,” and why is it considered a primary source of near-term value?
A3: Departmental AI refers to AI tools deeply integrated into specific business functions or departments (e.g., engineering, marketing, sales, HR). It is a primary value source because it solves concrete, expensive problems. The largest segment is AI coding tools ($4 billion of a $7.3 billion market in 2025), used daily by 50% of all developers (65% in top companies). These tools demonstrate clear ROI by boosting productivity, are embedded directly into workflows (like IDEs), and have measurable outcomes, making them essential rather than experimental. Their success drives demand for the underlying models and compute.

Q4: How does the competition between Anthropic and OpenAI illustrate the new rule that “applications drive infrastructure adoption”?
A4: The competition illustrates that the best model doesn’t always win; the best application does. Anthropic focused strategically on the enterprise sector, particularly coding applications. By 2025, it captured 40% of enterprise LLM spending (up from 12% in 2023) and a dominant 54% share in coding-specific apps (vs. OpenAI’s 21%). Enterprises adopted Anthropic’s Claude not because its base model was abstractly superior, but because its application in coding (Claude Code) delivered superior tangible results. This shows that the choice of foundational model is increasingly dictated by the performance of the specific application built on top of it.

Q5: What are the key challenges for policymakers as the AI application layer matures, and what is the recommended regulatory philosophy?
A5: Key challenges include:

  1. Competition & Conglomerate Advantage: As foundation model giants (OpenAI, Anthropic) build their own applications, they can disadvantage independent app developers who are also their customers. Antitrust scrutiny of acquisitions is needed to prevent “acqui-hires” that kill potential rivals.

  2. Copyright & Data Privacy: Applications using proprietary data raise complex copyright issues over training data. AI agents accessing sensitive information require new privacy frameworks.

  3. Regulatory Philosophy: Policymakers should avoid premature, restrictive regulation that stifles experimentation needed to find product-market fit. The recommended approach is to establish necessary guardrails for competition, safety, and privacy while allowing the application layer the freedom to innovate, fail, and iterate.
