DigitalOcean’s AI pivot nets new stock price target

One of Wall Street’s top firms gave cloud infrastructure provider DigitalOcean a rare reset.

Bank of America Global Research upgraded the stock to “buy” and raised its price target to $60, arguing that the company’s push into AI is translating into real demand and meaningful operating leverage.

The BofA note explains what’s different this time: “Preliminary revenue guide for 2026 approaching 20% growth,” “signing multiple 8-figure deals (new for DO),” and a stepped-up build-out that significantly increases power capacity into 2026/2027.

It’s a clear shift from cautious to constructive, and it hinges on a part of the AI stack that often gets too little attention: inference.

DigitalOcean is making a big bet on AI inference to compete as a more cost-effective cloud.

DigitalOcean’s capacity math changes the story (and the multiple)

This isn’t hand-waving; it’s a bet on capacity and utilization.

BofA says DigitalOcean will add roughly 30MW of data center power capacity in the first half of 2026 on top of its current 43MW footprint, an increase of roughly 70% to support AI workloads.
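
As a quick back-of-the-envelope check of that figure, using only the numbers in the note (30MW added on a 43MW base):

\[
\frac{30\ \text{MW}}{43\ \text{MW}} \approx 0.70 \;\Rightarrow\; \text{roughly a 70\% capacity increase}
\]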

The analysts also mention “multiple 8-figure deals (new for DO),” which indicates that DigitalOcean is moving up-market while maintaining its small and midsize business, or SMB, core. That combination may reduce cohort volatility and provide operating leverage as racks fill up.

According to BofA, operating leverage should lift EBITDA growth and push levered FCF margins “in the mid to high teens” as capacity is absorbed and higher-usage cohorts build ARR. From here, the job is to energize the megawatts on time, keep latency low, and stack those large deals so the utilization curve bends upward rather than staying flat.

Inferencing is DOCN’s lane, where latency, cost, and product attach can compound

Plenty of clouds chased model training. DigitalOcean is leaning into inference, the work of running trained models inside live applications, which demands constant processing and fast responses. That fits its customer base and opens the door to higher-margin add-ons beyond raw compute.

BofA spells out the product map: “The company has been innovating on their broader cloud computing offering adding at every layer from Infrastructure (new GPU types, network file storage, etc.) to Platform (agent templates, improved data integrations, etc.) to Agents (CoPilot offerings, application design agents, etc).”

Read it as both a technology roadmap and a margin strategy. Infrastructure (GPU types, NFS) wins the workloads; Platform (agent templates, integrations) shortens time to value; Agents (CoPilot, application design agents) move customers into higher-value solutions and keep them coming back.

Related: One line in the OpenAI pact could supercharge Microsoft’s AI revenue

If DigitalOcean can standardize agent templates for typical SMB inference use cases such as support bots, semantic search, and image generation at the edge, it will have a flywheel in which the platform’s take rate and average revenue per user, or ARPU, climb as compute grows.

DOCN may set itself apart from hyperscalers like Microsoft’s Azure or Google Cloud by focusing on cost-per-inference and predictable latency, rather than operating the largest training clusters.

BofA says that “sustained growth from high usage customer cohorts driving ARR [annual recurring revenue] acceleration” is beneficial for the DigitalOcean model, as usage builds up, cross-selling grows, and cohorts reach the eight-figure potential that the Street didn’t anticipate a year ago.

Key DigitalOcean trends to watch:

  • How rapidly it converts those agent templates into products
  • How frequently platform/agent SKUs are included in new-logo packages
  • Whether the demand for inferencing will last beyond the initial excitement about AI

If execution aligns with the concept, the capacity ramp is less about filling racks and more about generating revenue from a stack designed for the portion of AI that never sleeps.

Valuation reset: what Bank of America’s price target actually prices in

BofA isn’t just raising its target. It’s also changing how it values DigitalOcean as the model shifts toward heavier AI usage and a broader cohort mix.

The firm lays out the new math plainly:

A $60 price target implies roughly 31% upside from the $45.81 share price cited in the note. Bank of America’s math only holds if the business can convert today’s pipeline into durable adjusted free cash flow (aFCF) by 2027 and show enough demand to fill its megawatts.
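
For reference, the implied upside is simply the target divided by the quoted price, using the $45.81 reference price from the note:

\[
\frac{\$60}{\$45.81} - 1 \approx 0.31 \;\Rightarrow\; \text{roughly 31\% upside}
\]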

BofA’s growth outlook for DigitalOcean also sits above what most investors are expecting: “Preliminary revenue guide for 2026 approaching 20% growth.”

What will prove the DigitalOcean rerate right (or wrong):

  • Utilization mix: GPU hours trending up and steady, fewer idle cycles as inferencing normalizes
  • ARPU lift: Platform/agent attach riding alongside compute, not lagging it
  • aFCF credibility: Clean bridges from GAAP FCF to the new aFCF basis; consistent disclosure

Follow the cash: leasing optics vs. economics (and the new FCF lens)

Alongside capacity, the financing and disclosure picture is changing. BofA says the company is increasingly funding data center infrastructure through equipment leases (lower reported capex, but more interest expense) and has introduced a new metric, “adjusted unlevered FCF,” that excludes equipment purchase costs and the associated interest expense.

In other words, reported capital expenditures look lower through the build-out period, while interest expense and lease liabilities rise. The adjusted unlevered FCF lens gives investors a way to gauge how much cash the business generates before those financing effects.
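
One way to picture the new lens, as a rough sketch based on the note’s description rather than BofA’s exact formula, is conventional free cash flow with the lease-financed equipment spend and its related interest added back:

\[
\text{adj. unlevered FCF} \;\approx\; \underbrace{\text{operating cash flow} - \text{capex}}_{\text{conventional FCF}} \;+\; \underbrace{\text{lease-financed equipment cost} + \text{related interest}}_{\text{excluded under the new metric}}
\]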

DigitalOcean: risks, reality checks, and what has to go right from here

BofA is direct about the bear case: “Risks to our call are slower than expected benefit from AI adoption, execution risk of managing a steep ramp and potential stranded capacity if demand ends up weaker.”

The path to the upside is just as clear: keep the pipeline full, deliver reliable performance, and convert that pipeline into durable usage that lifts ARR and profitability.

Keep an eye on the tells: trends in GPU utilization, latency SLOs, ARR acceleration from high-usage cohorts, and early signs that platform/agent SKUs are showing up in new-logo bundles.

That’s what separates a capacity narrative from a full-stack story.

Related: Qualcomm earned $215 target from BofA: Here’s what could derail it