Accel led the London chip startup’s round, with Pat Gelsinger joining as an angel investor, weeks after Anthropic was reported to be in early discussions to become a customer.
Fractile, the London-based startup designing inference chips that put compute and memory on the same die, has raised $220 million to take its hardware to production, the company said on Tuesday.
The round closes above the $200 million target the company was reported to be sounding out in late March, as Electronics Weekly first noted, and lifts Fractile into the cohort of European chip companies pitching themselves as alternatives to Nvidia at the inference layer.
The investor profile is what gives the round its weight. Accel is understood to have led, with former Intel chief executive Pat Gelsinger participating as an angel and operating adviser.
Existing backers Kindred Capital, the NATO Innovation Fund, and Oxford Science Enterprises, which co-led Fractile’s $15 million seed in July 2024, are part of the round.
The technology argument runs against the prevailing architecture. Conventional AI accelerators, including Nvidia’s H- and B-series GPUs, separate the compute die from high-bandwidth memory and pay an energy and latency tax shuttling data between them.
Fractile’s design instead performs the matrix multiplications that dominate transformer inference inside SRAM cells located alongside the compute logic, an in-memory-compute approach the company says removes most of the DRAM dependence that is currently the binding constraint on inference cost.
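The bottleneck Fractile is targeting shows up in a back-of-envelope calculation. In autoregressive decoding, every model weight is streamed from memory once per generated token, so memory bandwidth, not compute, caps single-stream throughput. The figures below (a 70-billion-parameter model in fp16, H100-class HBM bandwidth) are illustrative assumptions for the sketch, not Fractile or Nvidia specifications:

```python
# Back-of-envelope: why GPU inference is memory-bandwidth-bound.
# All figures are illustrative assumptions, not vendor specs.

model_params = 70e9          # a 70B-parameter model
bytes_per_param = 2          # fp16/bf16 weights
hbm_bandwidth = 3.35e12      # ~3.35 TB/s, roughly H100-class HBM3

# Bytes that must be read from memory to decode one token
weight_bytes = model_params * bytes_per_param

# Every weight is streamed once per token, so bandwidth sets the ceiling
max_tokens_per_s = hbm_bandwidth / weight_bytes
print(f"Bandwidth-bound ceiling: ~{max_tokens_per_s:.0f} tokens/s per stream")
```

Keeping the weights in SRAM next to the compute logic, as Fractile proposes, removes that round trip entirely; the trade-off is that SRAM is far less dense than DRAM, which is the capacity question production silicon will have to answer.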
Fractile claims the resulting chip can run frontier models up to 100 times faster and 10 times cheaper than current GPU setups; more recent investor materials frame the comparison as 25 times faster at one-tenth the cost.
Whether those numbers hold under production loads is the central technical question. The company has so far disclosed simulation and small-silicon results rather than at-scale benchmarks against deployed GPU clusters. Fractile’s first commercial chip is not expected to be available until 2027, a timeline the company has reiterated publicly, and the $220 million is sized to take the design through tape-out, software-stack build, and early customer integration rather than full production ramp.
On the customer side, the round arrives at an opportune moment. Anthropic is in early discussions to buy Fractile chips when they become available, multiple outlets reported earlier this month.
If the relationship formalises, Fractile would become Anthropic’s fourth named compute supplier alongside Nvidia, Google’s TPUs, and Amazon’s Trainium and Inferentia parts.
Anthropic has separately been exploring building its own custom AI chips, but the Fractile track suggests it is still pursuing a multi-supplier hedge.
Fractile is also part of a small group of European chip startups whose pitch is that the inference market is structurally distinct from training and therefore winnable.
TNW has tracked three such companies over the past year. The argument is that training will continue to require the largest, most exotic systems and that Nvidia’s CUDA moat is strongest there, while inference, the workload that actually consumes most of the dollars once a model is deployed, rewards specialised architectures tuned for throughput and energy per token rather than peak FLOPs.
The competitive set on that thesis is becoming crowded. Groq has shipped its language-processing units to multiple model providers and recently raised at a $6.9 billion valuation; Etched is building transformer-specific silicon; Cerebras and SambaNova have raised against the same workload from different angles.
Google itself is assembling a four-partner inference-chip supply chain with Broadcom, MediaTek, and Marvell to challenge Nvidia at the inference layer. Fractile’s claim is that its in-memory architecture wins on the metric that matters most for cost-sensitive inference, watts per useful token.
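The energy side of that metric can also be sketched. Widely cited per-access estimates (from Mark Horowitz’s 2014 ISSCC survey of 45nm silicon) put an off-chip DRAM read at roughly two orders of magnitude more energy than a small on-chip SRAM read; modern process nodes shift the absolute numbers, but the ratio persists. The model size and energy figures below are illustrative assumptions, not measurements of any vendor’s chip:

```python
# Why "watts per useful token" favours SRAM-resident weights.
# Per-access energies are rough 45nm estimates (Horowitz, ISSCC 2014);
# absolute values vary by node, but the DRAM/SRAM ratio is the point.

pj_dram_read_32b = 640.0     # off-chip DRAM access, per 32-bit word
pj_sram_read_32b = 5.0       # small on-chip SRAM access, per 32-bit word

model_params = 70e9
words_per_token = model_params / 2   # fp16: two params per 32-bit word

# Energy spent just moving weights to decode one token
dram_joules_per_token = words_per_token * pj_dram_read_32b * 1e-12
sram_joules_per_token = words_per_token * pj_sram_read_32b * 1e-12

print(f"DRAM-resident weights: ~{dram_joules_per_token:.1f} J/token")
print(f"SRAM-resident weights: ~{sram_joules_per_token:.2f} J/token")
```

On these rough numbers, data movement alone differs by a factor of more than a hundred per token, which is the gap an in-memory architecture is betting it can capture.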
The round follows Fractile’s February announcement of a £100 million ($132 million) three-year expansion of its London and Bristol operations, including a new hardware-engineering site in Bristol, and fits the wider UK sovereign-AI push that also produced the BT, Nscale, and Nvidia data-centre partnership in April.
Founder and chief executive Walter Goodwin, an Oxford Robotics Institute PhD now in his late twenties, has been the public face of the pitch.
The team has drawn engineers from Graphcore, Nvidia, and Imagination Technologies, and is building its software stack alongside the silicon. Tape-out and customer integration are the next visible milestones.