Language models through physics
WF-AI
Wave Field Labs
Vol. 1

Wave Field Attention

Long-context attention for coding agents that need to remember repositories, issues, traces, and tool calls without quadratic rent.

2026

The first LLM that can understand your entire codebase.

Wave Field is a sub-quadratic breakthrough for full-repo reasoning: code, issues, tests, docs, traces, and long task history can live in one working field without quadratic attention cost.

Request breakthrough preview ->
wavefield.agent

Find the auth regression, update the failing tests, and explain the patch.

  • Retrieved repo map, recent diffs, and CI trace
  • Followed signal across middleware, tests, and issue history
  • Drafted patch plan with files, risks, and verification steps
03 / Long-context speed

Long context gets faster, not slower.

Wave Field uses FFT-based attention, so its fixed setup cost is amortized as context grows. In the long windows that matter for repositories and agent memory, throughput rises while standard attention falls.
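
A minimal sketch of the FFT trick, assuming a learned causal kernel per channel (the function and kernel below are illustrative, not Wave Field's shipped code): convolving every token with a window-length kernel through the FFT costs O(n log n), where dense attention pays O(n^2).

import numpy as np

def wave_field_mix(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # x: (n, d) token features; kernel: (n, 1) or (n, d) causal response.
    n = x.shape[0]
    fft_len = 2 * n                              # zero-pad so the convolution is linear, not circular
    Kf = np.fft.rfft(kernel, n=fft_len, axis=0)
    Xf = np.fft.rfft(x, n=fft_len, axis=0)
    y = np.fft.irfft(Xf * Kf, n=fft_len, axis=0)
    return y[:n]                                 # causal prefix: y[i] = sum_{j<=i} kernel[i-j] * x[j]

# Toy usage: a damped-wave kernel over a 32K window, 64 channels.
n, d = 32768, 64
t = np.arange(n, dtype=np.float64)[:, None]
kernel = np.exp(-t / 512.0) * np.cos(0.03 * t)   # (n, 1), broadcasts across channels
x = np.random.randn(n, d)
y = wave_field_mix(x, kernel)                    # O(n log n) token mixing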

32K context: 2.2M tok/s

Wave Field's measured throughput at 32K context, up from 724K tok/s at 2K.

Throughput curve: 3x up

Wave Field's throughput roughly triples from 2K to 32K, while native standard attention drops about 7x over the same range.

Crossover: 8K+

Flash Attention still wins on short prompts. Wave Field wins once context becomes long enough to matter.

Memory ceiling: 128K

Standard transformers run out of memory at repo-scale windows. Wave Field runs the core attention path at 128K in around 27 GB.

Flash Attention caveat

Fair comparison: Flash Attention is heavily optimized, but it still slows as context grows.

These Flash numbers are estimated from the native-attention baseline by assuming a 2-4x memory-I/O improvement. The key signal is the curve: Wave Field amortizes better as context grows.

Context | Flash est.              | Wave Field | Readout
2K      | ~1.4M tok/s             | 724K tok/s | Flash wins short prompts
8K      | ~700K tok/s             | 1.6M tok/s | Wave wins ~2.3x
32K     | ~200K tok/s             | 2.2M tok/s | Wave wins ~11x
128K    | ~50K tok/s (if it fits) | 1.6M tok/s | Wave wins ~32x
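
The readout column is just the ratio of the two throughput columns; a quick check with the table's own numbers:

for ctx, flash, wave in [("2K", 1.4e6, 724e3), ("8K", 700e3, 1.6e6),
                         ("32K", 200e3, 2.2e6), ("128K", 50e3, 1.6e6)]:
    print(f"{ctx}: Wave/Flash = {wave / flash:.1f}x")
# 2K: 0.5x (Flash wins), 8K: 2.3x, 32K: 11.0x, 128K: 32.0x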

Retrieval workloads add a persistent-field soft attention path with O(N × G) cost. That path still needs optimization; the core language-model attention result is the long-context speedup above.
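
A hedged sketch of what an O(N × G) path can look like, assuming G is a small set of persistent field slots that outlives any one prompt (the function, shapes, and slot interpretation are illustrative):

import numpy as np

def field_soft_attention(x: np.ndarray, field_k: np.ndarray, field_v: np.ndarray) -> np.ndarray:
    # x: (N, d) query tokens; field_k, field_v: (G, d) persistent slots.
    scores = (x @ field_k.T) / np.sqrt(x.shape[1])  # (N, G): every token vs every slot
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)               # softmax over G slots, not N tokens
    return w @ field_v                              # (N, d), total cost O(N * G)

Because G stays fixed while N grows, this path is linear in context length; the quadratic N × N term never appears.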

04 / Agent memory

A repository is not a prompt. It is a field of signals.

Coding agents need to keep symbols, commits, tickets, traces, tests, and failed attempts in play. Wave Field turns that context into a shared causal medium instead of a pile of retrieved snippets.

Agent need       | Short-context assistant     | RAG wrapper                    | Wave Field
Working memory   | Current file or chat window | Chunk recall at query time     | Continuous repo field
Long task state  | Lost between tool loops     | Rebuilt from search            | Persistent propagated signal
Failure mode     | Forgets constraints         | Misses cross-file dependencies | Couples related evidence
Economics target | O(n^2) dense attention      | Index plus model context       | O(n log n)
Repo

Keep code structure in working memory

Symbols, files, tests, and docs can contribute to one field state instead of competing for a narrow prompt window.

Tasks

Carry state through long agent loops

Debug traces, failed attempts, and reviewer constraints can remain active while the agent searches, edits, and verifies.

Tools

Feed agent actions with grounded context

Expose repo memory to planners, code editors, test runners, and review agents through practical runtime interfaces.
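
As a sketch of what a "practical runtime interface" could mean here, a minimal surface an agent loop might code against; RepoMemory, Evidence, and both method names are hypothetical, not a published API:

from dataclasses import dataclass
from typing import Protocol

@dataclass
class Evidence:
    source: str    # file path, test id, trace, or issue the signal came from
    snippet: str   # grounded excerpt the agent can cite
    weight: float  # current activation of this source in the field

class RepoMemory(Protocol):
    def write(self, kind: str, payload: str) -> None:
        """Inject an agent event (search result, edit, test run) into the field."""
        ...

    def query(self, question: str, k: int = 8) -> list[Evidence]:
        """Read back the k most activated sources for a planner or reviewer."""
        ...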

05 / Agent workflows

What coding agents can do with longer, cheaper memory.

Inspired by agent-native product surfaces: describe the work, let the system gather context, and keep enough memory alive to finish the change instead of restarting every few files.

Repo Memory Kernel

Give agents a field over the whole codebase.

Use Wave Field to study how code symbols, test failures, docs, and issue history can propagate through a compact long-context state.

Open reference implementation ->

Agent Runtime

Prototype coding agents that stay with the task.

Explore field propagation for multi-step coding workflows: search, edit, test, review, and explain without collapsing into a fresh prompt each turn.

Discuss agent preview ->

01

Query a repo with citations

Ask why a module works the way it does and trace the answer back to files, commits, and tests.

02

Debug across traces and diffs

Keep logs, stack traces, recent changes, and failing assertions active in one investigation.

03

Prepare PRs and tests

Plan the patch, edit the right files, update tests, and summarize verification in context.

04

Migrate APIs in bulk

Follow usage patterns across packages without losing the exceptions that make migrations hard.

05

Explain architecture drift

Surface where implementation and intended design diverged across a long project history.

06

Keep long-running state

Carry constraints from yesterday's issue thread into today's edits, tests, and review loop.

07

Turn issues into patch plans

Connect bug reports to relevant code paths and propose scoped, testable changes.

08

Review changes against intent

Compare diffs to original goals, product constraints, and prior agent decisions.

06 / Scale economics

150x+ lower long-context compute at scale.

Directional estimates for large agent workloads where context length dominates spend. Wave Field replaces quadratic attention pressure with O(n log n) field propagation, so inference and training economics improve as context grows.
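
A back-of-envelope view of why the gap widens, not a full cost model: the raw op-count ratio between dense attention and an O(n log n) path is n / log2(n), while end-to-end savings are smaller because attention is only one slice of total compute (hence the directional 150x+ figure rather than thousands):

import math

for n in (2_048, 8_192, 32_768, 131_072):
    ratio = n / math.log2(n)   # (n^2) / (n * log2 n): attention ops only
    print(f"{n:>7} tokens: dense vs O(n log n) op ratio ~ {ratio:,.0f}x")
# At 128K context the attention-only ratio is ~7,700x; model FLOPs, serving,
# and batching dilute that to the system-level 150x+ estimate above.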

Estimated savings: 150x+

Lower inference and training compute versus frontier API-style long-context stacks at scale.

Training path: O(n log n)

150x+ lower sequence-scaling cost than dense attention for long-context training runs.

Stack                            | Inference cost at scale  | Training / tuning cost | Read
Wave Field                       | 1x baseline              | 1x baseline            | Runs lean long-context field propagation.
OpenAI                           | ~150-400x+ vs Wave Field | 150x+ frontier-scale   | API pricing bundles model, serving, margin, and dense long-context compute.
Anthropic                        | ~180-450x+ vs Wave Field | 150x+ frontier-scale   | Excellent long-context product, but premium API economics at scale.
Google Gemini                    | ~150-350x+ vs Wave Field | 150x+ custom           | Large hosted stack with frontier serving overhead.
Llama / Mistral / Cohere hosting | ~150-300x+ vs Wave Field | ~150-300x+ compute     | Cheaper than frontier APIs, but dense attention still gets expensive as context grows.

Estimates are directional for high-volume long-context agent workloads; exact costs vary by model size, hardware, batching, latency target, and context window.

07 / Platform

Open primitives for people building agents.

The important objects are inspectable: a damped wave kernel, a field state, coupling across heads, and efficiency accounting that agent builders can reason about.
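
As a flavor of "inspectable": a damped wave kernel reduces to two named parameters an agent builder can read off directly. The class and its fields below are illustrative, not the shipped primitive.

import numpy as np
from dataclasses import dataclass

@dataclass
class DampedWaveKernel:
    decay: float      # per-token damping: how fast old context fades
    frequency: float  # radians per token: phase that encodes relative position

    def materialize(self, length: int) -> np.ndarray:
        t = np.arange(length)
        return np.exp(-self.decay * t) * np.cos(self.frequency * t)

k = DampedWaveKernel(decay=1 / 512, frequency=0.03)
print("memory half-life ~", round(np.log(2) / k.decay), "tokens")  # ~355 tokens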

08 / Built for code

Built for systems where code context is infrastructure.

Wave Field sits between model architecture and agent runtime design. The work matters when the hard part is not a bigger prompt, but a better memory substrate for software work.

Repo: Files, tests, docs, and issues contribute to a shared representation instead of isolated snippets.
Agent: Long-running workflows keep task state alive while tools search, edit, test, and review.
Scale: FFT-based propagation keeps the architecture pointed at practical full-repo context windows.

09 / Agent preview

Want agents that stay with the repo?
Build with us.

Join the private preview for coding-agent systems built on field attention.
