A framework for software development when intelligence is ambient.
Maksim Soltan — 2026
A note on provenance.
This is not a finished product. It is not a thought experiment. It is not a framework invented last week.
It is the surface residue of decades spent working in and around AI — watching the field move from academic novelty to infrastructure, from infrastructure to commodity, from commodity to the ambient condition we now call normal. It is the product of months, more likely years, of going down rabbit holes that most people were not going down yet: what happens to software teams when the cost of building approaches zero? What happens to markets when any narrative can be instantiated in days? What happens to UX research when the prototype is cheaper than the research plan?
These questions did not arrive clean. They arrived sideways, through failed products and working ones, through conversations that ended in disagreement, through watching frameworks that made sense in one era persist long past the conditions that made them necessary.
What is written here is a distillation of that. It is not complete. It will be revised. But it is not new thinking in the sense of being untested — it has been stress-tested against real systems, real markets, and real teams, and the parts that survived that pressure are the parts you are reading now.
The parts that are still wrong are the parts you should find and bring back.
We are uncovering better ways of building software by building it, and by helping others build it.
Through this work we have come to value:
Distribution paths over capability demonstrations
Narrative compounding over feature completeness
Interface emergence over UX specification
Attention cycles over development cycles
That is, while there is value in the items on the right, we value the items on the left more.
Agile was written in 2001 for a world where code was slow and expensive to change, and where building in the wrong direction could go unnoticed for months.
The ceremonies — standups, sprints, velocity tracking, retrospectives — were coordination protocols designed to reduce the cost of change and catch wrong-direction errors early.
That world is over.
In 2026, with AI-assisted development, build cost approaches zero and implementation is no longer the bottleneck.
Agile’s ceremonies protected against a cost that no longer dominates. The coordination overhead they impose now exceeds the build risk they mitigate.
The question is not: how do we organize development cycles?
The question is: what is worth building, for whom, delivered how, into which current of existing attention?
AI-gile answers the second question.
Software does not ship. Software flows.
The old model: a team builds a discrete thing, ships it, and then manages the thing it shipped. Shipping is a moment. The moment is the goal.
The new model: a team enters a current. The current already carries attention. The software is what happens when the current and the team’s capability meet. There is no discrete moment of completion. There is only the ongoing question of whether the product is deepening or narrowing the flow.
Flow widens when the product deepens the current of attention it entered. Flow narrows when the product cuts against that current.
The team’s job is not to ship. The team’s job is to widen the flow.
This is the age of experimentation, not the age of measurement.
Measurement implies you know what to measure. You know what to measure when you understand the system. In a new market, in a new interface paradigm, in a new distribution environment — you do not understand the system yet. Measurement before understanding produces confident numbers about the wrong things.
Experimentation implies you are still finding out what the system is.
The AI-gile stance:
We are researchers first. We are builders second. We build in order to research. The artifact is not the goal — the behavioral data the artifact generates is the goal. The artifact is the instrument.
This does not mean we do not measure. It means we do not optimize for metrics before we understand which metrics indicate flow vs. which metrics indicate noise. We run experiments to find the signal. We measure once we know what to listen for.
What research looks like in AI-gile:
The sequence: explore → discover → confirm → measure → exploit
Most teams skip to measure. They optimize things they do not yet understand. AI-gile insists on the full sequence even when build cost makes skipping tempting.
Markets are not targets. Markets are weather systems.
The PMF-era framing treated markets as static entities to be validated against. You asked: does this market want this product? The answer was yes or no, and you built accordingly.
This framing assumes markets hold still. They do not. Markets are moving concentrations of attention, shaped by narrative, platform dynamics, behavioral patterns, and the products already flowing through them. A market that does not want your product today may want it in three months because something upstream changed the narrative.
AI-gile treats markets as living systems to be read, not static targets to be validated.
The implication for UX and market research: traditional research methods (surveys, focus groups, market sizing) are designed to characterize a static target. They are the wrong instruments for a moving system. The right instrument for a moving system is a probe — something small you put into the current to observe how the current responds.
In AI-gile, the product is the probe.
The highest priority is to identify the path through which working software reaches users before writing the first line. Distribution is not a post-development function. It is the first design constraint.
A team that ships excellent software with no distribution path has shipped nothing. A team that identifies the path first builds only what the path requires.
Agile said: working software over comprehensive documentation. AI-gile says: distribution path over working software.
Welcome changing requirements at any stage of development — because in AI-assisted development, changing requirements costs nothing to implement. The narrative — what the product means, who it is for, what it makes possible — is the artifact that is expensive to change. Narratives compound. Features don’t.
Test the story before testing the software. If the story doesn’t spread without the software, the software won’t spread with it.
Traditional UX research assumed a gap between research, insight, design, prototype, and test. That gap was a function of build cost. When prototyping took weeks, you needed research to front-load assumptions.
In AI-assisted development, the prototype IS the research instrument. Ship the minimum interaction surface. Observe real behavior in real context. The simulator is the field study.
The collapsed cycle:
Narrative hypothesis → Working interface (days) → Real usage data → Revised narrative
This loop runs faster than a traditional UX research round. Do not slow it down with methods designed for the previous build cost.
Interface design in the AI era is not specification work. It is discovery work. The interface that users will use is not the one you imagine in wireframes — it is the one that emerges from the collision between your product’s behavior and the user’s existing patterns.
HMD-native interfaces (spatial, ambient, gesture-first) make this especially acute. There is no prior art for what “natural” means in spatial UI. Every HMD interface is a first-principles investigation. No specification survives first contact with the headset.
The AI-gile stance on UI research:
Traditional UX research measured satisfaction (NPS, CSAT, usability scores). These are trailing indicators in the AI-gile era.
The signal that matters is compounding: does usage grow week over week without proportional push? Does each user create more surface area for the next? Does the product generate its own distribution?
Satisfaction tells you users don’t hate it. Compound signal tells you the product is alive.
Measure compound signal. Satisfaction will follow or it won’t, but it is not the leading indicator.
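The compound test can be sketched in a few lines of arithmetic. This is an illustrative Python sketch, not a prescribed metric: the series names, and the idea of subtracting "pushed" (paid or manually acquired) users to isolate organic growth, are assumptions for the example.

```python
# Hypothetical sketch: detecting compound signal from weekly usage series.

def weekly_growth(active_users: list[int]) -> list[float]:
    """Week-over-week growth ratios for a series of weekly active counts."""
    return [curr / prev for prev, curr in zip(active_users, active_users[1:])]

def is_compounding(active_users: list[int], pushed_users: list[int]) -> bool:
    """True if organic usage (total minus pushed) grows every week.

    'Pushed' users stand in for whatever proportional effort drove
    acquisition that week; what remains is the product's own pull.
    """
    organic = [a - p for a, p in zip(active_users, pushed_users)]
    return all(curr > prev for prev, curr in zip(organic, organic[1:]))

# Usage keeps growing even after subtracting the users we paid to acquire:
actives = [100, 150, 230, 360]
pushed = [40, 45, 50, 55]
print(is_compounding(actives, pushed))  # True: the product generates its own growth
```

The design choice worth noting: the test is on the organic residual, not the raw total. A raw total can grow on push alone; only the residual tells you whether the product is alive.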
Users cannot accurately report what they want. They can accurately demonstrate what they do.
Attitudinal research (surveys, interviews, focus groups) measures what people say about their behavior. Behavioral research measures the behavior itself.
In AI-assisted development, behavioral data is available immediately — because you can ship an observable interface in days. There is no justification for attitudinal research as a primary input. Use it to generate hypotheses. Use behavioral data to confirm or reject them.
The sequence: attitudinal input → hypothesis → shipped interface → behavioral confirmation.
Agile measured change cycles in two-week sprints. The sprint was a unit of time.
In AI-gile, the change cycle is a unit of attention: one narrative hypothesis, one distribution experiment, one behavioral dataset, one revised narrative.
Change cycles can be hours. They can be weeks. They are not fixed intervals. They are complete when the behavioral data is sufficient to make the next narrative decision.
Do not confuse the sprint with the cycle. The sprint was a time-box imposed because developers needed predictable coordination windows. AI-assisted teams do not have the same coordination constraint. Impose time-boxes only where the constraint is real, not out of habit.
In traditional software architecture, the constraints that shape system design are: scale, security, latency, maintainability.
In AI-gile, the primary architecture constraint is distribution path.
If the product distributes through an HMD app store, the architecture must satisfy submission requirements, package size limits, and the interaction model of that store’s dominant use cases.
If the product distributes through a developer workflow (IDE plugin, CLI, CI integration), the architecture is shaped by those touchpoints.
The distribution path shapes the API contract, the data model, the interface paradigm, and the onboarding sequence. Architecture that ignores distribution path will be rebuilt when distribution reality arrives.
Agile organized teams around features and capabilities. Feature teams own a domain of the product.
AI-gile organizes teams around distribution paths and the user behaviors those paths contain.
A team owns: the narrative for a specific user population, the path through which that population is reached, the interface they encounter, and the compound loop that generates retention and referral.
Feature boundaries are implementation details. Path boundaries are strategic.
Traditional UX research happened in project phases: discovery, generative research, evaluative research. Each phase had a budget, a scope, and a deliverable.
This model assumed that research was expensive to conduct and that its outputs would remain valid long enough to justify the cost.
In AI-gile, the product is always live, behavioral data is always accumulating, and interface hypotheses are always being tested. Research is not a phase. It is infrastructure.
Instrument the interface to generate behavioral data as a standard operating condition. Analyze it continuously. There is no “research phase” because there is no gap between product and research — the product is the research instrument.
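One concrete reading of "research is infrastructure": instrumentation is the default path of every interface handler, not a separate project. A minimal Python sketch under stated assumptions; the `EVENTS` sink, `emit`, and the handler names are invented for illustration, and a real system would replace the in-memory list with an analytics pipeline.

```python
# Hypothetical sketch: behavioral data accumulates as a standard
# operating condition, not as a research phase.

import time
from functools import wraps

EVENTS: list[dict] = []  # stand-in sink; replace with your analytics pipeline

def emit(event: str, **props) -> None:
    """Record one behavioral event."""
    EVENTS.append({"event": event, "ts": time.time(), **props})

def instrumented(handler):
    """Wrap an interface handler so usage data is emitted by default."""
    @wraps(handler)
    def wrapper(*args, **kwargs):
        emit(handler.__name__)
        return handler(*args, **kwargs)
    return wrapper

@instrumented
def open_panel():
    return "panel opened"

open_panel()  # the call itself is the research datum
print(EVENTS[-1]["event"])  # open_panel
```

The point of the decorator pattern here is that instrumentation cannot be forgotten: any handler built through it is observable from the moment it ships.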
In traditional product development, research, design, and engineering were separate functions with handoff ceremonies between them.
In AI-gile, the person who builds the interface also ships it, instruments it, reads the behavioral data, and revises the narrative. The handoff is within one person or pair, not across organizational boundaries.
This is not a cost-cutting measure. It is an accuracy measure. The interpretation lag between researcher, designer, and engineer is where insight decays. Eliminate the lag.
Every decision in AI-gile development is evaluated against one question:
Does this protect, create, or convert attention?
Build cost is not the unit of account. Time is not the unit of account. Features are not the unit of account.
Attention is the unit of account. Everything else is implementation.
The new sequence, applied:
Distribution path identified
│
▼
Narrative hypothesis formed
│
▼
Working interface shipped (days, not months)
│
▼
Behavioral data collected (instrument everything)
│
▼
Compound signal measured (D1 → D7 → D30)
│
▼
Narrative revised or confirmed
│
▼
Distribution scaled (only when compound signal is present)
Note what is absent: sprints, story points, velocity, UAT phases, design sign-off gates, research deliverables.
These are not absent because they are bad. They are absent because they were solutions to a cost problem that no longer exists at its prior magnitude. Retain the ones that protect attention. Remove the ones that protected build cost.
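The "compound signal measured (D1 → D7 → D30)" step in the sequence above can be computed directly from raw usage events. A minimal Python sketch, assuming events arrive as `(user_id, day_index)` pairs; that shape, and the function names, are hypothetical.

```python
# Hypothetical sketch: D1/D7/D30 retention from raw usage events.

from collections import defaultdict

def retention(events: list[tuple[str, int]],
              offsets: tuple[int, ...] = (1, 7, 30)) -> dict[int, float]:
    """Fraction of users active exactly `offset` days after their first day."""
    days_by_user: dict[str, set[int]] = defaultdict(set)
    for user, day in events:
        days_by_user[user].add(day)
    first_seen = {user: min(days) for user, days in days_by_user.items()}
    cohort = len(first_seen)
    return {
        off: sum(1 for u, d0 in first_seen.items()
                 if d0 + off in days_by_user[u]) / cohort
        for off in offsets
    }

# Toy cohort: "a" returns on day 1 and day 7, "b" only on day 1, "c" never.
events = [("a", 0), ("a", 1), ("a", 7), ("b", 0), ("b", 1), ("c", 0)]
print(retention(events, offsets=(1, 7)))  # D1 = 2/3, D7 = 1/3 for this cohort
```

A real pipeline would use rolling windows rather than exact day matches, but the shape of the measurement is the same: the cohort is defined by first contact, and the signal is who comes back without being pushed.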
Spatial computing interfaces (HMDs, ambient displays, gesture-native environments) collapse the AI-gile principles into their most extreme form.
There is no established grammar for spatial UI. There are no conventions to defer to. Every pattern is a hypothesis.
HMD-native products must be built under AI-gile conditions by definition — because no amount of research, specification, or planning produces valid assumptions about how users inhabit three-dimensional interfaces. The only way to know is to ship and observe.
The teams that will define spatial computing UX are not the teams with the best research programs. They are the teams that ship the most interface hypotheses into real HMD usage and iterate fastest on behavioral signal.
AI-assisted development makes this possible. The question is whether the team’s process is designed to exploit it.
Product-Market Fit remains a valid concept. It is not a decision framework. It is a confirmation signal — the market’s response to a product that has already found its distribution path, established its narrative, and generated compound usage.
You do not build toward PMF. You build the conditions — distribution path, narrative, compound behavior — and PMF is what observers call the result.
The teams that chase PMF as a goal have inverted the causal chain. They are chasing the confirmation before creating the conditions.
AI-gile creates the conditions. PMF describes them afterward.
This manifesto is incomplete. That is intentional.
The practice of AI-gile development is younger than its theory. These questions are not answered — they are open for argument:
What is the health metric for AI-gile teams — given that velocity, story points, and sprint completion are meaningless when build cost approaches zero?
How do distributed AI-gile teams maintain narrative coherence without synchronous ceremonies?
What does “done” mean when the product is always live, always instrumented, always generating signal?
How do you manage stakeholder expectations against a cycle that has no fixed endpoints?
What organizational structures enable AI-gile at scale, given that it optimizes for small, path-focused teams?
Where does experimentation end and exploitation begin? How do you know when you have found the signal vs. when you are still in noise?
What is the AI-gile equivalent of the retrospective — the mechanism for a team to examine its own process without the sprint as a natural boundary?
These are not rhetorical. Bring your answers.
This manifesto is a living document.
It was written from one perspective: a practitioner building AI-native software in 2026, having watched Agile ceremonies persist long past the conditions that made them necessary.
That perspective is partial. The argument needs pressure.
Open a pull request if you have a specific revision to the text: a claim sharpened, a principle corrected, a section that should not exist.
Open an issue if you have an argument that is not yet a revision: a disagreement, a counter-example, an open question.
The argument is the point. Software development methodology has always evolved through practitioners fighting about what works. That fight should happen here, in public, in the open, with version control.
AI agents are explicitly welcome to open pull requests and argue in issues. The expectation is the same as for human contributors: bring a specific claim, bring evidence or reasoning, be willing to be wrong.
This document will be wrong about things. Find them.
Agile asked: how do we build software predictably and respond to change?
AI-gile asks: into which current does this product flow, does the flow widen, and are we still learning, or have we already decided we know?
Build cost is not the constraint. Attention is the constraint. The methodology should be about attention.
We are researchers. We build to find out.
Maksim Soltan — 2026 Work in progress. Steal freely.