Intent Engineering for Agent Orchestration
A framework for routing what users intend, not just what they type.
The Problem
Every interaction was quietly demanding that operators become prompt engineers, on top of being network engineers, security analysts, and incident responders.
The system kept growing: more agents, more skills, more capabilities. The prompting tax grew with it. The skill ceiling kept rising. The surface area kept expanding.
The default response was to add: better agents, smarter routing, more skills. Each addition shuffled the burden without reducing it. The mechanism underneath stayed intact. Humans were doing the translation work from intent to instruction.
The translation itself had to move upstream, from the operator to the system.
The Reframe
Most teams were asking: what capabilities do we ship next?
The better question was upstream of that: how do we capture what the operator is trying to do, before they have to translate it into a prompt?
If prompt engineering is the work of shaping human intent into machine-readable instruction, intent engineering is the inversion — designing the system to do that work itself. The translation moves from the user to the platform.
That reframe became the foundation for everything that followed.
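The inversion can be made concrete with a small sketch: the system, not the operator, turns a free-form utterance into a structured intent and routes it to an agent. Everything here is illustrative, not the shipped implementation; the `Intent` fields, the keyword heuristic (standing in for a real classifier or model call), and the agent names are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str                                      # what the operator is trying to do
    entities: dict = field(default_factory=dict)   # extracted parameters
    confidence: float = 0.0

def capture_intent(utterance: str) -> Intent:
    """Translate raw operator input into a structured intent.

    In practice this step would call a classifier or a language model;
    a keyword heuristic stands in for it here.
    """
    text = utterance.lower()
    if "latency" in text or "slow" in text:
        return Intent("diagnose_performance", {"symptom": "latency"}, 0.8)
    if "quarantine" in text or "isolate" in text:
        return Intent("contain_threat", {"action": "isolate"}, 0.9)
    return Intent("clarify", confidence=0.3)

# Routing table (hypothetical agent names): intent goals -> agents.
# The operator never writes a prompt; the system composes the
# instruction from the captured intent.
AGENTS = {
    "diagnose_performance": "network-diagnostics-agent",
    "contain_threat": "security-response-agent",
    "clarify": "conversation-agent",
}

def route(utterance: str) -> str:
    return AGENTS[capture_intent(utterance).goal]
```

The point of the sketch is the division of labor: the operator supplies intent in their own words, and the platform does the translation into instruction.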
Our AI lab had started as a modest experiment, mostly a lightweight imitation of assistants like ChatGPT or Gemini. We knew we needed to go further: to apply AI not as a generic chatbot but as an adaptive system embedded directly into the network, learning from real operational context and user behavior.
Some of the challenges we faced as a team
Adding intelligence
What is the least invasive way of integrating explicit AI capabilities into an already-built, about-to-ship platform?
GenAI Primitives
How do traditional UX building blocks evolve when AI becomes a co-contributor?
Agent Orchestration
How do we navigate agency, user expectations, discovery, and recoverability in a multi-agent system? We had almost no examples to draw on.
WORKSHOPS
Insights and Takeaways
These sessions revealed our real challenge — not integrating AI, but redefining how intelligence should live within the product.
Two Core Experiences
Two core experiences emerged from this work: our AI Conversational UX/UI and our GenAI Canvas.
AI Conversational UX/UI
(Chat Invoked)
The Breakthrough (Q4 2024)
To scale this intelligence layer, we had to design the architecture: a world where agents could actually live, communicate, and evolve. We were operating in uncharted territory. Few precedents existed for applied AI in networking, and everyone had a different view of how an agent architecture and taxonomy should be instantiated.
Once we defined the agent architecture, the next step was to make it tangible.
AI Exchange was our way of giving that structure a face: a place where users could actually see, manage, and understand the intelligence operating on their behalf.
Agent Mental Model
This mental model visualizes how users engage with AI-driven capabilities across three key phases: Explore, Configure, and Monitor within the Agent Exchange ecosystem.
Explore: Users discover and evaluate new agent capabilities, focusing on questions like What can it do for me? and Is it secure?
Configure: Users begin tailoring the agent experience, seeking clarity on control, cost, and activation.
Monitor: Users assess ongoing value and impact, asking Is this helping me? and Am I getting my money’s worth?
The circular flow reflects a continuous feedback loop—exploration informs configuration, monitoring reveals new opportunities, and the cycle repeats—driving trust, adoption, and perceived value across the agent lifecycle.
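The circular flow above can be sketched as a tiny state cycle. The phase names come from the mental model; the transition logic and annotations are a hypothetical illustration, not the actual Agent Exchange code.

```python
from enum import Enum

class Phase(Enum):
    EXPLORE = "explore"      # What can it do for me? Is it secure?
    CONFIGURE = "configure"  # How do I control it? What does it cost?
    MONITOR = "monitor"      # Is this helping me? Am I getting my money's worth?

def next_phase(current: Phase) -> Phase:
    """Advance the lifecycle. The loop is circular: monitoring reveals
    new opportunities, which sends the user back to exploration."""
    order = [Phase.EXPLORE, Phase.CONFIGURE, Phase.MONITOR]
    return order[(order.index(current) + 1) % len(order)]
```

Modeling the loop this way makes the key property explicit: there is no terminal phase, so the design has to support re-entry into exploration at any point.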
In Q2, design expanded in two directions at once — horizontally across the organization and vertically into strategy.
Horizontally, we codified the new language of intelligence — principles, primitives, patterns — embedding them in our design system so every team could build coherently.
Vertically, that same language started shaping business strategy.
“A new language for intelligence”
GenAI Primitives
A taxonomy of new design affordances
Every object becomes conversational — data, UI, docs, devices.
Format Translation
Converting between modalities and representations — text → table → diagram → video → code.
Multi-Player
Shared environments where humans and agents act concurrently.
Adjusting meaning dynamically — shorter, deeper, simpler, more formal.
Humans shift from maker to editor, curator, supervisor.
Agents operate asynchronously; humans orchestrate priorities and focus.
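One way a taxonomy like this becomes operational is as a machine-readable registry that components can declare against. This is a hypothetical sketch, not the design system's actual data model; the slug names for the unlabeled primitives (`semantic-dials`, `role-shift`, `async-agency`) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Primitive:
    name: str
    description: str

# The six primitives from the taxonomy above, as registry entries.
PRIMITIVES = [
    Primitive("conversational-objects", "Every object becomes conversational: data, UI, docs, devices."),
    Primitive("format-translation", "Converting between modalities: text, table, diagram, video, code."),
    Primitive("multi-player", "Shared environments where humans and agents act concurrently."),
    Primitive("semantic-dials", "Adjusting meaning dynamically: shorter, deeper, simpler, more formal."),
    Primitive("role-shift", "Humans shift from maker to editor, curator, supervisor."),
    Primitive("async-agency", "Agents operate asynchronously; humans orchestrate priorities and focus."),
]

def lookup(name: str) -> Primitive:
    """Find a primitive by name; raises StopIteration if absent."""
    return next(p for p in PRIMITIVES if p.name == name)
```

Encoding the language this way is what lets every team "build coherently": a component can state which primitives it implements, and tooling can check that claim against one shared vocabulary.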
The Alpha Launch (Q3 2025)
By Q3, everything we’d been designing — the architecture, the primitives, the language — finally came together. We called this phase the Alpha Launch. Not because the product was finished, but because the system was alive and working.
It was the first moment we could feel intelligence in the experience.
Our design language, our AI pattern library, and the underlying architecture were now working in sync — design had effectively codified intelligence across the platform.
This wasn’t a pixel launch. It was a systems launch: proof that design could serve as the connective tissue between research, engineering, and strategy.
Operationalized a cross-functional AI framework
Architected Extreme’s inaugural AI product strategy and GTM motion
Co-invented an AI Agency Architecture for Networks (Filed 2025)
Influenced next-gen networking platform architecture
Learned to design for predictability
You can’t design for AI the same way you design for deterministic systems.
Lead by design instead of architecture
Early on, system architecture dictated the experience. That relationship inverted over time.
Prototyping led to a design language
Prototypes are only as powerful as the principles they encode. We developed a method for converting exploration into a reusable pattern language for AI interactions (principles → primitives → patterns → components).
From Dialogic to Co-Adaptive to Collaborative
AI experiences are dialogic, co-adaptive, and ultimately collaborative — systems that learn alongside us, not just from us.