(1) agentic & assistive

AI is not just reactive but takes initiative. Users delegate goals instead of step-by-step tasks, and the system executes multi-step workflows, sometimes autonomously.

This covers copilots that sit inside apps (like GitHub Copilot), full-fledged agents that perform tasks across tools, and background assistants that monitor for triggers.

The challenge: visibility, interruptibility, and trust. Users must understand what the agent is doing and retain control without micromanaging.

The paradigm spans from simple chatbots to sophisticated assistants like GPT, Claude, or Alexa, where the boundary between a “tool” and a “partner” blurs; this is what makes it a foundational piece of many agentic experiences as well.


Natural language becomes the interface, through back-and-forth between people and machines. Users interact with AI through chat, voice, or multimodal dialogue, engaging in multi-turn conversations where context and memory matter.

Conversational UIs lower the barrier to entry; people can simply ask for what they need. But designing them requires solving for ambiguity, grounding, tone, and trust.


core promise

Familiar, intuitive, and low-friction access to complex capabilities through dialogue, without needing to learn a new UI.

main examples

Chat-based UIs, voice agents (e.g., Alexa, ChatGPT), customer support bots.

"I tell it what I want, and it figures out how to make it happen, even if I’m not actively watching.”

mental model

mental
model

biggest challenge

biggest
challenge

Trust scaffolding, transparency, and strong failure recovery

in short

Agentic & Assistive Interfaces move from “assist me” to “act for me.” They handle goals autonomously, proactively offer help, and adapt to context, but require trust scaffolding, transparency, and strong failure recovery.

when to use this paradigm

overview

Agentic & Assistive Interfaces let users delegate goals to an AI system that can plan, act, and adapt, not just respond. These systems go beyond reactive help to proactively handle tasks, retrieve information, or take actions on the user’s behalf.

They shift the workload from user execution to system execution. Traditional UX requires step-by-step input; agentic UX enables multi-step automation, contextual help, and persistent memory.
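
A minimal sketch of that shift, in Python: the user hands over a goal once, and a loop plans the next step, runs a tool, and feeds the result back into planning until it decides it is done. The planner, tool names, and stopping rule are illustrative stand-ins (a real system would back plan_next_step with an LLM and run_tool with real APIs), not a reference implementation.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)    # persistent memory of steps taken

def plan_next_step(state: AgentState) -> dict:
    # Stand-in for an LLM planning call: choose the next action from the goal + history.
    if not state.history:
        return {"tool": "search_flights", "args": {"route": "home -> Lisbon"}}
    return {"action": "finish", "summary": "Found 3 candidate flights."}

def run_tool(tool: str, args: dict) -> str:
    # Stand-in for real tool/API execution (search, booking, calendar, ...).
    return f"{tool} returned results for {args}"

def run_agent(goal: str, max_steps: int = 5) -> str:
    state = AgentState(goal=goal)
    for _ in range(max_steps):                     # hard cap keeps the loop bounded
        step = plan_next_step(state)
        if step.get("action") == "finish":
            return step["summary"]
        result = run_tool(step["tool"], step["args"])
        state.history.append((step, result))       # adapt: feed results back into planning
    return "Stopped: step budget exhausted."

print(run_agent("Find me a cheap flight to Lisbon next month"))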

use cases

bad

(1) High-stakes decisions without human oversight (medical diagnosis or treatment decisions, legal rulings, hiring / firing, child welfare)

(2) Ambiguous or exploratory tasks like research without a defined question

(3) Situations requiring emotional intelligence

(4) Tasks where errors are invisible or costly

(5) Highly regulated or compliance-sensitive tasks (tax filing, financial reporting)

(6) Real-time, life-critical systems (autonomous emergency response, power grids, water systems, military decisions)

good

(1) Task delegation (booking, scheduling, automating workflows)

(2) Information retrieval (research summaries, report generation)

(3) Proactive recommendations (reminders, nudges, insights)

(4) Multi-step processes (travel booking, onboarding, data processing)

design themes

recommendations

(1) Loss of Control

Users may feel uneasy if the agent acts without confirmation.

(2) Transparency vs. Cognitive Load

Showing every step can overwhelm, but hiding them can erode trust.

(3) Failure Recovery

How the agent handles errors matters more than success.

(4) Ethical Concerns

Data access, privacy, unintended actions.

(5) Personality Drift

Too much persona can create a false sense of capability.

(6) key design questions

“When should the system ask for permission?”
“How much autonomy is acceptable for the user/task?”

tooling & implementation

implementation notes

prototyping

(1) Use tools like Lovable, Replit, Magic Patterns, or Framer to mock multi-step flows.

(2) Prototype “what the agent is doing” screens with progress indicators.

(3) Use ChatGPT with custom GPTs/Actions, Claude Projects, or API playgrounds for prototyping.

(4) Define when the agent can act independently vs. when it needs approval (a minimal autonomy-policy sketch follows this list).
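
To make item (4) concrete, here is one possible shape for an autonomy policy, assuming a hand-written mapping from action types to autonomy levels: low-risk actions run automatically, consequential ones pause for approval, and some are never delegated. The action names and levels are placeholders to prototype against, not a prescribed taxonomy.

from enum import Enum

class Autonomy(Enum):
    AUTO = "act without asking"
    CONFIRM = "propose, then wait for approval"
    NEVER = "always leave to the human"

# Illustrative policy table: which actions the agent may take on its own.
POLICY = {
    "search_flights": Autonomy.AUTO,
    "draft_email": Autonomy.AUTO,
    "send_email": Autonomy.CONFIRM,
    "charge_card": Autonomy.CONFIRM,
    "delete_account": Autonomy.NEVER,
}

def execute(action: str, perform, ask_user) -> str:
    level = POLICY.get(action, Autonomy.CONFIRM)   # unknown actions default to asking first
    if level is Autonomy.NEVER:
        return f"'{action}' stays with the human."
    if level is Autonomy.CONFIRM and not ask_user(f"OK to {action}?"):
        return f"Skipped '{action}' (user declined)."
    return perform()

# Auto actions run silently; confirm actions surface a prompt to the user.
print(execute("search_flights", lambda: "searched", lambda q: True))
print(execute("charge_card", lambda: "charged", lambda q: False))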

Technical Considerations

(1) Memory management: What data persists between sessions? What are the conversation length limits? What happens when the context window fills up? (A context-trimming sketch follows this list.)

(2) API orchestration & chaining: the tools or APIs the agent will use, error handling between chained calls (what if step 3 fails?), retry logic, and rate limiting.

(3) Latency: how to show progress during long-running tasks, streaming responses where possible, and partial results (show what’s ready while the rest loads).

(4) Timeout handling (what if the agent gets stuck?). Items (2) through (4) are illustrated in the orchestration sketch after this list.
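
For the memory questions in item (1), a rough sketch of one common approach: keep the newest turns verbatim within a budget and fold everything older into a summary line. The word-count “tokenizer” and the budget are stand-ins; a real build would count model tokens and generate the summary with the model itself.

def rough_tokens(text: str) -> int:
    return len(text.split())                        # crude proxy for a real tokenizer

def trim_history(turns: list[str], budget: int = 200) -> list[str]:
    kept, used = [], 0
    for turn in reversed(turns):                    # newest turns are kept first
        cost = rough_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    dropped = turns[: len(turns) - len(kept)]
    summary = [f"[summary of {len(dropped)} earlier turns]"] if dropped else []
    return summary + list(reversed(kept))           # summary line + recent turns, within budget

history = [f"turn {i}: " + "details " * 30 for i in range(20)]
print(trim_history(history)[:2])                    # older content survives only as the summary line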
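
For items (2) through (4), a sketch of chained calls with retries, a per-step timeout so the agent cannot silently get stuck, and progress callbacks the UI can use to stream partial results. Step functions, retry counts, and timeouts are assumptions for illustration.

import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as StepTimeout

def run_step(fn, retries=2, timeout_s=5):
    last_error = "unknown"
    with ThreadPoolExecutor(max_workers=1) as pool:
        for attempt in range(retries + 1):
            try:
                return pool.submit(fn).result(timeout=timeout_s)   # timeout: don't let a step hang forever
            except StepTimeout:
                last_error = "timed out"
            except Exception as exc:                               # failure between chained calls
                last_error = str(exc)
            time.sleep(0.1 * (attempt + 1))                        # simple backoff before retrying
    raise RuntimeError(f"step failed after {retries + 1} attempts: {last_error}")

def run_chain(steps, on_progress):
    results = []
    for i, (name, fn) in enumerate(steps, start=1):
        on_progress(f"Step {i}/{len(steps)}: {name}…")             # progress the UI can surface live
        results.append(run_step(fn))                               # partial results accumulate here
        on_progress(f"Step {i} done.")
    return results

steps = [
    ("fetch sources", lambda: ["doc A", "doc B"]),
    ("summarize", lambda: "two-paragraph summary"),
]
print(run_chain(steps, on_progress=print))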

Team Collaboration

(1) Align early with PMs on levels of autonomy.

(2) Define a “kill switch” or human-override flows with engineers (a minimal sketch follows this list).

(3) Document what data the agent accesses (security + legal review), plus data retention policies, third-party API usage (where is the data being sent?), and user consent flows.

(4) Plan for evaluation: success metrics, edge case testing, audit logs.
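
A minimal sketch of the “kill switch” from item (2), combined with the audit log from item (4): the loop checks a shared stop flag before every action and appends each event to an audit trail. How the flag gets set (a UI button, an ops console) and where the log lives are left open.

import json
import threading
from datetime import datetime, timezone

stop_flag = threading.Event()    # set() from a UI button or ops console to halt the agent
audit_log = []                   # in practice: append-only storage reviewed during evaluation

def audit(event: str, detail: str) -> None:
    audit_log.append({"at": datetime.now(timezone.utc).isoformat(),
                      "event": event, "detail": detail})

def run_actions(actions) -> None:
    for name, fn in actions:
        if stop_flag.is_set():                       # human override wins over the plan
            audit("halted", f"stopped before '{name}'")
            return
        audit("action", name)
        fn()

run_actions([
    ("draft itinerary", lambda: None),
    ("hold booking", lambda: stop_flag.set()),       # pretend a human hit "stop" here
    ("charge card", lambda: None),                   # never runs: the flag was set
])
print(json.dumps(audit_log, indent=2))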

user intent archetypes & microcopy examples

archetypes & examples

User intent archetypes


delegate: “Do this for me”
plan: “Find the best option”
retrieve: “Bring me what I need”
monitor: “Let me know if something changes”

Prompt Starters

(1) “My goal is to ___; handle it end-to-end.”

(2) “Find the best option and book whatever aligns with my plan to ___.”

(3) “Make this ready for publishing.”

(4) “Keep an eye on this document/project/thread.”

ui microcopy

(1) “Would you like me to handle that for you?”

(2) “Here’s what I plan to do next — shall I proceed?”

(3) “I’ve scheduled your meeting. Want me to add travel time?”

(4) “Found 3 options for you. Should I pick the cheapest?”

(5) “I’ll monitor this and notify you if it changes.”

Ioana Teleanu is a patent-holding ai & product designer, founder, speaker, curator & creator.

she is using AI as design material to shape the future of digital products and documenting it in public.

© 2025 ai design os
