And it’s going to change how we design products, forever.
For decades, UX design has been about guiding users through an experience.
We’ve done that with visible interfaces:
Menus. Buttons. Cards. Sliders.
We’ve obsessed over layouts, states, and transitions.
But with AI, a new kind of interface is emerging:
One that’s invisible.
One that’s driven by intent, not interaction.
Think about it:
You used to:
→ Open Spotify
→ Scroll through genres
→ Click into “Focus”
→ Pick a playlist
Now you just say:
“Play deep focus music.”
No menus. No tapping. No UI.
Just intent → output.
You used to:
→ Search on Airbnb
→ Pick dates, guests, filters
→ Scroll through 50+ listings
Now we’re entering a world where you guide with words:
“Find me a cabin near Oslo with a sauna, available next weekend.”
So the best UX becomes barely visible.
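To make “intent → output” concrete, here’s a minimal sketch in TypeScript, under heavy assumptions: `extractIntent` stands in for whatever LLM or NLU service would actually parse the utterance (a trivial keyword match here, purely for illustration), and none of these names come from a real Spotify API.

```typescript
// A structured intent: what the user wants, not which buttons they pressed.
type PlayIntent = { kind: "play"; mood: string };
type UnknownIntent = { kind: "unknown"; utterance: string };
type Intent = PlayIntent | UnknownIntent;

// Hypothetical intent extraction. In a real system this would be an
// LLM or NLU call; a keyword match stands in for it here.
function extractIntent(utterance: string): Intent {
  const text = utterance.toLowerCase();
  if (text.startsWith("play")) {
    const mood = text.replace(/^play\s+/, "").replace(/\s+music$/, "");
    return { kind: "play", mood };
  }
  return { kind: "unknown", utterance };
}

// The "output" half: the system resolves the goal itself, instead of
// making the user navigate menus, genres, and playlist cards.
function handle(utterance: string): string {
  const intent = extractIntent(utterance);
  switch (intent.kind) {
    case "play":
      return `Now playing a "${intent.mood}" playlist.`;
    case "unknown":
      return `Sorry, I didn't catch that: "${intent.utterance}"`;
  }
}

console.log(handle("Play deep focus music"));
// → Now playing a "deep focus" playlist.
```

The point isn’t the parsing. It’s that the user states a goal once, and the system owns the path to it.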
Why does this matter?
Because traditional UX gives users options.
AI-native UX gives users outcomes.
Old UX:
“Here are 12 ways to get what you want.”
New UX:
“Just tell me what you want & we’ll handle the rest.”
And this goes way beyond voice or chat.
It’s about reducing friction.
Designing systems that understand intent.
Respond instantly.
And get out of the way.
The UI isn’t disappearing.
It’s dissolving into the background.
So what should designers do?
Rethink your role.
Going forward, you won’t just lay out screens.
You’ll design interactions without interfaces.
That means:
→ Understanding how people express goals
→ Guiding model behavior through prompt architecture
→ Creating invisible guardrails for trust, speed, and clarity
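As a hedged sketch of that last point, here’s what one invisible guardrail might look like for the cabin-booking example, again in TypeScript. Everything is an assumption for illustration: the `ProposedBooking` shape, the idea of a model-reported confidence score, and the thresholds themselves; this is not any real booking API.

```typescript
// A candidate action the model proposes, with its own confidence score.
interface ProposedBooking {
  listing: string;
  pricePerNight: number;
  confidence: number; // hypothetical 0-1 score: how well this matches the intent
}

type Decision =
  | { action: "execute"; booking: ProposedBooking }
  | { action: "confirm"; booking: ProposedBooking; reason: string };

// Illustrative thresholds; in practice you'd tune these per action severity.
const MIN_CONFIDENCE = 0.9;
const MAX_PRICE = 400;

// The guardrail: invisible when everything checks out, visible
// (a confirmation step) exactly when trust is at risk.
function guard(booking: ProposedBooking): Decision {
  if (booking.confidence < MIN_CONFIDENCE) {
    return { action: "confirm", booking, reason: "low match confidence" };
  }
  if (booking.pricePerNight > MAX_PRICE) {
    return { action: "confirm", booking, reason: "unusually high price" };
  }
  return { action: "execute", booking };
}

console.log(guard({ listing: "Cabin near Oslo", pricePerNight: 180, confidence: 0.95 }).action);
// → "execute"
console.log(guard({ listing: "Sketchy listing", pricePerNight: 950, confidence: 0.55 }).action);
// → "confirm"
```

On the happy path the user never sees this; when something looks off, the system asks before it acts. That’s what keeps “invisible” from meaning “unaccountable.”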
In essence, you’re designing for understanding.
The future of UX won’t be seen.
It will be felt.
Welcome to the age of invisible UX.
Ready for it?
Thanks for sharing these thoughts. Maybe we should reread “Conversations with Things.”
First of all, we UX designers have always had, and still have, intent in mind when we research, ideate, and design. What exactly changes?
Hyperpersonalization is the technical answer to your examples. If I say, “Play deep focus music,” that’s hardly different from “Surprise me with some music tagged deep focus,” and nothing short of big data could give a better result. Beyond that, I’m just replacing the starting point, and some kind of modular, visualized answer might be the better UX for users living in a world of complexity.
But this kind of visual modularity in conversations with things has to happen! To me it’s not invisible; it’s contextual and fluid.
Interesting. Interaction may also no longer be confined to a handful of channels or devices: besides voice control there’s gesture control, biometric control, and location-based control. I do think, however, that people will always want a sense of control, perhaps with primarily visual feedback. How else are you going to prevent the AI from booking a spot in a dumpster in Oslo at a premium rate?