Valuable post, Felix! I agree that generative interfaces will dissolve a lot of rigid UX patterns and predefined app flows.
But I’m skeptical about the conclusion many people are drawing from this: that chat becomes the interface.
Humans are conversational, yes, but we are even more visual. The mammalian brain loves images, hierarchy, symbols, spatial relationships. Vision compresses meaning instantly in ways language often cannot.
So the question I keep coming back to is:
What do visual generative interfaces look like?
If AI deeply understands the user (their goals, history, preferences, emotional patterns, context), how does that intelligence manifest visually?
Do interfaces become dynamic maps? Timelines? Spatial canvases? Personalized visual worlds generated around intention?
And how do we avoid collapsing the future into endless text streams and chat windows?
It feels like the more invisible the intelligence becomes, the more important orientation becomes.
Would genuinely love to hear your thoughts on this in a future post. Feels like one of the most important design questions emerging right now.
Super, super interesting. The only barrier I see is the input method. At home I just chat with my agent when building or doing tasks, because voice is so much quicker than typing. How are we going to bridge that for generative interfaces out in public? Typing then seems very inconvenient.
An on-demand interface based on my preferences and context... interesting perspective. How long do you think this will take to become reality?
I think it’s already happening. Look at Google’s AI suggestions: you no longer search through different pages and links, because Google automatically generates what you’re looking for on demand.