Discussion about this post

Thomas Link:

Thanks for sharing these thoughts. Maybe we should reread “Conversations with Things.”

Foremost, we UX designers have always had intent in mind when we research, ideate, and design. What exactly changes?

Hyperpersonalization is the technical answer to your examples. If I said, “Play deep focus music,” and the result was not similar to “Surprise me with some music tagged deep focus,” nothing other than big data could give a better result. Beyond that, I’m just replacing the starting point, and some kind of modular, visualized answer might be the better UX for users living in a world of complexity.

But this kind of visual modularity in conversations with things has to happen! To me it’s not invisible; it’s contextual and fluid.

Marc Reekers:

Interesting. Interaction may also no longer be confined to a handful of channels or devices: besides voice control, there is gesture control, biometric control, and locational control. I do think, however, that people always want a sense of control, perhaps with primarily visual feedback. How else are you going to prevent the AI from booking a spot in a dumpster in Oslo at a premium rate?

3 more comments...