I really like your design stack process, especially how you leverage AI tools to identify and refine the key screens. In my own workflow, I often have to rein myself in: it’s tempting to design every possible screen, but it’s more important to focus on the screens that address the user’s main friction points.
I sometimes get bogged down by all the clickable elements that lead nowhere, but I mitigate that by giving prototype testers specific tasks to complete, which naturally guides them toward the screens I want to validate. I’d never thought to use AI to “debug” those dead ends until now, so that insight is eye-opening. Thanks so much for sharing; I really appreciate it.
Loving this. Claude has become my desk partner, and spending more time there discussing than doing leads to better outcomes. I agree: starting with Figma is now just an inhibitor.
Great post Felix! I’ve been working in a similar way: starting in ChatGPT to stress-test the concept, then shaping a scoped readme. Who it’s for, what it does, flow logic, design principles. It feels like sketching but with writing.
I'll pop that readme into Lovable, have it define development phases, and then literally just go with phase 1. I'll revise in Lovable, explore layout variants in Figma, and sync the backend with Supabase. I push to GitHub and use Claude to help debug or reason through changes. (I find it easier to have it scoped outside of Lovable for troubleshooting.)
I've been starting more and more this way at work and side projects. This is a wild time to be a designer. Feels like we can build anything.
One thing I'm curious about. How do you handle components in Lovable?
Define schema with GPT: people can use a tool I made with Lovable (wink) called vibeblocks.dev (beta). It turns the idea into two things: spec-driven blueprints (like pro devs would write, including the schema) and a sequence of structured prompts to copy-paste into Lovable's Knowledge in project settings. The full thing takes about a minute, and the result is great, even more so now with the Lovable agent.
Yes! This makes sense to me. You’ve motivated me to try Lovable. Have you tried skipping design and going straight to build with something like Replit?
Regarding Supabase storage and the comment “Define schema with GPT”:
I found that Lovable handles this quite well simply by telling it which data I want stored with user/auth context.
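A minimal sketch of what “storing data with user/auth context” can look like in a Supabase-backed app: each row carries the signed-in user’s id, so a row-level-security policy like `user_id = auth.uid()` can scope access per user. The `ScopedRow` shape, `scopeToUser` helper, and field names here are hypothetical illustrations, not Lovable’s or Supabase’s actual API.

```typescript
// Hypothetical row shape: "user_id" mirrors the column a Supabase
// row-level-security policy (e.g. `user_id = auth.uid()`) would check.
interface ScopedRow<T> {
  user_id: string;
  data: T;
}

// Tag any payload with the current user's id before inserting it,
// so per-user policies can enforce who may read or write the row.
function scopeToUser<T>(userId: string, data: T): ScopedRow<T> {
  return { user_id: userId, data };
}

// Example: a note that only its author should be able to see.
const row = scopeToUser("auth-user-123", { title: "My note" });
```

In practice, prompting Lovable with “store this with user/auth context” tends to produce exactly this pattern: a `user_id` column plus the matching RLS policy.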
Any particular reason you do this with ChatGPT?
yeah I use it for more detailed product outlines. sometimes it's easier for me to map everything out in GPT before moving to Lovable too early
I'd love to see a video overview of these steps. Haven't vibe coded (yet). Do you have something like that?
might record one. good idea
Similar stack here as well, just with Canva instead of Figma.
what do you recommend for a paywall? I was looking through www.supawall.com
This is a good piece 💯… great for someone trying to explore frontend development.
Do you ever start with a sketchbook?
Thank you, Felix!
As always, your post is full of insights.
I’ll try discussing with ChatGPT how to describe user scenarios in maximum detail to identify gaps in my vision during design.
Great share. It’s not about the tools it’s about the process of learning and showing what works for you. Thank you.
solid stack! love it. Thanks for sharing buddy. 😄
Love it
The Supabase recommendation is a great tip. Why did you choose this service and not others?
And what do you say about using Claude for initial UX wireframes, and only then taking it into Figma?
Super helpful, thank you for sharing!