My music label runs on data from six platforms — Spotify, Apple Music, YouTube, Instagram, TikTok, UnitedMasters. I’d been building a finance system, and the next step was an analytics dashboard that would tie all of it together: streams, royalties, audience flow, per-track P&L. I needed a design for it. Not a wireframe. Not a mood board. A real, production-quality visual direction that I could hand to a coding tool and say “build this.”
The old options: hire a designer ($500–5,000 depending on where you are, two weeks minimum), learn Figma (months of practice for mediocre results), or ask ChatGPT to describe a layout in words and hope for the best.
I did none of that. I used Variant to generate 24 complete mockups in 20 minutes, then used Claude to figure out which parts of which mockup were actually good — and why. The whole thing cost nothing beyond the tools I already pay for — Variant at $20/month and Claude at $100/month — and produced a design direction more rigorous than most agency briefs I’ve seen.
Here’s the method.
Step 1: Generate without judging
Variant is an AI design tool that generates full-page UI mockups from a combination of text prompts, reference websites, and your own images. That combination is what makes it powerful — you’re not just typing a description into a void. You’re feeding it visual context from multiple angles.
I started by picking three websites I liked the feel of — not dashboards, just sites with the right energy. Added them as references in Variant’s interface (you literally click “add reference” and paste URLs). Then I uploaded a screenshot of our label’s visual identity. On top of all that, I typed a one-line prompt: “analytics dashboard for a music label, dark mode, shows revenue and streaming data.” Hit generate.
Four mockups appeared. Some interesting, some not. I scrolled down. Four more. Scrolled again. Four more. I kept generating, and within 20 minutes I had 24 distinct design directions — dark premium with gold accents, minimalist editorial, bento-box layouts, newspaper brutalist, cyberpunk terminal, Swiss modular, retro indie magazine. Styles I wouldn’t have thought to ask for.

The key insight: don’t judge during generation. Just keep going. Your taste will kick in later, and it works better when it has a lot to choose from.
Step 2: Let AI be the design critic
Here’s where it gets interesting. I had 24 beautiful mockups. My gut said three or four of them “felt right.” But gut feel is a terrible way to design a tool people will use every day.
I took screenshots — four mockups per image, six images total — and dropped them into Claude. Said: describe every single mockup, then tell me which design elements work best for two things. First, readability and insight generation — what makes a dashboard actually useful. Second, dopamine and user engagement — what makes people want to open it again tomorrow.
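For what it’s worth, this step doesn’t have to be manual. A rough sketch of the same evaluation call using Anthropic’s TypeScript SDK; the model name, file paths, and prompt wording here are placeholders, not a transcript of what I actually ran in the chat:

```ts
// Sketch of scripting the evaluation step. I did it by dragging screenshots
// into the chat window; this just shows the same idea as an API call.
import fs from "node:fs";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function evaluateMockups(screenshots: string[]) {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder; any vision-capable Claude model
    max_tokens: 4096,
    messages: [
      {
        role: "user",
        content: [
          // One base64 image block per screenshot (each screenshot held four mockups)
          ...screenshots.map((path) => ({
            type: "image" as const,
            source: {
              type: "base64" as const,
              media_type: "image/png" as const,
              data: fs.readFileSync(path).toString("base64"),
            },
          })),
          {
            type: "text" as const,
            text:
              "Describe every mockup, then tell me which design elements work best for " +
              "(1) readability and insight generation and (2) dopamine and user engagement, " +
              "and what is missing from all of them.",
          },
        ],
      },
    ],
  });
  return response.content;
}

evaluateMockups(["mockups-1.png", "mockups-2.png"]).then(console.log);
```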
Claude went to work. It researched 2026 dashboard design best practices, analyzed each of the 24 variants against those practices, and came back with specific findings.
Some of what it found: bento-box layouts (cards of varying sizes) are the most scannable pattern in 2026 — they let each metric be self-contained. Dark mode isn’t aesthetic preference, it’s function — 82% of users have it enabled, and data pops better on dark backgrounds (but dark gray, not pure black, or you get eye fatigue). One hero chart beats five small charts — the brain processes one complex visual better than multiple simple ones. Text-based alert feeds create a dopamine anticipation loop — users open the dashboard to see what the system discovered, not because they have to check numbers.
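The dark-mode finding translates directly into a theme. Here’s a minimal sketch of the kind of token set it implies; the hex values are illustrative picks of mine, not colors pulled from any of the mockups:

```ts
// Illustrative dark-mode tokens. Hex values are hypothetical, chosen only
// to show the "dark gray, not pure black" principle.
export const darkTheme = {
  background: "#121212",  // near-black, not #000000; pure black behind bright data is fatiguing
  surface: "#1C1C1E",     // bento cards sit one step lighter than the background
  accentGold: "#C8A24B",  // a muted gold accent in the spirit of the chosen direction
  textPrimary: "#EDEDED", // off-white body text; pure white has the same harshness problem
  textMuted: "#9A9A9A",   // secondary labels
};
```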
And the biggest finding: none of my 24 mockups had a psychological engagement layer. No celebration animations when a track breaks even. No streak counters. No loss aversion alerts. No progress-to-goal indicators. Every mockup handled data display well, but none addressed the question of why someone would want to come back.
I wouldn’t have found any of this by staring at the mockups myself.
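To make that missing layer concrete: something like a progress-to-break-even card is a small component, not a redesign. A hypothetical React sketch, with names and framing of my own invention rather than anything from the mockups:

```tsx
// Hypothetical sketch of one "dopamine layer" element: a progress-to-break-even
// card with loss-aversion framing. Component and prop names are illustrative.
import React from "react";

type TrackProgressProps = {
  title: string;
  revenueToDate: number; // royalties earned so far, in dollars
  breakEvenCost: number; // what the track cost to make and promote
};

export function TrackProgress({ title, revenueToDate, breakEvenCost }: TrackProgressProps) {
  const pct = Math.min(100, Math.round((revenueToDate / breakEvenCost) * 100));
  const recouped = revenueToDate >= breakEvenCost;

  return (
    <div className="bento-card">
      <h3>{title}</h3>
      {/* Loss-aversion framing: show what's left to recoup, not just the running total */}
      <p>
        {recouped
          ? "Recouped! Cue the confetti."
          : `$${(breakEvenCost - revenueToDate).toFixed(0)} left to break even`}
      </p>
      <div className="progress-track">
        <div className="progress-fill" style={{ width: `${pct}%` }} />
      </div>
    </div>
  );
}
```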
Step 3: The optimal combination
Claude produced a specific recipe: take the bento-box layout from mockup 4B, the dark gold color palette from 1C, the gradient wave chart from 3A, the text-based intelligence feed from 3D, the bold alert blocks from 6C. Then add a dopamine layer that doesn’t exist in any of the mockups — confetti on milestones, streak counters, loss aversion blocks, progress bars toward goals.
This is the part that matters. No single mockup was the answer. The answer was a combination of the best elements from six different mockups, plus a layer that AI identified as missing from all of them.
I took this recipe back to Variant. Generated a new batch of mockups using the specific prompt Claude had written — with exact color codes, layout structure, and component descriptions. The results were dramatically more focused. Instead of 24 random directions, I got variations on a single strong direction.

Step 4: Export and build
Variant exports to HTML or React. I picked the best result, hit export, and had production-ready code. Took it to Claude Code and said: here’s the design, here’s my real data (CSVs from Spotify, Apple Music, YouTube, Instagram, UnitedMasters), now build the actual dashboard.
That’s a different article. The point here is the design part — from zero to a production-quality, research-backed visual direction — took under an hour.
The method
If I had to distill this into a repeatable process:
One — generate wide. Use Variant (or any AI design tool) to produce 20–30 mockups without filtering. Feed it reference websites, upload your own images for brand context, add a loose text prompt. Quantity over quality at this stage.
Two — analyze with AI. Feed all mockups to Claude (or any LLM that can see images). Ask it to evaluate each against specific criteria relevant to your use case. Don’t ask “which looks best.” Ask “which patterns maximize X and Y, and what’s missing from all of them.”
Three — synthesize. Take the AI’s recipe — best elements from multiple mockups plus identified gaps — and write a specific prompt.
Four — generate narrow. Go back to the design tool with the specific prompt. Now you’re generating variations on a validated direction, not random exploration.
Five — export and build. Take the winner into code.
The entire loop: generate → evaluate → synthesize → generate again → build. Two AI tools talking to each other through you, each doing what it’s best at. Variant is better at visual generation. Claude is better at analysis and synthesis. You’re better at making the final call.
What I learned
AI design tools are not replacements for designers. They’re replacements for the part of the design process where you stare at a blank canvas and don’t know where to start. The blank canvas problem is real — it’s the reason most non-designers never attempt design at all. Variant eliminates it completely.
The evaluation step is where the real value is. Generating pretty pictures is easy. Knowing which pretty picture will actually work for your users — that’s the hard part. Using one AI to critique another AI’s output is a pattern I keep coming back to. It worked for patents (agents attacking patent claims). It works for design (analysis against best practices). It probably works for anything where generation is cheap but evaluation requires expertise.
Your taste is the tiebreaker, not the driver. I picked the three mockups that “felt right” before Claude’s analysis. Two of them made it into the final recipe. One didn’t — it looked good but violated basic readability principles I hadn’t consciously noticed. If I’d gone with gut alone, I would have built something pretty but less effective.
One thing you can do today
Go to variant.com. Type a one-line description of something you need designed — a landing page, a dashboard, an app screen, anything. Generate 8–12 mockups. Screenshot them. Drop the screenshots into Claude and ask: “Which of these designs best follows current UX best practices for [your use case], and what’s missing from all of them?”
You’ll get a design direction in 30 minutes that would take a week of back-and-forth with a human designer. Not because AI is better than designers — but because the generate-evaluate-synthesize loop is faster than the traditional brief-draft-feedback loop.