Generate Photorealistic Car Visuals in seconds!
Built the first automotive GenAI image platform using Stable Diffusion. Drove adoption across top car brands and led the company to acquisition in under 12 months.
Overview
Role
Co-Founder. Led Design, Product, and Business Strategy
Responsibility
Identified a market gap, validated with an MVP, and iterated to product-market fit
Timeline (1 year)
Feb 2023 - Mar 2024
About The Project
I built Creatorloop, an Image GenAI platform that enabled auto dealerships and manufacturers to generate photorealistic vehicle visuals in seconds. The tool leveraged Stable Diffusion to eliminate the need for costly photoshoots, helping users create high-quality marketing assets effortlessly.

I led product and design end-to-end, scaled adoption across major brands, and drove the company to acquisition within 12 months.
Highlight
I started exploring diffusion models in September 2022 and quickly developed a strong passion for the space. While building Creatorloop, I began experimenting with AI video before it became mainstream.

Post-acquisition, I’ve remained deeply involved in the space 🙂
Endorsed by Google’s Head of Auto Retail
Early Experiments in AI Video
Discovery
The launch of Stable Diffusion completely changed how I thought about Node and brand content.
While building Node App, most brands used influencers to create the visuals for their ads, websites, and social media feeds. When I started experimenting with Stable Diffusion in October 2022, it opened up a wave of ideas for how we could use it to improve brand experiences.

The more I explored, the clearer it became that the opportunity was much bigger than just replacing creators. But my first instinct was simple: what if brands could generate visuals without needing influencers at all?
Core Problem
Brand and influencer collaborations at scale are slow and unreliable.
User Problem
E-commerce brands wait 15 days from launching a campaign to getting content.

Even then, they risk influencers posting late, off brief, or not at all.
Business Problem
20% of collaborations end with influencers not posting.

90% of churn is tied to compliance issues. Missing posts directly hurt LTV.
Opportunity:
Stable Diffusion could reduce the slow and unreliable parts of content creation. In 2022, the tooling had a steep learning curve and was built for technical users, not marketers.

Hypothesis:
If AI image tools become usable for marketers, brands will shift budget from hiring influencers for product visuals to generating social content in-house.
Market Research
Competitive Analysis
I tested 25 AI image tools and mapped the landscape to understand their product positioning. This helped me separate what they claimed to solve from what they actually enabled users to do.

Here are a few products I tested:
Interview
After interviewing 30 marketing managers from companies like McCann Worldgroup, Jarritos, and Hershey, we learned:
  • Agencies liked the idea of using AI for ideation
  • Brands were excited about creating content on demand
  • There was broad skepticism about what AI could actually do
  • Very few had experimented with other GenAI tools
  • Large organizations didn’t know where AI would fit in their budgets
Takeaway From Research
1. Misalignment between user workflows and the design of existing tools.

Brands and agencies create content differently. Brands build brand libraries and posting schedules. Agencies do heavy ideation and operate with more rigid processes. Most AI image tools did not match either workflow well, which made them hard to adopt beyond quick experiments.

2. Most tools converged on the same obvious use case

Across the products I tested, the default wedge was product shots for e-commerce brands, fashion, or consumer image generation apps. The space felt crowded, and many tools looked similar in positioning. At best, they could be used by brands to place products in different settings.

3. The larger opportunity was a non-obvious B2B vertical

From working with e-commerce brands, I was skeptical of the product shot market because visuals are relatively cheap when working with influencers. My instinct was that a larger B2B vertical needed this technology more.

An agency owner summarized it as: think about industries with heavy products, like industrial machines.
Product Strategy
The goal was to leverage Node’s existing distribution to drive early adoption. Early users would help us improve the system quickly and uncover the most promising B2B vertical to pursue.
Technical Constraints
Roadblock
Despite our conviction, a few parts of the business model still needed validation. Building the full product would have taken 3 months, which was too slow for the 2-week experiment cycles we wanted while the use case was still unproven.
Prototyping
Conceptualizing the MVP
When scoping the MVP, we focused on what needed to be validated for the product to succeed. It all came down to the quality of the generated images.

As a result, we decided to launch an AI image generation service as an MVP. That would let us control the quality of the output, stay close to customers, and gain invaluable insights into Stable Diffusion ahead of the software development phase.
The goal of the service was to understand:
  • How to sell GenAI to brands and validate the ideal customer profile (ICP)
  • The limitations of Stable Diffusion
  • The types of requests and outputs brands would want from AI
  • How much they’d be willing to pay for it
Service Blueprint
I conceptualized the service in a few hours and launched 5 pilots within the following 2 weeks. I interfaced with the brands via Slack, while our data scientist used Stable Diffusion in the background to create content.
Over the span of 3 months, we worked with
Takeaways
Learnings
Operating this service was intense, but we learned so much! It gave us the conviction to start building the software.
Here are a few learnings:

Vertical Fit:
  • Automotive brands showed a higher willingness to pay than e-commerce brands.
What actually worked:
  • Background swapping fulfills more use cases than subject-specific model fine-tuning (DreamBooth)
  • Using the Img2Img functionality is the best way to minimize bad outputs
Quality Constraints:
  • Achieving the right lighting in an image involves using the product’s color as the base in the generation
  • If you can achieve an outcome without AI, don't use it. AI is unpredictable and produces random outputs
Key Decision
These learnings made it clear that the biggest opportunity wasn’t generic image generation, but verticalized workflows. After operating the MVP, we decided to focus on automotive because:
  • Strong budget appetite: Dealerships spend over $10K per month on visual content, far more than typical e-commerce brands.
  • Lack of in-house creative resources: Most dealerships don’t have access to studios or design teams.
  • Expensive production costs: Automotive photoshoots are significantly more costly than those in other verticals.
  • Repeatable use cases: Dealerships rely on standardized image templates, making it easier to optimize AI outputs for Google Display Ads and CRM content.
Building
Turning Complex Workflows Into an Intuitive Experience
The goal of V1 was to speed up our AI service. We translated the workflows we ran manually into product workflows and designed the system around automating the generation process.
Tooling Complexity
Stable Diffusion power tools (like Automatic1111) are powerful but complex.

Most users can’t get consistent results without lots of trial and error.
Expert Workflows
Turn repeatable tasks (like lighting-matched edits) into a reliable workflow.

Package presets and automation into a simple product with consistent results.
Turn Stable Diffusion WebUI (Automatic1111) into an intuitive UI
Our analysis of the settings showed that the prompt and the model matter most for getting the desired outcome. The CFG scale, denoising strength, and mask blur all vary with the model selected, so the UI focuses on the prompt and model settings.
Prompt: Partly customizable
Negative Prompt: No user edit needed
Steps: No user edit needed
Sampler: No user edit needed
CFG Scale: Depends on user needs
Seed: No user edit needed
Size: No user edit needed
Model: Depends on user needs
Denoising strength: Depends on user needs
Mask blur: Depends on user needs
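The split above can be sketched as a thin wrapper over Automatic1111's public `/sdapi/v1/img2img` API: presets lock the settings users never touch, while the product only exposes the prompt, the model, and the three model-dependent knobs. This is a hypothetical reconstruction, not CreatorLoop's actual code; the field names follow the Automatic1111 API schema, but the preset values and the `automotive_v1` model name are illustrative.

```python
# Hypothetical sketch of mapping the product UI onto the Automatic1111
# img2img API. Field names follow the /sdapi/v1/img2img JSON schema;
# the preset values and model names are illustrative assumptions.

# Settings the user never sees, locked by the product.
LOCKED_PRESETS = {
    "negative_prompt": "blurry, distorted, low quality",
    "steps": 30,
    "sampler_name": "DPM++ 2M Karras",
    "seed": -1,          # random seed for each generation
    "width": 1024,
    "height": 768,
}

# Per-model defaults for the settings that vary with the checkpoint.
MODEL_DEFAULTS = {
    "automotive_v1": {"cfg_scale": 7.0, "denoising_strength": 0.55, "mask_blur": 8},
}

def build_img2img_payload(user_prompt, model, init_image_b64, overrides=None):
    """Merge locked presets, model defaults, and the few user-facing knobs."""
    payload = dict(LOCKED_PRESETS)
    payload.update(MODEL_DEFAULTS[model])
    payload.update(overrides or {})          # user-adjustable sliders
    payload["prompt"] = user_prompt          # partly customizable in the UI
    payload["init_images"] = [init_image_b64]
    payload["override_settings"] = {"sd_model_checkpoint": model}
    return payload

payload = build_img2img_payload(
    "showroom car on a coastal road at golden hour",
    model="automotive_v1",
    init_image_b64="<base64 image>",
    overrides={"denoising_strength": 0.4},
)
```

In production, a payload like this would be POSTed to a running Automatic1111 instance (e.g. `requests.post(f"{host}/sdapi/v1/img2img", json=payload)`), with the response's base64 images decoded for the user.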
Create good lighting via color extraction
The color extraction step occurs outside of Stable Diffusion. It ensures that the background lighting matches the uploaded image.

Here’s a hypothetical example:
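One minimal way to sketch the idea: average the uploaded photo's pixels to get a base color, then feed that color into the background prompt so the generated lighting stays consistent with the subject. This is an illustrative assumption about the technique, not the actual pipeline; a real implementation would read pixels with an imaging library, while here the image is just a list of (R, G, B) tuples.

```python
# Hypothetical sketch of the color-extraction step. The "image" is a
# list of (R, G, B) tuples; a real pipeline would read actual pixels.

def dominant_color(pixels):
    """Average the pixels to approximate the image's overall tone."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

def to_hex(rgb):
    """Format an RGB tuple as a #rrggbb hex string for prompting."""
    return "#{:02x}{:02x}{:02x}".format(*rgb)

# A warm, sunset-toned car photo reduced to three sample pixels.
pixels = [(220, 120, 60), (200, 100, 50), (240, 140, 70)]
base = dominant_color(pixels)            # -> (220, 120, 60)
prompt_hint = f"background lit with {to_hex(base)} ambient tones"
```

The extracted color then seeds the background generation, so a sunset-lit car gets a warm background rather than a mismatched cool one.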
Final Designs
UI Designs
A selection of final UI screens from CreatorLoop. You can find the Figma file below for more details.
Motion Design
All motion UIs were fully designed and animated by me to bring the product experience to life.
Impact & Results
  • Hit $100K in ARR by January 2024
  • Generated more than 50K photos for brands
  • Got acquired by Spearhead before the Node App deal closed
Testimonials & brands we worked with

"I'm impressed by the pictures. They are very neat! I'm excited to explore the full capabilities of CreatorLoop for other use-cases."

Marketing Manager at Unilock

"CreatorLoop helps us make ideas happen faster and cheaper. It’s a great generative AI start-up that has an interesting value prop for efficiencies in image generation."

VP Strategy at McCann Worldgroup

"Creatorloop's technology is a wonderful match for us. We want to integrate their API into our ecosystem to generate images for the dealerships we work with."

CEO at Spearhead

Takeaways & Future
GPU Learnings
Working on Creatorloop highlighted the importance of considering compute when designing for generative AI applications. By accounting for the cost of GPUs and potential wait times, designers can create more cost-effective experiences and thoughtfully design waiting states for users.
What I’d do differently
1. Designing for Long Waits
  • I wish I had been more intentional in designing wait states. I could have used them to convey product information or delight customers in various ways.
2. Switching from Automatic1111 (WebUI) to ComfyUI sooner
  • We were proud of the workflows we built, but in hindsight, switching away from Automatic1111 sooner would’ve saved time and enabled faster iteration.
  • ComfyUI wasn’t available when we started, but it would’ve been ideal for dealership-specific styles. It would’ve made it easier to experiment with LoRA fine-tuning and scale model customization efficiently.
  • We developed an Img2Vid workflow, but couldn’t launch it—generation was too slow and critical Stable Diffusion settings weren’t supported in Automatic1111.