Built and shipped a Figma plugin that audits design files against Model Context Protocol (MCP) best practices for AI code generation. It identifies layer naming issues, auto-layout gaps, and structural inconsistencies, helping designers optimize handoffs to tools like Claude Code and Cursor.
Key Achievements:
Identified an unmet need in the AI-assisted design workflow and shipped a validated MVP in two weeks, demonstrating rapid concept-to-launch execution.
Achieved a 71% conversion rate across 196 users and 210 views, with 27–30% month-over-month growth.
Demonstrated early product-market fit through sustained post-launch engagement, averaging 2.5 new users per day.
Iterated directly with Claude Code and the Figma MCP server, ensuring the plugin's recommendations align with real AI workflows.
Design Challenge
When I started using the Figma MCP server with Claude Code, I kept running into the same issue: my designs would generate messy output, layouts that didn't match what I'd designed, and code that felt broken. I wanted a way to check my files before handing them off to AI, something that would flag issues like generic layer names and inconsistent auto layout, and give me pointers on how to fix them. I couldn't find anything that did this, so I set out to build it myself.
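To make the "generic layer names" problem concrete, a check like this can flag Figma's auto-generated defaults. This is an illustrative sketch, not the plugin's actual code; the pattern list and function name are hypothetical:

```typescript
// Hypothetical detector for Figma's auto-generated layer names
// ("Frame 123", "Rectangle 5", etc.). The real plugin's rules may differ.
const DEFAULT_NAME =
  /^(Frame|Group|Rectangle|Ellipse|Vector|Component|Instance)( \d+)?$/;

function isGenericName(layerName: string): boolean {
  return DEFAULT_NAME.test(layerName.trim());
}

console.log(isGenericName("Frame 123"));      // true: flag for renaming
console.log(isGenericName("Card / Pricing")); // false: descriptive name
```

Descriptive names matter here because AI code generators use layer names as hints for component and variable naming; a file full of "Frame 123" gives the model nothing to work with.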
Solution
Design a plugin around three easily scannable sections that update automatically whenever the user makes a change.
Apply MCP best practices, in combination with Claude Code and Markdown context files, to ensure the best possible output.
Iterate on the initial design to refine the overall user experience.
Technical Approach
Using the Figma MCP server, I iterated on the plugin with Claude Code over the course of a month. While the MCP server relayed the majority of the design seamlessly, the icons and a few minor code edits required manual attention to ensure a true 1:1 build.
Because I was running the plugin on its own design file during development, I could dogfood its recommendations throughout and confirm the best practices it enforces were actually in use.
Outcome
I designed and built MCP Ready over the course of two weeks: I designed the interface in Figma, built it with Claude Code via the MCP server, and iterated in Figma's plugin development environment until it worked and looked exactly as I'd pictured. The plugin analyzes a selected frame and returns a readiness score (Ready, Almost, Rework) along with specific suggestions for improvement.
It checks five areas: component structure, naming conventions, auto layout usage, fill properties, and nesting depth. Each check ties back to something that actually affects code generation quality. I later added token cost estimation, so designers can see how complex their selection is before sending it to their AI code editor; the idea came from seeing the token estimate in Figma's Dev Mode, and surfacing it in the plugin removes the need to switch back and forth between modes.
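The mapping from individual checks to an overall readiness tier could be sketched as follows. The check names match the five areas above, but the `scoreTier` function and its thresholds are hypothetical, not MCP Ready's actual scoring logic:

```typescript
// Illustrative sketch: aggregate per-area audit checks into a readiness tier.
// Thresholds are assumptions, not the plugin's real values.

type CheckResult = {
  name: string;        // e.g. "naming-conventions", "auto-layout"
  passed: boolean;
  suggestion?: string; // shown to the designer when the check fails
};

type Tier = "Ready" | "Almost" | "Rework";

// Map the fraction of passing checks to a tier.
function scoreTier(results: CheckResult[]): Tier {
  const passed = results.filter((r) => r.passed).length;
  const ratio = passed / results.length;
  if (ratio >= 0.9) return "Ready";
  if (ratio >= 0.6) return "Almost";
  return "Rework";
}

const results: CheckResult[] = [
  { name: "component-structure", passed: true },
  { name: "naming-conventions", passed: false,
    suggestion: "Rename 'Frame 12' to something descriptive" },
  { name: "auto-layout", passed: true },
  { name: "fill-properties", passed: true },
  { name: "nesting-depth", passed: true },
];

console.log(scoreTier(results)); // 4 of 5 checks pass (0.8) → "Almost"
```

Returning a coarse tier plus per-check suggestions, rather than a bare number, is what lets the panel tell designers not just how ready a frame is but what to fix first.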
Whenever you make a selection, the plugin updates automatically, removing the need for a refresh.
Since launching on October 27, 2025, MCP Ready has been used by 162 users, with 246 views and 19 saves on the Figma Community. That's a conversion rate of roughly 66%, showing there's a niche group of designers and builders looking for a tool like this.
Impact
MCP Ready proved product-market fit in the emerging design-to-AI tooling space: 162 users, a roughly 66% conversion rate, and sustained organic growth of about 2.5 new users per day with zero marketing spend. The plugin established a reusable framework for future features and validated my ability to ship end-to-end products using AI-assisted development.
