Meet Figma AI: Empowering designers with intelligent tools
We’re introducing a suite of AI-powered features to help you push past creative blocks and bring your best ideas to life.
Sign up for the beta
Figma AI and UI3 are currently in limited beta and will be rolling out gradually. You can join the beta directly from Figma: Navigate to the bottom of the screen, click on the "?" and select Join UI3 + AI waitlist. Check out our Help Center for more info.
Since we first brought AI features into FigJam, we’ve been thinking about how to harness the power of AI in Figma. As with everything we do at Figma, our main goal is to give you the tools you need to do your best work—to look beyond the hype and find real solutions to real user problems. Through that lens, we’re excited to introduce Figma AI, a collection of features designed to help you work more efficiently and creatively. Whether you’re searching for inspiration, exploring multiple directions, or looking to automate tedious tasks, we’re building Figma AI to unblock you at any stage.
Our AI features will be free for all users during the beta period, which runs through 2024. As we learn how these tools are used and their underlying costs for Figma, we may need to introduce usage limits for the beta. When Figma AI becomes generally available, we’ll provide clear guidance on pricing. Our goal is to help you work more efficiently while also improving these features in a scalable way.
Find exactly what you need with enhanced search
We know that product designers aren’t always starting from scratch. Many Figma users start from a production screenshot or build on components their team has already created. But locating a specific design or component can feel like searching for a needle in a haystack—especially when you’re working across a large organization with a complex design system. We’re introducing two new ways to help you find the jumping-off point you’re looking for: Visual Search and an AI-enhanced Asset Search.
Visual Search lets you find and reuse designs by uploading an image, selecting an area on your canvas, or entering a text query. Figma will instantly surface visually similar designs from across the team files you have access to, and you can insert the most relevant frames directly into your working file.
In the future, you’ll also be able to search files and assets from the Figma Community directly within the editor. Results will include proper attribution to ensure credit goes to the original creator, and will let you access the source file or explore more of the creator’s work.
We’ve also significantly upgraded the existing Asset Search in the Assets panel. Figma now uses AI to understand the semantic meaning and context behind your search queries, so it can return the most relevant components and assets even if your search terms don’t exactly match their names. For example, searching for “primary button” should surface relevant button components, even if they’re named something like “btn_large” in your design system. By going beyond simple keyword matching and learning how design elements are typically used, we’re making it easier to find and use components from your design system in a way that feels intuitive and natural.
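To make the keyword-vs-semantic distinction concrete, here is a toy sketch of embedding-based ranking. This is not Figma’s implementation: the vocabulary, the three-dimensional vectors, and all the values are made up for illustration, where a real system would use a learned text-embedding model. The idea is simply that “primary button” and “btn_large” can land near each other in vector space even though they share no keywords.

```python
from math import sqrt

# Hypothetical term embeddings, hand-crafted for illustration only.
# Note that "btn" sits close to "button" in this toy space.
EMBEDDINGS = {
    "primary": [0.9, 0.1, 0.0],
    "button":  [0.1, 0.9, 0.0],
    "btn":     [0.1, 0.85, 0.1],
    "large":   [0.0, 0.2, 0.9],
    "icon":    [0.5, 0.0, 0.5],
}

def embed(text):
    """Average the vectors of known terms in a name or query."""
    terms = [t for t in text.lower().replace("_", " ").split() if t in EMBEDDINGS]
    if not terms:
        return [0.0, 0.0, 0.0]
    dims = zip(*(EMBEDDINGS[t] for t in terms))
    return [sum(d) / len(terms) for d in dims]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def search(query, asset_names):
    """Rank asset names by semantic similarity to the query."""
    q = embed(query)
    return sorted(asset_names, key=lambda n: cosine(q, embed(n)), reverse=True)

assets = ["btn_large", "icon_small", "nav_bar"]
# Exact keyword matching finds nothing for "primary button",
# but semantic ranking still surfaces "btn_large" first.
print(search("primary button", assets)[0])  # → btn_large
```

Swapping the hand-made vectors for real model embeddings is what lets this kind of search scale to an entire design system without any manual synonym lists.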
“Search is the perfect example of how AI can pragmatically solve real pain points designers face every day,” shares Marco Cornacchia, a product designer on the feature. “You spend all this time just looking for the right stuff—digging through files, bugging teammates for links, trying to find that one component. With Visual Search, you can instantly retrieve what you need just by describing it in plain language or uploading a screenshot.”
Work more efficiently and stay in the flow
“These features are great at removing some of the more tedious parts of the job, enabling designers to focus on the more important jobs like discovery, problem solving, ideating solutions,” says Lee Munroe, Head of Design at OneSignal.
We’re also introducing a set of tools to streamline common design tasks and help you work more efficiently, from image editing and generation to interactive prototyping and, yes, even layer naming.
Translate, shorten, or rewrite text in a click
AI-powered text tools make it easier to iterate on copy and find the right words. “The new rewrite text feature allows me to automate busywork and explore high fidelity ideas faster,” shares Gavin Nelson, a designer at Linear.
Bring designs to life with realistic copy and images
“For me, the tedium of entering stories that not only reflect my overall narrative, but are based on real use cases has been monumental in aligning my team towards our goals,” says Guy Meyer, Senior Staff Designer at ServiceNow. “Realistic data is key to selling stakeholders on the vision you’re driving towards.”
Generate realistic copy and images to bring your designs to life. We know that lorem ipsum and FPO placeholders are a staple for many designers, but there’s no substitute for using realistic copy and images in your mockups. That’s why we’ve introduced AI-powered content generation tools to help you quickly populate your designs with relevant, realistic content. By incorporating text and visuals that look and feel like the real thing, you can create more engaging, persuasive mockups that effectively communicate your design vision.
Additionally, you can now remove image backgrounds without leaving the canvas, allowing you to instantly isolate subjects and create striking visuals without switching tools.
Move from design to reality faster with quick-click prototyping
By clicking Make Prototype, you can rapidly turn static mocks into interactive prototypes, making it simpler to bring ideas to life and get stakeholder buy-in. Preview prototypes directly on the canvas to streamline iteration and perfect your designs more efficiently.
Stay organized with automatic layer renaming
Lee Munroe from OneSignal shares, “Figma’s AI features have helped us save time on some of the more tedious design tasks, like naming layers or coming up with dummy text, enabling our team to focus on more valuable impactful design.”
Rename Layers is a seemingly small feature that can save designers hours of monotonous work over the course of a project—helping keep your files organized and developer-ready.
Generate designs from text prompts
Ka Temple, Senior Product Designer at ServiceNow, was an early tester of Make Designs. “The power to generate first drafts and rewrite text made starting a new project less daunting,” says Ka. “It got my creativity going which sped up my workflow and freed my mind to dream up more unique ideas faster.”
We often talk about the blank canvas problem: you’re faced with a new Figma file and don’t know where to start. Make Designs in the Actions panel will generate UI layouts and component options from your text prompts. Just describe what you need, and the feature will provide you with a first draft. By helping you get ideas down quickly, this feature enables you to explore various design directions and arrive at a solution faster.
But this is just the beginning. Over time, this feature will leverage design systems, such as Google’s Material 3 kit, letting you generate UI tailored to your unique needs. Eventually, you’ll be able to use your organization’s own design system assets and patterns to generate on-brand UI. We see this becoming a go-to tool for kickstarting projects and maintaining consistency across products.
A new creative starting line
We’re excited about these new AI features. Some we know you’ll love right away, because they’ll immediately make your workflow better, like naming layers or content fill. Others, like UI and prototype generation, paint a picture of where we want to go—giving users a new creative starting line by making it easier to get to a workable first draft and bring designs to life. Our goal is to solve real pain points and unblock your creativity with tools dialed into your workflow and enhanced by AI.
Our approach to training
All of the generative features we’re launching today are powered by third-party, out-of-the-box AI models and were not trained on private Figma files or customer data. We fine-tuned Visual and Asset Search with images of user interfaces from public, free Community files.
While we’re proud of the AI features we’ve built so far, we see ways to accelerate your work further by developing new models that work more efficiently and effectively with Figma-specific concepts and tools. To make these improvements, we need to train models that better understand design concepts and patterns, as well as Figma’s internal formats and structure, by training on Figma content.
Two important highlights: First, admins control whether their team’s content is used for training. Second, participation in AI content training is not required to use Figma or Figma’s AI features. Learn more about our approach to training.
To make these features more useful, the models we use need to be tuned to the specific ways that designers and product teams work in Figma. We’re committed to building this in a principled way that protects your privacy.
Data privacy and protection
We know how important your data is to you, your company, and your clients. Our model development process is designed to protect your privacy and confidential information.
For all customer data, we:
- Encrypt all data at rest and in transit
- Use security measures designed to protect against unauthorized access to customer data
- Enforce tailored permissions and user access controls around who can view and access your data
We take additional steps to train our models so that they learn general design patterns and Figma-specific concepts and tools—not your content, concepts, and ideas. For example, we de-identify content and redact sensitive information in both text and images.
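As a rough illustration of what pattern-based redaction of text content can look like, here is a minimal sketch. The patterns, labels, and sample text are all hypothetical; this is not Figma’s actual pipeline, which would need far more robust detection (and image redaction as well).

```python
import re

# Illustrative redaction pass: replace common identifiers in text
# layers with placeholder labels before any downstream use.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text):
    """Substitute each detected identifier with its category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane@example.com or call +1 415 555 0100"
print(redact(sample))  # → Contact [EMAIL] or call [PHONE]
```

Real de-identification systems typically combine rules like these with learned entity recognizers, since regexes alone miss names, addresses, and context-dependent identifiers.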
AI model training
We are introducing a team-level setting where admins can control customer content sharing with Figma for AI training. Customer content includes file content created in or uploaded to Figma by a user, such as layer names and properties, text and images, comments, and annotations. Sharing your customer content with Figma for AI training is optional and your team’s setting preference will go into effect on August 15, 2024. If an admin turns off content training after that, new content and edits will not be used to train AI models.
Starting today, admins on all plans can set this training preference directly in settings.
- Starter and Professional plans are opted in by default, but can opt out.
- Organization and Enterprise plans are opted out by default.
Our customer agreements with Organization and Enterprise are typically more complex and include specific requirements and restrictions, which is why we’ve chosen a different default setting for those plans.
No content training will occur until August 15, so you have time to decide. Any content generated by Figma AI is considered customer content, and you retain your rights to outputs generated when you use Figma AI.
Usage data
Usage data is distinct from customer content and relates to how Figma is accessed and used. Examples of usage data include technical logs, metadata, telemetry data, and information about how your organization’s content is used (like how many times it is accessed), but usage data does not include your content itself. This data is used in an aggregated, de-identified way to help protect your privacy.
Community files data
Free files in the Figma Community are available under licenses that allow transformation but require attribution. What constitutes attribution in the field of AI is a widely debated topic. Until we decide and transparently communicate our approach, generative models that output design will not be trained on Community files. So far, we’ve used public, free Community files only to improve search use cases, like the new semantic and visual search improvements. We will not be doing any training on paid Community files.
Looking ahead
Our goal is to build AI in service of designers and product teams, while being responsible and clear in our approach. As we combine the power of AI with the wisdom of our creative community, we see a future where AI becomes a true creative partner.
Learn more about our AI feature availability and review how to opt in or out of training.
Editor’s Note: Our new AI terms and policies take effect on August 15, 2024. Please visit our updated Terms of Service and Privacy Policy pages for more information. Shortly after publishing this article, we learned that an issue with Make Designs’ underlying design system resulted in mocks that resembled existing apps. We temporarily rolled back the feature until we could fix the issue. We have since re-enabled the feature along with some key updates and a new name, First Draft. You can read about those changes and feature availability here. For more feature-level information and updates to our training model, please visit our approach page.
Kris Rasmussen is the Chief Technology Officer at Figma, where he leads the engineering, security, and data science teams. Prior to joining Figma in 2017, Kris served as engineering lead and technical advisor at Asana, where he co-authored many aspects of the framework and infrastructure that power the company's real-time collaborative features. Before Asana, Kris co-founded RivalSoft Inc., maker of a web-based application that gives companies an internal hub for market information, and served as Chief Architect at Aptana.