Demo of final result
Clone the repository below.
If at any time you would like to review your work, check out the finish branch for the final working version.
This repository is our starting point. Inside you will find two directories. One directory is for our Figma plugin and the other is a simple Node server. Don't worry if you don't have any experience with Node or backend development, we will cover everything you need to know.
Let's start by installing the dependencies in each of our directories. I recommend opening two separate terminal windows in either your code editor or CLI.
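Assuming the repository uses npm (check for a package-lock.json; swap in yarn or pnpm if the repo uses those), the install step looks like this:

```shell
# Terminal 1: install the Figma plugin's dependencies
cd figma-plugin
npm install

# Terminal 2: install the server's dependencies
cd server
npm install
```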
Next, let's start our server. From inside the server directory, run the following.
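The exact script name depends on the repository's package.json; a "dev" script is a common convention for a TypeScript server:

```shell
# Inside the server directory -- check package.json "scripts" for the
# actual name if this doesn't match:
npm run dev
```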
If everything went as planned, you should see a message in your terminal that says Server running on http://localhost:3000
Next, let's start our Figma plugin. From inside the figma-plugin directory, run the following.
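Again, the script name is repository-specific; Figma plugin templates typically ship a watch or build script that recompiles code.ts as you edit:

```shell
# Inside the figma-plugin directory -- check package.json "scripts" for
# the actual name:
npm run watch
```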
Lastly, we need to import our plugin into Figma. Open the Figma desktop app and navigate to Plugins > Development > Import plugin from manifest. Select the manifest.json file inside the figma-plugin directory. If everything went smoothly, you should see a hello world! message in the console. This message is being returned from our server in server/index.ts
Next, let's focus on our plugin. We need a way for users to describe the types of colors they wish to generate. This description will be sent to OpenAI and we will receive a response in return. For simplicity, we're going to use Figma parameters to handle the description. Parameters are a great method for capturing user input without having to create a custom UI. Open the manifest.json file inside the figma-plugin directory and add the parameter seen in the example below.
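A sketch of the parameters entry to add to manifest.json. The key colorsDescription matches what our plugin code reads later; the name and description strings are just illustrative:

```json
{
  "parameters": [
    {
      "name": "Colors description",
      "key": "colorsDescription",
      "description": "Describe the colors you want to generate"
    }
  ]
}
```

Depending on your setup, you may also want to set "parameterOnly": true in the manifest so the plugin always launches through the parameter input.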
You can learn more about Figma parameters here.
Next, let's add a function to our plugin to handle our user's description. Open code.ts and add the following.
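A sketch of what this handler might look like. It assumes fetch is available in the plugin sandbox (it is in recent Figma versions) and that the server route lives at the root path on port 3000, matching the server we started earlier:

```typescript
// figma-plugin/code.ts -- a sketch; endpoint path and port assume the
// tutorial server from earlier.
figma.on('run', async ({ parameters }: RunEvent) => {
  // The key must match the "key" declared in manifest.json.
  const description = parameters?.colorsDescription;

  // Send the user's description to our local server.
  const response = await fetch('http://localhost:3000/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ colorsDescription: description }),
  });

  console.log(await response.json());
  figma.closePlugin();
});
```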
This function listens for the run event and captures the colorsDescription parameter. We then send a POST request to our server with the user's input.
We need to make a small change to our server before our code will work. Open server/index.ts and make the following changes.
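A sketch of the updated server. Your line numbers may differ slightly from the ones referenced below, since the original file isn't reproduced here:

```typescript
// server/index.ts -- echo server sketch
import express from 'express';

const app = express();
const port = 3000;

app.use(express.json()); // parse JSON request bodies

app.post('/', (req, res) => {
  res.json(req.body); // echo the request body back for now
});

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
```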
On line 7, we added express.json() middleware to handle requests with the content type of JSON. On line 9, we updated our route handler from GET to POST. And finally, on line 10, we return the request body as a response. Go ahead and run your plugin now. If everything is working correctly, you should see your user's input logged to the console. We now have a roundtrip working between our Figma plugin and our server. Wahoo!
What's with all this server nonsense? In the next section, we will be sending and receiving data using the OpenAI SDK. For this, we will be required to create an API key. In order to keep our application secure, we will store our API key on our server. If you download any products that ask you for your OpenAI API key, do not use them. They are not secure.
Now that we can capture our user's description and we have an endpoint on our server to handle it, we can begin to focus on the fun part. If you haven't already, create an account with OpenAI. Next, we need to create an API key so we can access the OpenAI API. Head to the API Keys page to create a new key. Once you have your key, duplicate the .env.example file in the server directory and rename it to .env. Add your API key to the .env file.
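The .env file ends up looking something like this. OPENAI_API_KEY is the variable name the OpenAI SDK reads by default, but match whatever name .env.example in the repo actually uses:

```
OPENAI_API_KEY=your-api-key-here
```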
If you haven't already, I highly recommend reading OpenAI's documentation, especially the prompt engineering documentation.
At a high level, we will use prompt engineering to influence our language model and generate our color values. To do this, we need to think about what language our users might use in order to describe the colors they wish to generate. Furthermore, we need to think about how we want our model to respond. Our responses need to be consistent and it would be helpful if they were easy to parse.
Fortunately, OpenAI has a great playground feature that allows us to test our prompts before implementing them in our code. Let's give it a try. Open the OpenAI playground.
The "system" textbox on the left is where we will provide instructions for our LLM. We know our users will be providing us with a description and based on that description, we want the LLM to return a set of colors. So, let's write a prompt that will do just that. In the "system" textbox, input the following prompt.
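The original prompt isn't reproduced verbatim here, but a first attempt along these lines does the job (adjust the wording to taste):

```
You are a color palette generator. The user will describe the types of
colors they want. Respond with a list of color names and their
corresponding color values.
```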
Now, try entering a value for the user and click submit. You should see a response from the assistant that includes color names in the first row and color values written as hexadecimals in the following row. This is a good start but surely we can make some improvements. First of all, this format is pretty hard to parse. Wouldn't it be great if we could get our response in a familiar format, such as JSON?
Before we make any changes to our prompt, we should also take a moment to familiarize ourselves with the Figma API. We want to return our values in a format that will require minimal transformation. What's the point in massaging our data if GPT can do this for us? In this tutorial, I'll be adding rectangles to my canvas and filling them with the returned color values. Rectangle nodes contain a fills property that accepts an array of type Paint. A paint can be a solid, gradient, image, or video. For this example, I'm going to keep things simple and update my rectangles with a solid paint color. If you wanted to update them with images, you could look into using the images endpoint from OpenAI. Pretty cool! Solid paints accept a color property that is either of type RGB or RGBA, but thankfully Figma has a built-in utility function that will transform our hex values into these formats. With all of that in mind, let's take another stab at updating our prompt.
Let's ask our model to return our values in a JSON format, and let's also be explicit about our hexadecimal format. I don't know how it chose hexadecimal by default, but we don't want that to change in the future. Secondly, let's ask our model to return a set number of colors if our user doesn't specify one. I think 5 is a reasonable amount. Here is my new prompt.
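A prompt in this spirit satisfies all three requirements; treat it as a sketch, and note that the "name" and "value" key names are a choice you'll need to mirror in your plugin code:

```
You are a color palette generator. The user will describe the types of
colors they want. Respond with a JSON array of objects. Each object
must have a "name" key containing the color's name and a "value" key
containing the color's value as a six-digit hexadecimal string, for
example "#1D4ED8". If the user does not specify how many colors they
want, return 5 colors.
```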
That's looking much better! If you didn't receive a JSON array with objects, you can double check your prompt with mine here.
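If you want to sanity-check that shape in code before wiring everything up, a small validation helper is handy. This is a sketch; the name/value keys are assumptions that must match whatever your prompt asks the model to return:

```typescript
// Validate the model's JSON reply before using it. The { name, value }
// shape is an assumption -- mirror your own prompt's key names.
interface PaletteColor {
  name: string;
  value: string; // six-digit hex, e.g. "#1D4ED8"
}

function parseColors(raw: string): PaletteColor[] {
  const data = JSON.parse(raw);
  if (!Array.isArray(data)) {
    throw new Error('Expected a JSON array of colors');
  }
  return data.map((item: any) => {
    if (
      typeof item.name !== 'string' ||
      typeof item.value !== 'string' ||
      !/^#[0-9a-fA-F]{6}$/.test(item.value)
    ) {
      throw new Error(`Unexpected color entry: ${JSON.stringify(item)}`);
    }
    return { name: item.name, value: item.value };
  });
}
```

Failing fast here beats letting a malformed reply produce invisible rectangles later.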
Before we begin to handle our responses, there's one piece of advice I'd like to elaborate on. While I did encourage us to return our data structure in a format that is easy to handle, this is not necessarily a great strategy as your application begins to scale. It will become expensive very quickly. This is because of the pricing model. As a customer, you pay for tokens. In short, the longer the response, the more expensive it is. In our example, a large part of our response is redundant. We don't need OpenAI to return our key names to us each time - we only care about the values. This is where using something like a CSV format instead of a JSON format can save you money in the long run. We won't cover this in the tutorial, but it is worth mentioning.
Now that we have a prompt that's returning consistent values, let's use the OpenAI SDK to fetch our colors and add them to our canvas. Update your server/index.ts file with the following code.
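A sketch of the updated server. The model name is a placeholder, the system prompt is abbreviated, and the key is read from the .env file we created earlier:

```typescript
// server/index.ts -- OpenAI-backed sketch
import express from 'express';
import OpenAI from 'openai';
import 'dotenv/config'; // loads OPENAI_API_KEY from .env

const app = express();
const port = 3000;
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.use(express.json());

app.post('/', async (req, res) => {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder -- use whichever model you prefer
    messages: [
      {
        role: 'system',
        content:
          'You are a color palette generator. Respond with a JSON array ' +
          'of objects, each with a "name" key and a "value" key holding ' +
          'a six-digit hex string. Return 5 colors if the user does not ' +
          'specify a number.',
      },
      { role: 'user', content: req.body.colorsDescription },
    ],
  });

  // Forward the model's raw reply (a JSON string) to the plugin.
  res.json(completion.choices[0].message.content);
});

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
```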
Update: with some of the newer models, you can specify a response_format. This means you can explicitly set the response to be JSON instead of instructing the model to return JSON in the prompt.
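For reference, the option looks roughly like this; it is a fragment to merge into the chat.completions.create call, with a placeholder model name:

```typescript
const completion = await openai.chat.completions.create({
  model: 'gpt-4o-mini', // placeholder -- must be a JSON-mode-capable model
  response_format: { type: 'json_object' }, // request JSON explicitly
  messages: [/* ...your system and user messages... */],
});
```

Note that JSON mode still requires the word "JSON" to appear somewhere in your messages, and it returns a JSON object rather than a bare array, so you may need to adjust your prompt and parsing accordingly.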
Now let's update our code inside our Figma plugin to handle our newly created colors. Replace your figma-plugin/code.ts file with the code below.
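A sketch of what code.ts can look like. It assumes the { name, value } response shape from our prompt, a server route at the root path on port 3000, and fetch being available in the plugin sandbox:

```typescript
// figma-plugin/code.ts -- sketch
interface PaletteColor {
  name: string;
  value: string; // six-digit hex, e.g. "#1D4ED8"
}

async function fetchColors(description: string): Promise<PaletteColor[]> {
  const response = await fetch('http://localhost:3000/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ colorsDescription: description }),
  });
  // The server forwards the model's reply as a JSON string.
  return JSON.parse(await response.json());
}

function createColorPalette(colors: PaletteColor[]) {
  colors.forEach((color, index) => {
    const rect = figma.createRectangle();
    rect.x = index * 120; // lay the swatches out in a row
    rect.resize(100, 100);
    // figma.util.rgb converts our hex string into Figma's RGB format.
    rect.fills = [{ type: 'SOLID', color: figma.util.rgb(color.value) }];
    rect.name = color.name;
  });
}

figma.on('run', async ({ parameters }: RunEvent) => {
  const colorArray = await fetchColors(parameters?.colorsDescription);
  createColorPalette(colorArray);
  figma.closePlugin();
});
```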
Let's review. The fetchColors function sends a POST request to our server, which then sends a request to OpenAI with our prompt. The response is stored in our colorArray variable and passed to our createColorPalette function, which loops over our array and adds a rectangle to our canvas for each object.
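Figma's utility does the hex conversion for us, but it's worth seeing what it boils down to: Figma's RGB channels are floats from 0 to 1 rather than integers from 0 to 255. A hand-rolled equivalent, just for illustration:

```typescript
// Convert "#RRGGBB" into Figma-style { r, g, b } floats in [0, 1].
// (figma.util.rgb handles this for you -- this is only illustrative.)
function hexToRgb(hex: string): { r: number; g: number; b: number } {
  const match = /^#?([0-9a-fA-F]{6})$/.exec(hex);
  if (!match) throw new Error(`Invalid hex color: ${hex}`);
  const int = parseInt(match[1], 16);
  return {
    r: ((int >> 16) & 0xff) / 255,
    g: ((int >> 8) & 0xff) / 255,
    b: (int & 0xff) / 255,
  };
}
```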
Congratulations, you've now created an AI-powered application! I hope your mind is running wild with ideas and I look forward to seeing what you create. If you have any questions or need help, feel free to reach out to me on Twitter or email me at lee@2fold.io