SpellPrints Applications

Building applications powered by generative AI

What is SpellPrints?

SpellPrints is a new marketplace website where creators can build and sell small applications powered by generative AI models such as GPT-3.5 and GPT-4.

Applications range from social media post generators to movie recommenders to fitness planners. Each is designed to receive one or more discrete fields of input, consisting of text or checkbox options.

Each application generally costs a few euros per run, with free test runs and run packages also offered.

My Applications

I created a number of SpellPrints applications to explore the possibilities of generative AI and learn the practice of AI prompt crafting.

The applications range from microcopy generators to palette suggesters to quick competitor analysis generators. I tried to create apps based on subjects that I understood (usually in the design and development fields), and which I believed could be useful or entertaining.

My Process

I used the following process to develop my applications:

  1. Brainstorm an idea

    Specifically, an idea that would be useful and feasible as an application. This means identifying a specific and valuable use case (and discussing it with potential users) and defining concrete inputs and outputs. I usually also do some light testing with ChatGPT at this stage to confirm the task is feasible.

  2. Draft the prompt

    I then go into ChatGPT and begin my first draft of the prompt:
[Image: ChatGPT window with a prompt written out.]

  3. Test and iterate

    Even well-defined prompts do not always produce the desired results on the first try, and there are often edge cases or other error cases that need to be accounted for. I test the prompt with a number of inputs, and edit the prompt as necessary.

    I also make note of the language model I am using (GPT-3.5, GPT-4, etc.).
[Image: ChatGPT window with a prompt, edited portions highlighted in red.]

  4. Create the app on SpellPrints

    Once I am satisfied with the behavior of the prompt on ChatGPT, I go to SpellPrints and create a new app there.

    After filling out some basic information like names and descriptions, I add the input fields I want for the app, and the prompt text. I make sure that I am using the same language model I was testing with, and that I am setting the role of the prompt (system/user/assistant) correctly.
[Image: SpellPrints app creation interface, showing input field entry.]
[Image: SpellPrints app creation interface, showing prompt entry.]
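
The role assignment mentioned above follows the standard chat-model message structure (system/user/assistant). A minimal sketch of how an app's prompt and input fields might map onto that structure; the function and field names are illustrative, not SpellPrints internals:

```python
# Sketch of the system/user/assistant message structure used by chat models.
# `build_messages`, the prompt text, and the field names are illustrative.
def build_messages(app_prompt: str, field_values: dict[str, str]) -> list[dict]:
    """Assemble the messages for one app run: the app's prompt goes in the
    system role, and the user's input fields are joined into a user message."""
    user_content = "\n".join(
        f"{name}: {value}" for name, value in field_values.items()
    )
    return [
        {"role": "system", "content": app_prompt},
        {"role": "user", "content": user_content},
    ]

messages = build_messages(
    "You are a microcopy assistant. Write a short button label "
    "for the described action.",
    {"action": "submit a support ticket", "tone": "friendly"},
)
```

Putting the app's instructions in the system role and the run-specific inputs in the user role keeps the two concerns separate, which is why matching the roles used during ChatGPT testing matters.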

  5. Test with the Test Workflow

    SpellPrints offers a test workflow, which allows the creator to test their app with example inputs. I usually run several tests through this workflow to ensure correct and consistent results, editing the entered prompt if necessary.

    This test workflow also creates an example of input/output that will be shown on the app page.
[Image: SpellPrints app creation interface, showing example input/output.]

  6. Publish

    Once all testing checks out, I set a price for using the app, and publish it for administrator approval.
[Image: A completed SpellPrints application, with image and pricing.]

Notes and Lessons

Creating these first several AI-powered applications taught me a number of valuable lessons about generative AI and prompt crafting.

  • AI is not perfect

    Generative AI is a powerful tool, but it is by no means free from error or inaccuracy.

    Sometimes these issues crop up in unintuitive and unpredictable ways. One of my app ideas that didn't pan out involved generating interesting historical facts for a given range of years or dates.

    I ended up abandoning it because, for whatever reason, I had significant difficulty getting GPT to consistently understand how BC/BCE dates worked.
[Image: ChatGPT interface, with errors in understanding BC/BCE dates highlighted in red.]

    Thorough prompt testing is necessary to catch and address potential issues from the AI end.

  • The more specific the prompt, the better

    For the types of applications I was creating, which worked on a run-to-run basis on specified inputs and created specified outputs, I found that long, descriptive, and thoroughly detailed prompts worked better in getting consistent results.

    In particular, specifying the formats of the inputs and outputs, adding examples, and proactively detailing what the AI should do with ill-formed or missing inputs all seemed to help in consistently getting useful responses.

    Ensuring that the AI was given a role to play (as some specific sort of AI assistant, for example), which could be specified in a system-role prompt, also seemed to help the language model understand the context and expected behavior.
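
A "specific" prompt assembled from the pieces described above might look like the following sketch, combining a role, explicit input/output formats, an example, and a fallback rule for missing input. The wording and the palette-app scenario are illustrative, not one of my actual prompts:

```python
# Sketch of a highly specific prompt: role, input/output formats, a
# worked example, and a fallback for empty input. All text is illustrative.
PROMPT_TEMPLATE = """You are a color palette assistant for designers.

Input format: a one-line description of a brand or mood.
Output format: exactly five hex color codes, one per line.

Example input: "calm seaside cafe"
Example output:
#2E86AB
#A7C4BC
#F5F1E3
#E8DAB2
#4F6D7A

If the input is empty or is not a description, respond only with:
"Please describe a brand or mood."

Input: {user_input}"""

prompt = PROMPT_TEMPLATE.format(user_input="energetic fitness startup")
```

Each element narrows the space of possible completions: the role sets context, the format lines constrain structure, the example anchors style, and the fallback rule handles the ill-formed-input case explicitly.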

  • Design and development skills are useful in prompt crafting

    There is still some debate as to what sort of a discipline prompt crafting or prompt engineering should be classified as, or if it can be accurately called any sort of engineering discipline at all.

    I do believe, however, that in the case of AI-powered applications like those on SpellPrints, which require specific inputs to generate specific outputs, design and development skills are valuable for enhancing usefulness and usability. In particular, experience in use case definition, testing and iteration, and error/edge case handling all helps in creating quality applications.

  • Some use cases, however, are best left to conversational interfaces with generative AI

    Another of the app ideas I had that didn't pan out was a regular expression writer, which would write accurate and useful regular expressions given requirements for matches.

    I quickly realized during initial testing, however, that the process of writing an accurate regular expression to match all the cases a user wants (and un-match all the cases they don't want) was a highly open-ended and iterative process. As such, simply going to ChatGPT and iterating through it with a back-and-forth conversational interface was likely more useful for this case than creating an app with discrete runs.

    I believe this will be an important consideration as generative AI grows more advanced: which types of use cases suit the traditional application format, and which suit the conversational format that LLMs can provide.
Grant Q. He
grhe@grantqh.com