omniAI provides an AI Text service that can be used for a wide variety of tasks.


AI Text supplies a simple yet powerful text-in, text-out interface to a number of AI models. To trigger text generation, you provide some text as a prompt. The AI generates text that attempts to continue your context or match your pattern. Suppose you provide the prompt "As Descartes said, I think, therefore" to the AI. For this prompt, omniAI returns " I am" with high probability.
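As a minimal sketch of that text-in, text-out shape, the following builds a hypothetical request body for a completion call. The field names (`prompt`, `max_tokens`) are assumptions for illustration, not the documented omniAI schema:

```python
# Hypothetical request payload for a text-in, text-out completion call.
# The field names ("prompt", "max_tokens") are assumptions; check the
# omniAI reference for the real request schema.

def build_completion_request(prompt: str, max_tokens: int = 16) -> dict:
    """Package a prompt into a text-generation request body."""
    return {"prompt": prompt, "max_tokens": max_tokens}

request = build_completion_request("As Descartes said, I think, therefore")
print(request)
```

The only inputs you control here are the prompt text and a token budget; everything else is the model's job.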


Another example:

Prompt: Write a tagline for an ice cream shop

Answer: We serve up smiles with every scoop!


The text results that you see can differ because omniAI produces fresh output for each interaction. You might get slightly different text each time you generate, even if your prompt stays the same. You can control this behavior with the Temperature setting.


The simple text-in, text-out interface means you can "program" the omniAI model by providing instructions or just a few examples of what you'd like it to do. The quality of the output generally depends on the complexity of the task and the quality of your prompt. A good rule of thumb is to think about how you would write a word problem for a pre-teenage student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
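"Programming" with a few examples (often called few-shot prompting) amounts to assembling the instructions, some worked examples, and the new input into a single prompt string. The format below is an assumption; any consistent pattern works:

```python
# Sketch of few-shot prompting: combine an instruction, a few worked
# examples, and the new input into one prompt. The "Text:/Answer:"
# layout is an assumed convention, not an omniAI requirement.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble instruction + examples + new input into one prompt."""
    lines = [instruction, ""]
    for text, answer in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Answer: {answer}")
    lines.append(f"Text: {query}")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as Positive or Negative.",
    [("I loved it", "Positive"), ("Terrible service", "Negative")],
    "The scoops were delightful",
)
print(prompt)
```

Ending the prompt with a trailing "Answer:" invites the model to complete the pattern rather than start a new one.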


Design prompts

omniAI models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you must be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt.


The models try to predict what you want from the prompt. If you enter the prompt "Give me a list of cat breeds," the model doesn't automatically assume you're asking for a list only. You might be starting a conversation where your first words are "Give me a list of cat breeds" followed by "and I'll tell you which ones I like." If the model always assumed you wanted only a list, it wouldn't be as good at content creation, classification, or other tasks.
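One way to remove that ambiguity is to spell out the desired output format in the prompt itself. A hypothetical helper, shown only to illustrate the pattern:

```python
# Hypothetical helper that appends an explicit format constraint to a
# prompt, so the model doesn't have to guess whether you want a bare list.

def with_format(prompt: str, format_hint: str) -> str:
    """Make the intended output format part of the prompt."""
    return f"{prompt}\n\nFormat: {format_hint}"

explicit = with_format(
    "Give me a list of cat breeds",
    "a numbered list of exactly five breeds, with no extra commentary",
)
print(explicit)
```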


Guidelines for creating robust prompts

There are three basic guidelines for creating useful prompts:


1. Show and tell.

Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, include these details in your prompt to show the model.
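The alphabetical-ordering case can be "shown" with one worked example, so the model sees both the instruction and the expected output shape. The `Items:/Sorted:` layout is an assumed convention for illustration:

```python
# "Show and tell": pair the instruction with a worked example so the
# model sees what to do and how the output should look.

prompt = "\n".join([
    "Sort the items in alphabetical order.",
    "",
    "Items: pear, apple, mango",
    "Sorted: apple, mango, pear",
    "",
    "Items: cherry, banana, kiwi",
    "Sorted:",
])
print(prompt)
```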


2. Provide quality data. 

If you're trying to build a classifier or get the model to follow a pattern, make sure you provide enough examples. Be sure to proofread your examples. The model is smart enough to resolve basic spelling mistakes and give you a meaningful response. However, the model might assume the mistakes are intentional, which can affect the response.


3. Check your settings. 

Probability settings, such as Temperature and Probability, control how deterministic the model is when generating a response. If you're asking for a response where there's only one right answer, specify lower values for these settings. If you're looking for a response that's not obvious, you might want to use higher values. The most common mistake with these settings is assuming they control "cleverness" or "creativity" in the model response.
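The contrast might look like the following pair of hypothetical request settings; the field name `temperature` and the value scale are assumptions for illustration:

```python
# Hypothetical request settings contrasting a near-deterministic call
# with a more varied one. The "temperature" field name and 0.0-1.0
# scale are assumptions, not the documented omniAI schema.

factual_answer = {
    "prompt": "What is the capital of France?",
    "temperature": 0.0,  # low: favor the most likely tokens every time
}

brainstorm = {
    "prompt": "Write a tagline for an ice cream shop",
    "temperature": 0.9,  # high: allow less likely, more varied tokens
}
```

A one-right-answer question gets the low value; open-ended generation gets the high one.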


Troubleshooting for prompt issues


If you're having trouble getting omniAI to perform as expected, review the following points for your implementation:


  • Is it clear what the intended generation should be?
  • Are there enough examples?
  • Did you check your examples for mistakes? (omniAI doesn't tell you directly.)
  • Are you using the Temperature and Probability settings correctly?