No time to walk through the tutorial? Test this prompt out here.
We're growing quickly at Mantium, and as we onboard new employees, we want to make our information as accessible as possible. Oftentimes, even when information is available in a knowledge base, it can be difficult for newcomers to find. To help with this problem, we've created a prompt that generates basic answers to questions about our company.
Most large language models are trained on public collections of text like Wikipedia, so it's very unlikely that the model will have any information about Mantium. We need to add information about Mantium to our answer generation system, and in the process we'll also guide the model to respond "I'm not sure" when it doesn't know the answer.
When you are ready to create your prompt, click AI Manager > Add New Prompt. Then, fill out the following:
- Name of Prompt: Mantium Answer Generation
- Description: OpenAI Completion
Tags and Intelets can be left blank.
For deploying your prompt publicly at the end of this tutorial, you can add a default security policy configured by Mantium. Click Add Security Policies under Security Policies, then drag Default Policies from All Policies to Selected Policies. Click Done to save; you will know the policy has been applied when its name is visible under Security Policies.
- Provider: OpenAI
- Endpoint: Completion
Since OpenAI's completion endpoint can be used for many tasks, you need to make clear what you want it to do. You can do this by including instructions, examples, or a combination of both in your prompt. In this case, we provide question-answer pair examples.
```
Q: What is human in the loop?
A: Human in the loop helps to train the models in instances where the model is confused, wrong, or offensive.
###
Q: What does Mantium do?
A: Mantium enables AI enthusiasts (personal and professional) to rapidly prototype and share large language models, solving one of the biggest barriers to AI adoption: deployment. And to keep your deployed AI behaving well in the real world, Mantium also includes security measures, logging, and human-in-the-loop.
###
Q: What year was Mantium founded?
A: I'm not sure
###
Q: Who can use Mantium?
A: Anyone! Mantium can be used for amusement and solving real business needs.
###
Q: Who is Mantium for?
A: Mantium offers both a UI and an API, and is currently offered for OpenAI, Cohere, GPT-J and more, and we are continuously adding other providers.
###
Q: Why use Mantium?
A: Mantium allows the community to be focused on their core product, while taking care of monitoring, logging, and human-in-the-loop.
###
Q: How do you create a user in Mantium?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/getting-started
###
Q: What is the difference between a prompt and an intelet in Mantium?
A: Prompts are specific configurations for Search, Completion, Answer, and Classification. Intelets are multiple prompts chained together.
###
Q: What is a security policy?
A: A set of security rules while the prompt is processing.
###
Q: What is an organization in Mantium?
A: I'm not sure
###
Q: How do you get started with creating a prompt?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/getting-started
###
Q: What are the ways users can execute prompts in Mantium?
A: I'm not sure
###
Q: How can I import a prompt into Mantium?
A: Currently the way to import a prompt is through the Import Prompt window. You should copy and paste your prompt in cURL text.
###
Q: What is a security rule?
A: A security rule is a specific check in the processing pipeline.
###
Q: What is NLP?
A: Natural language processing (NLP) is the ability of a computer program to interpret human language as it is spoken and written -- referred to as natural language. It is a component of artificial intelligence (AI).
###
Q: How do you share prompts in Mantium?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/create-and-share-your-first-mantium-ai-app
###
Q: How many employees are at Mantium?
A: I'm not sure
###
Q: 
```
Then, select an engine.
- Choose an Engine: Davinci
We selected the Davinci engine, which is the most powerful of OpenAI's available models. It provides great results for creative content generation. It is possible to select a lighter-weight engine as well, although results may vary.
Making the Prompt more User-Friendly
We ended the prompt with "Q: " so that the input to the model doesn't have to be prefixed by "Q:" each time. Note that there is a single space after "Q:"; including it helps the language model follow the pattern as closely as possible.
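As an illustration of why the trailing "Q: " matters (this is a sketch, not Mantium's internal code, and the variable names are ours), the user's raw question can simply be appended to the stored prompt before it is sent to the model:

```python
# Sketch: because the stored few-shot prompt ends with "Q: " (note the
# trailing space), the user's question is appended directly and the
# model naturally continues the pattern with "A: ...".
base_prompt = (
    "Q: What does Mantium do?\n"
    "A: Mantium enables AI enthusiasts to rapidly prototype and share "
    "large language models.\n"
    "###\n"
    "Q: "  # trailing space left for the user's question
)

def build_model_input(user_question: str) -> str:
    """Append the user's question to the few-shot prompt."""
    return base_prompt + user_question.strip()
```

Without that trailing space, the model would sometimes have to guess where the question begins, making the pattern harder to follow.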
Factual Prompt Design
We added "I'm not sure" answers because the API can generate responses that are fabricated. To reduce the chances of getting made-up answers, we show the model a way of saying "I'm not sure." We created these answers for questions the model shouldn't know; that is, questions whose answers won't be found in the model's training data.
- Response Length: 60
- Temperature: 0
- Top P: 1
- Frequency Penalty: 0
- Presence Penalty: 0
- Stop Sequences: ###
- Best Of: 1
- N: 1
- LogProbs: 1
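Outside the Mantium UI, these settings correspond to keyword arguments of OpenAI's completion endpoint. The sketch below only collects them in a dictionary (the parameter names assume the legacy pre-1.0 OpenAI Python client's `openai.Completion.create`; no API call is made here):

```python
# Sketch: the tutorial's settings expressed as keyword arguments for a
# completion call. Parameter names assume the legacy OpenAI Python
# client; adjust them if you use a newer client.
completion_params = {
    "engine": "davinci",
    "max_tokens": 60,        # "Response Length" in the Mantium UI
    "temperature": 0,        # deterministic output suits factual Q&A
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": ["###"],         # stop generating at the Q&A pair separator
    "best_of": 1,
    "n": 1,
    "logprobs": 1,
}
```

A real call would pass the few-shot prompt plus the user's question as `prompt=...` alongside these parameters.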
To test the prompt's performance, enter the following question:
How many employees work at Mantium?
Because the prompt was designed with this question in mind, the output should be "I'm not sure," although there is still a small possibility of receiving an inaccurate answer.
Let's take a moment to think about the model's expected behavior. Recall that GPT-3 was introduced in May 2020 and its pre-training data consists of Wikipedia, books, and web text, among other sources. The knowledge base is the information we provide the model via the prompt we create. Since Mantium was founded after GPT-3 was trained and released, the model should not be able to answer questions about Mantium unless we have injected the answer into our knowledge base.
For example, the question "Who is the CEO of Mantium?" falls into the "answer not in knowledge base" and "answer not in pre-training data" category in the table below, so the answer should be "I'm not sure."
|  | Answer in knowledge base | Answer not in knowledge base |
| --- | --- | --- |
| Answer in pre-training data | Answered from the knowledge base | May be answered from pre-training data |
| Answer not in pre-training data | Answered from the knowledge base | "I'm not sure" |
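The table's logic can be summarized as a small decision function. This is only an illustration of the expected behavior; real model output is probabilistic, and the function name is ours, not part of any API:

```python
def expected_answer_source(in_knowledge_base: bool, in_pretraining_data: bool) -> str:
    """Where should the model's answer come from, per the table above?"""
    if in_knowledge_base:
        # The prompt itself contains the answer, so the model copies it.
        return "answer from the prompt's knowledge base"
    if in_pretraining_data:
        # The model may recall the answer from its training data.
        return "answer from the model's pre-training data"
    # Neither source has the answer, so we want the fallback response.
    return "I'm not sure"

# "Who is the CEO of Mantium?" is in neither source:
print(expected_answer_source(False, False))  # I'm not sure
```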
Mantium enables sharing your prompt using the One-Click Deploy feature, located in each prompt's drawer view. From your list of prompts, click the Mantium Answer Generation prompt, then click One-Click Deploy.
Then, add the following configuration settings:
- Name: Mantium Answer Generation
- Author: Your Name
- ✅ Add Input Field
- ✅ Include Examples - here you can provide examples of questions about Mantium to help future users interact with your prompt.
- ✅ I have followed my provider's go live requirements
To test out this prompt, input a question about Mantium to see the results!
Answer generation can apply to many domains: business, science, customer service, and many others. The important thing to note is that the model follows a pattern based on the text it is provided; that is, factual and correct answers must be supplied for it to return factual and correct responses. Providing a pattern for responding "I don't know" or "I'm not sure" to questions outside the knowledge base is one step towards keeping the model from generating inaccurate or fabricated responses.
As you add more knowledge to the prompt, it might eventually get too big and computationally expensive - what are the next steps you could take?
- Use the OpenAI Answers endpoint, such as in this example.
- Fine-tune the model to specialize in your use case.