Employee Onboarding With OpenAI

Help onboard new employees by generating answers from a company-specific body of knowledge.

Use Case

We're growing very quickly at Mantium, and as we onboard new employees, we want to make our information as accessible as possible. Oftentimes, even when information is available in a knowledge base, it can be difficult for newcomers to find. To help with this problem, we've created a prompt that generates basic answers about our company.

Most large language models are trained on public collections of text like Wikipedia, so it's very unlikely that the model will have any information about Mantium. We need to add information about Mantium to our answer-generation system, and in the process we'll also guide the model to respond "I'm not sure" when it doesn't know the answer.

Prompt Creation

When you are ready to create your prompt, click AI Manager > Prompts > Add New Prompt.


Then, fill out the following:

  • Name of Prompt: Mantium Answer Generation
  • Description: OpenAI Completion

Tags and Intelets can be left blank.

To deploy your prompt publicly at the end of this tutorial, you can add a default security policy configured by Mantium. Click Add Security Policies under Security Policies, drag Default Policies from All Policies to Selected Policies, and click Done to save. You will know the policy has been applied when its name appears under Security Policies.

Provider Settings

  • Provider: OpenAI
  • Endpoint: Completion

Prompt Body

Since OpenAI's Completion endpoint can be used for many tasks, you need to make it clear what you want the model to do. You can do this by including instructions, examples, or both in your prompt. In this case, we provide question-answer pair examples.

Q: What is human in the loop?
A: Human in the loop helps to train the models in instances where the model is confused, wrong, or offensive.
Q: What does Mantium do?
A: Mantium enables the AI enthusiasts (personal and professional) to rapidly prototype and share large language models, solving one of the biggest barriers to AI adoption: deployment. And to keep your deployed AI behaving well in the real world, Mantium also includes security measures, logging, and human-in-the-loop.
Q: What year was Mantium founded?
A: I’m not sure
Q: Who can use Mantium?
A: Anyone! Mantium can be used for amusement and solving real business needs.
Q: Who is Mantium for?
A: Mantium offers both a UI and an API, and currently supports OpenAI, Cohere, GPT-J, and more; we are continuously adding other providers.
Q: Why use Mantium?
A: Mantium allows the community to be focused on their core product, while taking care of monitoring, logging, and human-in-the-loop.
Q: How do you create a user in Mantium?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/getting-started
Q: What is the difference between a prompt and an intelet in Mantium?
A: Prompts are specific configurations for Search, Completion, Answer, and Classification. Intelets are multiple prompts chained together.
Q: What is a security policy?
A: A set of security rules while the prompt is processing.
Q: What is an organization in Mantium?
A: I’m not sure
Q: How do you get started with creating a prompt?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/getting-started
Q: What are the ways users can execute prompts in Mantium?
A: I’m not sure
Q: How can I import a prompt into Mantium?
A: Currently, the way to import a prompt is through the Import Prompt window. You should copy and paste your prompt as cURL text.
Q: What is a security rule?
A: A security rule is a specific check in the processing pipeline.
Q: What is NLP?
A: Natural language processing (NLP) is the ability of a computer program to interpret human language as it is spoken and written -- referred to as natural language. It is a component of artificial intelligence (AI).
Q: How do you share prompts in Mantium?
A: Check out this great tutorial: https://developer.mantiumai.com/docs/create-and-share-your-first-mantium-ai-app
Q: How many employees are at Mantium?
A: I’m not sure
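The few-shot prompt above can also be assembled programmatically. A minimal sketch (the `build_prompt` helper and the trimmed example list are illustrative, not part of Mantium; the Q/A format mirrors the prompt body shown above):

```python
# A subset of the question-answer pairs from the prompt body above.
examples = [
    ("What does Mantium do?",
     "Mantium enables AI enthusiasts to rapidly prototype and share "
     "large language models."),
    ("What year was Mantium founded?",
     "I'm not sure"),
]

def build_prompt(pairs, question):
    """Render the Q/A examples, then the user's question after a final
    'Q: ', ending on a bare 'A:' so the model continues with an answer."""
    lines = []
    for q, a in pairs:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A:")
    return "\n".join(lines)

print(build_prompt(examples, "How many employees work at Mantium?"))
```

Ending on a bare `A:` means the model's completion is the answer itself, which is the same pattern the Mantium prompt editor produces when a user submits a question.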

Then, select an engine.

  • Choose an Engine: Davinci

We selected the Davinci engine, the most powerful of OpenAI's available models, which produces strong results for tasks like this. You can also select a lighter-weight engine, although results may vary.

OpenAI’s Engine Documentation

Making the Prompt more User-Friendly

We ended the prompt with "Q: " so that the input to the model doesn't have to be prefixed with "Q: " each time. Note the single space after "Q:" - including it helps the language model follow the established pattern as closely as possible.

Factual Prompt Design

We added "I'm not sure" because the API can generate fabricated responses. To reduce the chance of made-up answers, we show the model a way of saying "I'm not sure." We created these "I'm not sure" answers for questions the model shouldn't be able to answer; that is, questions whose answers won't be found in the model's training data.

Prompt Settings

Basic Settings

  • Response Length: 60
  • Temperature: 0
  • Top P: 1
  • Frequency Penalty: 0
  • Presence Penalty: 0
  • Stop Sequences: ###

Advanced Settings

  • Best Of: 1
  • N: 1
  • LogProbs: 1
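These settings map directly onto OpenAI Completion request parameters. A hedged sketch of the equivalent request, assuming the legacy `openai` Python client of this era (the prompt text is a placeholder; the dict would be passed as `openai.Completion.create(**request_params)`):

```python
# The prompt settings above, expressed as OpenAI Completion request
# parameters. Not executed here: this dict would be passed to the legacy
# client as openai.Completion.create(**request_params).
request_params = {
    "engine": "davinci",
    "prompt": "Q: How many employees work at Mantium?\nA:",  # placeholder
    "max_tokens": 60,         # Response Length
    "temperature": 0,         # deterministic, factual answers
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "stop": ["###"],          # Stop Sequences
    "best_of": 1,
    "n": 1,
    "logprobs": 1,
}
```

Temperature 0 keeps the completions as deterministic as possible, which suits factual question answering better than creative generation.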

Test Prompt Text

To test out the prompt's performance, paste the following into the Input box:

How many employees work at Mantium?

Click Test Run!

Even if your prompt stays the same, you might get slightly different completions each time you run it. The output should be "I'm not sure"; if it isn't, execute the prompt again to see the different completions. In the next section, we'll demonstrate how to get better results with OpenAI Answers, a dedicated endpoint for generating answers from sources of truth, such as company documentation or a knowledge base.

Results & Conclusion

Let's take a moment to think about the model's expected behavior. Recall that GPT-3 was introduced in May 2020 and its pre-training data consists of Wikipedia, books, and the Internet, among other sources. The knowledge base is the information we provide to the model via the prompt we create. Since Mantium was founded after GPT-3 was trained and released, the model should not be able to answer questions about Mantium unless we have injected the answer into our knowledge base.

For example, the question "Who is the CEO of Mantium" would fall into the "Not in Knowledge Base" and "Not in Pre-Trained Data" category seen in the below table, and so the answer should be "I'm not sure."

                                   Answer in knowledge base   Answer not in knowledge base
  Answer in pre-training data      Answer                     Answer
  Answer not in pre-training data  Answer                     "I'm not sure"
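The expected behavior can be sketched as a small helper function (the name and boolean flags are illustrative, not part of any Mantium API):

```python
def expected_behavior(in_knowledge_base: bool, in_pretraining_data: bool) -> str:
    """Expected model output given where the answer can be found.

    The model can answer if the fact appears either in the prompt's
    knowledge base or in its pre-training data; otherwise the prompt
    pattern steers it toward "I'm not sure".
    """
    if in_knowledge_base or in_pretraining_data:
        return "Answer"
    return "I'm not sure"

# "Who is the CEO of Mantium?" is in neither source:
print(expected_behavior(False, False))  # I'm not sure
```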

One-Click Deploy

Mantium lets you share your prompt using the One-Click Deploy feature, located in each prompt's drawer view. From your list of prompts, click the Mantium Answer Generation prompt, then click Deploy.


Then, add the following configuration settings:

  • Name: Mantium Answer Generation
  • Author: Your Name
  • ✅ Add Input Field
  • ✅ Include Examples - here you can provide examples of questions about Mantium to help future users interact with your prompt.
  • Public
  • Live
  • ✅ I have followed my provider's go live requirements

To test out this prompt, input a question about Mantium to see the results!

Similar Use Cases

Answer generation applies to many domains: business, science, customer service, and more. The important thing to note is that the model follows the pattern of the text it is given - you must provide factual, correct answers for it to return factual, correct responses. Providing a pattern for responding "I don't know" or "I'm not sure" to questions outside the knowledge base is one step toward keeping the model from generating inaccurate or fabricated responses.

What's Next?

As you add more knowledge to the prompt, it might eventually get too big and computationally expensive - what are the next steps you could take?

  • Use the OpenAI Answers endpoint with a body of text as your source of truth.
  • Fine-tune the model to specialize in your use case.