- Connecting to AI Provider
- Adding Prompts
- Checking Results
Let's walk through core API concepts as we tackle some everyday use cases.
Mantium is a cloud platform for building with large language models and managing them at scale. Mantium supports integrations with AI providers such as OpenAI, Cohere, and AI21. If you prefer, you can also use Mantium's own large language model.
You can deploy a web application for any use case that you create via the Mantium AI app. For example, you can spin up a web application and share the link with others to interact with a thank you note generator that you have created. Now your friends and colleagues can create thank you notes as well.
To access models or use features specific to the different AI providers (OpenAI, AI21, Cohere), you will need keys from those providers for integration. Please note that Mantium does not supply AI provider keys - read more about this below in AI Provider Integration.
The image below provides a high-level description of how Mantium works.
Make an account by visiting the Mantium AI Platform and selecting Register. Next, enter your email address and create a password.
After you verify the email, you'll be able to sign in to the Mantium application. You'll also need your username and password to obtain a token for API use.
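Once your account is verified, API access works by exchanging your username and password for a bearer token. The sketch below shows how such a login request might be assembled with the Python standard library; the route path and body field names are assumptions for illustration, so check the Mantium API reference for the exact login endpoint.

```python
import json
import urllib.request

MANTIUM_API = "https://api.mantiumai.com"

def build_login_request(username: str, password: str) -> urllib.request.Request:
    """Build a login request whose response would contain a bearer token.

    The path and JSON field names below are hypothetical placeholders.
    """
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{MANTIUM_API}/v1/auth/login",  # hypothetical path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_login_request("you@example.com", "your-password")
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the token you then pass as a `Bearer` header on later API calls.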
After you sign up for Mantium, the first step is to input your AI Provider key under the Integrations page. Mantium currently supports three providers: OpenAI, Cohere and AI21.
The image below describes the process of adding a new AI provider. You can also find information on how to get API keys for each AI provider and integrate them with Mantium below.
Note that OpenAI keys all start with `sk-`.
As with the OpenAI integration, you can apply to join the waitlist here to get access. Once you have your key, you can enter it in the input field.
Note that Cohere keys are a mix of upper- and lowercase letters and numbers.
To get an API key for AI21, sign up on the website for access to the AI21 Studio Beta. After the signup process, check your profile for your API key.
Enter your API key into the input field, and save the key.
You will not be able to see the API key you have stored in the system again. It is stored encrypted and is only decrypted at the moment it is passed to the AI provider with your prompts. It is never logged, cached, or stored unencrypted anywhere in the Mantium architecture.
Adding your first prompt is when the magic starts to happen! On the Mantium AI Platform, navigate to AI Manager > Prompts and click Add Prompt.
You'll want to:
- Name your prompt - this will help you identify it in the future.
- Choose a provider - different providers offer different features and cost structures.
- Select an endpoint - endpoints will determine what your model does. Does it summarize input? Generate text? Classify text?
- Choose an engine - the engine will impact how well your prompt performs and may impact its functionality. For example, the Codex engine in OpenAI is meant to generate code, and would not be well-suited for generating blog posts.
- Specify your prompt text - This text gives the model a small amount of information about what you intend to do with it. For examples of prompt text, check out OpenAI's examples or the AI21 playground.
The other parameters on the page allow for advanced control of your prompt. You can also use the Mantium prompt importer to import prompt configuration settings directly.
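The choices above can be thought of as a small configuration object. The dictionary below is an illustrative sketch only; the key names and values are assumptions, not the exact Mantium API schema.

```python
# Illustrative prompt configuration; key names are hypothetical,
# mirroring the fields chosen in the Add Prompt interface.
prompt_config = {
    "name": "Thank-you note generator",  # identifies the prompt later
    "ai_provider": "OpenAI",             # OpenAI, Cohere, or AI21
    "endpoint": "completion",            # what the model does: summarize, generate, classify
    "engine": "davinci",                 # affects quality and capability
    "prompt_text": "Write a short thank-you note for the gift described below:\n",
}

for key, value in prompt_config.items():
    print(f"{key}: {value}")
```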
Your prompts can be executed in two ways:
- Mantium AI Platform - with the Mantium App, you can execute your prompts using the User Interface.
- User's own platform to Mantium API - you can interact with the APIs supported by Mantium to execute prompts.
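For the second option, an execution call from your own platform might be built like the sketch below. The route, body shape, and `prompt_id` are assumptions for illustration; consult the Mantium API documentation for the exact names.

```python
import json
import urllib.request

def build_execute_request(prompt_id: str, text: str, token: str) -> urllib.request.Request:
    """Sketch of executing a prompt through the Mantium API.

    The URL path and JSON body below are hypothetical placeholders.
    """
    body = json.dumps({"input": text}).encode()
    return urllib.request.Request(
        f"https://api.mantiumai.com/v1/prompt/{prompt_id}/execute",  # hypothetical path
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # token from the login step
        },
        method="POST",
    )

req = build_execute_request("my-prompt-id", "a lovely scarf", "my-token")
print(req.get_method(), req.full_url)
```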
You can organize multiple prompts into Intelets by grouping them together sequentially so that the output of one prompt feeds into the input of the next - this enables you to create complex AI data pipelines for processing text.
To add a new Intelet, navigate to AI Manager > Intelets and click Add an Intelet.
From the Intelets interface, you are able to:
- Provide the Intelet Name and Description for identification
- Select your prompts from the list in the "All Prompts" section, then drag and drop them into the "Selected Prompts" section.
- Deploy or save your Intelet
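The chaining behavior of an Intelet can be mimicked in plain Python: each stage's output becomes the next stage's input. The functions below stand in for prompt executions; this is a conceptual sketch, not the Mantium client library.

```python
# Each function stands in for one prompt in the pipeline.
def summarize(text: str) -> str:
    return text.split(".")[0] + "."  # keep only the first sentence

def to_upper(text: str) -> str:
    return text.upper()

def run_intelet(prompts, text):
    """Run prompts sequentially: each output feeds the next input."""
    for prompt in prompts:
        text = prompt(text)
    return text

result = run_intelet([summarize, to_upper], "Mantium chains prompts. Extra text.")
print(result)  # MANTIUM CHAINS PROMPTS.
```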
Security is an important part of Mantium - we want to empower all users, newcomers and seasoned developers alike, to easily build safety and security features into their workflows and projects. The Mantium App supports a set of security rules that allow you to set triggers and actions for your prompts. For each AI provider, we include default policies that you can apply to ensure your prompts are secured to your specifications.
Navigate to the Security interface from the menu on the left to create a new security policy.
After adding a Policy Name and Description, you can select appropriate rules for your use case in the "Rule Settings" tab.
The "Action Settings" menu allows you to choose a course of action for Security Policy violations. Any violation warnings are automatically logged, but you are also able to halt or interrupt processing by selecting one of the other options.
The "Interrupt Processing" action integrates with the Human-In-The-Loop (HITL) feature by interrupting prompt processing until it is restarted via the HITL interface. This allows you to approve, modify, or reject provider responses from any interrupted prompts to ensure your application is safe and meets your security expectations.
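The three outcomes described above can be sketched as a small dispatch: every violation is logged, and the configured action decides whether processing continues, halts, or pauses for human review. The names and return values here are illustrative, not Mantium's actual internals.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("security")

def apply_policy(action: str, violation: str) -> str:
    """Illustrative dispatch for the three action settings."""
    log.warning("policy violation: %s", violation)  # violations are always logged
    if action == "halt":
        return "halted"                 # stop processing entirely
    if action == "interrupt":
        return "awaiting_hitl_review"   # pause until approved, modified, or rejected via HITL
    return "continued"                  # warn-only: processing proceeds

print(apply_policy("interrupt", "flagged content"))
```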
Mantium makes monitoring your application easy. The Dashboard view provides an activity summary, and logs make it possible to debug your applications and review all activity.
Mantium provides a simple interface to configure advanced settings currently supported by GPT-3 and other large language models. These are not required; however, do note that Mantium sets the logprobs parameter to 0.
Now that you have a sense of what Mantium can do, let's integrate your application with Mantium.