Getting Started with Mantium

This page will help you get started with Mantium. You'll be up and running in a jiffy!

Getting Started

  1. Overview
  2. Authentication
  3. Connecting to an AI Provider
  4. Adding Prompts
  5. Checking Results

Let's walk through core API concepts as we tackle some everyday use cases.

Overview

Mantium is a cloud platform for building with large language models and managing them at scale. Mantium supports integrations with AI providers such as OpenAI, Cohere, and AI21. If you prefer, you can also use Mantium's own large language model.

With Mantium, you can manage models suited to use cases such as classification, semantic search, content generation, and more. Deploy your models with one click, track interactions with them, and create security policies, all within the Mantium AI App, through our APIs, or with our client libraries (JavaScript, Python).

You can deploy a web application for any use case you create via the Mantium AI App. For example, you can spin up a web application for a thank-you note generator you have built and share the link with others, so your friends and colleagues can create thank-you notes as well.

To access models or use features specific to different AI providers (OpenAI, Cohere, AI21), you will need keys from those providers for the integration. Please note that Mantium does not supply AI provider keys - read more about this in the AI Provider Integration section below.

How Mantium Works

The image below provides a high-level description of how Mantium works.


Authentication

Create an account by visiting the Mantium AI Platform and selecting Register. Next, enter your email address and create a password.


After you verify your email address, you'll be able to sign in to the Mantium application. You'll also need your username and password to obtain a token for API use.
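
If you plan to call the API directly, you exchange those credentials for a bearer token and send it with each request. The sketch below uses Python and the requests library; the login route and response fields shown are assumptions for illustration, so confirm them against the Mantium API reference.

```python
import requests

API_BASE = "https://api.mantiumai.com"  # assumed base URL

def get_bearer_token(email: str, password: str) -> str:
    """Exchange Mantium credentials for a bearer token (illustrative sketch)."""
    resp = requests.post(
        f"{API_BASE}/v1/security/login",  # assumed login route
        json={"email": email, "password": password},
    )
    resp.raise_for_status()
    # Assumed response shape; adjust to match the actual payload.
    return resp.json()["data"]["attributes"]["bearer_id"]

token = get_bearer_token("you@example.com", "your-password")
headers = {"Authorization": f"Bearer {token}"}  # reuse on subsequent API calls
```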

AI Provider Integration

After you sign up for Mantium, the first step is to enter your AI provider key on the Integrations page. Mantium currently supports three providers: OpenAI, Cohere, and AI21.
The image below shows the process of adding a new AI provider. You can also find information below on how to get API keys from each AI provider and integrate them with Mantium.


OpenAI Integration

If you have an OpenAI key, you can enter it into the input field; otherwise, you can apply to join OpenAI's waitlist or request academic access.


Note that OpenAI keys all start with sk-.

Cohere Integration

Just like the OpenAI integration, you can apply to join Cohere's waitlist to get access. If you already have your key, you can enter it in the input field.
Note that Cohere keys are a mix of uppercase and lowercase letters and numbers.

AI21 Integration

To get an API key for AI21, sign up on the AI21 website for access to the AI21 Studio Beta. After signing up, check your profile for your API key.


Enter your API key into the input field and save the key.

You will not be able to see the API key you have stored in the system again. It is stored encrypted and is only decrypted at the moment it is passed to the AI provider with your prompts. It is never logged, cached, or stored unencrypted anywhere in the Mantium architecture.

The first prompt

Adding your first prompt is when the magic starts to happen! On the Mantium AI Platform, navigate to AI Manager > Prompts and click Add Prompt.


You'll want to:

  1. Name your prompt - this will help you identify it in the future.
  2. Choose a provider - different providers may offer different features and cost structures.
  3. Select an endpoint - endpoints will determine what your model does. Does it summarize input? Generate text? Classify text?
  4. Choose an engine - the engine will impact how well your prompt performs and may impact its functionality. For example, the Codex engine in OpenAI is meant to generate code, and would not be well-suited for generating blog posts.
    1. Engine Overview by Provider:
      1. OpenAI
      2. Co:here - Generation, Representation
      3. AI21
  5. Specify your prompt text - This text gives the model a small amount of information about what you intend to do with it. For examples of prompt text, check out OpenAI's examples or the AI21 playground.

The other parameters on the page allow for advanced control of your prompt. You can also use the Mantium prompt importer to import prompt configuration settings directly.
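
If you prefer working programmatically, the same fields map onto a prompt configuration. The sketch below shows one plausible shape in Python; the key names, route, and response fields are assumptions for illustration rather than the exact Mantium schema, so check the API reference before relying on them.

```python
import requests

headers = {"Authorization": "Bearer <your-token>"}  # token from the Authentication step

# Illustrative configuration mirroring the fields above; key names are assumptions.
prompt_config = {
    "name": "Thank-you note generator",  # 1. name your prompt
    "ai_provider": "OpenAI",             # 2. choose a provider
    "endpoint": "completion",            # 3. select an endpoint (summarize, generate, classify, ...)
    "engine": "davinci",                 # 4. choose an engine
    "prompt_text": "Write a short, warm thank-you note for the occasion below:\n",  # 5. prompt text
}

# Assumed route for creating a prompt -- the real path may differ.
resp = requests.post("https://api.mantiumai.com/v1/prompt", json=prompt_config, headers=headers)
resp.raise_for_status()
prompt_id = resp.json()["data"]["id"]  # assumed response shape
```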

Your prompts can be executed in two ways:

  • Mantium AI Platform - with the Mantium App, you can execute your prompts through the user interface.
  • Your own platform via the Mantium API - you can call the APIs supported by Mantium to execute prompts (a minimal sketch follows below).
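
For the second option, an execution call might look like the following sketch. The route and payload are again illustrative assumptions; the point is simply that you send input text to a stored prompt and read back the provider's response.

```python
import requests

headers = {"Authorization": "Bearer <your-token>"}   # token from the Authentication step
prompt_id = "<your-prompt-id>"                       # id of the prompt you created

# Assumed execution route and payload shape -- confirm against the API reference.
resp = requests.post(
    f"https://api.mantiumai.com/v1/prompt/{prompt_id}/execute",
    json={"input": "covering my on-call shift last weekend"},
    headers=headers,
)
resp.raise_for_status()
print(resp.json())  # the provider's response for the executed prompt
```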

Organize your prompts into Intelets

You can organize multiple prompts into Intelets by grouping them together sequentially so that the output of one prompt feeds into the input of the next - this enables you to create complex AI data pipelines for processing text.
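
Conceptually, an Intelet behaves like a sequential pipeline: each prompt receives the previous prompt's output as its input. The plain-Python sketch below illustrates that data flow; the two stand-in functions are placeholders, not the Mantium client library.

```python
from typing import Callable, List

def run_intelet(prompts: List[Callable[[str], str]], text: str) -> str:
    """Feed the output of each prompt into the input of the next, like an Intelet."""
    for prompt in prompts:
        text = prompt(text)
    return text

# Stand-in prompt functions for illustration only.
def summarize(text: str) -> str:
    return f"Summary of: {text}"

def draft_reply(summary: str) -> str:
    return f"Reply based on: {summary}"

result = run_intelet([summarize, draft_reply], "A long customer support transcript...")
print(result)
```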

To add a new Intelet, navigate to AI Manager > Intelets and click Add an Intelet.


Creating an intelet

From the Intelets interface, you are able to:

  1. Provide the Intelet Name and Description for identification
  2. Select your prompts from the list in the "All Prompts" section, then drag and drop them into the "Selected Prompts" section.
  3. Deploy or save your Intelet

Designing your intelet

Set Security policies for your Prompts

Security is an important part of Mantium - we want to empower all users, newcomers and seasoned developers alike, to easily build safety and security features into their workflows and projects. The Mantium App supports a set of security rules that let you set triggers and actions for your prompts. For each AI provider, we provide default policies that you can apply to make sure your prompts are secured according to your specifications.

Navigate to the Security interface from the menu on the left to create a new security policy.


Add a new security policy

After adding a Policy Name and Description, you can select appropriate rules for your use case in the "Rule Settings" tab.


Apply rules to your security policy

The "Action Settings" menu allows you to choose a course of action for Security Policy violations. Any violation warnings are automatically logged, but you are also able to halt or interrupt processing by selecting one of the other options.


Set actions for your security policy

The "Interrupt Processing" action integrates with the Human-In-The-Loop (HITL) feature by interrupting prompt processing until it is restarted via the HITL interface. This allows you to
approve, modify, or reject provider responses from any interrupted prompts to ensure your application is safe and meets your security expectations.
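
To make the flow concrete, here is a conceptual sketch of how a violation maps onto the outcomes described above (log only, halt, or interrupt for human review). It is an illustration of the idea, not Mantium's internal implementation or API.

```python
import logging
from enum import Enum

class Action(Enum):
    LOG_ONLY = "log_only"
    HALT = "halt"
    INTERRUPT = "interrupt"   # hand the response to the HITL queue

hitl_queue: list = []  # responses waiting for a human to approve, modify, or reject

def apply_policy(rule_violated: bool, action: Action, response: str):
    """Conceptual sketch: how a security policy action could be applied."""
    if not rule_violated:
        return response
    logging.warning("Security policy violation detected")  # violations are always logged
    if action is Action.HALT:
        return None                   # stop processing entirely
    if action is Action.INTERRUPT:
        hitl_queue.append(response)   # pause until restarted via the HITL interface
        return None
    return response                   # LOG_ONLY: continue after logging the warning
```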


View notifications from triggered security policies

View the logs

Mantium makes monitoring your application easy. The Dashboard view provides an activity summary, and the logs make it possible to debug your applications and review all activity.


View your logs and dashboard

Advanced Settings

Mantium provides a simple interface for configuring the advanced settings currently supported by GPT-3 and other large language models. These are not required; however, do note that Mantium sets the logprobs parameter to 0.
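
The names below follow the standard GPT-3 style completion parameters, so the dictionary is only an illustrative shape of what "advanced settings" cover; the exact fields exposed in the Mantium interface may differ, and logprobs is fixed at 0 by Mantium.

```python
# Illustrative advanced settings using OpenAI-style completion parameter names.
# The exact fields in Mantium's interface may differ; logprobs is set to 0 by Mantium.
advanced_settings = {
    "temperature": 0.7,       # randomness of the generated text
    "max_tokens": 150,        # upper bound on the response length
    "top_p": 1.0,             # nucleus sampling cutoff
    "frequency_penalty": 0,   # discourage verbatim repetition
    "presence_penalty": 0,    # encourage introducing new topics
    "stop": ["\n\n"],         # stop sequence(s)
    "logprobs": 0,            # fixed by Mantium
}
```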


What’s Next

Now that you have a sense of what Mantium can do, let's integrate your application with Mantium.