Creating a new Create with AI client
To configure Create with AI on your site, you must first have a Create with AI client available.
Note
This integration is a premium integration. To activate the integration, you must follow a typical software development life cycle, which will incur additional costs. For more details on this integration or on other premium integrations, please contact your Brightspot representative.
Note
Before beginning this process, go to the AI provider you have chosen and follow its steps to obtain an API key. You will need this key to complete this configuration.
To configure a Create with AI client:
- From the navigation menu, click Admin > Sites & Settings.
- Select the site for which you wish to configure the Create with AI client.
- Under Integrations, expand the AI cluster.
- Toggle on Create with AI Client Enabled.
- Expand Create with AI Client and select Create New.
- Enter a Name for the Create with AI client you are creating.
- Select a Provider from the list of available providers.
- Find the table below for the provider you selected, and use it to complete the fields as needed.
Amazon Bedrock: Claude
- Credentials: Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Claude.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Claude and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region: Enter the AWS region for this client.
- Model ID: Enter the ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Claude documentation.
- Model Version ID: Enter the version ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Claude documentation.
- Max Tokens To Sample: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret the user input and prompt.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
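For reference, the fields above correspond to the parameters of a Bedrock model invocation. The following is a minimal sketch, assuming the boto3 Python SDK, default AWS credentials, and the text-completion request format used by Claude v2-era models; the region, model ID, prompt, and parameter values are placeholders, not Brightspot's internal implementation.

```python
import json
import boto3

# Minimal sketch: invoke a Claude model on Amazon Bedrock with the settings
# described above. Region, model ID, prompt, and values are placeholders.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

body = {
    "prompt": "\n\nHuman: Draft a headline about solar energy.\n\nAssistant:",
    "max_tokens_to_sample": 500,  # Max Tokens To Sample
    "temperature": 0.6,           # Temperature (0.6 recommended for Create with AI)
    "top_k": 250,                 # Top K
    "top_p": 1.0,                 # Top P
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",  # Model ID (placeholder)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["completion"])
```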
Amazon Bedrock: Cohere
- Credentials: Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Cohere.
  - Assume Role—Select this option to assume a role that has been created for Amazon Bedrock: Cohere and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region: Enter the AWS region for this client.
- Model ID: Enter the version ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Cohere documentation.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
- Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Max Tokens: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret the user input and prompt.
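For reference, Cohere's request body on Bedrock names these parameters "temperature", "p", "k", and "max_tokens". A minimal sketch, again assuming the boto3 SDK and default AWS credentials, with a placeholder region, model ID, prompt, and values:

```python
import json
import boto3

# Minimal sketch: invoke a Cohere Command model on Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

body = {
    "prompt": "Write a short product description for a hiking boot.",
    "temperature": 0.6,  # Temperature
    "p": 0.9,            # Top P
    "k": 50,             # Top K
    "max_tokens": 400,   # Max Tokens
}

response = bedrock.invoke_model(
    modelId="cohere.command-text-v14",  # Model ID (placeholder)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["generations"][0]["text"])
```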
Amazon Bedrock: Llama
- Credentials: Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Llama.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Llama and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region: Enter the AWS region for this client.
- Model ID: Enter the version ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Llama documentation.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
- Max Generation Length: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret the user input and prompt.
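For reference, Meta Llama's request body on Bedrock calls the output limit "max_gen_len", which corresponds to Max Generation Length above. A minimal sketch with placeholder region, model ID, prompt, and values:

```python
import json
import boto3

# Minimal sketch: invoke a Meta Llama model on Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

body = {
    "prompt": "Summarize the benefits of electric vehicles.",
    "temperature": 0.6,  # Temperature
    "top_p": 0.9,        # Top P
    "max_gen_len": 512,  # Max Generation Length
}

response = bedrock.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",  # Model ID (placeholder)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["generation"])
```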
Amazon Bedrock: Titan
- Credentials: Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Titan.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Titan and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region: Enter the AWS region for this client.
- Model ID: Enter the version ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Titan documentation.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
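For reference, Titan nests its inference parameters under "textGenerationConfig" in the Bedrock request body. A minimal sketch with placeholder region, model ID, prompt, and values:

```python
import json
import boto3

# Minimal sketch: invoke an Amazon Titan text model on Amazon Bedrock.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

body = {
    "inputText": "Write a two-sentence introduction to cloud computing.",
    "textGenerationConfig": {
        "temperature": 0.6,  # Temperature
        "topP": 0.9,         # Top P
    },
}

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # Model ID (placeholder)
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```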
Google Vertex AI
Note
Your Google Vertex AI credentials are available in your Google Cloud account. They consist of your JSON Credentials, Scope, Project ID, Location, and Model. Enter these values in the fields below.
- Credentials: Select JSON Credentials if it is not already selected.
- Scopes: Enter a scope value used to log into Google Vertex AI, then click the add icon. Repeat this procedure for each scope needed. An authorization scope is an OAuth 2.0 URI string that contains the Google Workspace app name, what kind of data it accesses, and the level of access. Scopes are your app's requests to work with Google Workspace data, including users' Google Account data.
- Project ID: Enter the name of the project you created in Google Cloud.
- Location: Enter the location for the Google Vertex AI project as it was entered in Google Cloud.
- Model: Select the Google Vertex AI model you are using for this client.
- Max Tokens: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret the user input and prompt.
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
- Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Presence Penalty: Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
- Frequency Penalty: Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
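For reference, these fields map onto a Vertex AI generation request. A minimal sketch, assuming the google-cloud-aiplatform (vertexai) Python SDK with application default credentials pointing at the same JSON key file used for the JSON Credentials field; the project, location, model, prompt, and values are placeholders. The presence and frequency penalties are omitted here because SDK support for them varies by model.

```python
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Minimal sketch: generate text with Vertex AI using the settings above.
vertexai.init(project="my-project-id", location="us-central1")  # Project ID, Location

model = GenerativeModel("gemini-1.0-pro")  # Model (placeholder)
config = GenerationConfig(
    temperature=0.6,        # Temperature
    top_p=0.9,              # Top P
    top_k=40,               # Top K
    max_output_tokens=512,  # Max Tokens
)

response = model.generate_content(
    "Draft a pull quote about renewable energy.",
    generation_config=config,
)
print(response.text)
```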
OpenAI
- API Key: Enter the OpenAI API key. You must get this key from your account on the OpenAI console. See the OpenAI documentation for information about API keys.
- Engine ID: Enter the identifier for the specific model used to power the AI experience.
- Max Tokens: Enter the maximum number of tokens OpenAI should sample. Tokens can be thought of as pieces of words. Before the API processes the request, the input is broken down into tokens. These tokens are not cut exactly where words start or end; tokens can include trailing spaces and even sub-words.
- Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to shrink the pool and limit the options to more likely outputs, or a higher value to expand the pool and allow the model to consider less likely outputs.
- Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in the model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Presence Penalty: Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
- Frequency Penalty: Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
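For reference, these fields correspond to parameters of an OpenAI chat completion request. A minimal sketch, assuming the official openai Python package (v1+) with the API key in the OPENAI_API_KEY environment variable; the model name stands in for whatever Engine ID you configured, and the prompt and values are placeholders.

```python
from openai import OpenAI

# Minimal sketch: request a chat completion with the settings above.
client = OpenAI()  # reads OPENAI_API_KEY from the environment (API Key)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # Engine ID (placeholder)
    messages=[{"role": "user", "content": "Write a headline about city parks."}],
    max_tokens=512,         # Max Tokens
    top_p=0.9,              # Top P
    temperature=0.6,        # Temperature
    presence_penalty=0.0,   # Presence Penalty
    frequency_penalty=0.0,  # Frequency Penalty
)
print(response.choices[0].message.content)
```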
- Under Prompt Suggestions, click Add Item. Use the menu to select from available prompt suggestions, or click Create New to add a new prompt. See Creating a new Create with AI prompt suggestion for more information on adding prompt suggestions to the menu.
Prompt suggestions consist of a clickable title that displays in the Create with AI pop-up, and the actual prompt text that is sent to the Create with AI client to generate content.
- In Systems Prompt, enter prompts (if desired) to guide the LLM (large language model) and set the general direction of conversations; the sketch after these steps illustrates the role a system prompt plays.
- Click Save.
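To illustrate the Systems Prompt field, a system prompt is sent to the model ahead of each user prompt and sets tone and constraints for the whole conversation. The following is a minimal sketch using the openai package as a stand-in for whichever provider your client uses; the model, prompts, and wiring are illustrative assumptions, not Brightspot's actual implementation.

```python
from openai import OpenAI

# Minimal sketch: a system prompt steering an LLM's responses.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        # The Systems Prompt field plays the role of this "system" message.
        {"role": "system", "content": "You are a news editor. Keep copy under 100 words."},
        # A prompt suggestion's text is sent as the user message.
        {"role": "user", "content": "Write a teaser for a story on local elections."},
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```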