Creating a new Ask AI client
Note
This integration is a premium integration. To activate the integration, you must follow a typical software development life cycle, which will incur additional costs. For more details on this integration or on other premium integrations, please contact your Brightspot representative.
To configure Ask AI on your site, you must first have an Ask AI client available.
Note
Before beginning configuration, please open a support ticket to enable Ask AI with Solr as the vector database for your project.
Note
Before beginning this process, go to the AI provider you plan to use and follow its steps to obtain an API key. You will need this key to complete this configuration.
To configure an Ask AI client:
- Click > Admin > Sites & Settings.
- Select the site for which you wish to configure the Ask AI client.
- Under Integrations, expand the AI cluster.
- Toggle on Ask AI Enabled.
- Expand Ask AI Client, and click Create New.
- Enter a Name for the Ask AI client you are creating.
- Select a Vector Embedding provider from the list of available providers. This service converts data (such as queries) into high-dimensional vectors.
Tip
Brightspot recommends starting with a smaller provider to see if it works for you before moving on to the larger providers.
- In the module below, click the name of the Vector Embedding provider you selected in the previous step, and complete the fields as described below.
Amazon Bedrock: Claude
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Claude.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Claude and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model Version ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Claude documentation.
- Max Tokens To Sample—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top K—Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
Bedrock Titan Embedding API
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Titan.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Titan and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Titan documentation.
Google Vertex Embedding API
- Credentials—Select JSON Credentials if it is not already selected.
- JSON Credentials—Enter your JSON credentials to log into Google Vertex AI.
- Scopes—Enter a scope value to log into Google Vertex AI, then click the add icon. Repeat this procedure for each scope needed. An authorization scope is an OAuth 2.0 URI string that contains the Google Workspace app name, what kind of data it accesses, and the level of access. Scopes are your app's requests to work with Google Workspace data, including users' Google Account data.
- Project ID—Enter the name of the project you created in Google Cloud.
- Location—Enter the location for the Google Vertex AI project as it was entered in Google Cloud.
- Model—Select the Google Vertex AI model you are using for Ask AI.
- Dimension—Enter a dimension for the vector embedding provider.
Open AI
- API Key—Enter the Open AI API key. You must get this key from your account on the Open AI Console. See the Open AI documentation for information about API keys.
- Embedding Model—Enter the name of the embedding model to be used with this configuration. See the OpenAI documentation for available models.
- Max Tokens—Enter the maximum number of tokens Open AI should sample. Tokens can be thought of as pieces of words. Before the API processes the request, the input is broken down into tokens. Tokens are not cut exactly where words start or end; they can include trailing spaces and even sub-words.
- User—Enter a unique identifier that represents your organization, which helps OpenAI monitor and detect abuse.
- Select a Search Provider. The selected provider converts your content into vectors for use with Ask AI.
Note
To vectorize all of your existing content, please reach out to your Brightspot account representative or project team.
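The retrieval step described above (the embedding provider converts text into vectors, and the vector database matches a query against stored content) can be sketched roughly as follows. This is an illustrative sketch only, not Brightspot's implementation: the `embed` function and its tiny three-dimensional vectors are hypothetical stand-ins, since a real provider such as Titan or OpenAI returns vectors with hundreds or thousands of dimensions, and Solr performs the similarity search at scale.

```python
import math

# Hypothetical stand-in for a vector embedding provider: a real provider
# maps arbitrary text to a high-dimensional vector.
def embed(text: str) -> list[float]:
    toy = {
        "how do I reset my password": [0.9, 0.1, 0.0],
        "Resetting your password":    [0.8, 0.2, 0.1],
        "Quarterly earnings report":  [0.0, 0.9, 0.4],
    }
    return toy[text]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similar meanings produce vectors that point in similar directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# The vector database compares the query vector against stored content
# vectors and returns the closest matches.
query_vec = embed("how do I reset my password")
scores = {
    doc: cosine_similarity(query_vec, embed(doc))
    for doc in ("Resetting your password", "Quarterly earnings report")
}
best = max(scores, key=scores.get)
print(best)  # the password article scores far higher than the earnings report
```

The similarity score produced here is also what the Min Score For Search Records setting later thresholds against.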
Amazon Bedrock: Claude
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Claude.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Claude and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model Version ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Claude documentation.
- Max Tokens To Sample—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top K—Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
Amazon Bedrock: Cohere
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access the Ask AI provider.
  - Assume Role—Select this option to assume a role that has been created for Amazon Bedrock: Cohere and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Cohere documentation.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
- Top K—Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Max Tokens—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
Amazon Bedrock: Llama
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Llama.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Llama and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Llama documentation.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
- Max Generation Length—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
Amazon Bedrock: Titan
- Credentials—Expand Credentials and select one of the following options:
  - Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Titan.
  - Assume Role—Assumes a role that has been created for Amazon Bedrock: Titan and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
  - Static—Functions as a separate user with its own set of credentials.
- Region—Enter the AWS region for this Ask AI client.
- Model ID—Enter the version ID of the foundation model you are using for this Ask AI client. Learn more about the models available to you by visiting the Amazon Bedrock: Titan documentation.
- Max Generation Length—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
Google Vertex AI
Note
Your Google Vertex AI credentials are available on your Google Cloud account. Your credentials consist of your JSON Credentials, Scope, Project ID, Location, and Model. These values are entered below.
- Credentials—Select JSON Credentials if it is not already selected.
- JSON Credentials—Enter your JSON credentials to log into Google Vertex AI.
- Scopes—Enter a scope value to log into Google Vertex AI, then click the add icon. Repeat this procedure for each scope needed. An authorization scope is an OAuth 2.0 URI string that contains the Google Workspace app name, what kind of data it accesses, and the level of access. Scopes are your app's requests to work with Google Workspace data, including users' Google Account data.
- Project ID—Enter the name of the project you created in Google Cloud.
- Location—Enter the location for the Google Vertex AI project as it was entered in Google Cloud.
- Model—Select the Google Vertex AI model you are using for Ask AI.
- Max Tokens—Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to interpret user input and prompts.
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
- Top K—Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Presence Penalty—Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
- Frequency Penalty—Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
Open AI
- API Key—Enter the Open AI API key. You must get this key from your account on the Open AI Console. See the Open AI documentation for information about API keys.
- Engine ID—Enter the identifier for the specific model used to power the AI experience.
- Max Tokens—Enter the maximum number of tokens Open AI should sample. Tokens can be thought of as pieces of words. Before the API processes the request, the input is broken down into tokens. Tokens are not cut exactly where words start or end; they can include trailing spaces and even sub-words.
- Top P—Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
- Temperature—Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
- Presence Penalty—Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
- Frequency Penalty—Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
- Enter the Max Records For Search. This is the maximum number of records the Ask AI client considers when performing a search. Brightspot recommends 3.
- Enter the Min Score For Search Records. This is a value between 0.0 and 1.0 that signifies the minimum score the Ask AI client uses when considering records returned in a search. Brightspot recommends 0.5.
Tip
For faster responses from the model, limit Max Records For Search to five or fewer, and set Min Score For Search Records to 0.8.
- Enter the Max Tokens For Search. The upper limit depends on the provider selected. Tokens can be thought of as pieces of words. Before the API processes the request, the input is broken down into tokens. Tokens are not cut exactly where words start or end; they can include trailing spaces and even sub-words.
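The interaction of Max Records For Search and Min Score For Search Records can be sketched as a simple filter over scored search results. This is an illustrative sketch, not Brightspot's code; the records and scores are made up, and the defaults mirror the recommended values above.

```python
def select_records(results, min_score=0.5, max_records=3):
    """Keep only results scoring at or above min_score, best first,
    capped at max_records. This mirrors the Min Score For Search Records
    and Max Records For Search settings."""
    kept = [r for r in results if r[1] >= min_score]
    kept.sort(key=lambda r: r[1], reverse=True)
    return kept[:max_records]

# Hypothetical (record, similarity score) pairs from a vector search.
results = [("a", 0.91), ("b", 0.42), ("c", 0.77), ("d", 0.63), ("e", 0.55)]

print(select_records(results))                 # [('a', 0.91), ('c', 0.77), ('d', 0.63)]
print(select_records(results, min_score=0.8))  # [('a', 0.91)] -- stricter, so faster
```

Raising the minimum score or lowering the record cap shrinks the context sent to the model, which is why the tip above trades a little recall for faster responses.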
- Under Content Types, click and select the content types you want to expose to the Ask AI client. You may make multiple selections.
- In System Prompt, enter prompts (if desired) to guide the LLM (large language model) and set the general direction of the conversations.
- Under Max Recent Conversation History for Context, enter the limit of message history to include. Each item in the conversation represents a question from the user and an answer from the model. The default setting is all.
- Click Save.
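The sampling parameters that recur in the provider fields above (Temperature, Top K, Top P) can be illustrated with a small sketch. This is a generic illustration of how these parameters are commonly applied to next-token probabilities, not the exact order or math any particular provider uses; the vocabulary and probabilities are made up.

```python
import math

def adjust(probs, temperature=1.0, top_k=None, top_p=1.0):
    """Rescale next-token probabilities by temperature, then keep only
    the top_k most likely tokens within the top_p cumulative mass."""
    # Temperature: exponentiate rescaled log-probabilities, then renormalize.
    # Values near 0 sharpen the distribution toward the most likely token;
    # a value of 1 leaves it unchanged.
    scaled = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    scaled = {t: p / total for t, p in scaled.items()}

    # Top K: keep only the k most likely candidates.
    ranked = sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]

    # Top P: keep the smallest set of candidates whose cumulative
    # probability reaches top_p, then renormalize.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {t: p / total for t, p in kept}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
low = adjust(probs, temperature=0.4)  # sharper: "the" dominates even more
print(max(low, key=low.get))          # the
print(adjust(probs, top_k=2))         # only "the" and "a" survive
```

The recommended temperature of 0.4 sits on the conservative side of this scale, which suits Ask AI's goal of answering from retrieved content rather than improvising.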