Brightspot Integrations Guide

Creating a new Create with AI client


To configure Create with AI on your site, you must first have a Create with AI client available.

Note
This integration is a premium integration. To activate the integration, you must follow a typical software development life cycle, which will incur additional costs. For more details on this integration or on other premium integrations, please contact your Brightspot representative.
Note
Prior to beginning this process, you must obtain an API key from the AI provider you are choosing to use by following that provider's instructions. You will need this key to complete this configuration.

To configure a Create with AI client:

  1. Click menu > Admin > Sites & Settings.
  2. Select the site for which you wish to configure the Create with AI client.
  3. Under Integrations, expand the AI cluster.
  4. Toggle on Create with AI Client Enabled.
    [Image: AI cluster expanded under the Integrations tab]
  5. Expand Create with AI Client and select Create New.
  6. Enter a Name for the Create with AI client you are creating.
  7. Select a Provider from the list of available providers.
  8. Select the name of the provider and use the corresponding table below to complete the fields as needed.

    Amazon Bedrock: Claude

    Credentials: Expand Credentials and select one of the following options:
    • Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Claude.
    • Assume Role—Assumes a role that has been created for Amazon Bedrock: Claude and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
    • Static—Functions as a separate user with its own set of credentials.
    Region: Enter the AWS region for this client.
    Model ID: Enter the ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Claude documentation.
    Model Version ID: Enter the version ID of the foundation model you are using for this client.
    Max Tokens To Sample: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to understand the user input and prompt.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
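
    For reference, the sketch below shows roughly how these fields map onto a raw Amazon Bedrock request for a Claude text-completion model, using Python and the boto3 SDK. Brightspot issues the request for you once the client is saved; the region, model ID, and parameter values here are placeholders, so check the Amazon Bedrock documentation for the IDs available in your account.

    import json
    import boto3

    # Region field above (placeholder value).
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    body = {
        "prompt": "\n\nHuman: Write a headline about spring gardening.\n\nAssistant:",
        "max_tokens_to_sample": 512,  # Max Tokens To Sample
        "temperature": 0.6,           # Temperature (0.6 suggested for Create with AI)
        "top_k": 250,                 # Top K
        "top_p": 1.0,                 # Top P
    }

    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # Model ID (placeholder)
        body=json.dumps(body),
    )
    print(json.loads(response["body"].read())["completion"])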

    Amazon Bedrock: Cohere

    Credentials: Expand Credentials and select one of the following options:
    • Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Cohere.
    • Assume Role—Select this option to assume a role that has been created for Amazon Bedrock: Cohere and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
    • Static—Functions as a separate user with its own set of credentials.
    Region: Enter the AWS region for this client.
    Model ID: Enter the ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Cohere documentation.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
    Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
    Max Tokens: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to understand the user input and prompt.
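
    If it helps to see where these values land, the sketch below shows a comparable raw request for a Cohere Command model on Amazon Bedrock. Note that Cohere's native parameter names differ from the field labels above: Top P and Top K become p and k. All IDs and values are placeholders.

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

    body = {
        "prompt": "Write a headline about spring gardening.",
        "max_tokens": 400,   # Max Tokens
        "temperature": 0.6,  # Temperature
        "p": 0.9,            # Top P
        "k": 50,             # Top K
    }

    response = bedrock.invoke_model(
        modelId="cohere.command-text-v14",  # Model ID (placeholder)
        body=json.dumps(body),
    )
    print(json.loads(response["body"].read())["generations"][0]["text"])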

    Amazon Bedrock: Llama

    Credentials: Expand Credentials and select one of the following options:
    • Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Llama.
    • Assume Role—Assumes a role that has been created for Amazon Bedrock: Llama and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
    • Static—Functions as a separate user with its own set of credentials.
    Region: Enter the AWS region for this client.
    Model ID: Enter the ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Llama documentation.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
    Max Generation Length: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to understand the user input and prompt.
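
    A similar illustrative sketch for a Llama model on Amazon Bedrock; here the Max Generation Length field corresponds to Llama's max_gen_len parameter. The model ID and values are placeholders.

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

    body = {
        "prompt": "Write a headline about spring gardening.",
        "max_gen_len": 512,  # Max Generation Length
        "temperature": 0.6,  # Temperature
        "top_p": 0.9,        # Top P
    }

    response = bedrock.invoke_model(
        modelId="meta.llama3-8b-instruct-v1:0",  # Model ID (placeholder)
        body=json.dumps(body),
    )
    print(json.loads(response["body"].read())["generation"])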

    Amazon Bedrock: Titan

    Credentials: Expand Credentials and select one of the following options:
    • Default—Uses the default credentials that have been set up for your site to access Amazon Bedrock: Titan.
    • Assume Role—Assumes a role that has been created for Amazon Bedrock: Titan and that may have different access than the default role. You must also provide the ARN (Amazon Resource Name) for this role.
    • Static—Functions as a separate user with its own set of credentials.
    Region: Enter the AWS region for this client.
    Model ID: Enter the ID of the foundation model you are using for this client. Learn more about the models available to you in the Amazon Bedrock: Titan documentation.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
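
    And a sketch for Titan, where the generation parameters are nested under textGenerationConfig in the request body. Again, this is illustrative only, with placeholder values, not Brightspot code.

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # Region

    body = {
        "inputText": "Write a headline about spring gardening.",
        "textGenerationConfig": {
            "temperature": 0.6,  # Temperature
            "topP": 0.9,         # Top P
        },
    }

    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",  # Model ID (placeholder)
        body=json.dumps(body),
    )
    print(json.loads(response["body"].read())["results"][0]["outputText"])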

    Google Vertex AI

    Note
    Your Google Vertex AI credentials are available on your Google Cloud account. Your credentials consist of your JSON Credentials, Scope, Project ID, Location, and Model. These values are entered below.

    Credentials: Select JSON Credentials if it is not already selected.
    Scopes: Enter each scope value required to access Google Vertex AI, then click add. Repeat this procedure for each scope needed. An authorization scope is an OAuth 2.0 URI string that contains the Google Workspace app name, what kind of data it accesses, and the level of access. Scopes are your app's requests to work with Google Workspace data, including users' Google Account data.
    Project ID: Enter the name of the project you created in Google Cloud.
    Location: Enter the location for the Google Vertex AI project as it was entered in Google Cloud.
    Model: Select the Google Vertex AI model you are using for this client.
    Max Tokens: Enter the maximum number of tokens the model should sample per prompt. A token is the basic unit of text a model uses to understand the user input and prompt.
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
    Top K: Enter the Top K value for the model. The model considers a certain number of most-likely candidates for the next token. A lower value narrows the pool and limits the options to more likely outputs, while a higher value expands the pool, giving the model more freedom to consider less likely outputs.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Presence Penalty: Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
    Frequency Penalty: Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
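
    As a point of reference, this sketch shows how the Project ID, Location, Model, and generation fields above map onto a direct Google Vertex AI call using the google-cloud-aiplatform Python SDK (the presence and frequency penalty arguments require a recent SDK release). The project, location, and model values are placeholders, and the JSON credentials are assumed to be supplied via the GOOGLE_APPLICATION_CREDENTIALS environment variable.

    import vertexai
    from vertexai.generative_models import GenerationConfig, GenerativeModel

    # Project ID and Location fields (placeholder values); JSON Credentials are
    # read from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    vertexai.init(project="my-project-id", location="us-central1")

    model = GenerativeModel("gemini-1.5-pro")  # Model (placeholder)
    response = model.generate_content(
        "Write a headline about spring gardening.",
        generation_config=GenerationConfig(
            max_output_tokens=512,  # Max Tokens
            top_p=0.9,              # Top P
            top_k=40,               # Top K
            temperature=0.6,        # Temperature (0.6 suggested for Create with AI)
            presence_penalty=0.0,   # Presence Penalty
            frequency_penalty=0.0,  # Frequency Penalty
        ),
    )
    print(response.text)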

    OpenAI

    API Key: Enter the OpenAI API key. You must get this key from your account on the OpenAI Console. See the OpenAI documentation for information about API keys.
    Engine ID: Enter the identifier for the specific model used to power the AI experience.
    Max Tokens: Enter the maximum number of tokens OpenAI should sample. Tokens can be thought of as pieces of words. Before the API processes the request, the input is broken down into tokens. These tokens are not cut up exactly where the words start or end; instead, tokens can include trailing spaces and even sub-words.
    Top P: Enter a value between 0 and 1. Top-p is an inference parameter that controls the model's token choices based on the probability of potential options. It is a float between 0 and 1, with a default of 1. Choose a lower value to limit the options to more likely outputs, or a higher value to allow the model to consider less likely outputs.
    Temperature: Enter a value between 0.0 and 1.0. Temperature controls the randomness of predictions in a model's text generation. A lower setting makes the model more conservative and deterministic, while a higher setting increases randomness and produces more creative outputs. Brightspot recommends a temperature of 0.4 (for Ask AI) or 0.6 (for Create with AI).
    Presence Penalty: Enter a value between -2.0 and 2.0. This parameter makes the model less likely to repeat words that have already been used in the generated text.
    Frequency Penalty: Enter a value between -2.0 and 2.0. This value discourages the model from repeating the same words multiple times, promoting a richer vocabulary.
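
    For orientation, here is a minimal sketch of a direct OpenAI completions call with the official Python SDK. The Engine ID field corresponds to the model identifier in the current OpenAI API ("engine" is OpenAI's older term for it); the key and model shown are placeholders.

    from openai import OpenAI

    client = OpenAI(api_key="sk-placeholder")  # API Key field (placeholder)

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # Engine ID field (placeholder model)
        prompt="Write a headline about spring gardening.",
        max_tokens=512,         # Max Tokens
        top_p=0.9,              # Top P
        temperature=0.6,        # Temperature (0.6 suggested for Create with AI)
        presence_penalty=0.0,   # Presence Penalty
        frequency_penalty=0.0,  # Frequency Penalty
    )
    print(response.choices[0].text)
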
  9. Under Prompt Suggestions, click add_circle_outline Add Item.
  10. Use the menu to select from available prompt suggestions, or click Create New to add a new prompt. See Creating a new Create with AI prompt suggestion for more information on adding prompt suggestions to the menu.

    Prompt suggestions consist of a clickable title that displays in the Create with AI pop-up, and the actual prompt suggestion text that is sent to the Create with AI client to generate content.

    [Image: Prompt suggestions in the Create with AI window]

  11. In Systems Prompt, enter prompts (if desired) to guide the LLM (large language model) and set the general direction of the conversations.
    [Image: Sample system prompts]
  12. Click Save.