Creating custom AI providers using AIChatProvider
If you are using an AI model or service that Brightspot does not currently support (that is, an alternative to Amazon Bedrock or Google Vertex), you can integrate it yourself by implementing the AIChatProvider interface. This guide explains the interface's methods and provides code examples for implementing each one.
To create a custom AI provider, implement the AIChatProvider interface and override its methods.
Example implementation:
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

// Brightspot-specific imports (AIChatProvider, AIChatRequest, Message, Prompt,
// Record, Recordable, ToolUi, and so on) are omitted here.
@Recordable.DisplayName("my-custom-provider")
public class MyCustomAIProvider extends Record implements AIChatProvider {

    @ToolUi.ValueGeneratorClass(AIChatProviderModelIdValueGenerator.class)
    private String model;

    public String getModel() {
        return model;
    }

    public void setModel(String model) {
        this.model = model;
    }

    @Override
    public Set<String> getModelNames() {
        // Return the set of model names this provider supports.
        return Set.of("model-v1", "model-v2");
    }

    @Override
    public void invokeLlm(AIChatRequest request) {
        // Simulate invoking an LLM synchronously.
        String prompt = getPrompt(request);
        String response = "Simulated response for: " + prompt;
        request.getResponse().setText(response);
        request.getChat().save();
    }

    @Override
    public CompletableFuture<?> invokeLlmAsync(AIChatRequest request) {
        // Simulate invoking an LLM asynchronously.
        return CompletableFuture.runAsync(() -> {
            String prompt = getPrompt(request);
            String response = "Simulated async response for: " + prompt;
            request.getResponse().setText(response);
            request.getChat().save();
        });
    }

    @Override
    public Message templatedContextMessage(Prompt prompt, Message contextMessage) {
        // Combine the user's prompt and the supplied context into a templated message.
        Message message = new Message();
        message.setText(prompt.getText());
        message.setTemplatedText(
                "User question: " + prompt.getText() + " Context: " + contextMessage.getText());
        return message;
    }

    private String getPrompt(AIChatRequest request) {
        // Flatten the conversation history into a single prompt string.
        return request.getConversationHistory().stream()
                .map(m -> m.getUser().toString() + " " + m.getTemplatedText())
                .collect(Collectors.joining("\n"));
    }
}
getModelNames
The getModelNames method returns a set of supported model names. These names are tied to the @ToolUi.ValueGeneratorClass annotation, which enables the models to appear as selectable options in the UI.
Key integration:
@ToolUi.ValueGeneratorClass(AIChatProviderModelIdValueGenerator.class)
private String model;
- Annotation purpose—The @ToolUi.ValueGeneratorClass annotation ensures that the values returned by getModelNames populate the dropdown or selection options for the model field in the UI.
- User interaction—In the UI, users can choose from the set of models defined by the getModelNames method.
Example usage:
AIChatProvider provider = new MyCustomAIProvider();
System.out.println("Supported models: " + provider.getModelNames());
Output (element order may vary, because Set.of does not guarantee iteration order):
Supported models: [model-v1, model-v2]
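In a production provider, getModelNames might query the service for its available models rather than hardcoding them. The following is a minimal sketch using the JDK's java.net.http.HttpClient; the https://api.example.com/v1/models endpoint and its one-model-ID-per-line response format are hypothetical, so adapt both to your service's actual API:
@Override
public Set<String> getModelNames() {
    // Hypothetical models endpoint; replace with your service's actual API.
    HttpRequest httpRequest = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/v1/models"))
            .GET()
            .build();
    try {
        HttpResponse<String> httpResponse = HttpClient.newHttpClient()
                .send(httpRequest, HttpResponse.BodyHandlers.ofString());
        // Assumes the response body lists one model ID per line.
        return Set.of(httpResponse.body().split("\n"));
    } catch (IOException e) {
        // Fall back to a static list if the service is unreachable.
        return Set.of("model-v1", "model-v2");
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return Set.of("model-v1", "model-v2");
    }
}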
invokeLlm
Invoke the LLM synchronously and populate the response in AIChatRequest.
Example usage:
AIChatRequest request = AIChatRequest.newBuilder()
        .newChat()
        .withUserPrompt("Hello, AI!")
        .build();
AIChatProvider provider = new MyCustomAIProvider();
provider.invokeLlm(request);
System.out.println("LLM Response: " + request.getResponse().getText());
Output:
LLM Response: Simulated response for: Hello, AI!
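In a real implementation, invokeLlm would call your service's completion endpoint rather than fabricate a string. The sketch below uses the JDK's java.net.http.HttpClient against a hypothetical https://api.example.com/v1/complete endpoint that takes the prompt as a plain-text request body and returns the completion as plain text; the request and response shapes are assumptions, not a real API:
@Override
public void invokeLlm(AIChatRequest request) {
    String prompt = getPrompt(request);
    // Hypothetical completion endpoint; the request/response shape is illustrative only.
    HttpRequest httpRequest = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/v1/complete"))
            .header("Content-Type", "text/plain")
            .POST(HttpRequest.BodyPublishers.ofString(prompt))
            .build();
    try {
        HttpResponse<String> httpResponse = HttpClient.newHttpClient()
                .send(httpRequest, HttpResponse.BodyHandlers.ofString());
        request.getResponse().setText(httpResponse.body());
        request.getChat().save();
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IllegalStateException("LLM call was interrupted", e);
    }
}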
invokeLlmAsync
Invoke the LLM asynchronously and handle the response with a CompletableFuture.
Example usage:
AIChatRequest request = AIChatRequest.newBuilder()
        .newChat()
        .withUserPrompt("Hello, async AI!")
        .build();
AIChatProvider provider = new MyCustomAIProvider();
CompletableFuture<?> future = provider.invokeLlmAsync(request);
future.thenRun(() -> {
    System.out.println("Async LLM Response: " + request.getResponse().getText());
});
future.join(); // Block until the async call completes (useful in a standalone test).
Output:
Async LLM Response: Simulated async response for: Hello, async AI!
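One design note: CompletableFuture.runAsync executes on the common ForkJoinPool by default. If you want to isolate LLM calls from other asynchronous work, a variant of invokeLlmAsync can run on a dedicated executor; the pool size below is an arbitrary example:
private static final ExecutorService LLM_EXECUTOR = Executors.newFixedThreadPool(4);

@Override
public CompletableFuture<?> invokeLlmAsync(AIChatRequest request) {
    // Reuse the synchronous path, but run it on a dedicated pool
    // instead of the common ForkJoinPool.
    return CompletableFuture.runAsync(() -> invokeLlm(request), LLM_EXECUTOR);
}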
templatedContextMessage
The templatedContextMessage method allows you to format and customize how the context is inserted into a prompt. This ensures the AI receives structured and relevant information to generate accurate responses.
Example usage:
Prompt prompt = new Prompt("What are some good Italian restaurants?");
Message contextMessage = new Message();
contextMessage.setText("I am in downtown San Francisco and prefer outdoor seating.");
AIChatProvider provider = new MyCustomAIProvider();
Message templatedMessage = provider.templatedContextMessage(prompt, contextMessage);
System.out.println("Text: " + templatedMessage.getText());
System.out.println("TemplatedText: " + templatedMessage.getTemplatedText());
Output:
Text: What are some good Italian restaurants?
TemplatedText: User question: What are some good Italian restaurants? Context: I am in downtown San Francisco and prefer outdoor seating.
Best practices
- Thread safety—Ensure thread safety if your implementation shares resources (such as an HTTP client or a cache) across requests.
- Asynchronous handling—Handle CompletableFuture cancellations gracefully in invokeLlmAsync, as shown in the sketch after this list.
- Custom templating—Customize templatedContextMessage based on your AI provider's requirements and templating logic.
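As an illustration of the asynchronous-handling point above, the following sketch attaches a whenComplete handler to the future returned by invokeLlmAsync so that cancellations and failures are at least observed. LOGGER is an assumed SLF4J-style logger, and note that cancelling a CompletableFuture does not interrupt a task that is already running:
@Override
public CompletableFuture<?> invokeLlmAsync(AIChatRequest request) {
    CompletableFuture<Void> future = CompletableFuture.runAsync(() -> invokeLlm(request));
    future.whenComplete((result, error) -> {
        if (error instanceof CancellationException) {
            // The caller cancelled the request. cancel() does not interrupt the
            // running task, so long-running work should also check
            // future.isCancelled() at safe points.
            LOGGER.info("LLM call cancelled"); // LOGGER is assumed to be defined elsewhere.
        } else if (error != null) {
            LOGGER.warn("LLM call failed", error);
        }
    });
    return future;
}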
With this guide and examples, you can implement your own custom AI provider using the AIChatProvider interface.