Chapter 3. Solution Server configurations
Solution Server is a component that enables Red Hat Developer Lightspeed for MTA to build a collective memory of source code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio (VS) Code, the Solution Server draws on previous patterns of how source code was changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution with a higher confidence level derived from those previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue that become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.
The Solution Server delivers two primary benefits to users:
- Contextual Hints: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems.
- Migration Success Metrics: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of Red Hat Developer Lightspeed for MTA successfully migrating a given code segment.
Solution Server is an optional component in Red Hat Developer Lightspeed for MTA. You must complete the following configurations before you can place a code resolution request.
Solution Server is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
| LLM provider (Tackle CR value) | Large language model examples for Tackle CR configuration |
|---|---|
| OpenShift AI platform | Models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API |
| OpenAI | |
| Azure OpenAI | |
| Amazon Bedrock | |
| Google Gemini | |
| Ollama | |
3.1. Configuring the model secret key
You must configure the Kubernetes secret for the large language model (LLM) provider in the Red Hat OpenShift project where you installed the MTA operator.
You can replace oc in the following commands with kubectl.
You must create a secret in your OpenShift cluster so that the MTA operator can create the resources necessary for the Solution Server.
Procedure
Create a credentials secret named kai-api-keys in the openshift-mta project.

For Amazon Bedrock as the provider, type:

oc create secret generic aws-credentials -n openshift-mta \
  --from-literal=AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
For Azure OpenAI as the provider, type:
oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=AZURE_OPENAI_API_KEY='<YOUR_AZURE_OPENAI_API_KEY>'
For Google as the provider, type:
oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=GEMINI_API_KEY='<YOUR_GOOGLE_API_KEY>'
For the OpenAI-compatible providers, type:
oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=OPENAI_API_BASE='https://example.openai.com/v1' \
  --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
Note: You can also set the base URL as the kai_llm_baseurl variable in the Tackle custom resource, as shown in the fragment below.
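For illustration, here is a minimal fragment of the Tackle CR with the base URL set. The placement of kai_llm_baseurl directly under spec is an assumption based on how Tackle CR variables are set in Section 3.2, and the URL is a placeholder:

spec:
  kai_llm_baseurl: https://example.openai.com/v1    # assumed placement; placeholder URL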
Optional: Force a reconcile so that the MTA operator picks up the secret immediately:
kubectl patch tackle tackle -n openshift-mta --type=merge -p \
  '{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
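Before continuing, you can confirm that the secret exists. The secret name and namespace are the ones used in the commands above:

oc get secret kai-api-keys -n openshift-mta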
3.2. Enabling Red Hat Developer Lightspeed for MTA in the Tackle custom resource
Solution Server integrates with the MTA Hub backend component to use the database and volumes necessary to store and retrieve the solved examples.
To enable Solution Server and other AI configurations in the Red Hat Developer Lightspeed for migration toolkit for applications VS Code extension, you must modify the Tackle custom resource (CR) with additional parameters.
Prerequisites
- You deployed an additional RWO volume for the Red Hat Developer Lightspeed for MTA database if you want to use Red Hat Developer Lightspeed for MTA. See Persistent volume requirements for more information.
- You installed the MTA operator v8.0.0.
Procedure
- Log in to the Red Hat OpenShift cluster and switch to the openshift-mta project.
- Edit the Tackle CR settings in the tackle_hub.yaml file with the following command:

oc edit tackle
- Enter applicable values for the kai_llm_provider and kai_llm_model variables (see the sketch after this procedure).

Note: For the OpenAI provider, the kai_llm_provider value is OpenAI.
- Apply the Tackle CR in the openshift-mta project using the following command:

oc apply -f tackle_hub.yaml
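For illustration, here is a minimal sketch of what the edited Tackle CR might look like. The apiVersion, kind, and metadata follow the usual Konveyor Tackle CR layout, the placement of the kai_llm_* parameters directly under spec is an assumption based on how Tackle CR variables are described in this chapter, and the model name is a placeholder you replace with a model your provider serves:

apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: openshift-mta
spec:
  kai_llm_provider: OpenAI                          # per the note above; differs for other providers
  kai_llm_model: <YOUR_MODEL_NAME>                  # placeholder: a model served by your provider
  kai_llm_baseurl: https://example.openai.com/v1    # optional; see the note in Section 3.1

After you apply the CR, you can confirm that the values were stored by running oc get tackle tackle -n openshift-mta -o yaml and checking the spec section.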
Verification
Enter the following command to verify the Red Hat Developer Lightspeed for MTA resources deployed for the Solution Server:
oc get deploy,svc -n openshift-mta | grep -E 'kai-(api|db|importer)'
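As an additional check, you can watch the Solution Server pods reach the Running state. The component names in the pattern below are taken from the verification command above:

oc get pods -n openshift-mta | grep -E 'kai-(api|db|importer)'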