Chapter 3. Solution Server configurations


Solution Server is a component that allows Red Hat Developer Lightspeed for MTA to build a collective memory of source code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio (VS) Code, the Solution Server draws on patterns of how source code was previously changed to resolve similar issues (also called solved examples) and suggests a resolution with a higher confidence level derived from those previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue that become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.

The Solution Server delivers two primary benefits to users:

  • Contextual Hints: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems.
  • Migration Success Metrics: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of Red Hat Developer Lightspeed for MTA successfully migrating a given code segment.

Solution Server is an optional component in Red Hat Developer Lightspeed for MTA. You must complete the following configurations before you can place a code resolution request.

Important

Solution Server is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Table 3.1. Configurable large language models and providers in the Tackle custom resource

  LLM Provider (Tackle CR value)     Large language model examples for Tackle CR configuration
  OpenShift AI platform              Models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API
  OpenAI (openai)                    gpt-4, gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  Azure OpenAI (azure_openai)        gpt-4, gpt-35-turbo
  Amazon Bedrock (bedrock)           anthropic.claude-3-5-sonnet-20241022-v2:0, meta.llama3-1-70b-instruct-v1:0
  Google Gemini (google)             gemini-2.0-flash-exp, gemini-1.5-pro
  Ollama (ollama)                    llama3.1, codellama, mistral
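
Each table row maps to the kai_llm_provider and kai_llm_model variables in the Tackle CR, which are described in the procedure later in this chapter. As an illustrative sketch only, the Amazon Bedrock row translates to the following values; substitute the provider and model that apply to your environment:

    kai_llm_provider: bedrock
    kai_llm_model: meta.llama3-1-70b-instruct-v1:0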

3.1. Configuring the model secret key

You must configure the Kubernetes secret for the large language model (LLM) provider in the Red Hat OpenShift project where you installed the MTA operator.

Note

You can replace oc in the following commands with kubectl.

Note

You must create a secret in your OpenShift cluster so that the MTA operator can create the resources necessary for the Solution Server.

Procedure

  1. Create a credentials secret named kai-api-keys in the openshift-mta project.

    1. For Amazon Bedrock as the provider, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID> \
       --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
    2. For Azure OpenAI as the provider, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=AZURE_OPENAI_API_KEY='<YOUR_AZURE_OPENAI_API_KEY>'
    3. For Google as the provider, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=GEMINI_API_KEY='<YOUR_GOOGLE_API_KEY>'
    4. For the OpenAI-compatible providers, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=OPENAI_API_BASE='https://example.openai.com/v1' \
       --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
      Note

      You can also set the base URL as the kai_llm_baseurl variable in the Tackle custom resource, as shown in the sketch after this procedure.

  2. Optional: Force a reconcile so that the MTA operator picks up the secret immediately:

    oc patch tackle tackle -n openshift-mta --type=merge -p \
    '{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
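
As an alternative to the OPENAI_API_BASE key in the secret, you can set the base URL under spec in the Tackle custom resource through the kai_llm_baseurl variable, as noted in step 1. A minimal sketch; the endpoint URL is a placeholder to replace with your own:

    spec:
      kai_llm_baseurl: https://example.openai.com/v1

To confirm that the secret exists before you continue, list it by name:

    oc get secret kai-api-keys -n openshift-mta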

Solution Server integrates with the MTA Hub backend component to use the database and volumes necessary to store and retrieve the solved examples.

To enable Solution Server and other AI configurations in the Red Hat Developer Lightspeed for migration toolkit for applications VS Code extension, you must modify the Tackle custom resource (CR) with additional parameters.

Prerequisites

  1. You deployed an additional RWO volume for the Red Hat Developer Lightspeed for MTA database if you want to use Red Hat Developer Lightspeed for MTA. See Persistent volume requirements for more information.
  2. You installed the MTA operator v8.0.0.
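
You can check that the additional volume is bound before you continue. Claim names vary by installation, so this is only an illustrative check:

    oc get pvc -n openshift-mta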

Procedure

  1. Log in to the Red Hat OpenShift cluster and switch to the openshift-mta project.
  2. Edit the Tackle CR settings in the tackle_hub.yaml file with the following command:

    oc edit tackle
  3. Enter applicable values for the kai_llm_provider and kai_llm_model variables.

    ---
    kind: Tackle
    apiVersion: tackle.konveyor.io/v1alpha1
    metadata:
      name: mta
      namespace: openshift-mta
    spec:
      kai_solution_server_enabled: true
      kai_llm_provider: <provider-name> # For example, openai.
      # Optional: pick a suitable model for your provider.
      kai_llm_model: <model-name>
    ...
    Note

    For the OpenAI provider, the kai_llm_provider value is openai, as shown in the filled-in example after this procedure.

  4. Apply the Tackle CR in the openshift-mta project by using the following command:

     oc apply -f tackle_hub.yaml
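
A minimal filled-in example of the relevant spec fields, assuming the OpenAI provider with the gpt-4o model from Table 3.1; substitute the provider and model that you configured:

    spec:
      kai_solution_server_enabled: true
      kai_llm_provider: openai
      kai_llm_model: gpt-4o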

Verification

  1. Enter the following command to verify the Red Hat Developer Lightspeed for MTA resources deployed for the Solution Server:

    oc get deploy,svc -n openshift-mta | grep -E 'kai-(api|db|importer)'
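
If the Solution Server is running, the command returns the matching deployments and services. The exact resource names, counts, and ages below are illustrative only:

    deployment.apps/kai-api        1/1     1            1           5m
    deployment.apps/kai-db         1/1     1            1           5m
    deployment.apps/kai-importer   1/1     1            1           5m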