Configuring and Using Red Hat Developer Lightspeed for MTA


Migration Toolkit for Applications 8.0

Using Red Hat Developer Lightspeed for Migration Toolkit for Applications to modernize your applications

Red Hat Customer Content Services

Abstract

By using Red Hat Developer Lightspeed for Migration Toolkit for Applications (MTA), you can modernize applications in your organization by applying LLM-driven code changes to resolve issues found through static code analysis. You can automate code fixes and review and apply the suggested code changes with minimal manual effort.

Starting with version 8.0.0, Migration Toolkit for Applications (MTA) integrates with large language models (LLMs) through the Red Hat Developer Lightspeed for migration toolkit for applications component in the Visual Studio (VS) Code extension. You can use Red Hat Developer Lightspeed for MTA to apply LLM-driven code changes to resolve issues found through static code analysis of Java applications.

1.1. Use case for AI-driven code fixes

Migration Toolkit for Applications (MTA) performs the static code analysis for a specified target technology to which you want to migrate your applications. Red Hat provides 2400+ analysis rules in MTA for various Java technologies, and you can extend the ruleset for custom frameworks or new technologies by creating custom rules.
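
For illustration, a custom rule is a YAML definition that pairs a detection condition with target labels and a message. The following is a minimal sketch in the analyzer rule format; the rule ID, package pattern, and message are placeholders rather than a rule shipped with MTA:

- ruleID: custom-framework-00001
  description: Legacy framework API usage
  labels:
    - konveyor.io/target=quarkus
  when:
    java.referenced:
      location: IMPORT
      pattern: com.example.legacyframework*
  message: Replace the legacy framework API with its Quarkus equivalent.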

The static code analysis describes the issues in your code that must be resolved. As you perform analysis for a large portfolio of applications, the issue description and the rule definition that may contain additional information form a large corpus of data that contains repetitive patterns of problem definitions and solutions.

Migrators do duplicate work by resolving issues that are repeated across applications in different migration waves.

Red Hat Developer Lightspeed for MTA works by collecting and storing the changes in the code for a large collection of applications, finding context to generate prompts for the LLM of your choice, and by generating code resolutions produced by the LLM to address specific issues.

Red Hat Developer Lightspeed for MTA uses Retrieval Augmented Generation (RAG) for context-based resolutions of issues in code. By using RAG, Red Hat Developer Lightspeed for MTA improves the context shared with the LLM to generate more accurate suggestions to fix the issue in the code. The context allows the LLM to "reason" and generate suggestions for issues detected in the code. This mechanism helps to overcome the limited context size in LLMs that prevents them from analyzing the entire source code of an application.

The context is a combination of the source code, the issue description, and solved examples:

  • Description of issues detected by MTA when you run a static code analysis for a given set of target technologies.
  • (Optional) The default and custom rules may contain additional information that you include, which can help Red Hat Developer Lightspeed for MTA define the context.
  • Solved examples are code changes from other migrations that establish a pattern of resolution for an issue and can be reused in the future. A solved example is created when a Migrator accepts a resolution that updates the code in a previous analysis, or when a Migrator manually fixes an unfamiliar issue in a legacy application. Solved examples are stored in the Solution Server.

    More instances of solved examples for an issue enhance the context and improve the success metrics of rules that trigger the issue. A higher success metric for an issue indicates a higher confidence level associated with the accepted resolutions for that issue in previous analyses.

  • (Optional) If you enable the Solution Server, it extracts a pattern of resolution, called the migration hint, that can be used by the LLM to generate a more accurate fix suggestion in a future analysis.

    The improvement in the quality of migration hints results in more accurate code resolutions. Accurate code resolutions from the LLM make it more likely that the user accepts an update to the code. The updated code is stored in the Solution Server to generate a better migration hint in the future.

    This cyclical improvement of resolution patterns and migration hints from the Solution Server leads to more reliable code changes as you migrate applications in different migration waves.

You can request AI-assisted code resolutions that obtain additional context from several potential sources, such as analysis issues, IDE diagnostic information, and past migration data via the Solution Server.

The Solution Server acts as an institutional memory that stores changes to source code from analyses of applications in your organization. This helps you to leverage the recurring patterns of solutions for issues that are repeated across many applications.

When you use the Solution Server, Red Hat Developer Lightspeed for MTA suggests a code resolution that is based on solved examples or code changes from past analyses. You can view a diff of the updated portions of the code and the original source code to do a manual review.

It also enables you to control the analysis through manual reviews of the suggested AI resolutions: you can accept, reject, or edit the suggested code changes while reducing the overall time and effort required to prepare your application for migration.

In the agentic AI mode, Red Hat Developer Lightspeed for MTA streams an automated analysis of the code in a loop until all issues are resolved and changes the code with the updates. In the initial run, the AI agent:

  • Plans the context to define the issues.
  • Chooses a suitable sub-agent for the analysis task and works with the LLM to generate fix suggestions. The reasoning transcript and files to be changed are displayed to the user.
  • Applies the changes to the code once the user approves the updates.

If you allow the agentic AI to continue making changes, it compiles the code and runs a partial analysis. In this iteration, the agentic AI attempts to fix diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can review the changes and accept the agentic AI’s suggestion to address these diagnostic issues.

After each iteration of applying changes to the code, the agentic AI asks if you want the agent to continue fixing more issues. When you accept, it runs another iteration of automated analysis until it has resolved all issues or it has made a maximum of two attempts to fix an issue.

Agentic AI generates a new preview in each iteration when it updates the code with the suggested resolutions. The time taken by the agentic AI to complete all iterations depends on the number of new diagnostic issues that are detected in the code.

Important

Developer Lightspeed for MTA is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • Model agnostic - Red Hat Developer Lightspeed for MTA follows a "Bring Your Own Model" approach, allowing your organization to use a preferred LLM.
  • Iterative refinement - Red Hat Developer Lightspeed for MTA can include an agent that iterates through the source code to run a series of automated analyses that resolve both code issues and diagnostic issues.
  • Contextual code generation - By leveraging AI for static code analysis, Red Hat Developer Lightspeed for MTA breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases.
  • No fine-tuning - You do not need to fine-tune your model with a suitable data set for analysis, which leaves you free to use and switch LLM models as your requirements change.
  • Learning and Improvement - As more parts of a codebase are migrated with Red Hat Developer Lightspeed for MTA, it can use RAG to learn from the available data and provide better recommendations in subsequent application analysis.

The Getting started section contains information to walk you through the prerequisites, persistent volume requirements, installation, and workflows that help you to decide how you want to use the Red Hat Developer Lightspeed for migration toolkit for applications.

Note

To get support for features in Red Hat Developer Lightspeed for MTA, you require a Red Hat Advanced Developer Suite (RHADS) subscription.

2.1. Prerequisites

This section lists the prerequisites required to successfully use the generative AI features in the Red Hat Developer Lightspeed for MTA Visual Studio (VS) Code extension.

Before you install Red Hat Developer Lightspeed for MTA, you must:

  • Install the Language Support for Java™ by Red Hat extension
  • Install Java v17 or later
  • Install Maven v3.9.9 or later
  • Install Git and add it to the $PATH variable
  • Install the MTA Operator 8.0.0

    The MTA Operator is mandatory if you plan to enable the Solution Server, which works with the large language model (LLM) to generate code changes. You must log in to the openshift-mta project, where you enable the Solution Server in the Tackle custom resource (CR).

  • Create an API key for an LLM.

    You must enter the provider value and model name in the Tackle CR to enable the generative AI configuration in the MTA VS Code plugin.

    Table 2.1. Configurable large language models and providers
    The Tackle CR value for each provider is shown in parentheses, followed by examples of models that you can configure in the Tackle CR.

    • OpenShift AI platform: models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API
    • OpenAI (openai): gpt-4, gpt-4o, gpt-4o-mini, gpt-3.5-turbo
    • Azure OpenAI (azure_openai): gpt-4, gpt-35-turbo
    • Amazon Bedrock (bedrock): anthropic.claude-3-5-sonnet-20241022-v2:0, meta.llama3-1-70b-instruct-v1:0
    • Google Gemini (google): gemini-2.0-flash-exp, gemini-1.5-pro
    • Ollama (ollama): llama3.1, codellama, mistral

Note

The availability of public LLM models is maintained by the respective LLM provider.

2.2. Persistent volume requirements

The Solution Server component requires a backend database to store code changes from previous analyses.

If you plan to enable Solution Server, you must create a 5Gi RWO persistent volume used by the Red Hat Developer Lightspeed for MTA database. See Persistent volume requirements for more information.
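
As an illustration only, a 5Gi RWO claim might look like the following; whether you create the claim yourself depends on your storage setup, and the claim name is a placeholder:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kai-db-volume-claim
  namespace: openshift-mta
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi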

2.3. Installation

You can install the Migration Toolkit for Applications (MTA) 8.0.0 Visual Studio (VS) Code plug-in from the VS Code marketplace.

You can use the MTA VS Code plug-in to perform analysis and optionally enable Red Hat Developer Lightspeed for migration toolkit for applications to use generative AI capabilities. You can fix code issues before migrating the application to target technologies by using the generative AI capabilities.

You can opt to use Red Hat Developer Lightspeed for migration toolkit for applications features to request a code fix suggestion after running a static code analysis of an application. Red Hat Developer Lightspeed for MTA draws on the manual changes made to code throughout your organization in different migration waves and creates a context that is shared with a large language model (LLM). The LLM suggests code resolutions based on the issue description, context, and previous examples of code changes that resolved issues.

To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server or the agent mode. The configurations that you complete before you request code fixes depend on how you prefer to request code resolutions.

Note

If you make any change after enabling the generative AI settings in the extension, you must restart the extension for the change to take effect.

To use the Solution Server for code fix suggestions:

  • Create a secret for your LLM key in the Red Hat OpenShift cluster.
  • Enable the Solution Server in the Tackle custom resource (CR).
  • Configure the LLM base URL and model in the Tackle CR.
  • Enable the generative AI option in the MTA extension settings.
  • Add the Solution Server configuration in the settings.json file.
  • Configure the profile settings and activate the LLM provider in the provider-settings.yaml file.

To use the agent mode for code fix suggestions:

  • Enable the generative AI and the agent mode in the MTA extension settings.
  • Configure the profile settings and activate the LLM provider in the provider-settings.yaml file.

2.5. Example of generating code fix suggestions

This example walks you through generating code fixes for a Java application that must be migrated to the quarkus target technology. To generate resolutions for issues in the code, this example uses the agentic AI mode and my-model as the large language model (LLM) that you deployed in OpenShift AI.

Procedure

  1. Open the my-Java project in Visual Studio (VS) Code.
  2. Download the Red Hat Developer Lightspeed for migration toolkit for applications extension from the VS Code marketplace.
  3. Open Command Palette:

    1. Type Ctrl+Shift+P in Windows and Linux systems.
    2. Type Cmd+Shift+P in Mac systems.
  4. Type Preferences: Open Settings (UI) in the Command Palette to open the VS Code settings and select Extensions > MTA.
  5. Select Gen AI:Agent Mode.
  6. In the Red Hat Developer Lightspeed for MTA extension, click Open Analysis View.
  7. Type MTA: Manage Analysis Profile in the Command Palette to open the analysis profile page.
  8. Configure the following fields:

    1. Profile Name: Type a profile name
    2. Target Technologies: quarkus
    3. Custom Rules: Select custom rules if you want to include them while running the analysis. By default, Red Hat Developer Lightspeed for MTA enables Use Default Rules for quarkus.
  9. Close the profile manager.
  10. Type MTA: Open the Gen AI model provider configuration file in the Command Palette.
  11. Configure the following in the provider-settings file and close it:

    models:
      openshift-example-model: &active
        environment:
          OPENAI_API_KEY: "<Server's OPENAI_API_KEY>"
          CA_BUNDLE: "<Server's CA Bundle path>"
        provider: "ChatOpenAI"
        args:
          model: "my-model"
          configuration:
            baseURL: "https://<serving-name>-<data-science-project-name>.apps.konveyor-ai.example.com/v1"
    Note

    You must change the provider-settings configuration if you plan to use a different LLM provider.

  12. Type MTA: Open Analysis View in the Command Palette.
  13. Click Start to start the MTA RPC server.
  14. Select the profile you configured.
  15. Click Run Analysis to scan the Java application.

    MTA identifies the issues in the code.

  16. Click the solutions icon in an issue to request suggestions to resolve the issue.

    Red Hat Developer Lightspeed for MTA streams the issue description, a preview of the code changes that resolve the issue, and the file(s) in which the changes are to be made.

    You can review the code changes in the editor and accept or reject the changes. If you accept the changes, Red Hat Developer Lightspeed for MTA creates a new file with the accepted code changes.

  17. Click Continue to allow Red Hat Developer Lightspeed for MTA to run a follow-up analysis.

    This round of analysis detects lint issues, compilation issues, or diagnostic issues that may have occurred when you accepted the suggested code change.

    Repeat the review and accept or reject the resolutions. If you allow it to continue, Red Hat Developer Lightspeed for MTA runs repeated iterations of the scan until all issues are resolved.

Chapter 3. Solution Server configurations

Solution Server is a component that allows Red Hat Developer Lightspeed for MTA to build a collective memory of source code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio (VS) Code, the Solution Server retrieves previous patterns of how source code was changed to resolve issues similar to those in the current file (also called solved examples), and suggests a resolution that has a higher confidence level derived from previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue, which become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.

The Solution Server delivers two primary benefits to users:

  • Contextual Hints: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems.
  • Migration Success Metrics: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of Red Hat Developer Lightspeed for MTA successfully migrating a given code segment.

Solution Server is an optional component in Red Hat Developer Lightspeed for MTA. You must complete the following configurations before you can place a code resolution request.

Important

Solution Server is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Table 3.1. Configurable large language models and providers in the Tackle custom resource
The Tackle CR value for each provider is shown in parentheses, followed by examples of models that you can configure in the Tackle CR.

  • OpenShift AI platform: models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API
  • OpenAI (openai): gpt-4, gpt-4o, gpt-4o-mini, gpt-3.5-turbo
  • Azure OpenAI (azure_openai): gpt-4, gpt-35-turbo
  • Amazon Bedrock (bedrock): anthropic.claude-3-5-sonnet-20241022-v2:0, meta.llama3-1-70b-instruct-v1:0
  • Google Gemini (google): gemini-2.0-flash-exp, gemini-1.5-pro
  • Ollama (ollama): llama3.1, codellama, mistral

3.1. Configuring the model secret key

You must configure the Kubernetes secret for the large language model (LLM) provider in the Red Hat OpenShift project where you installed the MTA Operator.

Note

You can replace oc in the following commands with kubectl.

Note

You must create an LLM API key secret in your OpenShift cluster to produce the resources necessary for the Solution Server. If you do not configure the LLM API key secret, Red Hat Developer Lightspeed for MTA does not create the resources necessary to run the Solution Server.

Procedure

  1. Create a credentials secret named kai-api-keys in the openshift-mta project.

    1. For Amazon Bedrock as the provider, type:

      oc create secret generic aws-credentials -n openshift-mta \
       --from-literal=AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID> \
       --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
    2. For Azure OpenAI as the provider, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=AZURE_OPENAI_API_KEY='<YOUR_AZURE_OPENAI_API_KEY>'
    3. For Google as the provider, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=GEMINI_API_KEY='<YOUR_GOOGLE_API_KEY>'
    4. For the OpenAI-compatible providers, type:

      oc create secret generic kai-api-keys -n openshift-mta \
       --from-literal=OPENAI_API_BASE='https://example.openai.com/v1' \
       --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'
      Note

      You can also set the base URL as the kai_llm_baseurl variable in the Tackle custom resource, as shown in the sketch after this procedure.

  2. (Optional) Force a reconcile so that the MTA Operator picks up the secret immediately:

    kubectl patch tackle tackle -n openshift-mta --type=merge -p \
    '{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
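
If you use an OpenAI-compatible provider, you can record the base URL in the Tackle custom resource instead of in the secret. A minimal sketch, assuming the spec fields shown in the next procedure; the URL is illustrative:

spec:
  kai_llm_provider: openai
  kai_llm_model: <model-name>
  kai_llm_baseurl: "https://<serving-name>-<data-science-project-name>.apps.konveyor-ai.example.com/v1"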

Solution Server integrates with the MTA Hub backend component to use the database and volumes necessary to store and retrieve the solved examples.

To enable Solution Server and other AI configurations in the Red Hat Developer Lightspeed for migration toolkit for applications VS Code extension, you must modify the Tackle custom resource (CR) with additional parameters.

Prerequisites

  1. You deployed an additional RWO volume for the Red Hat Developer Lightspeed for MTA database if you want to use the Solution Server. See Persistent volume requirements for more information.
  2. You installed the MTA operator v8.0.0.

Procedure

  1. Log in to the Red Hat OpenShift cluster and switch to the openshift-mta project.
  2. Edit the Tackle CR settings in the tackle_hub.yaml file with the following command:

    oc edit tackle
  3. Enter applicable values for kai_llm_provider and kai_llm_model variables.

    ---
    kind: Tackle
    apiVersion: tackle.konveyor.io/v1alpha1
    metadata:
      name: mta
      namespace: openshift-mta
    spec:
      kai_solution_server_enabled: true
      kai_llm_provider: <provider-name> #For example, OpenAI.
      # optional, pick a suitable model for your provider
      kai_llm_model: <model-name>
    ...
    Note

    For OpenAI models and LLMs deployed in the OpenShift AI cluster, enter OpenAI as the kai_llm_provider value.

  4. Apply the Tackle CR in the openshift-mta project by using the following command:

     $ oc apply -f tackle_hub.yaml

Verification

  1. Enter the following command to verify the Red Hat Developer Lightspeed for MTA resources deployed for the Solution Server:

    oc get deploy,svc -n openshift-mta | grep -E 'kai-(api|db|importer)'
    Note

    When you enable the Solution Server, the Solution Server API endpoint is served through the MTA Hub. You do not need to complete any further tasks, such as creating a route for the Solution Server API.
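
    For reference, the Solution Server API endpoint that you later configure in the VS Code extension follows the MTA Hub route; the hostname in this example is illustrative:

    https://mta-openshift-mta-kai.apps.konveyor-ai.example.com/hub/services/kai/api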

Red Hat Developer Lightspeed for MTA provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions for resolving issues identified in the current code.

Red Hat Developer Lightspeed for MTA is designed to be model agnostic. It works with LLMs that are run in different environments (in local containers, as local AI, or as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman desktop, and OpenAI API compatible models.

The code fix suggestions produced to resolve issues detected through an analysis depend on the LLM’s capabilities.

You can run an LLM from the following generative AI providers:

  • OpenAI
  • Azure OpenAI
  • Google Gemini
  • Amazon Bedrock
  • Ollama

You can also run OpenAI API-compatible LLMs deployed as:

  • A service in your OpenShift AI cluster
  • Locally in the Podman AI Lab in your system.

The code suggestions from Red Hat Developer Lightspeed for migration toolkit for applications differ based on the large language model (LLM) that you use. Therefore, you may want to use an LLM that caters to your specific requirements.

Red Hat Developer Lightspeed for MTA integrates with LLMs that are deployed as a scalable service on OpenShift AI clusters. These deployments provide you with granular control over resources such as compute, cluster nodes, and auto-scaling Graphical Processing Units (GPUs) while enabling you to leverage LLMs to resolve code issues at a large scale.

An example workflow for configuring an LLM service on OpenShift AI broadly requires the following configurations:

  • Installing and configuring the following infrastructure resources:

    • Install a Red Hat OpenShift cluster and the OpenShift AI Operator
    • Configure a GPU machineset
    • (Optional) Configure an autoscaler custom resource (CR) and a machine autoscaler CR
  • Configuring the OpenShift AI platform:

    • Configure a data science project
    • Configure a serving runtime
    • Configure an accelerator profile
  • Deploying the LLM through OpenShift AI:

    • Upload your model to an AWS-compatible bucket
    • Add a data connection
    • Deploy the LLM in your OpenShift AI data science project
    • Export the SSL certificate, the OPENAI_API_BASE URL, and other environment variables to access the LLM (see the example after this list)
  • Preparing the LLM for analysis:

    • Configure an OpenAI API key
    • Update the OpenAI API key and the base URL in provider-settings.yaml.
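
A minimal sketch of the export step, assuming an OpenAI-compatible endpoint exposed by your OpenShift AI model server; the route host and certificate path are placeholders:

export OPENAI_API_BASE="https://<serving-name>-<data-science-project-name>.apps.<cluster-domain>/v1"
export CA_BUNDLE="<path-to-the-exported-SSL-certificate>"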

See Configuring LLM provider settings to configure the base URL and the LLM API key in the Red Hat Developer Lightspeed for MTA VS Code extension.

4.2. Configuring LLM provider settings

Red Hat Developer Lightspeed for migration toolkit for applications is large language model (LLM) agnostic and integrates with an LLM of your choice. To enable Red Hat Developer Lightspeed for MTA to access your LLM, you must enter the LLM provider configurations in the provider-settings.yaml file.

The provider-settings.yaml file contains a list of LLM providers that are supported by default. The mandatory environment variables are different for each LLM provider. Depending on the provider that you choose, you can configure additional environment variables for a model in the provider-settings.yaml file. You can also enter a new provider with the required environment variables, the base URL, and the model name.

The provider settings file is available in the Red Hat Developer Lightspeed for MTA Visual Studio (VS) Code extension.

Access the provider-settings.yaml from the VS Code Command Palette by typing Open the GenAI model provider configuration file.

Note

You can select one provider from the list by using the &active anchor in the name of the provider. To use a model from another provider, move the &active anchor to one of the desired provider blocks.

For a model named "my-model" deployed in OpenShift AI with "example-model" as the serving name:

models:
  openshift-example-model: &active
    environment:
      CA_BUNDLE: "<Server's CA Bundle path>"
    provider: "ChatOpenAI"
    args:
      model: "my-model"
      configuration:
        baseURL: "https://<serving-name>-<data-science-project-name>.apps.konveyor-ai.example.com/v1"
Note

When you change the model deployed in OpenShift AI, you must also change the model argument and the baseURL endpoint.

Note

If you want to select a public LLM provider, you must move the &active anchor to the desired block and change the provider arguments.

For an OpenAI model:

OpenAI: &active
    environment:
      OPENAI_API_KEY: "<your-API-key>" # Required
    provider: ChatOpenAI
    args:
      model: gpt-4o # Required

For Azure OpenAI:

AzureChatOpenAI: &active
    environment:
      AZURE_OPENAI_API_KEY: "" # Required
    provider: AzureChatOpenAI
    args:
      azureOpenAIApiDeploymentName: "" # Required
      azureOpenAIApiVersion: "" # Required

For Amazon Bedrock:

AmazonBedrock: &active
    environment:
      ## May have to use if no global `~/.aws/credentials`
      AWS_ACCESS_KEY_ID: "" # Required if a global ~/.aws/credentials file is not present
      AWS_SECRET_ACCESS_KEY: "" # Required if a global ~/.aws/credentials file is not present
      AWS_DEFAULT_REGION: "" # Required
    provider: ChatBedrock
    args:
      model: meta.llama3-70b-instruct-v1:0 # Required
Note

It is recommended that you use the AWS CLI to verify that you have command-line access to AWS services before you proceed with the provider-settings configuration.
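
For example, the following AWS CLI call fails unless your credentials resolve correctly, which makes it a quick pre-check before you edit the provider settings:

aws sts get-caller-identity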

For Google Gemini:

GoogleGenAI: &active
    environment:
      GOOGLE_API_KEY: "" # Required
    provider: ChatGoogleGenerativeAI
    args:
      model: gemini-2.5-pro # Required

For Ollama:

models:
  ChatOllama: &active
    provider: "ChatOllama"
    args:
      model: "granite-code:8b-instruct"
      baseUrl: "127.0.0.1:11434" # example URL

4.3. Configuring the LLM in Podman Desktop

The Podman AI Lab extension enables you to choose an open-source model from a curated list of models and run it locally on your system.

The code fix suggestions generated by a model depend on the model’s capabilities. Models deployed through the Podman AI Lab were found to be insufficient for the complexity of code changes required to fix issues discovered by MTA. You must not use such models in a production environment.

Prerequisites

  • You installed Podman Desktop in your system.
  • You completed initial configurations in Red Hat Developer Lightspeed for MTA required for the analysis.

Procedure

  1. Go to the Podman AI Lab extension and click Catalog under Models.
  2. Download one or more models.
  3. Go to Services and click New Model Service.
  4. Select a model that you downloaded from the Model drop-down menu and click Create Service.
  5. Click the deployed model service to open the Service Details page.
  6. Note the server URL and the model name. You must configure these specifications in the Red Hat Developer Lightspeed for MTA extension.
  7. Export the inference server URL as follows:

    export OPENAI_API_BASE=<server-url>
  8. In the Red Hat Developer Lightspeed for MTA extension, type Open the GenAI model provider configuration file in the Command Palette to open the provider-settings.yaml file.
  9. Enter the model details from Podman Desktop. For example, use the following configuration for a Mistral model.

    podman_mistral: &active
        provider: "ChatOpenAI"
        environment:
          OPENAI_API_KEY: "unused value"
        args:
          model: "mistral-7b-instruct-v0-2"
          base_url: "http://localhost:35841/v1"
    Note

    The Podman Desktop service endpoint does not need a password but the OpenAI library expects the OPENAI_API_KEY to be set. In this case, the value of the OPENAI_API_KEY variable does not matter.

You must configure the following settings in Red Hat Developer Lightspeed for migration toolkit for applications:

  • Visual Studio Code IDE settings.
  • Profile settings that provide context before you request a code fix for a particular application.

After you install the MTA extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the Red Hat Developer Lightspeed for MTA settings.

Red Hat Developer Lightspeed for MTA settings are applied to all AI-assisted analysis that you perform by using the MTA extension. The extension settings can be broadly categorized into debugging and logging, Red Hat Developer Lightspeed for MTA settings, analysis related settings, and Solution Server settings.

Prerequisites

In addition to the overall prerequisites, you have configured the following:

  • You completed the Solution Server configurations in Tackle custom resource if you opt to use the Solution Server.

Procedure

  1. Go to the Red Hat Developer Lightspeed for MTA settings in one of the following ways:

    1. Click Extensions > MTA Extension for VSCode > Settings
    2. Type Ctrl + Shift + P or Cmd + Shift + P on the search bar to open the Command Palette and enter Preferences: Open Settings (UI). Go to Extensions > MTA to open the settings page.
  2. Configure the settings described in the following table:
Table 5.1. Red Hat Developer Lightspeed for MTA extension settings

Log level

Set the log level for the MTA binary. The default log level is debug. The log level controls the verbosity of the logs.

Analyzer path

Specify an MTA custom binary path. If you do not provide a path, Red Hat Developer Lightspeed for MTA uses the default path to the binary.

Auto Accept on Save

This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes.

Gen AI:Enabled

This option is enabled by default. It enables you to get code fixes by using Red Hat Developer Lightspeed for MTA with a large language model.

Gen AI: Agent mode

Enable the experimental Agentic AI flow for analysis. Red Hat Developer Lightspeed for MTA runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, Red Hat Developer Lightspeed for MTA makes the changes in the code and re-analyzes the file.

Gen AI: Excluded diagnostic sources

Add diagnostic sources in the settings.json file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.

Cache directory

Specify the path to a directory in your filesystem to store cached responses from the LLM.

Trace directory

Configure the absolute path to the directory that contains the saved LLM interaction.

Trace enabled

Enable to trace MTA communication with the LLM model. Traces are stored in the trace directory that you configured.

Demo mode

Enable to run Red Hat Developer Lightspeed for MTA in demo mode that uses the LLM responses saved in the cache directory for analysis.

Solution Server:URL

Edit the configurations for the Solution Server in settings.json:

  • "enabled": Enter a boolean value. Set to true to connect the Solution Server client (the Red Hat Developer Lightspeed for MTA extension) to the Solution Server.
  • "url": Configure the URL of the Solution Server endpoint.
  • "auth": The authentication settings allow you to configure a list of options to authenticate to the Solution Server.

    • "enabled": Set to true to enable authentication. If you enable authentication, you must configure the Solution Server realm.
    • "insecure": Set to true to skip SSL certificate verification when clients connect to the Solution Server. Set to false to require certificate verification for connections to the Solution Server.
    • "realm": Enter the name of the Keycloak realm for the Solution Server. If you enabled authentication for the Solution Server, you must configure a Keycloak realm to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm.

Debug:Webview

Enable debug level logging for Webview message handling in VS Code.

See Configuring the Solution Server settings for an example Solution Server configuration.

5.2. Configuring the Solution Server settings

You need a Keycloak realm and the Solution Server URL to connect the Red Hat Developer Lightspeed for MTA extension to the Solution Server.

Prerequisites

  • The Solution Server URL is available.
  • An administrator configured the Keycloak realm for the Solution Server.

Procedure

  1. Type Ctrl + Shift + P or Cmd + Shift + P on the search bar and enter Preferences:Open User Settings (JSON).
  2. In the settings.json file, enter Ctrl + SPACE to enable the auto-complete for the Solution Server configurable fields.
  3. Modify the following configuration as necessary:

    {
        "mta-vscode-extension.solutionServer": {
            "url": "https://mta-openshift-mta-kai.apps.konveyor-ai.example.com/hub/services/kai/api",
            "enabled": true,
            "auth": {
                "enabled": true, // you must enter the username and password
                "insecure": true,
                "realm": "mta"
            }
        }
    }
    Note

    When you enable Solution Server authentication for the first time, you must enter the username and password in the VS Code search bar.

    Tip

    Enter MTA: Restart Solution Server in the Command Palette to restart the Solution Server.

You can use the Visual Studio (VS) Code plugin to run an analysis to discover issues in the code. You can optionally enable Red Hat Developer Lightspeed for migration toolkit for applications to get AI-assisted code suggestions.

To generate code changes using Red Hat Developer Lightspeed for MTA, you must configure a profile that contains all the necessary configurations, such as source and target technologies and the API key to connect to your chosen large language model (LLM).

Prerequisites

  • You completed the Solution Server configurations in Tackle custom resource if you opt to use the Solution Server.
  • You opened a Java project in your VS Code workspace.

Procedure

  1. Open the MTA View Analysis page in either of the following ways:

    1. Click the book icon on the MTA: Issues pane of the MTA extension.
    2. Type Ctrl + Shift + P or Cmd + Shift + P on the search bar to open the Command Palette and enter MTA:Open Analysis View.
  2. Click the settings button on the MTA View Analysis page to configure a profile for your project. The Get Ready to Analyze pane lists the following basic configurations required for an analysis:


Table 5.2. Red Hat Developer Lightspeed for MTA profile settings

Select profile

Create a profile that you can reuse for multiple analyses. The profile name is part of the context provided to the LLM for analysis.

Configure label selector

A label selector filters rules for analysis based on the source or target technology.

Specify one or more target or source technologies (for example, cloud-readiness). Red Hat Developer Lightspeed for MTA uses this configuration to determine the rules that are applied to a project during analysis.

If you specified a new target or source technology in your custom rule, you can type that name to create and add the new item to the list.

Note

You must configure either target or source technologies before running an analysis.

Set rules

Enable default rules and select your custom rule that you want MTA to use for an analysis. You can use the custom rules in addition to the default rules.

Configure generative AI

This option opens the provider-settings.yaml file that contains API keys and other parameters for all supported LLMs. By default, Red Hat Developer Lightspeed for MTA is configured to use the OpenAI LLM. To change the model, move the &active anchor to the desired block. Modify this file with the required arguments, such as the model and API key, to complete the setup.

See Configuring LLM provider settings to complete the LLM provider configuration.

Verification

After you complete the profile configuration, close the Get Ready to Analyze pane. You can verify that your configuration works by running an analysis.

After you complete the configurations, the next step is running an analysis to identify the issues in the code and generate suggestions to resolve the issues. You can get suggestions to fix code by using Red Hat Developer Lightspeed for migration toolkit for applications.

When you run an analysis, MTA displays the issues in the Analysis Results view.

When you request code fix suggestions, Red Hat Developer Lightspeed for MTA performs the following tasks:

  • Streams LLM messages that describe the issue description, resolution, and the file in which the updates are applied.
  • Generates new files in the Resolutions pane. These files have the updates to the code to resolve the issues detected in the current analysis. You can review the changes, apply, or revert the updates.

If you apply all the resolutions, Red Hat Developer Lightspeed for MTA applies the changes and triggers another analysis to check if there are more issues. Subsequent analyses report fewer issues and incidents.

6.1. Running an analysis

You can run a static code analysis of an application with or without enabling the generative AI features. The RPC (Remote Procedure Call) server runs the analysis to detect all issues in the code for one or more target technologies to which you want to migrate the application.

Prerequisites

  • You opened a Java project in your VS Code workspace.
  • You configured an analysis profile on the MTA Analysis View page.

Procedure

  1. Click the Red Hat Developer Lightspeed for MTA extension and click Open MTA Analysis View.
  2. Select a profile for the analysis.
  3. Click Start to start the MTA RPC server.
  4. Click Run Analysis on the MTA Analysis View page.

When you request code resolutions by enabling the Solution Server, an issue displays the success metric when the metric becomes available. A success metric indicates the confidence level in applying the fix suggestion from the LLM based on how many times the update was applied in past analyses.

You can review the code updates and edit the suggested code resolutions before accepting the suggestions.

Prerequisites

  • You opened a Java project in your VS Code workspace.
  • You configured a profile on the MTA Analysis View page.
  • You ran an analysis after enabling the Solution Server.

Procedure

  1. Review the issues in the Analysis results section of the MTA analysis view page by using the following tabs:

    1. All: lists all incidents identified in your project.
    2. Files: lists all the files in your project for which the analysis identified issues that must be resolved.
    3. Issues: lists all issues across different files in your project.
  2. Use the Category drop-down menu to filter issues based on how crucial the fix is for the target migration. You can filter mandatory, potential, and optional issues.
  3. Click Has Success Rate to check how many times the same issue resolution was accepted in previous analyses.
  4. Click the solution tool to trigger automated updates to your code. If you applied any category filter, code updates are made for all incidents, specific files, or specific issues based on the filter. Red Hat Developer Lightspeed for MTA generates new files with the updated code.
  5. Review and (optionally) edit the code.
  6. Click Apply all in the Resolutions pane to permanently apply the changes to your code.

6.3. Generating code resolutions in the agent mode

In the agent mode, the Red Hat Developer Lightspeed for MTA planning agent creates the context for an issue and picks a sub-agent that is most suited to resolve the issue. The sub-agent runs an automated scan to describe how the issue can be resolved and generates files with the updated resolutions in one stream.

You can review the updated files and approve or reject the changes to the code. The agent runs another automated analysis to detect new issues in the code that may have occurred because of the accepted changes or diagnostic issues that your tool may generate following a previous analysis. If you allow the process to continue, Red Hat Developer Lightspeed for MTA runs the stream again and generates a new file with the latest updates.

When using the agent mode, you can reject the changes or discontinue the stream but you cannot edit the updated files during the stream.

Prerequisites

  • You opened a Java project in your VS Code workspace.
  • You configured an analysis profile on the MTA Analysis View page.

Procedure

  1. Verify that agent mode is enabled in one of the following ways:

    1. Type Ctrl + Shift + P (Linux and Windows systems) or Cmd + Shift + P (Mac) in the VS Code search bar to go to the Command Palette.
    2. Enter Preferences: Open User Settings (JSON) to open the settings.json file.
    3. Ensure that mta-vscode-extension.genai.agentMode is set to true (see the sketch after this procedure).

      OR

    4. Go to Extensions > Red Hat Developer Lightspeed for MTA > Settings.
    5. Select the Agent Mode option to enable the agent mode.
  2. Click the Red Hat Developer Lightspeed for MTA extension and click Open MTA Analysis View.
  3. Select a profile for the analysis.
  4. Click Start to start the MTA RPC server.
  5. Click Run Analysis on the MTA Analysis View page. The Resolution Details tab opens, where you can view the automated analysis that makes changes in applicable files.
  6. Click the Review Changes option to open the editor that shows the diff view of the modified file.
  7. Review the changes and click Apply to update the file with all the changes or Reject to reject all changes. If you applied the changes, then Red Hat Developer Lightspeed for MTA creates the updated file with code changes.
  8. Open Source Control to access the updated file.
  9. In the Resolution Details view, accept the proposal from Red Hat Developer Lightspeed for MTA to make further changes. The stream of analysis repeats, after which you can review and accept changes. Red Hat Developer Lightspeed for MTA creates the file with the code changes, and the stream continues until you reject the proposal for further analysis.
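
As referenced in step 1, the following is a minimal sketch of the settings.json entry that enables the agent mode; only the key shown in this procedure is included:

{
    "mta-vscode-extension.genai.agentMode": true
}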

Red Hat Developer Lightspeed for migration toolkit for applications generates logs to debug issues specific to the extension host and the MTA analysis and RPC server. You can also configure the log level for the Red Hat Developer Lightspeed for MTA in the extension settings. The default log level is debug.

Extension logs are stored as extension.log with automatic rotation. The maximum size of the log file is 10 MB and three files are retained. Analyzer RPC logs are stored as analyzer.log without rotation.

7.1. Archiving the logs

To archive the logs as a zip file, type MTA: Generate Debug Archive in the VS Code Command Palette and select the information type that must be archived as a log file.

The archive command captures all relevant log files in a zip archive at the specified location in your project. By default, you can access the archived logs in the .vscode directory of your project.

The archival feature helps you to save the following information:

  • Large language model (LLM) provider configuration: You can include fields from the provider settings in the archive. All fields are redacted by default for security reasons. Ensure that you do not expose any secrets.
  • LLM model arguments
  • LLM traces: If you enabled tracing LLM interactions, you can choose to include LLM traces in the logs.

7.2. Accessing the logs

You can access the logs in the following ways:

  • Log file: Type Developer: Open Extension Logs Folder and open the redhat.mta-vscode-extension directory that contains the extension log and the analyzer log.
  • Output panel: Select Red Hat Developer Lightspeed for MTA from the drop-down menu.
  • Webview logs: You can also inspect webview content by using the webview logs. To access the webview logs, type Open Webview Developer Tools in the VS Code Command Palette.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.