Configuring and Using Red Hat Developer Lightspeed for MTA
Using Red Hat Developer Lightspeed for Migration Toolkit for Applications to modernize your applications
Chapter 1. Introduction to Red Hat Developer Lightspeed for MTA
Starting from version 8.0.0, Migration Toolkit for Applications (MTA) integrates with large language models (LLMs) through the Red Hat Developer Lightspeed for migration toolkit for applications component in the Visual Studio (VS) Code extension. You can use Red Hat Developer Lightspeed for MTA to apply LLM-driven code changes to resolve issues found through static code analysis of Java applications.
1.1. Use case for AI-driven code fixes
Migration Toolkit for Applications (MTA) performs static code analysis for a specified target technology to which you want to migrate your applications. Red Hat provides more than 2,400 analysis rules in MTA for various Java technologies, and you can extend the ruleset for custom frameworks or new technologies by creating custom rules.
The static code analysis describes the issues in your code that must be resolved. As you analyze a large portfolio of applications, the issue descriptions and the rule definitions, which may contain additional information, form a large corpus of data with repetitive patterns of problem definitions and solutions.
Migrators duplicate work by resolving issues that recur across applications in different migration waves.
1.2. How Red Hat Developer Lightspeed for MTA works
Red Hat Developer Lightspeed for MTA works by collecting and storing code changes across a large collection of applications, building context to generate prompts for the LLM of your choice, and applying the code resolutions that the LLM produces for specific issues.
Red Hat Developer Lightspeed for MTA uses Retrieval Augmented Generation (RAG) for context-based resolutions of issues in code. By using RAG, Red Hat Developer Lightspeed for MTA improves the context shared with the LLM so that it generates more accurate suggestions for fixing the issue in the code. The context allows the LLM to "reason" and generate suggestions for issues detected in the code. This mechanism helps to overcome the limited context size of LLMs, which prevents them from analyzing the entire source code of an application.
The context is a combination of the source code, the issue description, and solved examples:
- Description of issues detected by MTA when you run a static code analysis for a given set of target technologies.
- (Optional) Additional information that you include in the default and custom rules, which can help Red Hat Developer Lightspeed for MTA define the context.
Solved examples are code changes from other migrations that capture a pattern of resolution for an issue that can be reused in the future. A solved example is created when a migrator accepts a resolution in a previous analysis that results in updated code, or when a migrator manually fixes an unfamiliar issue in a legacy application. Solved examples are stored in the Solution Server.
More solved examples for an issue enhance the context and improve the success metrics of the rules that trigger the issue. A higher success metric for an issue indicates a higher confidence level associated with the accepted resolutions for that issue in previous analyses.
(Optional) If you enable the Solution Server, it extracts a pattern of resolution, called the migration hint, that can be used by the LLM to generate a more accurate fix suggestion in a future analysis.
The improvement in the quality of migration hints results in more accurate code resolutions. Accurate code resolutions from the LLM result in the user accepting an update to the code. The updated code is stored in the Solution Server to generate a better migration hint in the future.
This cyclical improvement of resolution patterns from the Solution Server and of migration hints leads to more reliable code changes as you migrate applications in different migration waves.
1.3. Requesting code fixes in Red Hat Developer Lightspeed for MTA
You can request AI-assisted code resolutions that obtain additional context from several potential sources, such as analysis issues, IDE diagnostic information, and past migration data via the Solution Server.
The Solution Server acts as an institutional memory that stores changes to source code from analyses of applications across your organization. This helps you leverage recurring patterns of solutions for issues that repeat across many applications.
When you use the Solution Server, Red Hat Developer Lightspeed for MTA suggests a code resolution that is based on solved examples or code changes from past analyses. You can view a diff of the updated portions of the code against the original source code for manual review.
It also enables you to control the analysis through manual reviews of the suggested AI resolutions: you can accept, reject, or edit the suggested code changes while reducing the overall time and effort required to prepare your application for migration.
In the agentic AI mode, Red Hat Developer Lightspeed for MTA streams an automated analysis of the code in a loop, updating the code until all issues are resolved. In the initial run, the AI agent:
- Plans the context to define the issues.
- Chooses a suitable sub-agent for the analysis task.
- Works with the LLM to generate fix suggestions. The reasoning transcript and the files to be changed are displayed to the user.
- Applies the changes to the code after the user approves the updates.
If you accept that the agentic AI must continue to make changes, it compiles the code and runs a partial analysis. In this iteration, the agentic AI attempts to fix diagnostic issues (if any) generated by tools that you installed in the VS Code IDE. You can review the changes and accept the agentic AI’s suggestion to address these diagnostic issues.
After each iteration of applying changes to the code, the agentic AI asks if you want the agent to continue fixing more issues. When you accept, it runs another iteration of automated analysis until it has resolved all issues or has made a maximum of two attempts to fix an issue.
Agentic AI generates a new preview in each iteration when it updates the code with the suggested resolutions. The time taken by the agentic AI to complete all iterations depends on the number of new diagnostic issues that are detected in the code.
Developer Lightspeed for MTA is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
1.4. Benefits of using Red Hat Developer Lightspeed for MTA
- Model agnostic - Red Hat Developer Lightspeed for MTA follows a "Bring Your Own Model" approach, allowing your organization to use a preferred LLM.
- Iterative refinement - Red Hat Developer Lightspeed for MTA can include an agent that iterates through the source code, running a series of automated analyses that resolve both code issues and diagnostic issues.
- Contextual code generation - By leveraging AI for static code analysis, Red Hat Developer Lightspeed for MTA breaks down complex problems into more manageable ones, providing the LLM with focused context to generate meaningful results. This helps overcome the limited context size of LLMs when dealing with large codebases.
- No fine-tuning - You do not need to fine-tune your model with a suitable data set for analysis, which leaves you free to use and switch LLMs to respond to your requirements.
- Learning and improvement - As more parts of a codebase are migrated with Red Hat Developer Lightspeed for MTA, it can use RAG to learn from the available data and provide better recommendations in subsequent application analyses.
Chapter 2. Getting started with Red Hat Developer Lightspeed for MTA
The Getting started section walks you through the prerequisites, persistent volume requirements, installation, and workflows that help you decide how you want to use Red Hat Developer Lightspeed for migration toolkit for applications.
To get support for features in Red Hat Developer Lightspeed for MTA, you require a Red Hat Advanced Developer Suite (RHADS) subscription.
2.1. Prerequisites
This section lists the prerequisites required to successfully use the generative AI features in the Red Hat Developer Lightspeed for MTA Visual Studio (VS) Code extension.
Before you install Red Hat Developer Lightspeed for MTA, you must:
- Install the Language Support for Java™ by Red Hat extension.
- Install Java v17 or later.
- Install Maven v3.9.9 or later.
- Install Git and add it to the $PATH variable.
- Install the MTA Operator 8.0.0.
  The MTA Operator is mandatory if you plan to enable the Solution Server, which works with the large language model (LLM) to generate code changes. It enables you to log in to the openshift-mta project, where you must enable the Solution Server in the Tackle custom resource (CR).
- Create an API key for an LLM.
  You must enter the provider value and model name in the Tackle CR to enable the generative AI configuration in the MTA VS Code plugin.
Table 2.1. Configurable large language models and providers

| LLM Provider (Tackle CR value) | Large language model examples for Tackle CR configuration |
|---|---|
| OpenShift AI platform | Models deployed in an OpenShift AI cluster that can be accessed by using an OpenAI-compatible API |
| OpenAI (openai) | gpt-4, gpt-4o, gpt-4o-mini, gpt-3.5-turbo |
| Azure OpenAI (azure_openai) | gpt-4, gpt-35-turbo |
| Amazon Bedrock (bedrock) | anthropic.claude-3-5-sonnet-20241022-v2:0, meta.llama3-1-70b-instruct-v1:0 |
| Google Gemini (google) | gemini-2.0-flash-exp, gemini-1.5-pro |
| Ollama (ollama) | llama3.1, codellama, mistral |
The availability of public LLM models is maintained by the respective LLM provider.
2.2. Persistent volume requirements
The Solution Server component requires a backend database to store code changes from previous analyses.
If you plan to enable the Solution Server, you must create a 5Gi RWO persistent volume that is used by the Red Hat Developer Lightspeed for MTA database.
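For illustration only, a PersistentVolumeClaim that satisfies this requirement might look like the following sketch. The claim name and the reliance on the cluster's default storage class are assumptions; the MTA Operator may provision the claim for you, so check the operator documentation before creating one manually.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kai-db                # illustrative name
  namespace: openshift-mta
spec:
  accessModes:
    - ReadWriteOnce           # RWO, as required by the database
  resources:
    requests:
      storage: 5Gi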
2.3. Installation
You can install the Migration Toolkit for Applications (MTA) 8.0.0 Visual Studio (VS) Code plug-in from the VS Code marketplace.
You can use the MTA VS Code plug-in to perform analysis and optionally enable Red Hat Developer Lightspeed for migration toolkit for applications to use generative AI capabilities. You can fix code issues before migrating the application to target technologies by using the generative AI capabilities.
2.4. How to use Red Hat Developer Lightspeed for MTA
You can opt to use Red Hat Developer Lightspeed for migration toolkit for applications features to request a code fix suggestion after running a static code analysis of an application. Red Hat Developer Lightspeed for MTA augments the manual changes made to code throughout your organization in different migration waves and creates a context that is shared with a large language model (LLM). The LLM suggests code resolutions based on the issue description, context, and previous examples of code changes to resolve issues.
To make code changes by using the LLM, you must enable the generative AI option, along with either the Solution Server or the agent mode. The configurations that you complete before you request code fixes depend on how you prefer to request code resolutions.
If you make any change after enabling the generative AI settings in the extension, you must restart the extension for the change to take effect.
To use the Solution Server for code fix suggestions:
- Create a secret for your LLM key in the Red Hat OpenShift cluster.
- Enable the Solution Server in the Tackle custom resource (CR).
- Configure the LLM base URL and model in the Tackle CR.
- Enable the generative AI option in the MTA extension settings.
- Add the Solution Server configuration in the settings.json file.
- Configure the profile settings and activate the LLM provider in the provider-settings.yaml file.
To use the agent mode for code fix suggestions:
- Enable the generative AI and the agent mode in the MTA extension settings.
- Configure the profile settings and activate the LLM provider in the provider-settings.yaml file.
2.5. Example: Generating code fix suggestions
This example walks you through generating code fixes for a Java application that must be migrated to the quarkus target technology. To generate resolutions for issues in the code, the example uses the agentic AI mode and my-model as the large language model (LLM) that you deployed in OpenShift AI.
Procedure
- Open the my-Java project in Visual Studio (VS) Code.
- Download the Red Hat Developer Lightspeed for migration toolkit for applications extension from the VS Code marketplace.
- Open the Command Palette:
  - Press Ctrl+Shift+P on Windows and Linux systems.
  - Press Cmd+Shift+P on Mac systems.
- Type Preferences: Open Settings (UI) in the Command Palette to open the VS Code settings and select Extensions > MTA.
- Select Gen AI: Agent Mode.
- In the Red Hat Developer Lightspeed for MTA extension, click Open Analysis View.
- Type MTA: Manage Analysis Profile in the Command Palette to open the analysis profile page. Configure the following fields:
  - Profile Name: Type a profile name.
  - Target Technologies: quarkus
  - Custom Rules: Select custom rules if you want to include them while running the analysis. By default, Red Hat Developer Lightspeed for MTA enables Use Default Rules for quarkus.
- Close the profile manager.
- Type MTA: Open the Gen AI model provider configuration file in the Command Palette. Configure your LLM provider and model in the provider-settings file and close it. A sample configuration is shown after this procedure.
  Note: You must change the provider-settings configuration if you plan to use a different LLM provider.
- Type MTA: Open Analysis View in the Command Palette.
- Click Start to start the MTA RPC server.
- Select the profile you configured.
Click Run Analysis to scan the Java application.
MTA identifies the issues in the code.
Click the solutions icon in an issue to request suggestions to resolve the issue.
Red Hat Developer Lightspeed for MTA streams the issue description, a preview of the code changes that resolve the issue, and the file(s) in which the changes are to be made.
You can review the code changes in the editor and accept or reject the changes. If you accept the changes, Red Hat Developer Lightspeed for MTA creates a new file with the accepted code changes.
Click Continue to allow Red Hat Developer Lightspeed for MTA to run a follow-up analysis.
This round of analysis detects lint issues, compilation issues, or diagnostic issues that may have occurred when you accepted the suggested code change.
Repeat the review and accept or reject the resolutions. Red Hat Developer Lightspeed for MTA continues to run iterations of the scan, if you allow it to, until all issues are resolved.
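The following provider-settings sketch matches this walkthrough. The block name, the ChatOpenAI provider value, and the URL shape are assumptions modeled on the default provider blocks that the extension generates; keep the structure of your generated file and substitute the inference endpoint of your my-model deployment.

models:
  OpenShiftAI: &active
    environment:
      OPENAI_API_KEY: "<your-api-key>"    # any non-empty value if your endpoint does not check keys
    provider: "ChatOpenAI"
    args:
      model: "my-model"
      configuration:
        baseURL: "https://<your-model-route>/v1"   # OpenAI-compatible endpoint served by OpenShift AI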
Chapter 3. Solution Server configurations
Solution Server is a component that allows Red Hat Developer Lightspeed for MTA to build a collective memory of source code changes from all analyses performed in an organization. When you request a code fix for issues in Visual Studio (VS) Code, the Solution Server retrieves previous patterns of how source code was changed to resolve issues (also called solved examples) that were similar to those in the current file, and suggests a resolution with a higher confidence level derived from previous solutions. After you accept a suggested code fix, the Solution Server works with the large language model (LLM) to improve the hints about the issue that become part of the context. An improved context enables the LLM to generate more reliable code fix suggestions in future cases.
The Solution Server delivers two primary benefits to users:
- Contextual Hints: It surfaces examples of past migration solutions — including successful user modifications and accepted fixes — offering actionable hints for difficult or previously unsolved migration problems.
- Migration Success Metrics: It exposes detailed success metrics for each migration rule, derived from real-world usage data. These metrics can be used by IDEs or automation tools to present users with a “confidence level” or likelihood of Red Hat Developer Lightspeed for MTA successfully migrating a given code segment.
Solution Server is an optional component in Red Hat Developer Lightspeed for MTA. You must complete the following configurations before you can place a code resolution request.
Solution Server is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
3.1. Configuring the model secret key
You must configure the Kubernetes secret for the large language model (LLM) provider in the Red Hat OpenShift project where you installed the MTA Operator.
You can replace oc in the following commands with kubectl.
You must create an LLM API key secret in your OpenShift cluster to produce the resources necessary for the Solution Server. If you do not configure the LLM API key secret, Red Hat Developer Lightspeed for MTA does not create the resources necessary to run the Solution Server.
Procedure
Create a credentials secret named kai-api-keys in the openshift-mta project.

For Amazon Bedrock as the provider, type:

oc create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>

For Azure OpenAI as the provider, type:

oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=AZURE_OPENAI_API_KEY='<YOUR_AZURE_OPENAI_API_KEY>'

For Google as the provider, type:

oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=GEMINI_API_KEY='<YOUR_GOOGLE_API_KEY>'

For the OpenAI-compatible providers, type:

oc create secret generic kai-api-keys -n openshift-mta \
  --from-literal=OPENAI_API_BASE='https://example.openai.com/v1' \
  --from-literal=OPENAI_API_KEY='<YOUR_OPENAI_KEY>'

Note: You can also set the base URL as the kai_llm_baseurl variable in the Tackle custom resource.
(Optional) Force a reconcile so that the MTA Operator picks up the secret immediately:

kubectl patch tackle tackle -n openshift-mta --type=merge -p \
  '{"metadata":{"annotations":{"konveyor.io/force-reconcile":"'"$(date +%s)"'"}}}'
3.2. Enabling Red Hat Developer Lightspeed for MTA in the Tackle custom resource
Solution Server integrates with the MTA Hub backend component to use the database and volumes necessary to store and retrieve the solved examples.
To enable Solution Server and other AI configurations in the Red Hat Developer Lightspeed for migration toolkit for applications VS Code extension, you must modify the Tackle custom resource (CR) with additional parameters.
Prerequisites
- You deployed an additional RWO volume for the Red Hat Developer Lightspeed for MTA database if you want to use Red Hat Developer Lightspeed for MTA. See Persistent volume requirements for more information.
- You installed the MTA Operator v8.0.0.
Procedure
- Log in to the Red Hat OpenShift cluster and switch to the openshift-mta project.
- Edit the Tackle CR settings in the tackle_hub.yml file with the following command:

  oc edit tackle

- Enter applicable values for the kai_llm_provider and kai_llm_model variables. A sample specification is shown after this procedure.
  Note: For OpenAI models and LLMs deployed in the OpenShift AI cluster, enter OpenAI as the kai_llm_provider value.
- Apply the Tackle CR in the openshift-mta project by using the following command:

  oc apply -f tackle_hub.yaml
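For illustration, the relevant portion of the Tackle CR might look like the following sketch. The kai_llm_provider, kai_llm_model, and kai_llm_baseurl field names come from this guide; the apiVersion and surrounding structure are assumptions, so keep whatever your installed operator version generates, and set the Solution Server enablement flag that your operator documentation names.

apiVersion: tackle.konveyor.io/v1alpha1
kind: Tackle
metadata:
  name: tackle
  namespace: openshift-mta
spec:
  kai_llm_provider: "openai"                          # a provider value from Table 2.1
  kai_llm_model: "gpt-4o"                             # a model supported by that provider
  kai_llm_baseurl: "https://example.openai.com/v1"    # optional; see the note in "Configuring the model secret key"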
Verification
Enter the following command to verify the Red Hat Developer Lightspeed for MTA resources deployed for the Solution Server:

oc get deploy,svc -n openshift-mta | grep -E 'kai-(api|db|importer)'

Note: When you enable the Solution Server, the Solution Server API endpoint is served through the MTA Hub. You do not need to complete any further task, such as creating a route for the Solution Server API.
Chapter 4. Configuring large language models for analysis
Red Hat Developer Lightspeed for MTA provides the large language model (LLM) with the contextual prompt, migration hints, and solved examples to generate suggestions for resolving issues identified in the current code.
Red Hat Developer Lightspeed for MTA is designed to be model agnostic. It works with LLMs that are run in different environments (in local containers, as local AI, or as a shared service) to support analyzing Java applications in a wide range of scenarios. You can choose an LLM from well-known providers, local models that you run from Ollama or Podman desktop, and OpenAI API compatible models.
The code fix suggestions produced to resolve issues detected through an analysis depend on the LLM’s capabilities.
You can run an LLM from the following generative AI providers:
- OpenAI
- Azure OpenAI
- Google Gemini
- Amazon Bedrock
- Ollama
You can also run OpenAI API-compatible LLMs deployed as:
- A service in your OpenShift AI cluster
- Locally in the Podman AI Lab in your system.
4.1. Deploying an LLM as a service in an OpenShift AI cluster
The code suggestions from Red Hat Developer Lightspeed for migration toolkit for applications differ based on the large language model (LLM) that you use. Therefore, you may want to use an LLM that caters to your specific requirements.
Red Hat Developer Lightspeed for MTA integrates with LLMs that are deployed as a scalable service on OpenShift AI clusters. These deployments provide you with granular control over resources such as compute, cluster nodes, and auto-scaling Graphical Processing Units (GPUs) while enabling you to leverage LLMs to resolve code issues at a large scale.
An example workflow for configuring an LLM service on OpenShift AI broadly requires the following configurations:
Install and configure the following infrastructure resources:
- A Red Hat OpenShift cluster with the OpenShift AI Operator installed
- A GPU machine set
- (Optional) An autoscaler custom resource (CR) and a machine autoscaler CR

Configure the OpenShift AI platform:
- Configure a data science project
- Configure a serving runtime
- Configure an accelerator profile

Deploy the LLM through OpenShift AI:
- Upload your model to an AWS-compatible bucket
- Add a data connection
- Deploy the LLM in your OpenShift AI data science project
- Export the SSL certificate, the OPENAI_API_BASE URL, and other environment variables to access the LLM

Prepare the LLM for analysis:
- Configure an OpenAI API key
- Update the OpenAI API key and the base URL in provider-settings.yaml
See Configuring LLM provider settings to configure the base URL and the LLM API key in the Red Hat Developer Lightspeed for MTA VS Code extension.
4.2. Configuring LLM provider settings
Red Hat Developer Lightspeed for migration toolkit for applications is large language model (LLM) agnostic and integrates with an LLM of your choice. To enable Red Hat Developer Lightspeed for MTA to access your LLM, you must enter the LLM provider configurations in the provider-settings.yaml file.
The provider-settings.yaml file contains a list of LLM providers that are supported by default. The mandatory environment variables are different for each LLM provider. Depending on the provider that you choose, you can configure additional environment variables for a model in the provider-settings.yaml file. You can also enter a new provider with the required environment variables, the base URL, and the model name.
The provider settings file is available in the Red Hat Developer Lightspeed for MTA Visual Studio (VS) Code extension.
Access the provider-settings.yaml file from the VS Code Command Palette by typing Open the GenAI model provider configuration file.
You can select one provider from the list by adding the &active anchor to the name of the provider. To use a model from another provider, move the &active anchor to the desired provider block.
For a model named "my-model" deployed in OpenShift AI with "example-model" as the serving name:
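A minimal sketch for this case follows; the block name and the configuration.baseURL key are assumptions based on the default provider blocks that the extension generates, so keep the structure that your generated file shows.

models:
  OpenShiftAI: &active
    environment:
      OPENAI_API_KEY: "<your-api-key>"
    provider: "ChatOpenAI"
    args:
      model: "my-model"
      configuration:
        baseURL: "https://example-model.apps.<cluster-domain>/v1"   # serving URL for example-model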
When you change the model deployed in OpenShift AI, you must also change the model argument and the baseURL endpoint.
If you want to select a public LLM provider, you must move the &active anchor to the desired block and change the provider arguments.
For an OpenAI model:
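A sketch with assumed key names from the extension's default provider list; add the block under the models key and move the &active anchor here to use it:

  OpenAI: &active
    environment:
      OPENAI_API_KEY: "<your-openai-key>"
    provider: "ChatOpenAI"
    args:
      model: "gpt-4o"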
For Azure OpenAI:
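A sketch; the AzureChatOpenAI provider value and the deployment and API version argument names are assumptions:

  AzureOpenAI:
    environment:
      AZURE_OPENAI_API_KEY: "<your-azure-key>"
    provider: "AzureChatOpenAI"
    args:
      azure_deployment: "gpt-4"
      api_version: "2024-02-01"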
For Amazon Bedrock:
We recommend that you use the AWS CLI to verify that you have command-line access to AWS services before you proceed with the provider-settings configurations.
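A sketch; the ChatBedrock provider value and the AWS environment variable placement are assumptions, and the credentials should match the AWS account that you verified with the CLI:

  AmazonBedrock:
    environment:
      AWS_ACCESS_KEY_ID: "<your-access-key-id>"
      AWS_SECRET_ACCESS_KEY: "<your-secret-access-key>"
      AWS_DEFAULT_REGION: "us-east-1"
    provider: "ChatBedrock"
    args:
      model_id: "anthropic.claude-3-5-sonnet-20241022-v2:0"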
For Google Gemini:
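A sketch; the ChatGoogleGenerativeAI provider value is an assumption:

  GoogleGemini:
    environment:
      GOOGLE_API_KEY: "<your-google-api-key>"
    provider: "ChatGoogleGenerativeAI"
    args:
      model: "gemini-1.5-pro"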
For Ollama:
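A sketch; the ChatOllama provider value and the base_url argument are assumptions for a local Ollama service:

  Ollama:
    provider: "ChatOllama"
    args:
      model: "llama3.1"
      base_url: "http://127.0.0.1:11434"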
4.3. Configuring the LLM in Podman Desktop
The Podman AI Lab extension enables you to use an open-source model from a curated list of models and run it locally on your system.
The code fix suggestions generated by a model depend on the model's capabilities. Models deployed through the Podman AI Lab were found to be insufficient for the complexity of code changes required to fix issues discovered by MTA. Do not use such models in a production environment.
Prerequisites
- You installed Podman Desktop in your system.
- You completed initial configurations in Red Hat Developer Lightspeed for MTA required for the analysis.
Procedure
- Go to the Podman AI Lab extension and click Catalog under Models.
- Download one or more models.
- Go to Services and click New Model Service.
- Select a model that you downloaded in the Model drop-down menu and click Create Service.
- Click the deployed model service to open the Service Details page.
- Note the server URL and the model name. You must configure these specifications in the Red Hat Developer Lightspeed for MTA extension.
Export the inference server URL as follows:

export OPENAI_API_BASE=<server-url>
- In the Red Hat Developer Lightspeed for MTA extension, type Open the GenAI model provider configuration file in the Command Palette to open the provider-settings.yaml file.
- Enter the model details from Podman Desktop. For example, use the configuration shown after this procedure for a Mistral model.
  Note: The Podman Desktop service endpoint does not need a password, but the OpenAI library expects the OPENAI_API_KEY to be set. In this case, the value of the OPENAI_API_KEY variable does not matter.
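A sketch for the Podman AI Lab service; the block name, key names, and port are assumptions modeled on the default provider blocks, so use the values shown on your Service Details page:

models:
  PodmanDesktop: &active
    environment:
      OPENAI_API_KEY: "unused"                   # required by the OpenAI library; the value does not matter here
    provider: "ChatOpenAI"
    args:
      model: "mistral-7b-instruct-v0-2"          # the model name from the Service Details page
      configuration:
        baseURL: "http://localhost:<port>/v1"    # the server URL from the Service Details page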
Chapter 5. Using MTA with Developer Lightspeed in the IDE
You must configure the following settings in Red Hat Developer Lightspeed for migration toolkit for applications:
- Visual Studio Code IDE settings.
- Profile settings that provide context before you request a code fix for a particular application.
5.1. Configuring the Red Hat Developer Lightspeed for MTA IDE settings
After you install the MTA extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the Red Hat Developer Lightspeed for MTA settings in VS Code.
Red Hat Developer Lightspeed for MTA settings are applied to all AI-assisted analysis that you perform by using the MTA extension. The extension settings can be broadly categorized into debugging and logging, Red Hat Developer Lightspeed for MTA settings, analysis related settings, and Solution Server settings.
Prerequisites
In addition to the overall prerequisites, ensure that you completed the following:
- You completed the Solution Server configurations in Tackle custom resource if you opt to use the Solution Server.
Procedure
Go to the Red Hat Developer Lightspeed for MTA settings in one of the following ways:
- Click Extensions > MTA Extension for VSCode > Settings.
- Press Ctrl + Shift + P or Cmd + Shift + P to open the Command Palette and enter Preferences: Open Settings (UI). Go to Extensions > MTA to open the settings page.
- Configure the settings described in the following table:
| Settings | Description |
|---|---|
| Log level | Set the log level for the MTA binary. The default log level is debug. |
| Analyzer path | Specify an MTA custom binary path. If you do not provide a path, Red Hat Developer Lightspeed for MTA uses the default path to the binary. |
| Auto Accept on Save | This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes. |
| Gen AI: Enabled | This option is enabled by default. It enables you to get code fixes by using Red Hat Developer Lightspeed for MTA with a large language model. |
| Gen AI: Agent mode | Enable the experimental agentic AI flow for analysis. Red Hat Developer Lightspeed for MTA runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, Red Hat Developer Lightspeed for MTA makes the changes in the code and re-analyzes the file. |
| Gen AI: Excluded diagnostic sources | Add the diagnostic sources that Red Hat Developer Lightspeed for MTA must exclude when it fixes diagnostic issues. |
| Cache directory | Specify the path to a directory in your filesystem to store cached responses from the LLM. |
| Trace directory | Configure the absolute path to the directory that contains the saved LLM interactions. |
| Trace enabled | Enable to trace MTA communication with the LLM. Traces are stored in the trace directory that you configured. |
| Demo mode | Enable to run Red Hat Developer Lightspeed for MTA in demo mode, which uses the LLM responses saved in the cache directory. |
| Solution Server: URL | Edit the configurations for the Solution Server in the settings.json file. |
| Debug: Webview | Enable debug level logging for Webview message handling in VS Code. |
See Configuring the Solution Server settings for an example Solution Server configuration.
5.2. Configuring the Solution Server settings
You need a Keycloak realm and the Solution Server URL to connect the Red Hat Developer Lightspeed for MTA extension with the Solution Server.
Prerequisites
- The Solution Server URL is available.
- An administrator configured the Keycloak realm for the Solution Server.
Procedure
- Press Ctrl + Shift + P or Cmd + Shift + P and enter Preferences: Open User Settings (JSON).
- In the settings.json file, press Ctrl + SPACE to enable auto-complete for the Solution Server configurable fields. Modify the configuration as necessary; a sample configuration is shown after this procedure.
  Note: When you enable Solution Server authentication for the first time, you must enter the username and password in the VS Code search bar.
  Tip: Enter MTA: Restart Solution Server in the Command Palette to restart the Solution Server.
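The following sketch is illustrative only: the key names are assumptions, so rely on the Ctrl + SPACE auto-complete to discover the exact fields that your extension version supports, and replace the URL and realm placeholders with the values from your administrator.

{
  "mta-vscode-extension.solutionServer.enabled": true,
  "mta-vscode-extension.solutionServer.url": "https://<solution-server-url>",
  "mta-vscode-extension.solutionServer.auth.enabled": true,
  "mta-vscode-extension.solutionServer.auth.realm": "<keycloak-realm>"
}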
5.3. Configuring the Red Hat Developer Lightspeed for MTA profile settings
You can use the Visual Studio (VS) Code plugin to run an analysis to discover issues in the code. You can optionally enable Red Hat Developer Lightspeed for migration toolkit for applications to get AI-assisted code suggestions.
To generate code changes using Red Hat Developer Lightspeed for MTA, you must configure a profile that contains all the necessary configurations, such as source and target technologies and the API key to connect to your chosen large language model (LLM).
Prerequisites
- You completed the Solution Server configurations in Tackle custom resource if you opt to use the Solution Server.
- You opened a Java project in your VS Code workspace.
Procedure
- Open the MTA View Analysis page in either of the following ways:
  - Click the book icon on the MTA: Issues pane of the MTA extension.
  - Press Ctrl + Shift + P or Cmd + Shift + P to open the Command Palette and enter MTA: Open Analysis View.
- Click the settings button on the MTA View Analysis page to configure a profile for your project. The Get Ready to Analyze pane lists the basic configurations required for an analysis, which are described in the following table.
| Profile settings | Description |
|---|---|
| Select profile | Create a profile that you can reuse for multiple analyses. The profile name is part of the context provided to the LLM for analysis. |
| Configure label selector | A label selector filters rules for analysis based on the source or target technology. Specify one or more target or source technologies (for example, cloud-readiness). Red Hat Developer Lightspeed for MTA uses this configuration to determine the rules that are applied to a project during analysis. If you mentioned a new target or source technology in your custom rule, you can type that name to create and add the new item to the list. Note: You must configure either target or source technologies before running an analysis. |
| Set rules | Enable default rules and select the custom rules that you want MTA to use for an analysis. You can use the custom rules in addition to the default rules. |
| Configure generative AI | This option opens the provider-settings.yaml file, where you activate the LLM provider and model for the analysis. |

Verification

After you complete the profile configuration, close the Get Ready to Analyze pane. You can verify that your configuration works by running an analysis.
See Configuring LLM provider settings to complete the LLM provider configuration.
Chapter 6. Running an analysis and resolving issues
After you complete the configurations, the next step is running an analysis to identify the issues in the code and generate suggestions to resolve the issues. You can get suggestions to fix code by using Red Hat Developer Lightspeed for migration toolkit for applications.
When you run an analysis, MTA displays the issues in the Analysis Results view.
When you request code fix suggestions, Red Hat Developer Lightspeed for MTA performs the following tasks:
- Streams LLM messages that describe the issue, the resolution, and the file in which the updates are applied.
- Generates new files in the Resolutions pane. These files contain the updated code that resolves the issues detected in the current analysis. You can review the changes and apply or revert the updates.
If you apply all the resolutions, Red Hat Developer Lightspeed for MTA applies the changes and triggers another analysis to check if there are more issues. Subsequent analyses report fewer issues and incidents.
6.1. Running an analysis
You can run a static code analysis of an application with or without enabling the generative AI features. The RPC (Remote Procedure Call) server runs the analysis to detect all issues in the code for one or more target technologies to which you want to migrate the application.
Prerequisites
- You opened a Java project in your VS Code workspace.
- You configured an analysis profile on the MTA Analysis View page.
Procedure
- Click the Red Hat Developer Lightspeed for MTA extension and click Open MTA Analysis View.
- Select a profile for the analysis.
- Click Start to start the MTA RPC server.
- Click Run Analysis on the MTA Analysis View page.
6.2. Applying resolutions generated by the Solution Server
When you request code resolutions by enabling the Solution Server, an issue displays the success metric when the metric becomes available. A success metric indicates the confidence level in applying the fix suggestion from the LLM, based on how many times the update was applied in past analyses.
You can review the code updates and edit the suggested code resolutions before accepting the suggestions.
Prerequisites
- You opened a Java project in your VS Code workspace.
- You configured a profile on the MTA Analysis View page.
- You ran an analysis after enabling the Solution Server.
Procedure
Review the issues in the Analysis Results section of the MTA Analysis View page by using the following tabs:
- All: lists all incidents identified in your project.
- Files: lists all the files in your project for which the analysis identified issues that must be resolved.
- Issues: lists all issues across different files in your project.
- Use the Category drop-down menu to filter issues based on how crucial the fix is for the target migration. You can filter mandatory, potential, and optional issues.
- Click Has Success Rate to check how many times the same issue resolution was accepted in previous analyses.
- Click the solution tool to trigger automated updates to your code. If you applied any category filter, code updates are made for all incidents, specific files, or specific issues based on the filter. Red Hat Developer Lightspeed for MTA generates new files with the updated code.
- Review and (optionally) edit the code.
- Click Apply all in the Resolutions pane to permanently apply the changes to your code.
6.3. Generating code resolutions in the agent mode
In the agent mode, the Red Hat Developer Lightspeed for MTA planning agent creates the context for an issue and picks a sub-agent that is most suited to resolve the issue. The sub-agent runs an automated scan to describe how the issue can be resolved and generates files with the updated resolutions in one stream.
You can review the updated files and approve or reject the changes to the code. The agent runs another automated analysis to detect new issues in the code that may have occurred because of the accepted changes or diagnostic issues that your tool may generate following a previous analysis. If you allow the process to continue, Red Hat Developer Lightspeed for MTA runs the stream again and generates a new file with the latest updates.
When using the agent mode, you can reject the changes or discontinue the stream, but you cannot edit the updated files during the stream.
Prerequisites
- You opened a Java project in your VS Code workspace.
- You configured an analysis profile on the MTA Analysis View page.
Procedure
Verify that agent mode is enabled in one of the following ways:
- Press Ctrl + Shift + P (Linux and Windows systems) or Cmd + Shift + P (Mac systems) to open the Command Palette, enter Preferences: Open User Settings (JSON) to open the settings.json file, and ensure that mta-vscode-extension.genai.agentMode is set to true.
- Go to Extensions > Red Hat Developer Lightspeed for MTA > Settings and select the Agent Mode option.
- Click the Red Hat Developer Lightspeed for MTA extension and click Open MTA Analysis View.
- Select a profile for the analysis.
- Click Start to start the MTA RPC server.
- Click Run Analysis on the MTA Analysis View page. The Resolution Details tab opens, where you can view the automated analysis that makes changes in applicable files.
- Click the Review Changes option to open the editor that shows the diff view of the modified file.
- Review the changes and click Apply to update the file with all the changes or Reject to reject all changes. If you applied the changes, then Red Hat Developer Lightspeed for MTA creates the updated file with code changes.
- Open Source Control to access the updated file.
- In the Resolution Details view, accept the proposal from Red Hat Developer Lightspeed for MTA to make further changes. The stream of analysis repeats, after which you can review and accept changes. Red Hat Developer Lightspeed for MTA creates the file with the code changes, and the stream continues until you reject the proposal for further analysis.
Chapter 7. Debugging Red Hat Developer Lightspeed for MTA
Red Hat Developer Lightspeed for migration toolkit for applications generates logs to debug issues specific to the extension host and the MTA analysis and RPC server. You can also configure the log level for the Red Hat Developer Lightspeed for MTA in the extension settings. The default log level is debug.
Extension logs are stored as extension.log with automatic rotation. The maximum size of the log file is 10 MB and three files are retained. Analyzer RPC logs are stored as analyzer.log without rotation.
7.1. Archiving the logs
To archive the logs as a zip file, type MTA: Generate Debug Archive in the VS Code Command Palette and select the information type that must be archived as a log file.
The archive command allows capturing all relevant log files in a zip archive at the specified location in your project. By default, you can access the archived logs in the .vscode directory of your project.
The archival feature helps you to save the following information:
- Large language model (LLM) provider configuration: Fields from the provider settings that can be included in the archive. All fields are redacted for security reasons by default. Ensure that you do not expose any secrets.
- LLM model arguments
- LLM traces: If you enabled tracing LLM interactions, you can choose to include LLM traces in the logs.
7.2. Accessing the logs
You can access the logs in the following ways:
- Log file: Type Developer: Open Extension Logs Folder and open the redhat.mta-vscode-extension directory that contains the extension log and the analyzer log.
- Output panel: Select Red Hat Developer Lightspeed for MTA from the drop-down menu.
- Webview logs: You can also inspect webview content by using the webview logs. To access the webview logs, type Open Webview Developer Tools in the VS Code Command Palette.