Chapter 5. Using MTA with Developer Lightspeed in IDE


You must configure the following settings in Red Hat Developer Lightspeed for migration toolkit for applications:

  • Visual Studio Code IDE settings.
  • Profile settings that provide context before you request a code fix for a particular application.

After you install the MTA extension in Visual Studio (VS) Code, you must provide your large language model (LLM) credentials to activate the Red Hat Developer Lightspeed for MTA settings.

Red Hat Developer Lightspeed for MTA settings are applied to all AI-assisted analysis that you perform by using the MTA extension. The extension settings can be broadly categorized into debugging and logging settings, Red Hat Developer Lightspeed for MTA settings, analysis-related settings, and Solution Server settings.

Prerequisites

In addition to the overall prerequisites, ensure that you have completed the following:

  • If you opt to use the Solution Server, you completed the Solution Server configuration in the Tackle custom resource.

Procedure

  1. Go to the Red Hat Developer Lightspeed for MTA settings in one of the following ways:

    1. Click Extensions > MTA Extension for VSCode > Settings
    2. Press Ctrl + Shift + P or Cmd + Shift + P to open the Command Palette and enter Preferences: Open Settings (UI). Go to Extensions > MTA to open the settings page.
  2. Configure the settings described in the following table:
Table 5.1. Red Hat Developer Lightspeed for MTA extension settings

Log level

Set the log level for the MTA binary. The default log level is debug. Changing the log level increases or decreases the verbosity of the logs.

Analyzer path

Specify an MTA custom binary path. If you do not provide a path, Red Hat Developer Lightspeed for MTA uses the default path to the binary.

Auto Accept on Save

This option is enabled by default. When you accept the changes suggested by the LLM, the updated code is saved automatically in a new file. Disable this option if you want to manually save the new file after accepting the suggested code changes.

Gen AI: Enabled

This option is enabled by default. It enables you to get code fixes by using Red Hat Developer Lightspeed for MTA with a large language model.

Gen AI: Agent mode

Enable the experimental Agentic AI flow for analysis. Red Hat Developer Lightspeed for MTA runs an automated analysis of a file to identify issues and suggest resolutions. After you accept the solutions, Red Hat Developer Lightspeed for MTA makes the changes in the code and re-analyzes the file.

Gen AI: Excluded diagnostic sources

Add diagnostic sources in the settings.json file. The issues generated by such diagnostic sources are excluded from the automated Agentic AI analysis.

Cache directory

Specify the path to a directory in your filesystem to store cached responses from the LLM.

Trace directory

Configure the absolute path to the directory that contains the saved LLM interaction.

Trace enabled

Enable to trace MTA communication with the LLM. Traces are stored in the trace directory that you configured.

Demo mode

Enable to run Red Hat Developer Lightspeed for MTA in demo mode that uses the LLM responses saved in the cache directory for analysis.

Solution Server

Edit the Solution Server configuration in the settings.json file:

  • "enabled": Set to true to connect the Solution Server client (the Red Hat Developer Lightspeed for MTA extension) to the Solution Server.
  • "url": The URL of the Solution Server endpoint.
  • "auth": The authentication settings allow you to configure how clients authenticate to the Solution Server:

    • "enabled": Set to true to enable authentication. If you enable authentication, you must configure the Solution Server realm.
    • "insecure": Set to true to skip SSL certificate verification when clients connect to the Solution Server. Set to false to enforce SSL certificate verification for secure connections.
    • "realm": The name of the Keycloak realm for the Solution Server. If you enabled authentication for the Solution Server, you must configure a Keycloak realm to allow clients to connect to the Solution Server. An administrator can configure SSL for the realm.

Debug: Webview

Enable debug level logging for Webview message handling in VS Code.
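
Many of these settings can also be set directly in the settings.json file. The following is a minimal sketch only; the key names under the mta-vscode-extension namespace are assumptions made for illustration (only the Solution Server key shown later in this chapter appears in this documentation), so confirm them against the auto-completion suggestions in your settings.json file:

    {
        // Assumed key names, shown for illustration only; confirm them
        // in the Settings UI or through settings.json auto-completion.
        "mta-vscode-extension.logLevel": "debug",
        "mta-vscode-extension.genai.enabled": true,
        "mta-vscode-extension.genai.agentMode": false,
        "mta-vscode-extension.genai.excludedDiagnosticSources": ["example-diagnostic-source"],
        "mta-vscode-extension.trace.enabled": true,
        "mta-vscode-extension.trace.directory": "/home/user/.mta/traces"
    }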

See Configuring the Solution Server settings for an example Solution Server configuration.

5.2. Configuring the Solution Server settings

You need a Keycloak realm and the Solution Server URL to connect the Red Hat Developer Lightspeed for MTA extension with the Solution Server.

Prerequisites

  • The Solution Server URL is available.
  • An administrator configured the Keycloak realm for the Solution Server.

Procedure

  1. Press Ctrl + Shift + P or Cmd + Shift + P to open the Command Palette and enter Preferences: Open User Settings (JSON).
  2. In the settings.json file, press Ctrl + Space to enable auto-completion for the configurable Solution Server fields.
  3. Modify the following configuration as necessary:

    {
        "mta-vscode-extension.solutionServer": {
            "url": "https://mta-openshift-mta-kai.apps.konveyor-ai.example.com/hub/services/kai/api",
            "enabled": true,
            "auth": {
                "enabled": true, // you must enter the username and password
                "insecure": true,
                "realm": "mta"
            }
        }
    }
    Note

    When you enable Solution Server authentication for the first time, you must enter the username and password in the VS Code search bar.

    Tip

    Enter MTA: Restart Solution Server in the Command Palette to restart the Solution Server.

5.3. Configuring the profile settings

You can use the Visual Studio (VS) Code plugin to run an analysis to discover issues in the code. You can optionally enable Red Hat Developer Lightspeed for migration toolkit for applications to get AI-assisted code suggestions.

To generate code changes using Red Hat Developer Lightspeed for MTA, you must configure a profile that contains all the necessary configurations, such as source and target technologies and the API key to connect to your chosen large language model (LLM).

Prerequisites

  • If you opt to use the Solution Server, you completed the Solution Server configuration in the Tackle custom resource.
  • You opened a Java project in your VS Code workspace.

Procedure

  1. Open the MTA View Analysis page in either of the following ways:

    1. Click the book icon on the MTA: Issues pane of the MTA extension.
    2. Press Ctrl + Shift + P or Cmd + Shift + P to open the Command Palette and enter MTA: Open Analysis View.
  2. Click the settings button on the MTA View Analysis page to configure a profile for your project. The Get Ready to Analyze pane lists the basic configurations required for an analysis, as described in Table 5.2.

    Verification

    After you complete the profile configuration, close the Get Ready to Analyze pane. You can verify that your configuration works by running an analysis.

Table 5.2. Red Hat Developer Lightspeed for MTA profile settings

Select profile

Create a profile that you can reuse for multiple analyses. The profile name is part of the context provided to the LLM for analysis.

Configure label selector

A label selector filters rules for analysis based on the source or target technology.

Specify one or more target or source technologies (for example, cloud-readiness). Red Hat Developer Lightspeed for MTA uses this configuration to determine the rules that are applied to a project during analysis.

If your custom rules define a new target or source technology, you can type that name to create and add it to the list.

Note

You must configure either target or source technologies before running an analysis.

Set rules

Enable the default rules and select the custom rules that you want MTA to use for an analysis. You can use custom rules in addition to the default rules.

Configure generative AI

This option opens the provider-settings.yaml file, which contains API keys and other parameters for all supported LLMs. By default, Red Hat Developer Lightspeed for MTA is configured to use the OpenAI LLM. To change the model, move the &active anchor to the desired provider block, as shown in the sketch after this table. Modify this file with the required arguments, such as the model and API key, to complete the setup.

See Configuring LLM provider settings to complete the LLM provider configuration.
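
The &active anchor works like a standard YAML anchor and alias pair. The following provider-settings.yaml sketch is illustrative only and assumes keys such as provider, environment, and args; the file generated by your installation may structure its entries differently:

    models:
      OpenAI: &active                 # the &active anchor marks the provider block in use
        provider: "ChatOpenAI"        # provider, environment, and args keys are assumptions
        environment:
          OPENAI_API_KEY: "<your-api-key>"
        args:
          model: "gpt-4o"             # example model name
      AmazonBedrock:                  # to switch models, move the &active anchor to this block
        provider: "ChatBedrock"
        args:
          model: "meta.llama3-70b-instruct-v1:0"
    active: *active                   # resolves to whichever block carries the &active anchor

To switch from OpenAI to another provider, cut the &active anchor from the OpenAI block, paste it onto the block for the model that you want to use, and supply that provider's API key.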
