Using the MTA command-line interface to analyze applications


Migration Toolkit for Applications 8.0

Using the Migration Toolkit for Applications command-line interface to prepare your applications for migration

Red Hat Customer Content Services

Abstract

By using the Migration Toolkit for Applications (MTA) command-line interface (CLI), you can assess and prioritize migration and modernization efforts for applications written in different languages. You can use the CLI to customize MTA analysis options or integrate with external automation tools.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

The Migration Toolkit for Applications (MTA) command-line interface (CLI) provides a comprehensive set of rules to assess the suitability of your applications for containerization and deployment on Red Hat OpenShift. By using the MTA CLI, you can assess and prioritize migration and modernization efforts for applications written in different languages. For example, you can use MTA to analyze applications written in the following languages:

  • Java
  • Go
  • .NET
  • Node.js
  • Python
Important

Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Important

Analyzing applications written in the Python and Node.js languages is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The CLI produces detailed analysis reports on its own, without requiring the other MTA tools. You can use the CLI to customize MTA analysis options or integrate with external automation tools.

You can use the Migration Toolkit for Applications (MTA) to assess your applications' suitability for migration to multiple target platforms.

MTA supports the following migration paths:

Table 2.1. Supported Java migration paths

| Source platform ⇒ | Migration to JBoss EAP 7 & 8 | OpenShift (cloud readiness) | OpenJDK 11, 17, and 21 | Jakarta EE 9 | Camel 3 & 4 | Spring Boot in Red Hat Runtimes | Quarkus | Open Liberty |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Oracle WebLogic Server | ✓ | ✓ | ✓ | - | - | - | - | - |
| IBM WebSphere Application Server | ✓ | ✓ | ✓ | - | - | - | - | ✓ |
| JBoss EAP 4 | ✓ [a] | ✓ | ✓ | - | - | - | - | - |
| JBoss EAP 5 | ✓ | ✓ | ✓ | - | - | - | - | - |
| JBoss EAP 6 | ✓ | ✓ | ✓ | - | - | - | - | - |
| JBoss EAP 7 | ✓ | ✓ | ✓ | - | - | - | ✓ | - |
| Thorntail | ✓ [b] | - | - | - | - | - | - | - |
| Oracle JDK | - | ✓ | ✓ | - | - | - | - | - |
| Camel 2 | - | ✓ | ✓ | - | ✓ | - | - | - |
| Spring Boot | - | ✓ | ✓ | - | - | ✓ | ✓ | - |
| Any Java application | - | ✓ | ✓ | - | - | - | - | - |
| Any Java EE application | - | - | - | ✓ | - | - | - | - |

[a] Although MTA does not currently provide rules for this migration path, Red Hat Consulting can assist with migration from any source platform to JBoss EAP 7.
[b] Requires JBoss Enterprise Application Platform expansion pack 2 (EAP XP 2).

Table 2.2. .NET migration paths

| Source platform ⇒ | OpenShift (cloud readiness) | Migration to .NET 8.0 |
| --- | --- | --- |
| .NET Framework 4.5+ (Windows only) | ✓ | ✓ |

Important

Analyzing applications written in the .NET language is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Depending on your scenario, you can use the Migration Toolkit for Applications (MTA) CLI to perform the following actions:

  • Run the analysis against a single application.
  • Run the analysis against multiple applications:

    • In MTA versions earlier than 7.1.0, you can enter a series of analyze commands, each against an application and each generating a separate report. For more information, see Running the MTA CLI against an application.
    • In MTA version 7.1.0 and later, you can use the --bulk option to analyze multiple applications at once and generate a single report. Note that this feature is a Developer Preview feature only. For more information, see Analyzing multiple applications.
Important

Starting from MTA version 7.2.0, you can run the application analysis for Java applications in containerless mode. Note that this option is set by default and is used automatically only if all requirements are met. For more information, see Analyzing an application in containerless mode.

However, if you want to analyze applications in languages other than Java or, for example, use transformation commands, you still need to use containers.

Note

Running the analysis in a disconnected environment usually results in fewer incidents because the dependency analysis cannot run accurately without access to Maven.

The MTA CLI supports source code and binary analysis by using analyzer-lsp, a tool that evaluates rules through language providers.

3.1. Analyzing a single application

You can use the Migration Toolkit for Applications (MTA) CLI to perform an application analysis for a single application.

Note

Extracting the list of dependencies from compiled Java binaries is not always possible during the analysis, especially if the dependencies are not embedded within the binary.

Procedure

  1. Optional: List available target technologies for an analysis:

    $ mta-cli analyze --list-targets
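    The command prints the target technologies that you can pass to --target. The following is an abbreviated, illustrative sample; the exact names and output format depend on your MTA version and installed rulesets:

      cloud-readiness
      eap8
      jakarta-ee
      openjdk17
      quarkus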
  2. Run the analysis:

    $ mta-cli analyze --input <path_to_input> --output <path_to_output> --source <source_name> --target <target_name>

    Specify the following arguments:

    • --input: An application to be evaluated.
    • --output: An output directory for the generated reports. mta-cli analyze creates the following analysis reports:

      ./
      ├── analysis.log
      ├── dependencies.yaml
      ├── output.yaml
      ├── shim.log
      ├── static-report
      └── static-report.log
    • --source: A source technology for the application migration, for example, weblogic.
    • --target: A target technology for the application migration, for example, eap8.
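
    For example, an analysis run against a sample WebLogic application might look like the following; the paths and names shown are illustrative:

    $ mta-cli analyze --input ./customer-portal --output ./customer-portal-report --source weblogic --target eap8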
  3. Access the generated analysis report:

    1. In the output of the mta-cli analyze command, copy a path to the index.html analysis report file:

      Report created: <output_report_directory>/index.html
      Access it at this URL: file:///<output_report_directory>/index.html
    2. Paste the path into a browser of your choice.

    Alternatively, press Ctrl and click on the path to the report file.

3.2. Analyzing multiple applications

You can use the Migration Toolkit for Applications (MTA) CLI to perform an application analysis for multiple applications at once and generate a combined report.

Important

Analyzing multiple applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Procedure

  1. Run the analysis for multiple applications.

    Important

    You must enter one input per analyze command, but make sure to enter the same output directory for all inputs.

    For example, to analyze example applications A, B, and C, enter the following commands:

    1. For input A, enter:

      $ mta-cli analyze --bulk --input <path_to_input_A> --output <path_to_output_ABC> --source <source_A> --target <target_A>
    2. For input B, enter:

      $ mta-cli analyze --bulk --input <path_to_input_B> --output <path_to_output_ABC> --source <source_B> --target <target_B>
    3. For input C, enter:

      $ mta-cli analyze --bulk --input <path_to_input_C> --output <path_to_output_ABC> --source <source_C> --target <target_C>
  2. Access the analysis report. MTA generates a single report, listing all issues that must be resolved before the applications can be migrated.

3.3. Analyzing an application in containerless mode

Starting from MTA 7.2.0, you can perform an application analysis for Java applications by using the MTA CLI without installing a container runtime.

Important

In MTA 7.2.0 and later, containerless CLI is the default mode. To enable container runtime usage for the analysis of Java applications, you must set the --run-local flag to false:

--run-local=false

The analysis for applications in other languages automatically runs in container mode.

Prerequisites

  • You installed the MTA CLI. For more information, see Installing the CLI by using a .zip file.
  • You installed Java Development Kit (JDK) version 17 or later.
  • If you use OpenJDK on Red Hat Enterprise Linux (RHEL) or Fedora, you installed the Java devel package.
  • You installed Maven version 3.9.9 or later.
  • The CLI assumes that the path to the mvn binary is registered in the system path variable. Therefore, ensure that you added mvn to the following variable:

    • Path for Windows.
    • PATH for Linux and macOS.
  • You set the JAVA_HOME environment variable.
  • You set the JVM_MAX_MEM system variable.

    Note

    If you do not set JVM_MAX_MEM, the analysis might hang because Java might require more memory than the default JVM_MAX_MEM value.

  • For Gradle analysis:

    • You installed OpenJDK version 8.
    • You set the JAVA8_HOME environment variable to point to the OpenJDK 8 home directory.
    • Your project has a Gradle wrapper.
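
For example, on Linux or macOS, you might prepare the environment for a containerless analysis as follows. This is a minimal sketch; the JDK path and memory value are illustrative and depend on your system:

    $ export JAVA_HOME=/usr/lib/jvm/java-17-openjdk
    $ export PATH="$JAVA_HOME/bin:$PATH"
    $ export JVM_MAX_MEM=2048m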

Procedure

  1. Optional: Display all mta-cli analyze command options:

    $ mta-cli analyze --help
  2. Run the application analysis:

    $ mta-cli analyze --overwrite --input <path_to_input> --output <path_to_output> --target <target_name>
    Note

    The --overwrite option overwrites the output folder if it exists.
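
For example, a containerless analysis of a local Java project might look like the following; the paths and target are illustrative:

    $ mta-cli analyze --overwrite --input ./inventory-service --output ./inventory-report --target eap8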

3.4. The analyze command options

The following are the options that you can use together with the mta-cli analyze command to adjust the command behavior to your needs.

Table 3.1. mta-cli analyze command options

--analyze-known-libraries (bool)

Analyze open-source libraries.

--disable-maven-search

Set --disable-maven-search=true to disable MTA from relying on the Maven search index to determine if a dependency is publicly available (such as an open-source dependency) or internal to the Java binary application during analysis.

When you disable Maven search, MTA first tries to determine dependencies from the JAR file’s POM file (if any). If this method does not succeed, MTA goes through the directory structure to determine dependencies. This method might not produce a reliable dependency classification because the package structure can differ from what MTA expects. You might see more incidents because some dependencies can be wrongly classified as internal.

By default, --disable-maven-search=false. Therefore, MTA uses the SHA digest of the JAR file to search the Maven search index. This setting generates more accurate dependencies but the drawback is that the Maven search index is frequently unavailable.
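
For example, a binary analysis with the Maven search index disabled might look like the following; the paths are illustrative:

$ mta-cli analyze --input ./acme-app.jar --output ./acme-report --target eap8 --disable-maven-search=true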

--context-lines (int)

The number of lines of source code to include in the output for each incident. The default is 100.

--dependency-folders (stringArray)

A directory for dependencies.

--enable-default-rulesets (bool)

Run default rulesets with analysis. The default is true.

--help

Display the available flags for the analyze command.

--http-proxy (string)

An HTTP proxy string URL.

--https-proxy (string)

An HTTPS proxy string URL.

--incident-selector (string)

An expression to select incidents based on custom variables, for example:

!package=io.demo.config-utils

--input (string)

A path to the application source code or a binary.

--jaeger-endpoint (string)

A Jaeger endpoint to collect traces.

--json-output (string)

Create analysis and dependency output as a JSON file.

--label-selector (string)

Run rules based on a specified label selector expression.
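
For example, to run only the rules labeled for a particular target, you might enter the following. The konveyor.io/target label is shown for illustration; use the labels that your rulesets define:

--label-selector="konveyor.io/target=cloud-readiness"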

--list-languages

List all languages in the source application. This flag is not supported for binary applications.

--list-providers

List available supported providers.

--list-sources

List rules for available migration sources.

--list-targets

List rules for available migration targets.

--maven-settings (string)

A path to the custom Maven settings file to use.

--mode (string)

An analysis mode. Must be set to either of the following values:

  • full (default)
  • source-only

--no-proxy (string)

Proxy-excluded URLs (relevant only with proxy).

--output (string)

A path to the directory for analysis output.

--overwrite (bool)

Overwrite the output directory.

--rules (stringArray)

A filename or directory that contains rule files.

--run-local

Enable or disable container runtime usage for Java applications. For example, to enable container runtime, set --run-local to false. Note that the analysis of non-Java applications runs in container mode only.
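
For example, to run the analysis of a Java application in a container instead of locally, you might enter the following; the paths are illustrative:

$ mta-cli analyze --run-local=false --input ./customer-portal --output ./customer-portal-report --target eap8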

--skip-static-report (bool)

Do not generate the static report.

--source (string)

A source technology to consider for the analysis. To specify multiple sources, repeat the parameter, for example:

--source <source_1> --source <source_2> ...

--target (string)

A target technology to consider for the analysis. To specify multiple targets, repeat the parameter, for example:

--target <target_1> --target <target_2> ...

--log-level (uint32)

A log level. The default is 4.

--no-cleanup (bool)

Do not clean up temporary resources.

Starting from Migration Toolkit for Applications (MTA) version 7.1.0, you can run the application analysis on applications written in languages other than Java. You can perform the analysis in either of the following ways:

  • Select a supported language provider to run the analysis for.
  • Overwrite the existing supported language provider with your own unsupported language provider, and then run the analysis on it.
Important

Analyzing applications written in languages other than Java is only possible in container mode. You can use the containerless CLI only for Java applications. For more information, see Analyzing an application in containerless mode.

You can explicitly set a supported language provider according to your application’s language, and then run the analysis.

Prerequisites

  • You have the latest version of MTA CLI installed on your system.

Procedure

  1. List language providers supported for the analysis:

    $ mta-cli analyze --list-providers
  2. Run the application analysis for the selected language provider:

    $ mta-cli analyze --input <path_to_input> --output <path_to_output> --provider <language_provider> --rules <path_to_custom_rules>
    Important

    Note that if you do not set the --provider option, the analysis might fail because it detects unsupported providers. The analysis will complete without --provider only if all discovered providers are supported.
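
    For example, an analysis of a Go application with an explicitly selected provider might look like the following; the provider name, paths, and rules directory are illustrative:

    $ mta-cli analyze --input ./inventory-service --output ./inventory-report --provider go --rules ./go-rules/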

You can run the analysis for an unsupported language provider. To do so, you must overwrite the existing supported language provider with your own unsupported language provider.

Important

You must create a configuration file for your unsupported language provider before overriding the supported provider.

Prerequisites

  • You created a configuration file for your unsupported language provider, for example:

    [
      {
        "name": "java",
        "address": "localhost:14651",
        "initConfig": [
          {
            "location": "<java-app-path>",
            "providerSpecificConfig": {
              "bundles": "<bundle-path>",
              "jvmMaxMem": "2G"
            },
            "analysisMode": "source-only"
          }
        ]
      }
    ]

Procedure

  • Override an existing supported language provider with your unsupported provider and run the analysis:

    $ mta-cli analyze --provider-override <path_to_configuration_file> --output <path_to_output> --rules <path_to_custom_rules>

Chapter 5. Reviewing an analysis report

After analyzing an application, you can access an analysis report to check the details of the application migration effort.

5.1. Accessing an analysis report

When you run an application analysis, a report is generated in the output directory that you specify by using the --output argument in the command line.

Procedure

  • Copy the path of the index.html file from the analysis output and paste it in a browser of your choice:

    Report created: <output_report_directory>/index.html
    Access it at this URL: file:///<output_report_directory>/index.html

    Alternatively, press Ctrl and click on the path of the index.html file.

5.2. Analysis report sections

The following are sections of an analysis report that are available after the application analysis is complete. These sections contain additional details about the migration of an application.

Note

You can only review the report applicable to the current application.

Important

Insights is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Table 5.1. Analysis report sections

Dashboard

An overview of the incidents and total story points, sorted by category.

Issues

A concise summary of all issues and their details that require attention.

Dependencies

All Java-packaged dependencies found within the application.

Technologies

All embedded libraries grouped by functionality. Use this report to display the technologies used in each application.

Insights

Information about a violation generated by a rule with zero effort. Issues are generated by general rules, whereas string tags are generated by the tagging rules. String tags indicate the presence of a technology but do not show the code location. Insights contain information about the technologies used in the application and their usage in the code.

Insights do not impact the migration. For example, an insight might be generated by a rule that searches for deprecated API usage in the code; the usage does not impact the current migration, but it can be tracked and fixed when needed in the future.

Unlike with issues, you do not need to fix insights for a successful migration. They are generated by any rule that does not have a positive effort value and category assigned. They might have a message and tag.

5.3. Reviewing the analysis issues and incidents

After an analysis is complete, you can review issues that might appear during an application migration. Each issue contains a list of files where a rule matched one or more times. These files include all the incidents within the issue. Each incident contains a detailed explanation of the issue and how to fix it.

Procedure

  1. Open the analysis report. For more information, see Accessing an analysis report.
  2. Click Issues.
  3. Click on the issue you want to check.
  4. Under the File tab, click on a file to display an incident or incidents that triggered the issue.
  5. Display the incident message by hovering over the line that triggered the incident, for example:

    Use the Quarkus Maven plugin by adding the following sections to the pom.xml file:

    <properties>
      <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id>
      <quarkus.platform.version>3.1.0.Final</quarkus.platform.version>
    </properties>
    <build>
      <plugins>
        <plugin>
          <groupId>${quarkus.platform.group-id}</groupId>
          <artifactId>quarkus-maven-plugin</artifactId>
          <version>${quarkus.platform.version}</version>
          <extensions>true</extensions>
          <executions>
            <execution>
              <goals>
                <goal>build</goal>
                <goal>generate-code</goal>
                <goal>generate-code-tests</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

You can transform Java application source code by using the transform openrewrite command.

Important

Performing transformation requires the container runtime to be configured.

6.1. Transforming application source code

To update Java libraries or frameworks, for example, javax or Spring Boot, you can transform Java application source code by using the transform openrewrite command. The openrewrite subcommand allows running OpenRewrite recipes on source code.

Note

You can only use a single target to run the transform openrewrite command.

Prerequisites

  • You configured the container runtime.

Procedure

  1. Display the available OpenRewrite recipes:

    $ mta-cli transform openrewrite --list-targets
  2. Transform the application source code:

    $ mta-cli transform openrewrite --input=<path_to_source_code> --target=<target_from_the_list>

Verification

  • Inspect the target application source code diff to see the transformation.
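
For example, a transformation that replaces javax imports with their jakarta equivalents might look like the following; the input path is illustrative, and the target name must be taken from the --list-targets output:

    $ mta-cli transform openrewrite --input=./customer-portal --target=jakarta-imports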

6.2. Available OpenRewrite recipes

The following are the OpenRewrite recipes that you can use for transforming application source code.

Table 6.1. Available OpenRewrite recipes

| Migration path | Purpose | The rewrite.config file location | Active recipes |
| --- | --- | --- | --- |
| Java EE to Jakarta EE | Replace import of javax packages with equivalent jakarta packages. Replace javax artifacts, declared within pom.xml files, with the jakarta equivalents. | <MTA_HOME>/rules/openrewrite/jakarta/javax/imports/rewrite.yml | org.jboss.windup.JavaxToJakarta |
| Java EE to Jakarta EE | Rename bootstrapping files. | <MTA_HOME>/rules/openrewrite/jakarta/javax/bootstrapping/rewrite.yml | org.jboss.windup.jakarta.javax.BootstrappingFiles |
| Java EE to Jakarta EE | Transform the persistence.xml file configuration. | <MTA_HOME>/rules/openrewrite/jakarta/javax/xml/rewrite.yml | org.jboss.windup.javax-jakarta.PersistenceXML |
| Spring Boot to Quarkus | Replace the spring.jpa.hibernate.ddl-auto property within files matching application*.properties. | <MTA_HOME>/rules/openrewrite/quarkus/springboot/properties/rewrite.yml | org.jboss.windup.sb-quarkus.Properties |

6.3. The openrewrite command options

The following are the options that you can use together with the mta-cli transform openrewrite command to adjust the command behavior to your needs.

Table 6.2. The mta-cli transform openrewrite command options

--goal (string)

A target goal. The default is "dryRun".

--help

Display all mta-cli transform openrewrite command options.

--input (string)

A path to the application source code directory.

--list-targets

List all available OpenRewrite recipes.

--maven-settings (string)

A path to a custom Maven settings file.

--target (string)

A target OpenRewrite recipe.

--log-level (uint32)

A log level. The default is 4.

--no-cleanup

Do not clean up temporary resources.

Starting from MTA version 7.3.0, you can use the discover and generate commands in containerless mode to automatically generate the manifests needed to deploy a Cloud Foundry (CF) application in the OpenShift Container Platform:

  • Use the discover command to generate the discovery manifest in the YAML format directly from a CF instance or from any of the following sources:

    • A single application manifest
    • A CF manifest
    • A path to the directory with multiple manifest files, for example, with application manifests, CF manifests, or both of these manifest types.

    The discovery manifest preserves the specifications found in the CF manifest. The specifications define the metadata, runtime, and platform configurations.

  • Use the generate command to generate the deployment manifest for OCP deployments by using the discovery manifest. The deployment manifest is generated by using a templating engine, such as Helm, that converts the discovery manifest into a Kubernetes-native format. You can also use this command to generate non-Kubernetes manifests, such as a Dockerfile or a configuration file.
Important

Generating platform assets for application deployment is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

Benefits of generating deployment assets

Generating deployment assets has the following benefits:

  • Generating the Kubernetes and non-Kubernetes deployment manifests.
  • Generating deployment manifests by using familiar template engines, for example, Helm, that are widely used for Kubernetes deployments.
  • Adhering to Kubernetes best practices when preparing the deployment manifest by using Helm templates.

7.1. Generating a discovery manifest

You can generate the discovery manifest for the Cloud Foundry (CF) application by using the discover command. The discovery manifest preserves configurations, such as application properties, resource allocations, environment variables, and service bindings found in the CF manifest.

Prerequisites

  • You have Cloud Foundry (v3) as a source platform.
  • You installed MTA CLI version 7.3.0 or later.

Procedure

  1. Open the terminal application and navigate to the <MTA_HOME>/ directory.
  2. List the supported platforms for the discovery process:

    $ mta-cli discover --list-platforms
  3. Generate the discovery manifest:

    $ mta-cli discover cloud-foundry --input <path_to_input> --output-dir <path_to_output_directory>
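
    For example, a discovery run against a local CF application manifest might look like the following; the file and directory names are illustrative:

    $ mta-cli discover cloud-foundry --input ./cf-manifest.yaml --output-dir ./discovery-output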

You can use a live discovery if you want to determine what is deployed in a certain Cloud Foundry (CF) cluster. For example, you can determine how many applications are in the cluster. You can also use the live discovery if you do not have access to manifest YAML files.

You can run the live discovery for a remote CF instance by using the mta-cli discover cloud-foundry --use-live-connection --spaces=<space_name> command.

Important

You must always define Cloud Foundry spaces to analyze during a live discovery by using the --spaces option.

Prerequisites

  • You have permission to remotely connect to the CF instance.

Procedure

  1. Optional: Investigate the contents of the remote CF instance:

    $ cf spaces
    $ cf apps
  2. Create a directory of your choice and copy the CF configuration file into it:

    $ mkdir <path_to_the_directory>/.cf
    $ cp ~/.cf/config.json <path_to_the_directory>/.cf/
  3. Run the live discovery in a remote CF instance:

    $ mta-cli discover cloud-foundry --use-live-connection --spaces=<space_name> --output-dir <path_to_output_directory> --cf-config=<path_to_CF_config_file>

    The command runs the discovery for each application from each space.

    If you want to run the discovery for a specific application, enter, for example:

    $ mta-cli discover cloud-foundry --use-live-connection --app-name=<application_name> --spaces=<space_name> --output-dir <path_to_output_directory> --cf-config=<path_to_CF_config_file>

You can conceal sensitive information, for example, services and docker credentials, in a Cloud Foundry (CF) discovery manifest by using the mta-cli discover cloud-foundry --conceal-sensitive-data command. This command generates the following files:

  • A discovery manifest
  • A file with concealed data
Note

If you do not specify the --conceal-sensitive-data option, the option is automatically set to false.

Procedure

  1. Display the contents of the CF manifest and locate sensitive data:

    $ cat <manifest_name>.yaml
    name: <manifest_name>
    disk_quota: 512M
    memory: 500M
    timeout: 10
    docker:
     image: myregistry/myapp:latest
     username: docker-registry-user
  2. Generate the discovery manifest for the CF application as an output file and conceal sensitive data:

    $ mta-cli discover cloud-foundry --conceal-sensitive-data=true --input <path_to_application_manifest> --output-dir <path_to_output_directory>

Verification

  1. Display the repository structure:

    $ tree <path_to_discovery_manifest>
    <path_to_discovery_manifest>
    ├── discover_manifest_<app-name>.yaml
    ├── secrets_<discovery_manifest_name>.yaml
    
    1 directory, 2 files
  2. Display the contents of the discovery manifest:

    $ cat <discovery_manifest_name>.yaml
    name: <discovery_manifest_name>
    timeout: 10
    docker:
     image: myregistry/myapp:latest
     username: $(f0e9ea9e-1913-446f-8483-da9301373eef)
    disk: 512M
    memory: 500M
    instances: 1

    The sensitive data was replaced with a UUID (Universally Unique Identifier).

  3. Display the contents of the secrets_<discovery_manifest_name>.yaml file:

    $ cat secrets_<discovery_manifest_name>.yaml
    f0e9ea9e-1913-446f-8483-da9301373eef: docker-registry-user

    The file contains the mapping of the UUID to the concealed sensitive data.

7.4. Generating a deployment manifest

You can auto-generate the Red Hat OpenShift Container Platform deployment manifest for the Cloud Foundry (CF) application by using the generate command. Based on the Helm template that you provide, the command generates manifests, such as a ConfigMap, and non-Kubernetes manifests, such as a Dockerfile, for application deployment.

Prerequisites

  • You have Cloud Foundry (v3) as a source platform.
  • You have OpenShift Container Platform as a target platform.
  • You installed MTA CLI version 7.3.0 or later.
  • You generated a discovery manifest.
  • You created a Helm template with the required configuration for the OCP deployment.

Procedure

  1. Open the terminal application and navigate to the <MTA_HOME>/ directory.
  2. Generate the deployment manifest as an output file:

    $ mta-cli generate helm --chart-dir helm_sample \
      --input <path_to_discovery_manifest> \
      --output-dir <location_of_deployment_manifest>
  3. Verify the ConfigMap:

    $ cd <location_of_deployment_manifest>
    $ cat configmap.yaml

  4. Verify the Dockerfile:

    $ cat Dockerfile

7.5. The discover and generate command options

You can use the following options together with the discover or generate command to adjust the command behavior to your needs.

Table 7.1. Options for discover and generate commands

discover

--app-name

An application to run the discovery for.

-h, --help

Display details for different command arguments.

--list-apps

List the available applications on the source platform, for example:

$ mta-cli discover cloud-foundry --use-live-connection --spaces=space,space-2 --cf-config=/home/gloria/ --list-apps
INFO[0000] Cloud Foundry client created successfully
INFO[0000] Analyzing space space_name=space
INFO[0006] Apps discovered count=2
INFO[0006] Analyzing space space_name=space-2
INFO[0007] Apps discovered count=1
Space: space
- nginx
- test-app
Space: space-2
- test-app

--list-platforms

List the supported platforms for the discovery process.

--log-level

Set the log level, for example, discover --log-level 1. The default log level is 4.

discover cloud-foundry

Discover Cloud Foundry applications.

--conceal-sensitive-data

Extract sensitive information from a discovery manifest and put it into a separate file.

--input

Specify the location of the YAML manifest file to discover the CF applications, for example:

  • A path to the single application manifest.
  • A path to the Cloud Foundry manifest.
  • A path to the directory with multiple manifest files.

--output

Specify the location to save the <discovery-manifest-name>.yaml file.

--spaces

A comma-separated list of Cloud Foundry spaces to analyze during a live discovery, for example:

--spaces=space1,space2,…

--use-live-connection

Enable real-time discovery by using live platform connections.

generate

-h, --help

Display details for different command arguments.

generate helm

Generate a deployment manifest by using the Helm template.

--chart-dir

Specify a directory that contains the Helm chart.

--input

Specify a location of the <discovery-manifest-name>.yaml file to generate the deployment manifest.

--non-k8s-only

Generate only non-Kubernetes templates, such as a Dockerfile.

--output-dir

Specify a location to which the deployment manifests are saved.

--set

Override values of attributes in the discovery manifest with the key-value pair entered from the CLI.

7.6. Assets generation example

The following is an example of generating discovery and deployment manifests of a Cloud Foundry (CF) Node.js application.

For this example, the following files and directories are used:

  • CF Node.js application manifest name: cf-nodejs-app.yaml
  • Discovery manifest name: discover.yaml
  • Location of the application Helm chart: helm_sample
  • Deployment manifests: a ConfigMap and a Dockerfile
  • Output location of the deployment manifests: newDir

This example assumes that the cf-nodejs-app.yaml file is located in the same directory as the MTA CLI binary. If the CF application manifest is located elsewhere, you can enter the path to the manifest as the input.

Prerequisites

  • You installed MTA CLI 7.3.0 or later.
  • You have a CF application manifest as a YAML file.
  • You created a Helm template with the required configurations for the OCP deployment.

Procedure

  1. Open the terminal application and navigate to the <MTA_HOME>/ directory.
  2. Verify the content of the CF Node.js application manifest:

    $ cat cf-nodejs-app.yaml
    name: cf-nodejs
    lifecycle: cnb
    buildpacks:
      - docker://my-registry-a.corp/nodejs
      - docker://my-registry-b.corp/dynatrace
    memory: 512M
    instances: 1
    random-route: true
  3. Generate the discovery manifest:

    $ mta-cli discover cloud-foundry \
      --input cf-nodejs-app.yaml \
      --output discover.yaml
  4. Verify the content of the discovery manifest:

    $ cat discover.yaml
    name: cf-nodejs
    randomRoute: true
    timeout: 60
    buildPacks:
    - docker://my-registry-a.corp/nodejs
    - docker://my-registry-b.corp/dynatrace
    instances: 1
  5. Generate the deployment manifest in the newDir directory by using the discover.yaml file:

    $ mta-cli generate helm \
      --chart-dir helm_sample \
      --input discover.yaml \
      --output-dir newDir
  6. Check the contents of the Dockerfile in the newDir directory:

    $ cat ./newDir/Dockerfile
    FROM busybox:latest
    
    RUN echo "Hello cf-nodejs!"
  7. Check the contents of the ConfigMap in the newDir directory:

    $ cat ./newDir/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cf-nodejs-config
    data:
      RANDOM_ROUTE: true
      TIMEOUT: "60"
      BUILD_PACKS: |
        - docker://my-registry-a.corp/nodejs
        - docker://my-registry-b.corp/dynatrace
      INSTANCES: "1"
  8. In the ConfigMap, override the name to nodejs-app and INSTANCES to 2:

    $ mta-cli generate helm \
      --chart-dir helm_sample \
      --input discover.yaml \
      --set name="nodejs-app" \
      --set instances=2 \
      --output-dir newDir
  9. Check the contents of the ConfigMap again:

    $ cat ./newDir/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nodejs-app
    data:
      RANDOM_ROUTE: true
      TIMEOUT: "60"
      BUILD_PACKS: |
        - docker://my-registry-a.corp/nodejs
        - docker://my-registry-b.corp/dynatrace
      INSTANCES: "2"

Chapter 8. MTA CLI known issues

This section provides highlighted known issues in MTA CLI.

Limitations with Podman on Microsoft Windows

The CLI is built and distributed with support for Microsoft Windows.

However, when running any container image based on Red Hat Enterprise Linux 9 (RHEL9) or Universal Base Image 9 (UBI9), the following error can be returned when starting the container:

Fatal glibc error: CPU does not support x86-64-v2

This error occurs because Red Hat Enterprise Linux 9 and Universal Base Image 9 container images must run on a CPU architecture that supports x86-64-v2.

For more details, see Running Red Hat Enterprise Linux 9 (RHEL) or Universal Base Image (UBI) 9 container images fail with "Fatal glibc error: CPU does not support x86-64-v2".

The CLI runs the container runtime correctly. However, different container runtime configurations are not supported.

Although unsupported, you can run the CLI with Docker instead of Podman, which resolves this issue.

To do so, set the CONTAINER_TOOL environment variable to the path to the Docker binary when you run the CLI:

CONTAINER_TOOL=/usr/local/bin/docker mta-cli analyze

While this is not supported, it allows you to explore the CLI while you work to upgrade your hardware or move to hardware that supports x86-64-v2.

Appendix A. Reference material

The following is information that you might find useful when using the Migration Toolkit for Applications (MTA) CLI.

A.1. Supported technology tags

The following technology tags are supported in MTA 8.0.0:

  • 0MQ Client
  • 3scale
  • Acegi Security
  • AcrIS Security
  • ActiveMQ library
  • Airframe
  • Airlift Log Manager
  • AKKA JTA
  • Akka Testkit
  • Amazon SQS Client
  • AMQP Client
  • Anakia
  • AngularFaces
  • ANTLR StringTemplate
  • AOP Alliance
  • Apache Accumulo Client
  • Apache Aries
  • Apache Commons JCS
  • Apache Commons Validator
  • Apache Flume
  • Apache Geronimo
  • Apache Hadoop
  • Apache HBase Client
  • Apache Ignite
  • Apache Karaf
  • Apache Mahout
  • Apache Meecrowave JTA
  • Apache Sirona JTA
  • Apache Synapse
  • Apache Tapestry
  • Apiman
  • Applet
  • Arquillian
  • AspectJ
  • Atomikos JTA
  • Avalon Logkit
  • Axion Driver
  • Axis
  • Axis2
  • BabbageFaces
  • Bean Validation
  • BeanInject
  • Blaze
  • Blitz4j
  • BootsFaces
  • Bouncy Castle
  • ButterFaces
  • Cache API
  • Cactus
  • Camel
  • Camel Messaging Client
  • Camunda
  • Cassandra Client
  • CDI
  • Cfg Engine
  • Chunk Templates
  • Cloudera
  • Coherence
  • Common Annotations
  • Composite Logging
  • Composite Logging JCL
  • Concordion
  • CSS
  • Cucumber
  • Dagger
  • DbUnit
  • Demoiselle JTA
  • Derby Driver
  • Drools
  • DVSL
  • Dynacache
  • EAR Deployment
  • Easy Rules
  • EasyMock
  • Eclipse RCP
  • EclipseLink
  • Ehcache
  • EJB
  • EJB XML
  • Elasticsearch
  • Entity Bean
  • EtlUnit
  • Eureka
  • Everit JTA
  • Evo JTA
  • Feign
  • File system Logging
  • FormLayoutMaker
  • FreeMarker
  • Geronimo JTA
  • GFC Logging
  • GIN
  • GlassFish JTA
  • Google Guice
  • Grails
  • Grapht DI
  • Guava Testing
  • GWT
  • H2 Driver
  • Hamcrest
  • Handlebars
  • HavaRunner
  • Hazelcast
  • Hdiv
  • Hibernate
  • Hibernate Cfg
  • Hibernate Mapping
  • Hibernate OGM
  • HighFaces
  • HornetQ Client
  • HSQLDB Driver
  • HTTP Client
  • HttpUnit
  • ICEfaces
  • Ickenham
  • Ignite JTA
  • Ikasan
  • iLog
  • Infinispan
  • Injekt for Kotlin
  • Iroh
  • Istio
  • Jamon
  • Jasypt
  • Java EE Batch
  • Java EE Batch API
  • Java EE JACC
  • Java EE JAXB
  • Java EE JAXR
  • Java EE JSON-P
  • Java Transaction API
  • JavaFX
  • JavaScript
  • Javax Inject
  • JAX-RS
  • JAX-WS
  • JayWire
  • JBehave
  • JBoss Cache
  • JBoss EJB XML
  • JBoss logging
  • JBoss Transactions
  • JBoss Web XML
  • JBossMQ Client
  • JBPM
  • JCA
  • Jcabi Log
  • JCache
  • JCunit
  • JDBC
  • JDBC datasources
  • JDBC XA datasources
  • Jersey
  • Jetbrick Template
  • Jetty
  • JFreeChart
  • JFunk
  • JGoodies
  • JMock
  • JMockit
  • JMS Connection Factory
  • JMS Queue
  • JMS Topic
  • JMustache
  • JNA
  • JNI
  • JNLP
  • JPA entities
  • JPA Matchers
  • JPA named queries
  • JPA XML
  • JSecurity
  • JSF
  • JSF Page
  • JSilver
  • JSON-B
  • JSP Page
  • JSTL
  • JTA
  • Jukito
  • JUnit
  • Ka DI
  • Keyczar
  • Kibana
  • KLogger
  • Kodein
  • Kotlin Logging
  • KouInject
  • KumuluzEE JTA
  • LevelDB Client
  • Liferay
  • LiferayFaces
  • Lift JTA
  • Log.io
  • Log4J
  • Log4s
  • Logback
  • Logging Utils
  • Logstash
  • Lumberjack
  • Macros
  • Magicgrouplayout
  • Mail
  • Management EJB
  • MapR
  • MckoiSQLDB Driver
  • Memcached
  • Message (MDB)
  • Micro DI
  • Micrometer
  • Microsoft SQL Driver
  • MiGLayout
  • MinLog
  • Mixer
  • Mockito
  • MongoDB Client
  • Monolog
  • Morphia
  • MRules
  • Mule
  • Mule Functional Test Framework
  • MultithreadedTC
  • Mycontainer JTA
  • MyFaces
  • MySQL Driver
  • Narayana Arjuna
  • Needle
  • Neo4j
  • NLOG4J
  • Nuxeo JTA/JCA
  • OACC
  • OAUTH
  • OCPsoft Logging Utils
  • OmniFaces
  • OpenFaces
  • OpenPojo
  • OpenSAML
  • OpenWS
  • OPS4J Pax Logging Service
  • Oracle ADF
  • Oracle DB Driver
  • Oracle Forms
  • Orion EJB XML
  • Orion Web XML
  • Oscache
  • OTR4J
  • OW2 JTA
  • OW2 Log Util
  • OWASP CSRF Guard
  • OWASP ESAPI
  • Peaberry
  • Pega
  • Persistence units
  • Petals EIP
  • PicketBox
  • PicketLink
  • PicoContainer
  • Play
  • Play Test
  • Plexus Container
  • Polyforms DI
  • Portlet
  • PostgreSQL Driver
  • PowerMock
  • PrimeFaces
  • Properties
  • Qpid Client
  • RabbitMQ Client
  • RandomizedTesting Runner
  • Resource Adapter
  • REST Assured
  • Restito
  • RichFaces
  • RMI
  • RocketMQ Client
  • Rythm Template Engine
  • SAML
  • Santuario
  • Scalate
  • Scaldi
  • Scribe
  • Seam
  • Security Realm
  • ServiceMix
  • Servlet
  • ShiftOne
  • Shiro
  • Silk DI
  • SLF4J
  • Snippetory Template Engine
  • SNMP4J
  • Socket handler logging
  • Spark
  • Specsy
  • Spock
  • Spring
  • Spring Batch
  • Spring Boot
  • Spring Boot Actuator
  • Spring Boot Cache
  • Spring Boot Flo
  • Spring Cloud Config
  • Spring Cloud Function
  • Spring Data
  • Spring Data JPA
  • spring DI
  • Spring Integration
  • Spring JMX
  • Spring Messaging Client
  • Spring MVC
  • Spring Properties
  • Spring Scheduled
  • Spring Security
  • Spring Shell
  • Spring Test
  • Spring Transactions
  • Spring Web
  • SQLite Driver
  • SSL
  • Standard Widget Toolkit (SWT)
  • Stateful (SFSB)
  • Stateless (SLSB)
  • Sticky Configured
  • Stripes
  • Struts
  • SubCut
  • Swagger
  • SwarmCache
  • Swing
  • SwitchYard
  • Syringe
  • Talend ESB
  • Teiid
  • TensorFlow
  • Test Interface
  • TestNG
  • Thymeleaf
  • TieFaces
  • tinylog
  • Tomcat
  • Tornado Inject
  • Trimou
  • Trunk JGuard
  • Twirl
  • Twitter Util Logging
  • UberFire
  • Unirest
  • Unitils
  • Vaadin
  • Velocity
  • Vlad
  • Water Template Engine
  • Web Services Metadata
  • Web Session
  • Web XML File
  • WebLogic Web XML
  • Webmacro
  • WebSocket
  • WebSphere EJB
  • WebSphere EJB Ext
  • WebSphere Web XML
  • WebSphere WS Binding
  • WebSphere WS Extension
  • Weka
  • Weld
  • WF Core JTA
  • Wicket
  • Winter
  • WSDL
  • WSO2
  • WSS4J
  • XACML
  • XFire
  • XMLUnit
  • Zbus Client
  • Zipkin

A.2. Rule story points

Story points are an abstract metric commonly used in Agile software development to estimate the level of effort required to implement a feature or change.

The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value must be consistent across tasks.

The following are the general guidelines MTA uses when estimating the level of effort required for a rule.

Table A.1. Guidelines for the level of effort estimation

| Level of effort | Story points | Description |
| --- | --- | --- |
| Information | 0 | An informational warning with very low or no priority for migration. |
| Trivial | 1 | The migration is a trivial change or a simple library swap with no or minimal API changes. |
| Complex | 3 | The changes required for the migration task are complex, but have a documented solution. |
| Redesign | 5 | The migration task requires a redesign or a complete library change, with significant API changes. |
| Rearchitecture | 7 | The migration requires a complete rearchitecture of the component or subsystem. |
| Unknown | 13 | The migration solution is not known and may need a complete rewrite. |

A.2.2. Migration task categories

In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort.

Mandatory
The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform.
Optional
If the migration task is not completed, the application should work, but the results might not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed.
Potential
The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type.
Information
The task is included to inform you of the existence of certain files. These might need to be examined or modified as part of the modernization effort, but changes are typically not required.

Appendix B. Contributing to the MTA project

You can help the Migration Toolkit for Applications to cover most application constructs and server configurations, including yours.

You can help in any of the following ways:

  • Send an email to jboss-migration-feedback@redhat.com and let us know what MTA migration rules must cover.
  • Provide example applications to test migration rules.
  • Identify application components and problem areas that might be difficult to migrate:

    • Write a short description of the problem migration areas.
    • Write a brief overview describing how to solve the problem in migration areas.
  • Try the Migration Toolkit for Applications on your application and report any issues you encounter. MTA uses Jira as its issue tracking system. If you encounter an issue running MTA, submit a Jira issue.
  • Contribute to the Migration Toolkit for Applications rules repository:

    • Write a Migration Toolkit for Applications rule to identify or automate a migration process.
    • Create a test for the new rule.

      For more information, see Rule Development Guide.

  • Contribute to the project source code:

    • Create a core rule.
    • Improve MTA performance or efficiency.

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.