Chapter 6. Administering the Ansible Lightspeed Service
As an organization administrator, you can use Red Hat Ansible Lightspeed to manage the Ansible Lightspeed service, so that your users and teams can create and use custom automation content. This chapter provides information about how to get set up as an organization administrator on Red Hat Ansible Lightspeed, with details on how to:
- Access the Ansible Lightspeed portal as an organization administrator
- View and manage the Admin dashboard telemetry data
- Configure custom models
If you are using a free 90-day trial account, you need a trial or paid subscription to the Red Hat Ansible Automation Platform, but you do not need a trial or paid subscription to IBM watsonx Code Assistant. This means that you do not need to configure the API key or model ID when setting up or using a trial account.
6.1. Logging in to the Ansible Lightspeed administrator portal
Use the Ansible Lightspeed administrator portal to connect Red Hat Ansible Lightspeed to IBM watsonx Code Assistant.
Prerequisites
- You have organization administrator privileges to a Red Hat Customer Portal organization with a valid Red Hat Ansible Automation Platform subscription.
Procedure
- Log in to the Ansible Lightspeed portal as an organization administrator.
- Click the login button, and enter your Red Hat account username and password. The Ansible Lightspeed Service uses Red Hat Single Sign-On (RH-SSO) for authentication.
  As part of the authentication process, the Ansible Lightspeed Service checks whether your organization has an active Ansible Automation Platform subscription. On successful authentication, the login screen is displayed along with your username and your assigned user role.
- From the login screen, click Admin Portal.
  You are redirected to the Red Hat Ansible Lightspeed with IBM watsonx Code Assistant administrator portal, where you can connect Red Hat Ansible Lightspeed to your IBM watsonx Code Assistant instance.
6.1.1. Logging out of the Ansible Lightspeed Service
To log out of the Ansible Lightspeed Service, you must log out of both the Ansible Lightspeed VS Code extension and the Ansible Lightspeed portal.
Procedure
Log out of the Ansible Lightspeed VS Code extension:
- Click the Person icon. A list of accounts that VS Code is logged into is displayed.
- Select the sign-out option for your Red Hat account.
Log out of the Ansible Lightspeed portal:
- Navigate to the Ansible Lightspeed portal login page.
- Click Log out.
6.2. Viewing and managing the Admin dashboard telemetry
Red Hat Ansible Lightspeed collects the following telemetry data by default:
Operational telemetry data
This is the data that is required to operate and troubleshoot the Ansible Lightspeed service. For more information, refer to the Enterprise Agreement. You cannot disable the collection of operational telemetry data.
This includes the following data:
- Organization you are logged into (Organization ID, account number)
- Large language model (or models) that you are connected to
Admin dashboard telemetry data
This is the data that provides insight into how your organization users are using the Ansible Lightspeed service, and the metrics are displayed on the Admin dashboard.
This includes the following data:
- Prompts and content suggestions, including whether users accepted or rejected the suggestions
- User sentiment feedback
You can also disable the Admin dashboard telemetry if you no longer want to collect and monitor the telemetry data.
Viewing telemetry data on the Admin dashboard is not yet supported on Red Hat Ansible Lightspeed on-premise deployments.
6.2.1. Prerequisites
To view and manage the Admin dashboard telemetry data, ensure that you have the following:
- You have organization administrator privileges to a Red Hat Customer Portal organization with a valid Red Hat Ansible Automation Platform subscription.
- You have installed the Ansible VS Code extension v2.13.148, which is required to collect Admin dashboard telemetry.
Red Hat Ansible Lightspeed does not collect users' personal information, such as usernames or passwords. If any personal information is inadvertently received, the data is deleted. For more information about Red Hat Ansible Lightspeed’s privacy practices, see the Telemetry Data Collection Notice for the Admin dashboard.
6.2.2. What telemetry data is collected?
Red Hat Ansible Lightspeed collects the following telemetry data:
- Details of the organization that you are logged into, such as organization ID and account number
- Large language models that you are connected to
- Inline suggestions that were accepted, rejected, or ignored by your organization users
- User sentiment feedback
- Top 10 modules returned in code recommendations
6.2.3. Viewing the Admin dashboard telemetry
The Admin dashboard displays the analytics telemetry data that you can use to gain insight into how your organization users are using the Ansible Lightspeed service.
The Admin dashboard displays the following charts:
Inline suggestions accepted, rejected, or ignored by users
This graph tracks the number of inline suggestions that were accepted, rejected, or ignored by users in your organization. Use this graph to gain insight into how your organization users are using the Ansible Lightspeed service.
User sentiment
This graph measures the users' feedback (feelings, opinions). Use this graph to gain insight into the overall user experience with Red Hat Ansible Lightspeed.
Top 10 modules returned in code recommendations
This graph displays the top 10 modules returned in code recommendations. Use this metric to determine which modules are being suggested the most to your organization’s automation developers.
Procedure
- Log in to the Ansible Lightspeed with IBM watsonx Code Assistant Hybrid Cloud Console as an organization administrator.
- From the navigation panel, select Ansible Lightspeed > Admin Dashboard.
  The Admin dashboard displays a graphical representation of analytics telemetry data for the last 30 days by default.
- Use the following filters to refine your telemetry data:
- To view the telemetry data for a specific time period or for a custom date range, select the date range from the Quick Date Range list.
- To view the telemetry data for a specific IBM watsonx Code Assistant model only, select the model ID from the Model Name list. By default, the Admin dashboard displays telemetry data for all models.
6.2.4. Disabling the Admin dashboard telemetry
Red Hat Ansible Lightspeed collects the Admin dashboard telemetry data by default. The data provides insight into how your organization users are using the Ansible Lightspeed service. If you no longer want to collect analytics telemetry data for your organization, you can disable the Admin dashboard telemetry.
After you disable the Admin dashboard telemetry, the Ansible Lightspeed service no longer collects the analytics telemetry data for your organization. The earlier telemetry data remains available on the Admin dashboard, but no new data is displayed. If you re-enable the Admin dashboard telemetry, the Ansible Lightspeed service starts collecting data for your organization again, and the metrics are displayed on the Admin dashboard after 24 hours.
Prerequisites
- You have organization administrator privileges to a Red Hat Customer Portal organization with a valid Red Hat Ansible Automation Platform subscription.
Procedure
- Log in to the Ansible Lightspeed portal as an organization administrator.
- From the login screen, click Admin Portal.
- Under Admin Portal, click Telemetry.
- To disable the Admin dashboard telemetry, select Operational telemetry data only.
  Note: To re-enable the Admin dashboard telemetry, select Admin dashboard telemetry data.
- Click Save.
6.3. Configuring custom models
As an organization administrator, you can create and use fine-tuned, custom models that are trained on your organization’s existing Ansible content. With this capability, you can tune the models to your organization’s automation patterns and improve the code recommendation experience.
After you create a custom model, you can specify one of the following access types:
Enable access for all users in your organization
You can configure the custom model as the default model for your organization. All users in your organization can use the custom model.
Enable access for select Ansible users in your organization
Using the model-override setting in the Ansible VS Code extension, select Ansible users can tune their Ansible Lightspeed service to use a custom model instead of the default model.
6.3.1. Process for configuring custom models
To configure a custom model, perform the following tasks:
6.3.2. Creating a training data set by using the content parser tool
Use the content parser tool, a command-line interface (CLI) tool, to scan your existing Ansible files and generate a custom model training data set. The training data set includes a list of Ansible files and their paths relative to the project root. You can then upload this data set to IBM watsonx Code Assistant, and use it to create a custom model that is trained on your organization’s existing Ansible content.
6.3.2.1. Methods of creating training data sets
You can generate a training data set by using one of the following methods:
With ansible-lint preprocessing
By default, the content parser tool generates training data sets by using ansible-lint preprocessing. The content parser tool uses ansible-lint rules to scan your Ansible files and ensure that the content adheres to Ansible best practices. If rule violations are found, the content parser tool excludes these files from the generated output. In such scenarios, you must resolve the rule violations, and run the content parser tool once again so that the generated output includes all your Ansible files.
Without ansible-lint preprocessing
You can generate a training data set without ansible-lint preprocessing. In this method, the content parser tool does not scan your Ansible files for ansible-lint rule violations; therefore, the training data set includes all files. Although the training data set includes all files, it might not adhere to Ansible best practices and could affect the quality of your code recommendation experience.
6.3.2.2. Supported data sources
The content parser tool scans the following directories and file formats:
- Local directories
- Archived files, such as .zip, .tar, .tar.gz, .tar.bz2, and .tar.xz files
- Git repository URLs (both private and public repositories)
6.3.2.3. Process of creating a training data set
To create a custom model training data set, perform the following tasks:
- Install the content parser tool on your computer
- Generate a custom model training data set
- View the generated training data set
- Optional: If you generated a training data set with ansible-lint preprocessing and rule violations were detected, resolve the ansible-lint rule violations
- Optional: If you generated multiple training data sets, merge them into a single JSONL file
6.3.2.4. Installing the content parser tool
Install the content parser tool, a command-line interface (CLI) tool, on your computer.
Prerequisites
Ensure that your computer meets the following requirements:
- Python version 3.10 or later.
- A UNIX-based OS, such as Linux or macOS.
  Note: Installation of the content parser tool on Microsoft Windows is not supported.
Procedure
Create a working directory and set up a venv Python virtual environment:

```
$ python -m venv ./venv
$ source ./venv/bin/activate
```

Install the latest version of the content parser tool from the pip repository:

```
$ pip install --upgrade pip
$ pip install --upgrade ansible-content-parser
```

Perform one of the following tasks:
- To generate a training data set without ansible-lint preprocessing, see Generating a custom model training data set.
- To generate a training data set with ansible-lint preprocessing, ensure that the latest version of ansible-lint is installed on your computer:
  - View the ansible-lint versions that are installed on your computer:

    ```
    $ ansible-content-parser --version
    $ ansible-lint --version
    ```

    A list of application versions and their dependencies is displayed.
  - In the output, verify that the version of ansible-lint that was installed with the content parser tool is the same as the previously installed ansible-lint. A mismatch between the installed ansible-lint versions causes inconsistent results from the content parser tool and ansible-lint.

    For example, in the following output, the content parser tool installation includes ansible-lint version 6.20.0, which does not match the previously installed ansible-lint version 6.13.1:

    ```
    $ ansible-content-parser --version
    ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4
    $ ansible-lint --version
    ansible-lint 6.13.1 using ansible 2.15.4
    A new release of ansible-lint is available: 6.13.1 -> 6.20.0
    ```

  - If there is a mismatch in the ansible-lint versions, deactivate and reactivate the venv Python virtual environment:

    ```
    $ deactivate
    $ source ./venv/bin/activate
    ```

  - Verify that the version of ansible-lint that is installed with the content parser tool is now the same as the previously installed ansible-lint:

    ```
    $ ansible-content-parser --version
    $ ansible-lint --version
    ```

    For example, the following output shows that both ansible-lint installations are version 6.20.0:

    ```
    $ ansible-content-parser --version
    ansible-content-parser 0.0.1 using ansible-lint:6.20.0 ansible-core:2.15.4
    $ ansible-lint --version
    ansible-lint 6.20.0 using ansible-core:2.15.4 ansible-compat:4.1.10 ruamel-yaml:0.17.32 ruamel-yaml-clib:0.2.7
    ```
6.3.2.5. Generating a custom model training data set
After installing the content parser tool, run it to scan your custom Ansible files and generate a custom model training data set. You can then upload the training data set to IBM watsonx Code Assistant and create a custom model for your organization. If you used ansible-lint preprocessing and encountered rule violations, you must resolve the rule violations before uploading the training data set to IBM watsonx Code Assistant.
You can generate a training data set with or without ansible-lint preprocessing, as described in Methods of creating training data sets.
Prerequisites
- You must have installed the content parser tool on your computer.
- You must have verified that the version of ansible-lint that is installed with the content parser tool is the same as that of the previously-installed ansible-lint.
Procedure
Run the content parser tool to generate a training data set:
- With ansible-lint preprocessing:

  ```
  $ ansible-content-parser source output
  ```

- Without ansible-lint preprocessing:

  ```
  $ ansible-content-parser source output -S
  ```

The command takes the following parameters:
- source: Specifies the source of the training data set.
- output: Specifies the output directory for the training data set.
- -S or --skip-ansible-lint: Skips ansible-lint preprocessing while generating the training data set.

For example, if the source is the GitHub URL https://github.com/ansible/ansible-tower-samples.git and the output directory is /tmp/out, run:

```
$ ansible-content-parser https://github.com/ansible/ansible-tower-samples.git /tmp/out
```
Optional: To include additional information in the training data set, specify the following parameters when running the content parser tool:
- --source-license: Includes the licensing information of the source directory in the training data set.
- --source-description: Includes the descriptions of the source directory in the training data set.
- --repo-name: Includes the repository name in the training data set. If you do not specify it, the content parser tool generates it from the source name.
- --repo-url: Includes the repository URL in the training data set. If you do not specify it, the content parser tool generates it from the source URL.
- -v or --verbose: Displays console logging information.
The generated training data set for the GitHub repository ansible-tower-samples is formatted with jq, a command-line JSON processing tool.
6.3.2.6. Viewing the generated training data set
After the content parser tool scans your Ansible files, it generates the training data set in an output directory. The training data set includes a ftdata.jsonl file, which is the main output of the content parser tool. The file is available in JSON Lines file format, where each line entry represents a JSON object. You must upload this JSONL file to IBM watsonx Code Assistant to create a custom model.
Result
The generated output directory has the following file structure:

```
output/
├── ftdata.jsonl
├── report.txt
├── repository/
└── metadata/
```

Where:
- ftdata.jsonl: The training data set file, which is the main output of the content parser tool. The file is in JSON Lines format, where each line entry represents a JSON object. You must upload this JSONL file to IBM watsonx Code Assistant to create a custom model.
- report.txt: A human-readable text file that provides a summary of all content parser tool executions.
- repository: A directory that contains files from the source repository. Sometimes, ansible-lint updates the directory according to the configured rules, so the file contents of the output directory might differ from the source repository.
- metadata: A directory that contains multiple metadata files that are generated during each content parser tool execution.
- File counts per type: A list of files according to their file types, such as playbooks, tasks, handlers, and jinja2.
- List of Ansible files that were identified: A list of files identified by ansible-lint with a file name, a file type, and whether the file was excluded from further processing, or automatically fixed by ansible-lint.
- List of Ansible modules found in tasks: A list of modules identified by ansible-lint with a module name, a module type, and whether the file was excluded from further processing, or automatically fixed by ansible-lint.
- Issues found by ansible-lint: A list of issues along with a brief summary of ansible-lint execution results. If ansible-lint encounters files with syntax-check errors in the first execution, then it initiates a second execution and excludes the files with errors from the scan. You can use this information to resolve ansible-lint rule violations.
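Because ftdata.jsonl is a JSON Lines file, each line must parse independently as a JSON object. The following sketch illustrates the format itself; the field names are hypothetical, since the actual schema is defined by the content parser tool:

```python
import json

# Hypothetical JSONL content: one complete JSON object per line.
# The real field names in ftdata.jsonl are defined by the content parser tool.
jsonl_text = (
    '{"input": "- name: Install nginx", "module": "ansible.builtin.package"}\n'
    '{"input": "- name: Start nginx", "module": "ansible.builtin.service"}\n'
)

# Each non-empty line parses on its own, which is what makes the format
# easy to concatenate and to stream.
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
print(len(records))  # 2
```

This line-independence is also why multiple training data sets can later be merged by simple concatenation.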
6.3.2.7. About ansible-lint rule violations
By default, the content parser tool uses ansible-lint rules to scan your Ansible files and ensure that the content adheres to Ansible best practices. If rule violations are found, the content parser tool excludes these files from the generated output. In such scenarios, it is recommended that you fix the files with rule violations before uploading the training data set to IBM watsonx Code Assistant.
By default, ansible-lint applies the rules that are configured in ansible-lint/src/ansiblelint/rules while scanning your Ansible files. For more information about ansible-lint rules, see the Ansible Lint documentation.
6.3.2.7.1. How does the content parser tool handle rule violations?
Using autofixes
The content parser tool runs ansible-lint with the --fix=all option to perform autofixes. If ansible-lint identifies rule violations that have an associated autofix, it automatically fixes or simplifies the issues that violate the rules. If ansible-lint identifies rule violations that do not have an associated autofix, it reports these instances as rule violations that you must fix manually. For more information about autofixes, see Autofix in the Ansible Lint documentation.
Using syntax-checks
Ansible-lint also performs syntax checks while scanning your Ansible files. If any syntax-check errors are found, ansible-lint stops processing the files. For more information about syntax-check errors, see syntax-check in Ansible Lint Documentation.
The content parser tool handles syntax-check rule violations in the following manner:
- If syntax-check errors are found in the first execution of ansible-lint, the content parser tool generates a list of files that contain the rule violations.
- If one or more syntax-check errors are found in the first execution of ansible-lint, the content parser tool runs ansible-lint again, excluding the files with syntax-check errors. After the scan is completed, the content parser tool generates a list of files that contain rule violations. The list includes all files that caused syntax-check errors as well as other rule violations. The content parser tool excludes files with rule violations from all future scans, and the final training data set does not include data from the excluded files.
6.3.2.8. Resolving ansible-lint rule violations
If the content parser tool finds ansible-lint rule violations in your Ansible files, it is recommended that you fix the files with rule violations before uploading the training data set to IBM watsonx Code Assistant. If you do not resolve the rule violations, the content parser tool excludes these files from the generated output.
Procedure
Use one of the following methods to resolve ansible-lint rule violations:
Run the content parser tool with the --no-exclude option
If any rule violations, including syntax-check errors, are found, the execution is aborted with an error and no training data set is created.

Limit the set of rules that ansible-lint uses with the --profile option
It is recommended that you fix the files with rule violations. However, if you do not want to modify the source files, you can limit the set of rules that ansible-lint uses to scan your data. Specify the --profile option with a predefined profile (for example, min, basic, moderate, safety, shared, or production), or use ansible-lint configuration files. For more information, see the Ansible Lint documentation.

Run the content parser tool without ansible-lint preprocessing
You can run the content parser tool without ansible-lint preprocessing, in which case it generates a training data set without scanning for ansible-lint rule violations. To do so, execute the following command:

```
$ ansible-content-parser source output -S
```

Where:
- source: Specifies the source of the training data set.
- output: Specifies the output directory for the training data set.
- -S or --skip-ansible-lint: Skips ansible-lint preprocessing while generating the training data set.
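Instead of passing the --profile option on the command line, the same limit can be set in an ansible-lint configuration file. A minimal sketch of a .ansible-lint file in the project root, using the predefined min profile:

```yaml
# .ansible-lint -- limit ansible-lint to the rules in the "min" profile
profile: min
```

See the Ansible Lint documentation for the full set of configuration file options.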
6.3.2.9. Merging multiple training data sets into a single file
For every execution, the content parser tool creates a training data set JSONL file named ftdata.jsonl that you upload to IBM watsonx Code Assistant for creating a custom model. If the content parser tool runs multiple times, multiple JSONL files are created. IBM watsonx Code Assistant supports a single JSONL file upload only; therefore, if you have multiple JSONL files, you must merge them into a single, concatenated file. You can also merge the multiple JSONL files that are generated in multiple subdirectories within a parent directory into a single file.
Procedure
- Using the command prompt, go to the parent directory.
- Run the following command to create a single, concatenated file:

  ```
  $ find . -name ftdata.jsonl | xargs cat > concatenated.jsonl
  ```

- Optional: Rename the concatenated file for easy identification.

You can now upload the merged JSONL file to IBM watsonx Code Assistant and create a custom model.
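As a cross-platform alternative to the find command, the merge can also be sketched in Python; the layout of ftdata.jsonl files under the parent directory is an assumption for illustration:

```python
from pathlib import Path

def merge_jsonl(parent_dir, output_file):
    """Concatenate every ftdata.jsonl found under parent_dir into one file.

    Equivalent to: find . -name ftdata.jsonl | xargs cat > concatenated.jsonl
    Returns the number of lines written.
    """
    count = 0
    with open(output_file, "w", encoding="utf-8") as out:
        # rglob walks all subdirectories below the parent directory.
        for path in sorted(Path(parent_dir).rglob("ftdata.jsonl")):
            for line in path.read_text(encoding="utf-8").splitlines():
                if line.strip():  # skip blank lines so the output stays valid JSONL
                    out.write(line + "\n")
                    count += 1
    return count
```

Calling merge_jsonl(".", "concatenated.jsonl") from the parent directory produces a single file you can upload to IBM watsonx Code Assistant.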
6.3.3. Creating and deploying a custom model in IBM watsonx Code Assistant
After the content parser tool generates a custom model training data set, upload the JSONL file ftdata.jsonl to IBM watsonx Code Assistant and create a custom model for your organization.
IBM watsonx Code Assistant might take a few hours to create a custom model, depending on the size of your training data set. Monitor the IBM Tuning Studio for the status of the custom model creation.
For information about how to create and deploy a custom model in IBM watsonx Code Assistant, see the IBM watsonx Code Assistant documentation.
6.3.4. Configuring Red Hat Ansible Lightspeed to use custom models
After you create and deploy a custom model in IBM watsonx Code Assistant, you must configure Red Hat Ansible Lightspeed so that you can use the custom model for your organization.
You can specify one of the following configurations for using the custom model:
Enable access for all users in your organization
You can configure a custom model as the default model for your organization. All users in your organization can use the custom model.
Enable access for select Ansible users in your organization
Using the model-override setting in the Ansible VS Code extension, select Ansible users can tune their Ansible Lightspeed service to use a custom model instead of the default model. For example, if you are using Red Hat Ansible Lightspeed as both an organization administrator and an end user, you can test the custom model with select Ansible users before making it available to all users in your organization.
Procedure
Choose one of the following configurations for your custom model:
Configure the custom model for all Ansible users in your organization
- Log in to the Ansible Lightspeed with IBM watsonx Code Assistant Hybrid Cloud Console as an organization administrator.
- Specify the model ID of the custom model:
- Click Model Settings.
- Under Model ID, click Add Model ID. A screen to enter the Model ID is displayed.
- Enter the Model ID of the custom model.
- Optional: Click Test model ID to validate the model ID.
- Click Save.
Configure the custom model for select Ansible users in your organization
- Log in to the VS Code application using your Red Hat account.
- From the Activity bar, click the Extensions icon.
- From the Installed Extensions list, select Ansible.
- From the Ansible extension page, click the Settings icon and select Extension Settings.
- From the list of settings, select Ansible Lightspeed.
- In the Model ID Override field, enter the model ID of the custom model.

Your settings are automatically saved in VS Code, and you can now use the custom model.
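For reference, the Model ID Override setting corresponds to an entry in the VS Code settings.json file. The following is a hedged sketch; the setting key shown is an assumption and may differ between extension versions, so confirm it in the extension's settings UI:

```jsonc
{
  // Assumed key name; verify against the Ansible extension's Extension Settings.
  "ansible.lightspeed.modelIdOverride": "my-custom-model-id"
}
```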