Chapter 10. Configuring and Setting up Remote Jobs
Use this section as a guide to configuring Satellite to execute jobs on remote hosts.
Any command that you want to apply to a remote host must be defined as a job template. After you have defined a job template, you can execute it multiple times.
10.1. About Running Jobs on Hosts
You can run jobs on hosts remotely from Capsules using shell scripts or Ansible tasks and playbooks. This is referred to as remote execution.
For custom Ansible roles that you create, or roles that you download, you must install the package containing the roles on the Capsule base operating system. Before you can use Ansible roles, you must import the roles into Satellite from the Capsule where they are installed.
Communication occurs through Capsule Server, which means that Satellite Server does not require direct access to the target host and can scale to manage many hosts. Remote execution uses the SSH service, which must be enabled and running on the target host. Ensure that the remote execution Capsule has access to port 22 on the target hosts.
Satellite uses ERB syntax job templates. For more information, see Template Writing Reference in the Managing Hosts guide.
Several job templates for shell scripts and Ansible are included by default. For more information, see Setting up Job Templates.
Any Capsule Server base operating system is a client of Satellite Server’s internal Capsule, and therefore this section applies to any type of host connected to Satellite Server, including Capsules.
You can run jobs on multiple hosts at once, and you can use variables in your commands for more granular control over the jobs you run. You can use host facts and parameters to populate the variable values.
In addition, you can specify custom values for templates when you run the command.
For more information, see Executing a Remote Job.
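For example, a shell job template can reference host facts and parameters directly in the command it renders. The following is a minimal sketch, not a template shipped with Satellite; it assumes a host parameter named ntp_server has been defined on the host:

echo "Configuring NTP on <%= @host.name %>"
ntpdate <%= host_param('ntp_server') %>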
10.2. Remote Execution Workflow
When you run a remote job on hosts, Satellite performs the following actions for every host to find a remote execution Capsule to use.
Satellite searches only for Capsules that have the remote execution feature enabled.
- Satellite finds the host’s interfaces that have the Remote execution check box selected.
- Satellite finds the subnets of these interfaces.
- Satellite finds remote execution Capsules assigned to these subnets.
- From this set of Capsules, Satellite selects the Capsule that has the least number of running jobs. By doing this, Satellite ensures that the job load is balanced between remote execution Capsules.
If Satellite does not find a remote execution Capsule at this stage, and if the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite selects the most lightly loaded Capsule from the following types of Capsules that are assigned to the host:
- DHCP, DNS and TFTP Capsules assigned to the host’s subnets
- DNS Capsule assigned to the host’s domain
- Realm Capsule assigned to the host’s realm
- Puppet Master Capsule
- Puppet CA Capsule
- OpenSCAP Capsule
If Satellite does not find a remote execution Capsule at this stage, and if the Enable Global Capsule setting is enabled, Satellite selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host’s organization and location to execute a remote job.
10.3. Permissions for Remote Execution
You can control which users can run which jobs within your infrastructure, including which hosts they can target. The remote execution feature provides two built-in roles:
- Remote Execution Manager: This role allows access to all remote execution features and functionality.
- Remote Execution User: This role only allows running jobs; it does not provide permission to modify job templates.
You can clone the Remote Execution User role and customize its filter for increased granularity. If you adjust the filter with the view_job_templates permission, the user can only see and trigger jobs based on matching job templates. You can use the view_hosts and view_smart_proxies permissions to limit which hosts or Capsules are visible to the role.
The execute_template_invocation permission is a special permission that is checked immediately before execution of a job begins. This permission defines which job template you can run on a particular host. This allows for even more granularity when specifying permissions. For more information on working with roles and permissions, see Creating and Managing Roles in Administering Red Hat Satellite.
The following example shows filters for the execute_template_invocation permission:

name = Reboot and host.name = staging.example.com
name = Reboot and host.name ~ *.staging.example.com
name = "Restart service" and host_group.name = webservers
The first line in this example permits the user to apply the Reboot template to one selected host. The second line defines a pool of hosts with names ending with .staging.example.com. The third line binds the template with a host group.
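If you manage roles from the CLI, a filter similar to the first example can be added to a cloned role with hammer. This is a sketch only; the role name is an example and the exact option names can vary between hammer versions, so verify them with hammer filter create --help:

# hammer filter create \ --role "Remote Execution User - Staging" \ --permissions execute_template_invocation \ --search 'name = Reboot and host.name = staging.example.com'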
Permissions assigned to users can change over time. If a user has already scheduled some jobs to run in the future, and the permissions have changed, this can result in execution failure because the permissions are checked immediately before job execution.
10.4. Creating a Job Template
Use this procedure to create a job template. To use the CLI instead of the web UI, see the CLI procedure.
Procedure
- Navigate to Hosts > Job templates.
- Click New Job Template.
- Click the Template tab, and in the Name field, enter a unique name for your job template.
- Select Default to make the template available for all organizations and locations.
- Create the template directly in the template editor or upload it from a text file by clicking Import.
- Optional: In the Audit Comment field, add information about the change.
- Click the Job tab, and in the Job category field, enter your own category or select from the default categories listed in Default Job Template Categories.
- Optional: In the Description Format field, enter a description template. For example, Install package %{package_name}. You can also use %{template_name} and %{job_category} in your template.
- From the Provider Type list, select SSH for shell scripts or Ansible for Ansible tasks or playbooks.
- Optional: In the Timeout to kill field, enter a timeout value to terminate the job if it does not complete.
- Optional: Click Add Input to define an input parameter. Parameters are requested when executing the job and do not have to be defined in the template. For examples, see the Help tab.
- Optional: Click Foreign input set to include other templates in this job.
- Optional: In the Effective user area, configure a user if the command cannot use the default remote_execution_effective_user setting.
- Optional: If this template is a snippet to be included in other templates, click the Type tab and select Snippet.
- Click the Location tab and add the locations where you want to use the template.
- Click the Organizations tab and add the organizations where you want to use the template.
- Click Submit to save your changes.
You can extend and customize job templates by including other templates in the template syntax. For more information, see the appendices in the Managing Hosts guide.
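For example, a minimal shell (SSH provider) job template body that uses an input parameter might look like the following sketch; package_name is a hypothetical input that you would define with Add Input on the Job tab:

yum -y install <%= input('package_name') %>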
CLI procedure
To create a job template using a template-definition file, enter the following command:
# hammer job-template create \ --file "path_to_template_file" \ --name "template_name" \ --provider-type SSH \ --job-category "category_name"
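For example, assuming the template body is saved in the hypothetical file /root/install_package.erb, the command might look like this:

# hammer job-template create \ --file "/root/install_package.erb" \ --name "Install Package - Example" \ --provider-type SSH \ --job-category "Packages"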
10.5. Configuring the Fallback to Any Capsule Remote Execution Setting in Satellite
You can enable the Fallback to Any Capsule setting to configure Satellite to search for remote execution Capsules from the list of Capsules that are assigned to hosts. This can be useful if you need to run remote jobs on hosts that have no subnets configured or if the hosts' subnets are assigned to Capsules that do not have the remote execution feature enabled.
If the Fallback to Any Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded Capsule from the set of all Capsules assigned to the host, such as the following:
- DHCP, DNS and TFTP Capsules assigned to the host’s subnets
- DNS Capsule assigned to the host’s domain
- Realm Capsule assigned to the host’s realm
- Puppet Master Capsule
- Puppet CA Capsule
- OpenSCAP Capsule
Procedure
- In the Satellite web UI, navigate to Administer > Settings.
- Click RemoteExecution.
- Configure the Fallback to Any Capsule setting.
CLI procedure
Enter the hammer settings set command on Satellite to configure the Fallback to Any Capsule setting. For example, to set the value to true, enter the following command:

# hammer settings set --name=remote_execution_fallback_proxy --value=true
10.6. Configuring the Global Capsule Remote Execution Setting in Satellite
By default, Satellite searches for remote execution Capsules in hosts' organizations and locations regardless of whether Capsules are assigned to hosts' subnets or not. You can disable the Enable Global Capsule setting if you want to limit the search to the Capsules that are assigned to hosts' subnets.
If the Enable Global Capsule setting is enabled, Satellite adds another set of Capsules to select the remote execution Capsule from. Satellite also selects the most lightly loaded remote execution Capsule from the set of all Capsules in the host’s organization and location to execute a remote job.
Procedure
- In the Satellite web UI, navigate to Administer > Settings.
- Click RemoteExecution.
- Configure the Enable Global Capsule setting.
CLI procedure
Enter the hammer settings set command on Satellite to configure the Enable Global Capsule setting. For example, to set the value to true, enter the following command:

# hammer settings set --name=remote_execution_global_proxy --value=true
10.7. Configuring Satellite to Use an Alternative Directory to Execute Remote Jobs on Hosts
By default, Satellite uses the /var/tmp directory on the client system to execute remote execution jobs. If the client system has noexec set for the /var/ volume or file system, you must configure Satellite to use an alternative directory; otherwise, the remote execution job fails because the script cannot be run.
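To check whether noexec is set on the file system that backs /var/tmp, you can inspect the mount options, for example:

# findmnt --target /var/tmp --output OPTIONS

If the output includes noexec, use the following procedure to configure an alternative directory.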
Procedure
Create a new directory, for example /remote_working_dir:

# mkdir /remote_working_dir

Copy the SELinux context from the default /var directory:

# chcon --reference=/var /remote_working_dir
Configure the system:
# satellite-installer --foreman-proxy-plugin-remote-execution-ssh-remote-working-dir /remote_working_dir
10.8. Distributing SSH Keys for Remote Execution
To use SSH keys for authenticating remote execution connections, you must distribute the public SSH key from Capsule to its attached hosts that you want to manage. Ensure that the SSH service is enabled and running on the hosts. Configure any network or host-based firewalls to enable access to port 22.
Use one of the following methods to distribute the public SSH key from Capsule to target hosts:
- Section 10.9, “Distributing SSH Keys for Remote Execution Manually”.
- Section 10.10, “Using the Satellite API to Obtain SSH Keys for Remote Execution”.
- Section 10.11, “Configuring a Kickstart Template to Distribute SSH Keys during Provisioning”.
- For new Satellite hosts, you can deploy SSH keys to Satellite hosts during registration using the global registration template. For more information, see Registering a Host to Red Hat Satellite Using the Global Registration Template.
Satellite distributes SSH keys for the remote execution feature to the hosts provisioned from Satellite by default.
If the hosts are running on Amazon Web Services, enable password authentication. For more information, see https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance.
10.9. Distributing SSH Keys for Remote Execution Manually
To distribute SSH keys manually, complete the following steps:
Procedure
Enter the following command on Capsule. Repeat for each target host you want to manage:
# ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@target.example.com
To confirm that the key was successfully copied to the target host, enter the following command on Capsule:
# ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy root@target.example.com
10.10. Using the Satellite API to Obtain SSH Keys for Remote Execution
To use the Satellite API to download the public key from Capsule, complete this procedure on each target host.
Procedure
On the target host, create the ~/.ssh directory to store the SSH key:

# mkdir ~/.ssh

Download the SSH key from Capsule:

# curl https://capsule.example.com:9090/ssh/pubkey >> ~/.ssh/authorized_keys

Configure permissions for the ~/.ssh directory:

# chmod 700 ~/.ssh

Configure permissions for the authorized_keys file:

# chmod 600 ~/.ssh/authorized_keys
10.11. Configuring a Kickstart Template to Distribute SSH Keys during Provisioning
You can add a remote_execution_ssh_keys snippet to your custom kickstart template to deploy SSH keys to hosts during provisioning. Kickstart templates that Satellite ships include this snippet by default. Therefore, Satellite copies the SSH key for remote execution to the systems during provisioning.
Procedure
To include the public key in newly-provisioned hosts, add the following snippet to the Kickstart template that you use:
<%= snippet 'remote_execution_ssh_keys' %>
10.12. Configuring a keytab for Kerberos Ticket Granting Tickets
Use this procedure to configure Satellite to use a keytab to obtain Kerberos ticket granting tickets. If you do not set up a keytab, you must manually retrieve tickets.
Procedure
Find the ID of the foreman-proxy user:

# id -u foreman-proxy

Modify the umask value so that new files have the permissions 600:

# umask 077

Create the directory for the keytab:

# mkdir -p "/var/kerberos/krb5/user/USER_ID"

Create a keytab or copy an existing keytab to the directory:

# cp your_client.keytab /var/kerberos/krb5/user/USER_ID/client.keytab

Change the directory owner to the foreman-proxy user:

# chown -R foreman-proxy:foreman-proxy "/var/kerberos/krb5/user/USER_ID"

Ensure that the keytab file is read-only:

# chmod -wx "/var/kerberos/krb5/user/USER_ID/client.keytab"

Restore the SELinux context:

# restorecon -RvF /var/kerberos/krb5
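Optional: To confirm that the keytab is readable by the foreman-proxy user and contains the expected principal, you can list its entries, for example:

# sudo -u foreman-proxy klist -k /var/kerberos/krb5/user/USER_ID/client.keytab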
10.13. Configuring Kerberos Authentication for Remote Execution
You can use Kerberos authentication to establish an SSH connection for remote execution on Satellite hosts.
Prerequisites
- Enroll Satellite Server on the Kerberos server
- Enroll the Satellite target host on the Kerberos server
- Configure and initialize a Kerberos user account for remote execution
- Ensure that the foreman-proxy user on Satellite has a valid Kerberos ticket granting ticket
Procedure
To install and enable Kerberos authentication for remote execution, enter the following command:
# satellite-installer --scenario satellite \ --foreman-proxy-plugin-remote-execution-ssh-ssh-kerberos-auth true
- To edit the default user for remote execution, in the Satellite web UI, navigate to Administer > Settings and click the RemoteExecution tab. In the SSH User row, edit the second column and add the user name for the Kerberos account.
- Navigate to remote_execution_effective_user and edit the second column to add the user name for the Kerberos account.
To confirm that Kerberos authentication is ready to use, run a remote job on the host.
10.14. Setting up Job Templates
Satellite provides default job templates that you can use for executing jobs. To view the list of job templates, navigate to Hosts > Job templates. If you want to use a template without making changes, proceed to Executing a Remote Job.
You can use default templates as a base for developing your own. Default job templates are locked for editing. Clone the template and edit the clone.
Procedure
- To clone a template, in the Actions column, select Clone.
- Enter a unique name for the clone and click Submit to save the changes.
Job templates use the Embedded Ruby (ERB) syntax. For more information about writing templates, see the Template Writing Reference in the Managing Hosts guide.
Ansible Considerations
To create an Ansible job template, use the following procedure but use YAML syntax instead of ERB syntax. Begin the template with ---. You can embed an Ansible playbook YAML file into the job template body. You can also add ERB syntax to customize your YAML Ansible template. You can also import Ansible playbooks in Satellite. For more information, see Synchronizing Repository Templates in the Managing Hosts guide.
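For example, a minimal Ansible job template body might look like the following sketch, where package_name is a hypothetical input defined on the Job tab and rendered through ERB:

---
- hosts: all
  tasks:
    - name: Install a package
      package:
        name: "<%= input('package_name') %>"
        state: present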
Parameter Variables
At run time, job templates can accept parameter variables that you define for a host. Note that only the parameters visible on the Parameters tab at the host’s edit page can be used as input parameters for job templates. If you do not want your Ansible job template to accept parameter variables at run time, in the Satellite web UI, navigate to Administer > Settings and click the Ansible tab. In the Top level Ansible variables row, change the Value parameter to No.
10.15. Executing a Remote Job
You can execute a job that is based on a job template against one or more hosts.
To use the CLI instead of the web UI, see the CLI procedure.
Procedure
- Navigate to Hosts > All Hosts and select the target hosts on which you want to execute a remote job. You can use the search field to filter the host list.
- From the Select Action list, select Schedule Remote Job.
- On the Job invocation page, define the main job settings:
- Select the Job category and the Job template you want to use.
- Optional: Select a stored search string in the Bookmark list to specify the target hosts.
- Optional: Further limit the targeted hosts by entering a Search query. The Resolves to line displays the number of hosts affected by your query. Use the refresh button to recalculate the number after changing the query. The preview icon lists the targeted hosts.
- The remaining settings depend on the selected job template. See Creating a Job Template for information on adding custom parameters to a template.
Optional: To configure advanced settings for the job, click Display advanced fields. Some of the advanced settings depend on the job template; the following settings are general:
- Effective user defines the user who executes the job. By default, this is the SSH user.
- Concurrency level defines the maximum number of jobs executed at once. This can prevent overloading system resources when you execute the job on a large number of hosts.
- Timeout to kill defines the time interval in seconds after which the job is killed if it has not finished. A task that could not be started during the defined interval, for example because the previous task took too long to finish, is canceled.
- Type of query defines when the search query is evaluated. This helps to keep the query up to date for scheduled tasks.
- Execution ordering determines the order in which the job is executed on hosts: alphabetical or randomized.
The Concurrency level and Timeout to kill settings enable you to tailor job execution to fit your infrastructure hardware and needs.
- To run the job immediately, ensure that Schedule is set to Execute now. You can also define a one-time future job, or set up a recurring job. For recurring tasks, you can define start and end dates, number and frequency of runs. You can also use cron syntax to define repetition. For more information about cron, see the Automating System Tasks section of the Red Hat Enterprise Linux 7 System Administrator’s Guide.
- Click Submit. This displays the Job Overview page, and when the job completes, also displays the status of the job.
CLI procedure
- Enter the following command on Satellite:
# hammer settings set --name=remote_execution_global_proxy --value=false
To execute a remote job with custom parameters, complete the following steps:
Find the ID of the job template you want to use:
# hammer job-template list
Show the template details to see parameters required by your template:
# hammer job-template info --id template_ID
Execute a remote job with custom parameters:
# hammer job-invocation create \ --job-template "template_name" \ --inputs key1="value",key2="value",... \ --search-query "query"
Replace query with the filter expression that defines hosts, for example "name ~ rex01".

For more information about executing remote commands with hammer, enter hammer job-template --help and hammer job-invocation --help.
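For example, to restart a service on all hosts whose name contains web, using the default shell command template (the template name shown is the Satellite default and might differ in your installation):

# hammer job-invocation create \ --job-template "Run Command - SSH Default" \ --inputs command="systemctl restart httpd" \ --search-query "name ~ web"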
10.16. Monitoring Jobs
You can monitor the progress of the job while it is running. This can help in any troubleshooting that may be required.
Ansible jobs run in batches of 100 hosts, so you cannot cancel a job running on a specific host. A job completes only after the Ansible playbook runs on all hosts in the batch.
Procedure
- Navigate to the Job page. This page is automatically displayed if you triggered the job with the Execute now setting. To monitor scheduled jobs, navigate to Monitor > Jobs and select the job run you wish to inspect.
- On the Job page, click the Hosts tab. This displays the list of hosts on which the job is running.
- In the Host column, click the name of the host that you want to inspect. This displays the Detail of Commands page where you can monitor the job execution in real time.
- Click Back to Job at any time to return to the Job Details page.
CLI procedure
To monitor the progress of a job while it is running, complete the following steps:
Find the ID of a job:
# hammer job-invocation list
Monitor the job output:
# hammer job-invocation output \ --id job_ID \ --host host_name
Optional: to cancel a job, enter the following command:
# hammer job-invocation cancel \ --id job_ID
10.17. Synchronizing Template Repositories
In Satellite, you can synchronize repositories of job templates, provisioning templates, report templates, and partition table templates between Satellite Server and a version control system or local directory. In this section, a Git repository is used for demonstration purposes.
This section details the workflow for:
- installing and configuring the TemplateSync plug-in
- performing export and import tasks
10.17.1. Enabling the TemplateSync plug-in
To enable the plug-in on your Satellite Server, enter the following command:
# satellite-installer --enable-foreman-plugin-templates
- To verify that the plug-in is installed correctly, ensure Administer > Settings includes the TemplateSync menu.
10.17.2. Configuring the TemplateSync plug-in
In the Satellite web UI, navigate to Administer > Settings > TemplateSync to configure the plug-in. The following table explains the behavior of the attributes. Note that some attributes are used only for importing or exporting tasks.
| Parameter | API parameter name | Meaning on importing | Meaning on exporting |
|---|---|---|---|
| Associate | Accepted values: | Associates templates with OS, Organization, and Location based on metadata. | N/A |
| Branch | | Specifies the default branch in the Git repository to read from. | Specifies the default branch in the Git repository to write to. |
| Dirname | | Specifies the subdirectory under the repository to read from. | Specifies the subdirectory under the repository to write to. |
| Filter | | Imports only templates with names that match this regular expression. | Exports only templates with names that match this regular expression. |
| Force import | | Imported templates overwrite locked templates with the same name. | N/A |
| Lock templates | | Do not overwrite existing templates when you import a new template with the same name, unless Force import is enabled. | N/A |
| Metadata export mode | Accepted values: | N/A | Defines how metadata is handled when exporting. |
| Negate | Accepted values: | Imports templates ignoring the filter attribute. | Exports templates ignoring the filter attribute. |
| Prefix | | Adds the specified string to the beginning of the template name if the name does not start with the prefix already. | N/A |
| Repo | | Defines the path to the repository to synchronize from. | Defines the path to a repository to export to. |
| Verbosity | Accepted values: | Enables writing verbose messages to the logs for this action. | N/A |
10.17.3. Importing and Exporting Templates
You can import and export templates using the Satellite web UI, Hammer CLI, or Satellite API. Satellite API calls use the role-based access control system, which enables the tasks to be executed as any user. You can synchronize templates with a version control system, such as Git, or a local directory.
10.17.3.1. Importing Templates
You can import templates from a repository of your choice. You can use different protocols to point to your repository, for example /tmp/dir, git://example.com, https://example.com, and ssh://example.com.
Prerequisites
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%#
kind: provision
name: My Kickstart File
oses:
- RedHat 7
- RedHat 6
locations:
- First Location
- Second Location
organizations:
- Default Organization
- Extra Organization
%>
Procedure
- In the Satellite web UI, navigate to Hosts > Sync Templates.
- Click Import.
- Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to import. For more information about each field, see Section 10.17.2, “Configuring the TemplateSync plug-in”.
- Click Submit.
The Satellite web UI displays the status of the import. The status is not persistent; if you leave the status page, you cannot return to it.
CLI procedure
To import a template from a repository, enter the following command:
$ hammer import-templates \ --prefix '[Custom Index] ' \ --filter '.*Template Name$' \ --repo https://github.com/examplerepo/exampledirectory \ --branch my_branch \ --organization 'Default Organization'
For better indexing and management of your templates, use --prefix to set a category for your templates. To select certain templates from a large repository, use --filter to define the title of the templates that you want to import. For example, --filter '.*Ansible Default$' imports various Ansible Default templates.
10.17.3.2. Exporting Templates
You can export templates to a version control server, such as a Git repository.
Procedure
- In the Satellite web UI, navigate to Hosts > Sync Templates.
- Click Export.
- Each field is populated with values configured in Administer > Settings > TemplateSync. Change the values as required for the templates you want to export. For more information about each field, see Section 10.17.2, “Configuring the TemplateSync plug-in”.
- Click Submit.
The Satellite web UI displays the status of the export. The status is not persistent; if you leave the status page, you cannot return to it.
CLI procedure
Clone a local copy of your Git repository:
$ git clone https://github.com/theforeman/community-templates /custom/templates
Change the owner of your local directory to the foreman user, and change the SELinux context with the following commands:

# chown -R foreman:foreman /custom/templates
# chcon -R -t httpd_sys_rw_content_t /custom/templates
To export the templates to your local repository, enter the following command:
$ hammer export-templates --organization 'Default Organization' --repo /custom/templates
When exporting templates, avoid temporary directories like /tmp or /var/tmp because the backend service runs with systemd private temporary directories.
10.17.3.3. Synchronizing Templates Using the Satellite API
Prerequisites
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%#
kind: provision
name: My Kickstart File
oses:
- RedHat 7
- RedHat 6
locations:
- First Location
- Second Location
organizations:
- Default Organization
- Extra Organization
%>
Procedure
- Configure a version control system that uses SSH authorization, for example gitosis, gitolite, or git daemon.
- Configure the TemplateSync plug-in settings on the TemplateSync tab:
  - Change the Branch setting to match the target branch on the Git server.
  - Change the Repo setting to match the Git repository. For example, for the repository located at git@git.example.com/templates.git, set the Repo setting to ssh://git@git.example.com/templates.git.
- Accept the Git SSH host key as the foreman user:

# sudo -u foreman ssh git.example.com

You can see the Permission denied, please try again. message in the output. This is expected because the SSH connection cannot succeed yet.

- Create an SSH key pair if you do not already have one. Do not specify a passphrase.
# sudo -u foreman ssh-keygen
- Configure your version control server with the public key from your Satellite, which resides in /usr/share/foreman/.ssh/id_rsa.pub.
- Export templates from your Satellite Server to the version control repository specified in the TemplateSync menu:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/export \ -X POST

{"message":"Success"}
Import templates to Satellite Server after their content was changed:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/import \ -X POST

{"message":"Success"}
Note that templates provided by Satellite are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes or add the force parameter -d '{ "force": "true" }' to the import command.
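For example, a forced import that passes the force parameter in the request body might look like this, with the same login and URL assumptions as the commands above:

$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/import \ -X POST \ -d '{ "force": "true" }'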
10.17.3.4. Synchronizing Templates with a Local Directory Using the Satellite API
Synchronizing templates with a local directory is useful if you have configured a version control repository in the local directory. That way, you can edit templates and track the history of edits in the directory. You can also synchronize changes to Satellite Server after editing the templates.
Prerequisites
Each template must contain the location and organization that the template belongs to. This applies to all template types. Before you import a template, ensure that you add the following section to the template:
<%#
kind: provision
name: My Kickstart File
oses:
- RedHat 7
- RedHat 6
locations:
- First Location
- Second Location
organizations:
- Default Organization
- Extra Organization
%>
Procedure
Create the directory where templates are stored and apply appropriate permissions and SELinux context:
# mkdir -p /usr/share/templates_dir/
# chown foreman /usr/share/templates_dir/
# chcon -t httpd_sys_rw_content_t /usr/share/templates_dir/ -R
- Change the Repo setting on the TemplateSync tab to match the export directory /usr/share/templates_dir/.

Export templates from your Satellite Server to a local directory:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/export \ -X POST

{"message":"Success"}
Import templates to Satellite Server after their content was changed:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/import \ -X POST

{"message":"Success"}
Note that templates provided by Satellite are locked and you cannot import them by default. To override this behavior, change the Force import setting in the TemplateSync menu to yes or add the force parameter -d '{ "force": "true" }' to the import command.
You can override default API settings by specifying them in the request with the -d parameter. The following example exports templates to the git.example.com/templates repository:
$ curl -H "Accept:application/json,version=2" \ -H "Content-Type:application/json" \ -u login:password \ -k https://satellite.example.com/api/v2/templates/export \ -X POST \ -d "{\"repo\":\"git.example.com/templates\"}"
10.17.4. Advanced Git Configuration
You can perform additional Git configuration for the TemplateSync plug-in using the command line or by editing the .gitconfig file.
Accepting a self-signed Git certificate
If your Git server uses self-signed certificate authentication, validate the certificate with the git config http.sslCAPath command.
For example, the following command verifies a self-signed certificate stored in /cert/cert.pem:

# sudo -u foreman git config --global http.sslCAPath /cert/cert.pem
For a complete list of advanced options, see the git-config manual page.
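The command above writes the setting to the foreman user's global Git configuration. The equivalent entry in that user's .gitconfig file looks like the following sketch:

[http]
    sslCAPath = /cert/cert.pem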
10.17.5. Uninstalling the plug-in
To avoid errors after removing the foreman_templates plug-in, complete the following steps:
Disable the plug-in using the Satellite installer:
# satellite-installer --no-enable-foreman-plugin-templates
Clean the custom data of the plug-in. The command does not affect any templates that you created.
# foreman-rake templates:cleanup
Uninstall the plug-in:
# satellite-maintain packages remove foreman-plugin-templates