Uyuni Administration Guide
Administration Guide
April 17 2024
Table of Contents
Administration Guide Overview
1. Actions
1.1. Recurring Actions
1.2. Action Chains
1.3. Remote Commands
2. Authentication Methods
2.1. Authentication With Single Sign-On (SSO)
2.1.1. Prerequisites
2.1.2. Enable SSO
2.1.3. Example SSO Implementation
2.2. Authentication With PAM
3. Backup and Restore
3.1. Back up Uyuni
3.2. Administering the Database with smdba
3.3. Database Backup with smdba
3.3.1. Perform a Manual Database Backup
3.3.2. Scheduling Automatic Backups
3.4. Restore from Backup
3.5. Archive Log Settings
3.6. Retrieve an Overview of Occupied Database Space
3.7. Move the Database
3.8. Recover From a Crashed Root Partition
3.9. Database Connection Information
4. Content Staging
4.1. Enable Content Staging
4.2. Configure Content Staging
5. Channel Management
5.1. Channel Administration
5.2. Delete Channels
5.3. Custom Channels
5.3.1. Creating Custom Channels and Repositories
5.3.2. Custom Channel Synchronization
5.3.3. Add Packages and Patches to Custom Channels
5.3.4. Manage Custom Channels
6. Content Lifecycle Management
6.1. Create a Content Lifecycle Project
6.2. Filter Types
6.2.1. Filter rule Parameter
6.3. Filter Templates
6.3.1. Live Patching Based on a SUSE Product
6.3.2. Live Patching Based on a System
6.3.3. AppStream Modules with Defaults
6.4. Build a Content Lifecycle Project
6.5. Promote Environments
6.6. Assign Clients to Environments
6.7. Content Lifecycle Management Examples
6.7.1. Creating a Project for a Monthly Patch Cycle
6.7.2. Update an Existing Monthly Patch Cycle
6.7.3. Enhance a Project with Live Patching
6.7.4. Switch to a New Kernel Version for Live Patching
6.7.5. AppStream Filters
7. Disconnected Setup
7.1. Synchronize RMT
7.2. Synchronize SMT
7.3. Mandatory Channels
7.4. Synchronize a Disconnected Server
8. Disk Space Management
8.1. Monitored Directories
8.2. Thresholds
8.3. Shut Down Services
8.4. Disable Space Checking
9. Image Building and Management
9.1. Image Building Overview
9.2. Container Images
9.2.1. Requirements
9.2.2. Create a Build Host
9.2.3. Create an Activation Key for Containers
9.2.4. Create an Image Store
9.2.5. Create an Image Profile
9.2.6. Build an Image
9.2.7. Import an Image
9.2.8. Troubleshooting
9.3. OS Images
9.3.1. Requirements
9.3.2. Create a Build Host
9.3.3. Create an Activation Key for OS Images
9.3.4. Create an Image Store
9.3.5. Create an Image Profile
9.3.6. Build an Image
9.3.7. Troubleshooting
9.3.8. Limitations
9.4. List of Built Images
10. Infrastructure Maintenance Tasks
10.1. Server
10.1.1. Client Tools
10.2. Inter-Server Synchronization Slave Server
10.3. Monitoring Server
10.4. Proxy
11. Inter-Server Synchronization
11.1. Inter-Server Synchronization - Version 1
11.2. Inter-Server Synchronization - Version 2
11.2.1. Install ISS Packages
11.2.2. Content Synchronization
11.2.3. Database Connection Configuration
11.2.4. Known Limitations
12. Live Patching with SUSE Manager
12.1. Set up Channels for Live Patching
12.1.1. Use spacewalk-manage-channel-lifecycle for Live Patching
12.2. Live Patching on SLES 15
12.3. Live Patching on SLES 12
13. Maintenance Windows
13.1. Maintenance Schedule Types
13.2. Restricted and Unrestricted Actions
14. Using mgr-sync
15. Monitoring with Prometheus and Grafana
15.1. Prometheus and Grafana
15.1.1. Prometheus
15.1.2. Prometheus Exporters
15.1.3. Grafana
15.2. Set up the Monitoring Server
15.2.1. Install Prometheus
15.2.2. Install Grafana
15.3. Configure Uyuni Monitoring
15.3.1. Server Self Monitoring
15.3.2. Monitoring Managed Systems
15.3.3. Change Grafana Password
15.4. Network Boundaries
15.4.1. Reverse Proxy Setup
15.5. Security
15.5.1. Generating TLS Certificates
16. Organizations
16.1. Manage Organizations
16.1.1. Organization Users
16.1.2. Trusted Organizations
16.1.3. Configure Organizations
16.2. Manage States
16.2.1. Manage Configuration Channels
17. Patch Management
17.1. Retracted Patches
17.1.1. Channel Clones
17.1.2. Patch Sharing
18. Using PTFs in Uyuni
18.1. Understanding PTF Packages
18.2. Installing PTF Packages
18.3. After PTF Installation
18.4. Removing the Patched Version of a Package
18.5. Removing the Patched Version of a Package on the Client
19. Generate Reports
19.1. Using spacewalk-report
19.2. spacewalk-report and the Reporting Database
19.3. List of Available Reports
20. Security
20.1. Set up a Client to Master Validation Fingerprint
20.2. Signing Repository Metadata
20.3. Mirror Source Packages
20.4. System Security with OpenSCAP
20.4.1. About SCAP
20.4.2. Prepare Clients for an SCAP Scan
20.4.3. OpenSCAP Content Files
20.4.4. Find OpenSCAP Profiles
20.4.5. Perform an Audit Scan
20.4.6. Scan Results
20.4.7. Remediation
20.5. Auditing
20.5.1. CVE Audits
20.5.2. CVE Status
21. SSL Certificates
21.1. Self-Signed SSL Certificates
21.1.1. Re-Create Existing Server Certificates
21.1.2. Create a New CA and Server Certificates
21.2. Import SSL Certificates
21.2.1. Import Certificates for New Installations
21.2.2. Import Certificates for New Proxy Installations
21.2.3. Replace Certificates
21.3. HTTP Strict Transport Security
22. Subscription Matching
22.1. Pin Clients to Subscriptions
23. Task Schedules
23.1. Predefined Task Bunches
24. Tuning Changelogs
25. Users
25.1. Deactivate and Delete Accounts
25.2. Administrator Roles
25.3. User Permissions and Systems
25.4. Users and Channel Permissions
25.5. User Default Language
25.5.1. User Default Interface Theme
26. Troubleshooting
26.1. Troubleshooting Autoinstallation
26.2. Troubleshooting Bootstrap Repository for End-of-Life Products
26.3. Troubleshooting Cloned Salt Clients
26.4. Troubleshooting Corrupt Repositories
26.5. Troubleshooting Custom Channel with Conflicting Packages
26.6. Troubleshooting Disabling the FQDNS Grain
26.7. Troubleshooting Disk Space
26.8. Troubleshooting Firewalls
26.9. Troubleshooting High Sync Times Between Uyuni Server and Proxy over WAN Connections
26.10. Troubleshooting Inactive Clients
26.11. Troubleshooting Inter-Server Synchronization
26.12. Troubleshooting Local Issuer Certificates
26.13. Troubleshooting Login Timeouts
26.14. Troubleshooting Mail Configuration
26.15. Troubleshooting Mounting /tmp with noexec
26.16. Troubleshooting Mounting /var/tmp with noexec
26.17. Troubleshooting Not Enough Disk Space
26.18. Troubleshooting Notifications
26.19. Troubleshooting OSAD and jabberd
26.20. Troubleshooting Package Inconsistencies
26.21. Troubleshooting Repository Via Proxy Issues
26.22. Troubleshooting Passing Grains to a Start Event
26.23. Troubleshooting Proxy Connections and FQDN
26.24. Troubleshooting Registering Cloned Clients
26.25. Troubleshooting Registering Deleted Clients
26.26. Troubleshooting Registration from Web UI Fails and Does Not Show Any Errors
26.27. Troubleshooting Red Hat CDN Channel and Multiple Certificates
26.28. Troubleshooting Renaming Uyuni Server
26.29. Troubleshooting Retrying to Set up the Target System
26.30. Troubleshooting RPC Connection Timeouts
26.31. Troubleshooting Salt Clients Shown as Down and DNS Settings
26.32. Troubleshooting the Saltboot Formula
26.33. Troubleshooting Schema Upgrade Fails
26.34. Troubleshooting Synchronization
26.35. Troubleshooting Taskomatic
26.36. Troubleshooting Web UI Fails to Load
27. GNU Free Documentation License
Administration Guide Overview
Updated: 2024-04-17
This book provides guidance on performing administration tasks on the Uyuni Server.
Chapter 1. Actions
You can manage actions on your clients in a number of different ways:
• You can schedule automated recurring actions to apply the highstate or an arbitrary set of custom
states to clients on a specified schedule.
• You can apply recurring actions to individual clients, to all clients in a system group, or to an entire
organization.
• You can set actions to be performed in a particular order by creating action chains.
◦ Action chains can be created and edited ahead of time, and scheduled to run at a time that
suits you.
• You can also perform remote commands on one or more of your Salt clients.
◦ Remote commands allow you to issue commands to individual Salt clients, or to all clients
that match a search term.
◦ Weekly: Select the day of the week and the time of day to execute the action every
week at the specified time.
◦ Monthly: Select the day of the month and the time of day to execute the action every
month at the specified time.
◦ Custom Quartz format: For more detailed options, enter a custom quartz string. For
example, to run a recurring action at 0215 every Saturday of every month, enter:
0 15 2 ? * 7
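As a reference, the Quartz string above reads field by field like this (a sketch of the standard Quartz field order; see the Quartz scheduler documentation for the full syntax):

```
#  field:  seconds  minutes  hours  day-of-month  month  day-of-week
#  value:  0        15       2     ?             *      7
#
#  Fires at second 0, minute 15, hour 2 (02:15), on any day of the month,
#  every month, on day-of-week 7. Quartz numbers the days 1 = Sunday
#  through 7 = Saturday, so 7 matches every Saturday.
```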
7. OPTIONAL: Toggle the Test mode switch on to run the schedule in test mode.
8. For actions of type Custom state, select the states from the list of available states and click
[Save Changes] . This will only save the current state selection and not the schedule.
9. In the next pane, drag and drop the selected states to put them in the execution order and click
[Confirm] .
10. Click [Create Schedule] to save, and see the complete list of existing schedules.
Organization Administrators can set and edit recurring actions for all clients in the organization. Navigate
to Home › My Organization › Recurring Actions to see all recurring actions that apply to the entire
organization.
Uyuni Administrators can set and edit recurring actions for all clients in all organizations. Navigate to
Admin › Organizations, select the organization to manage, and navigate to the States › Recurring
Actions tab.
By default, most clients execute an action as soon as the command is issued. In some cases, actions take a
long time, which could mean that actions issued afterwards fail. For example, if you instruct a client to
reboot, then issue a second command, the second action could fail because the reboot is still occurring. To
ensure that actions occur in the correct order, use action chains.
You can use action chains on all clients. Action chains can include any number of these actions, in any
order:
• Images › Build
If one action in an action chain fails, the action chain stops, and no further
actions are executed.
You can see scheduled actions from action chains by navigating to Schedule › Pending Actions.
This feature is automatically enabled on Salt clients, and you do not need to perform any further
configuration. You can use this procedure to enable it manually, instead.
Before you begin, ensure your client is subscribed to the appropriate tools child channel for its installed
operating system. For more information about subscribing to software channels, see Client-configuration
› Channels.
To ensure that remote commands work accurately, do not mount /tmp with
the noexec option. For more information, see Administration ›
Troubleshooting.
• All commands run from the Remote Commands page are executed as
root on clients. Wildcards can be used to run commands across any
number of systems. Always take extra care to check your commands
before issuing them.
2. In the first field, before the @ symbol, type the command you want to issue.
3. In the second field, after the @ symbol, type the client you want to issue the command on. You can
type the minion-id of an individual client, or you can use wildcards to target a range of clients.
4. Click [Find targets] to check which clients you have targeted, and confirm that you have
used the correct details.
5. Click [Run command] to issue the command to the target clients.
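Remote commands issued from the Web UI are delivered through Salt. As a hedged sketch, the equivalent check from the server's command line looks like this (the web* target pattern and the uptime command are purely illustrative):

```
# Preview which Salt clients a wildcard target matches before running anything:
salt 'web*' test.ping

# Run the command on the matched clients (it executes as root on each client):
salt 'web*' cmd.run 'uptime'
```

Previewing the target list with test.ping first mirrors the [Find targets] step in the Web UI.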
Chapter 2. Authentication Methods

2.1. Authentication With Single Sign-On (SSO)

Single sign-on is an authentication process that allows a user to access multiple applications with one set
of credentials. SAML is an XML-based standard for exchanging authentication and authorization data. A
SAML identity service provider (IdP) provides authentication and authorization services to service
providers (SP), such as Uyuni. Uyuni exposes three endpoints which must be enabled for single sign-on.
• Choosing and implementing an identity service provider (IdP).
• SAML support for other products (check with the respective product documentation).
If you change from the default authentication method to single sign-on, the new
SSO credentials apply only to the Web UI. Client tools such as mgr-sync or
spacecmd continue to work with the default authentication method only.
2.1.1. Prerequisites
Before you begin, you need to have configured an external identity service provider with these
parameters. Check your IdP documentation for instructions.
The mapping between the IdP user and the Uyuni user is specified in a
SAML:Attribute. The SAML:Attribute must be configured in the IdP and must
• Assertion consumer service (or ACS): an endpoint to accept SAML messages to establish a session
into the Service Provider. The endpoint for ACS in Uyuni is: https://server.example.com/rhn/
manager/sso/acs
• Single logout service (or SLS): an endpoint to initiate a logout request from the IdP. The endpoint
for SLS in Uyuni is: https://server.example.com/rhn/manager/sso/sls
• Metadata: an endpoint to retrieve Uyuni metadata for SAML. The endpoint for metadata in Uyuni
is: https://server.example.com/rhn/manager/sso/metadata
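After SSO is enabled, you can quickly confirm that the metadata endpoint responds. This is a sketch: server.example.com is the placeholder hostname from the endpoints above, and -k skips certificate verification, which is only appropriate for self-signed test setups:

```
curl -k https://server.example.com/rhn/manager/sso/metadata
```

A successful request returns an XML document describing the Uyuni SAML service provider.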
After the authentication with the IdP using the user orgadmin is successful, you are logged in to Uyuni
as the orgadmin user, provided that the orgadmin user exists in Uyuni.
java.sso = true

Then, prefix every onelogin.saml2 parameter with java.sso.. For example, the parameter

onelogin.saml2.sp.assertion_consumer_service.url = https://YOUR-PRODUCT-HOSTNAME-OR-IP/rhn/manager/sso/acs

becomes:

java.sso.onelogin.saml2.sp.assertion_consumer_service.url = https://YOUR-PRODUCT-HOSTNAME-OR-IP/rhn/manager/sso/acs

To find all the occurrences you need to change, search in the file for the placeholders YOUR-PRODUCT and YOUR-IDP-ENTITY. Every parameter comes with a brief explanation of what it is meant for.
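Because the prefixing rule is mechanical, it can be applied with a one-line sed. The sketch below works on a sample file rather than the real /etc/rhn/rhn.conf; adapt the path, and back up the file, before applying it for real:

```shell
# Write a sample configuration fragment, then add the java.sso. prefix to
# every bare onelogin.saml2.* parameter; already-prefixed lines and other
# keys are left untouched.
printf 'onelogin.saml2.sp.entityid = X\njava.sso = true\n' > /tmp/rhn.conf.sample
sed -i 's/^onelogin\.saml2\./java.sso.onelogin.saml2./' /tmp/rhn.conf.sample
cat /tmp/rhn.conf.sample
```

The first line of the sample gains the java.sso. prefix, while the java.sso = true line is unchanged.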
spacewalk-service restart
When you visit the Uyuni URL, you are redirected to the IdP for SSO where you are requested to
authenticate. Upon successful authentication, you are redirected to the Uyuni Web UI, logged in as the
authenticated user. If you encounter problems with logging in using SSO, check the Uyuni logs for more
information.
Start by installing the Keycloak IdP and setting up the Uyuni Server. Then you can add the endpoints as
Keycloak clients, and create users.
This example is provided for illustrative purposes only. SUSE does not provide support for third-party identity providers.
You can install Keycloak directly on your machine, or run it in a container. In this example, we run
Keycloak in a Podman container. For more information about installing Keycloak, see the Keycloak
documentation at https://www.keycloak.org/guides#getting-started.
3. Sign in the Keycloak Web UI as the admin user, and create an authentication realm using these
details:
◦ In the Name field, enter a name for the realm. For example, SUMA.
◦ In the Endpoints field, click the SAML 2.0 Identity Provider Metadata
link. This will lead you to a page where you will see the endpoints and certificate to copy
into the Uyuni configuration file.
When you have installed Keycloak and created the realm, you can prepare the Uyuni Server.
java.sso.onelogin.saml2.sp.entityid = https://<FQDN_SUMA>/rhn/manager/sso/metadata
java.sso.onelogin.saml2.sp.assertion_consumer_service.url = https://<FQDN_SUMA>/rhn/manager/sso/acs
java.sso.onelogin.saml2.sp.single_logout_service.url = https://<FQDN_SUMA>/rhn/manager/sso/sls
2. In the configuration file, replace <FQDN_IDP> with the fully qualified domain name of your
Keycloak server. Replace <REALM> with your authentication realm, for example SUMA:
java.sso.onelogin.saml2.idp.entityid = http://<FQDN_IDP>:8080/realms/<REALM>
java.sso.onelogin.saml2.idp.single_sign_on_service.url = http://<FQDN_IDP>:8080/realms/<REALM>/protocol/saml
java.sso.onelogin.saml2.idp.single_logout_service.url = http://<FQDN_IDP>:8080/realms/<REALM>/protocol/saml
3. In the IdP metadata, locate the public x509 certificate. The IdP metadata is available at:
http://<FQDN_IDP>:8080/realms/<REALM>/protocol/saml/descriptor
In the configuration file, specify the public x509 certificate of the IdP.
4. Enable SSO by setting this parameter in /etc/rhn/rhn.conf:
java.sso = true
# This is the configuration file for Single Sign-On (SSO) via SAMLv2 protocol
# To enable SSO, set java.sso = true in /etc/rhn/rhn.conf
#
# Mandatory changes: search this file for:
# - YOUR-PRODUCT
# - YOUR-IDP-ENTITY
#
# See product documentation and the comments inline in this file for more
# information about every parameter.
#
# If 'strict' is True, then the Java Toolkit will reject unsigned
# or unencrypted messages if it expects them signed or encrypted.
# It will also reject the messages if they do not strictly follow
# the SAML standard.
#
# WARNING: In production, this parameter MUST be set to "true".
# Otherwise your environment is not secure and will be exposed to attacks.
java.sso.onelogin.saml2.strict = true
# Enable debug mode (to print errors)
java.sso.onelogin.saml2.debug = false
# Specifies info about where and how the <AuthnResponse> message MUST be
# returned to the requester, in this case our SP.
# URL Location where the <Response> from the IdP will be returned
java.sso.onelogin.saml2.sp.assertion_consumer_service.url = https://sumaserver.example.org/rhn/manager/sso/acs
# Specifies info about where and how the <Logout Response> message MUST be
# returned to the requester, in this case our SP.
java.sso.onelogin.saml2.sp.single_logout_service.url = https://sumaserver.example.org/rhn/manager/sso/sls
# Organization
java.sso.onelogin.saml2.organization.name = SUSE Manager admin
java.sso.onelogin.saml2.organization.displayname = SUSE Manager admin
java.sso.onelogin.saml2.organization.url = https://sumaserver.example.org
java.sso.onelogin.saml2.organization.lang =
# Contacts
java.sso.onelogin.saml2.contacts.technical.given_name = SUSE Manager admin
java.sso.onelogin.saml2.contacts.technical.email_address = [email protected]
java.sso.onelogin.saml2.contacts.support.given_name = SUSE Manager admin
java.sso.onelogin.saml2.contacts.support.email_address = [email protected]
You can add the Uyuni endpoints to Keycloak. Keycloak refers to endpoints as clients.
◦ In the Client ID field, enter the endpoint specified in the server configuration file as
java.sso.onelogin.saml2.sp.entityid. For example,
https://<FQDN_SUMA>/rhn/manager/sso/metadata.
2. In the Settings tab, fine-tune the client using these details:
◦ Toggle the Sign assertions switch to On.
◦ In the Signature algorithm field, select RSA_SHA1.
◦ In the SAML Signature Key Name field, select Key ID.
3. In the Keys tab:
◦ Set Client signature required to Off.
4. In the Advanced tab, in the Fine Grain SAML Endpoint Configuration section,
add the two endpoints using these details:
◦ In both the Assertion Consumer Service fields, enter the endpoint specified in
the server configuration file as
java.sso.onelogin.saml2.sp.assertion_consumer_service.url.
For example, https://<FQDN_SUMA>/rhn/manager/sso/acs.
◦ In both the Logout Service fields, enter the endpoint specified in the server
configuration file as
java.sso.onelogin.saml2.sp.single_logout_service.url. For
example, https://<FQDN_SUMA>/rhn/manager/sso/sls.
When you have added the endpoints as clients, you can configure the client scope, and map the users
between Keycloak and Uyuni.
When you have completed the configuration, you can test that the installation is working as expected.
Restart the Uyuni Server to pick up your changes, and navigate to the Uyuni Web UI. If your installation
is working correctly, you are redirected to the Keycloak SSO page, where you can authenticate
successfully.
Create a PAM service file, for example /etc/pam.d/susemanager, with this content:
#%PAM-1.0
auth include common-auth
account include common-account
password include common-password
session include common-session
1. On the Uyuni Server, at the command prompt, as root, add the sss PAM module:
pam-config -a --sss
2. Enforce the use of the service file by adding this line to /etc/rhn/rhn.conf:
pam_auth_service = susemanager
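The file-related steps above can be sketched as follows. This is a sketch only: the demo root exists so the commands can run outside the server, and step 1 (pam-config -a --sss) must still be run as root on the real system.

```shell
# Sketch of the PAM setup. Writes into a demo root so it can be run
# safely; on a real server use ROOT= (empty) and run as root.
ROOT=${ROOT:-$(mktemp -d)}
mkdir -p "$ROOT/etc/pam.d" "$ROOT/etc/rhn"
# The PAM service file shown in the text:
cat > "$ROOT/etc/pam.d/susemanager" <<'EOF'
#%PAM-1.0
auth     include common-auth
account  include common-account
password include common-password
session  include common-session
EOF
# Step 2: enforce the use of the service file:
echo 'pam_auth_service = susemanager' >> "$ROOT/etc/rhn/rhn.conf"
grep -c include "$ROOT/etc/pam.d/susemanager"   # 4 PAM stacks configured
```

The service file name must match the pam_auth_service value in /etc/rhn/rhn.conf, as in the steps above.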
Changing the password in the Uyuni Web UI changes only the local password on
the Uyuni Server. If PAM is enabled for that user, the local password might not
be used at all. In the above example, for instance, the Kerberos password is not
changed. Use the password change mechanism of your network service to
change the password for these users.
To configure system-wide authentication you can use YaST. You need to install the
yast2-auth-client package.
For more information about configuring PAM, the SUSE Linux Enterprise Server Security Guide
contains a generic example that also works for other network-based authentication methods. It also
describes how to configure an Active Directory service. For more information, see
https://documentation.suse.com/sles/15-SP4/html/SLES-all/part-auth.html.
Regardless of the backup method you use, you must have available at least three
times the amount of space your current installation uses. Running out of space
can result in backups failing, so check this often.
If you want to back up only the required files and directories, use the following
list. To make this process simpler, and more comprehensive, we recommend
backing up the entire /etc and /root directories, not just the ones specified
here. Some files only exist if you are actually using the related SUSE Manager
feature.
• /etc/cobbler/
• /etc/dhcp.conf
• /etc/fstab and any ISO mount points you require.
If your UUID has changed, ensure you have updated the fstab entries accordingly.
• /etc/rhn/
• /etc/salt
• /etc/sudoers
• /etc/sysconfig/rhn/
• /root/.gnupg/
• /root/.ssh
This directory exists if you are using an SSH tunnel or SSH push. You also need to have saved a copy
of the id-susemanager key.
• /root/ssl-build/
• /srv/formula_metadata
• /srv/pillar
• /srv/salt
• /srv/susemanager
• /srv/tftpboot/
• /srv/www/cobbler
• /srv/www/htdocs/pub/
• /srv/www/os-images
• /var/cache/rhn
• /var/cache/salt
• /var/lib/cobbler/
• /var/lib/cobbler/templates/ (before version 4.0 it is /var/lib/rhn/kickstarts/)
• /var/lib/Kiwi
• /var/lib/rhn/
• /var/run/pgsql/
• /var/lib/salt/
• /var/run/salt/
• /var/spacewalk/
• Any directories containing custom data such as scripts, Kickstart or AutoYaST profiles, and
custom RPMs.
You also need to back up your database, which you can do with the smdba tool.
After restoring from a backup, rebuild the search indexes:
rhn-search cleanindex
The smdba tool works with local PostgreSQL databases only; it does not work with remotely accessed
databases, or with Oracle databases.
The smdba tool requires sudo access to execute system changes. Ensure you
have enabled sudo access for the admin user before you begin, by checking your sudoers configuration.
Check the status of the database with:
smdba db-status
Start and stop the database with:
smdba db-start
And:
smdba db-stop
This method of backing up is stable and generally creates consistent snapshots; however, it can take up a
lot of storage space. Ensure you have at least three times the current database size available for
backups. You can check your current database size by navigating to /var/lib/pgsql/ and running
df -h.
The smdba tool also manages your archives, keeping only the most recent backup and the current
archive of logs. Log files have a maximum size of 16 MB, so a new log file is created when a file
reaches this size. Every time you create a new backup, previous backups are purged to release disk
space. We recommend you use cron to schedule your smdba backups, to ensure that your storage is
managed effectively and you always have a backup ready in case of failure.
When smdba is run for the first time, or if you have changed the location of the
backup, it needs to restart your database before performing the archive. This
results in a small amount of downtime. Regular database backups do not require
any downtime.
As root:
3. Ensure you have the correct permissions set on the backup location.
4. To create a backup for the first time, run the smdba backup-hot command with the enable
option set. This creates the backup in the specified directory and, if necessary, restarts the database.
This command produces debug messages and finishes successfully with the output:
INFO: Finished
5. Check that the backup files exist in the /var/spacewalk/db-backup directory, to ensure
that your backup has been successful.
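Steps 3 to 5 can be sketched like this. The --enable and --backup-dir options should be verified against smdba help on your server, and the fallback directory exists only so the sketch runs outside the server:

```shell
# Sketch of the first-time hot backup, assuming the directory
# /var/spacewalk/db-backup used in the text (run as root).
BACKUP_DIR=/var/spacewalk/db-backup
mkdir -p "$BACKUP_DIR" 2>/dev/null || BACKUP_DIR=$(mktemp -d)  # demo fallback
chmod 700 "$BACKUP_DIR"     # step 3: restrict permissions on the location
if command -v smdba >/dev/null; then
  smdba backup-hot --enable=on --backup-dir="$BACKUP_DIR"      # step 4
  ls "$BACKUP_DIR"          # step 5: verify the backup files exist
else
  echo "smdba not installed; would run: smdba backup-hot --enable=on --backup-dir=$BACKUP_DIR"
fi
```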
Ensure you have at least three times the current database size of space available
for backups. You can check your current database size by navigating to
/var/lib/pgsql/ and running df -h.
2. Open /etc/cron.d/db-backup-mgr, or create it if it does not exist, and add the following
line to create the cron job:
3. Check the backup directory regularly to ensure the backups are working as expected.
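A minimal example of such a cron entry, assuming the /var/spacewalk/db-backup directory used earlier; the schedule is illustrative:

```text
# /etc/cron.d/db-backup-mgr
0 2 * * * root /usr/bin/smdba backup-hot --enable=on --backup-dir=/var/spacewalk/db-backup
```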
smdba db-stop
smdba db-start
4. Check if there are differences between the RPMs and the database.
spacewalk-data-fsck
PostgreSQL maintains a limited number of archive logs. Using the default configuration, approximately
64 files with a size of 16 MiB are stored.
• SLES12-SP2-Pool-x86_64
• SLES12-SP2-Updates-x86_64
• SLE-Manager-Tools12-Pool-x86_64-SP2
• SLE-Manager-Tools12-Updates-x86_64-SP2
smdba space-overview
outputs:
The smdba command is available for PostgreSQL. For a more detailed report, use the space-tables
subcommand. It lists each table and its size, for example:
smdba space-tables
outputs:
Table | Size
--------------------------------------+-----------
public.all_primary_keys | 0 bytes
public.all_tab_columns | 0 bytes
public.allserverkeywordsincereboot | 0 bytes
public.dblink_pkey_results | 0 bytes
public.dual | 8192 bytes
public.evr_t | 0 bytes
public.log | 32 kB
...
rcpostgresql stop
spacewalk-service stop
3. Copy the current working directory structure with cp using the -a, --archive option. For
example:
cp --archive /var/lib/pgsql/ /storage/postgres/
The contents and permissions of the directory must stay the
same, otherwise the Uyuni database may malfunction. You also should
ensure that there is enough available disk space.
mount /storage/postgres/pgsql
5. Make sure ownership is postgres:postgres and not root:root by changing to the new
directory and running these commands:
cd /storage/postgres/pgsql/
ls -l
Outputs:
total 8
drwxr-x--- 4 postgres postgres 47 Jun 2 14:35 ./
6. Add the new database mount location to your server's fstab by editing /etc/fstab.
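An fstab entry for the new location could look like this; the device, filesystem type, and options are placeholders to adapt to your system:

```text
/dev/disk/by-uuid/<UUID>  /storage/postgres/pgsql  xfs  defaults  0 0
```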
7. Start the database with:
rcpostgresql start
spacewalk-service start
After a new installation of a system, most users and groups get different IDs.
Most backup systems store names instead of IDs, and restore files with the
correct ownership and permissions. However, if you mount existing partitions,
you must align the ownership to the new system.
rcpostgresql stop
spacewalk-service stop
rcpostgresql start
spacewalk-service start
Uyuni should now operate normally without loss of your database or synchronized channels.
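The restore sequence can be sketched as follows. The smdba backup-restore start subcommand is assumed to be the restore counterpart of backup-hot; verify it with smdba help before relying on it:

```shell
# Sketch: restore the most recent smdba backup (run as root on the server).
# The run() guard only exists so the sketch degrades gracefully when the
# Uyuni tools are not installed.
run() { command -v "$1" >/dev/null && "$@" || echo "not available: $*"; }
run spacewalk-service stop       # stop Uyuni services first
run smdba backup-restore start   # restore the latest backup
run spacewalk-service start      # bring the services back up
```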
db_backend = postgresql
db_user = susemanager
db_password = susemanager
db_name = susemanager
db_host = localhost
db_port = 5432
db_ssl_enabled =
You can also enable staging at the command prompt by editing /etc/sysconfig/rhn/up2date,
and adding or editing these lines:
stagingContent=1
stagingContentWindow=24
The stagingContentWindow parameter is a time value expressed in hours and determines when
downloading starts. It is the number of hours before the scheduled installation or update time. In this
example, content is downloaded 24 hours before the installation time. The start time for download
depends on the selected contact method for a system. The assigned contact method sets the time for when
the next mgr_check is executed.
Next time an action is scheduled, packages are automatically downloaded, but not installed. At the
scheduled time, the staged packages are installed.
Default values:
• salt_content_staging_advance: 8 hours
• salt_content_staging_window: 8 hours
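To override these defaults, the parameters can be set in /etc/rhn/rhn.conf; a minimal sketch showing the default values (in hours):

```text
# /etc/rhn/rhn.conf
salt_content_staging_advance = 8
salt_content_staging_window = 8
```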
In Uyuni, channels are grouped into base and child channels, with base channels grouped by operating
system type, version, and architecture, and child channels being compatible with their related base
channel. When a client has been assigned to a base channel, it is only possible for that system to install the
related child channels. Organizing channels in this way ensures that only compatible packages are installed
on each system.
Software channels use repositories to provide packages. The channels mirror the repositories in Uyuni,
and the package names and other data are stored in the Uyuni database. You can have any number of
repositories associated with a channel. The software from those repositories can then be installed on
clients by subscribing the client to the appropriate channel.
Clients can only be assigned to one base channel. The client can then install or update packages from the
repositories associated with that base channel and any of its child channels.
Uyuni provides a number of vendor channels, which provide everything you need to run Uyuni.
Uyuni Administrators and Channel Administrators have channel management authority, which gives them
the ability to create and manage their own custom channels. If you want to use your own packages in your
environment, you can create custom channels. Custom channels can be used as a base channel, or you can
associate them with a vendor base channel.
To remove a channel at the command prompt, use:
spacewalk-remove-channel -c <channel-name>
This section gives more detail on how to create, administer, and delete custom channels. You must have
administrator privileges to be able to create and manage custom channels.
If you have custom software packages that you need to install on your client systems, you can create a
custom child channel to manage them. You need to create the channel in the Uyuni Web UI and create a
repository for the packages, before assigning the channel to your systems.
Do not create child channels containing packages that are not compatible with
the client system.
You can select a vendor channel as the base channel if you want to use packages provided by a vendor.
Alternatively, select none to make your custom channel a base channel.
5. Provide any additional information in the contact details, channel access control, and GPG fields,
as required for your environment.
6. Click [Create Channel] .
Custom channels sometimes require additional security settings. Many third party vendors secure
packages with GPG. If you want to use GPG-protected packages in your custom channel, you need to
trust the GPG key which has been used to sign the metadata. You can then check the Has Signed
Metadata? check box to match the package metadata against the trusted GPG keys.
If remote channels and repositories are signed with GPG keys, you can import and trust these GPG keys.
For example, execute spacewalk-repo-sync from the command line on the Uyuni Server.
The underlying zypper call imports the key, if it is available. The Web UI does not offer this feature.
This only works when the repository you want to mirror is set up in a special way and provides the "key"
in the repository next to the signature. This is the case for all repositories generated by the Open Build
Service (OBS). For other repositories special preparation steps are needed (see below).
By default, the Enable GPG Check field is checked when you create a new
channel. If you would like to add custom packages and applications to your
channel, make sure you uncheck this field to be able to install unsigned
packages. Disabling the GPG check is a security risk if packages are from an
untrusted source.
You can only add a repository to Uyuni with the Web UI if it is a valid software repository. Check in
advance that the needed repository metadata is available. Tools such as createrepo and reprepro are
useful in this regard. mgrpush can help with pushing a single RPM into a channel without creating a
repository. For more information, see the man pages for createrepo_c and reprepro.
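A sketch of the createrepo workflow mentioned above; the repository path is an example, and the fallback exists only so the sketch runs outside the server:

```shell
# Create valid repository metadata for a directory of custom RPMs so the
# directory can be used as a channel repository. Example path; adapt it.
REPO=/srv/www/htdocs/pub/my-custom-repo
mkdir -p "$REPO" 2>/dev/null || REPO=$(mktemp -d)   # demo fallback
if command -v createrepo_c >/dev/null; then
  createrepo_c "$REPO"   # writes repodata/ (primary, filelists, ...)
else
  echo "createrepo_c not installed; metadata would go to $REPO/repodata"
fi
```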
The above procedure only works if the repository you want to mirror provides the "key" in the repository
next to the signature. This is the case for all repositories generated by the OBS, but it is typically not the
case for other repositories of operating systems that are not offered by the OBS.
If the repository you want to use does not provide a GPG key, you can provide one yourself and import the
GPG key into the keyring manually. If you import the key into the /var/lib/spacewalk/gpgdir
keyring using the gpg command line tool, it is stored permanently. The key also persists if
the chroot environment is cleaned.
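A sketch of the manual import; the key file name is a placeholder, and the demo homedir is used only so the commands do not touch the real keyring at /var/lib/spacewalk/gpgdir:

```shell
# Import a repository signing key into a GPG keyring. On the server,
# set GPGDIR=/var/lib/spacewalk/gpgdir and run as root.
GPGDIR=${GPGDIR:-$(mktemp -d)}
chmod 700 "$GPGDIR"
if command -v gpg >/dev/null; then
  # gpg --homedir "$GPGDIR" --import repo-signing-key.asc  # the actual import
  gpg --homedir "$GPGDIR" --list-keys >/dev/null 2>&1      # initializes and lists the keyring
fi
echo "keyring at $GPGDIR"
```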
uyuni_suite
is mandatory. In Debian documentation, this is also known as distribution. With this parameter
you specify the apt source. Without this parameter the original approach is used. If the parameter
ends with /, the repository is identified as flat.
uyuni_component
is optional. This parameter can specify only one component; it is not possible to list several
components. An apt source entry allows specifying multiple components, but for Uyuni this is not
possible. Instead, you must create a separate repository for each component.
uyuni_arch
is optional. If omitted, the architecture name is calculated with a SQL query for the channel from the
database. Specify uyuni_arch explicitly if it does not match the architecture of the channel
(sometimes architecture naming is ambiguous).
For example:
or
For each pair of suite and component, the specification defines a distinct
URL calculated from the base URL, suite, and component.
By default, a synchronization will happen automatically for all custom channels you create. In particular, it
will happen:
To disable automatic synchronization, set this parameter in /etc/rhn/rhn.conf:
java.unify_custom_channel_management = 0
With this property turned off, no synchronization is performed automatically. To keep a
custom channel up to date, you need to either:
• synchronize it manually by navigating to the Sync tab and clicking [Sync Now] , or
• set up an automated synchronization schedule on the Repositories tab.
When the process is started, there are several ways to check if a channel has finished synchronizing:
• In the Uyuni Web UI, navigate to Admin › Setup Wizard and select the Products tab. This
dialog displays a completion bar for each product when they are being synchronized.
• In the Uyuni Web UI, navigate to Software › Manage › Channels, then click the channel
associated with the repository. Navigate to the Repositories › Sync tab. The Sync Status
is shown next to the repository name.
• Check the synchronization log file at the command prompt:
tail -f /var/log/rhn/reposync/<channel-label>.log
Each child channel generates its own log during the synchronization process. You need to check
all the base and child channel log files to be sure that the synchronization is complete.
Custom channels can only include packages or patches that are cloned or custom, and they must match the
base architecture of the channel. Patches added to custom channels must apply to a package that exists in
the channel.
2. OPTIONAL: See all packages currently in the channel by navigating to the List/Remove tab.
3. Add new packages to the channel by navigating to the Add tab.
4. Select the parent channel to provide packages, and click [View Packages] to populate the list.
5. Check the packages to add to the custom channel, and click [Add Packages] .
6. When you are satisfied with the selection, click [Confirm Addition] to add the packages to
the channel.
7. OPTIONAL: You can compare the packages in the current channel with those in a different
channel by navigating to Software › Manage › Channels, and going to the Packages › Compare
tab. To make the two channels the same, click the [Merge Differences] button, and resolve
any conflicts.
2. OPTIONAL: See all patches currently in the channel by navigating to the List/Remove tab.
3. Add new patches to the channel by navigating to the Add tab, and selecting what kind of patches
you want to add.
4. Select the parent channel to provide patches, and click [View Associated Patches] to
populate the list.
5. Check the patches to add to the custom channel, and click [Confirm] .
6. When you are satisfied with the selection, click [Confirm] to add the patches to the channel.
To grant other users rights to alter or delete a channel, navigate to Software › Manage › Channels and
select the channel you want to edit. Navigate to the Managers tab, and check the user to grant
permissions. Click [Update] to save the changes.
If you delete a channel that has been assigned to a set of clients, it triggers an
immediate update of the channel state for any clients associated with the deleted
channel. This is to ensure that the changes are reflected accurately in the
repository file.
You cannot delete Uyuni channels with the Web UI. Only custom channels can be deleted.
When channels are deleted, the packages that are part of the deleted channel are not automatically
removed. You are not able to update packages that have had their channel deleted.
You can delete packages that are not associated with a channel in the Uyuni Web UI. Navigate to
Software › Manage › Packages, check the packages to remove, and click [Delete Packages] .
Content lifecycle management allows you to select software channels as sources, adjust them as required
for your environment, and thoroughly test them before installing onto your production clients.
While you cannot directly modify vendor channels, you can clone them and then modify the clones by
adding or removing packages and custom patches. You can assign these cloned channels to test clients to
ensure they work as expected.
By default, cloned vendor channels match the original vendor channel and
automatically select the dependencies. You can disable the automatic selection
by setting this parameter in /etc/rhn/rhn.conf:
java.cloned_channel_auto_selection = false
Then, when all tests pass, you can promote the cloned channels to production servers.
This is achieved through a series of environments that your software channels can move through on their
lifecycle. Most environment lifecycles include at least test and production environments, but you can have
as many environments as you require.
This section covers the basic content lifecycle procedures, and the filters available. For more specific
examples, see Administration › Content-lifecycle-examples.
7. Check the child channels you require, and click [Save] to return to the project page. The
software channels you selected should now be showing.
8. Click [Attach/Detach Filters] .
9. In the Filters dialog, select the filters you want to attach to the project. To create a new filter,
click [Create new Filter] .
10. Click [Add Environment] .
11. In the Environment Lifecycle dialog, give the first environment a name, a label, and a
description, and click [Save] . The Label field only accepts lowercase letters, numbers, periods,
hyphens, and underscores.
12. Continue creating environments until you have all the environments for your lifecycle completed.
You can select the order of the environments in the lifecycle by selecting an environment in the
Insert before field when you create it.
• package filtering
◦ by name
◦ by name, epoch, version, release, and architecture
◦ by provided name
• patch filtering
◦ by advisory name
◦ by advisory type
◦ by synopsis
◦ by keyword
◦ by date
◦ by affected package
• module
◦ by stream
There are multiple matchers you can use with the filter. Which ones are available depends on the filter
type you choose.
• contains
• matches (must take the form of a regular expression)
• equals
• greater
• greater or equal
• lower or equal
• lower
• later or equal
This behavior is useful when you want to exclude a large number of packages or patches using a general
Deny filter and "cherry-pick" specific packages or patches with specific Allow filters.
Content filters are global in your organization and can be shared between
projects.
If your project already contains built sources, when you add an environment it is
automatically populated with the existing content. Content is drawn from the
previous environment of the cycle if it had one. If there is no previous
environment, it is left empty until the project sources are built again.
When applied, this template creates three filters required to achieve this behavior:
• Allow patches that contain kernel-default package equal to a base kernel version
• Deny patches that contain reboot_suggested keyword
• Deny patches that contain a package which provides the name installhint(reboot-
needed)
For more information on how to set up a live patching project, see Administration › Content-lifecycle-examples.
2. In the dialog, click [Use a template] . The inputs will change accordingly.
3. In the Prefix field, type a name prefix. This value will be prepended to the name of every
individual filter created by the template. If the template is being applied in the context of a project,
this field will be prefilled with the project label.
4. In the Template field, select Live patching based on a SUSE product.
5. In the Product field, select the product you wish to set up live patching for.
6. In the Kernel field, select a kernel version from the list of versions available in the selected
product. The filter to deny the later regular kernel patches will be based on this version.
7. Click [Save] to create the filters.
8. Navigate to Content Lifecycle › Projects and select your project.
When applied, this template creates three filters required to achieve this behavior:
• Allow patches that contain kernel-default package equal to a base kernel version
• Deny patches that contain reboot_suggested keyword
• Deny patches that contain a package which provides the name installhint(reboot-
needed)
For more information on how to set up a live patching project, see Administration › Content-lifecycle-examples.
2. In the dialog, click [Use a template] . The inputs will change accordingly.
3. In the Prefix field, type a name prefix. This value will be prepended to the name of every filter
created by the template. If the template is being applied in the context of a project, this field will be
prefilled with the project label.
4. In the Template field, select Live patching based on a specific system.
5. In the System field, select a system from the list, or start typing a system name to narrow down
the options.
6. In the Kernel field, select a kernel version from the list of versions installed in the selected
system. The filter to deny the later regular kernel patches will be based on this version.
7. Click [Save] to create the filters.
8. Navigate to Content Lifecycle › Projects and select your project.
When applied, this template creates an AppStream filter per module and its default stream.
If this process is done from the project’s page, the filters are added to the project automatically.
Otherwise, the created filters can be listed in Content Lifecycle › Filters and be added to any project as
needed.
Each individual filter can be edited to select a different module stream, or removed altogether to exclude
that module from the target repositories.
Because not all module streams are compatible with each other, changing
individual streams may prevent successful resolution of modular dependencies.
When this happens, the filters pane in the project details page will show an error
describing the problem, and the build button will be disabled until all the module
selections are compatible.
Since Red Hat Enterprise Linux 9, modules do not have any defined default
streams. Therefore, using this template with Red Hat Enterprise Linux 9 sources
will have no effect.
For more information on how to set up AppStream repositories with content lifecycle management, see
Administration › Content-lifecycle-examples.
2. In the Filters section, click [Attach/Detach Filters] , and then click [Create
New Filter] .
3. In the dialog, click [Use a template] . The inputs will change accordingly.
4. In the Prefix field, type a name prefix. This value will be prepended to the name of every filter
created by the template. If the template is being applied in the context of a project, this field will be
prefilled with the project label.
5. In the Template field, select AppStream modules with defaults.
6. In the Channel field, select a modular channel to get the modules from. In this dropdown, only
the modular channels are displayed.
7. Click [Save] to create the filters.
8. Scroll to the Filters section to see the newly attached AppStream filters.
9. You can edit or remove any individual filter to tailor the project to your needs.
Building applies filters to the attached sources and clones them to the first environment in the project.
You can use the same vendor channels as sources for multiple content projects. In this case, Uyuni does
not create new patch clones for each cloned channel. Instead, a single patch clone is shared between all of
your cloned channels. This can cause problems if a vendor modifies a patch; for example, if the patch is
retracted, or the packages within the patch are changed. When you build one of the content projects, all
the channels that share the cloned patch are synchronized with the original by default, even if the channels
are in other environments of your content project, or other content project channels in your organization.
You can change this behavior by turning off automatic patch synchronization in your organization
settings. To manually synchronize the patch later for all channels sharing the patch, navigate to Software ›
Manage › Channels, click the channel you want to synchronize and navigate to the Sync subtab. Even
manual patch synchronization affects all organization channels sharing the patch.
Make sure you have the environment available before building the project.
After the build is finished, the environment version is increased by one and the built sources, such as
software channels, can be assigned to your clients.
Newly added cloned channels are not assigned to clients automatically. If you
add or promote sources you need to manually check and update your channel
assignments.
The By Date filter excludes all patches released after a specified date. This filter is useful for your
content lifecycle projects that follow a monthly patch cycle.
1. In the Uyuni Web UI, navigate to Content Lifecycle › Filters and click [Create Filter] .
2. In the Filter Name field, type a name for your filter. For example, Exclude patches by
date.
3. In the Filter Type field, select Patch (Issue date).
4. In the Matcher field, later or equal is autoselected.
5. Select the date and time.
6. Click [Save] .
The new filter is added to your filter list, but it still needs to be applied to the project. To apply the filter
you need to build the first environment.
Testing may reveal issues with some patches. When an issue is found, you can exclude the problematic
patch, even though it was released before the date specified in the by date filter.
2. In the Filter Name field, enter a name for the filter. For example, Exclude openjdk
patch.
3. In the Filter Type field, select Patch (Advisory Name).
4. In the Matcher field, select equals.
5. In the Advisory Name field, type a name for the advisory. For example, SUSE-15-2019-
1807.
6. Click [Save] .
7. Navigate to Content Lifecycle › Projects and select your project.
8. Click the [Attach/Detach Filters] link, select Exclude openjdk patch, and click
[Save] .
When you rebuild the project with the [Build] button, the new filter is used together with the by
date filter we added before.
In this example, you have received a security alert. An important security patch was released several days
after the first of the month you are currently working on. The name of the new patch is
SUSE-15-2019-2071. You need to include this new patch into your environment.
The Allow filters rule overrides the exclude function of the Deny filter rule.
For more information, see Administration › Content-lifecycle.
2. In the Filter Name field, type a name for the filter. For example, Include kernel
security fix.
3. In the Filter Type field, select Patch (Advisory Name).
4. In the Matcher field, select equals.
5. In the Advisory Name field, type SUSE-15-2019-2071, and check Allow.
6. Click [Save] to store the filter.
7. Navigate to Content Lifecycle › Projects and select your project from the list.
4. Rebuild the project to create a new environment with patches for the next month.
When you are preparing to use live patching, there are some important
considerations
• Only ever use one kernel version on your systems. The live patching
packages are installed with a specific kernel.
• Live patching updates are shipped as one patch.
• Each kernel patch that begins a new series of live patching kernels
displays the required reboot flag. These kernel patches come with
live patching tools. When you have installed them, you need to reboot the
system at least once before the next year.
• Only install live patch updates that match the installed kernel version.
• Live patches are provided as stand-alone patches. You must exclude all
regular kernel patches with a higher kernel version than the currently
installed one.
In this example you update your systems with the SUSE-15-2019-1244 patch. This patch contains
kernel-default-4.12.14-150.17.1-x86_64.
You need to exclude all patches which contain a higher version of kernel-default.
2. In the Filter Name field, type a name for your filter. For example, Exclude kernel
greater than 4.12.14-150.17.1.
3. In the Filter Type field, select Patch (Contains Package).
4. In the Matcher field, select version greater than.
5. In the Package Name field, type kernel-default.
6. Leave the Epoch field empty.
7. In the Version field, type 4.12.14.
8. In the Release field, type 150.17.1.
9. Click [Save] to store the filter.
10. Navigate to Content Lifecycle › Projects and select your project.
11. Click the [Attach/Detach Filters] link.
12. Select Exclude kernel greater than 4.12.14-150.17.1, and click [Save] .
When you click [Build] , a new environment is created. The new environment contains all the kernel
patches up to the version you installed.
All kernel patches with higher kernel versions are removed. Live patching
kernels remain available as long as they are not the first in a series.
This procedure can be automated using a filter template. For more information, see Administration › Content-lifecycle.
Click [Build] to rebuild the environment. The new environment contains all kernel patches up to the
new kernel version you selected. Systems using these channels have the kernel update available for
installation. You need to reboot systems after they have performed the upgrade. The new kernel remains
valid for one year. All packages installed during the year match the current live patching kernel filter.
The AppStream filter selects a single module stream to be included in the target repository. You can add
multiple filters to select multiple module streams.
If you do not use an AppStream filter in your CLM project, the module metadata in the modular sources
remains intact, and the target repositories contain the same module metadata. As long as at least one
AppStream filter is enabled in the CLM project, all target repositories are transformed into regular
repositories.
In some cases, you might wish to build regular repositories without having to include packages from any
module. To do so, add an AppStream filter using the matcher none (disable modularity). This
will disable all the modules in the target repository. This is especially useful for Red Hat Enterprise
Linux 9 clients, where the default versions of most modules are already included in the AppStream
repository as regular packages.
To use the AppStream filter, you need a CLM project with a modular repository such as Red Hat
Enterprise Linux AppStream. Ensure that you included the module you need as a source
before you begin.
5. Click [Attach/Detach Filters] , select your new AppStream filter, and click [Save] .
You can use the browse function in the Create/Edit Filter form to select a module from a list of
available module streams for a modular channel.
Channel selection is only for browsing modules. The selected channel is not
saved with the filter, and does not affect the CLM process in any way.
You can create additional AppStream filters for any other module stream to be included in the target
repository. Any module streams that the selected stream depends on are automatically included.
5. Click [Attach/Detach Filters] , select your new AppStream filter, and click [Save] .
This will effectively remove the module metadata from the target repository, excluding any package that
belongs to a module.
When you build your CLM project using the [Build] button in the Web UI, the target repository is a
regular repository without any modules, that contains packages from the selected module streams.
The repository mirroring tool (RMT) is available on SUSE Linux Enterprise 15 and later. RMT replaces
the subscription management tool (SMT), which can be used on older SUSE Linux Enterprise
installations.
In a disconnected Uyuni setup, RMT or SMT uses an external network to connect to SUSE Customer
Center. All software channels and repositories are synchronized to a removable storage device. The
storage device can then be used to update the disconnected Uyuni installation.
This setup allows your Uyuni installation to remain in an offline, disconnected environment.
Your RMT or SMT instance must be used to manage the Uyuni Server directly.
It cannot be used to manage a second RMT or SMT instance, in a cascade.
We recommend you set up a dedicated RMT instance for each Uyuni installation.
zypper in rmt-server
yast2 rmt
rmt-cli sync
3. Enable the products you require. For example, to enable SLES 15:
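The exact command depends on the product you need; as a sketch, the product triplet sles/15/x86_64 below is an example identifier, which you should verify against the output of rmt-cli products list --all:

```shell
rmt-cli products enable sles/15/x86_64
```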
4. Export the synchronized data to your removable storage. In this example, the storage medium is
mounted at /mnt/usb:
Ensure that the external storage is mounted to a directory that is writeable by the
RMT user. You can change RMT user settings in the cli section of
/etc/rmt.conf.
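The export step above might be sketched as follows, assuming the RMT offline export subcommands and the /mnt/usb mount point from the example:

```shell
rmt-cli export settings /mnt/usb   # save the enabled repository settings
rmt-cli export data /mnt/usb       # export SCC metadata
rmt-cli export repos /mnt/usb      # export the mirrored repositories
```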
SMT requires you to create a local mirror directory on the SMT instance to synchronize repositories and
packages.
2. Export the synchronized data to your removable storage. In this example, the storage medium is
mounted at /mnt/usb:
Ensure that the external storage is mounted to a directory that is writeable by the
smt user. You can change SMT user settings in /etc/smt.conf.
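A hedged sketch of the SMT export step; the flags shown here (--todir, --dbreplfile, --directory) are assumptions that should be verified against the man pages of your SMT version:

```shell
smt-sync --todir /mnt/usb/                                  # export SCC data
smt-mirror --dbreplfile /mnt/usb/smt.db --directory /mnt/usb  # export mirrored repositories
```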
SLES 12 and products based on it such as SLES for SAP or SLE HPC
RMT: rmt-cli products enable sle-manager-tools/12/x86_64
SLES 15 and products based on it such as SLES for SAP or SLE HPC
RMT: rmt-cli products enable sle-manager-tools/15/x86_64
Other distributions, or architectures, can be enabled. For more information about enabling product
channels or repositories to be mirrored, see the documentation:
RMT
https://documentation.suse.com/sles/15-SP4/html/SLES-all/cha-rmt-mirroring.html#sec-rmt-
mirroring-enable-disable
SMT
https://documentation.suse.com/sles/12-SP5/single-html/SLES-smt/index.html#smt-mirroring-
manage-domirror
server.susemanager.fromdir = /media/disk
mgr-sync refresh
5. Perform a synchronization:
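The synchronization in step 5 might look like the following; the --refresh-channels flag is an assumption and should be checked with mgr-sync refresh --help:

```shell
mgr-sync refresh --refresh-channels
```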
The removable disk that you use for synchronization must always be available at
the same mount point. Do not trigger a synchronization if the storage medium is
not mounted; this results in data corruption.
With the fromdir option configured, the server will not be able to check if SUSE
Customer Center credentials are valid or not. Instead, a warning sign will be
displayed and no SCC online check will be performed.
Uyuni monitors some directories for free disk space. You can modify which directories are monitored,
and the warnings that are created. All settings are configured in the /etc/rhn/rhn.conf
configuration file.
When the available space in one of the monitored directories falls below a warning threshold, a message
is sent to the configured email address and a notification is shown at the top of the sign-in page.
• /var/lib/pgsql
• /var/spacewalk
• /var/cache
• /srv
You can change which directories are monitored with the spacecheck_dirs parameter. You can
specify multiple directories by separating them with a space.
For example:
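A sketch of a possible /etc/rhn/rhn.conf entry; the directory list shown simply repeats the default directories listed above:

```
spacecheck_dirs = /var/lib/pgsql /var/spacewalk /var/cache /srv
```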
8.2. Thresholds
By default, Uyuni creates a warning when a monitored directory has less than 10% of total space
available. A critical alert is created when a monitored directory falls below 5% space available.
You can change these alert thresholds with the spacecheck_free_alert and
spacecheck_free_critical parameters.
For example:
spacecheck_free_alert = 10
spacecheck_free_critical = 5
You can change this behavior with the spacecheck_shutdown parameter. A value of true enables
the shut down feature. Any other value disables it.
For example:
spacecheck_shutdown = true
Disabling the spacewalk-diskcheck.timer will stop periodic email alerts if the alert threshold is
reached, but the warning notification will still appear at the top of the sign-in page.
Uyuni supports two distinct build types: Dockerfile and the Kiwi image system.
The Kiwi build type is used to build system, virtual, and other images. The image store for the Kiwi build
type is pre-defined as a file system directory at /srv/www/os-images on the server. Uyuni serves
the image store over HTTPS from //<SERVER-FQDN>/os-images/. The image store location is
unique and is not customizable.
9.2.1. Requirements
The containers feature is available for Salt clients running SUSE Linux Enterprise Server 12 or later.
Before you begin, ensure your environment meets these requirements:
• A published git repository containing a Dockerfile and configuration scripts. The repository can be
public or private, and should be hosted on GitHub, GitLab, or BitBucket.
• A properly configured image store, such as a Docker registry.
The operating system on the build host must match the operating system on the
targeted image.
For example, build SUSE Linux Enterprise Server 15 based images on a build
host running SUSE Linux Enterprise Server 15 (SP2 or later) OS version. Build
SUSE Linux Enterprise Server 12 based images on a build host running SUSE
Linux Enterprise Server 12 SP5 or SUSE Linux Enterprise Server 12 SP4 OS
version.
From the Uyuni Web UI, perform these steps to configure a build host:
2. From the System Details page of the selected client, assign the containers modules. Go to
Software › Software Channels and enable the containers module (for example, SLE-Module-
Containers15-Pool and SLE-Module-Containers15-Updates). Confirm by
clicking [Change Subscriptions] .
3. From the System Details › Properties page, enable Container Build Host from the Add-
on System Types list. Confirm by clicking [Update Properties] .
4. Install all required packages by applying Highstate. From the system details page select States
› Highstate and click Apply Highstate. Alternatively, apply Highstate from the Uyuni Server
command line:
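Applying the highstate from the server command line might look like this; the minion ID buildhost.example.com is a placeholder for your build host:

```shell
salt 'buildhost.example.com' state.highstate
```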
To build a container, you need an activation key that is associated with a channel
other than SUSE Manager Default.
registry.example.com
The Registry URI can also be used to specify an image store on a registry that is already in use.
registry.example.com:5000/myregistry/myproject
2. Provide a name for the image profile by filling in the Label field.
https://github.com/USER/project.git#branchname:folder
https://github.com/ORG/project.git#branchname:folder
If your git repository is private, modify the profile’s URL to include authentication. Use this
URL format to authenticate with a GitHub token:
https://USER:<AUTHENTICATION_TOKEN>@github.com/USER/project.git#master:/container/
https://gitlab.example.com/USER/project.git#master:/container/
https://gitlab.example.com/GROUP/project.git#master:/container/
If your git repository is private and not publicly accessible, you need to modify the profile’s
git URL to include authentication. Use this URL format to authenticate with a GitLab
token:
https://gitlab-ci-token:<AUTHENTICATION_TOKEN>@gitlab.example.com/USER/project.git#master:/container/
6. Select an Activation Key. Activation keys ensure that images using a profile are assigned to
the correct channel and packages.
When you associate an activation key with an image profile, you ensure
that any image built from the profile is assigned to the correct software
channel and the packages in that channel.
The ARG parameters ensure that the built image is associated with the desired
repository served by Uyuni. The ARG parameters also allow you to build image
versions of SUSE Linux Enterprise Server which may differ from the version of
SUSE Linux Enterprise Server used by the build host itself.
For example, the ARG repo parameter, combined with an echo command that writes to
the repository file, creates and then injects the correct path into the repository
file for the desired channel version.
The repository is determined by the activation key that you assigned to your
image profile.
FROM registry.example.com/sles12sp2
MAINTAINER Tux Administrator "[email protected]"
ARG repo
ARG cert
# Sketch of how the build arguments can be consumed; exact certificate and
# repository file paths may vary in your environment
RUN echo "$cert" > /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT.pem && update-ca-certificates
RUN echo "$repo" > /etc/zypp/repos.d/susemanager:dockerbuild.repo
You can assign custom info key-value pairs to attach information to the image profiles. Additionally, these
key-value pairs are passed to the Docker build command as buildargs.
For more information about the available custom info keys and creating additional ones, see Reference ›
Systems.
2. Add a different tag name if you want a version other than the default latest (only relevant to
containers).
3. Select Build Profile and Build Host.
Notice the Profile Summary to the right of the build fields. When
you have selected a build profile, detailed information about the selected
profile is displayed in this area.
Image store
The registry from which the image is pulled for inspection.
Image name
The name of the image in the registry.
Image version
The version of the image in the registry.
Build host
The build host that pulls and inspects the image.
Activation key
The activation key that provides the path to the software channel that the image is inspected
with.
The entry for the image is created in the database, and an Inspect Image action on Uyuni is
scheduled.
When it has been processed, you can find the imported image in the Image List. It has a different
icon in the Build column, to indicate that the image is imported. The status icon for the imported image
can also be seen on the Overview tab for the image.
9.2.8. Troubleshooting
These are some known problems when working with images:
• HTTPS certificates to access the registry or the git repositories should be deployed to the client by
a custom state file.
• SSH git access using Docker is currently unsupported.
9.3. OS Images
OS Images are built by the Kiwi image system. The output image is customizable and can be PXE,
QCOW2, LiveCD, or other types of images.
For more information about the Kiwi build system, see the Kiwi documentation.
9.3.1. Requirements
The Kiwi image building feature is available for Salt clients running SUSE Linux Enterprise Server 12
and SUSE Linux Enterprise Server 11.
Kiwi image configuration files and configuration scripts must be accessible in one of these locations:
• Git repository
• HTTP hosted tarball
• Local build host directory
You need at least 1 GB of RAM available for hosts running OS Images built
with Kiwi. Disk space depends on the actual size of the image. For more
information, see the documentation of the underlying system.
This procedure guides you through the initial configuration for a build host.
The operating system on the build host must match the operating system on the
targeted image.
For example, build SUSE Linux Enterprise Server 15 based images on a build
host running SUSE Linux Enterprise Server 15 (SP2 or later) OS version. Build
SUSE Linux Enterprise Server 12 based images on a build host running SUSE
Linux Enterprise Server 12 SP5 or SUSE Linux Enterprise Server 12 SP4 OS
version.
Cross-architecture builds are not possible. For example, you must build
Raspberry PI SUSE Linux Enterprise Server 15 SP3 image on a Raspberry PI
(aarch64 architecture) build host running SUSE Linux Enterprise Server 15 SP3.
2. Navigate to the System Details › Properties tab, enable the Add-on System Type OS
Image Build Host. Confirm with [Update Properties] .
3. Navigate to System Details › Software › Software Channels, and enable the required software
channels depending on the build host version.
◦ SUSE Linux Enterprise Server 12 build hosts require Uyuni Client tools (SLE-Manager-
Tools12-Pool and SLE-Manager-Tools12-Updates).
◦ SUSE Linux Enterprise Server 15 build hosts require SUSE Linux Enterprise Server
modules SLE-Module-DevTools15-SP4-Pool and SLE-Module-
DevTools15-SP4-Updates.
Build host provisioning copies the Uyuni certificate RPM to the build host. This certificate is used for
accessing repositories provided by Uyuni.
When you upgrade the spacewalk-certs-tools package, the upgrade scenario calls the package
script using the default values. However, if the certificate path was changed or is unavailable, call the
package script manually using --ca-cert-full-path <path_to_certificate> after the upgrade
procedure has finished.
The RPM package with the certificate is stored in a salt-accessible directory such as:
/usr/share/susemanager/salt/images/rhn-org-trusted-ssl-cert-osimage-1.0-
1.noarch.rpm
The RPM package with the certificate is provided in the local build host repository:
/var/lib/Kiwi/repo
Specify the RPM package with the Uyuni SSL certificate in the build source,
and make sure your Kiwi configuration contains rhn-org-trusted-ssl-
cert-osimage as a required package in the bootstrap section.
Listing 2. config.xml
...
<packages type="bootstrap">
...
<package name="rhn-org-trusted-ssl-cert-osimage"
bootinclude="true"/>
</packages>
...
To build OS Images, you need an activation key that is associated with a channel
other than SUSE Manager Default.
Image stores for Kiwi build type, used to build system, virtual, and other images,
are not supported yet.
URL to the git repository containing the sources of the image to be built. Depending on the
layout of the repository the URL can be:
https://github.com/SUSE/manager-build-profiles
You can specify a branch after the # character in the URL. In this example, we use the
master branch:
https://github.com/SUSE/manager-build-profiles#master
You can specify a directory that contains the image sources after the : character. In this
example, we use OSImage/POS_Image-JeOS6:
https://github.com/SUSE/manager-build-
profiles#master:OSImage/POS_Image-JeOS6
https://myimagesourceserver.example.org/MyKiwiImage.tar.gz
Enter the path to the directory with the Kiwi build system sources. This directory must be
present on the selected build host.
/var/lib/Kiwi/MyKiwiImage
Kiwi sources consist of at least config.xml. Usually, config.sh and images.sh are present as
well. Sources can also contain files to be installed in the final image under the root subdirectory.
For information about the Kiwi build system, see the Kiwi documentation.
SUSE provides examples of fully functional image sources at the SUSE/manager-build-profiles public
GitHub repository.
<locale>en_US</locale>
<keytable>us.map.gz</keytable>
<timezone>Europe/Berlin</timezone>
<hwclock>utc</hwclock>
<rpm-excludedocs>true</rpm-excludedocs>
<type boot="saltboot/suse-SLES12" bootloader="grub2" checkprebuilt=
"true" compressed="false" filesystem="ext3" fsmountoptions="acl" fsnocheck=
"true" image="pxe" kernelcmdline="quiet"></type>
</preferences>
<!-- CUSTOM REPOSITORY
<repository type="rpm-dir">
<source path="this://repo"/>
</repository>
-->
<packages type="image">
<package name="patterns-sles-Minimal"/>
<package name="aaa_base-extras"/> <!-- wouldn't be SUSE without that
;-) -->
<package name="kernel-default"/>
<package name="salt-minion"/>
...
</packages>
<packages type="bootstrap">
...
<package name="sles-release"/>
<!-- this certificate package is required to access Uyuni
repositories
and is provided by Uyuni automatically -->
<package name="rhn-org-trusted-ssl-cert-osimage" bootinclude="true"/>
</packages>
<packages type="delete">
<package name="mtools"/>
<package name="initviocons"/>
...
</packages>
</image>
2. Add a different tag name if you want a version other than the default latest (applies only to
containers).
3. Select the Image Profile and a Build Host.
When you have selected a build profile, detailed information about the
selected profile is shown here.
The build server cannot run any form of automounter during the image building
process. If applicable, ensure that you do not have your Gnome session running
as root. If an automounter is running, the image build finishes successfully, but
the checksum of the image is different and causes a failure.
After the image is successfully built, the inspection phase begins. During the inspection phase, Uyuni
collects information about the image:
If the built image type is PXE, a Salt pillar is also generated. Image pillars are
stored in the database and the Salt subsystem can access details about the
generated image. Details include where the image files are located and provided,
image checksums, information needed for network boot, and more.
9.3.7. Troubleshooting
Building an image requires several dependent steps. When the build fails, investigating Salt state results
and the build log can help identify the source of the failure. You can carry out these checks when the build
fails:
9.3.8. Limitations
This section contains some known issues when working with images.
• HTTPS certificates used to access the HTTP sources or git repositories should be deployed to the
client by a custom state file, or configured manually.
• Importing Kiwi-based images is not supported.
Displayed data about images includes an image Name, its Version, Revision, and the build
Status. You can also see the image update status with a listing of possible patch and package updates
that are available for the image.
For OS Images, the Name and Version fields originate from Kiwi sources and are updated at the end
of a successful build. During building, or after a failed build, these fields show a temporary name based on
the profile name.
Revision is automatically increased after each successful build. For OS Images, multiple revisions can
co-exist in the store.
For Container Images the store holds only the latest revision. Information about previous revisions
(packages, patches, etc.) is preserved and it is possible to list them with the Show obsolete
checkbox.
Clicking the [Details] button on an image provides a detailed view. The detailed view includes an
exact list of relevant patches, a list of all packages installed within the image, and a build log.
Clicking the [Delete] button deletes the image from the list. It also deletes the associated pillar, files
from OS Image Store and obsolete revisions.
The patch and package lists are only available if the inspect state after a build
was successful.
SUSE recommends you always keep your Uyuni infrastructure updated. That includes servers, proxies,
and build hosts. If you do not keep the Uyuni Server updated, you might not be able to update some parts
of your environment when you need to.
This section contains a checklist for your downtime period, with links to further information on
performing each of the steps.
10.1. Server
Procedure: Server checks
1. Apply the latest updates. See Installation-and-upgrade › Server-intro.
For information about database schema upgrades and PostgreSQL migrations, see Installation-and-
upgrade › Db-intro.
By default, several update channels are configured and enabled for the Uyuni Server. New and updated
packages become available automatically.
10.4. Proxy
Proxies should be updated as soon as Uyuni Server updates are complete.
In general, running a proxy connected to a server on a different version is not supported. The only
exception is for the duration of updates where it is expected that the server is updated first, so the proxy
could run the previous version temporarily.
If you are migrating from version 4.2 to 4.3, upgrade the server first, then any
proxy.
With the version 2 ISS implementation SUSE removed the master/slave notion.
Contents can be exported and imported in any direction between any Uyuni
server.
To set up ISS version 1, you need to define one Uyuni Server as a master, with the other as a slave. If
conflicting configurations exist, the system prioritizes the master configuration.
ISS Masters are masters only because they have slaves attached to them. This
means that you need to set up the ISS Master first, by defining its slaves. You
can then set up the ISS Slaves, by attaching them to a master.
Before you set up the ISS Slave, you need to ensure you have the appropriate CA certificate.
2. On the ISS Slave, save the CA certificate file to the /etc/pki/trust/anchors/ directory.
When you have copied the certificate, you can set up the ISS Slave.
mgr-inter-sync
mgr-inter-sync -c <channel-name>
3. In the Uyuni Web UI, navigate to Admin › ISS Configuration › Configure Master-to-Slave
Mappings and select the organizations you want to synchronize.
With the version 2 ISS implementation SUSE removed the master/slave notion.
Contents can be exported and imported in any direction between any Uyuni
server.
inter-server-sync export -h
The export procedure creates an output directory with all the needed data for the import procedure.
inter-server-sync import -h
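As a sketch, exporting a single channel into a directory and importing it on the target server might look like the following; the option names are assumptions and should be confirmed against the -h output shown above:

```shell
# On the source server: export one channel to an output directory
inter-server-sync export --channels=my-custom-channel --outputDir=/tmp/export
# On the target server, after copying the directory over:
inter-server-sync import --importDir=/tmp/export
```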
The procedure for setting up Live Patching is slightly different for SLES 12 and SLES 15. Both
procedures are documented in this section.
Use content lifecycle management to clone the product tree and remove kernel versions newer than the
running one. This procedure is explained in Administration › Content-lifecycle-examples. This is the
recommended solution.
# spacewalk-manage-channel-lifecycle --list-channels
Spacewalk Username: admin
Spacewalk Password:
Channel tree:
1. sles15-sp5-pool-x86_64
\__ sle-live-patching15-pool-x86_64-sp5
\__ sle-live-patching15-updates-x86_64-sp5
\__ sle-manager-tools15-pool-x86_64-sp5
\__ sle-manager-tools15-updates-x86_64-sp5
\__ sles15-sp5-updates-x86_64
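Creating the dev clone of the base channel with the same tool might be sketched as follows, using the channel name from the listing above; verify the options with spacewalk-manage-channel-lifecycle --help:

```shell
# Create the initial dev clone of the base channel
spacewalk-manage-channel-lifecycle --channel sles15-sp5-pool-x86_64 --init
# Later, promote the dev clone to the test environment
spacewalk-manage-channel-lifecycle --channel dev-sles15-sp5-pool-x86_64 --promote
```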
Check the dev cloned channel you created, and remove any kernel updates that require a reboot.
Your channel is now set up for live patching, and can be promoted to testing. In this procedure, you
also add the live patching child channels to your client, ready to be applied.
2. In the Uyuni Web UI, select the client from Systems › Overview, and navigate to the Software ›
Software Channels tab.
3. Check the new test-sles15-sp5-pool-x86_64 custom channel to change the base
channel, and check both corresponding live patching child channels.
4. Click [Next] , confirm that the details are correct, and click [Confirm] to save the changes.
You can now select and view available CVE patches, and apply these important kernel updates with Live
Patching.
2. Apply the highstate to enable Live Patching, and reboot the client.
3. Repeat for each client that you want to manage with Live Patching.
4. To check that live patching has been enabled correctly, select the client from Systems › System
List, and ensure that Live Patch appears in the Kernel field.
• Not all kernel patches are Live Patches. Non-Live kernel patches are
represented by a Reboot Required icon located next to the
Security shield icon. These patches always require a reboot.
• Not all security issues can be fixed by applying a live patch. Some
security issues can only be fixed by applying a full kernel update and
require a reboot. The assigned CVE numbers for these issues are not
included in live patches. A CVE audit displays this requirement.
2. Apply the highstate to enable Live Patching, and reboot the client.
3. Repeat for each client that you want to manage with Live Patching.
4. To check that live patching has been enabled correctly, select the client from Systems › System
List, and ensure that Live Patching appears in the Kernel field.
• Not all kernel patches are Live Patches. Non-Live kernel patches are
represented by a Reboot Required icon located next to the
Security shield icon. These patches always require a reboot.
• Not all security issues can be fixed by applying a live patch. Some
security issues can only be fixed by applying a full kernel update and
require a reboot. The assigned CVE numbers for these issues are not
included in live patches. A CVE audit displays this requirement.
time when actions are allowed. Additionally, the allowed and restricted actions
differ. For more information about system locks, see Client-configuration ›
System-locking.
Maintenance windows require both a calendar, and a schedule. The calendar defines the date and time of
your maintenance window events, including recurring events, and must be in ical format. The schedule
uses the events defined in the calendar to create the maintenance windows. You must create an ical file
for upload, or link to an ical file to create the calendar, before you can create the schedule.
When you have created the schedule, you can assign it to clients that are registered to the Uyuni Server.
Clients that have a maintenance schedule assigned cannot run restricted actions outside of maintenance
windows.
Restricted actions significantly modify the client, and could potentially cause the client to stop running.
Some examples of restricted actions are:
• Package installation
• Client upgrade
• Product migration
• Highstate application (for Salt clients)
Unrestricted actions are minor actions that are considered safe and are unlikely to cause problems on the
client. Some examples of unrestricted actions are:
Before you begin, you must create an ical file for upload, or link to an ical file to create the calendar.
You can create ical files in your preferred calendaring tool, such as Microsoft Outlook, Google
Calendar, or KOrganizer.
When you assign a new maintenance schedule to a client, it is possible that the
client might already have some restricted actions scheduled, and that these might
now conflict with the new maintenance schedule. If this occurs, the Web UI
displays an error and you cannot assign the schedule to the client. To resolve this,
check the [Cancel affected actions] option when you assign the
schedule. This cancels any previously scheduled actions that conflict with the
new maintenance schedule.
When you have created your maintenance windows, you can schedule restricted actions, such as package
upgrades, to be performed during the maintenance window.
For example, you might like to create a schedule for production servers, and a different schedule for
testing servers. In this case, you would specify SUMMARY: Production Servers on events for the
production servers, and SUMMARY: Testing Servers on events for the testing servers.
There are two types of schedule: single or multi. If your calendar contains events that apply to more than
one schedule, you must select multi, and ensure you name the schedule according to the SUMMARY
field you used in the calendar file.
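As an illustration of the SUMMARY convention described above, a calendar file for a multi schedule could contain events such as these (all names, UIDs, and dates are hypothetical):

```text
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example Corp//Maintenance Calendar//EN
BEGIN:VEVENT
UID:prod-window-1@example.com
SUMMARY:Production Servers
DTSTART:20240406T220000Z
DTEND:20240407T020000Z
RRULE:FREQ=MONTHLY;BYDAY=1SA
END:VEVENT
BEGIN:VEVENT
UID:test-window-1@example.com
SUMMARY:Testing Servers
DTSTART:20240413T220000Z
DTEND:20240414T020000Z
RRULE:FREQ=MONTHLY;BYDAY=2SA
END:VEVENT
END:VCALENDAR
```

Here the SUMMARY values distinguish the production schedule from the testing schedule within the same calendar file.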
Restricted actions significantly modify the client, and could potentially cause the client to stop running.
Restricted actions can only be run during a maintenance window. The restricted actions are:
For Salt clients, it is possible to run remote commands directly at any time by
navigating to Salt › Remote Commands. This applies whether or not the Salt
client is in a maintenance window. For more information about remote
commands, see Administration › Actions.
Unrestricted actions are minor actions that are considered safe and are unlikely to cause problems on the
client. If an action is not restricted it is, by definition, unrestricted, and can be run at any time.
This tool is designed for use with a SUSE support subscription. It is not required for open source
distributions, including openSUSE, CentOS, and Ubuntu.
The available commands and arguments for mgr-sync are listed in this table. Use this syntax for
mgr-sync commands:
To see the full list of options specific to a command, use this command:
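The command itself is not reproduced above; it is presumably of the following form, a sketch based on the standard --help convention:

```shell
mgr-sync <command> --help
```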
• /var/log/rhn/mgr-sync.log
• /var/log/rhn/rhn_web_api.log
Prometheus and Grafana packages are included in the Uyuni Client Tools for:
You need to install Prometheus and Grafana on a machine separate from the Uyuni Server. We
recommend using a Salt-managed SUSE client as your monitoring server. Other clients are not supported
as a monitoring server.
Prometheus fetches metrics using a pull mechanism, so the server must be able to establish TCP
connections to monitored clients. Clients must have corresponding open ports and be reachable over the
network. Alternatively, you can use reverse proxies to establish a connection.
15.1.1. Prometheus
Prometheus is an open-source monitoring tool that is used to record real-time metrics in a time-series
database. Metrics are pulled via HTTP, enabling high performance and scalability.
Prometheus metrics are time series data, or timestamped values belonging to the same group or
dimension. A metric is uniquely identified by its name and set of labels.
Each application or system being monitored must expose metrics in the format above, either through code
instrumentation or Prometheus exporters.
The Prometheus community provides a list of official exporters, and more can be found as community
contributions. For more information and an extensive list of exporters, see
https://prometheus.io/docs/instrumenting/exporters/.
15.1.3. Grafana
Grafana is a tool for data visualization, monitoring, and analysis. It is used to create dashboards with
panels representing specific metrics over a set period of time. Grafana is commonly used together with
Prometheus, but also supports other data sources such as Elasticsearch, MySQL, PostgreSQL, and
InfluxDB. For more information about Grafana, see https://grafana.com/docs/.
For more information about the monitoring formulas, see Specialized-guides › Salt.
zypper in golang-github-prometheus-prometheus
3. Check that the Prometheus interface loads correctly. In your browser, navigate to the URL of the
server where Prometheus is installed, on port 9090 (for example,
http://example.com:9090).
4. Open the configuration file at /etc/prometheus/prometheus.yml and add this
configuration information. Replace server.url with your Uyuni server URL and adjust
username and password fields to match your Uyuni credentials.
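The configuration block itself is not reproduced here. A minimal sketch using Prometheus's Uyuni service discovery (uyuni_sd_configs); treat the field names as assumptions to verify against your Prometheus version:

```yaml
# Hypothetical minimal scrape job: discover managed clients through the Uyuni API.
scrape_configs:
  - job_name: 'uyuni'
    uyuni_sd_configs:
      - server: "http://server.url"   # your Uyuni server URL
        username: "admin"             # your Uyuni credentials
        password: "admin"
```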
For more information about the Prometheus configuration options, see the official Prometheus
documentation at https://prometheus.io/docs/prometheus/latest/configuration/configuration/.
monitoring, and more. You can choose which dashboards to provision in the
formula configuration page.
zypper in grafana
3. In your browser, navigate to the URL of the server where Grafana is installed, on port 3000 (for
example, http://example.com:3000).
4. On the login page, enter admin for username and password.
5. Click [Log in] . If login is successful, then you will see a prompt to change the password.
6. Click [OK] on the prompt, then change your password.
7. Hover over the cog icon in the side menu to show the configuration options.
8. Click [Data sources] .
9. Click [Add data source] to see a list of all supported data sources.
10. Choose the Prometheus data source.
11. Make sure to specify the correct URL of the Prometheus server.
12. Click [Save & test] .
13. To import a dashboard click the [+] icon in the side menu, and then click [Import] .
14. For Uyuni server overview load the dashboard ID: 17569.
15. For Uyuni clients overview load the dashboard ID: 17570.
• For more information about the monitoring formulas, see Specialized-guides › Salt.
• For more information on how to manually install and configure Grafana,
see https://grafana.com/docs.
The exporter packages are pre-installed in Uyuni Server and Proxy, but their respective systemd daemons
are disabled by default.
Only server self-health monitoring can be enabled using the Web UI. Metrics for
Every salt_queue value has a label named queue with the queue number as its value.
On SLE Micro, only the Node exporter and the Blackbox exporter are available.
When you have the exporters installed and configured, you can start using Prometheus to collect metrics
from the monitored systems. If you have configured your monitoring server with the Web UI, metrics
collection happens automatically.
5. Select the exporters you want to enable and customize arguments according to your needs. The
Address field accepts either a port number preceded by a colon (:9100), or a fully resolvable
address (example:9100).
6. Click [Save Formula] .
7. Apply the highstate.
Monitoring formulas can also be configured for System Groups, by applying the
same configuration used for individual systems inside the corresponding group.
For more information about the monitoring formulas, see Specialized-guides › Salt.
• https://grafana.com/docs/grafana/latest/administration/user-management/user-preferences/#change-your-grafana-password
If you have lost the Grafana administrator password, you can reset it as root with the following
command:
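The command itself is not shown here; assuming the grafana-cli tool shipped with the grafana package, a typical invocation is:

```shell
# Reset the Grafana admin password (run as root on the Grafana host).
grafana-cli admin reset-admin-password <new_password>
```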
Additionally, if you are running the alert manager on a different host than where you run Prometheus, you
also need to open port 9093.
For clients installed on cloud instances, you can add the required ports to a security group that has access
to the monitoring server.
Alternatively, you can deploy a Prometheus instance in the exporters' local network, and configure
federation. This allows the main monitoring server to scrape the time series from the local Prometheus
instance. If you use this method, you only need to open the Prometheus API port, which is 9090.
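As a sketch, the federation job on the main monitoring server could look like this (job names, the match[] selector, and the target address are hypothetical):

```yaml
# Scrape aggregated series from a local Prometheus through its /federate endpoint.
scrape_configs:
  - job_name: 'federate'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="node"}'
    static_configs:
      - targets: ['local-prometheus.example.com:9090']
```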
You can also proxy requests through the network boundary. Tools like PushProx deploy a proxy and a
client on both sides of the network barrier and allow Prometheus to work across network topologies such
as NAT.
For more information about the monitoring formulas, see Specialized-guides › Salt.
15.5. Security
Prometheus server and Prometheus node exporter offer a built-in mechanism to secure their endpoints
with TLS encryption and authentication. Uyuni Web UI simplifies the configuration of all involved
components. The TLS certificates have to be provided and deployed by the user. Uyuni offers enabling
the following security model:
For more information about configuring all available options, see Specialized-guides › Salt.
This section demonstrates how to generate client/server certificates for Prometheus and Node exporter
minions, self-signed with the SUSE Manager CA.
Ensure that the set-cname parameter is the fully qualified domain name (FQDN) of your Salt
client. You can use the set-cname parameter multiple times if you require multiple aliases.
2. Copy server.crt and server.key files to the Salt minion and provide read access for
prometheus user.
For most environments, a single organization is enough. However, more complicated environments might
need several organizations. You might like to have an organization for each physical location within your
business, or for different business functions.
When you have created your organizations, you can create and assign users to your organizations. You can
then assign permissions on an organization level, which applies by default to every user assigned to the
organization.
You can also configure authentication methods for your new organization, including PAM and single
sign-on. For more information about authentication, see Administration › Auth-methods.
From the Admin › Organizations section, you can access tabs to manage users, trusts, and configuration.
Before you make changes to an organization, ensure you are signed in as the correct administrator for the
organization you want to change.
4. Apply the changes by clicking [Apply] . This schedules the task to apply the changes to all
clients within the organization.
In 2021, SUSE introduced a mechanism called "retracted patches" to revoke such patches almost
immediately by setting their advisory status to "retracted" (instead of "final" or "stable").
A retracted patch or package cannot be installed on systems with Uyuni. The only way to install a
retracted package is to do it manually with zypper install, specifying the exact package
version. For example:
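A sketch of such a command, with a hypothetical package name and version:

```shell
# Explicitly install an exact (retracted) package version.
zypper install vim-9.0.2127-150500.3.3.1
```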
The retracted status of patches and packages is indicated by an icon in the Uyuni Web UI.
When a patch or package that has been installed on a system is retracted, the icon is also displayed
in the installed packages list of that system. Uyuni does not provide a way to downgrade such a patch or
package.
When you clone vendor channels into your organization, the channel patches are cloned as well.
When the vendor retracts a patch in a channel and Uyuni synchronizes this channel (for example, with the
nightly job), the "retracted" attribute is not propagated to the cloned patches and is not
observed by the clients subscribed to the cloned channels. To propagate the attribute to your cloned
channels, use one of the following ways:
• Patch Sync (Software › Manage › cloned channel › Patches › Sync). This function allows you to
align the attributes of patches in your cloned channel to their originals.
• Content Lifecycle Management. For more information about cloned channels in the context of
Content Lifecycle Management, see Client-configuration › Channels.
Example:
1. Consider two Content Lifecycle Management projects prj1 and prj2
2. Both of these projects have 2 environments dev and test
3. Both of these projects have a vendor channel set as a source channel
4. All channels in this scenario (four cloned channels in total) are aligned to the latest state of the
vendor channels
5. The vendor retracts a patch in the source channel and the nightly job synchronizes it to your Uyuni
6. None of the four channels see this change because they are using a patch clone, not the patch
directly.
7. As soon as you synchronize your patch (either by building one of the two projects, or by using the
Patch Sync function on any of the four cloned channels), all of the cloned channels see the patch as
retracted, due to patch sharing.
They will depend on the correct version of the package that is known to include the correction in the
software. This type of package:
• cannot be installed accidentally (i.e. zypper update will never suggest installing them),
• cannot be removed accidentally (i.e. a newer package version will not replace the PTF one, unless
the user explicitly requests it on the zypper command line),
• is only updated when the newer version is known to address the specific issue previously solved by
the PTF,
• updates only packages already installed on the system (i.e. if a software package is split into multiple
subpackages, the PTF replaces only those currently installed on the system).
The correct ID of the package will be provided by SUSE Support during the course of the support case
investigation, along with instructions on how to deploy or restart the affected services.
systems. Other versions or operating systems do not have this feature and the
pages are not visible for them.
Procedure: Enabling and synchronizing PTF repositories using the command line
1. On the console enter mgr-sync refresh.
2. Enter mgr-sync list channel and look for channels starting with your SCC account name
and with ptfs in their name. For example, a123456-sles-15.3-ptfs-x86_64.
3. Enable the PTF channel with mgr-sync add channel <label>.
This channel is now available and can be added to every system which is using the same base channel.
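The steps above can be collected into a short session (the channel label is the example from step 2; filtering with grep is an added convenience, not part of mgr-sync):

```shell
mgr-sync refresh
mgr-sync list channel | grep ptfs
mgr-sync add channel a123456-sles-15.3-ptfs-x86_64
```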
PTF packages need to be installed explicitly, since they are not automatically picked up when updating a
system. SUSE Customer Support will provide the PTF number to fix a specific problem. With this
number, the proxy package can be identified in the PTF list. In the Uyuni Web UI, every system with
PTFs available for installation has a page that lists them.
Procedure: Enabling and synchronizing PTF repositories via the Uyuni Web UI
1. In the Uyuni Web UI, navigate to Admin › Setup Wizard › Products and look for the product
you want to enable the PTF repository for.
2. Click [Show product’s channels] next to the product’s sync status.
3. You should see a popup listing mandatory and optional channels for the product.
4. In the optional channels list, look for channels starting with your SCC account name and with ptfs
in their name. For example, a123456-sles-15.3-ptfs-x86_64.
5. Select the channel using the checkbox next to its name and click [Confirm] to schedule the
sync.
Note that the product has to be installed to be able to add optional channels to it.
To install a PTF using the API, use the normal system.schedulePackageInstall
API method with the proxy package name.
When this regular update with the fix is released, an updated version of the PTF will also be released into
the account-specific PTF repository. The updated PTF will remove the strict dependencies and allow
updates to be installed again.
The replacement of the PTF with the maintenance update which includes the fix happens automatically
via a standard package update or patch installation.
To remove a PTF using the API, use the normal system.schedulePackageRemove
API method with the proxy package name.
104 / 172 18.4. Removing the patched version of a package | Uyuni 2024.03
19.1. Using spacewalk-report
While the command line tool spacewalk-report can be used to generate pre-configured reports,
with the introduction of the reporting database (see Specialized-guides › Large-deployments) it is also
possible to generate fully customized reports. This can be achieved by connecting any reporting tool that
supports the SQL language to the reporting database and extracting the data directly. For more
information about data availability and structure, see the reporting database schema documentation.
To get the report in CSV format, run this command at the command prompt on the server:
spacewalk-report <report_name>
• the report data is not updated in real time, but only by the execution of a scheduled task;
• data duplication has been removed, and columns that were previously considered "multival" now
contain multiple values separated by ;. This also means that the command line options
can be used to fall back to the old report, which is executed against the
application database.
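Because multival columns now carry several values joined by ;, downstream tooling must split them. A minimal sketch with made-up CSV data (the column names and values are hypothetical, not real spacewalk-report output):

```shell
csv='system_id,ip_addresses
1000010000,192.168.1.1;10.0.0.1'
# Split the second CSV column on ";" and count the values.
echo "$csv" | awk -F, 'NR > 1 { n = split($2, a, ";"); print $1 " has " n " addresses" }'
```

This prints one line per data row, here: 1000010000 has 2 addresses.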
MD5 Users (report: users-md5)
All users for all organizations using MD5 encrypted passwords, with their details and roles.
Available in the reporting database: yes. The column organization_id has been removed.
For more information about an individual report, run spacewalk-report with the option --info
or --list-fields-info and the report name. This shows the description and list of possible fields
in the report.
For further information on program invocation and options, see the spacewalk-report(8) man
page as well as the --help parameter of the spacewalk-report command.
salt-key -F master
On your client, open the /etc/salt/minion configuration file. Uncomment the following line
and enter the master’s fingerprint replacing the example fingerprint:
master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'
gpg --gen-key
2. At the prompts, select RSA as the key type, with a size of 2048 bits, and select an appropriate
expiry date for your key. Check the details for your new key, and type y to confirm.
3. At the prompts, enter a name and email address to be associated with your key. You can also add a
comment to help you identify the key, if desired. When you are happy with the user identity, type
O to confirm.
4. At the prompt, enter a passphrase to protect your key.
20.2. Signing Repository Metadata
5. The key should be automatically added to your keyring. You can check by listing the keys in your
keyring:
gpg --list-keys
6. Add the password for your keyring to the /etc/rhn/signing.conf configuration file, by
opening the file in your text editor and adding this line:
GPGPASS="password"
You can manage metadata signing on the command line using the mgr-sign-metadata-ctl
command.
3. You can check that your configuration is correct with this command:
mgr-sign-metadata-ctl check-config
4. Restart the services and schedule metadata regeneration to pick up the changes:
mgr-sign-metadata-ctl regen-metadata
You can also use the mgr-sign-metadata-ctl command to perform other tasks. Use
mgr-sign-metadata-ctl --help to see the complete list.
Repository metadata signing is a global option. When it is enabled, it is enabled on all software channels
on the server. This means that all clients connected to the server need to trust the new GPG key to be able
to install or update packages.
For more information about troubleshooting GPG keys, see Administration › Troubleshooting.
server.sync_source_packages = 1
spacewalk-service restart
Currently, this feature can only be enabled globally for all repositories. It is not possible to select
individual repositories for mirroring.
When this feature has been activated, the source packages become available in the Uyuni Web UI after
the next repository synchronization. They are shown as sources for the binary package, and can be
downloaded directly from the Web UI. Source packages cannot be installed on clients using the Web UI.
SCAP was created to provide a standardized approach to maintaining system security, and the standards
that are used continually change to meet the needs of the community and enterprise businesses. New
specifications are governed by NIST’s SCAP Release cycle to provide a consistent and repeatable revision
work flow. For more information, see:
• http://scap.nist.gov/timeline.html
• https://csrc.nist.gov/projects/security-content-automation-protocol
• https://www.open-scap.org/features/standards/
• https://ncp.nist.gov/repository?scap
Uyuni uses OpenSCAP to implement the SCAP specifications. OpenSCAP is an auditing tool that utilizes
the Extensible Configuration Checklist Description Format (XCCDF). XCCDF is a standard way of
expressing checklist content and defines security checklists. It also combines with other specifications
such as Common Platform Enumeration (CPE), Common Configuration Enumeration (CCE), and Open
Vulnerability and Assessment Language (OVAL), to create a SCAP-expressed checklist that can be
processed by SCAP-validated products.
OpenSCAP verifies the presence of patches by using content produced by the SUSE Security Team.
OpenSCAP checks system security configuration settings and examines systems for signs of compromise
by using rules based on standards and specifications. For more information about the SUSE Security
Team, see https://www.suse.com/support/security.
OpenSCAP auditing is not available on Salt clients that use the SSH contact
method.
Scanning clients can consume a lot of memory and compute power on the client
being scanned. For Red Hat clients, ensure you have at least 2 GB of RAM
available on each client to be scanned.
Install the OpenSCAP scanner and the SCAP Security Guide (content) packages on the client before you
begin. Depending on the operating system, these packages are included either on the base operating
system, or in the Uyuni Client Tools.
Other profiles, like the CIS profile, are community supplied and not officially
supported by SUSE.
For Non-SUSE operating systems the included profiles are community supplied.
They are not officially supported by SUSE.
We recommend you use templates to create your SCAP content files. If you
create and use your own custom content files, you do so at your own risk. If your
system becomes damaged through the use of custom content files, you might not
be supported by SUSE.
When you have created your content files, you need to transfer the file to the client. You can do this in
the same way as you move any other file, using physical storage media, or across a network with Salt (for
example, salt-cp or the Salt File Server), ftp or scp.
We recommend that you create a package to distribute content files to clients that you are managing with
Uyuni. Packages can be signed and verified to ensure their integrity. For more information, see
Administration › Custom-channels.
On RPM-based operating systems, use this command to determine the location of the available SCAP
files:
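The RPM command is analogous to the DEB command shown for DEB-based systems; rpm -ql lists the files of an installed package:

```shell
rpm -ql <scap-security-guide-package-name-from-table>
```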
On DEB-based operating systems, use this command to determine the location of the available SCAP
files:
dpkg -L <scap-security-guide-package-name-from-table>
When you have identified a SCAP content file that suits your needs, list the profiles available on the client:
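The listing that follows is typical oscap info output; a sketch of the invocation (the data stream path is an example and varies by product and package version):

```shell
oscap info /usr/share/xml/scap/ssg/content/ssg-sle15-ds.xml
```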
Stream: scap_org.open-scap_datastream_from_xccdf_ssg-sle15-xccdf-1.2.xml
Generated: (null)
Version: 1.2
Checklists:
Ref-Id: scap_org.open-scap_cref_ssg-sle15-xccdf-1.2.xml
Status: draft
Generated: 2021-03-24
Resolved: true
Profiles:
Title: CIS SUSE Linux Enterprise 15 Benchmark
Id: xccdf_org.ssgproject.content_profile_cis
Title: Standard System Security Profile for SUSE Linux Enterprise 15
Id: xccdf_org.ssgproject.content_profile_standard
Title: DISA STIG for SUSE Linux Enterprise 15
Id: xccdf_org.ssgproject.content_profile_stig
Referenced check files:
ssg-sle15-oval.xml
system: http://oval.mitre.org/XMLSchema/oval-definitions-5
ssg-sle15-ocil.xml
system: http://scap.nist.gov/schema/ocil/2
https://ftp.suse.com/pub/projects/security/oval/suse.linux.enterprise.15.xml
system: http://oval.mitre.org/XMLSchema/oval-definitions-5
Checks:
Ref-Id: scap_org.open-scap_cref_ssg-sle15-oval.xml
Ref-Id: scap_org.open-scap_cref_ssg-sle15-ocil.xml
Ref-Id: scap_org.open-scap_cref_ssg-sle15-cpe-oval.xml
Dictionaries:
Ref-Id: scap_org.open-scap_cref_ssg-sle15-cpe-dictionary.xml
Take a note of the file paths and profiles for performing the scan.
The XCCDF content file is validated before it is run on the remote system. If the
content file includes invalid arguments, the test fails.
#!/usr/bin/python3
import xmlrpc.client

# Log in to the Uyuni API and schedule an XCCDF scan on one client.
client = xmlrpc.client.ServerProxy('https://server.example.com/rpc/api')
key = client.auth.login('username', 'password')
# Replace <1000010001> with the system ID of the client to scan.
client.system.scap.scheduleXccdfScan(key, <1000010001>,
    '<path_to_xccdf_file.xml>',
    '--profile <profile_name>')
3. Run the script on the client you want to scan, from the command prompt.
To ensure that detailed information about scans is available, you need to enable it on the client. In the
Uyuni Web UI, navigate to Admin › Organizations and click on the organization the client is a part of.
Navigate to the Configuration tab, and check the Enable Upload of Detailed
SCAP Files option. When enabled, this generates an additional HTML file on every scan, which
contains extra information. The results show an extra line similar to this:
To retrieve scan information from the command line, use the spacewalk-report command:
spacewalk-report system-history-scap
spacewalk-report scap-scan
spacewalk-report scap-scan-results
You can also use the Uyuni API to view results, with the system.scap handler.
20.4.7. Remediation
Remediation Bash scripts and Ansible playbooks are provided in the same SCAP Security Guide
packages to harden the client systems. For example:
/usr/share/scap-security-guide/bash/sle15-script-cis.sh
/usr/share/scap-security-guide/bash/sle15-script-standard.sh
/usr/share/scap-security-guide/bash/sle15-script-stig.sh
/usr/share/scap-security-guide/ansible/sle15-playbook-cis.yml
/usr/share/scap-security-guide/ansible/sle15-playbook-standard.yml
/usr/share/scap-security-guide/ansible/sle15-playbook-stig.yml
You can run them using remote commands or with Ansible, after enabling Ansible in the client system.
Install the scap-security-guide package on all your target systems. For more information, see
Administration › Ansible-setup-control-node.
Packages, channels and scripts are different for each operating system and distribution. Examples are
listed in the Example remediation Bash scripts section.
1. From System › Overview tab, select your instance. Then in Details › Remote Commands, write a
Bash script such as:
#!/bin/bash
chmod +x -R /usr/share/scap-security-guide/bash
/usr/share/scap-security-guide/bash/sle15-script-stig.sh
2. Click [Schedule] .
Folder and script names change between distribution and version. Examples are
listed in the Example remediation Bash scripts section.
20.4.7.1.2. Run the bash script using System Set Manager on multiple systems
1. When a system group has been created, click System Groups, and select Use in SSM from the
table.
2. From the System Set Manager, under Misc › Remote Command, write a Bash script such
as:
#!/bin/bash
chmod +x -R /usr/share/scap-security-guide/bash
/usr/share/scap-security-guide/bash/sle15-script-stig.sh
3. Click [Schedule] .
Package
scap-security-guide
Channels
• SLE12: SLES12 Updates
• SLE15: SLES15 Module Basesystem Updates
Bash scripts
opensuse-script-standard.sh
sle12-script-standard.sh
sle12-script-stig.sh
sle15-script-cis.sh
sle15-script-standard.sh
sle15-script-stig.sh
20.4.7.2.2. Red Hat Enterprise Linux and CentOS Bash script data
Package
scap-security-guide-redhat
Channels
• SUSE Manager Tools
Bash scripts
centos7-script-pci-dss.sh
centos7-script-standard.sh
centos8-script-pci-dss.sh
centos8-script-standard.sh
fedora-script-ospp.sh
fedora-script-pci-dss.sh
fedora-script-standard.sh
ol7-script-anssi_nt28_enhanced.sh
ol7-script-anssi_nt28_high.sh
ol7-script-anssi_nt28_intermediary.sh
ol7-script-anssi_nt28_minimal.sh
ol7-script-cjis.sh
ol7-script-cui.sh
ol7-script-e8.sh
ol7-script-hipaa.sh
ol7-script-ospp.sh
ol7-script-pci-dss.sh
ol7-script-sap.sh
ol7-script-standard.sh
ol7-script-stig.sh
ol8-script-anssi_bp28_enhanced.sh
ol8-script-anssi_bp28_high.sh
ol8-script-anssi_bp28_intermediary.sh
ol8-script-anssi_bp28_minimal.sh
ol8-script-cjis.sh
ol8-script-cui.sh
ol8-script-e8.sh
ol8-script-hipaa.sh
ol8-script-ospp.sh
ol8-script-pci-dss.sh
ol8-script-standard.sh
rhel7-script-anssi_nt28_enhanced.sh
rhel7-script-anssi_nt28_high.sh
rhel7-script-anssi_nt28_intermediary.sh
rhel7-script-anssi_nt28_minimal.sh
rhel7-script-C2S.sh
rhel7-script-cis.sh
rhel7-script-cjis.sh
rhel7-script-cui.sh
rhel7-script-e8.sh
rhel7-script-hipaa.sh
rhel7-script-ncp.sh
rhel7-script-ospp.sh
rhel7-script-pci-dss.sh
rhel7-script-rhelh-stig.sh
rhel7-script-rhelh-vpp.sh
rhel7-script-rht-ccp.sh
rhel7-script-standard.sh
rhel7-script-stig_gui.sh
rhel7-script-stig.sh
rhel8-script-anssi_bp28_enhanced.sh
rhel8-script-anssi_bp28_high.sh
rhel8-script-anssi_bp28_intermediary.sh
rhel8-script-anssi_bp28_minimal.sh
rhel8-script-cis.sh
rhel8-script-cjis.sh
rhel8-script-cui.sh
rhel8-script-e8.sh
rhel8-script-hipaa.sh
rhel8-script-ism_o.sh
rhel8-script-ospp.sh
rhel8-script-pci-dss.sh
rhel8-script-rhelh-stig.sh
rhel8-script-rhelh-vpp.sh
rhel8-script-rht-ccp.sh
rhel8-script-standard.sh
rhel8-script-stig_gui.sh
rhel8-script-stig.sh
rhel9-script-pci-dss.sh
rhosp10-script-cui.sh
rhosp10-script-stig.sh
rhosp13-script-stig.sh
rhv4-script-pci-dss.sh
rhv4-script-rhvh-stig.sh
rhv4-script-rhvh-vpp.sh
sl7-script-pci-dss.sh
sl7-script-standard.sh
Package
scap-security-guide-ubuntu
Channels
• SUSE Manager Tools
Bash scripts
ubuntu1804-script-anssi_np_nt28_average.sh
ubuntu1804-script-anssi_np_nt28_high.sh
ubuntu1804-script-anssi_np_nt28_minimal.sh
ubuntu1804-script-anssi_np_nt28_restrictive.sh
ubuntu1804-script-cis.sh
ubuntu1804-script-standard.sh
ubuntu2004-script-standard.sh
Package
scap-security-guide-debian
Channels
• SUSE Manager Tools
Bash scripts
debian11-script-anssi_np_nt28_average.sh
debian11-script-anssi_np_nt28_high.sh
debian11-script-anssi_np_nt28_minimal.sh
debian11-script-anssi_np_nt28_restrictive.sh
debian11-script-standard.sh
debian12-script-anssi_np_nt28_average.sh
debian12-script-anssi_np_nt28_high.sh
debian12-script-anssi_np_nt28_minimal.sh
debian12-script-anssi_np_nt28_restrictive.sh
debian12-script-standard.sh
20.5. Auditing
In Uyuni, you can keep track of your clients through a series of auditing tasks. You can check that your
clients are up to date with all public security patches (CVEs), perform subscription matching, and use
OpenSCAP to check for specification compliance.
You must apply CVE patches to your clients as soon as they become available.
Each CVE contains an identification number, a description of the vulnerability, and links to further
information. CVE identification numbers use the form CVE-YEAR-XXXX.
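The identifier format can be checked mechanically; a small sketch (note that the sequence part of modern CVE identifiers may have four or more digits):

```shell
cve="CVE-2024-1234"
# Validate the CVE-YEAR-XXXX pattern.
if echo "$cve" | grep -Eq '^CVE-[0-9]{4}-[0-9]{4,}$'; then
    echo "valid"
else
    echo "invalid"
fi
```

This prints valid for a well-formed identifier such as CVE-2024-1234.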
In the Uyuni Web UI, navigate to Audit › CVE Audit to see a list of all clients and their current patch
status.
By default, the CVE data is updated at 23:00 every day. We recommend that before you begin a CVE
audit, you refresh the data to ensure you have the latest patches.
2. To check the patch status for a particular CVE, type the CVE identifier in the CVE Number field.
3. Select the patch statuses you want to look for, or leave all statuses checked to look for all.
4. Click [Audit Servers] to check all systems, or click [Audit Images] to check all
images.
For more information about the patch status icons used on this page, see Reference › Audit.
For each system, the Next Action column provides information about what you need to do to address
vulnerabilities. If applicable, a list of candidate channels or patches is also given. You can also assign
systems to a System Set for further batch processing.
You can use the Uyuni API to verify the patch status of your clients. Use the
audit.listSystemsByPatchStatus API method. For more information about this method, see
the Uyuni API Guide.
Relevant patch
A patch known by Uyuni in a relevant channel.
Relevant channel
A channel managed by Uyuni, which is either: assigned to the system; the original of a cloned channel
that is assigned to the system; a channel linked to a product that is installed on the system; or a past or
future service pack channel for the system.
Because of the definitions used within Uyuni, CVE audit results might be
Every client that uses SSL to register to the Uyuni Server checks that it is connecting to the right server
by validating against a server certificate. This process is called an SSL handshake.
During the SSL handshake, the client checks that the hostname in the server certificate matches what it
expects. The client also needs to check if the server certificate is trusted.
Certificate authority (CA) certificates are used to sign other certificates. All certificates must be
signed by a CA in order for them to be considered valid, and for clients to be able to
successfully match against them.
In order for SSL authentication to work correctly, the client must trust the root CA. This means that the
root CA must be installed on every client.
The default method of SSL authentication is for Uyuni to use self-signed certificates. In this case, Uyuni
has generated all the certificates, and the root CA has signed the server certificate directly.
An alternative method is to use an intermediate CA. In this case, the root CA signs the intermediate CA.
The intermediate CA can then sign any number of other intermediate CAs, and the final one signs the
server certificate. This is referred to as a chained certificate.
If you are using intermediate CAs in a chained certificate, the root CA is installed on the client, and the
server certificate is installed on the server. During the SSL handshake, clients must be able to verify the
entire chain of intermediate certificates between the root CA and the server certificate, so they must be
able to access all the intermediate certificates.
There are two main ways of achieving this. In older versions of Uyuni, all the intermediate CAs were
installed on the client by default. Alternatively, you can configure the services on the server to provide
them to the client. In this case, during the SSL handshake, the server presents the server certificate
together with all the intermediate CAs. This mechanism is now the default configuration.
By default, Uyuni uses a self-signed certificate without intermediate CAs. For additional security, you can
arrange for a third party CA to sign your certificates. Third party CAs perform checks to ensure that the
information contained in the certificate is correct. They usually charge an annual fee for this service.
Using a third party CA makes certificates harder to spoof, and provides additional protection for your
installation. If you have certificates signed by a third party CA, you can import them to your Uyuni
installation.
If the certificates are provided by a third party, such as your own or an external PKI, step 1 can be
skipped.
This section covers how to create or re-create your self-signed certificates on a new or existing installation.
The host name of the SSL keys and certificates must match the fully qualified host name of the machine
you deploy them on.
Ensure that the set-cname parameter is the fully qualified domain name of your Uyuni Server.
You can use the set-cname parameter multiple times if you require multiple aliases.
The private key and the server certificate can be found in the directory
/root/ssl-build/susemanager/ as server.key and server.crt. The name of the last
directory depends on the hostname used with the --set-hostname option.
Be careful when you need to replace the Root CA. It is possible to break the
trust chain between the server and clients. If that happens, you need an
administrative user to log in to every client and deploy the CA directly.
mv /root/ssl-build /root/old-ssl-build
Ensure that the set-cname parameter is the fully qualified domain name of your Uyuni Server.
You can use the set-cname parameter multiple times if you require multiple aliases.
You need to generate a server certificate also for each proxy, using their host names and cnames.
• A certificate authority (CA) SSL public certificate. If you are using a CA chain, all intermediate
CAs must also be available.
• An SSL server private key
• An SSL server certificate
The host name of the SSL server certificate must match the fully qualified host name of the machine you
deploy them on. You can set the host names in the X509v3 Subject Alternative Name
section of the certificate. You can also list multiple host names if your environment requires it. Supported
key types are RSA and EC (Elliptic Curve).
Third-party authorities commonly use intermediate CAs to sign requested server certificates. In this case,
all CAs in the chain are required to be available. If there is no extra parameter or option available to
specify intermediate CAs, take care that all CAs (Root CA and intermediate CAs) are stored in one file.
3. At the command prompt, point the SSL environment variables to the certificate file locations:
export CA_CERT=<path_to_CA_certificates_file>
export SERVER_KEY=<path_to_web_server_key>
export SERVER_CERT=<path_to_web_server_certificate>
yast susemanager_setup
When you are prompted for certificate details during setup, fill in random values. The values are
overridden by the values you specified at the command prompt.
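Before pointing the environment variables at your files, it can be useful to confirm that the server key and certificate actually belong together. This is a generic OpenSSL check, not part of the documented setup procedure; the throwaway key pair below stands in for your real files:

```shell
# Generate a throwaway key and certificate to illustrate the check
# (uyuni.example.com is a placeholder host name):
dir=$(mktemp -d) && cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=uyuni.example.com" -keyout server.key -out server.crt

# The RSA modulus of the certificate and of the key must be identical
cert_mod=$(openssl x509 -noout -modulus -in server.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in server.key | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "key and certificate match"
# prints "key and certificate match"
```

If the digests differ, the certificate was not issued for that key, and the setup will fail later with less obvious errors.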
configure-proxy.sh
Use the same certificate authority to sign all server certificates for servers and
proxies. Certificates signed with different CAs do not match.
Intermediate CAs can either be available in the file which is specified with --root-ca-file or
specified as extra options with --intermediate-ca-file. The --intermediate-ca-file
option can be specified multiple times. This command performs a number of checks on the provided files
to ensure they are valid and can be used for the requested use case.
spacewalk-service stop
systemctl restart postgresql.service
spacewalk-service start
If you are using a proxy, you need to generate a server certificate RPM for each proxy, using their host
names and cnames. You should also use mgr-ssl-cert-setup on a Uyuni Proxy to replace the
certificates. Because the Uyuni Proxy does not have a PostgreSQL database,
spacewalk-service restart is sufficient.
If the Root CA was changed, it needs to be deployed to all the clients connected to Uyuni.
2. Check all your Salt Clients to add them to the system set manager.
3. Navigate to Systems › System Set Manager › Overview.
Procedure
1. Create a new configuration file in /etc/apache2/conf.d/<filename>.conf, for
example /etc/apache2/conf.d/zz-spacewalk-www-custom.conf.
2. Add the line: Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
3. Restart Apache with systemctl restart apache2
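For reference, the finished configuration file from steps 1 and 2 might look like this. The IfModule guard is an optional addition, not part of the documented steps:

```apache
# /etc/apache2/conf.d/zz-spacewalk-www-custom.conf (example file name)
<IfModule mod_headers.c>
    Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
</IfModule>
```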
Procedure
1. Create a new configuration file in /etc/apache2/conf.d/<filename>.conf, for
example /etc/apache2/conf.d/zz-spacewalk-proxy-custom.conf.
2. Add the line: Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains"
3. Restart Apache with systemctl restart apache2
When naming the new config file <filename>.conf, make sure it is loaded
When HSTS is enabled while using the default SSL certificate generated by
Uyuni or a self-signed certificate, browsers will refuse to connect with HTTPS
unless the CA used to sign such certificates is trusted by the browser. If you are
using the SSL certificate generated by Uyuni, you can trust it by importing the
file located at http://<SERVER-HOSTNAME>/pub/RHN-ORG-TRUSTED-SSL-CERT to the
browsers of all users.
The Subscriptions Report tab gives information about current and expiring subscriptions.
The Unmatched Products Report tab gives a list of clients that do not have a current
subscription. This includes clients that could not be matched, or that are not currently registered with
Uyuni. The report includes product names and the number of systems that remain unmatched.
The Pins tab allows you to associate individual clients to the relevant subscription. This is especially
useful if the subscription manager is not automatically associating clients to subscriptions successfully.
The Messages tab shows all the messages generated by the subscription matcher during the matching
process. They provide information to help you understand the results and improve the matching.
You can also download the reports in .csv format, or access them from the command prompt in the
/var/lib/spacewalk/subscription-matcher/ directory.
By default, the subscription matcher runs daily, at midnight. To change this, navigate to Admin › Task
Schedules and click gatherer-matcher-default. Change the schedule as required, and click
[Update Schedule] .
Because the report can only match current clients with current subscriptions, you might find that the
matches change over time. The same client does not always match the same subscription. This can be due
to new clients being registered or unregistered, or because of the addition or expiration of subscriptions.
The subscription matcher automatically attempts to reduce the number of unmatched products, limited by
the terms and conditions of the subscriptions in your account. However, if you have incomplete hardware
information, unknown virtual machine host assignments, or clients running in unknown public clouds, the
matcher might show that you do not have enough subscriptions available. Always ensure you have
complete data about your clients included in Uyuni, to help ensure accuracy.
The subscription matcher does not always match clients and subscriptions
accurately. It is not intended to be a replacement for auditing.
However, the matcher does not always respect a pin. It depends on the subscription being available, and
whether or not the subscription can be applied to the client. Additionally, pins are ignored if they result in
a match that violates the terms and conditions of the subscription, or if the matcher detects a more
accurate match if the pin is ignored.
To add a new pin, click [Add a Pin] , and select the client to pin.
Click SUSE Manager Schedules › Schedule name to open the Schedule Name › Basic Schedule
Details. You can disable it or change its frequency.
Only disable or delete a schedule if you are absolutely certain this is necessary as
they are essential for Uyuni to work properly.
If you click a bunch name, a list of runs of that bunch type and their status is displayed.
Clicking the start time links takes you back to the Schedule Name › Basic Schedule Details.
auto-errata-default
Schedules auto errata updates as necessary.
channel-repodata-default
(Re)generates repository metadata files.
cleanup-data-default
Cleans up stale package change log and monitoring time series data from the database.
clear-taskologs-default
Clears task engine (taskomatic) history data older than a specified number of days, depending on the
job type, from the database.
cobbler-sync-default
Synchronizes distribution and profile data from Uyuni to Cobbler. For more information about
autoinstallation powered by Cobbler, see Client-configuration › Autoinst-intro.
compare-configs-default
Compares configuration files as stored in configuration channels with the files stored on all
configuration-enabled servers. To review comparisons, click the Systems tab and select the system of
interest. Go to Configuration › Compare Files. For more information, see Reference › Systems.
cve-server-channels-default
Updates internal pre-computed CVE data that is used to display results on the Audit › CVE Audit
page. Search results in the Audit › CVE Audit page are updated to the last run of this schedule. For
more information, see Reference › Audit.
daily-status-default
Sends daily report e-mails to relevant addresses. For more information about configuring notifications
for specific users, see Reference › Users.
errata-cache-default
Updates internal patch cache database tables, which are used to look up packages that need updates
for each server. Also, this sends notification emails to users that might be interested in certain patches.
For more information about patches, see Reference › Patches.
errata-queue-default
Queues automatic updates (patches) for servers that are configured to receive them.
gatherer-matcher-default
Gathers virtual host data by running the Virtual Host Gatherer configured in Virtual Host Managers.
After updated data is available, the Subscription Matcher job is run.
kickstart-cleanup-default
Cleans up stale Kickstart session data.
kickstartfile-sync-default
Generates Cobbler files corresponding to Kickstart profiles created by the configuration wizard.
mgr-forward-registration-default
Synchronizes client registration data with SUSE Customer Center. By default, new, changed, or
deleted client data is forwarded. To disable synchronization, set the following in /etc/rhn/rhn.conf:
server.susemanager.forward_registration = 0
mgr-sync-refresh-default
Synchronizes with SUSE Customer Center (mgr-sync-refresh). By default, all custom channels
are also synchronized as part of this task. For more information about custom channel
synchronization, see Administration › Custom-channels.
minion-action-chain-cleanup-default
Cleans up outdated action chain data.
minion-action-cleanup-default
Deletes stale client action data from the file system. First it tries to complete any possibly unfinished
actions by looking up the corresponding results stored in the Salt job cache. An unfinished action can
occur if the server has missed the results of the action. For successfully completed actions it removes
artifacts such as executed script files.
minion-checkin-default
Performs a regular check-in on clients.
notifications-cleanup-default
Cleans up expired notification messages.
package-cleanup-default
Deletes stale package files from the file system.
reboot-action-cleanup-default
Any reboot actions pending for more than six hours are marked as failed and associated data is
cleaned up from the database. For more information on scheduling reboot actions, see
Reference › Systems.
sandbox-cleanup-default
Cleans up Sandbox configuration files and channels that are older than the sandbox_lifetime
configuration parameter (3 days by default). Sandbox files are those imported from systems or files
under development. For more information, see Reference › Systems.
session-cleanup-default
Cleans up stale Web interface sessions, typically data that is temporarily stored when a user logs in
and then closes the browser before logging out.
ssh-service-default
Prompts clients to check in with Uyuni via SSH if they are configured with an SSH Push contact
method. Also resumes action chains after a reboot.
system-profile-refresh-default
Runs a hardware refresh on all systems. This happens only monthly and can increase load on the
Uyuni Server. The job uses Specialized-guides › Salt. For tuning the batch size, see
Specialized-guides › Large-deployments.
token-cleanup-default
Deletes expired repository tokens that are used by Salt clients to download packages and metadata.
update-payg-default
Collects authentication data from configured PAYG cloud instances.
update-reporting-default
Updates the local Reporting Database.
update-reporting-hub-default
Collects all reporting data from peripheral Uyuni Servers and updates the Hub Reporting Database.
uuid-cleanup-default
Cleans up outdated UUID records.
This configuration option is in the /etc/rhn/rhn.conf configuration file. The parameter defaults to
20. Changing this value to 0 will provide an unlimited number of entries.
java.max_changelog_entries = 20
If you set this parameter, it comes into effect only for new packages when they are synchronized.
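As a sketch of the change, the commands below edit a scratch copy of the file; in practice you would edit /etc/rhn/rhn.conf itself, and the value 50 is only an example:

```shell
# Work on a temporary stand-in for /etc/rhn/rhn.conf
conf=$(mktemp)
echo 'java.max_changelog_entries = 20' > "$conf"

# Raise the limit from the default 20 to an example value of 50
sed -i 's/^java\.max_changelog_entries.*/java.max_changelog_entries = 50/' "$conf"

cat "$conf"
# prints "java.max_changelog_entries = 50"
```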
You might want to delete and regenerate the cached data to remove older data.
Deleting and regenerating cached data can take a long time. Depending on the
number of channels you have and the amount of data to be deleted, it can
potentially take several hours. The task is run in the background by Taskomatic,
so you can continue to use Uyuni while the operation completes, however you
should expect some performance loss.
You can delete and request a regeneration of cached data from the command line:
spacewalk-sql -i
The Users menu is only available if you are logged in with the Uyuni
administrator account.
To manage Uyuni users, navigate to Users › User List › All to see all users in your Uyuni Server. Each
user in the list shows the username, real name, assigned roles, the date the user last signed in, and the
current status of the user. Click [Create User] to create a new user account. Click the username to
go to the User Details page.
To add new users to your organization, click [Create User] , complete the details for the new user,
and click [Create Login] .
Users can deactivate their own accounts. However, if users have an administrator role, the role must be
removed before the account can be deactivated.
Deactivated users cannot log in to the Uyuni Web UI or schedule any actions. Actions scheduled by a user
prior to their deactivation remain in the action queue. Deactivated users can be reactivated by Uyuni
administrators.
To change a user’s administrator roles, except for the Uyuni Administrator role, navigate to Users › User
List › All, select the user to change, and check or uncheck the administrator roles as required.
To change a user’s Uyuni Administrator role, navigate to Admin › Users and check or uncheck Uyuni
Admin? as required.
To assign a user to a system group, navigate to Users › User List, click the username to edit, and go to
the System Groups tab. Check the groups to assign, and click [Update Defaults].
You can also select one or more default system groups for a user. When the user registers a new client, it
is assigned to the chosen system group by default. This allows the user to immediately access the newly
registered client.
To manage external groups, navigate to Users › System Group Configuration, and go to the
External Authentication tab. Click [Create External Group] to create a new external
group. Give the group a name, and assign it to the appropriate system group.
To see the individual clients a user can administer, navigate to Users › User List, click the username to
edit, and go to the Systems tab. To carry out bulk tasks, you can select clients from the list to add them
to the system set manager.
For more information about the system set manager, see Client-configuration › System-set-manager.
To subscribe a user to a channel, navigate to Users › User List, click the username to edit, and go to the
Channel Permissions › Subscription tab. Check the channels to assign, and click [Update
Permissions].
To grant a user channel management permissions, navigate to Users › User List, click the username to
edit, and go to the Channel Permissions › Management tab. Check the channels to assign, and click
[Update Permissions].
Some channels in the list might not be subscribable. This is usually because of the user's administrator
status, or the channel's global settings.
The default language is set in the rhn.conf configuration file. To change the default language, open the
/etc/rhn/rhn.conf file and add or edit this line:
web.locale = <LANGCODE>
ca Catalan
de German
es Spanish
fr French
gu Gujarati
hi Hindi
it Italian
ja Japanese
ko Korean
pa Punjabi
pt Portuguese
ru Russian
ta Tamil
You can change the default theme in the rhn.conf configuration file. To change the default theme,
open the /etc/rhn/rhn.conf file and add or edit this line:
web.theme_default = <THEME>
• pyOpenSSL
• rhnlib
• libxml2-python
• spacewalk-koan
• Check that the tools software channel related to the base channel in your autoinstallation profile is
available to your organization and your user.
• Check that the tools channel is available to your Uyuni Server as a child channel.
• Check that the required packages and any dependencies are available in the associated channels.
2. Create the bootstrap repository, using the appropriate repository name as the product label:
If you do not want to create bootstrap repositories manually, you can check whether LTSS is available for
the product and bootstrap repository you need.
This is caused by the new, cloned, system having the same machine ID as an existing, registered, system.
You can adjust this manually to correct the error and register the cloned system successfully.
For example, conflicting packages with higher version numbers could be included into the bootstrap
repository. Such packages (for example, python3-zmq or zeromq) may corrupt the creation of the
bootstrap repository or cause issues during bootstrap of the client.
When the custom channel (for example, an EPEL channel) is added below the parent vendor channel,
issues with conflicting packages cannot be solved directly. The way to solve this is to separate the
custom channel from the vendor channel: the custom channel needs to be created in a separate tree. If
the custom channel needs to be delivered as a child, such an environment can be created using
Content Lifecycle Management (CLM). Sources in a CLM project can be added from different
trees. Using this approach, the custom channel stays below the parent within the built environment.
145 / 172 26.3. Troubleshooting Clients Cloned Salt Clients | Uyuni 2024.03
However, the vendor channel tree stays without the custom channel and the bootstrap repository.
Registering clients then works correctly.
When the custom channel with the conflicting packages (salt, zeromq, and so on) is created as a child
channel, the following steps can help to avoid the issue:
26.6. Troubleshooting Disabling the FQDNS grain
To prevent this problem, you can disable the FQDNS grain with a Salt flag. If you disable the grain, you
can use a network module to provide FQDNS services, without the risk of the client becoming
unresponsive.
This only applies to older Salt clients. If you registered your Salt client recently,
the FQDNS grain is disabled by default.
On the Uyuni Server, at the command prompt, use this command to disable the FQDNS grain:
26.7. Troubleshooting Disk Space
This command restarts each client and generates Salt events that the server needs to process. If you have a
large number of clients, you can execute the command in batch mode instead:
Wait for the batch command to finish executing. Do not interrupt the process with Ctrl+C.
You can recover disk space by removing unused software channels. For instructions on how to delete
vendor channels, see Administration › Channel-management. For instructions on how to delete custom
channels, see Administration › Custom-channels.
You can also check how often your custom channels are synchronized. For instructions on how to deal
with custom channel synchronization, see Administration › Custom-channels.
You can also recover disk space by cleaning up unused activation keys, content lifecycle projects, and
client registrations. You can also remove redundant database entries:
This occurs because the synchronization process needs to access third-party repositories that provide
packages for non-SUSE clients, and not just the SUSE Customer Center. When the Uyuni Server attempts
to reach these repositories to check that they are valid, the firewall drops the requests, and the
synchronization continues to wait for the response until it times out.
If this occurs, the synchronization takes a long time before it fails, and your non-SUSE products are not
shown in the product list.
The simplest method is to configure your firewall to allow access to the URLs required by non-SUSE
repositories. This allows the synchronization process to reach the URLs and complete successfully.
If allowing external traffic is not possible, configure your firewall to REJECT requests from Uyuni
instead of DROP. This rejects requests to third-party URLs, so that the synchronization fails early rather
than times out, and the products are not shown in the list.
If you do not have configuration access to the firewall, you can consider setting up a separate firewall on
the Uyuni Server instead.
26.9. Troubleshooting high sync times between Uyuni Server and Proxy
over WAN connections
Depending on what changes are executed in the Web UI or via an API call to distribution or system
settings, the cobbler sync command may be required to transfer files from the Uyuni Server to Uyuni
Proxy systems. To accomplish this, Cobbler uses a list of proxies specified in /etc/cobbler/settings.
Due to its design, cobbler sync is not able to sync only the changed or recently added files.
Instead, executing cobbler sync triggers a full sync of the /srv/tftpboot directory to all
proxies configured in /etc/cobbler/settings. The duration is also influenced by the latency of the
WAN connection between the involved systems.
The process of syncing may take a considerable amount of time to finish according to the logs in
/var/log/cobbler/.
The transfer amount was roughly 1.8 GB. The transfer took almost 30 minutes.
By comparison, copying a single big file of the same size as /srv/tftpboot completes within several
minutes.
Switching to an rsync-based approach to copy files between Uyuni Server and Proxy may help to
reduce the transfer and wait times.
The script does not accept command line options. Before running the script, you need to edit it manually
and set the SUMAHOSTNAME, SUMAIP, and SUMAPROXY1 variables correctly for it to work.
There is no support available for individual adjustments of the script. The script
and the comments inside aim to provide an overview of the process and steps to
be taken into consideration. If further help is required, contact SUSE Consulting.
The proposed approach using the script is beneficial in the following environment:
#proxies:
# - "sumaproxy.sumaproxy.test"
# - "sumaproxy2.sumaproxy.test"
1. Take a dump of the TCP traffic between Uyuni and the involved systems.
◦ On SUSE Manager Server:
◦ This will only capture a packet size of 200 bytes, which is sufficient to run an analysis.
◦ Adjust ethX to the network interface Uyuni uses to communicate with the proxy.
◦ Finally, SSH communication is not captured, to reduce the number of packets even
further.
2. Start a cobbler sync.
◦ To force a sync, delete the Cobbler json cache file first and then issue cobbler sync:
rm /var/lib/cobbler/pxe_cache.json
cobbler sync
Ignore ports 4505 and 4506, as these are used for Salt communication.
Analysis of the TCP dumps showed that the transfer of small files with a size of approximately
1800 bytes from the Uyuni Server to the Proxy took around 0.3 seconds each.
While there were not many big files, the high number of smaller files resulted in a high number of
established connections, as a new TCP connection is created for every single transferred file.
Therefore, knowing the minimal transfer time and the number of connections needed (approximately
5000 in this example) gives an estimate of the overall transfer time: 5000 * 0.3 / 60 = 25 minutes.
26.10. Troubleshooting Inactive clients
• The client is not entitled to any Uyuni service. If the client remains unentitled for 180 days
(6 months), it is removed.
• The client is behind a firewall that does not allow HTTPS connections.
• The client is behind a proxy that is misconfigured.
• The client is communicating with a different Uyuni Server, or the connection has been
misconfigured.
• The client is not in a network that can communicate with the Uyuni Server.
• A firewall is blocking traffic between the client and the Uyuni Server.
• Taskomatic is misconfigured.
For more information about client connections to the server, see Client-configuration › Contact-
methods-intro.
Cache errors can lead to synchronization failing with a variety of errors, but the error message usually
reports something like this:
You can resolve this by deleting the cache on the ISS Master and the ISS Slave, so that synchronization
completes successfully.
rm -rf /var/cache/rhn/xml-*
rcapache2 restart
3. On the ISS Master, at the command prompt, as root, delete the cache file for the Slave:
rm -rf /var/cache/rhn/satsync/*
rcapache2 restart
To adjust the value, you need to make the change in both rhn.conf and web.xml. Ensure you set the
value in seconds in /etc/rhn/rhn.conf, and in minutes in web.xml. The two values must represent
the same amount of time.
For example, to change the timeout value to one hour, set the value in rhn.conf to 3600 seconds, and
the value in web.xml to 60 minutes.
spacewalk-service stop
2. Open /etc/rhn/rhn.conf and add or edit this line to include the new timeout value in
seconds:
web.session_database_lifetime = <Timeout_Value_in_Seconds>
<session-timeout>Timeout_Value_in_Minutes</session-timeout>
spacewalk-service start
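Because the two settings use different units, a quick arithmetic check helps confirm they describe the same duration. This uses the one-hour example above:

```shell
# rhn.conf stores the lifetime in seconds; web.xml stores it in minutes
rhn_conf_seconds=3600     # web.session_database_lifetime
web_xml_minutes=60        # <session-timeout>
[ $((web_xml_minutes * 60)) -eq "$rhn_conf_seconds" ] && echo "timeouts agree"
# prints "timeouts agree"
```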
To increase the connection timeout for SMTP server communication, the following parameters can be set
in /etc/rhn/rhn.conf:
[Service]
Environment=TMPDIR=/var/tmp
When you are setting up a separate file system, edit /etc/fstab and remove the /var/lib/pgsql
subvolume. Reboot the server to pick up the changes.
To get more information about an upgrade problem, check the migration log file. The log file is located at
/var/log/rhn/migration.log on the system you are upgrading.
java.notifications_lifetime = 30
java.notifications_type_disabled = OnboardingFailed,ChannelSyncFailed,\
ChannelSyncFinished,CreateBootstrapRepoFailed,StateApplyFailed,\
PaygAuthenticationUpdateFailed,EndOfLifePeriod,SubscriptionWarning
If this occurs, OSAD clients cannot contact the SUSE Manager Server, and jabberd takes an excessive
amount of time to respond on port 5222.
This fix is only required if you have more than 8192 clients connected using
OSAD. In this case, we recommend you consider using Salt clients instead. For
more information about tuning large scale installations, see Specialized-guides ›
Salt.
You can increase the number of files available to jabberd by editing the jabberd local configuration file.
By default, the file is located at /etc/systemd/system/jabberd.service.d/override.conf.
[Service]
LimitNOFILE=<soft_limit>:<hard_limit>
The value you choose varies depending on your environment. For example, if you have 9500
clients, increase the soft value by 100 to 9600, and the hard value by 1000 to 10500:
[Service]
LimitNOFILE=
LimitNOFILE=9600:10500
The default editor for systemctl files is vim. To save the file and exit, press Esc
to enter normal mode, type :wq and press Enter.
Ensure you also update the max_fds parameter in /etc/jabberd/c2s.xml. For example:
<max_fds>10500</max_fds>
The soft file limit is the maximum number of open files for a single process. In Uyuni, the highest
consuming process is c2s, which opens one connection per client. 100 additional files are added here to
accommodate any non-connection files that c2s requires to work correctly. The hard limit applies to
all processes belonging to jabberd, and also accounts for open files from the router, c2s, and sm processes.
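The sizing rule can be expressed as simple arithmetic. The headroom values below mirror the example above and are not prescriptive:

```shell
# Derive the limits for 9500 clients, following the sizing rule described above
clients=9500
soft=$((clients + 100))   # one c2s connection per client, plus ~100 working files
hard=$((soft + 900))      # extra headroom for the router, c2s and sm processes
echo "LimitNOFILE=${soft}:${hard}"
# prints "LimitNOFILE=9600:10500"
```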
On the client, check package locks and exclude lists to determine if packages are locked or excluded:
• On SUSE Linux Enterprise and openSUSE, use the zypper locks command.
For a container proxy running with podman, follow this procedure on the host machine:
To overcome this problem, a new feature has been introduced in Salt to avoid making a separate
synchronous Salt call.
To use this feature, you can add a configuration parameter to the client configuration, on clients that
support it.
To make this process easier, you can use the mgr_start_event_grains.sls helper Salt state.
This only applies to already registered clients. If you registered your Salt client
recently, this config parameter is added by default.
On the Uyuni Server, at the command prompt, use this command to enable the
start_event_grains configuration helper:
This command adds the required configuration into the client’s configuration file, and applies it when the
client is restarted. If you have a large number of clients, you can execute the command in batch mode
instead:
156 / 172 26.22. Troubleshooting Passing Grains to a Start Event | Uyuni 2024.03
26.24. Troubleshooting Registering Cloned Clients
To correct this behavior, specify additional FQDNs as grains in the client configuration file on the proxy:
grains:
susemanager:
custom_fqdns:
- name.one
- name.two
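A hedged sketch of putting this in place: write the grains to a minion configuration drop-in on the client. The drop-in directory /etc/salt/minion.d is an assumption (with the Salt bundle, the equivalent under /etc/venv-salt-minion would apply), and the directory is parameterized so the sketch can be rehearsed against a scratch directory first:

```shell
# Write the custom_fqdns grains as a Salt minion configuration drop-in.
# Restart the Salt client service afterwards so the grains are re-read.
write_custom_fqdns() {
    local conf_dir="${1:-/etc/salt/minion.d}"
    cat > "$conf_dir/custom_fqdns.conf" <<'EOF'
grains:
  susemanager:
    custom_fqdns:
      - name.one
      - name.two
EOF
}
```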
While cloning VMs can save you a lot of time, the duplicated identifying information on the disk can
sometimes cause problems.
If you have a client that is already registered, and you then create and register a clone of that
client, you probably want Uyuni to register them as two separate clients. However, if the machine ID of
the original client and the clone is the same, Uyuni registers both clients as one system, and the
existing client data is overwritten with that of the clone.
This can be resolved by changing the machine ID of the clone, so that Uyuni recognizes them as two
different clients.
Each step of this procedure is performed on the cloned client. This procedure
does not manipulate the original client, which remains registered to Uyuni.
rm /etc/machine-id
rm /var/lib/dbus/machine-id
rm /var/lib/zypp/AnonymousUniqueId
dbus-uuidgen --ensure
systemd-machine-id-setup
3. For distributions that do not support systemd: As root, generate a machine ID from dbus:
rm /var/lib/dbus/machine-id
rm /var/lib/zypp/AnonymousUniqueId
dbus-uuidgen --ensure
4. If your clients still have the same Salt client ID, delete the minion_id file on each client (FQDN
is used when it is regenerated on client restart). For Salt Minion clients:
rm /etc/salt/minion_id
rm -rf /etc/salt/pki
rm /etc/venv-salt-minion/minion_id
rm -rf /etc/venv-salt-minion/pki
5. Delete accepted keys from the onboarding page and the system profile from Uyuni, and restart the
client with:
6. Re-register the clients. Each client now has a different /etc/machine-id and should be
correctly displayed on the System Overview page.
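The identifier cleanup in the steps above can be gathered into one hedged script. The file paths come from the procedure; the root prefix is parameterized so the removal steps can be rehearsed against a scratch directory before running them as root (with prefix /) on the real clone:

```shell
# Remove the duplicated identifiers from a cloned client.
# After running this with root "/", regenerate the IDs with:
#   dbus-uuidgen --ensure
#   systemd-machine-id-setup   (on systemd distributions)
# and restart the Salt client service.
clean_clone_ids() {
    local root="${1:?usage: clean_clone_ids <root>}"
    rm -f "$root/etc/machine-id" \
          "$root/var/lib/dbus/machine-id" \
          "$root/var/lib/zypp/AnonymousUniqueId" \
          "$root/etc/salt/minion_id" \
          "$root/etc/venv-salt-minion/minion_id"
    rm -rf "$root/etc/salt/pki" "$root/etc/venv-salt-minion/pki"
}
```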
rm /var/cache/salt/master/thin/version
rm /var/cache/salt/master/thin/thin.tgz
Because of the nature of the transport, Salt SSH clients do not report errors back to the server.
However, each Salt SSH client stores a log locally at /var/log/salt-ssh.log that can be inspected
for errors.
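For example, a quick way to surface recent errors from that log (the path comes from the text above; the error patterns are an assumption about typical log levels):

```shell
# Show the most recent error-level lines from the local Salt SSH log.
recent_ssh_errors() {
    local log="${1:-/var/log/salt-ssh.log}"
    grep -iE 'error|critical' "$log" | tail -n 20
}
```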
[error]
Repository '<repo_name>' is invalid.
<repo.pem> Valid metadata not found at specified URL
History:
- [|] Error trying to read from '<repo.pem>'
- Permission to access '<repo.pem>' denied.
Please check if the URIs defined for this repository are pointing to a valid
repository.
Skipping repository '<repo_name>' because of the above error.
Could not refresh the repositories because of errors.
HH:MM:SS RepoMDError: Cannot access repository. Maybe repository GPG keys are
not imported
To resolve this issue, merge all valid certificates into a single .pem file, and rebuild the certificates for use
by Uyuni:
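The merge step might look like this; the source directory and output file name are assumptions for illustration:

```shell
# Concatenate the individual certificates into a single bundle and
# report how many certificates ended up in it, as a sanity check.
merge_certs() {
    local cert_dir="$1" out="$2"
    cat "$cert_dir"/*.pem > "$out"
    grep -c 'BEGIN CERTIFICATE' "$out"
}
# On the real system (paths are assumptions):
#   merge_certs /root/certs /root/merged-certs.pem
```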
You can now import the new certificates to the Uyuni Server, using the instructions in Client-
configuration › Clients-rh-cdn.
If you need to change the hostname of the Uyuni Server, you can do so using the
spacewalk-hostname-rename script. This script updates the settings in the PostgreSQL database and the internal
structures of Uyuni.
The only mandatory parameter for the script is the newly configured IP address of the Uyuni Server.
2. Reboot the Uyuni Server to use the new network configuration and to ensure the hostname has
changed.
3. Run the spacewalk-hostname-rename script with the public IP address of the
server. If the server is not using the new hostname, the script fails. Be aware that this script
refreshes the pillar data for all Salt clients: the time it takes to run depends on the number of
registered clients.
4. Skip this step if the clients are managed via a Uyuni proxy. Re-configure directly managed
clients to make them aware of the new hostname and IP address. In the Salt client configuration
file, you must specify the name of the new Salt master (Uyuni Server). The filename is
/etc/venv-salt-bundle/minion or, if you do not use the Salt bundle,
/etc/salt/minion:
master: <new_hostname>
6. To fully propagate the hostname to the Salt client configuration, apply the highstate. Applying the
highstate updates the hostname in the repository URLs.
Any proxy must be reconfigured. The new server certificate and key must be
copied to the proxy and the configure-proxy.sh script must be run. For
more information about configuring a proxy, see Installation-and-upgrade ›
Proxy-setup.
If you use PXE boot through a proxy, you must check the configuration settings
of the proxy. On the proxy, run the configure-tftpsync.sh setup script
and enter the requested information. For more information, see Installation-
and-upgrade › Proxy-setup.
1. Delete /root/.MANAGER_SETUP_COMPLETE.
2. Stop PostgreSQL and remove /var/lib/pgsql/data.
26.30. Troubleshooting RPC Connection Timeouts
3. Set the target system hostname to match the source system hostname.
4. Check the /etc/hosts file, and correct it if necessary.
5. Check /etc/setup_env.sh on the target system, and ensure the database name is set:
MANAGER_DB_NAME='susemanager'
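Steps 4 and 5 can be verified with a couple of grep checks. This is a sketch; the hostname and file paths are parameterized so the logic can be rehearsed outside the target system:

```shell
# Check that the hostname appears in the hosts file and that the
# database name is set in setup_env.sh.
check_target() {
    local host="$1" hosts_file="${2:-/etc/hosts}" env_file="${3:-/etc/setup_env.sh}"
    grep -q "$host" "$hosts_file" && echo "hosts: OK" || echo "hosts: missing $host"
    grep -q "MANAGER_DB_NAME='susemanager'" "$env_file" \
        && echo "db name: OK" || echo "db name: not set"
}
```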
server.timeout = <number>
2. On the Uyuni Proxy, open the /etc/rhn/rhn.conf file and set a maximum timeout value (in
seconds):
proxy.timeout = <number>
3. On a SUSE Linux Enterprise Server client that uses zypper, open the /etc/zypp/zypp.conf
file and set a maximum timeout value (in seconds):
4. On a Red Hat Enterprise Linux client that uses yum, open the /etc/yum.conf file and set a
maximum timeout value (in seconds):
timeout = <number>
If you limit RPC timeouts to less than 180 seconds, you risk aborting perfectly
normal operations.
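Taken together, and keeping the 180-second floor above in mind, a consistent pair of settings might look like this (the value 300 is an illustrative assumption, not a recommendation from this guide):

```
# /etc/rhn/rhn.conf on the Uyuni Server
server.timeout = 300

# /etc/rhn/rhn.conf on the Uyuni Proxy
proxy.timeout = 300
```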
In this case, try rescheduling the action. If rescheduling succeeds, the cause of the problem may be a wrong
DNS configuration.
When the Salt client is restarted, or when its grains are refreshed, the client recalculates its FQDN grains,
and it is unresponsive until the grains are processed. When a scheduled action on the Uyuni Server is due
to be executed, the Uyuni Server performs a test.ping on the client before the actual action, to ensure the
client is actually running and the action can be triggered.
By default, Uyuni Server waits 5 seconds for the response to the test.ping command. If the
response is not received within 5 seconds, the action is set to fail with a message that the client is
down or could not be contacted.
To correct this, fix the DNS resolution on the client, so the client does not get stuck for 5 seconds while
resolving its FQDN.
If this is not possible, try to increase the value for java.salt_presence_ping_timeout in the
/etc/rhn/rhn.conf file on the Uyuni Server to a value higher than 4.
For example:
java.salt_presence_ping_timeout = 6
spacewalk-service restart
Increasing this value will cause Uyuni Server to take longer to check if a minion is down or cannot be contacted.
26.33. Troubleshooting Schema Upgrade Fails
ID: disk1_partitioned
Function: saltboot.partitioned
Name: disk1
Result: false
Comment: An exception occurred in this state: Traceback (most recent
call last):
File "/usr/lib/python2.6/site-packages/salt/state.py", line 1767, in call
**cdata['kwargs'])
File "/usr/lib/python2.6/site-packages/salt/loader.py", line 1705, in
wrapper
return f(*args, **kwargs)
File "/var/cache/salt/minion/extmods/states/saltboot.py", line 393, in
disk_partitioned
existing = __salt__['partition.list'](device, unit='MiB')
File "/usr/lib/python2.6/site-packages/salt/modules/parted.py", line 177,
in list_
'Problem encountered while parsing output from parted')
CommandExecutionError: Problem encountered while parsing output from parted
This problem can be resolved by manually configuring the size of the partition containing the operating
system. When the size is set correctly, formula creation works as expected.
or
journalctl -u uyuni-check-database.service
These commands print debug information if you do not want to run the more general
spacewalk-service command.
export URLGRABBER_DEBUG=DEBUG
spacewalk-repo-sync -c <channelname> <options> > /var/log/spacewalk-repo-sync-$(date +%F-%R).log 2>&1
To resolve the problem, you need to import the GPG key to Uyuni. For more on importing GPG keys,
see Administration › Repo-metadata.
Checksum Mismatch
If a checksum has failed, you might see an error like this in the
/var/log/rhn/reposync/*.log log file:
You can resolve this error by running the synchronization from the command prompt with the -Y
option:
This option verifies the repository data before the synchronization, rather than relying on locally
cached checksums.
Connection Timeout
If the download times out with the following error:
28, 'Operation too slow. Less than 1000 bytes/sec transferred the last 300 seconds'
If Taskomatic is still running, or if the process has crashed, package updates can seem available in the
Web UI, but do not appear on the client, and attempts to update the client fail. In this case, the zypper
ref command shows an error like this:
To correct this, determine whether Taskomatic is still in the process of generating repository metadata, or
whether it has crashed. Wait for the metadata regeneration to complete, or restart Taskomatic after a crash,
so that client updates can be carried out correctly.
2. Restart taskomatic:
3. In the Taskomatic log files, you can identify the section related to metadata regeneration by
looking for opening and closing lines that look like this:
...
This issue is resolved by clearing the cache and reloading the page. In most browsers, you can do this
quickly by pressing Ctrl+F5.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document
"free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with
or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for
the author and publisher a way to get credit for their work, while not being considered responsible for
modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves
be free in the same sense. It complements the GNU General Public License, which is a copyleft license
designed for free software.
We have designed this License in order to use it for manuals for free software, because free software
needs free documentation: a free program should come with manuals providing the same freedoms that
the software does. But this License is not limited to software manuals; it can be used for any textual work,
regardless of subject matter or whether it is published as a printed book. We recommend this License
principally for works whose purpose is instruction or reference.
A "Modified Version" of the Document means any work containing the Document or a portion of it,
either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals
exclusively with the relationship of the publishers or authors of the Document to the Document’s overall
subject (or to related matters) and contains nothing that could fall directly within that overall subject.
(Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any
mathematics.) The relationship could be a matter of historical connection with the subject or with related
matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of
Invariant Sections, in the notice that says that the Document is released under this License. If a section
does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The
Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections
then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover
Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may
be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose
specification is available to the general public, that is suitable for revising the document straightforwardly
with generic text editors or (for images composed of pixels) generic paint programs or (for drawings)
some widely available drawing editor, and that is suitable for input to text formatters or for automatic
translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise
Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage
subsequent modification by readers is not Transparent. An image format is not Transparent if used for any
substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input
format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming
simple HTML, PostScript or PDF designed for human modification. Examples of transparent image
formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and
edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools
are not generally available, and the machine-generated HTML, PostScript or PDF produced by some
word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed
to hold, legibly, the material this License requires to appear in the title page. For works in formats which
do not have any title page as such, "Title Page" means the text near the most prominent appearance of the
work’s title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or
contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands
for a specific section name mentioned below, such as "Acknowledgements", "Dedications",
"Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document
means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies
to the Document. These Warranty Disclaimers are considered to be included by reference in this License,
but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may
have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially,
provided that this License, the copyright notices, and the license notice saying this License applies to the
Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this
License. You may not use technical measures to obstruct or control the reading or further copying of the
copies you make or distribute. However, you may accept compensation in exchange for copies. If you
distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document,
numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover,
and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the
publisher of these copies. The front cover must present the full title with all words of the title equally
prominent and visible. You may add other material on the covers in addition. Copying with changes
limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can
be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed
(as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either
include a machine-readable Transparent copy along with each Opaque copy, or state in or with each
Opaque copy a computer-network location from which the general network-using public has access to
download using public-standard network protocols a complete Transparent copy of the Document, free of
added material. If you use the latter option, you must take reasonably prudent steps, when you begin
distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible
at the stated location until at least one year after the last time you distribute an Opaque copy (directly or
through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing
any large number of copies, to give them a chance to provide you with an updated version of the
Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and
3 above, provided that you release the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution and modification of the Modified
Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and
from those of previous versions (which should, if there were any, be listed in the History section of
the Document). You may use the same title as a previous version if the original publisher of that
version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the
modifications in the Modified Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has fewer than five), unless they release you from this
requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to
use the Modified Version under the terms of this License, in the form shown in the Addendum
below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in
the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the
title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is
no section Entitled "History" in the Document, create one stating the title, year, authors, and
publisher of the Document as given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent
copy of the Document, and likewise the network locations given in the Document for previous
versions it was based on. These may be placed in the "History" section. You may omit a network
location for a work that was published at least four years before the Document itself, or if the
original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section,
and preserve in the section all the substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles.
Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified
Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any
Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary
Sections and contain no material copied from the Document, you may at your option designate some or
all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the
Modified Version’s license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your
Modified Version by various parties—for example, statements of peer review or that the text has been
approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a
Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any
one entity. If the Document already includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of, you may not add another; but you
may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names
for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms
defined in section 4 above for modified versions, provided that you include in the combination all of the
Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of
your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections
may be replaced with a single copy. If there are multiple Invariant Sections with the same name but
different contents, make the title of each such section unique by adding at the end of it, in parentheses, the
name of the original author or publisher of that section if known, or else a unique number. Make the same
adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents,
forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and
any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License,
and replace the individual copies of this License in the various documents with a single copy that is
included in the collection, provided that you follow the rules of this License for verbatim copying of each
of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this
License, provided you insert a copy of this License into the extracted document, and follow this License
in all other respects regarding verbatim copying of that document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the
Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on
covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the
Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole
aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document
under the terms of section 4. Replacing Invariant Sections with translations requires special permission
from their copyright holders, but you may include translations of some or all Invariant Sections in
addition to the original versions of these Invariant Sections. You may include a translation of this License,
and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions of those notices and disclaimers. In
case of a disagreement between the translation and the original version of this License or a notice or
disclaimer, the original version will prevail.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under
this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will
automatically terminate your rights under this License. However, parties who have received copies, or
rights, from you under this License will not have their licenses terminated so long as such parties remain
in full compliance.
Each version of the License is given a distinguishing version number. If the Document specifies that a
particular numbered version of this License "or any later version" applies to it, you have the option of
following the terms and conditions either of that specified version or of any later version that has been
published (not as a draft) by the Free Software Foundation. If the Document does not specify a version
number of this License, you may choose any version ever published (not as a draft) by the Free Software
Foundation.