CloudStack 4.22 Single-Node Management Installation

Mindwatering Incorporated

Author: Tripp W Black

Created: 08/25/2025 at 04:50 PM

 

Category:
Linux
KVM

Requirements:
1 management node and 1 or more bare-metal ("iron") worker nodes
- Intel VT-x
- 16 GB memory (minimum)
- 2 TB local storage

Notes:
- These instructions are specific to Intel CPU chipsets. If using AMD, a couple of steps that specify Intel-specific settings will have to be changed.
- There are lots of places where something might fail -- e.g. NFS secondary storage just didn't mount, or the Pod didn't get created.
- - Therefore, we have a remove.sh script that removes the configuration to save time. We also took these steps and made them into an install.sh script, which we tweak as needed.
- The network may need to be simplified to just cloudbr0, with the rest of the real network (VLANs, etc.) added back after the install as you add the networks.
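The remove.sh mentioned above is site-specific; a minimal, hypothetical sketch based on the steps in this article (service names, DB names, and paths are assumptions -- review and adapt before running on a real host) might look like:

```shell
# Write a hypothetical remove.sh for review; nothing destructive runs here.
cat > /tmp/remove.sh <<'EOF'
#!/bin/bash
# Tear down a failed CloudStack install so the steps can be re-run.
set -x
systemctl stop cloudstack-agent cloudstack-management 2>/dev/null
# Drop the CloudStack databases (prompts for the MariaDB root password)
mysql -u root -p -e "DROP DATABASE IF EXISTS cloud; DROP DATABASE IF EXISTS cloud_usage;"
# Clear out primary/secondary storage contents
rm -rf /export/primary/* /export/secondary/*
# Remove generated agent keystore so setup starts clean
rm -f /etc/cloudstack/agent/cloud.keystore
EOF
chmod +x /tmp/remove.sh
```

Review /tmp/remove.sh, then copy it into place and extend it with whatever else your install.sh creates.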

Steps:
1. Install Ubuntu 24.04 LTS and update packages
$ sudo apt-get update && sudo apt-get upgrade -y


2. Set hostname:
a. Set name permanently:
$ sudo hostnamectl set-hostname csmgmt

b. Update hosts file:
Note: After the IPs, enter the FQDN before the server name.
$ sudo vi /etc/hosts
<update the 127.0.0.1 entry and the 127.0.1.1 entry, and add the static IP line with the server's FQDN and short name>
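An example /etc/hosts matching the names used in this article (the static IP is illustrative):

```
127.0.0.1   localhost
127.0.1.1   csmgmt.mindwatering.net csmgmt
10.0.42.11  csmgmt.mindwatering.net csmgmt
```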

c. Validate the update:
$ sudo hostnamectl
<view output -- should be csmgmt>

$ sudo hostname
<view output -- should be csmgmt>

$ sudo hostname --fqdn
<view output -- should be csmgmt.mindwatering.net>
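As an extra check, confirm the FQDN actually resolves through /etc/hosts (getent consults the same NSS sources the OS itself uses):

```shell
# Resolve the host's own FQDN; warn rather than fail if it is not resolvable
fqdn=$(hostname --fqdn 2>/dev/null || uname -n)
getent hosts "$fqdn" || echo "warning: $fqdn not resolvable -- re-check /etc/hosts"
```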

3. Enable "universe" repository in /etc/apt/sources.list
$ sudo add-apt-repository universe
$ sudo apt update


4. Install prerequisites packages:
a. Install packages:
Notes:
- Ubuntu minimal already includes a few of these (e.g. sudo, vim, openssh).
- The default JDK is 21. Install the Java 17 JDK if it is not already installed.
$ sudo apt-get install dpkg-dev apt-utils chrony software-properties-common
$ sudo apt-get install openssh-server sudo vim htop tar intel-microcode bridge-utils
$ sudo apt-get install debhelper libws-commons-util-java genisoimage libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven
$ sudo apt-get install openjdk-17-jdk

b. Select Java if more than one is installed. Choose 17:
$ sudo update-alternatives --config java
<choose the number option for java 17>

ONLY if the "iron" KVM host will run UEFI or Secure Boot guests, install the ovmf (edk2 OVMF firmware) package.
$ sudo apt-get install ovmf

If performing any VM migrations, install the v2v package on the KVM host(s):
$ sudo apt-get install virt-v2v nbdkit

Need Win support?
- If you need to support MS Windows VMs on KVM, note that Ubuntu 22 and 24 do not include virtio-win. It must be installed manually.
- Update the following wget lines for the current version(s) needed.
$ cd /home/myadminid/
$ wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.noarch.rpm
$ wget -nd -O srvany.rpm https://kojipkgs.fedoraproject.org//packages/mingw-srvany/1.1/4.fc38/noarch/mingw32-srvany-1.1-4.fc38.noarch.rpm

Install alien utility to convert rpms to debs
$ sudo apt-get install alien

Convert the downloaded virtio-win package:
$ sudo alien -d virtio-win.noarch.rpm
<wait>

Install the resulting deb:
$ sudo dpkg -i virtio-win*.deb

Install the rhsrvany (srvany) package downloaded above:
$ sudo alien -d srvany.rpm
<wait>

$ sudo dpkg -i *srvany*.deb

Need OVF Support?
- Get the VMware-ovftool-4.x-lin.x86_64.zip from the Broadcom support site.
- Transfer the zip file to the KVM host(s).

Install the zip to /usr/local:
$ cd /home/myadminid/
$ sudo unzip VMware-ovftool-4.*.zip -d /usr/local
<wait>

Link:
$ sudo ln -s /usr/local/ovftool/ovftool /usr/local/bin/ovftool

Install a replacement for the libnsl.so.1 dependency, which is no longer packaged; version 2 is available as libnsl2:
$ sudo apt-get install libnsl2


5a. Install MariaDB (or MySQL if preferred):
$ sudo apt install mariadb-server -y

5b. Update the db conf file with the following updates under the [mysqld] section:
Notes:
- Some of these lines already exist uncommented, and a couple need to be uncommented.
- innodb_file_format and innodb_large_prefix were removed in MariaDB 10.3; omit them on the MariaDB 10.11 that ships with Ubuntu 24.04 (their Barracuda behavior is now the default), or the server will fail to start on the unknown variables.

$ sudo vi /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
bind-address = 127.0.0.1
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
max_connections = 500
server_id = 1
innodb_file_per_table = 1
innodb_lock_wait_timeout = 600
log_bin = mysql-bin
binlog_format = ROW

$ sudo systemctl restart mariadb
<wait>
$ sudo systemctl status mariadb
<verify service running okay>

5c. Create the CloudStack DB:
$ sudo mysql -u root -p
<sudo may first prompt for your own password; at the MariaDB "Enter password:" prompt, press <enter> -- the MariaDB root password is blank on a fresh install>

Add the following, updating the password to your own. Warning: special characters in the password will have to be escaped.
MariaDB [(none)]> CREATE DATABASE cloud CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cloud.* TO 'cloud'@'localhost' IDENTIFIED BY 'myreallygoodpassword';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cloud.* TO 'cloud'@'%' IDENTIFIED BY 'myreallygoodpassword';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT;
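The same statements can be run non-interactively by writing them to a file first. A sketch (the /tmp path and password are placeholders):

```shell
# Write the DB bootstrap SQL to a scratch file; replace the placeholder password
cat > /tmp/create-cloud-db.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS cloud CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON cloud.* TO 'cloud'@'localhost' IDENTIFIED BY 'myreallygoodpassword';
GRANT ALL PRIVILEGES ON cloud.* TO 'cloud'@'%' IDENTIFIED BY 'myreallygoodpassword';
FLUSH PRIVILEGES;
EOF
# Then feed it to MariaDB (prompts for the MariaDB root password):
# sudo mysql -u root -p < /tmp/create-cloud-db.sql
```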

- Manual DB creation, if using an external DB and the script further below cannot be used:
Note: Normally skip these steps, or you'll have to run the cloudstack-setup-databases with the --force-recreate flag.

Longer version from documentation:
CREATE DATABASE `cloud`;
CREATE DATABASE `cloud_usage`;

CREATE USER cloud@`localhost` identified by 'myreallygoodpassword';
CREATE USER cloud@`%` identified by 'myreallygoodpassword';

GRANT ALL ON cloud.* to cloud@`localhost`;
GRANT ALL ON cloud.* to cloud@`%`;
GRANT ALL ON cloud_usage.* to cloud@`localhost`;
GRANT ALL ON cloud_usage.* to cloud@`%`;
GRANT process ON *.* TO cloud@`localhost`;
GRANT process ON *.* TO cloud@`%`;
FLUSH PRIVILEGES;
EXIT;

5d. Give the root user a password.
Note:
Theoretically, the root password can be left blank when running the cloudstack-setup-databases script by pressing <enter>, but we've never seen that prompt and the setup always fails, so set one.

$ sudo mysql -u root -p
<press <enter> at the MariaDB password prompt -- blank on a fresh install>
MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootuserreallygoodpassword';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> EXIT;


6. Set-up CloudStack Ubuntu repo:
$ cd /home/myadminid
$ mkdir tmp
$ cd tmp
$ echo "deb http://download.cloudstack.org/ubuntu noble 4.22" | sudo tee /etc/apt/sources.list.d/cloudstack.list
$ wget -O - https://download.cloudstack.org/release.asc | sudo apt-key add -
$ sudo apt-get update
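Note that apt-key is deprecated on Ubuntu 24.04 (it still works but warns). A sketch of the keyring-file alternative, assuming the same key URL and a /etc/apt/keyrings/cloudstack.gpg path of our choosing:

```shell
# Build the repo line from the OS codename instead of hard-coding "noble"
CODENAME=$( (. /etc/os-release && echo "${VERSION_CODENAME:-}") 2>/dev/null || true)
CODENAME=${CODENAME:-noble}
echo "deb [signed-by=/etc/apt/keyrings/cloudstack.gpg] http://download.cloudstack.org/ubuntu ${CODENAME} 4.22"
# On the host, install the key into the keyring, then write the line above
# to /etc/apt/sources.list.d/cloudstack.list:
# sudo install -d /etc/apt/keyrings
# wget -qO- https://download.cloudstack.org/release.asc | sudo gpg --dearmor -o /etc/apt/keyrings/cloudstack.gpg
```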


7. Optimize/tune grub settings for KVM by adding/changing systemd.unified_cgroup_hierarchy=0 to force cgroup v1 when the kernel defaults to cgroup v2. Check what the kernel supports:
$ sudo grep cgroup /proc/filesystems
<view result>
nodev cgroup
nodev cgroup2
If both are listed, the kernel has cgroup2 support.

Backup the current grub settings:
$ sudo cp /etc/default/grub /etc/default/grub.bak

Use sed to update the file with the Intel chipset, or edit manually w/vi:
$ sudo sed -i.bak 's/^\(GRUB_CMDLINE_LINUX_DEFAULT=".*\)"/\1 intel_iommu=on systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub
- OR -
$ sudo vi /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on systemd.unified_cgroup_hierarchy=0"
...
<esc>:wq (to save)
$ sudo update-grub
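After update-grub and a reboot, the flags should appear on the running kernel's command line:

```shell
# Check the live kernel parameters; prints a hint if a flag is missing
grep -o 'intel_iommu=on' /proc/cmdline || echo "intel_iommu=on not active -- re-check /etc/default/grub and reboot"
grep -o 'systemd.unified_cgroup_hierarchy=0' /proc/cmdline || echo "cgroup flag not active"
```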


8a. Install CloudStack Management:
$ sudo apt install cloudstack-management qemu-kvm libvirt-daemon-system libvirt-clients virtinst

8b. Initialize CloudStack Database:
Notes:
- If root has no password, press enter when prompted when using --deploy-as=root (did not work for us)
- Append the root password with --deploy-as=root:rootuserreallygoodpassword
- Append the --force-recreate parameter if any of the cloud database schema exists.

$ sudo cloudstack-setup-databases 'cloud:myreallygoodpassword'@localhost --deploy-as=root

8c. To support KVM VMs on the current management machine, add the cloudstack sudoers file:
Note: May already exist from the cloudstack-management install

$ sudo vi /etc/sudoers.d/cloudstack
Defaults:cloud !requiretty
Cmnd_Alias CLOUDSTACK = /bin/mount, /bin/umount, /bin/cp, /bin/bash
cloud ALL=(ALL) NOPASSWD: CLOUDSTACK
<esc>:wq (to save)
$ sudo chmod 0440 /etc/sudoers.d/cloudstack

8d. Set up the cloudstack services:
- Run setup:
$ sudo cloudstack-setup-management

- Verify service running:
$ systemctl status cloudstack-management
<verify active/running, and enabled>

- If not enabled and running:
$ sudo systemctl enable cloudstack-management
$ sudo systemctl start cloudstack-management

- After set-up, there was the following message to open ports:
"Please ensure ports 8080, 8250, 8443, and 9090 are opened and not firewalled for the management server and not in use by other processes on this host."

- Validate UI loads:
web browser --> 10.0.42.11:8080
Login:
username: admin
password: password

Notes:
- If the login was successful, scroll down on the Dashboard page, click Continue with installation >>. The first thing it will do is change the password.
- - To change password later: admin cloud (top right corner) --> Profile --> Change Password (2nd icon of Magnifying glass)

FAILED TO LOAD:
- If the page displays a 503 error, the management log will likely show many module "failed to load" errors:
503
Service Unavailable

- We found several hints, including removing 127.0.1.1 from the hosts file, but the fix below worked for us:
$ sudo vi /etc/cloudstack/management/db.properties
...
#cluster.node.IP=127.0.1.1
cluster.node.IP=10.0.42.11
...
<esc>:wq (to save)

$ sudo systemctl restart cloudstack-management


8e. Update ufw:
$ sudo ufw allow proto tcp from any to any port 22
$ sudo ufw allow proto tcp to any port 8080 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 8250 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 8443 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 9090 from 10.0.42.0/24
$ sudo ufw reload

8f. Verify CloudStack services are running:
Note: All the checks should pass except "QEMU: Checking for secure guest support", which is an AMD-only feature.
$ sudo virt-host-validate
<view output>

$ sudo virsh list --all
<may show no VMs if no system VMs created yet>

8g. Allow root login:
$ sudo passwd root
<enter root password / confirm root password>

$ sudo passwd -u root
<successful message>

$ sudo vi /etc/ssh/sshd_config
...
PermitRootLogin yes
...
<esc>:wq (to save)

$ sudo systemctl restart ssh


8h. Install and set up the CloudStack agent so this host can also run KVM VMs:
- Install agent:
$ sudo apt-get install cloudstack-agent

- Verify the cloudstack-agent's UUID, host, and bridges values:
$ sudo vi /etc/cloudstack/agent/agent.properties

- Update host, and unless you want a default zone, update the zone/pod/cluster, and networks:
--> Change from localhost to the (external) IP of the management host
...
#host = localhost
host=10.0.42.11
...
cluster=MWWF_Cluster
...
pod=MWWF_Pod
...
zone=MWWF
...
public.network.device=cloudbr0
...
private.network.device=cloudbr03
...
guest.network.device=cloudbr03
...
hypervisor.uri=qemu:///system

<esc>:wq (to save)

- Update GUID:
Note: Run the following and paste into the field:
$ GUID=$(uuidgen)
$ echo "Generated GUID: $GUID"
<copy output>

Paste the output back into agent.properties:
$ sudo vi /etc/cloudstack/agent/agent.properties
...
guid=ab12c987-d123-4e56-fa78-b9123b45cd67
<esc>:wq (to save)
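Alternatively, the UUID can be injected with sed instead of copy/paste. Shown below on a scratch copy with hypothetical contents; on the host, point sed at /etc/cloudstack/agent/agent.properties (with sudo):

```shell
# Demo file standing in for agent.properties (hypothetical contents)
demo=/tmp/agent.properties.demo
printf '#guid=\nhost=10.0.42.11\n' > "$demo"
# Replace the (possibly commented) guid line with a freshly generated UUID
sed -i "s/^#\{0,1\}guid=.*/guid=$(uuidgen)/" "$demo"
grep '^guid=' "$demo"
```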

- Update qemu for VNC access:
$ sudo vi /etc/libvirt/qemu.conf
...
vnc_listen="0.0.0.0"
<esc>:wq (to save)

- Perform key generation for SSL between agent and management:
$ sudo keytool -genkeypair -keyalg RSA -keysize 4096 -keystore /etc/cloudstack/management/cloud.keystore -alias cloud -dname "CN=csmgmt.mindwatering.local,O=mindwatering,C=US" -storepass mindwatering.local -keypass mindwatering.local -validity 36500

Link or copy to the agent folder:
$ sudo cp /etc/cloudstack/management/cloud.keystore /etc/cloudstack/agent/

Make cloud owner:
$ sudo chown cloud:cloud /etc/cloudstack/management/cloud.keystore
$ sudo chown cloudstack:cloudstack /etc/cloudstack/agent/cloud.keystore

Startup service:
$ sudo systemctl start cloudstack-agent
<wait a second>
$ sudo systemctl status cloudstack-agent
<verify started okay>


9. Enable root login (alternative to SSH keys, or if not adding the cloudstack user to sudoers; skip if already done in step 8g):
$ sudo vi /etc/ssh/sshd_config
<allow root login>
<esc>:wq (to save)


10. If CloudStack is not behind firewalls, consider turning off access to libvirt's non-SSL TCP listener and updating security policies. Update the following lines to disable non-secure TCP:
$ sudo vi /etc/libvirt/libvirtd.conf
...
listen_tcp = 0
...
auth_tcp = "none"
...
mdns_adv = 0
...
remote_mode="legacy"
...

<esc>:wq (to save)

Uncomment the line below:
$ sudo vi /etc/default/libvirtd
LIBVIRTD_ARGS="--listen"
...
<esc>:wq (to save)

Mask the libvirtd sockets so the daemon runs in traditional (non-socket-activated) mode:
$ sudo systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
$ sudo systemctl restart libvirtd
$ sudo systemctl status libvirtd
<verify restarted okay>

Set-up the apparmor policy:
$ sudo dpkg --list 'apparmor'
<confirm installed/enabled>

Disable AppArmor profiles for libvirt:
$ sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
$ sudo ln -s /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper /etc/apparmor.d/disable/
$ sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd
$ sudo apparmor_parser -R /etc/apparmor.d/usr.lib.libvirt.virt-aa-helper


11. Set-up network:
WARNING:
Make sure you have a physical console or ILO for steps 11 and 12, as a mistake will drop an SSH session.

- If not using netplan:
$ sudo vi /etc/network/interfaces
- - The basic network example from the CloudStack docs is below. Update the interface names from eth0 to your actual device names (ens123, etc.).
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto eth0.200
iface eth0.200 inet manual

# management network
auto cloudbr0
iface cloudbr0 inet static
bridge_ports eth0
bridge_fd 0
bridge_stp off
bridge_maxwait 1
address 192.168.42.11
netmask 255.255.255.240
gateway 192.168.42.1
dns-nameservers 8.8.8.8 8.8.4.4
dns-domain lab.example.org

# guest network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth2
bridge_fd 0
bridge_stp off
bridge_maxwait 1

- - Advanced example:
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

# The second network interface
auto eth1
iface eth1 inet manual

# management network
auto cloudbr0
iface cloudbr0 inet static
bridge_ports eth0
bridge_fd 5
bridge_stp off
bridge_maxwait 1
address 192.168.42.11
netmask 255.255.255.240
gateway 192.168.42.1
dns-nameservers 8.8.8.8 8.8.4.4
dns-domain lab.example.org

# guest network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth1
bridge_fd 5
bridge_stp off
bridge_maxwait 1


- Netplan Example:
a. Update the netplan config:
$ sudo vi /etc/netplan/01-KVM-config.yaml
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: false
      dhcp6: false
      optional: true
      mtu: 1550
    eno2:
      dhcp4: false
      dhcp6: false
      optional: true
      mtu: 1550
    eno3:
      dhcp4: false
      dhcp6: false
      optional: true
      mtu: 1550
    eno4:
      dhcp4: false
      dhcp6: false
      optional: true
      mtu: 1550
  bridges:
    cloudbr0:
      addresses:
        - 192.168.42.151/24
      nameservers:
        addresses:
          - 192.168.42.1
        search:
          - mindwatering.net
      routes:
        - to: default
          via: 192.168.42.1
      interfaces: [eno1]
      dhcp4: false
      dhcp6: false
      mtu: 1550
      parameters:
        stp: false
        forward-delay: 0
    cloudbr01:
      interfaces: [eno2]
      dhcp4: false
      dhcp6: false
      mtu: 1550
      parameters:
        stp: false
        forward-delay: 0
    cloudbr02:
      interfaces: [eno3]
      dhcp4: false
      dhcp6: false
      mtu: 1550
      parameters:
        stp: false
        forward-delay: 0
    cloudbr03:
      interfaces: [eno4]
      dhcp4: false
      dhcp6: false
      mtu: 1550
      parameters:
        stp: false
        forward-delay: 0

b. Reload netplan:
$ sudo netplan generate
$ sudo netplan apply
$ sudo reboot
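After the reboot, confirm the bridges came up and enslaved the expected NICs (iproute2 commands; bridge names per the example above):

```shell
# List bridge devices and their member ports
ip -br link show type bridge 2>/dev/null || echo "iproute2 'ip' not available"
bridge link show 2>/dev/null || true
# Confirm cloudbr0 got its address
ip -br addr show cloudbr0 2>/dev/null || echo "cloudbr0 not present yet"
```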


12. Add additional UFW firewall rules for the cloudstack-agent:
- Open ports 1798, 16514, 5900-6100, and 49152-49216:
$ sudo ufw allow proto tcp to any port 1798 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 16514 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 5900:6100 from 10.0.42.0/24
$ sudo ufw allow proto tcp to any port 49152:49216 from 10.0.42.0/24
$ sudo ufw reload

- Change forwarding default from DENY to ACCEPT:
$ sudo vi /etc/default/ufw
...
DEFAULT_FORWARD_POLICY="ACCEPT"
...
<esc>:wq (to save)

Enable:
$ sudo ufw enable


13. Adding NFS local storage (if desired):

a. Create partitions on local storage disk arrays:
/export/primary = /dev/sdc1
/export/secondary = /dev/sdb1

$ sudo fdisk /dev/sdb
- Type p to print/display any current partitions on the drive (verify none exist)
- Type n
- - Type p (primary)
- - Type 1 (default, 1st partition)
- - Type <enter> (default minimum - e.g. 2048)
- - Type <enter> (default maximum - whole drive)
- Type w (to save and exit - no undo)

$ sudo fdisk /dev/sdc
- Type p to print/display any current partitions on the drive (verify none exist)
- Type n
- - Type p (primary)
- - Type 1 (default, 1st partition)
- - Type <enter> (default minimum - e.g. 2048)
- - Type <enter> (default maximum - whole drive)
- Type w (to save and exit - no undo)
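fdisk only writes the partition table; each partition still needs an ext4 filesystem before the fstab entries below will mount. On the host (destructive -- verify the device names first):
$ sudo mkfs.ext4 -L secondary /dev/sdb1
$ sudo mkfs.ext4 -L primary /dev/sdc1

The same mkfs invocation, exercised safely on a throwaway image file:

```shell
# Demonstrate mkfs.ext4 on a scratch image instead of a real disk (no root needed)
img=/tmp/mkfs-demo.img
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F -L demo "$img"
```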

b. Make folders:
$ sudo mkdir -p /export/primary
$ sudo mkdir -p /export/secondary

c. Update fstab:
$ sudo vi /etc/fstab
...
/dev/sdb1 /export/secondary ext4 defaults 0 2
/dev/sdc1 /export/primary ext4 defaults 0 2
...
<esc>:wq (to save)

$ sudo mount -a
<verify no errors>
$ df -h
<verify /export/secondary and /export/primary mapped and expected sizes>

d. Install NFS server and create NFS export:
$ sudo apt-get install nfs-kernel-server

$ sudo vi /etc/exports
...
/export 127.0.0.1(rw,async,no_root_squash,no_subtree_check) 10.0.42.0/24(rw,async,no_root_squash,no_subtree_check)
...
<esc>:wq (to save)

e. Update NFS for static ports:
$ sudo vi /etc/default/nfs-kernel-server
...
RPCNFSDCOUNT="-N 4 4"
RPCMOUNTDOPTS="-p 892"
STATDOPTS="-p 662"
RPCQUOTADOPTS="-p 875"
...
<esc>:wq (to save)

f. Activate folder:
$ sudo exportfs -ra

Verify:
$ sudo exportfs -v
<confirm /export NFS shares listed for each ip/subnet added>
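A quick way to confirm clients will see the export is to query the server over its own loopback (assumes nfs-kernel-server is running and nfs-common tools are installed):

```shell
# Query the server's export list over the loopback; prints a hint on failure
showmount -e 127.0.0.1 2>/dev/null || echo "showmount failed -- is nfs-kernel-server running?"
# A grep of /etc/exports catches typos before reloading
grep -E '^/export[[:space:]]' /etc/exports 2>/dev/null || echo "no /export line found in /etc/exports"
```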

g. Open NFS ports:
$ sudo ufw allow proto udp from any to any port 111
$ sudo ufw allow proto tcp from any to any port 111
$ sudo ufw allow proto tcp from any to any port 2049
$ sudo ufw allow proto tcp from any to any port 32803
$ sudo ufw allow proto udp from any to any port 32769
$ sudo ufw allow proto tcp from any to any port 892
$ sudo ufw allow proto udp from any to any port 892
$ sudo ufw allow proto tcp from any to any port 875
$ sudo ufw allow proto udp from any to any port 875
$ sudo ufw allow proto tcp from any to any port 662
$ sudo ufw allow proto udp from any to any port 662
$ sudo ufw reload

h. Reload services:
$ sudo systemctl restart nfs-server
$ sudo systemctl status nfs-server
$ sudo systemctl restart rpcbind
$ sudo systemctl status rpcbind


14. Map VLANs to Physical Networks:
ESXi uses vSwitch names to map virtual networks to VLANs or physical ports. Similarly, in CloudStack, update the network labels to match the names of the network bridges created during the OS set-up.
CloudStack UI --> Infrastructure (left menu twistie) --> Zones --> <select zone> --> Physical Network (tab)

Select the physical network --> Click the Edit button (pencil icon) --> Under Details (tab), update the Tags field and enter one tag with the network bridge ID (e.g. cloudbr02).

Ensure the Traffic types (tab) for this physical network includes the Guest type. Add the Guest type via the Add Traffic Type icon (square box with plus).

WARNING:
Public IPs are added at the Zone level, and only when the zone is created in the CloudStack UI. For each of your present and future VLANs, include the public addresses "above" that VLAN for that VLAN's virtual router (VR) in the Zone during creation.


---

Additional Notes:
To use CloudStack repository for DEB, execute the following commands:
export RELEASE=4.22
echo "deb https://download.cloudstack.org/ubuntu $(lsb_release -s -c) ${RELEASE}"|sudo tee /etc/apt/sources.list.d/cloudstack.list
wget -O - https://download.cloudstack.org/release.asc|sudo apt-key add -
sudo apt-get update

Note:
Replace RELEASE with the desired version, e.g. 4.17, 4.18, etc.


Other package lists:
# apt-get install openntpd openssh-server sudo vim htop tar intel-microcode bridge-utils openjdk-11-jdk mariadb-server nfs-common nfs-kernel-server quota python3-pip uuid-runtime dpkg-dev apt-utils software-properties-common debhelper libws-commons-util-java genisoimage libcommons-codec-java libcommons-httpclient-java liblog4j1.2-java maven


Script that runs the de-install so the steps can be executed again:
remove.sh


Netplan Example with VLANs:
Network notes:
- eno1 and eno2 bonded as bond0
- - bond0 provides cloudbr0, with the untagged host/management network and tagged guest networks 100-200
- eno3 and eno4 bonded as bond1
- - bond1 provides cloudbr1, which handles the VLAN tagged public traffic

Cloudstack Notes:
- Management and guest traffic configured to use cloudbr0
- Public traffic is configured to use KVM traffic label cloudbr1
- Internal connectivity from the hypervisor host (agent, libvirt, and KVM) to the system VMs uses cloud0 on the link-local 169.254.0.0/16 subnet. The cloud0 bridge is auto-configured by CloudStack when the host is added to a zone.

Config YAML file:
network:
  version: 2
  ethernets:
    eno1:
      dhcp4: no
    eno2:
      dhcp4: no
    eno3:
      dhcp4: no
    eno4:
      dhcp4: no
  bonds:
    bond0:
      dhcp4: no
      interfaces:
        - eno1
        - eno2
      parameters:
        mode: active-backup
        primary: eno1
    bond1:
      dhcp4: no
      interfaces:
        - eno3
        - eno4
      parameters:
        mode: active-backup
        primary: eno3
  vlans:
    vmnet100:
      id: 100
      link: bond0
      dhcp4: no
    # ... (repeat for each guest VLAN)
    vmnet200:
      id: 200
      link: bond0
      dhcp4: no
  bridges:
    cloudbr0:
      addresses:
        - 192.168.42.11/24
      gateway4: 192.168.42.1
      nameservers:
        search: [mw.local]
        addresses: [192.168.42.1,192.168.42.6]
      interfaces:
        - bond0
    cloudbr1:
      dhcp4: no
      interfaces:
        - bond1
    cloudbr100:
      addresses:
        - 10.0.100.20/24
      interfaces:
        - vmnet100
    # ... (repeat for each guest VLAN)
    cloudbr200:
      addresses:
        - 10.0.200.20/24
      interfaces:
        - vmnet200





