VCF 9 Activating Supervisor (Tanzu) Steps

Mindwatering Incorporated

Author: Tripp W Black

Created: 11/23 at 10:47 PM

 

Category:
VMware
Tanzu, VCF

Task:
Add a Supervisor in order to run K8s workloads on VCF.
Assumes the Supervisor was not selected during the original cluster set-up.

Notes:
- NSX project (K8s namespace with vendor functionality add-ons) is analogous to a VCF Automation tenant.
- A VCF Automation tenant has a 1 to many relationship with projects that are used to group vSphere Namespaces logically.
- Component name change: VMware vSphere with Tanzu/VMware IaaS Control Plane --> vSphere Supervisor
- The notes below apply to deployment of the Supervisor with VCF Networking with Virtual Private Cloud (VPC):
- VCF Networking with VPC is VMware's optimal workload network option for deploying the VMware Cloud Foundation (VCF) stack, as it provides the most comprehensive set of capabilities for VCF Automation environments.
- Supervisor can be activated (installed) during workload domain creation as part of the initial VCF 9 cluster installation.
- If not activated then, it is deployed afterwards with VPC networking via vCenter (the steps below).
- ESXi DNS names must be configured in all lowercase when using Supervisor, or activation may fail (a quick pre-check sketch follows these notes).
- vSphere 9.0 Supervisor cannot create IPv6 clusters.
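The lowercase DNS note above can be pre-checked with a short script. This is a minimal sketch, assuming the ESXi FQDNs below are replaced with the real host names (the example names are placeholders); it only flags uppercase characters and confirms each name resolves.

    import socket

    # Placeholder ESXi FQDNs - replace with the hosts in the target cluster(s).
    esxi_hosts = ["esx01.mweast.example.com", "esx02.mweast.example.com"]

    for fqdn in esxi_hosts:
        # Supervisor activation may fail if the registered name contains uppercase characters.
        if fqdn != fqdn.lower():
            print(f"WARN: {fqdn} is not all lowercase")
        try:
            ip = socket.gethostbyname(fqdn)  # forward DNS lookup
            print(f"OK: {fqdn} -> {ip}")
        except socket.gaierror as err:
            print(f"FAIL: {fqdn} does not resolve ({err})")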


Minimum Resources Needed to Add Supervisor:
Verify the following resources and settings are set-up:
vCenter specs:
- VCSA small or larger sizing
- vSphere HA and DRS enabled in Full Automation Mode
- Independent storage and networking configured for each vSphere cluster
- - 1 ESXi host if not using vSAN, 3 hosts if using vSAN

ESXi specs:
- Minimum 4 ESXi hosts

NSX specs:
- Medium sizing or larger

NSX Edge cluster:
- 2 node cluster of Large form factor in Active/Standby mode

K8s control plane VMs:
- 1 for a single Management Zone, or 3 for HA Management Zones


Notes:
- CPU and memory consumption requirements apply as well (driven by the Supervisor control plane size selected in step 10).
- Network CIDR requirements: plan non-overlapping blocks for the management network, the VPC private CIDRs, and the VPC external IP blocks (see the overlap check sketch after step 8).

Steps:
1. Confirm a minimum of 1 vSphere cluster with a minimum of 4 ESXi hosts w/vSAN, or 3 ESXi hosts w/o vSAN (non-HA).
(For HA, 3 clusters, each with 4 ESXi hosts w/vSAN, or 3 ESXi hosts w/o.)

2. Confirm vSAN storage is set-up and functional (if using).

3. Confirm all clusters in each Management Zone are connected to the same vSphere Distributed Switch (VDS). Each zone, with its cluster, uses its own management network.

4. Create at least one vSphere Zone that will act as the Management Zone of the Supervisor.
(For HA, create 3 vSphere Zones, one per cluster/Management Zone.)
- vSphere login --> vCenter --> Configure --> vSphere Zones --> click Add New vSphere Zone (button)
- Enter the zone Name, and add a Description. (e.g. mweast1, mweast k8s zone 1)
- Select the cluster for the zone. Click Finish (button)

If HA, repeat twice more for the other 2 clusters and zones.

5a. Create storage tags for the Supervisor (K8s control plane(s)):
Important:
- Make sure the persistent datastore selected is available to all hosts in the cluster.
- There are typically multiple storage policies, based on class of service (e.g. critical, general, development).
- If HA, the storage policies must be topology aware; this is not needed for non-HA. In other words, for non-HA policies, do not enable the consumption domain (zonal topology).

a. Add tags to the datastore(s):
- vSphere login --> vCenter --> Inventory --> Datastores (icon) --> select datastore
- Right-click, select Tags and Custom Attributes --> Assign Tag --> Add Tag
- In the properties page, enter the tag Name, Description, and Category. (e.g. mweast1dsc1, mweast zone datastore c1 critical K8s storage, supervisor storage) --> Click Create (button)

b. Repeat the above step for the other policy levels (e.g. general and development). (A tagging API sketch for scripting this follows.)
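If several datastores need the same tags, the tagging can be scripted instead of repeated by hand. The sketch below is only a rough outline, assuming the vSphere Automation REST tagging endpoints (/api/session, /api/cis/tagging/...) available in recent vCenter releases; payload shapes vary by release, so verify them against the vCenter API Explorer first. The vCenter FQDN, credentials, names, and datastore managed object ID are placeholders.

    import requests

    VCENTER = "vcenter.mweast.example.com"              # placeholder vCenter FQDN
    USER, PASSWORD = "administrator@vsphere.local", "changeme"
    DATASTORE_ID = "datastore-1001"                     # managed object ID of the datastore to tag

    s = requests.Session()
    s.verify = False                                    # lab only; use trusted certs in production

    # 1. Create an API session token.
    s.headers["vmware-api-session-id"] = s.post(
        f"https://{VCENTER}/api/session", auth=(USER, PASSWORD)).json()

    # 2. Create the tag category (e.g. supervisor storage).
    cat_id = s.post(f"https://{VCENTER}/api/cis/tagging/category", json={
        "name": "supervisor storage",
        "description": "Supervisor storage tags",
        "cardinality": "MULTIPLE",
        "associable_types": [],
    }).json()

    # 3. Create the tag (e.g. mweast1dsc1) in that category.
    tag_id = s.post(f"https://{VCENTER}/api/cis/tagging/tag", json={
        "name": "mweast1dsc1",
        "description": "mweast zone datastore c1 critical K8s storage",
        "category_id": cat_id,
    }).json()

    # 4. Attach the tag to the datastore (object type plus managed object ID).
    s.post(f"https://{VCENTER}/api/cis/tagging/tag-association/{tag_id}",
           params={"action": "attach"},
           json={"object_id": {"id": DATASTORE_ID, "type": "Datastore"}})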

5b. Create the storage policies for the Supervisor (K8s control plane(s)):
a. vSphere login --> Menu (hamburger lines) --> Policies and Profiles (left dropdown menu) --> VM Storage Policies (left menu) --> Create VM Storage Policy (button)
- 1. Name and description (tab):
- vCenter Server: <confirm vCenter>
- Name: <enter name> (e.g. mweastc1sp)
- Description: <enter description> (e.g. mweast zone datastore c1 critical k8s storage policy)
- Click Next (button)

- 2. Policy structure (tab):
- Under Datastore specific rules (heading):
- - Check/select Enable tag based placement rules
- Under vSphere Kubernetes Service Storage topology (heading):
- - If using HA, check/select Enable Zonal topology for multi-zone Supervisor
- Click Next (button)

- 3. Tag based placement (tab):
- Create the tag rules:
- - Category: <enter category> (e.g. supervisor storage)
- - Usage option: Use storage tagged with
- - Tags: <add tags via Browse Tags button> (e.g. mweast1dsc1)
- Click Next (button)

- 4. Storage compatibility (tab):
- Confirm the datastore you tagged is presented (alone, or among others) as matching the rules from the previous tab
- Click Next (button)

- 5. Review and finish (tab):
- Review your input and selections.
- Click Finish (button)

b. Repeat adding storage policies for the other classifications (e.g. general and development). (A listing sketch to verify the policies follows.)
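To confirm the policies were created, they can be listed via the vSphere Automation REST API. A minimal sketch, assuming the GET /api/vcenter/storage/policies endpoint of recent vCenter releases (verify the exact field names in the API Explorer); the vCenter FQDN, credentials, and naming prefix are placeholders.

    import requests

    VCENTER = "vcenter.mweast.example.com"              # placeholder vCenter FQDN
    USER, PASSWORD = "administrator@vsphere.local", "changeme"

    s = requests.Session()
    s.verify = False                                    # lab only
    s.headers["vmware-api-session-id"] = s.post(
        f"https://{VCENTER}/api/session", auth=(USER, PASSWORD)).json()

    # List VM storage policies and print the Supervisor ones (placeholder naming convention).
    for policy in s.get(f"https://{VCENTER}/api/vcenter/storage/policies").json():
        if policy.get("name", "").startswith("mweast"):
            print(policy.get("policy"), policy.get("name"), policy.get("description", ""))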

6. Assign the new storage policies to the vSphere Namespace.

7. Assign the new storage policies to the Supervisor.

8. Configure the Centralized Gateway, if not already done.
- vSphere login --> vCenter --> Inventory --> Networks (icon) --> Network Connectivity --> click Configure Network Connectivity (button)
- In Gateway Type: Centralized Connectivity --> Click Next (button)
- Review, click Continue (button)
- Edge Cluster:
- - Name: <enter name>
- - MTU: <default 1700>
- - form factor: Large
- Click Add (button) and add minimum two Edge nodes:
- - Name: <enter name>
- - vSphere cluster: <select cluster>
- - Resource pool: <select pool for NSX edge node>
- - Edge host affinity: <yes, if desired>
- - Configure management network settings: Static IP Allocation, Port Group, Management IP CIDR, and Default Gateway
- - Check/select Use the host overlay network configuration from the selected vSphere cluster
- - Edge Node Uplink Mapping: <select active and standby PNICs>
- - Check the IP pool, static gateway address, and subnet mask fields.
- - Click Apply (button to create the NSX Edge node)
- For the second Edge node, it is easier to click Clone and change what is different than to click Add and re-enter everything.
- Enter the CLI password and Root Administrator password for the Edge Nodes
- Click Next (button)
- Configure workload domain connectivity:
- - Name: <enter gateway name>
- - Active Standby (for HA)
- Configure routing:
- - Gateway Routing Type: BGP
- - Enter local autonomous system number for use in BGP: <enter number>
- Set gateway uplinks
- - Click Set (button) and configure the gateway properties
- - - VLAN, CIDR, BGP Peer IP, BFD, MTU, BGP Peer ASN, BGP Peer Password
- - Configure Second Uplink
- - Click Apply (button)
- Repeat steps to configure the gateway uplinks for second Edge node.
- In VPC External IP Blocks: <enter IP blocks used for external connectivity of workloads through external IPs or public subnets> (an overlap check sketch follows this step)
- In Private - Transit Gateway IP Blocks: <enter the private IPs for the VPC configuration>
- Click Next (button)
- Review
- Click Deploy (button to deploy gateway)
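The IP blocks entered above (management network, VPC external IP blocks, private transit gateway IP blocks) should not overlap. Below is a minimal overlap sanity check using Python's standard ipaddress module; the CIDR values are placeholders to be replaced with the blocks planned for this environment.

    import ipaddress
    from itertools import combinations

    # Placeholder CIDRs - substitute the blocks planned for this environment.
    blocks = {
        "management network": "10.10.10.0/24",
        "VPC external IP block": "192.168.100.0/24",
        "private transit gateway IP block": "172.16.0.0/16",
    }

    nets = {name: ipaddress.ip_network(cidr) for name, cidr in blocks.items()}
    for (name_a, net_a), (name_b, net_b) in combinations(nets.items(), 2):
        if net_a.overlaps(net_b):
            print(f"OVERLAP: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
        else:
            print(f"OK: {name_a} and {name_b} do not overlap")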

9. Install and Configure the Avi Load Balancer
- vSphere login --> vCenter --> Inventory --> Workload Domains --> select domain --> click Actions (dropdown) --> Deploy Avi Load Balancer
- Choose the workload domain in the SDDC Manager target
- 1. Select version: <select version to deploy>
- - Click Next (button)
- 2. Form factor: <select size of the instance via dropdown> (e.g. small)
- - Click Next (button)
- 3. Settings:
- - Admin Password: <enter a new admin password>
- - Node 1 IP Address: <enter IP address of Node 1 in the ALB controller cluster>
- - Node 2 IP Address: <enter IP address of Node 2 in the ALB controller cluster>
- - Node 3 IP Address: <enter IP address of Node 3 in the ALB controller cluster>
- - VIP Cluster FQDN: <enter fqdn> (a DNS pre-check sketch follows this step)
- - VIP Cluster Name: <enter name, typically no spaces>
- - Click Next (button)
- 4. Finish
- - Click Finish (button)
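Before deploying, it is worth confirming the VIP Cluster FQDN and the three controller node IPs are already in DNS. A small pre-check sketch, assuming forward and reverse records were created ahead of time; the FQDN and IP addresses are placeholders.

    import socket

    VIP_FQDN = "alb.mweast.example.com"                       # placeholder VIP Cluster FQDN
    NODE_IPS = ["10.10.20.11", "10.10.20.12", "10.10.20.13"]  # placeholder controller node IPs

    # Forward lookup of the cluster VIP FQDN.
    try:
        print(f"{VIP_FQDN} -> {socket.gethostbyname(VIP_FQDN)}")
    except socket.gaierror as err:
        print(f"FAIL: {VIP_FQDN} does not resolve ({err})")

    # Reverse lookups of the controller node IPs.
    for ip in NODE_IPS:
        try:
            host, _, _ = socket.gethostbyaddr(ip)
            print(f"{ip} -> {host}")
        except socket.herror as err:
            print(f"WARN: no PTR record for {ip} ({err})")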

10. Activate the Supervisor:
- vSphere login --> Menu (hamburger lines) --> Supervisor Management
- On the Supervisor Management screen, click Get Started
- On the vCenter Server and Network page:
- - vCenter: <select vCenter setup above>
- - Networking stack: VCF Networking with VPC (check/select)
- - Click Next (button)
- On the Supervisor location page:
- - vSphere Zones Deployment:
- - - Supervisor name: <enter supervisor name>
- - - Datacenter: <select datacenter where zones were created>
- - - Zones: <select the 1 zone created (non-HA), or the 3 zones created (HA)>
- - Cluster Deployment:
- - - Supervisor name: <enter name again>
- - - Select/check Enable Control Plane high availability (if HA)
- - - Datacenter: <select datacenter where zones were created>
- - - Zones: <select one of the zones>
- - - Storage Policy: <select the corresponding storage policy created above>
- - Click Next (button)
- On the Management Network page:
- - DHCP Network: <DHCP/Static>
- - Network: <network created for this zone>
- - Floating IP: <enter starting IP address if DHCP, or enter the list of static IPs for the Supervisor control plane VMs otherwise>
- - Subnet Mask: <enter mask> (e.g. 255.255.255.0)
- - Gateway (management network): <enter IP>
- - DNS Servers: <enter IPs>
- - DNS Search Domains: <enter name suffix(es)>
- - NTP Servers: <enter address(es)>
- - Click Next (button)
- On the Workload (VPC) Network page:
- - NSX Project: <VPC NSX project name>
- - VPC Connectivity Profile: <select profile>
- - Private (VPC) CIDRs: <enter IP blocks from which IPs will be given to workloads>
- - DNS Server: <enter IP, if any>
- - NTP Server(s): <enter address(es)>
- - Click Next (button)
- On the Advanced Settings page:
- - Supervisor Control Plane Size: <select size> (tiny, small, medium, or large)
- - API Server DNS Names: <enter names if used instead of the control plane IPs>
- - Export Configuration: <choose this option to make re-creates faster; save in a safe place>
- - Click Finish (button)
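After clicking Finish, activation runs for a while. One way to watch progress outside the UI is to poll the namespace-management endpoint of the vSphere Automation REST API. This is a sketch under the assumption that GET /api/vcenter/namespace-management/clusters is available on this vCenter release and reports a config_status per Supervisor-enabled cluster (values such as CONFIGURING, RUNNING, ERROR); confirm the exact fields in the API Explorer. The vCenter FQDN and credentials are placeholders.

    import time
    import requests

    VCENTER = "vcenter.mweast.example.com"              # placeholder vCenter FQDN
    USER, PASSWORD = "administrator@vsphere.local", "changeme"

    s = requests.Session()
    s.verify = False                                    # lab only
    s.headers["vmware-api-session-id"] = s.post(
        f"https://{VCENTER}/api/session", auth=(USER, PASSWORD)).json()

    while True:
        clusters = s.get(f"https://{VCENTER}/api/vcenter/namespace-management/clusters").json()
        for c in clusters:
            print(c.get("cluster"), c.get("config_status"), c.get("kubernetes_status"))
        if clusters and all(c.get("config_status") == "RUNNING" for c in clusters):
            break
        time.sleep(60)                                  # poll once a minute until all report RUNNING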


