
=== VCF Lab Constructor ===

Install Guide
Document Version 3.9.1 Rev1
Created by

Ben Sier @datareload & Heath Johnson @heathbarj

Support Options
Questions about VCF? Ask @sddccommander

Support for the VLC on Slack vlc-support.slack.com

Welcome to VMware Cloud Foundation Lab Constructor

Warning: VCF 3.9.1 contains new networking requirements; see below for an overview.

------------------------------------------------------<Disclaimer>--------------------------------------------------------
If you're reading this, that means you want to test VMware Cloud Foundation (VCF) on non-
certified hardware in a nested environment. A word of caution before we start: while this
does work for testing and demos, it is not supported by VMware. Also, because we're nesting
3 layers deep, performance will suffer. If you are planning on showing any of this as a
demo to a customer, please communicate to them that it will be slower than a physical
environment.
---------------------------------------------------/<Disclaimer>----------------------------------------------------------

VCF Lab Constructor Overview


This document will not explain what VCF is; see the official VMware documentation for that. It
only covers how this nested version works, how to set it up, and how to access it. Let's start by
explaining how we virtualize a full hardware set for VCF. Below is a physical-to-logical view
of the setup.
Creating Layer 1
Layer 1 is your physical lab equipment. This can be one host, or many hosts set up in a cluster with
a vCenter. It can also run on a physical vSAN cluster. Providing the physical equipment to run
this on is up to you; the requirements for the physical layer are listed below in the detailed
requirements section.

Note: If you do use a Physical vSAN, there is a requirement listed below.

Creating Layer 2 – Learning the 3 Methods


Layer 2 and layer 3 are installed by the VLC software script. The VLC script begins by setting
up layer 2, i.e., it creates a nested virtual version of the hardware requirements for VCF.

Method 1- I will build it!

Using method 1 assumes you will provide the required services as spelled out in the VCF
documentation. This includes DNS, NTP, DHCP, and a BGP router; see the official VMware VCF
documentation for exactly what is required.
Method 2 – Build it for me!

Using method 2 means you want this VLC software to provide the required services for you.

Method 3 – Expansion Pack!

Using this method assumes that you have already built a complete lab using Method 1 or 2 and
now you want to add more nested hosts to this existing environment.

Building your first four Hosts with Method 1 or 2


Selecting method 1 or 2 builds the first 4 hosts for the management workload domain. This
is done by creating 4 nested virtual ESXi hosts, which are automatically sized and
created for you. You can configure the hostnames and IPs to be used via the
default-vcf-ems.json configuration you provide to the VCF Lab Constructor. The VCF Lab
Constructor script ships with a sample default-vcf-ems.json file; it is used for building
layer 2 and later to complete the bring-up process, and you can modify it to
your liking. After the nested hosts are created, the VCF Lab Constructor can use this virtual
hardware and the Cloud Foundation Builder appliance to create your first
management workload domain.

Note: You are required to insert your license keys into the default-vcf-ems.json file.
Look for the "licenseFile": "<Insert vCenter License>", sections in the JSON file.
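For illustration, the relevant fields look something like this. This is only a sketch: the "licenseFile" key comes from the bundled file, but the enclosing key shown here ("vcenterSpecs") is illustrative and may differ between VCF versions, so match the structure against your own copy of the JSON.

```json
{
    "vcenterSpecs": [
        {
            "licenseFile": "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
        }
    ]
}
```

Replace the placeholder value with your real vCenter license key, and repeat for each "licenseFile" or "license" entry in the file.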

Cloud Foundation Builder – Bring-up


The next phase is the bring-up process. This is a fully documented part of the installation
of VCF. The VCF Lab Constructor lets you do this manually, so you can follow the
steps of the official VMware documentation, or, if you check the box in the GUI, the VCF Lab
Constructor will complete bring-up for you automatically.

During the bring-up process, VCF uses the 4 nested hosts to build a vSAN cluster and
then creates your first management workload domain.
Once bring-up is complete, you have fully deployed VCF into your nested environment.

Accessing the VCF UI


Because the VCF components are installed at layer 3, you may be wondering how to get network
access to them. To gain access, your "jump" host will need a NIC with multiple IPs, or
multiple NICs; this is diagrammed below in the jump host requirements. Because
everything is nested inside layer 2, all network traffic is broadcast back up to the layer 1 port
groups. Simply having your jump host on this subnet or port group, listening on the default
VCF subnet (e.g., 192.168.0.0), will give you access to everything in layer 3. This jump host can
be nested at layer 1 or a physical desktop with access to the same subnet; nesting it at
layer 1 gives the best performance.

This concludes the overview of the VCF Lab Constructor.

Prerequisites

Physical Install Requirements

Basic Install
Use Case - Create 4 node Management Cluster. Great for learning the basics of VCF.
2 Sockets – 8 Cores Each
Total RAM Requirements – 192 – 512 GB RAM (can be done with less, results may vary)

Advanced Install
Use Case - Complete the basic install and, in addition, be able to create additional
workload domains or install optional components (vROps, vRA, vRNI).
4+ Sockets – 12+ Cores Each
Total RAM Requirements – 768GB – 2TB RAM

Note: your mileage may vary based upon how many of these optional components you
choose to install. The more RAM and CPU you can provide the better.

• Physical Hardware Running vSphere 6.5 or Higher


o 2 TB+ of disk (preferably high-speed flash for best results)
§ If you try to deploy this on spinning disk, you may spend forever
troubleshooting. You have been warned.
o A VDS or standard switch for VCF to deploy to
§ Set the switch MTU to 9000 to support jumbo frames inside the nested
environment
§ Note: If you modify the included JSON for bring-up or use the XLSX,
ensure that the MTU of the vSAN and vMotion network pools is set to
8940. This is required because the nested packets passed up to the
physical-layer DVS need overhead capacity.
• A portgroup for the VCF components with:
o Promiscuous Mode = Accept
o Allow Forged Transmits = Accept
o Allow MAC Address Changes = Accept
• A portgroup for regular VM traffic (this probably exists already)
o Disable all HA, DRS, and vMotion on the physical host(s)
o An L3 BGP router is required for VCF 3.9.1+. The instructions here cover the
NEW VLC built-in router, injected into Cloud Builder when you use the
"Build it for me" option; or see below for how to set up your own VM router using
https://vyos.io/. You can also use your own existing physical or virtual router. It's
your choice; we just need a working, correctly configured BGP router.
o Optional
§ If you are running VSAN on Physical hosts, run this command on all
hosts
• esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
§ If you are using an ESXi host managed by a vCenter and you want
to target that single host *instead* of the vCenter, you will need to do
the following:
• Stop communication between the host and the vCenter Server
by stopping these services with the commands:
• /etc/init.d/vpxa stop
• /etc/init.d/hostd restart
• When these commands have executed, the ESXi host has
stopped communicating with the vCenter Server. Then execute
the script to build the lab. After it completes, start the vpxa
service to add the ESXi host back to vCenter by running the
command:
• /etc/init.d/vpxa start
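If your physical layer uses a VDS, the switch and portgroup settings above can also be applied with PowerCLI from the jump host. This is a sketch; the switch name 'VLC-VDS' and portgroup name 'VCF-PG' are placeholders for your own names:

```powershell
# Placeholder names -- substitute your own switch and portgroup names
$vds = Get-VDSwitch -Name 'VLC-VDS'

# Jumbo frames on the physical-layer switch
Set-VDSwitch -VDSwitch $vds -Mtu 9000

# Security settings on the VCF portgroup: Promiscuous Mode,
# Forged Transmits, and MAC Address Changes all set to Accept
Get-VDPortgroup -Name 'VCF-PG' -VDSwitch $vds |
    Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true
```

For a standard switch, the equivalent settings are on the vSwitch and portgroup security policies in the host client.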

• A “jump box” to push the deployment of VCF onto vSphere or vCenter


o This can be a physical box that has access to all Subnets or a VM guest inside
your infrastructure.
o The jump box will require the following:
§ Windows 10
§ PowerShell, latest version (execution mode bypass)
• PowerCLI - run the following commands in PowerShell:
• Set-ExecutionPolicy Bypass
• Install-Module -Name VMware.PowerCLI
§ Run this to ignore self-signed certs (required):
• Set-PowerCLIConfiguration -DefaultVIServerMode Multiple -InvalidCertificateAction Ignore -DisplayDeprecationWarnings:$false -Confirm:$false
§ Putty - or your favorite SSH tool
§ Notepad++ - or your favorite text editor
§ WinSCP - optional
§ Chrome, Firefox
§ Download all of the following VMware Software
• VMware Cloud Foundation Bundle 3 or higher
• vSphere ESXi ISO GA Build 6.5 or higher
o Use 6.5 for VCF 3.0 Builds
o Use 6.7 for VCF 3.5+
• VMware OVF tool 4.3
o Install the OVF tool on your Jump host
§ VCF Lab Constructor 3 or higher
§ Two NICs attached to control and access the VCF deployment.
• All NICs should be VMXNET3
o Connect one NIC to the Switch created on the VCF
portgroup
§ Set the IP to 10.0.0.220 (example IP)
§ Later, you will add a second '.220' IP on the
subnet you chose during bring up.
o Connect the other NIC to the Switch on the "VM
Network" portgroup so you have remote access to your
jump host.
§ Turn off all Windows firewalls
§ Turn off Windows Defender real-time scanning
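The NIC and firewall steps above can be scripted from an elevated PowerShell prompt on the jump host. A sketch, with assumptions: the interface alias 'Ethernet1' and the 192.168.150.0/24 bring-up subnet are examples, so check Get-NetAdapter and substitute your own values:

```powershell
# First address on the NIC connected to the VCF portgroup
New-NetIPAddress -InterfaceAlias 'Ethernet1' -IPAddress 10.0.0.220 -PrefixLength 24

# Second '.220' address on the subnet chosen during bring-up (example subnet)
New-NetIPAddress -InterfaceAlias 'Ethernet1' -IPAddress 192.168.150.220 -PrefixLength 24

# Turn off all Windows firewall profiles
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False
```

Defender real-time scanning is easiest to disable in the Windows Security UI; it tends to re-enable itself after reboots, so re-check it before long build runs.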

Networking Sample Diagram


New for VCF 3.9.1 and up is the use of an Application Virtual Network (AVN) for all vRealize components.

When using the Build it For Me option, the VLC will inject the necessary BGP L3 router and additional
services into the Cloud Builder appliance. This will create the network diagram below for this new AVN.

Begin Building
1. Extract the VCF Lab Constructor .zip to a folder to work from
a. C:\VCF_Constructor\
2. Place all downloaded prerequisite software into this folder as well
3. By default, 4 hosts are automatically created for the management domain. The names
and IPs of these first four hosts are defined in a pre-configured .json file, which you can
modify to your liking. This .json file is also used by the Cloud Foundation Builder for bring-up, so be
careful what you modify; everything is required.

Note: You are required to Insert your License Keys into the
BUILD_IT_FOR_ME_default-vcf-ems.json file.
Look for the "licenseFile": "<Insert vCenter License>", Sections in the JSON file.

**************************<Important Notice!>*******************************
The VLC now contains two JSON files to lab with.
The BUILD_IT_FOR_ME_default-vcf-ems.json file is preconfigured for the BUILD
IT FOR ME option. It differs in that the default gateway for everything
points to the Cloud Builder appliance (10.0.0.221), which in this instance also acts as the
router/NTP/DNS/DHCP server.
The second JSON is for the I WILL BUILD IT option. In this instance you will use your own
router for AVN, and you can customize the I_WILL_BUILD_IT_default-vcf-ems.json file. The
gateway in this file is set to 10.0.0.1.
**********************</End of Important Notice!>*****************************

4.
a. Pick the right default-vcf-ems.json file for your method (see the notice above)
b. From the jump host, run the PowerShell script VCLGui.ps1
c. PowerShell will open and run a few checks, then
d. wait; the following window will appear

5. Select the method you plan to use. Depending on the method you select, the window will change
and you can then fill out the data fields. Some fields will be greyed out based on the method
selected; if they are greyed out, they are not required.
6. Enter the FQDN of the Lab you will be building.
a. Note: it is best practice to give VCF its own subdomain, e.g. SDDC.LAB.LOCAL
7. Enter the CIDR for the Management domain. i.e. 192.168.150.0/24
8. Enter the Gateway for the management Network
9. Enter the IP to use for the Cloud Foundation Build appliance
10. Enter the path for the Cloud Foundation Builder OVA
11. Enter the path for the Host JSON file
a. Note: This is only used with the Expansion Pack or Expert mode Method. In Expert
Mode it will create the default 4 hosts for the management Workload domain plus any
additional hosts specified here.
12. Enter the path for the ESXi ISO
13. Enter a naming prefix for the first 4 hosts in the management workload domain.
14. Enter the Master Password to be used for the ESXi Hosts
15. Enter the IP address for the NTP and DNS Servers in your Environment.
16. Enter the path to the default-vcf-ems.json file.

Note: You are required to Insert your License Keys into the default-vcf-ems.json file.
Look for the "licenseFile": "<Insert vCenter License>", Sections in the JSON file.

17. Enter the host or vCenter IP information of the physical environment you would like to
deploy to.
a. Enter the username and password used to connect to this infrastructure. Be advised that the
password is captured in clear text in the log and .ini files; plan accordingly.
18. Click Connect
a. The Constructor will now connect and allow you to select the cluster, Network &
Datastore to place the nested hosts into.
19. Using the All Flash button will deploy your layer 2 ESXi hosts as all-flash vSAN nodes. Note: by
default vSAN uses more RAM when using SSDs. See KB######
20. Click File to save your data entry configuration to an .ini file before you construct; this will
save you time when you start over. Notice you can also load a previous config from this menu.
21. Click Construct and watch the magic happen.

Once you click construct, the VCF Lab Constructor will build the nested ESXi hosts you specified in the
JSON, and will configure the hosts to work with the Cloud Foundation Builder appliance.

Next it will automatically deploy the Cloud Foundation Builder appliance

Once the Cloud Foundation Builder is deployed and running you can connect to it at this URL.

Note: if you selected Do Bring-up, you don't need to do anything; the VLC will monitor the progress for
you in the PowerShell window.

https://Cloud_Builder_VM_IP:8008

If you selected "Do Bring-up", the VLC will complete the bring-up process for you.

If you did not select "Do Bring-up", you will need to complete the bring-up process manually following
the VMware documentation. Note that the JSON for this manual bring-up process is preconfigured for you.
Feel free to also follow the VMware documentation and use the Excel spreadsheet.

Using Expansion Pack - DNS


Congrats! If you have traveled this far, you have successfully deployed VCF in a nested lab,
and you are ready to scale out and expand your installation.

There are two use case scenarios for the expansion pack.

I will Build it > Expansion Pack


Using this method means you have created DNS entries for the host names you're adding to the
environment. The VLC will continue to use your DNS server.

Build it for me > Expansion Pack


Adding DNS entries
If you've used "Build it for me" and would like to add additional workload domains or solutions to your
deployment, you will need to add DNS entries. To accomplish this, log in to your Cloud Builder VM
using SSH. The username is admin; the password is the one you set in the VLC wizard.

You will need to edit the “db” file for the zone you asked it to create. In my case that zone file is located
here: /etc/maradns/db.vcf.lab.local

I use vi as the editor, so enter the following command:

vi /etc/maradns/db.vcf.lab.local

The result of this command opens the db zone file in the vi editor:

Next, duplicate the line of the currently defined vCenter by moving the cursor to that line with the
arrow keys and pressing yy then p (yank the line, then paste it below). Once that is done, you should see
something like this:
After this, change the newly copied line's name and IP address, ensuring that the IP address
is not already in use! For those not familiar with vi commands, here is a cheatsheet:
https://devhints.io/vim

After making your changes and saving the file, you will need to reload the maradns and
maradns.deadwood services. MaraDNS takes care of forward lookups and Deadwood takes care of
reverse DNS.
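On the Cloud Builder VM, reloading both services usually amounts to the following. The systemd unit names are assumed to match the service names mentioned above; adjust if yours differ:

```shell
# Reload both DNS services after editing the zone file
systemctl restart maradns            # forward lookups
systemctl restart maradns.deadwood   # reverse DNS / recursion
```

You can confirm the new record resolves with a quick lookup from the jump host against the Cloud Builder IP (e.g. nslookup against 10.0.0.221 when using Build it for me).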
Follow this same procedure when adding DNS entries for vRSLCM, vROps, vRA, Horizon, and
their components.

Examples of the DNS files.

psc-1.vrack.vsphere.local. FQDN4 10.0.0.11

esxi-4.vrack.vsphere.local. FQDN4 10.0.0.103

sddc-manager.vrack.vsphere.local. FQDN4 10.0.0.10

esxi-3.vrack.vsphere.local. FQDN4 10.0.0.102

nsxmanager.vrack.vsphere.local. FQDN4 10.0.0.13

loginsight-node-1.vrack.vsphere.local. FQDN4 10.0.0.16

vcenter-1.vrack.vsphere.local. FQDN4 10.0.0.12

loginsight-node-3.vrack.vsphere.local. FQDN4 10.0.0.18

loginsight-node-2.vrack.vsphere.local. FQDN4 10.0.0.17

esxi-1.vrack.vsphere.local. FQDN4 10.0.0.100

esxi-10a.vrack.vsphere.local. FQDN4 10.0.0.110

esxi-11a.vrack.vsphere.local. FQDN4 10.0.0.111

esxi-12a.vrack.vsphere.local. FQDN4 10.0.0.112

esxi-2.vrack.vsphere.local. FQDN4 10.0.0.101

psc-2.vrack.vsphere.local. FQDN4 10.0.0.14

load-balancer.vrack.vsphere.local. FQDN4 10.0.0.15

EC35-CB-01a.vrack.vsphere.local. FQDN4 10.0.0.225

esxi-100a.vrack.vsphere.local. FQDN4 10.1.0.110

esxi-110a.vrack.vsphere.local. FQDN4 10.1.0.111

esxi-120a.vrack.vsphere.local. FQDN4 10.1.0.112

If you add hosts on a different network, you'll also need to update another file, /etc/dwood3rc, adding the
in-addr.arpa entry for that additional network:

bind_address = "10.0.0.225"

chroot_dir = "/etc/maradns"

upstream_servers = {}

upstream_servers["."]="8.8.8.8, 8.8.4.4"

upstream_servers["0.0.10.in-addr.arpa."] = "127.0.0.1"

upstream_servers["0.1.10.in-addr.arpa."] = "127.0.0.1"

upstream_servers["vrack.vsphere.local."] = "127.0.0.1"

upstream_servers["corp.local."] = "10.0.0.250"

recursive_acl = "10.0.0.0/24"

filter_rfc1918 = 0
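For example, if you later added hosts on a hypothetical 10.2.0.0/24 network, the matching reverse-lookup line would be:

```
upstream_servers["0.2.10.in-addr.arpa."] = "127.0.0.1"
```

Note the reversed octet order in the in-addr.arpa name. Restart the maradns.deadwood service after editing this file.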

Best Practices
• Change Management Cluster DRS from fully automated to partially automated to prevent
vMotions from killing your lab.

Reduce the default size of the vRealize Log Insight nodes.


The default-vcf-ems.json file includes the size-reduction configuration. If you create your own JSON file,
add this section to it (note the "vmSize": "small" setting):

"logInsightSpecs": [
{
"vcenterId": "vcenter-1",
"adminPassword": "VMware1234!",
"sshPassword": "VMware12345!",
"loadBalancerHostname": "load-balancer",
"loadBalancerIpAddress": "10.0.0.15",
"vmSize": "small", <------------------------ Node size set to "small"
"license": "<Insert License Key>",
"logInsightNodeSpecs": [
{

"ipAddress": "10.0.0.16",
"hostname": "loginsight-node-1"
}, {
"ipAddress": "10.0.0.17",
"hostname": "loginsight-node-2"
}, {
"ipAddress": "10.0.0.18",
"hostname": "loginsight-node-3"
}
]
}
]

Time
How long does this all take?

Your mileage will vary greatly depending on your hardware.

For reference, a single Dell R720 configured with 384 GB RAM and 4 SSDs can complete the deployment and
bring-up process in 2.5 hours on average.

Clean up, Rinse and Repeat.


If at any point you would like to start from a fresh deployment, complete the following:

1. Login to the host or vCenter at Layer 1.


2. Power off all Nested ESXi hosts and Cloud Builder
3. Delete them all from Disk.
4. Delete the temp directory from the Lab Constructor directory.
5. Delete the VLC_vSphere.iso from the Lab Constructor directory.
6. Optionally remove any old log files from the Lab Constructor directory.

Troubleshooting
Use the VLC log (see the Logging section below) to follow progress and to troubleshoot any issues along the way.

• If your ESXi host or vCenter in the physical environment has a self-signed or untrusted SSL cert
o Set your PowerShell environment to ignore the certificate:
§ Set-PowerCLIConfiguration -InvalidCertificateAction Ignore
• Bring-up Logs Troubleshooting
o https://docs.vmware.com/en/VMware-Cloud-
Foundation/3.0/com.vmware.vcf.ovdeploy.doc_30/GUID-BD989650-BB59-4352-A278-
CDF1B9AB9B47.html
• Most times a FAILED task can be restarted (we typically try this at least 3 times before really
digging in to see what the problem is)
• If you think a task is stuck and it's been hours, try rebooting the SDDC Manager and retrying the
bring-up task.
• Depending on the stage where you're having issues, you can log in to any of the layer 2 hosts
and vCenters to troubleshoot.
• Post Questions on the Socialcast group VCF LAB CONSTRUCTOR.

Known Issues
This software was originally designed to run on physical hardware with SSDs and 10Gb networks, so
performance issues will break your deployment.

During bring-up, the SDDC Manager deploys 3 vRLI nodes one at a time. If you choose to do the
bring-up process manually, VCF will force these to be medium-sized nodes (8 vCPU, 32 GB RAM), which
may be too much for your lab. If so, choose the automatic bring-up process; the VCF Lab Constructor
will do some magic and resize these nodes to xsmall before they are deployed.

Do not Mount the ISO for ESXi from an NFS Datastore

Logging
The VCF Lab Constructor creates a log to verify successful deployment. For reference, below is the log of
a successful deployment. This log can be found in the VCF Directory where you launched the VCF Lab
Constructor.
11:56:30 :> ----------------------Form Inputs------------------3.0--
11:56:30 :> esxhost:
11:56:30 :> vc01.lab.local
11:56:30 :> username:
11:56:30 :> [email protected]
11:56:30 :> password:
11:56:30 :> VMware123!
11:56:30 :> netName:
11:56:30 :> PG-GuestVMs
11:56:30 :> cluster:
11:56:30 :> Nested
11:56:30 :> datastore:
11:56:30 :> HPDatastore
11:56:30 :> Mgmt CIDR:
11:56:30 :> 192.168.150.0/24
11:56:30 :> Mgmt Gateway:
11:56:30 :> 192.168.150.1
11:56:30 :> vmPrefix:
11:56:30 :> test
11:56:30 :> CB OVA Loc:
11:56:30 :> F:\VCF301\vcf-cloudbuild-3.0.1.0-10426441_OVF10.ova
11:56:30 :> vsphereISOLoc:
11:56:30 :> F:\VCF301\VMware-VMvisor-Installer-6.5.0.update02-
8294253.x86_64.iso
11:56:30 :> jsonLoc:
11:56:30 :>
11:56:30 :> Master Pass:
11:56:30 :> VMware123!
11:56:30 :> Config Internal Svcs:
11:56:30 :> False
11:56:30 :> bringupAfterBuild:
11:56:30 :> False
11:56:30 :> bringup JSON:
11:56:30 :> F:\VCF301\default-vcf-ems.json
11:56:30 :> allFlashBuild
11:56:30 :> True
11:56:30 :> --------------------END-Form Inputs--------------------
11:56:38 :> Extracting 'F:\VCF301\VMware-VMvisor-Installer-6.5.0.update02-
8294253.x86_64.iso' to 'F:\VCF301\temp\ISO'...
11:56:46 :> Copy complete
11:56:47 :> Starting creation of 4 found in JSON and template.
11:56:48 :> Creating host esx1
11:56:49 :> Creating host esx2
11:56:50 :> Creating host esx3
11:56:50 :> Creating host esx4
11:58:37 :> Nested Hosts Creation Time: 00:02:06.3926573
11:58:44 :> Setting ISOLINUX.CFG info...
11:58:44 :> Setting BOOT.CFG info...
11:58:44 :> Setting VLC.cfg info...
11:58:49 :> Uploading Custom vSphere ISO to the \ISO directory of HPDatastore
11:59:41 :> Starting up esx1
11:59:42 :> Starting up esx2
11:59:43 :> Starting up esx3
11:59:43 :> Starting up esx4
12:07:30 :> All hosts online, starting additional config.
12:07:30 :> Nested Hosts Online Time: 00:10:59.8137084
12:07:30 :> Importing CloudBuilder OVF
12:07:36 :> OVF CLI for CloudBuilder import: & 'C:\Program
Files\VMware\VMware OVF Tool\ovftool.exe' --name='test-CB-01a' --
acceptAllEulas --skipManifestCheck --X:injectOvfEnv --net:'Network 1=PG-
GuestVMs' -ds='HPDatastore' -dm=thin --noSSLVerify --
prop:guestinfo.ip0=192.168.150.225 --prop:guestinfo.netmask0=255.255.255.0 --
prop:guestinfo.gateway=192.168.150.1 --prop:guestinfo.hostname='test-CB-01a'
--prop:guestinfo.DNS=192.168.50.49 --prop:guestinfo.ntp=108.61.194.85 --
prop:guestinfo.ROOT_PASSWORD='VMware123!' --
prop:guestinfo.ADMIN_USERNAME='admin' --
prop:guestinfo.ADMIN_PASSWORD='VMware123!' --powerOn 'F:\VCF301\vcf-
cloudbuild-3.0.1.0-10426441_OVF10.ova'
'vi://[email protected]:[email protected]:443?moref=vim.Clu
sterComputeResource:domain-c71'
12:17:42 :> Waiting for CloudBuilder to be available...
12:18:13 :> Waiting for CloudBuilder to be available...
12:18:43 :> CloudBuilder online!
12:18:43 :> Total Time for Imaging: 00:22:12.2948619
12:18:43 :> Total RunTime: 00:22:12.3482089
Installing Nested VCF 3.9.1 with a VyOS Router - Using the I Will Build It
Method

Written by the amazing Tom Stephens

There are several different methods one can use to deploy VCF 3.9.1 in a nested environment.
Using VLC, you have the choice of having VLC build out the infrastructure services needed (NTP, DNS,
DHCP, etc.) or you can choose to build the infrastructure yourself. This write-up focuses on the latter
method.

For reference, I’ll be using the following diagram:

In this setup, there is a 10.0.0.0/24 network that is used as the management network. All of the core
VCF VMs (including Cloud Builder, NSX Manager, and the SDDC Manager) will be deployed here.

Additional infrastructure VMs are also deployed on this network. These can include a 'jump' host to
allow for remote access and an Active Directory server which provides AD, DNS, and NTP services.
A key component is a VM that acts as a router between all the networks. This router provides the DHCP
services required for VXLAN, as well as BGP to allow route information to be shared.

The BGP configuration is currently mandatory with VCF 3.9.1. It allows the Application Virtual
Networks (AVNs) to be deployed. As part of this, VCF now deploys two edge services gateways and a
UDLR/DLR as it deploys the vRealize Suite products. Because Log Insight is deployed during bring-up,
the BGP configuration must exist prior to deploying VCF.

The router functionality is provided by an appliance called VyOS (https://vyos.io/). In this example, the
VyOS VM has 4 legs, as shown:

Eth0 is used for the 192.168.0.0/24 network, which provides connectivity to the Internet.

Eth1 has several IP addresses assigned to it, the most important for these purposes is the 10.0.0.1
address, which is the default gateway for the management network.

Eth2 and Eth3 provide legs for each of the AVN networks to be used.

Note that these AVN networks use a /24 netmask, which is overkill for this purpose. It would be
better to subnet into /27 networks to reduce the amount of IP waste in the environment.
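For reference, a /24 splits into eight /27 subnets of 30 usable addresses each. A quick sketch listing them for the first AVN network:

```shell
# List the eight /27 subnets contained in 172.27.11.0/24
# (/27 blocks start every 32 addresses: .0, .32, ..., .224)
for i in $(seq 0 32 224); do
    echo "172.27.11.$i/27"
done
```

Any one of these blocks would comfortably hold the handful of edge and uplink addresses each AVN leg needs.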

The vyos configuration is very similar to other router configurations. The interface configuration section
of the configuration looks like this:
At this point, you should be able to ping any of the existing VMs on the management network.

The next step is to configure BGP.

In this configuration, the single VyOS router provides functionality that would typically be pushed up
past the ToR switches in a normal environment.

Referring to the first diagram, you can see that we will have two autonomous systems (AS). The VyOS router
will be part of AS 65001. It will communicate with the edge services gateways, which will be contained in
AS 65003, in an external BGP (eBGP) configuration.

The first step is to configure BGP on the VyOS router. To do this, you will enter configuration mode and
execute the following commands as needed:

• To set an IP address on an interface:

set interfaces ethernet eth2 address 172.27.11.1/24

set interfaces ethernet eth3 address 172.27.12.1/24

• To configure BGP:

set protocols bgp 65001 neighbor 172.27.11.2 remote-as 65003

set protocols bgp 65001 neighbor 172.27.11.3 remote-as 65003

set protocols bgp 65001 neighbor 172.27.12.2 remote-as 65003

set protocols bgp 65001 neighbor 172.27.12.3 remote-as 65003
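Note that in VyOS, set commands only take effect after a commit, and survive reboots only after a save. A full configuration session therefore looks roughly like this:

```
configure
set interfaces ethernet eth2 address 172.27.11.1/24
set interfaces ethernet eth3 address 172.27.12.1/24
set protocols bgp 65001 neighbor 172.27.11.2 remote-as 65003
set protocols bgp 65001 neighbor 172.27.11.3 remote-as 65003
set protocols bgp 65001 neighbor 172.27.12.2 remote-as 65003
set protocols bgp 65001 neighbor 172.27.12.3 remote-as 65003
commit
save
exit
```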

Once BGP is configured, the VyOS configuration will have four neighbors like this:
At this point, you can examine the routing table on the VyOS router and verify that all the expected routes are present:
Note that no routes have been received via BGP yet, because the bring-up process for VCF hasn't started and,
as a result, the edges are not deployed.

At this point, we have to tell BGP to advertise the route to the management (10.0.0.0/24) network. If we
don't do this, the deployment will fail when VCF gets to deploying the edge devices. This occurs because Cloud Builder
logs in to the NSX Manager and attempts to SSH to the edge devices from there to check the BGP status; if
the packets cannot be routed successfully, the connection will not be established and bring-up will fail.

To advertise the management network, we will execute the following on the vyos router:

vyos@vm-router# set protocols bgp 65001 address-family ipv4-unicast network 10.0.0.0/24

This will make the vyos router config for BGP look like this:
protocols {
    bgp 65001 {
        address-family {
            ipv4-unicast {
                network 10.0.0.0/24 {
                }
            }
        }
        neighbor 172.27.11.2 {
            remote-as 65003
        }
        neighbor 172.27.11.3 {
            remote-as 65003
        }
        neighbor 172.27.12.2 {
            remote-as 65003
        }
        neighbor 172.27.12.3 {
            remote-as 65003
        }
    }
}

Once the VyOS router configuration is complete, VLC can then be used to provision the initial set of hosts
and the Cloud Builder appliance. The 3.8 version of VLC will work for this; just be sure to select the
proper Cloud Builder image and do not select the 'Do Bringup' option.
Next, connect to the Cloud Builder appliance and begin bring-up. You can use the JSON file included with
the VLC zip file.

Wait until you see that bring-up has completed successfully:


Log in to the VyOS router and examine the BGP information:
What you want to see there is that you are receiving and sending messages, and that BGP has been up
for a period of time.

Check the state of each of the neighbors and verify that it is Established:

Verify that you are now seeing the routes being advertised through BGP:
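The checks described above correspond to the standard VyOS operational-mode show commands, run outside configure mode:

```
show ip bgp summary      # message counts and up/down time per neighbor
show ip bgp neighbors    # per-neighbor detail, including state (Established)
show ip route            # routing table, including routes learned via BGP
```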
Reboot the SDDC Manager.

You should now be able to enjoy your newly created VCF environment!
