Jekyll2023-03-31T15:17:34+00:00https://networkbrouhaha.com/feed.xmlNetwork BrouhahaNetworking, Cloud, Automation, Infrastructure, Containers and General GeekeryMatt ElliottCloud Director V to T Migration Videos2023-03-31T00:00:00+00:002023-03-31T00:00:00+00:00https://networkbrouhaha.com/2023/03/vcd-v2t-videos<p>Recently I recorded a couple of videos with my teammate, Joseph Polcar, on Cloud Director V to T migration. The first video covers an overview of the migration tool, running and evaluating an assessment, and the other steps needed to prepare for a migration. The second video covers the YAML configuration file used by the migration tool, what happens during each phase of the migration, and how to perform a rollback. Hopefully you find these videos helpful. Feel free to leave any questions in the comments, or contact me on <a href="https://www.linkedin.com/in/ethernet0/">LinkedIn</a>. ✌️</p>
<div align="center">
<iframe width="560" height="315" src="https://www.youtube.com/embed/FsspwtmUny0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
<iframe width="560" height="315" src="https://www.youtube.com/embed/ZE7lyZesHco" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen=""></iframe>
</div>Matt ElliottLinks to two videos I created with my colleague Joseph Polcar on V to T migration for Cloud DirectorIntroducing IP Spaces for VMware Cloud Director2023-01-31T00:00:00+00:002023-01-31T00:00:00+00:00https://networkbrouhaha.com/2023/01/vcd-intro-ip-spaces<p class="center"><a href="/resources/2023/01/sd-computer-network.png" class="drop-shadow"><img src="/resources/2023/01/sd-computer-network.png" alt="" /></a></p>
<p>Welcome! This blog post is about a new feature in <a href="https://www.vmware.com/products/cloud-director.html">VMware Cloud Director</a> (VCD), IP Spaces. As a VMware employee, I want to make it clear that the thoughts and opinions expressed in this post are my own and do not necessarily reflect the position of my employer. With that out of the way, let’s try to wrap our heads around IP Spaces in Cloud Director!</p>
<p>I find myself asking the question “Why?” frequently in customer conversations (shout out to <a href="https://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action">Simon Sinek</a> and the <a href="https://simonsinek.com/books/start-with-why/">Golden Circle</a>!). In this blog post, my goal is to get to the “why” of IP Spaces. I will touch on the “how” and “what”, but these are fully covered in the Cloud Director documentation and other blog posts, which are linked at the bottom of this post.</p>
<p class="center"><a href="/resources/2023/01/golden-circle2.png" class="drop-shadow"><img src="/resources/2023/01/golden-circle2.png" alt="" width="400" /></a>
<br /><em>Simon Sinek’s Golden Circle</em></p>
<h1 id="the-background">The Background</h1>
<p>When backed by NSX-V, IP address management in Cloud Director is simple. The typical architecture consists of an external network with tenant edge gateways connected. The provider specifies a block of usable IPs that can be assigned to the external interface of each edge. If needed, additional IPs can be pulled from the block and assigned to the edge external interface for NAT, Load Balancing VIPs, VPN endpoints, etc. Everything the tenant needs to connect to the outside world can be accomplished by assigning one or more IPs to an edge interface, and routing is very simple.</p>
<p class="center"><a href="/resources/2023/01/vcd-nsxv-connectivity.png" class="drop-shadow"><img src="/resources/2023/01/vcd-nsxv-connectivity.png" alt="" /></a>
<br /><em>Cloud Director External Connectivity with NSX-V</em></p>
<p>External connectivity is quite different when Cloud Director is backed by NSX-T. External networking is provided via a T0 Gateway, which is created by the provider and imported into Cloud Director. Each tenant edge gateway is a T1 router that is connected to the T0 (or in some cases, a T0 VRF). Addresses used by the tenant are no longer assigned to an interface, but rather assigned via an endpoint IP, which is essentially a loopback address on the T1. Since there are now multiple hops to get from the data center network, through the T0, to the tenant T1, dynamic routing (e.g. BGP) is typically used to advertise the endpoint IPs that are assigned to the T1. These endpoint IPs can be used to SNAT workloads to the internet or terminate IPsec tunnels, providing very similar functionality to what is available in NSX-V.</p>
<p>This change in behavior led to IP address sprawl and providers struggled to keep track of which tenants were using which IPs. To address this challenge, IP Spaces was born.</p>
<p class="center"><a href="/resources/2023/01/vcd-nsxt-connectivity.png" class="drop-shadow"><img src="/resources/2023/01/vcd-nsxt-connectivity.png" alt="" /></a>
<br /><em>Cloud Director External Connectivity with NSX-T</em></p>
<h1 id="ip-spaces-overview">IP Spaces Overview</h1>
<p>In VCD 10.4.1, there is a new configuration section to define IP Spaces. IP Spaces can be Public, Private, or Shared. Public IP Spaces are defined by the provider and specify what public IPs can be consumed by tenants. Private IP Spaces are defined by the tenant and are intended to simplify the process of connecting a tenant virtual data center (VDC) to a corporate WAN. Shared IP Spaces are similar to Private IP Spaces but are defined by the provider, giving providers a streamlined way to deliver dedicated services to tenants, such as NTP, software repos, and managed services.</p>
<p>The scope of an IP Space defines which networks are internal or external, or in other words, which networks are local to VCD and which are remote. If you are familiar with the old Cisco terminology for NAT, think inside and outside networks. Relating this to NAT is helpful because that is one of the primary reasons that these scopes are defined. In future VCD releases, this information may be used to automatically create NAT and NONAT rules to simplify the configuration of typical architectures.</p>
<p>Rounding out the concepts included in an IP Space are IP ranges, IP prefixes, and quota settings. IP ranges can be supplied in list form or CIDR notation and must be within the range defined as the internal scope. Tenants can request individual IPs out of the range to assign for services like NAT or a load balancer VIP. IP prefixes are also constrained to the internal scope, and they define specific subnets that tenants can consume. Quota settings define how many individual IPs and prefixes each tenant can use.</p>
<h1 id="the-why">The Why</h1>
<p>Defining these parameters – IP Space type, scope, ranges, prefixes, and quotas – provides VCD with far more information than was available with the basic IP address management in previous versions. Providers have fine-grained control over exactly which IP addresses and ranges tenants are allowed to consume. This also means that future VCD releases will have enough information to potentially configure NAT/NONAT rules, firewall rules, and BGP policy (prefix lists/filtering/etc.) for a variety of common topologies. The initial release of IP Spaces is just the beginning, providing a much more manageable and coherent IP address management system for providers and tenants. I am looking forward to seeing what other new capabilities will be unlocked as this feature evolves.</p>
<h1 id="helpful-links">Helpful Links</h1>
<p>Release Notes: <a href="https://docs.vmware.com/en/VMware-Cloud-Director/10.4.1/rn/vmware-cloud-director-1041-release-notes/index.html">https://docs.vmware.com/en/VMware-Cloud-Director/10.4.1/rn/vmware-cloud-director-1041-release-notes/index.html</a></p>
<p>Documentation: <a href="https://docs.vmware.com/en/VMware-Cloud-Director/10.4/VMware-Cloud-Director-Tenant-Portal-Guide/GUID-FB230D89-ACBC-4345-A11A-D099D359ED1B.html">https://docs.vmware.com/en/VMware-Cloud-Director/10.4/VMware-Cloud-Director-Tenant-Portal-Guide/GUID-FB230D89-ACBC-4345-A11A-D099D359ED1B.html</a></p>
<p>Other blog posts on IP Spaces:</p>
<ul>
<li>New Networking Features in VMware Cloud Director 10.4.1: <a href="https://fojta.wordpress.com/2022/12/16/new-networking-features-in-vmware-cloud-director-10-4-1/">https://fojta.wordpress.com/2022/12/16/new-networking-features-in-vmware-cloud-director-10-4-1/</a></li>
<li>IP Spaces in VMware Cloud Director 10.4.1 – Part 1 – Introduction & Public IP Spaces: <a href="https://kiwicloud.ninja/?p=69005">https://kiwicloud.ninja/?p=69005</a></li>
<li>IP Spaces in VMware Cloud Director 10.4.1 – Part 2 – Private IP Spaces: <a href="https://kiwicloud.ninja/?p=69028">https://kiwicloud.ninja/?p=69028</a></li>
<li>IP Spaces in VMware Cloud Director 10.4.1 – Part 3 – Tenant Experience, Compatibility & Summary: <a href="https://kiwicloud.ninja/?p=69044">https://kiwicloud.ninja/?p=69044</a></li>
</ul>
<h1 id="notes">Notes</h1>
<p>The <a href="/resources/2023/01/sd-computer-network.png">two</a> <a href="/resources/2023/01/golden-circle2.png">images</a> at the top of this post were made using <a href="https://en.wikipedia.org/wiki/Stable_Diffusion">Stable Diffusion</a>, an AI image generator. The first was generated by a prompt to create a picture with computer networking and clouds. The second was used to modify a <a href="/resources/2023/01/golden-circle.png">simple diagram</a> using pix2pix and img2img. I find it weird, and I like it.</p>Matt ElliottThis post provides an introduction to IP Spaces, a new IP Address Management scheme for VMware Cloud Director.Using cloud-init for Customization with VCD and Terraform2022-03-10T00:00:00+00:002022-03-10T00:00:00+00:00https://networkbrouhaha.com/2022/03/cloud-init-vcd<p>Recently I decided to update a blog post I wrote in 2018, <a href="https://networkbrouhaha.com/2018/03/vcd-terraform-example/">Simple cloud automation with vCD, Terraform, ZeroTier and Slack</a>. At a very high level, this blog post walks through deploying a vApp to VCD that is customized to run a script at first boot. In the original blog post, I relied on <a href="https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-58E346FF-83AE-42B8-BE58-253641D257BC.html">Guest Customization</a> with VMware tools to accomplish this. For a variety of reasons - primarily curiosity - I decided to use <a href="https://cloud-init.io/">cloud-init</a> to run the script instead. Cloud-init is quite flexible and well supported, but in hindsight, my choice led me down quite a rabbit hole. This post covers the details of how cloud-init reads its configuration through VMware tools, tips for troubleshooting cloud-init, and some other lessons learned along the way. Of course, I’ll share a working example that deploys a vApp to VCD using cloud-init for customization.</p>
<p>The act that set the stage for this post is something I have done many times: I uploaded an Ubuntu ISO to a VCD catalog and used it to create a vApp. That vApp, and the single VM it contained, would be added to the same VCD catalog as a vApp template. This was my first mistake, but it took me several hours to figure out why.</p>
<p class="center"><img src="https://media.giphy.com/media/xUPGcl3ijl0vAEyIDK/giphy.gif" alt="" /></p>
<p>Before we get into that, let’s level set on how cloud-init works.</p>
<h1 id="the-basics-of-cloud-init">The Basics of cloud-init</h1>
<p>Here is how cloud-init describes itself:</p>
<p>“Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization.
It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.”
-<a href="https://cloudinit.readthedocs.io/">https://cloudinit.readthedocs.io/</a></p>
<p>Taking a look at <a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html">the provided configuration examples</a> makes it clear what the capabilities are:</p>
<ul>
<li>Add/configure users</li>
<li>Create files</li>
<li>Install or update software</li>
<li>Configure networking</li>
<li>Configure Certificate Authorities</li>
<li>Run scripts/arbitrary commands</li>
<li>And <a href="https://cloudinit.readthedocs.io/en/latest/topics/modules.html">much more</a></li>
</ul>
<p>The typical scenario for cloud-init is that a config file is supplied when a server boots, then read and executed by cloud-init. The cloud-init docs refer to this config file as <code class="language-plaintext highlighter-rouge">user-data</code>. So, how is <code class="language-plaintext highlighter-rouge">user-data</code> supplied? The details vary, but a datasource is the vehicle that delivers configuration files to cloud-init. Cloud-init supports several <a href="https://cloudinit.readthedocs.io/en/latest/topics/datasources.html">datasources</a> to deliver <code class="language-plaintext highlighter-rouge">user-data</code> (there are datasources available for major cloud providers), but in a VMware environment the most promising options are <a href="https://cloudinit.readthedocs.io/en/latest/topics/datasources/ovf.html">OVF</a> and <a href="https://cloudinit.readthedocs.io/en/latest/topics/datasources/vmware.html">VMware</a>.</p>
<ul>
<li>The <a href="https://cloudinit.readthedocs.io/en/latest/topics/datasources/vmware.html">VMware datasource docs</a> state that it supports <code class="language-plaintext highlighter-rouge">GuestInfo</code> keys for supplying <code class="language-plaintext highlighter-rouge">user-data</code>. <code class="language-plaintext highlighter-rouge">GuestInfo</code> is metadata in the form of key/value pairs set in a VM’s <code class="language-plaintext highlighter-rouge">extraConfig</code> property, which can be read by VMware tools. As long as this metadata can be set via the VCD Terraform provider, this sounds like the datasource that would be used by cloud-init.</li>
<li>The <a href="https://cloudinit.readthedocs.io/en/latest/topics/datasources/ovf.html">OVF datasource docs</a> state “The OVF Datasource provides a datasource for reading data from on an Open Virtualization Format ISO transport.” That sounds less promising. I’m not interested in building an ISO to bootstrap cloud-init.</li>
</ul>
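<p>As an aside, on plain vSphere (outside of VCD), the <code class="language-plaintext highlighter-rouge">GuestInfo</code> keys can be set directly with the <code class="language-plaintext highlighter-rouge">govc</code> CLI. A rough sketch, using a placeholder VM name and config file, might look like this (<code class="language-plaintext highlighter-rouge">base64 -w0</code> is the GNU coreutils syntax; macOS uses <code class="language-plaintext highlighter-rouge">base64 -b 0</code>):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Set base64-encoded user-data in the VM's extraConfig via GuestInfo keys
govc vm.change -vm my-vm \
  -e guestinfo.userdata="$(base64 -w0 cloud-config.yaml)" \
  -e guestinfo.userdata.encoding="base64"
</code></pre></div></div>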
<p>Cue my surprise when I finally got cloud-init working and the logs indicated that it had used the OVF datasource. The datasource used by cloud-init can be checked with the <code class="language-plaintext highlighter-rouge">cloud-id</code> command, and this was the output I received:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ubuntu@ubuntu-impish-21:~$ cloud-id
ovf
</code></pre></div></div>
<p>Since all of the cloud-init code is available on GitHub, it’s not too difficult to see how the various datasources work. After a bit of snooping, it’s clear that the OVF datasource also reads the <code class="language-plaintext highlighter-rouge">extraConfig</code> metadata through VMware Tools. In this case, it appears that the cloud-init docs are out of date. That was one of many valuable lessons during this process. Let me share two important ones with you.</p>
<h1 id="lesson-1-check-github-issues">Lesson #1: Check GitHub issues</h1>
<p>The VCD Terraform Provider docs have a <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/guides/vm_guest_customization">section on guest customization</a>, but it doesn’t mention cloud-init specifically. It does show an example of configuring metadata with the provider, so I felt confident that I could supply cloud-init <code class="language-plaintext highlighter-rouge">user-data</code> with that method. I mentioned in the intro that I made a mistake by attempting to use cloud-init with an Ubuntu server that I built from an ISO. I’m quite sure there is a way to make it work, but I kept hitting roadblocks. Had I skimmed the resolved issues in the VCD Terraform Provider repo, I would have found <a href="https://github.com/vmware/terraform-provider-vcd/issues/667#issuecomment-844030920">this helpful comment</a>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The problem that I had was the OVA machine I tried to use.
A standard version of Ubuntu.
First part to make this working correctly is to download the cloud image at:
http://cloud-images.ubuntu.com/
</code></pre></div></div>
<p>The commenter then goes on to provide a working example of using cloud-init with the VCD Terraform Provider. Normally I do a search through GitHub issues when I’m troubleshooting something. In this case, inexplicably, I did not. If I had read that comment first, I would have saved a lot of time. However, I would not have learned so many useful strategies for troubleshooting cloud-init.</p>
<p class="center"><img src="https://media.giphy.com/media/3o7aD4ubUVr8EkgQF2/giphy.gif" alt="" /></p>
<h1 id="lesson-2-use-a-cloud-image">Lesson #2: Use a Cloud Image</h1>
<p>I was aware cloud images existed, but I was set in my ways. I’d used a bootable ISO to build a Linux VM template so many times that I didn’t consider there was an easier option. I also assumed cloud images were purely for cloud providers, and I didn’t bother to check if there was a VMware flavor available. Lesson learned. There’s a great post on using the Ubuntu cloud image on vSphere here: <a href="https://d-nix.nl/2021/04/using-the-ubuntu-cloud-image-in-vmware/">https://d-nix.nl/2021/04/using-the-ubuntu-cloud-image-in-vmware/</a>. It only covers the vSphere side of things, but it’s a great explainer.</p>
<h1 id="deploying-and-customizing-a-vcd-vapp-with-terraform">Deploying and Customizing a VCD vApp with Terraform</h1>
<p>With those (rather obvious) lessons learned, <strong>let’s do this thing</strong>.</p>
<p class="center"><img src="https://media.giphy.com/media/tyxovVLbfZdok/giphy.gif" alt="" /></p>
<p>You will need the following:</p>
<ul>
<li>A <code class="language-plaintext highlighter-rouge">cloud-config.yaml</code> file, containing the cloud-init <code class="language-plaintext highlighter-rouge">user-data</code>. The file extension is a clue that this is a YAML-formatted file. If you have cloud-init installed locally, you can verify that it is a valid config with <code class="language-plaintext highlighter-rouge">cloud-init devel schema -c cloud-config.yaml</code>. I highly recommend that you do this.</li>
<li>A cloud image OVA downloaded on your local workstation. For Ubuntu, these are available at <a href="http://cloud-images.ubuntu.com/">http://cloud-images.ubuntu.com/</a></li>
</ul>
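<p>For reference, here is a minimal, hypothetical <code class="language-plaintext highlighter-rouge">cloud-config.yaml</code> that creates a user and runs a command on first boot (the user name, key, and command are placeholders). The <code class="language-plaintext highlighter-rouge">#cloud-config</code> comment on the first line is required; without it, cloud-init will not treat the file as <code class="language-plaintext highlighter-rouge">user-data</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#cloud-config
users:
  - name: demo
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...your-public-key... demo@example.com
packages:
  - curl
runcmd:
  - echo "first boot complete" &gt;&gt; /var/tmp/first-boot.log
</code></pre></div></div>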
<h2 id="creating-a-catalog">Creating a Catalog</h2>
<p>Creating a catalog in VCD with Terraform is pretty simple. Here is an example:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"vcd_catalog"</span> <span class="s2">"mycatalog"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"my-catalog"</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"Catalog created by Terraform"</span>
<span class="nx">delete_recursive</span> <span class="p">=</span> <span class="s2">"true"</span>
<span class="nx">delete_force</span> <span class="p">=</span> <span class="s2">"true"</span>
<span class="p">}</span>
</code></pre></div></div>
<h2 id="uploading-an-ova-to-a-catalog">Uploading an OVA to a Catalog</h2>
<p>Similarly, adding the cloud image OVA to the new catalog is straightforward. The upload time will depend on the bandwidth available, but the Ubuntu 21.10 cloud image is only about 540 MB.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"vcd_catalog_item"</span> <span class="s2">"ubuntu-2110-cloud"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="nx">vcd_catalog</span><span class="p">.</span><span class="nx">mycatalog</span><span class="p">.</span><span class="nx">org</span>
<span class="nx">catalog</span> <span class="p">=</span> <span class="nx">vcd_catalog</span><span class="p">.</span><span class="nx">mycatalog</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"ubuntu-2110-cloud"</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"Ubuntu 21.10 cloud image"</span>
<span class="nx">ova_path</span> <span class="p">=</span> <span class="s2">"./impish-server-cloudimg-amd64.ova"</span>
<span class="nx">upload_piece_size</span> <span class="p">=</span> <span class="mi">10</span>
<span class="p">}</span>
</code></pre></div></div>
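<p>If you don’t already have the OVA on your workstation, it can be fetched ahead of time. Assuming the usual layout of the Ubuntu cloud image site (release codename, then <code class="language-plaintext highlighter-rouge">current</code>), the 21.10 (Impish) image used above would be downloaded like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-amd64.ova
</code></pre></div></div>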
<h2 id="deploying-the-vapp">Deploying the vApp</h2>
<p>This is the final step, and it requires a few different Terraform resources, but it’s not too difficult to follow.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"vcd_vapp"</span> <span class="s2">"ubuntu"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"ubuntu"</span>
<span class="nx">power_on</span> <span class="p">=</span> <span class="kc">true</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"vcd_vapp_org_network"</span> <span class="s2">"ubuntu-network"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">vapp_name</span> <span class="p">=</span> <span class="nx">vcd_vapp</span><span class="p">.</span><span class="nx">ubuntu</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">org_network_name</span> <span class="p">=</span> <span class="s2">"org-network"</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"vcd_vapp_vm"</span> <span class="s2">"ubuntu"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">vapp_name</span> <span class="p">=</span> <span class="nx">vcd_vapp</span><span class="p">.</span><span class="nx">ubuntu</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">catalog_name</span> <span class="p">=</span> <span class="s2">"my-catalog"</span>
<span class="nx">template_name</span> <span class="p">=</span> <span class="s2">"ubuntu-2110-cloud"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"ubuntu-vm"</span>
<span class="nx">memory</span> <span class="p">=</span> <span class="mi">4096</span>
<span class="nx">cpus</span> <span class="p">=</span> <span class="mi">1</span>
<span class="nx">os_type</span> <span class="p">=</span> <span class="s2">"ubuntu64Guest"</span>
<span class="nx">power_on</span> <span class="p">=</span> <span class="kc">true</span>
<span class="nx">network</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="s2">"org"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"org-network"</span>
<span class="nx">ip_allocation_mode</span> <span class="p">=</span> <span class="s2">"MANUAL"</span>
<span class="nx">ip</span> <span class="p">=</span> <span class="s2">"192.168.1.10"</span>
<span class="p">}</span>
<span class="nx">guest_properties</span> <span class="p">=</span> <span class="p">{</span>
<span class="s2">"user-data"</span> <span class="p">=</span> <span class="nx">base64encode</span><span class="p">(</span><span class="nx">file</span><span class="p">(</span><span class="s2">"cloud-config.yaml"</span><span class="p">))</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<ul>
<li>The <code class="language-plaintext highlighter-rouge">vcd_vapp</code> resource creates the new vApp that contains a single VM based on the cloud image template in my catalog</li>
<li>The <code class="language-plaintext highlighter-rouge">vcd_vapp_org_network</code> resource attaches an existing org network to the new vApp</li>
<li>The <code class="language-plaintext highlighter-rouge">vcd_vapp_vm</code> resource provides all of the configuration for the single VM that will be in the new vApp, including the cloud-init <code class="language-plaintext highlighter-rouge">user-data</code></li>
</ul>
<p>Most of the config in the <code class="language-plaintext highlighter-rouge">vcd_vapp_vm</code> resource is what you’d expect - compute, memory, and networking settings. The <code class="language-plaintext highlighter-rouge">guest_properties</code> section is the important bit. It configures the <code class="language-plaintext highlighter-rouge">extraConfig</code> property on the VM, which is where cloud-init will read the <code class="language-plaintext highlighter-rouge">user-data</code> from. Notice that the <a href="https://www.terraform.io/language/functions/base64encode">base64encode()</a> function is used to convert the contents of the <code class="language-plaintext highlighter-rouge">cloud-config.yaml</code> file into a single, long, encoded string. This is how cloud-init expects the <code class="language-plaintext highlighter-rouge">user-data</code> to be passed over.</p>
<p>If you have values in your <code class="language-plaintext highlighter-rouge">cloud-config.yaml</code> file that you need to change on the fly, like credentials or API keys, you can use the <a href="https://www.terraform.io/language/functions/templatefile">templatefile()</a> function to insert those values into the config file before encoding it. Keep in mind that <code class="language-plaintext highlighter-rouge">user-data</code> may contain sensitive data, and base64 is trivial to decode. In a production environment, you should remove the <code class="language-plaintext highlighter-rouge">user-data</code> from the VM after first boot.</p>
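<p>As a sketch of that templating approach - assuming a hypothetical template named <code class="language-plaintext highlighter-rouge">cloud-config.yaml.tpl</code> that contains an <code class="language-plaintext highlighter-rouge">${api_key}</code> placeholder, plus a matching Terraform variable - the <code class="language-plaintext highlighter-rouge">guest_properties</code> block might become:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Render the template with the supplied value, then base64-encode the result
guest_properties = {
  "user-data" = base64encode(templatefile("cloud-config.yaml.tpl", {
    api_key = var.api_key
  }))
}
</code></pre></div></div>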
<p>I traveled down a winding road to get here, but I finally assembled all of the pieces needed to do what I originally set out to do: <a href="/2022/03/vcd-verraform-example/">update an old blog post</a>. If all you needed were some tips on using cloud-init with Terraform and VCD, you can go along your merry way. Stick around if you want some tips on troubleshooting cloud-init.</p>
<h1 id="troubleshooting-cloud-init">Troubleshooting cloud-init</h1>
<p>Here are some basic troubleshooting steps for cloud-init with vSphere/VCD:</p>
<ul>
<li>Make sure you have a recent version of VMware Tools installed. This is required to read the metadata associated with the VM.</li>
<li>Make sure you are using a cloud image <em>or</em> you have taken the steps to ensure that your VM is properly configured to work with cloud-init. You can see an example of this with the <code class="language-plaintext highlighter-rouge">govc</code> tool at <a href="https://github.com/vmware/govmomi/blob/master/govc/USAGE.md#vmchange">https://github.com/vmware/govmomi/blob/master/govc/USAGE.md#vmchange</a>.</li>
<li>Verify that VMware Tools is able to access VM metadata. You can use the command <code class="language-plaintext highlighter-rouge">vmware-rpctool 'info-get guestinfo.ovfEnv'</code> to check this. If the command returns a slew of XML, it is working as expected.</li>
<li>Verify the VM metadata. You can view this in vSphere by browsing to the <code class="language-plaintext highlighter-rouge">VM -> Settings -> vApp Options</code>. Base64 encoded <code class="language-plaintext highlighter-rouge">user-data</code> should be visible under the properties section, and you can click the “View OVF Environment” button to see the XML formatted version of the metadata. This is the same information you should see from running the <code class="language-plaintext highlighter-rouge">vmware-rpctool</code> command on the VM. You can also view these properties in VCD by viewing the Guest Properties section in the VM properties.</li>
<li>Check the cloud-init logs at <code class="language-plaintext highlighter-rouge">/var/log/cloud-init.log</code> and <code class="language-plaintext highlighter-rouge">/var/log/cloud-init-output.log</code> for errors and warnings.</li>
<li>Run <code class="language-plaintext highlighter-rouge">cloud-id</code> to verify that the correct datasource is being used. If the output is <code class="language-plaintext highlighter-rouge">fallback</code> or <code class="language-plaintext highlighter-rouge">none</code>, cloud-init was not able to detect the datasource.</li>
<li><code class="language-plaintext highlighter-rouge">ds-identify</code> is used by cloud-init to find all available datasources. Check the logs at <code class="language-plaintext highlighter-rouge">/run/cloud-init/ds-identify.log</code> to see why the desired datasource is not found.</li>
<li>While troubleshooting, you can completely reset cloud-init with <code class="language-plaintext highlighter-rouge">sudo cloud-init clean --logs</code>, and reboot to have cloud-init run again. This saves time over redeploying a template.</li>
</ul>
<h1 id="resources">Resources</h1>
<ul>
<li>Terraform VCD provider: <a href="https://registry.terraform.io/providers/vmware/vcd/3.5.1">https://registry.terraform.io/providers/vmware/vcd/3.5.1</a></li>
<li><code class="language-plaintext highlighter-rouge">vcd_catalog</code> resource: <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/catalog">https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/catalog</a></li>
<li><code class="language-plaintext highlighter-rouge">vcd_catalog_item</code> resource: <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/catalog_item">https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/catalog_item</a></li>
<li><code class="language-plaintext highlighter-rouge">vcd_vapp</code> resource: <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp">https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp</a></li>
<li><code class="language-plaintext highlighter-rouge">vcd_vapp_org_network</code> resource: <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp_org_network">https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp_org_network</a></li>
<li><code class="language-plaintext highlighter-rouge">vcd_vapp_vm</code> resource: <a href="https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp_vm">https://registry.terraform.io/providers/vmware/vcd/latest/docs/resources/vapp_vm</a></li>
<li>OVF Runtime Environment: <a href="https://williamlam.com/2012/06/ovf-runtime-environment.html">https://williamlam.com/2012/06/ovf-runtime-environment.html</a></li>
<li>Using the Ubuntu Cloud Image in VMware: <a href="https://d-nix.nl/2021/04/using-the-ubuntu-cloud-image-in-vmware/">https://d-nix.nl/2021/04/using-the-ubuntu-cloud-image-in-vmware/</a></li>
<li>Terraform, vSphere, and Cloud-Init oh my! <a href="https://grantorchard.com/terraform-vsphere-cloud-init/">https://grantorchard.com/terraform-vsphere-cloud-init/</a></li>
<li>Cloud-init config examples: <a href="https://cloudinit.readthedocs.io/en/latest/topics/examples.html">https://cloudinit.readthedocs.io/en/latest/topics/examples.html</a></li>
</ul>Matt ElliottThis post covers the details of how cloud-init reads its configuration through VMware tools, tips for troubleshooting cloud-init, and some other lessons learned along the way. Of course, I’ll share a working example that deploys a vApp to VCD using cloud-init for customization.2022 Update: Simple Cloud Automation with VCD, Terraform, ZeroTier and Slack2022-03-10T00:00:00+00:002022-03-10T00:00:00+00:00https://networkbrouhaha.com/2022/03/vcd-verraform-example<p>In 2018 I wrote a blog titled <a href="https://networkbrouhaha.com/2018/03/vcd-terraform-example/">Simple cloud automation with vCD, Terraform, ZeroTier and Slack</a>. A lot has changed since I wrote that post, so it’s time to update it. The goal is still the same: deploy a VM (inside a vApp) in Cloud Director and automate network connectivity with ZeroTier. Slack is used to monitor the progress and display the IP address assigned by ZeroTier. Overall, I want to be able to deploy a VM that has outbound internet connectivity and be able to connect to it without having to configure any firewall rules, NAT, or SSL/IPsec VPN.</p>
<p>I did make some adjustments to my approach while preparing to write this post. Instead of relying on <a href="https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-58E346FF-83AE-42B8-BE58-253641D257BC.html">Guest Customization</a> with VMware tools, I chose to use <a href="https://cloud-init.io/">cloud-init</a>. This went so poorly that I wrote a dedicated post on it 😂: <a href="https://networkbrouhaha.com/2022/03/cloud-init-vcd/">Using cloud-init for Customization with VCD and Terraform</a>. VCD also has a completely different <a href="https://registry.terraform.io/providers/vmware/vcd/latest">Terraform provider</a> than the one I demoed in 2018, which I will dig into at the end of this post.</p>
<h1 id="tools-used-and-prerequisites">Tools Used and Prerequisites</h1>
<ul>
<li><a href="https://www.vmware.com/products/cloud-director.html">VMware Cloud Director</a> - VMware’s cloud service delivery platform, typically used by service providers in the VMware Cloud Provider Program. I used VCD 10.3 in my lab when using the Terraform code you will see below.</li>
<li><a href="https://terraform.io/">HashiCorp Terraform</a> - An open-source tool written in Go, Terraform allows users to define infrastructure as code. Many public cloud <a href="https://registry.terraform.io/browse/providers">providers</a> are supported in Terraform, as well as on-prem infrastructure like <a href="https://registry.terraform.io/providers/hashicorp/vsphere/latest">vSphere</a> and <a href="https://registry.terraform.io/providers/vmware/nsxt/latest">NSX-T</a>. The Terraform provider for VCD is available at <a href="https://registry.terraform.io/providers/vmware/vcd/latest">https://registry.terraform.io/providers/vmware/vcd/latest</a>.</li>
<li><a href="https://www.zerotier.com/">ZeroTier</a> - The ZeroTier docs state that “ZeroTier is a smart Ethernet switch for planet Earth.” ZeroTier uses an agent to provide connectivity between endpoints connected to the same ZeroTier network. Anyone can create a free account on the ZeroTier website and create multiple networks. Endpoints connected to ZeroTier are managed through the web portal (or API). In other words, ZeroTier is a simple, free<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>, fast<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> VPN. If you’re wondering how ZeroTier works, check out their awesome <a href="https://docs.zerotier.com/zerotier/manual">whitepaper</a>. My friend and uber-network nerd <a href="https://twitter.com/showipintbri">Tony Efantis</a> provides a deep dive into ZeroTier on YouTube: <a href="https://www.youtube.com/watch?v=Lao9T_RQTak">https://www.youtube.com/watch?v=Lao9T_RQTak</a></li>
<li><a href="https://slack.com/">Slack</a> - I’m assuming everyone is familiar with Slack by now. For this example, Slack is used to provide visibility into the process of connecting a new VM to ZeroTier. Slack’s free tier is great for testing simple automation and receiving notifications via webhooks.</li>
<li><a href="https://github.com/">GitHub</a> - I’m hosting scripts on GitHub, but any web host could fill this need. If you choose another host, you should still use Git for version control for Terraform code and other scripts. The current script I’m using is at <a href="https://github.com/shamsway/zerotier-installer">https://github.com/shamsway/zerotier-installer</a>, and it is a simplified and modified version of the install script provided by ZeroTier at <a href="https://install.zerotier.com/">https://install.zerotier.com/</a>.</li>
</ul>
<p>Before deploying anything with Terraform, I installed ZeroTier on my local workstation, uploaded an Ubuntu cloud image OVA to my VCD catalog, and configured an incoming webhook for Slack. My VCD environment is preconfigured to allow outbound internet traffic, but nothing else.</p>
<h1 id="terraform-example">Terraform Example</h1>
<p>Below is the <code class="language-plaintext highlighter-rouge">main.tf </code>file to create a vApp, attach an existing Org network to the vApp, and clone a VM into the vApp using cloud-init for customization.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">terraform</span> <span class="p">{</span>
<span class="nx">required_providers</span> <span class="p">{</span>
<span class="nx">vcd</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">source</span> <span class="p">=</span> <span class="s2">"vmware/vcd"</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="k">variable</span> <span class="s2">"ztnetwork"</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="nx">string</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"ZeroTier Network to join"</span>
<span class="p">}</span>
<span class="k">variable</span> <span class="s2">"ztapi"</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="nx">string</span>
<span class="nx">sensitive</span> <span class="p">=</span> <span class="kc">true</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"ZeroTier API Access Token"</span>
<span class="p">}</span>
<span class="k">variable</span> <span class="s2">"slack_webhook_url"</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="nx">string</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"Slack webhook URL"</span>
<span class="nx">default</span> <span class="p">=</span> <span class="s2">""</span>
<span class="p">}</span>
<span class="k">variable</span> <span class="s2">"vcd_vm_name"</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="nx">string</span>
<span class="nx">description</span> <span class="p">=</span> <span class="s2">"Name of new vApp created from template"</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"vcd_vapp"</span> <span class="s2">"ubuntu"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"ubuntu"</span>
<span class="nx">power_on</span> <span class="p">=</span> <span class="kc">true</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"vcd_vapp_org_network"</span> <span class="s2">"ubuntu-network"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">vapp_name</span> <span class="p">=</span> <span class="nx">vcd_vapp</span><span class="p">.</span><span class="nx">ubuntu</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">org_network_name</span> <span class="p">=</span> <span class="s2">"org-network"</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"vcd_vapp_vm"</span> <span class="s2">"ubuntu"</span> <span class="p">{</span>
<span class="nx">org</span> <span class="p">=</span> <span class="s2">"my-org"</span>
<span class="nx">vdc</span> <span class="p">=</span> <span class="s2">"my-vdc"</span>
<span class="nx">vapp_name</span> <span class="p">=</span> <span class="s2">"ubuntu"</span>
<span class="nx">catalog_name</span> <span class="p">=</span> <span class="s2">"my-catalog"</span>
<span class="nx">template_name</span> <span class="p">=</span> <span class="s2">"ubuntu-2110-cloud"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"ubuntu-vm"</span>
<span class="nx">memory</span> <span class="p">=</span> <span class="mi">4096</span>
<span class="nx">cpus</span> <span class="p">=</span> <span class="mi">1</span>
<span class="nx">os_type</span> <span class="p">=</span> <span class="s2">"ubuntu64Guest"</span>
<span class="nx">power_on</span> <span class="p">=</span> <span class="kc">true</span>
<span class="nx">network</span> <span class="p">{</span>
<span class="nx">type</span> <span class="p">=</span> <span class="s2">"org"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"org-network"</span>
<span class="nx">ip_allocation_mode</span> <span class="p">=</span> <span class="s2">"MANUAL"</span>
<span class="nx">ip</span> <span class="p">=</span> <span class="s2">"192.168.1.10"</span>
<span class="p">}</span>
<span class="nx">guest_properties</span> <span class="p">=</span> <span class="p">{</span>
<span class="s2">"user-data"</span> <span class="p">=</span> <span class="nx">base64encode</span><span class="p">(</span><span class="nx">templatefile</span><span class="p">(</span><span class="s2">"cloud-config.yaml"</span><span class="p">,</span> <span class="p">{</span> <span class="nx">ztnetwork</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">ztnetwork</span><span class="p">,</span> <span class="nx">ztapi</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">ztapi</span><span class="p">,</span> <span class="nx">slack_webhook_url</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">slack_webhook_url</span><span class="p">,</span> <span class="nx">hostname</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">vcd_vm_name</span> <span class="p">}))</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Most of this is straightforward, but the magic happens in the <code class="language-plaintext highlighter-rouge">guest_properties</code> block of the <code class="language-plaintext highlighter-rouge">vcd_vapp_vm</code> resource. The <code class="language-plaintext highlighter-rouge">user-data</code> property contains a Base64-encoded copy of my cloud-init configuration, and the <code class="language-plaintext highlighter-rouge">templatefile()</code> function inserts the values needed by the ZeroTier install script: the ZeroTier network to join, a ZeroTier API key, the Slack webhook URL, and the VM hostname.</p>
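<p>If you want to preview exactly what Terraform will pass as <code class="language-plaintext highlighter-rouge">user-data</code>, the <code class="language-plaintext highlighter-rouge">base64encode(templatefile(...))</code> step is easy to reproduce outside of Terraform. The sketch below is a rough Python equivalent with placeholder values, not part of the actual workflow; for this simple case, Terraform's <code class="language-plaintext highlighter-rouge">${var}</code> interpolation syntax happens to match Python's <code class="language-plaintext highlighter-rouge">string.Template</code> syntax.</p>

```python
import base64
from string import Template

# Inline stand-in for a trimmed cloud-config.yaml. Terraform's ${var}
# interpolation markers double as Python string.Template placeholders.
cloud_config = Template("""#cloud-config
hostname: ${hostname}
runcmd:
  - export ZTNETWORK=${ztnetwork}
""")

# Placeholder values -- substitute your own.
rendered = cloud_config.substitute(hostname="ubuntu-vm",
                                   ztnetwork="8056c2e21c000001")

# Rough equivalent of Terraform's base64encode(templatefile(...))
user_data = base64.b64encode(rendered.encode()).decode()

# Decoding it back shows exactly what cloud-init will receive.
print(base64.b64decode(user_data).decode())
```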
<p>Here is my cloud-config.yaml, which performs the customization of the VM upon first boot:</p>
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">#cloud-config</span>
<span class="na">hostname</span><span class="pi">:</span> <span class="s">${hostname}</span>
<span class="na">users</span><span class="pi">:</span>
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">ubuntu</span>
<span class="na">sudo</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">ALL=(ALL)</span><span class="nv"> </span><span class="s">NOPASSWD:ALL"</span><span class="pi">]</span>
<span class="na">groups</span><span class="pi">:</span> <span class="pi">[</span><span class="nv">sudo</span><span class="pi">]</span>
<span class="na">shell</span><span class="pi">:</span> <span class="s">/bin/bash</span>
<span class="na">ssh_authorized_keys</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">ssh-rsa alongstringthatisansshkey</span>
<span class="na">manage_resolv_conf</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">packages</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">python3-pip</span>
<span class="pi">-</span> <span class="s">jq</span>
<span class="na">runcmd</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">export ZTNETWORK=${ztnetwork}</span>
<span class="pi">-</span> <span class="s">export ZTAPI=${ztapi}</span>
<span class="pi">-</span> <span class="s">export SLACK_WEBHOOK_URL=${slack_webhook_url}</span>
<span class="pi">-</span> <span class="s">wget https://raw.githubusercontent.com/shamsway/zerotier-installer/master/zerotier-installer.sh</span>
<span class="pi">-</span> <span class="s">chmod +x zerotier-installer.sh</span>
<span class="pi">-</span> <span class="s">./zerotier-installer.sh</span>
<span class="pi">-</span> <span class="s">rm zerotier-installer.sh</span>
<span class="na">final_message</span><span class="pi">:</span> <span class="s2">"</span><span class="s">The</span><span class="nv"> </span><span class="s">system</span><span class="nv"> </span><span class="s">is</span><span class="nv"> </span><span class="s">ready</span><span class="nv"> </span><span class="s">and</span><span class="nv"> </span><span class="s">prepped</span><span class="nv"> </span><span class="s">(took</span><span class="nv"> </span><span class="s">$UPTIME</span><span class="nv"> </span><span class="s">seconds)"</span>
</code></pre></div></div>
<p>This cloud-init config configures the local ubuntu user with sudo privileges, disables password-based logins, adds my desired SSH key, and installs a few required packages. The <code class="language-plaintext highlighter-rouge">runcmd</code> block is what actually downloads my ZeroTier installer from GitHub and executes it, connecting the VM to my ZeroTier network and reporting progress to Slack.</p>
<p>Now, let’s see this in action.</p>
<h1 id="workflow">Workflow</h1>
<p>The output from <code class="language-plaintext highlighter-rouge">terraform apply</code> looks just as you’d expect if you’ve ever seen Terraform run:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Plan: 3 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
vcd_vapp.ubuntu-zt: Creating...
vcd_vapp.ubuntu-zt: Still creating... [10s elapsed]
vcd_vapp.ubuntu-zt: Creation complete after 16s [id=urn:vcloud:vapp:db4d4ee7-b171-45dc-a98a-67cd717db127]
vcd_vapp_org_network.ubuntu-zt-network: Creating...
vcd_vapp_vm.ubuntu: Creating...
vcd_vapp_org_network.ubuntu-zt-network: Creation complete after 5s [id=urn:vcloud:network:1b61037f-dc6d-4ae5-aefc-59962de1e647]
vcd_vapp_vm.ubuntu: Still creating... [10s elapsed]
[snip]
vcd_vapp_vm.ubuntu: Creation complete after 1m58s [id=urn:vcloud:vm:d20caca3-8b80-45da-8435-c4d44c988ccb]
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.
</code></pre></div></div>
<p>VCD creates the vApp, clones a template VM into the vApp, and powers it on. When the VM boots, cloud-init runs each step specified in cloud-config.yaml, ultimately connecting the new VM to my ZeroTier network. API calls authorize the new VM on my ZeroTier network automatically, so I don’t have to accept it manually in the ZeroTier portal. Progress is reported to Slack, and once the process completes, I can grab the provided IP and immediately connect to the new VM.</p>
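<p>For the curious, that authorization call is small enough to sketch. The endpoint, header scheme, and payload below reflect my understanding of the ZeroTier Central API, and the IDs are placeholders; verify the details against the current ZeroTier API documentation before relying on this.</p>

```python
import json
import urllib.request

# Placeholders -- a real ZeroTier network ID is 16 hex characters and a
# node ID is 10. The token is a ZeroTier Central API access token.
ZT_NETWORK = "8056c2e21c000001"
ZT_NODE = "abcdef1234"
ZT_TOKEN = ""

# Authorizing a member is a single POST whose body only needs to set
# config.authorized. (Endpoint and "token" header scheme per my reading
# of the ZeroTier Central API docs -- confirm before use.)
url = "https://api.zerotier.com/api/v1/network/{}/member/{}".format(ZT_NETWORK, ZT_NODE)
req = urllib.request.Request(
    url,
    data=json.dumps({"config": {"authorized": True}}).encode(),
    method="POST",
    headers={"Authorization": "token " + ZT_TOKEN,
             "Content-Type": "application/json"},
)

# Only send when a token is configured; otherwise just show the request.
if ZT_TOKEN:
    with urllib.request.urlopen(req) as resp:
        print(resp.status)
else:
    print(req.full_url, req.data.decode())
```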
<p class="center"><a href="/resources/2022/03/vcd-automation-slack.png" class="drop-shadow"><img src="/resources/2022/03/vcd-automation-slack.png" alt="" /></a></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>user@ubuntu:~$ ssh ubuntu@172.29.189.205
The authenticity of host '172.29.189.205 (172.29.189.205)' can't be established.
ECDSA key fingerprint is SHA256:sOGaDtQ6D6bvIhmr/YhKt6Olt9EsVNRNGAomfVuIW1o.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.29.189.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 21.10 (GNU/Linux 5.13.0-28-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Wed Mar 16 23:47:03 UTC 2022
System load: 0.03 Processes: 138
Usage of /: 23.0% of 9.52GB Users logged in: 0
Memory usage: 6% IPv4 address for ens192: 192.168.1.10
Swap usage: 0% IPv4 address for ztmjfe5xok: 172.29.189.205
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ubuntu@ubuntu-impish-21:~$ ping google.com
PING google.com (142.250.191.238) 56(84) bytes of data.
64 bytes from ord38s32-in-f14.1e100.net (142.250.191.238): icmp_seq=1 ttl=113 time=3.54 ms
64 bytes from ord38s32-in-f14.1e100.net (142.250.191.238): icmp_seq=2 ttl=113 time=3.60 ms
</code></pre></div></div>
<p>Notice that SSH key-based authentication is used instead of a password, which is common practice for instances running in the cloud.</p>
<p>So there it is - a VM deployed into VCD and automatically connected to ZeroTier, making it available without having to configure any sort of inbound firewall rules, NAT, or IPSec/SSL VPN.</p>
<h1 id="state-of-the-vcd-terraform-provider-in-2022">State of the VCD Terraform Provider in 2022</h1>
<p>When I wrote about this in 2018, the VCD Terraform provider was written by HashiCorp and based on a Go library named <code class="language-plaintext highlighter-rouge">govcloudair</code>. That library was neither maintained by VMware nor actively developed, so the VCD provider supported a limited set of features. I am happy to report that the <a href="https://registry.terraform.io/providers/vmware/vcd/latest">current VCD provider</a> is in a much better state. It is actively developed by VMware, along with the underlying Go library, <a href="https://github.com/vmware/go-vcloud-director">go-vcloud-director</a>. As of March 2022, there were <strong>over 2 million installs</strong> of the VCD Terraform provider, and new features are added regularly. Many of the workarounds and caveats I mentioned in my 2018 post are no longer required. Huzzah!</p>
<p class="center"><img src="https://media.giphy.com/media/d7qN2d6ktQphUeDoQ4/giphy.gif" alt="" /></p>
<h1 id="final-thoughts">Final Thoughts</h1>
<p>Here are a few random thoughts/potential improvements:</p>
<ul>
<li>This same workflow could be used in any cloud environment. It would require outbound internet access to be enabled, and cloud-init is well supported across cloud providers. Each cloud provider’s Terraform provider documentation should contain examples for using cloud-init.</li>
<li>Cloud-init could be used to install ZeroTier and send the output to Slack, but I didn’t want to spend the time to convert my install script. Initially, I used a script hosted on GitHub because there was a limit on the size of a script that can be used with Guest Customization, but cloud-init does not have that limit. I may convert my install script over to cloud-init at a later date.</li>
<li>The ZeroTier install script uses <a href="https://github.com/philippbosch/slack-webhook-cli">https://github.com/philippbosch/slack-webhook-cli</a> to send messages to Slack, which requires Python to be installed, and installing Python adds time to the process. Sending a message to Slack is just a webhook call, so a bash script could be used instead. This would remove the requirement to install Python, and the whole process would run a bit faster.</li>
</ul>
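<p>To illustrate that last point: an incoming webhook really is just an HTTP POST of a small JSON body, so no dedicated tooling is required, whether it comes from bash with curl or, as in this sketch, from nothing but the Python standard library. The webhook URL shown is a placeholder.</p>

```python
import json
import urllib.request

def slack_payload(text):
    # An incoming-webhook message is just this JSON body -- nothing more.
    return json.dumps({"text": text}).encode()

def notify_slack(webhook_url, text):
    """POST a message to a Slack incoming webhook using only the
    standard library. Slack replies with the body b"ok" on success."""
    req = urllib.request.Request(
        webhook_url,
        data=slack_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Hypothetical webhook URL -- substitute the one from your Slack app config:
# notify_slack("https://hooks.slack.com/services/T000/B000/XXXX", "VM joined ZeroTier")
```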
<h1 id="resources">Resources</h1>
<ul>
<li>VCD Terraform provider: <a href="https://registry.terraform.io/providers/vmware/vcd/latest">https://registry.terraform.io/providers/vmware/vcd/latest</a></li>
<li>Go-vcloud-director library: <a href="https://github.com/vmware/go-vcloud-director">https://github.com/vmware/go-vcloud-director</a></li>
<li>ZeroTier documentation: <a href="https://docs.zerotier.com/zerotier/manual/">https://docs.zerotier.com/zerotier/manual/</a></li>
<li>ZeroTier overview on Wikipedia: <a href="https://en.wikipedia.org/wiki/ZeroTier">https://en.wikipedia.org/wiki/ZeroTier</a></li>
<li>How Does ZeroTier Actually Work? <a href="https://www.youtube.com/watch?v=Lao9T_RQTak">https://www.youtube.com/watch?v=Lao9T_RQTak</a></li>
</ul>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>GPL license / Up to 100 devices / Requires license to embed in commercial products. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Quick setup, but actual traffic may proxy through ZeroTier servers. There is no throughput guarantee. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>Matt ElliottThis post covers the details of how cloud-init reads its configuration through VMware tools, tips for troubleshooting cloud-init, and some other lessons learned along the way. Of course, I’ll share a working example that deploys a vApp to VCD using cloud-init for customization.Intro to Google Cloud VMware Engine – Common Networking Scenarios2021-05-04T00:00:00+00:002021-05-04T00:00:00+00:00https://networkbrouhaha.com/2021/05/gcve-networking-scenarios<p>This post will cover some common networking scenarios in Google Cloud VMware Engine (GCVE), like exposing a VM via public IP, accessing cloud-native services, and configuring a basic load balancer in NSX-T. I’ll also recap some important and useful features in GCP and GCVE. There is a lot of material covered, so I’ve provided a table of contents to allow you to skip to the topic you’re interested in.</p>
<div style="position: relative;">
<a href="#toc-skipped" class="screen-reader-only">Skip table of contents</a>
</div>
<h1 class="no_toc" id="table-of-contents">Table of Contents</h1>
<ul id="markdown-toc">
<li><a href="#creating-workload-segments-in-nsx-t" id="markdown-toc-creating-workload-segments-in-nsx-t">Creating Workload Segments in NSX-T</a></li>
<li><a href="#exposing-a-vm-via-public-ip" id="markdown-toc-exposing-a-vm-via-public-ip">Exposing a VM via Public IP</a> <ul>
<li><a href="#creating-firewall-rules" id="markdown-toc-creating-firewall-rules">Creating Firewall Rules</a></li>
</ul>
</li>
<li><a href="#load-balancing-with-nsx-t" id="markdown-toc-load-balancing-with-nsx-t">Load Balancing with NSX-T</a></li>
<li><a href="#accessing-cloud-native-services" id="markdown-toc-accessing-cloud-native-services">Accessing Cloud-Native Services</a> <ul>
<li><a href="#google-private-access" id="markdown-toc-google-private-access">Google Private Access</a></li>
</ul>
</li>
<li><a href="#viewing-routing-information" id="markdown-toc-viewing-routing-information">Viewing Routing Information</a> <ul>
<li><a href="#vpc-routes" id="markdown-toc-vpc-routes">VPC Routes</a></li>
<li><a href="#vpc-network-peering-routes" id="markdown-toc-vpc-network-peering-routes">VPC Network Peering Routes</a></li>
<li><a href="#nsx-t" id="markdown-toc-nsx-t">NSX-T</a></li>
</ul>
</li>
<li><a href="#vpn-connectivity" id="markdown-toc-vpn-connectivity">VPN Connectivity</a></li>
<li><a href="#dns-notes" id="markdown-toc-dns-notes">DNS Notes</a></li>
<li><a href="#wrap-up" id="markdown-toc-wrap-up">Wrap Up</a></li>
<li><a href="#helpful-resources" id="markdown-toc-helpful-resources">Helpful Resources</a></li>
</ul>
<div id="toc-skipped"></div>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a></li>
<li><a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a></li>
<li><a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a></li>
<li><a href="/2021/03/gcve-network-overview/">Network and Connectivity Overview</a></li>
<li><a href="/2021/04/gcve-hcx-config/">HCX Configuration</a></li>
</ul>
<h1 id="creating-workload-segments-in-nsx-t">Creating Workload Segments in NSX-T</h1>
<p>Your GCVE SDDC initially comes with networking pre-configured, and you don’t need to worry about configuring and trunking VLANs. Instead, any new networking configuration will be done in NSX-T. If you are new to NSX-T, the GCVE documentation <a href="https://cloud.google.com/vmware-engine/docs/networking/howto-create-vlan-subnet">covers creating new workload segments</a>, which should be your first step before creating or migrating any VMs to your GCVE SDDC.</p>
<p class="center"><a href="/resources/2021/05/68_gcve_diagram_1.png" class="drop-shadow"><img src="/resources/2021/05/68_gcve_diagram_1.png" alt="" /></a></p>
<p>This diagram represents the initial setup of my GCVE environment, and I will be building on this example over the following sections. If you’ve been following along with this blog series, this should look familiar. You can see a “Customer Data Center” on the left, which in my case is a lab, but it could be any environment connected to GCP via Cloud VPN or Cloud Interconnect. There is also a VPC peered with my GCVE environment, which is where my bastion host is running.</p>
<p>I’ve created a workload segment, <code class="language-plaintext highlighter-rouge">192.168.83.0/24</code>, and connected three Ubuntu Linux VMs to it. When new segments are created, a few essential steps must be completed outside of NSX-T if you are using VPC peering or dynamic routing over Cloud VPN or Cloud Interconnect.</p>
<p class="center"><a href="/resources/2021/05/52_vpc_peering_imported_edited.png" class="drop-shadow"><img src="/resources/2021/05/52_vpc_peering_imported_edited.png" alt="" /></a></p>
<p>First, you must have <code class="language-plaintext highlighter-rouge">Import/export custom routes</code> enabled in private service access for the VPC peered with GCVE. Custom routes are covered in my previous post, <a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a>. Notice that my newly created segment shows up under <code class="language-plaintext highlighter-rouge">Imported Routes</code>.</p>
<p class="center"><a href="/resources/2021/05/50_cloud_router_adv_edited.png" class="drop-shadow"><img src="/resources/2021/05/50_cloud_router_adv_edited.png" alt="" /></a></p>
<p>Second, any workload segments must be added as a custom IP range to any Cloud Router participating in BGP peering to advertise routes back to your environment. This would apply to both Cloud Interconnect and Cloud VPN, where BGP is used to provide dynamic routing. Configuring this will ensure that the workload subnet will be advertised to your environment. More information can be found <a href="https://cloud.google.com/vmware-engine/docs/networking/howto-connect-to-onpremises#end-to-end_connectivity_and_routing_considerations">here</a>.</p>
<p>NSX-T has an excellent <a href="https://registry.terraform.io/providers/vmware/nsxt/latest/docs">Terraform provider</a>, and I have already covered several GCP Terraform examples in previous posts. My recommendation is to add new NSX-T segments via Terraform and add the custom subnet advertisement for the segment to any Cloud Routers via Terraform in the same workflow. This way, you will be sure you never forget to update your Cloud Router advertisements after adding a new segment.</p>
<h1 id="exposing-a-vm-via-public-ip">Exposing a VM via Public IP</h1>
<p>Let’s add an application into the mix. I have a test webserver running on <code class="language-plaintext highlighter-rouge">VM1</code> that I want to expose to the internet.</p>
<p class="center"><a href="/resources/2021/05/69_gcve_diagram_2.png" class="drop-shadow"><img src="/resources/2021/05/69_gcve_diagram_2.png" alt="" /></a></p>
<p>In GCVE, public IPs are not assigned directly to a VM. Instead, public IPs are allocated through the GCVE portal and assigned to the private IP of the relevant VM. This creates a simple destination NAT from the allocated public IP to the internal private IP.</p>
<p class="center"><a href="/resources/2021/05/54_allocate_public_ip.png" class="drop-shadow"><img src="/resources/2021/05/54_allocate_public_ip.png" alt="" /></a></p>
<p>Browse to <code class="language-plaintext highlighter-rouge">Network > Public IPs</code> and click <code class="language-plaintext highlighter-rouge">Allocate</code> to allocate a public IP. You will be prompted to supply a name and the region for the public IP. Click <code class="language-plaintext highlighter-rouge">Submit</code>, and you will be taken back to the <code class="language-plaintext highlighter-rouge">Public IPs</code> page. This page will now show the public IP that has been allocated. The internal address it is assigned to is listed under the <code class="language-plaintext highlighter-rouge">Attached Address</code> column.</p>
<p>You can find more information on public IPs in the <a href="https://cloud.google.com/vmware-engine/docs/concepts-public-ip-address">GCVE documentation</a>.</p>
<h2 id="creating-firewall-rules">Creating Firewall Rules</h2>
<p class="center"><a href="/resources/2021/05/55_create_fw_table.png" class="drop-shadow"><img src="/resources/2021/05/55_create_fw_table.png" alt="" /></a></p>
<p>GCVE also includes a firewall beyond the NSX-T boundary, so it will need to be configured to allow access to the public IP that was just allocated. To do this, browse to <code class="language-plaintext highlighter-rouge">Network > Firewall tables</code> and click <code class="language-plaintext highlighter-rouge">Create new firewall table</code>. Provide a name for the firewall table and click <code class="language-plaintext highlighter-rouge">Add Rule</code>.</p>
<p class="center"><a href="/resources/2021/05/56_create_fw_rule.png" class="drop-shadow"><img src="/resources/2021/05/56_create_fw_rule.png" alt="" /></a></p>
<p>Configure the rule to allow the desired traffic, choosing <code class="language-plaintext highlighter-rouge">Public IP</code> as the destination. Choose the newly allocated public IP from the dropdown, and click <code class="language-plaintext highlighter-rouge">Done</code>.</p>
<p class="center"><a href="/resources/2021/05/57_firewall_config.png" class="drop-shadow"><img src="/resources/2021/05/57_firewall_config.png" alt="" /></a></p>
<p>The new firewall table will be displayed. Click <code class="language-plaintext highlighter-rouge">Attached Subnets</code>, then <code class="language-plaintext highlighter-rouge">Attach to a Subnet</code>. This will attach the firewall table to a network.</p>
<p class="center"><a href="/resources/2021/05/58_attach_fw_edited.png" class="drop-shadow"><img src="/resources/2021/05/58_attach_fw_edited.png" alt="" /></a></p>
<p>Choose your SDDC along with <code class="language-plaintext highlighter-rouge">System management</code> from the <code class="language-plaintext highlighter-rouge">Select a Subnet</code> dropdown, and click <code class="language-plaintext highlighter-rouge">Save</code>. Per the GCVE documentation, <code class="language-plaintext highlighter-rouge">System management</code> is the correct subnet to use when applying the firewall table to traffic behind NSX-T.</p>
<p class="center"><a href="/resources/2021/05/61_ubuntu_webserver_edited.png" class="drop-shadow"><img src="/resources/2021/05/61_ubuntu_webserver_edited.png" alt="" /></a></p>
<p>I am now able to access my test webserver via the allocated public IP. Huzzah! More information on firewall tables can be found in the <a href="https://cloud.google.com/vmware-engine/docs/concepts-firewall-tables">GCVE documentation</a>.</p>
<h1 id="load-balancing-with-nsx-t">Load Balancing with NSX-T</h1>
<p>Now that the test webserver is working as expected, it’s time to implement a load balancer in NSX-T. Keep in mind that GCP also has a <a href="https://cloud.google.com/load-balancing/docs/load-balancing-overview">native load balancing service</a>, but that is beyond the scope of this post.</p>
<p class="center"><a href="/resources/2021/05/70_gcve_diagram_3.png" class="drop-shadow"><img src="/resources/2021/05/70_gcve_diagram_3.png" alt="" /></a></p>
<p>Public IPs can be assigned to any private IP, not just IPs assigned to VMs. For this example, I’ll configure the NSX-T load balancer and move the previously allocated public IP to the load balancer VIP. There are several steps needed to create a load balancer, so let’s dive in.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_1.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_1.png" alt="" /></a></p>
<p>The first step is to create a new load balancer via the <code class="language-plaintext highlighter-rouge">Load Balancing</code> screen in NSX-T Manager. Provide a name, choose a size, and select the Tier-1 router that will host the load balancer. Click <code class="language-plaintext highlighter-rouge">Save</code>. Now, expand the <code class="language-plaintext highlighter-rouge">Virtual Servers</code> section and click <code class="language-plaintext highlighter-rouge">Set Virtual Servers</code>.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_2.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_2.png" alt="" /></a></p>
<p>This is where the virtual server IP (VIP) will be configured, along with a backing server pool. Provide a name and internal IP for the VIP. I used an IP that lives in the same segment as my servers, but you could create a dedicated segment for your VIP. Click the dropdown under <code class="language-plaintext highlighter-rouge">Server Pool</code> and click <code class="language-plaintext highlighter-rouge">Create New</code>.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_3.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_3.png" alt="" /></a></p>
<p>Next, provide a name for your server pool, and choose a load balancing algorithm. Click <code class="language-plaintext highlighter-rouge">Select Members</code> to add VMs to the pool.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_4_edited.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_4_edited.png" alt="" /></a></p>
<p>Click <code class="language-plaintext highlighter-rouge">Add Member</code> to add a new VM to the pool and provide the internal IP and port. Rinse and repeat until you’ve added all of the relevant VMs to your virtual server pool, then click <code class="language-plaintext highlighter-rouge">Apply</code>.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_5.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_5.png" alt="" /></a></p>
<p>You’ll be taken back to the server pool screen, where you can add a monitor to check the health of the VMs in your pool. Click <code class="language-plaintext highlighter-rouge">Set Monitors</code> to choose a monitor.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_6_edited.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_6_edited.png" alt="" /></a></p>
<p>My pool members are running a simple webserver on port 80, so I’m using the <code class="language-plaintext highlighter-rouge">default-http-lb-monitor</code>. After choosing the appropriate monitor, click <code class="language-plaintext highlighter-rouge">Apply</code>.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_7_edited.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_7_edited.png" alt="" /></a></p>
<p>Review the settings for the VIP and click <code class="language-plaintext highlighter-rouge">Close</code>.</p>
<p class="center"><a href="/resources/2021/05/62_web_lb_8_edited.png" class="drop-shadow"><img src="/resources/2021/05/62_web_lb_8_edited.png" alt="" /></a></p>
<p>Finally, click <code class="language-plaintext highlighter-rouge">Save</code> to apply the new settings to your load balancer.</p>
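<p>One of the choices in the flow above was the pool’s load-balancing algorithm. If you’re unsure which to pick, it helps to see the two most common ones side by side. This is plain Python for illustration only — it is not NSX-T code, and the member addresses are placeholders:</p>

```python
from itertools import cycle

# Hypothetical pool members (addresses are placeholders)
members = ["192.168.83.11:80", "192.168.83.12:80", "192.168.83.13:80"]

# Round robin: hand out members in a fixed rotation
rr = cycle(members)
rr_picks = [next(rr) for _ in range(4)]

# Least connections: pick the member with the fewest active sessions
active = {"192.168.83.11:80": 3, "192.168.83.12:80": 1, "192.168.83.13:80": 2}
lc_pick = min(active, key=active.get)

print(rr_picks)  # the 4th pick wraps back around to the first member
print(lc_pick)   # the member with only 1 active session
```

<p>Round robin is a reasonable default for stateless web servers like the ones in this example; least connections tends to help when request costs vary widely between clients.</p>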
<p class="center"><a href="/resources/2021/05/63_edit_public_ip.png" class="drop-shadow"><img src="/resources/2021/05/63_edit_public_ip.png" alt="" /></a></p>
<p>The last step is to browse to <code class="language-plaintext highlighter-rouge">Network > Public IPs</code> in the GCVE portal and edit the existing public IP allocation. Update the name as appropriate, and change the attached local address to the load balancer VIP. No firewall rules need to be changed since the traffic is coming in over the same port (<code class="language-plaintext highlighter-rouge">tcp/80</code>).</p>
<p class="center"><a href="/resources/2021/05/64_lb_test.gif" class="drop-shadow"><img src="/resources/2021/05/64_lb_test.gif" alt="" /></a></p>
<p>Browsing to the allocated public IP and pressing refresh a few times shows that our load balancer is working as expected!</p>
<h1 id="accessing-cloud-native-services">Accessing Cloud-Native Services</h1>
<p>The last addition to this example is to include a GCP cloud-native service. I’ve chosen to use Cloud Storage because it is a simple example, and it provides incredible utility. This diagram illustrates my desired configuration.</p>
<p class="center"><a href="/resources/2021/05/72_gcve_diagram_4.png" class="drop-shadow"><img src="/resources/2021/05/72_gcve_diagram_4.png" alt="" /></a></p>
<p>My goal is to stage a simple static website in a Google Storage bucket, then mount the bucket as a read-only filesystem on each of my webservers. The bucket will be mounted to <code class="language-plaintext highlighter-rouge">/var/www/html</code> and will replace the testing page that had been staged on each server. You may be thinking, “This is crazy. Why not serve the static site directly from Google Storage?!” This is a valid question, and my response is that this is merely an example, not necessarily a best practice. I could have chosen to use Google Filestore instead of Google Storage as well. This illustrates that there is more than one way to do many things in the cloud.</p>
<p>The first step is to create a Google Storage bucket, which I completed with this simple Terraform code:</p>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code>provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
}

resource "google_storage_bucket" "melliott-vmw-static-site" {
  name          = "melliott-vmw-static-site"
  location      = "US"
  force_destroy = true
  storage_class = "STANDARD"
}

resource "google_storage_bucket_acl" "melliott-vmw-static-site-acl" {
  bucket      = google_storage_bucket.melliott-vmw-static-site.name
  role_entity = [
    "OWNER:user-melliott@vmware.com"
  ]
}
</code></pre></div></div>
<p>Next, I found a simple static website example, which I stored in the bucket and modified for my needs. After staging this, I completed the following steps on each webserver to mount the bucket.</p>
<ul>
<li>Install the Google Cloud SDK (<a href="https://cloud.google.com/sdk/docs/install">https://cloud.google.com/sdk/docs/install</a>)</li>
<li>Install gcsfuse (<a href="https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md">https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/installing.md</a>), which is used to mount Google Storage buckets in Linux via <a href="https://en.wikipedia.org/wiki/Filesystem_in_Userspace">FUSE</a></li>
<li>Authenticate to Google Cloud with <code class="language-plaintext highlighter-rouge">gcloud auth application-default login</code>. This will provide a URL that will need to be pasted into a browser to complete authentication. The verification code returned will then need to be pasted back into the prompt on the webserver.</li>
<li>Remove existing files in <code class="language-plaintext highlighter-rouge">/var/www/html</code></li>
<li>Mount the bucket as a read-only filesystem with <code class="language-plaintext highlighter-rouge">gcsfuse -o allow_other -o ro [bucket-name] /var/www/html</code></li>
</ul>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>root@ubuntu:/var/www# gcsfuse <span class="nt">-o</span> allow_other <span class="nt">-o</span> ro melliott-vmw-static-site /var/www/html
2021/05/04 16:19:10.680365 Using mount point: /var/www/html
2021/05/04 16:19:10.686743 Opening GCS connection...
2021/05/04 16:19:11.037846 Mounting file system <span class="s2">"melliott-vmw-static-site"</span>...
2021/05/04 16:19:11.042605 File system has been successfully mounted.
root@ubuntu:/var/www#
root@ubuntu:/var/www#
root@ubuntu:/var/www# <span class="nb">ls</span> /var/www/html
assets error images index.html LICENSE.MD README.MD
</code></pre></div></div>
<p>After mounting the bucket and running an <code class="language-plaintext highlighter-rouge">ls</code> on <code class="language-plaintext highlighter-rouge">/var/www/html</code>, I can see that my static website is mounted correctly.</p>
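<p>A mount made with the <code class="language-plaintext highlighter-rouge">gcsfuse</code> command lasts only until the next reboot. gcsfuse also supports <code class="language-plaintext highlighter-rouge">/etc/fstab</code> entries for persistent mounts — a sketch along these lines, though the exact option names should be verified against the gcsfuse mounting docs for your version:</p>

```
# /etc/fstab — mount the bucket read-only at boot (bucket name from the example above)
melliott-vmw-static-site /var/www/html gcsfuse ro,allow_other,_netdev 0 0
```

<p>Note that <code class="language-plaintext highlighter-rouge">allow_other</code> generally requires <code class="language-plaintext highlighter-rouge">user_allow_other</code> to be enabled in <code class="language-plaintext highlighter-rouge">/etc/fuse.conf</code>.</p>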
<p class="center"><a href="/resources/2021/05/73_static_website.png" class="drop-shadow"><img src="/resources/2021/05/73_static_website.png" alt="" /></a></p>
<p>Browsing to the public IP fronting my load balancer VIP now displays my static website, hosted in a Google Storage bucket. Pretty snazzy!</p>
<h2 id="google-private-access">Private Google Access</h2>
<p>My GCVE environment has internet access enabled, so native services are accessed via the internet gateway. If you don’t want to allow internet access for your environment, you can still access native services via <a href="https://cloud.google.com/vpc/docs/configure-private-google-access">Private Google Access</a>. Much of the GCP documentation for this feature focuses on access to Google APIs from locations other than GCVE, but it is not too difficult to apply these practices to GCVE.</p>
<p class="center"><a href="/resources/2021/05/71_vpc_private_google_access.png" class="drop-shadow"><img src="/resources/2021/05/71_vpc_private_google_access.png" alt="" /></a></p>
<p>Private Google Access is primarily enabled by DNS, but you still need to enable the feature on any configured VPCs. The domain names used for this service are <code class="language-plaintext highlighter-rouge">private.googleapis.com</code> and <code class="language-plaintext highlighter-rouge">restricted.googleapis.com</code>. I was able to resolve both of these from my GCVE VMs, but my VMs are configured to use the resolvers in my GCVE environment. If you cannot resolve these hostnames, make sure you are using the GCVE DNS servers. As a reminder, these server addresses can be found under <code class="language-plaintext highlighter-rouge">Private Cloud DNS Servers</code> on the summary page for your GCVE cluster. You can find more information on Private Google Access <a href="https://cloud.google.com/vpc/docs/configure-private-google-access">here</a>.</p>
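<p>A quick sanity check on resolution is to confirm the answers fall inside the well-known Private Google Access ranges: per the GCP documentation, <code class="language-plaintext highlighter-rouge">private.googleapis.com</code> resolves within <code class="language-plaintext highlighter-rouge">199.36.153.8/30</code> and <code class="language-plaintext highlighter-rouge">restricted.googleapis.com</code> within <code class="language-plaintext highlighter-rouge">199.36.153.4/30</code>. A small Python helper for that check (the sample address below is illustrative — substitute whatever your resolver actually returns):</p>

```python
import ipaddress

# Published VIP ranges for Private Google Access (per GCP documentation)
RANGES = {
    "private.googleapis.com": ipaddress.ip_network("199.36.153.8/30"),
    "restricted.googleapis.com": ipaddress.ip_network("199.36.153.4/30"),
}

def in_expected_range(hostname: str, answer: str) -> bool:
    """Return True if a resolved address falls in the documented range."""
    return ipaddress.ip_address(answer) in RANGES[hostname]

# Example: an answer of 199.36.153.10 is valid for private.googleapis.com
print(in_expected_range("private.googleapis.com", "199.36.153.10"))
```

<p>If an answer falls outside these ranges, your VM is likely using a resolver other than the GCVE DNS servers.</p>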
<h1 id="viewing-routing-information">Viewing Routing Information</h1>
<p>Knowing where to find routing tables is incredibly helpful when troubleshooting connectivity issues. There are a handful of places to look in GCP and GCVE to find this information.</p>
<h2 id="vpc-routes">VPC Routes</h2>
<p>You can view routes for a VPC in the GCP portal by browsing to <code class="language-plaintext highlighter-rouge">VPC networks</code>, clicking on the desired VPC, then clicking on the <code class="language-plaintext highlighter-rouge">Routes</code> tab. If you are using VPC peering, you will notice a message that says, “<em>This VPC network has been configured to import custom routes using VPC Network Peering. Any imported custom dynamic routes are omitted from this list, and some route conflicts might not be resolved. Please refer to the VPC Network Peering section for the complete list of imported custom routes, and the <a href="https://cloud.google.com/vpc/docs/routes?authuser=1#routeselection">routing order</a> for information about how GCP resolves conflicts.</em>” Basically, this message says that you will not see routes for your GCVE environment in this table.</p>
<h2 id="vpc-network-peering-routes">VPC Network Peering Routes</h2>
<p>To see routes for your GCVE environment, browse to <code class="language-plaintext highlighter-rouge">VPC Network Peering</code> and choose the <code class="language-plaintext highlighter-rouge">servicenetworking-googleapis-com</code> entry for your VPC. You will see routes for your GCVE environment under <code class="language-plaintext highlighter-rouge">Imported Routes</code> and any subnets in your VPC under <code class="language-plaintext highlighter-rouge">Exported Routes</code>. You can also view these routes using the <code class="language-plaintext highlighter-rouge">gcloud</code> tool.</p>
<ul>
<li>View imported routes: <code class="language-plaintext highlighter-rouge">gcloud compute networks peerings list-routes servicenetworking-googleapis-com --network=[VPC Name] --region=[REGION] --direction=INCOMING</code></li>
<li>View exported routes: <code class="language-plaintext highlighter-rouge">gcloud compute networks peerings list-routes servicenetworking-googleapis-com --network=[VPC Name] --region=[REGION] --direction=OUTGOING</code></li>
</ul>
<p>Example results:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>melliott@melliott-a01 gcp-bucket % gcloud compute networks peerings list-routes servicenetworking-googleapis-com <span class="nt">--network</span><span class="o">=</span>gcve-usw2 <span class="nt">--region</span><span class="o">=</span>us-west2 <span class="nt">--direction</span><span class="o">=</span>INCOMING
DEST_RANGE TYPE NEXT_HOP_REGION PRIORITY STATUS
192.168.80.0/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.0/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.16/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.16/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.8/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.8/29 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.112/28 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.80.112/28 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
10.30.28.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
10.30.28.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.81.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.81.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.83.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
192.168.83.0/24 DYNAMIC_PEERING_ROUTE us-west2 0 accepted
</code></pre></div></div>
<h2 id="nsx-t">NSX-T</h2>
<p>Routing and forwarding tables can be downloaded from the NSX-T manager web interface or via API. It’s also reasonably easy to grab the routing table with PowerCLI. The following example displays the routing table from the T0 router in my GCVE environment.</p>
<div class="language-powershell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Import-Module</span><span class="w"> </span><span class="nx">VMware.PowerCLI</span><span class="w">
</span><span class="n">Connect-NsxtServer</span><span class="w"> </span><span class="nt">-Server</span><span class="w"> </span><span class="nx">my-nsxt-manager.gve.goog</span><span class="w">
</span><span class="nv">$t0s</span> = Get-NsxtPolicyService -Name com.vmware.nsx_policy.infra.tier0s
<span class="nv">$t0_name</span> = $t0s.list().results.display_name
<span class="c"># Route entries live in the routing_table child service</span>
<span class="nv">$t0_routes</span> = Get-NsxtPolicyService -Name com.vmware.nsx_policy.infra.tier0s.routing_table
<span class="nv">$t0_routes</span>.list($t0_name).results.route_entries | Select-Object network,next_hop,route_type | Sort-Object -Property network<span class="w">
</span><span class="n">network</span><span class="w"> </span><span class="nx">next_hop</span><span class="w"> </span><span class="nx">route_type</span><span class="w">
</span><span class="o">-------</span><span class="w"> </span><span class="o">--------</span><span class="w"> </span><span class="o">----------</span><span class="w">
</span><span class="mf">0.0</span><span class="o">.</span><span class="nf">0</span><span class="o">.</span><span class="nf">0</span><span class="n">/0</span><span class="w"> </span><span class="nx">192.168.81.225</span><span class="w"> </span><span class="nx">t0s</span><span class="w">
</span><span class="mf">0.0</span><span class="o">.</span><span class="nf">0</span><span class="o">.</span><span class="nf">0</span><span class="n">/0</span><span class="w"> </span><span class="nx">192.168.81.241</span><span class="w"> </span><span class="nx">t0s</span><span class="w">
</span><span class="mf">10.30</span><span class="o">.</span><span class="nf">28</span><span class="o">.</span><span class="nf">0</span><span class="n">/24</span><span class="w"> </span><span class="nx">169.254.160.3</span><span class="w"> </span><span class="nx">t1c</span><span class="w">
</span><span class="mf">10.30</span><span class="o">.</span><span class="nf">28</span><span class="o">.</span><span class="nf">0</span><span class="n">/24</span><span class="w"> </span><span class="nx">169.254.160.3</span><span class="w"> </span><span class="nx">t1c</span><span class="w">
</span><span class="mf">169.254</span><span class="o">.</span><span class="nf">0</span><span class="o">.</span><span class="nf">0</span><span class="n">/24</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">169.254</span><span class="o">.</span><span class="nf">160</span><span class="o">.</span><span class="nf">0</span><span class="n">/31</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">169.254</span><span class="o">.</span><span class="nf">160</span><span class="o">.</span><span class="nf">0</span><span class="n">/31</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">169.254</span><span class="o">.</span><span class="nf">160</span><span class="o">.</span><span class="nf">2</span><span class="n">/31</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">169.254</span><span class="o">.</span><span class="nf">160</span><span class="o">.</span><span class="nf">2</span><span class="n">/31</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">192.168</span><span class="o">.</span><span class="nf">81</span><span class="o">.</span><span class="nf">224</span><span class="n">/28</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">192.168</span><span class="o">.</span><span class="nf">81</span><span class="o">.</span><span class="nf">240</span><span class="n">/28</span><span class="w"> </span><span class="nx">t0c</span><span class="w">
</span><span class="mf">192.168</span><span class="o">.</span><span class="nf">83</span><span class="o">.</span><span class="nf">0</span><span class="n">/24</span><span class="w"> </span><span class="nx">169.254.160.1</span><span class="w"> </span><span class="nx">t1c</span><span class="w">
</span><span class="mf">192.168</span><span class="o">.</span><span class="nf">83</span><span class="o">.</span><span class="nf">0</span><span class="n">/24</span><span class="w"> </span><span class="nx">169.254.160.1</span><span class="w"> </span><span class="nx">t1c</span><span class="w">
</span></code></pre></div></div>
<h1 id="vpn-connectivity">VPN Connectivity</h1>
<p>I haven’t talked much about VPNs in this blog series, but they are an important component that deserves more attention. Provisioning a VPN to GCP is an easy way to connect to your GCVE environment if you are waiting on a Cloud Interconnect to be installed. It can also be used as backup connectivity if your primary connection fails. NSX-T can terminate an IPSec VPN, but I would recommend using <a href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview">Cloud VPN</a> instead. This will ensure you have connectivity to any GCP-based resources along with GCVE.</p>
<p>I’ve put together some example Terraform code to provision the necessary VPN-related resources in GCP. The example code is available at <a href="https://github.com/shamsway/gcp-terraform-examples">https://github.com/shamsway/gcp-terraform-examples</a> in the <code class="language-plaintext highlighter-rouge">gcve-ha-vpn</code> subdirectory. Using this example will create the minimum configuration needed to stand up a VPN to GCP/GCVE. It is assumed that you have already created a VPC and <a href="https://networkbrouhaha.com/2021/02/gcp-vpc-to-gcve/">configured peering with your GCVE cluster</a>. This example does not create a redundant VPN solution, but it can easily be extended to do so by adding a second tunnel, Cloud Router interface, and BGP peer. You can find more information on HA VPN topologies in the <a href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies">GCP documentation</a>. After applying the example code, you will still need to configure the VPN settings at your site; Google provides configuration examples for several different vendors at <a href="https://cloud.google.com/network-connectivity/docs/vpn/how-to/interop-guides">Using third-party VPNs with Cloud VPN</a>. I’ve written previously about VPNs for cloud connectivity, as well as other connection methods, in <a href="/2020/11/cloud-connectivity-101/">Cloud Connectivity 101</a>.</p>
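<p>For orientation, the core resources such an HA VPN configuration creates look something like the following sketch against the Terraform <code class="language-plaintext highlighter-rouge">google</code> provider. The names and variables here are illustrative, not copied from the example repo, and only one of the two HA tunnels is shown:</p>

```terraform
resource "google_compute_ha_vpn_gateway" "gcve" {
  name    = "gcve-ha-vpn-gw"
  network = var.vpc_name
}

resource "google_compute_router" "vpn" {
  name    = "gcve-vpn-router"
  network = var.vpc_name
  bgp {
    asn = 64514
  }
}

resource "google_compute_external_vpn_gateway" "on_prem" {
  name            = "on-prem-gw"
  redundancy_type = "SINGLE_IP_INTERNALLY_REDUNDANT"
  interface {
    id         = 0
    ip_address = var.on_prem_peer_ip
  }
}

resource "google_compute_vpn_tunnel" "tunnel0" {
  name                            = "gcve-vpn-tunnel0"
  vpn_gateway                     = google_compute_ha_vpn_gateway.gcve.id
  peer_external_gateway           = google_compute_external_vpn_gateway.on_prem.id
  peer_external_gateway_interface = 0
  shared_secret                   = var.shared_secret
  router                          = google_compute_router.vpn.id
  vpn_gateway_interface           = 0
}
```

<p>A working configuration also needs a <code class="language-plaintext highlighter-rouge">google_compute_router_interface</code> and <code class="language-plaintext highlighter-rouge">google_compute_router_peer</code> to bring up BGP, plus a second tunnel on gateway interface 1 for redundancy.</p>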
<h1 id="dns-notes">DNS Notes</h1>
<p>I’ve saved the most important topic for last. DNS is a crucial component when operating in the cloud, so here are a few tips and recommendations to make sure you’re successful. <a href="https://cloud.google.com/dns">Cloud DNS</a> has a 100% uptime SLA, which is not something you see very often. This service is so crucial to GCP that Google has essentially guaranteed it will always be available. That is the type of guarantee that provides peace of mind, especially when so many other services and applications rely on it.</p>
<p>In terms of GCVE, you must be able to properly resolve the hostnames for vCenter, NSX, HCX, and other applications deployed in your environment. These topics are covered in detail at these links:</p>
<ul>
<li><a href="https://cloud.google.com/vmware-engine/docs/networking/howto-dns-on-premises">Configuring DNS for management appliance access</a></li>
<li><a href="https://cloud.google.com/vmware-engine/docs/networking/howto-dns-profiles">Creating and applying DNS profiles</a></li>
<li><a href="https://cloud.google.com/vmware-engine/docs/vmware-platform/howto-identity-sources">Configuring authentication using Active Directory</a></li>
</ul>
<p>The basic gist is this: the DNS servers running in your GCVE environment can resolve A records for the management applications running in GCVE (vCenter, NSX, HCX, etc.). If you have <a href="/2021/02/gcp-vpc-to-gcve/">configured VPC peering with GCVE</a>, Cloud DNS will automatically be configured to forward requests for any <code class="language-plaintext highlighter-rouge">gve.goog</code> hostname to the GCVE DNS servers. This allows you to resolve GCVE-related A records from your VPC or bastion host. The last step is to make sure that you can resolve GCVE-related hostnames in your local environment. If you are using Windows Server for DNS, configure a conditional forwarder for <code class="language-plaintext highlighter-rouge">gve.goog</code> that points to the DNS servers running in GCVE. Other scenarios, like configuring BIND, are covered in the documentation links above.</p>
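<p>For BIND, the equivalent of a Windows conditional forwarder is a forward zone. A minimal sketch, where the forwarder addresses are placeholders for your actual GCVE DNS servers:</p>

```
zone "gve.goog" {
    type forward;
    forward only;
    forwarders { 10.0.0.8; 10.0.0.9; };
};
```

<p>On Windows Server, the same result can be achieved from PowerShell with the DnsServer module, e.g. <code class="language-plaintext highlighter-rouge">Add-DnsServerConditionalForwarderZone -Name "gve.goog" -MasterServers 10.0.0.8,10.0.0.9</code> (again, substitute your GCVE DNS server addresses).</p>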
<h1 id="wrap-up">Wrap Up</h1>
<p>This is a doozy of a post, so I won’t waste too many words here. I genuinely hope you enjoyed this blog series. There will definitely be more GCVE-related blogs in the future, and you can hit me up any time <a href="https://twitter.com/NetworkBrouhaha">@NetworkBrouhaha</a> and let me know what topics you’d like to see covered. Thanks for reading!</p>
<h1 id="helpful-resources">Helpful Resources</h1>
<ul>
<li><a href="https://cloud.google.com/vmware-engine/docs">Google Cloud VMware Engine documentation</a></li>
<li><a href="https://cloud.google.com/architecture/private-cloud-networking-for-vmware-engine">Private cloud networking for Google Cloud VMware Engine</a> Whitepaper</li>
<li><a href="https://cloud.google.com/vmware-engine/docs/workloads/howto-migrate-vms-using-hcx">Migrating VMware VMs using VMware HCX</a></li>
<li><a href="https://cloud.vmware.com/community/2021/02/25/introducing-google-cloud-vmware-engine-logical-design-poster-workload-mobility/">Google Cloud VMware Engine Logical Design Poster for Workload Mobility</a></li>
<li><a href="https://cloud.google.com/dns">Cloud DNS</a></li>
<li><a href="https://cloud.google.com/storage/docs/gcs-fuse">Cloud Storage FUSE</a></li>
<li><a href="https://github.com/GoogleCloudPlatform/gcsfuse">gcsfuse</a></li>
<li><a href="https://cloud.google.com/sdk/docs/install">Installing Google Cloud SDK</a></li>
<li><a href="https://cloud.google.com/network-connectivity/docs/vpn">Cloud VPN documentation</a></li>
<li><a href="https://cloud.google.com/community/tutorials/deploy-ha-vpn-with-terraform">Tutorial: Deploy HA VPN with Terraform</a></li>
<li><a href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies">Cloud VPN Topologies</a></li>
<li><a href="https://cloud.google.com/network-connectivity/docs/vpn/how-to/interop-guides">Using third-party VPNs with Cloud VPN</a></li>
<li><a href="https://cloud.google.com/blog/products/compute/how-to-use-multi-vpcs-with-google-cloud-vmware-engine">How to use multi-VPC networking in Google Cloud VMware Engine</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs">Google Cloud Platform Provider</a> for Terraform</li>
<li>My <a href="https://github.com/shamsway/gcp-terraform-examples">GCP Terraform Examples</a></li>
</ul>
<p>You can find a hands-on lab for GCVE by browsing to <a href="https://labs.hol.vmware.com/">https://labs.hol.vmware.com/</a> and searching for <code class="language-plaintext highlighter-rouge">HOL-2179-01-ISM</code>.</p>Matt ElliottThis post will cover some common networking scenarios in Google Cloud VMware Engine, like exposing a VM via public IP and configuring a basic load balancer in NSX-T. I'll also recap some important and useful features in GCP and GCVE.Intro to Google Cloud VMware Engine – HCX Configuration2021-04-12T00:00:00+00:002021-04-12T00:00:00+00:00https://networkbrouhaha.com/2021/04/gcve-hcx-config<p>Now that we have an SDDC running in Google Cloud VMware Engine, it is time to migrate some workloads into the cloud! <a href="https://cloud.vmware.com/vmware-hcx">VMware HCX</a> will be the tool I use to migrate Virtual Machines to GCVE. If you recall from the first post in this series, HCX was included in our SDDC deployment, so there is no further configuration needed in GCVE for HCX. The GCVE docs <a href="https://cloud.google.com/vmware-engine/docs/workloads/howto-migrate-vms-using-hcx#prepare-for-hcx-manager-installation-on-premises">cover installing and configuring the on-prem components for HCX</a>, so I’m not going to cover those steps in this post. As with previous posts, I will be taking an “automation first” approach to configuring HCX with Terraform. All of the code referenced in this post is available at <a href="https://github.com/shamsway/gcp-terraform-examples">https://github.com/shamsway/gcp-terraform-examples</a> in the <code class="language-plaintext highlighter-rouge">gcve-hcx</code> sub-directory.</p>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a></li>
<li><a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a></li>
<li><a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a></li>
<li><a href="/2021/03/gcve-network-overview/">Network and Connectivity Overview</a></li>
<li><a href="/2021/05/gcve-networking-scenarios/">Common Networking Scenarios</a></li>
</ul>
<p>Before we look at configuring HCX with Terraform, there are a few items to consider. The provider I’m using to configure HCX, <a href="https://registry.terraform.io/providers/adeleporte/hcx/">adeleporte/hcx</a>, is a community provider. It is not supported by VMware. It is also under active development, so you may run across a bug or some outdated documentation. In my testing of the provider, I have found that it works well for an environment with a single service mesh but needs some improvements to support environments with multiple service meshes.</p>
<p>Part of the beauty of open-source software is that anyone can contribute code. If you would like to submit an issue to track a bug, update documentation, or add new functionality, cruise over to the <a href="https://github.com/adeleporte/terraform-provider-hcx">GitHub repo</a> to get started.</p>
<h1 id="hcx-configuration-with-terraform">HCX Configuration with Terraform</h1>
<p>Configuring HCX involves configuring network profiles and a compute profile, which are then referenced in a service mesh configuration. The service mesh facilitates the migration of VMs to and from the cloud. The <a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-5D2F1312-EB62-4B25-AF88-9ADE129EDB57.html">HCX documentation</a> describes these components in detail, and I recommend reading through the user guide if you plan on performing a migration of any scale.</p>
<p>The example Terraform code linked at the beginning of the post will do the following:</p>
<ul>
<li>Create a <a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-4BA6FBD4-ED66-4BE0-A216-6F6FFE1E8A20.html">site pairing</a> between your on-premises data center and your GCVE SDDC</li>
<li>Add two <a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-184FCA54-D0CB-4931-B0E8-A81CD6120C52.html">network profiles</a>, one for management traffic and another for vMotion traffic. Network profiles for uplink and replication traffic can also be created, but in this example, I will use the management network for those functions.</li>
<li>Create a <a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-BBAC979E-8899-45AD-9E01-98A132CE146E.html">compute profile</a> consisting of the network profiles created, and other parameters specific to your environment, like the datastore in use.</li>
<li>Create a <a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-46AED982-8ED2-4CB1-807E-FEFD18FAC0DD.html">service mesh</a> between your on-prem data center and GCVE SDDC. This links the two compute profiles at each site for migration and sets other parameters, like the HCX features to enable.</li>
<li><a href="https://docs.vmware.com/en/VMware-HCX/4.0/hcx-user-guide/GUID-DD9C3316-D01C-4088-B3EA-84ADB9FED573.html">Extend a network</a> from your on-prem data center into your GCVE SDDC.</li>
</ul>
<p>After Terraform completes the configuration, you will be able to migrate VMs from your on-prem data center into your GCVE SDDC. To get started, clone the example repo with <code class="language-plaintext highlighter-rouge">git clone https://github.com/shamsway/gcp-terraform-examples.git</code>, then change to the <code class="language-plaintext highlighter-rouge">gcve-hcx</code> sub-directory. You will find these files:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">main.tf</code> – Contains the primary Terraform code to complete the steps mentioned above</li>
<li><code class="language-plaintext highlighter-rouge">variables.tf</code> – Defines the input variables that will be used in <code class="language-plaintext highlighter-rouge">main.tf</code></li>
</ul>
<p>Let’s take a look at the code that makes up this example.</p>
<h2 id="maintf-contents">main.tf Contents</h2>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">terraform</span> <span class="p">{</span>
<span class="nx">required_providers</span> <span class="p">{</span>
<span class="nx">hcx</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">source</span> <span class="p">=</span> <span class="s2">"adeleporte/hcx"</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Unlike previous examples, this one does not start with a <code class="language-plaintext highlighter-rouge">provider</code> block. Instead, this <code class="language-plaintext highlighter-rouge">terraform</code> block will download and install the <code class="language-plaintext highlighter-rouge">adeleporte/hcx</code> provider from <code class="language-plaintext highlighter-rouge">registry.terraform.io</code>, which is a handy shortcut for installing community providers.</p>
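<p>Because community providers evolve quickly, it can be worth pinning the provider to a known-good release. The sketch below shows the same <code class="language-plaintext highlighter-rouge">terraform</code> block with a version constraint added; the version number is illustrative only, so check the registry for the current release before using it.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>terraform {
  required_providers {
    hcx = {
      source  = "adeleporte/hcx"
      # Illustrative version constraint – pin to whatever release
      # you have tested against
      version = "~&gt; 0.3"
    }
  }
}
</code></pre></div></div>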
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">provider</span> <span class="s2">"hcx"</span> <span class="p">{</span>
<span class="nx">hcx</span> <span class="p">=</span> <span class="s2">"https://your.hcx.url"</span>
<span class="nx">admin_username</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">hcx_admin_username</span>
<span class="nx">admin_password</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">hcx_admin_password</span>
<span class="nx">username</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">hcx_username</span>
<span class="nx">password</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">hcx_password</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">provider</code> block specifies the URL for your HCX appliance, along with admin credentials (those used to access the appliance management UI over port 9443) and user credentials for the standard HCX UI. During my testing, I had to use an IP address instead of an FQDN for my HCX appliance. Note that this example has the URL specified directly in the code instead of using a variable. You will need to edit <code class="language-plaintext highlighter-rouge">main.tf</code> to set this value, along with a few other values that you will see below.</p>
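<p>If you would rather not edit <code class="language-plaintext highlighter-rouge">main.tf</code> directly, the URL could itself be supplied as an input variable. The <code class="language-plaintext highlighter-rouge">hcx_url</code> variable below is my own addition and is not part of the example repo:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Hypothetical variable – not present in the example repo
variable "hcx_url" {
  description = "URL of the on-prem HCX appliance (an IP-based URL worked best in my testing)"
  type        = string
}

provider "hcx" {
  hcx            = var.hcx_url
  admin_username = var.hcx_admin_username
  admin_password = var.hcx_admin_password
  username       = var.hcx_username
  password       = var.hcx_password
}
</code></pre></div></div>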
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"hcx_site_pairing"</span> <span class="s2">"gcve"</span> <span class="p">{</span>
<span class="nx">url</span> <span class="p">=</span> <span class="s2">"https://gcve.hcx.url"</span>
<span class="nx">username</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">gcve_hcx_username</span>
<span class="nx">password</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">gcve_hcx_password</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">hcx_site_pairing</code> resource creates a site pairing between your on-prem and GCVE-based HCX appliances. This allows both HCX appliances to exchange information about their local environments and is a prerequisite to creating the service mesh. I used the FQDN of the HCX server running in GCVE for the <code class="language-plaintext highlighter-rouge">url</code> parameter, but I had previously configured DNS resolution between my lab and my GCVE environment. You can find the IP and FQDN of your HCX server in GCVE by browsing to <code class="language-plaintext highlighter-rouge">Resources > [Your SDDC] > vSphere Management Network</code>.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"hcx_network_profile"</span> <span class="s2">"net_management_gcve"</span> <span class="p">{</span>
<span class="nx">site_pairing</span> <span class="p">=</span> <span class="nx">hcx_site_pairing</span><span class="p">.</span><span class="nx">gcve</span>
<span class="nx">network_name</span> <span class="p">=</span> <span class="s2">"Management network name"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"Management network profile name"</span>
<span class="nx">mtu</span> <span class="p">=</span> <span class="mi">1500</span>
<span class="nx">ip_range</span> <span class="p">{</span>
<span class="nx">start_address</span> <span class="p">=</span> <span class="s2">"172.17.10.10"</span>
<span class="nx">end_address</span> <span class="p">=</span> <span class="s2">"172.17.10.13"</span>
<span class="p">}</span>
<span class="nx">gateway</span> <span class="p">=</span> <span class="s2">"172.17.10.1"</span>
<span class="nx">prefix_length</span> <span class="p">=</span> <span class="mi">24</span>
<span class="nx">primary_dns</span> <span class="p">=</span> <span class="s2">"172.17.10.2"</span>
<span class="nx">secondary_dns</span> <span class="p">=</span> <span class="s2">"172.17.10.3"</span>
<span class="nx">dns_suffix</span> <span class="p">=</span> <span class="s2">"yourcompany.biz"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>This block and the block immediately following it add new network profiles to your local HCX server. Network profiles specify a local network to use for specific traffic (management, uplink, vMotion, or replication) as well as an IP range reserved for use by HCX appliances. For smaller deployments, it is OK to use one network profile for multiple traffic types. This example creates a management network profile, which will also be used for uplink and replication traffic, and another profile dedicated to vMotion.</p>
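<p>The companion vMotion profile is not reproduced above, but it follows the same structure as the management profile. Here is a sketch; the addresses are placeholders you would replace to match your vMotion subnet, and DNS settings are omitted since vMotion traffic does not need them.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch only – replace the addresses with your vMotion subnet details
resource "hcx_network_profile" "net_vmotion_gcve" {
  site_pairing = hcx_site_pairing.gcve
  network_name = "vMotion network name"
  name         = "vMotion network profile name"
  mtu          = 1500
  ip_range {
    start_address = "172.17.11.10"
    end_address   = "172.17.11.13"
  }
  gateway       = "172.17.11.1"
  prefix_length = 24
}
</code></pre></div></div>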
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"hcx_compute_profile"</span> <span class="s2">"compute_profile_1"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"SJC-CP"</span>
<span class="nx">datacenter</span> <span class="p">=</span> <span class="s2">"San Jose"</span>
<span class="nx">cluster</span> <span class="p">=</span> <span class="s2">"Compute Cluster"</span>
<span class="nx">datastore</span> <span class="p">=</span> <span class="s2">"comp-vsanDatastore"</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span>
<span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_management_gcve</span><span class="p">,</span> <span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_vmotion_gcve</span>
<span class="p">]</span>
<span class="nx">management_network</span> <span class="p">=</span> <span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_management_gcve</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">replication_network</span> <span class="p">=</span> <span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_management_gcve</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">uplink_network</span> <span class="p">=</span> <span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_management_gcve</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">vmotion_network</span> <span class="p">=</span> <span class="nx">hcx_network_profile</span><span class="p">.</span><span class="nx">net_vmotion_gcve</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">dvs</span> <span class="p">=</span> <span class="s2">"nsx-overlay-transportzone"</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"INTERCONNECT"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"WANOPT"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"VMOTION"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"BULK_MIGRATION"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"NETWORK_EXTENSION"</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">hcx_compute_profile</code> resource defines the compute, storage, and networking components at the local site that will participate in a service mesh. Compute and storage settings are defined at the beginning of the block. The management profile previously created is also specified for the replication and uplink networks. Finally, the <code class="language-plaintext highlighter-rouge">service</code> statements define which HCX features are enabled for the compute profile. If you attempt to enable a feature that you are not licensed for, Terraform will return an error.</p>
<p>There are two things to note with this resource. First, the <code class="language-plaintext highlighter-rouge">dvs</code> parameter is not accurately named. It would be more accurate to name this parameter <code class="language-plaintext highlighter-rouge">network_container</code> or something similar. In this example, I am referencing an NSX transport zone instead of a DVS. This is a valid setup as long as you have NSX registered with your HCX server, so some work is needed to update this provider to reflect that capability. Second, I’ve added a <code class="language-plaintext highlighter-rouge">depends_on</code> statement. I noticed during my testing that this provider would occasionally attempt to remove resources out of order, which ultimately would cause <code class="language-plaintext highlighter-rouge">terraform destroy</code> to fail. Using the <code class="language-plaintext highlighter-rouge">depends_on</code> statement fixes this issue, but some additional logic will need to be added to the provider to better understand resource dependencies. I’ve also added <code class="language-plaintext highlighter-rouge">depends_on</code> statements to the following blocks for the same reason.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"hcx_service_mesh"</span> <span class="s2">"service_mesh_1"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"Service Mesh Name"</span>
<span class="nx">site_pairing</span> <span class="p">=</span> <span class="nx">hcx_site_pairing</span><span class="p">.</span><span class="nx">gcve</span>
<span class="nx">local_compute_profile</span> <span class="p">=</span> <span class="nx">hcx_compute_profile</span><span class="p">.</span><span class="nx">compute_profile_1</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">remote_compute_profile</span> <span class="p">=</span> <span class="s2">"GCVE Compute Profile"</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span> <span class="nx">hcx_compute_profile</span><span class="p">.</span><span class="nx">compute_profile_1</span> <span class="p">]</span>
<span class="nx">app_path_resiliency_enabled</span> <span class="p">=</span> <span class="kc">false</span>
<span class="nx">tcp_flow_conditioning_enabled</span> <span class="p">=</span> <span class="kc">false</span>
<span class="nx">uplink_max_bandwidth</span> <span class="p">=</span> <span class="mi">10000</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"INTERCONNECT"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"WANOPT"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"VMOTION"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"BULK_MIGRATION"</span>
<span class="p">}</span>
<span class="nx">service</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"NETWORK_EXTENSION"</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">hcx_service_mesh</code> resource is where the magic happens. This block creates the service mesh between your on-prem data center and your GCVE SDDC by deploying multiple appliances at both sites and building encrypted tunnels between them. Once this process is complete, you will be able to migrate VMs into GCVE. Notice that the configuration is relatively basic, referencing the site pairing and local compute profile configured by Terraform. You will need to know the name of the compute profile in GCVE, but if you are using the default configuration, it should be <code class="language-plaintext highlighter-rouge">GCVE Compute Profile</code>. Similar to the compute profile, the <code class="language-plaintext highlighter-rouge">service</code> parameters define which features are enabled on the service mesh. Typically, the services enabled in your compute profile should match the services enabled in your service mesh.</p>
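<p>Since later resources reference the service mesh by ID, it can be handy to surface that ID for troubleshooting. This output block is my own addition and is not part of the example repo:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Optional addition – expose the service mesh ID after terraform apply
output "service_mesh_id" {
  description = "ID of the HCX service mesh created by Terraform"
  value       = hcx_service_mesh.service_mesh_1.id
}
</code></pre></div></div>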
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"hcx_l2_extension"</span> <span class="s2">"l2_extension_1"</span> <span class="p">{</span>
<span class="nx">site_pairing</span> <span class="p">=</span> <span class="nx">hcx_site_pairing</span><span class="p">.</span><span class="nx">gcve</span>
<span class="nx">service_mesh_id</span> <span class="p">=</span> <span class="nx">hcx_service_mesh</span><span class="p">.</span><span class="nx">service_mesh_1</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">source_network</span> <span class="p">=</span> <span class="s2">"Name of local network to extend"</span>
<span class="nx">network_type</span> <span class="p">=</span> <span class="s2">"NsxtSegment"</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span> <span class="nx">hcx_service_mesh</span><span class="p">.</span><span class="nx">service_mesh_1</span> <span class="p">]</span>
<span class="nx">destination_t1</span> <span class="p">=</span> <span class="s2">"Tier1"</span>
<span class="nx">gateway</span> <span class="p">=</span> <span class="s2">"192.168.10.1"</span>
<span class="nx">netmask</span> <span class="p">=</span> <span class="s2">"255.255.255.0"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>This final block is optional but helpful in testing a migration. This block extends a network from your data center into GCVE using HCX Network Extension. This example extends an NSX segment, but the <a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/l2_extension">hcx_l2_extension resource documentation</a> provides the parameters needed to extend a DVS-based network. You will need to know the name of the tier 1 router in GCVE you wish to connect this network to.</p>
<h2 id="variables-used">Variables Used</h2>
<p>The following input variables are required for this example:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">hcx_admin_username</code>: Username for on-prem HCX appliance management. Default value is <code class="language-plaintext highlighter-rouge">admin</code>.</li>
<li><code class="language-plaintext highlighter-rouge">hcx_admin_password</code>: Password for on-prem HCX appliance management</li>
<li><code class="language-plaintext highlighter-rouge">hcx_username</code>: Username for on-prem HCX instance</li>
<li><code class="language-plaintext highlighter-rouge">hcx_password</code>: Password for on-prem HCX instance</li>
<li><code class="language-plaintext highlighter-rouge">gcve_hcx_username</code>: Username for GCVE HCX instance. Default value is <code class="language-plaintext highlighter-rouge">CloudOwner@gve.local</code></li>
<li><code class="language-plaintext highlighter-rouge">gcve_hcx_password</code>: Password for GCVE HCX instance</li>
</ul>
<h3 id="using-environment-variables">Using Environment Variables</h3>
<p>You can use the following commands on macOS or Linux to provide these variable values via environment variables. This is a good practice when passing credentials to Terraform.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">export </span><span class="nv">TF_VAR_hcx_admin_username</span><span class="o">=</span><span class="s1">'admin'</span>
<span class="nb">export </span><span class="nv">TF_VAR_hcx_admin_password</span><span class="o">=</span><span class="s1">'password'</span>
<span class="nb">export </span><span class="nv">TF_VAR_hcx_username</span><span class="o">=</span><span class="s1">'hcxuser@yourcompany.biz'</span>
<span class="nb">export </span><span class="nv">TF_VAR_hcx_password</span><span class="o">=</span><span class="s1">'password'</span>
<span class="nb">export </span><span class="nv">TF_VAR_gcve_hcx_username</span><span class="o">=</span><span class="s1">'CloudOwner@gve.local'</span>
<span class="nb">export </span><span class="nv">TF_VAR_gcve_hcx_password</span><span class="o">=</span><span class="s1">'password'</span>
</code></pre></div></div>
<p>You can use the <code class="language-plaintext highlighter-rouge">unset</code> command to remove these environment variables, if necessary.</p>
<h2 id="initializing-and-running-terraform">Initializing and Running Terraform</h2>
<p>See the <a href="https://github.com/shamsway/gcp-terraform-examples/blob/main/gcve-hcx/README.md">README</a> included in the example repo for the steps required to initialize and run Terraform. This is the same process as previous examples.</p>
<h1 id="final-thoughts">Final Thoughts</h1>
<p>It feels good to finally be able to migrate some workloads into our GCVE environment! Admittedly, this example is a bit of a stretch and may not be useful for all HCX users. My team works heavily with HCX, and we are frequently standing up or removing an HCX service mesh for various environments. This provider will be a huge time saver for us and will be especially valuable once there are a few fixes and improvements. Configuring HCX via the UI is an excellent option for new users, but once you are standing up your tenth service mesh, it becomes apparent that using Terraform is much quicker than clicking through several dialogs. I also believe that seeing the HCX configuration represented in Terraform code provides an excellent overview of all of the configuration needed, and how the different components stack together like Legos to form a complete service mesh.</p>
<p>What about automating the actual migration of VMs? This example prepares our environment for migration, but automating VM migration is best suited for a different tool than Terraform. Luckily, there are plenty of HCX-specific cmdlets in <a href="https://developer.vmware.com/powercli">PowerCLI</a>. Check out these <a href="https://blogs.vmware.com/PowerCLI/2019/02/getting-started-hcx-module.html">existing</a> <a href="https://code.vmware.com/samples?categories=Sample&tags=HCX">resources</a> for some examples of using PowerCLI with HCX.</p>
<p>This blog series is approaching its conclusion, but in my next post I’ll dive into configuring some common network use cases, like exposing a VM to the internet and configuring a load balancer in GCVE.</p>
<h1 id="helpful-links">Helpful Links</h1>
<ul>
<li><a href="https://cloud.google.com/vmware-engine/docs/workloads/howto-migrate-vms-using-hcx">Migrating VMware VMs using VMware HCX</a></li>
<li><a href="https://labs.hol.vmware.com/HOL/catalogs/lab/8843">Google Cloud VMware Engine Overview</a> Hands-on Lab, which includes HCX configuration.</li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/">adeleporte/hcx</a> community Terraform provider for HCX</li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/guides/lab">HCX Lab - Full HCX Connector configuration</a> Terraform example</li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/site_pairing">hcx_site_pairing Resource</a></li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/network_profile">hcx_network_profile Resource</a></li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/compute_profile">hcx_compute_profile Resource</a></li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/service_mesh">hcx_service_mesh Resource</a></li>
<li><a href="https://registry.terraform.io/providers/adeleporte/hcx/latest/docs/resources/l2_extension">hcx_l2_extension Resource</a></li>
<li><a href="https://blogs.vmware.com/PowerCLI/2019/02/getting-started-hcx-module.html">Getting Started with the PowerCLI HCX Module</a></li>
<li><a href="https://code.vmware.com/samples?categories=Sample&tags=HCX">PowerCLI Example Scripts for HCX</a></li>
</ul>
<h1 id="screenshots">Screenshots</h1>
<p>Below are screenshots from HCX showing the results of running this Terraform example in my lab, for reference. I have modified the example code to match the configuration of my lab environment.</p>
<p class="center"><a href="/resources/2021/04/39_hcx_np.png" class="drop-shadow"><img src="/resources/2021/04/39_hcx_np.png" alt="" /></a>
HCX Network Profiles</p>
<p class="center"><a href="/resources/2021/04/40_hcx_cp.png" class="drop-shadow"><img src="/resources/2021/04/40_hcx_cp.png" alt="" /></a>
HCX Compute Profile</p>
<p class="center"><a href="/resources/2021/04/41_hcx_sm_edited.png" class="drop-shadow"><img src="/resources/2021/04/41_hcx_sm_edited.png" alt="" /></a>
HCX Service Mesh</p>
<p class="center"><a href="/resources/2021/04/42_hcx_sm_appliance_details.png" class="drop-shadow"><img src="/resources/2021/04/42_hcx_sm_appliance_details.png" alt="" /></a>
HCX Service Mesh Appliance Details</p>
<p class="center"><a href="/resources/2021/04/43_hcx_ne_edited.png" class="drop-shadow"><img src="/resources/2021/04/43_hcx_ne_edited.png" alt="" /></a>
HCX Network Extension</p>
<p class="center"><a href="/resources/2021/04/45_hcx_vmotion_edited.png" class="drop-shadow"><img src="/resources/2021/04/45_hcx_vmotion_edited.png" alt="" /></a>
HCX vMotion Test</p>Matt ElliottNow that we have an SDDC running in Google Cloud VMware Engine, it is time to migrate some workloads into the cloud. VMware HCX will be the tool I use to migrate Virtual Machines to GCVE.Intro to Google Cloud VMware Engine – Network and Connectivity Overview2021-03-18T00:00:00+00:002021-03-18T00:00:00+00:00https://networkbrouhaha.com/2021/03/gcve-network-overview<p>In previous posts, I’ve shown you how to deploy an SDDC in Google Cloud VMware Engine, connect the SDDC to a VPC, and deploy a bastion host for managing your environment. In this post, we’ll pause deploying anything new and take a closer look at our SDDC, with an overview of its networking configuration and capabilities and how to connect to it from an external site.</p>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a></li>
<li><a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a></li>
<li><a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a></li>
<li><a href="/2021/04/gcve-hcx-config/">HCX Configuration</a></li>
<li><a href="/2021/05/gcve-networking-scenarios/">Common Networking Scenarios</a></li>
</ul>
<h1 id="sddc-networking-overview">SDDC Networking Overview</h1>
<p class="center"><a href="/resources/2021/03/gcve_arch.png" class="drop-shadow"><img src="/resources/2021/03/gcve_arch.png" alt="" /></a>
Google Cloud VMware Engine Overview by Google, licensed under <a href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a></p>
<p>An SDDC running in GCVE consists of VMware vSphere, vCenter, vSAN, NSX-T, and optionally HCX, all running on top of Google Cloud infrastructure. Let’s take a peek at an SDDC deployment.</p>
<h3 id="vds-and-n-vds-configuration">VDS and N-VDS Configuration</h3>
<p class="center"><a href="/resources/2021/03/25_gcve_dvs_edited.png" class="drop-shadow"><img src="/resources/2021/03/25_gcve_dvs_edited.png" alt="" /></a></p>
<p>The single VDS in the SDDC has a basic configuration and exists to provide connectivity for HCX. The VLANs listed are locally significant to Google’s infrastructure and not something we need to worry about.</p>
<p class="center"><a href="/resources/2021/03/26_gcve_virtual_switches_edited.png" class="drop-shadow"><img src="/resources/2021/03/26_gcve_virtual_switches_edited.png" alt="" /></a></p>
<p>The virtual switch settings for one of the ESXi hosts provide a better picture of the networking landscape. Here we can see both the vanilla VDS and the N-VDS managed by NSX-T. Almost all of the networking configuration we will perform will be in NSX-T, but I wanted to show the underlying configuration for curious individuals.</p>
<p class="center"><a href="/resources/2021/03/36_nsxt_nvds_visual_edited.png" class="drop-shadow"><img src="/resources/2021/03/36_nsxt_nvds_visual_edited.png" alt="" /></a></p>
<p>We’ll look at NSX-T further below, but this screenshot from NSX-T is a simple visualization of the N-VDS deployed.</p>
<h3 id="vmkernel-and-vmnic-configuration">VMkernel and vmnic Configuration</h3>
<p class="center"><a href="/resources/2021/03/28_gcve_vmk_edited.png" class="drop-shadow"><img src="/resources/2021/03/28_gcve_vmk_edited.png" alt="" /></a></p>
<p>VMkernel configuration is straightforward, with dedicated adapters for management, vSAN, and vMotion. The IP addresses correspond with the management, vSAN, and vMotion subnets that were automatically created when the SDDC was deployed.</p>
<p class="center"><a href="/resources/2021/03/27_gcve_phys_adapters_edited.png" class="drop-shadow"><img src="/resources/2021/03/27_gcve_phys_adapters_edited.png" alt="" /></a></p>
<p>There are four 25 Gbps vmnics (physical adapters) in each host, providing an aggregate of 100 Gbps per host. Two vmnics are dedicated to the VDS, and two are dedicated to the N-VDS.</p>
<h3 id="nsx-t-configuration">NSX-T Configuration</h3>
<p class="center"><a href="/resources/2021/03/30_gcve_t0_bgp.png" class="drop-shadow"><img src="/resources/2021/03/30_gcve_t0_bgp.png" alt="" /></a></p>
<p>The out-of-the-box NSX-T configuration for GCVE should look very familiar to you if you have ever deployed <a href="https://www.vmware.com/products/cloud-foundation.html">VMware Cloud Foundation</a>. The T0 router has redundant BGP connections to Google’s infrastructure.</p>
<p class="center"><a href="/resources/2021/03/31_gcve_nsx_firewall.png" class="drop-shadow"><img src="/resources/2021/03/31_gcve_nsx_firewall.png" alt="" /></a></p>
<p>There are no NAT rules configured, and the firewall has a default <code class="language-plaintext highlighter-rouge">allow any any</code> rule. This may not be what you were expecting, but by the end of this post, it should make more sense. We will look at traffic flows in the <strong>SDDC Networking Capabilities</strong> section below.</p>
<p class="center"><a href="/resources/2021/03/32_gcve_tzs.png" class="drop-shadow"><img src="/resources/2021/03/32_gcve_tzs.png" alt="" /></a></p>
<p>The configured transport zones consist of three VLAN TZs and a single overlay TZ. The VLAN TZs facilitate the plumbing between the T0 router and Google infrastructure for BGP peering. The <code class="language-plaintext highlighter-rouge">TZ-OVERLAY</code> zone is where workload segments will be placed.</p>
<p class="center"><a href="/resources/2021/03/35_gcve_edge_nodes_edited.png" class="drop-shadow"><img src="/resources/2021/03/35_gcve_edge_nodes_edited.png" alt="" /></a></p>
<p>Finally, there is one edge cluster consisting of two edge nodes to host the NSX-T logical routers.</p>
<h1 id="sddc-networking-capabilities">SDDC Networking Capabilities</h1>
<p>Now that we’ve peeked behind the curtain, let’s talk about what you can actually <em>do</em> with your SDDC. This is by no means an exhaustive list, but here are some common use cases:</p>
<ul>
<li>Create workload segments in NSX-T</li>
<li>Expose VMs or services to the internet via public IP</li>
<li>Leverage NSX-T load balancing capabilities</li>
<li>Create north-south firewall policies with the NSX-T gateway firewall</li>
<li>Create east-west firewall policies (i.e., micro-segmentation) with the NSX-T distributed firewall</li>
<li>Access and consume Google Cloud native services</li>
<li>Migrate VMs from your on-prem data center to your GCVE SDDC with VMware HCX</li>
</ul>
<p>I will be covering many of these topics in future posts, including automation examples. Next, let’s look at the options for ingress and egress traffic.</p>
<h3 id="egress-traffic">Egress Traffic</h3>
<p class="center"><a href="/resources/2021/03/gcve_egress.png" class="drop-shadow"><img src="/resources/2021/03/gcve_egress.png" alt="" /></a>
Google Cloud VMware Engine Egress Traffic Flows by Google, licensed under <a href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a></p>
<p>One of the strengths of GCVE is that it provides you with options. As you can see on this diagram, you have three options for egress traffic:</p>
<ol>
<li>Egress through the GCVE internet gateway</li>
<li>Egress through an attached VPC</li>
<li>Egress through your on-prem data center via Cloud Interconnect or Cloud VPN</li>
</ol>
<p>In <a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a>, I walked through the steps to enable <code class="language-plaintext highlighter-rouge">Internet Access</code> and <code class="language-plaintext highlighter-rouge">Public IP Service</code> for your SDDC. This is all that is needed to provide egress internet access through the internet gateway. Internet-bound traffic will be routed from the T0 router to the internet gateway, which NATs all traffic behind a public IP.</p>
<p>Egress through an attached VPC or your on-prem data center requires additional steps that are beyond the scope of this post, but I will provide documentation links for these scenarios at the end.</p>
<h3 id="ingress-traffic">Ingress Traffic</h3>
<p class="center"><a href="/resources/2021/03/gcve_ingress.png" class="drop-shadow"><img src="/resources/2021/03/gcve_ingress.png" alt="" /></a>
Google Cloud VMware Engine Ingress Traffic Flows by Google, licensed under <a href="https://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a></p>
<p>Ingress traffic to GCVE follows similar paths as egress traffic. You can ingress via the public IP service, connected VPC, or through your on-prem data center. Using the public IP service is the least complicated option and requires that you’ve enabled <code class="language-plaintext highlighter-rouge">Public IP Service</code> for your SDDC.</p>
<p class="center"><a href="/resources/2021/03/37_allocate_public_ip.png" class="drop-shadow"><img src="/resources/2021/03/37_allocate_public_ip.png" alt="" /></a></p>
<p>Public IPs are not assigned directly to VMs. Instead, a public IP is allocated and NATed to a private IP in your SDDC. You can allocate a public IP in the GCVE portal by supplying a name for the IP allocation, the region, and the private address.</p>
<h1 id="connecting-to-your-sddc">Connecting to your SDDC</h1>
<p>My previous post, <a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a>, outlines the steps to set up client VPN access to your SDDC, and <a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a> provides an example bastion host setup for managing your SDDC. These are “day 1” options for connectivity, so you will likely need some other method to connect your on-prem data center to your GCVE SDDC. I covered cloud connectivity options in <a href="/2020/11/cloud-connectivity-101/">Cloud Connectivity 101</a>, and many of the methods outlined in that post are available for connecting to GCVE. Today, your options are to use <a href="https://cloud.google.com/network-connectivity/docs/interconnect">Cloud Interconnect</a> or an IPSec tunnel via <a href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview">Cloud VPN</a> or <a href="https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A8B113EC-3D53-41A5-919E-78F1A3705F58.html">NSX-T IPSec VPN</a>.</p>
<p>In our lab, we are lucky to have a connection to <a href="https://www.megaport.com/">Megaport</a>, so I am using Partner Interconnect for my testing with GCVE. This is a very easy solution for connecting to the cloud, and Megaport’s documentation provides simple step-by-step instructions to get up and running. Once complete, BGP peering will be established between the Megaport Cloud Router and a Google Cloud Router.</p>
<h3 id="advertising-routes-to-gcve">Advertising Routes to GCVE</h3>
<p class="center"><a href="/resources/2021/03/38_cloud_router_custom_ip_range_edited.png" class="drop-shadow"><img src="/resources/2021/03/38_cloud_router_custom_ip_range_edited.png" alt="" /></a></p>
<p>VPC peering in Google Cloud does not support transitive routing. This means that I had to add a custom advertised IP range for my GCVE subnets to the Google Cloud Router. After adding this configuration, I was able to ping IPs in my SDDC. You will need to <a href="https://cloud.google.com/vmware-engine/docs/networking/howto-dns-on-premises">configure your DNS server to resolve queries for <code class="language-plaintext highlighter-rouge">gve.goog</code></a> to be able to access vCenter, NSX and HCX by their hostnames.</p>
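<p>If you prefer the CLI, this custom route advertisement can also be added to the Cloud Router with <code class="language-plaintext highlighter-rouge">gcloud</code>. This is a sketch based on my setup; the router name, region, and IP range below are placeholders for your environment:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Advertise the GCVE subnets (placeholder range) in addition to the VPC subnets
gcloud compute routers update my-cloud-router \
  --region=us-east4 \
  --advertisement-mode=CUSTOM \
  --set-advertisement-groups=ALL_SUBNETS \
  --set-advertisement-ranges=192.168.50.0/24
</code></pre></div></div>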
<h3 id="icmp-in-gcve">ICMP in GCVE</h3>
<p>One nuance in GCVE that threw me off is that ICMP is not supported by the internal load balancer, which is in the path for egress traffic if you are using the internet gateway. Trying to ping 8.8.8.8 will fail, even if your SDDC is correctly connected to the internet. To test internet connectivity from a VM in your SDDC, use another tool like <code class="language-plaintext highlighter-rouge">curl</code> or follow the instructions <a href="https://www.xmodulo.com/how-to-install-tcpping-on-linux.html">here</a> to install <code class="language-plaintext highlighter-rouge">tcpping</code> for testing.</p>
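<p>For example, a quick TCP-based connectivity check from a VM in your SDDC might look like this (using <code class="language-plaintext highlighter-rouge">curl</code> in place of ping):</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ICMP will fail through the internet gateway path, so test over TCP instead
curl -sS -o /dev/null -w "%{http_code}\n" --max-time 10 https://www.google.com
</code></pre></div></div>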
<h1 id="next-steps">Next Steps</h1>
<p>Next, we will stage our SDDC networking segments and connect HCX to begin migrating workloads to GCVE. I highly recommend you read the <a href="https://cloud.google.com/solutions/private-cloud-networking-for-vmware-engine">Private cloud networking for Google Cloud VMware Engine</a> whitepaper, which goes into many of the subjects I’ve touched on in this blog in greater detail.</p>Matt ElliottIn previous posts, I've shown you how to deploy an SDDC in Google Cloud VMware Engine, connect the SDDC to a VPC, and deploy a bastion host for managing your environment. In this post, we'll take a pause on deploying anything new to review what we have done so far. This post will provide an overview of the networking configuration and capabilities of our SDDC, and how to connect to it from an external site.Intro to Google Cloud VMware Engine – Bastion Host Access with IAP2021-03-03T00:00:00+00:002021-03-03T00:00:00+00:00https://networkbrouhaha.com/2021/03/gcve-bastion<p>Welcome back! This post will build on the previous posts in this series by deploying a Windows Server 2019 bastion host to manage our Google Cloud VMware Engine (GCVE) SDDC. Access to the bastion host will be provided with <a href="https://cloud.google.com/iap">Identity-Aware Proxy</a> (IAP). Everything will be deployed and configured with Terraform, and all of the code referenced in this post is available at <a href="https://github.com/shamsway/gcp-terraform-examples">https://github.com/shamsway/gcp-terraform-examples</a> in the <code class="language-plaintext highlighter-rouge">gcve-bastion-iap</code> sub-directory.</p>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a></li>
<li><a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a></li>
<li><a href="/2021/03/gcve-network-overview/">Network and Connectivity Overview</a></li>
<li><a href="/2021/04/gcve-hcx-config/">HCX Configuration</a></li>
<li><a href="/2021/05/gcve-networking-scenarios/">Common Networking Scenarios</a></li>
</ul>
<h1 id="identity-aware-proxy-overview">Identity-Aware Proxy Overview</h1>
<p>Standing up initial cloud connectivity is challenging. I walked through the steps to deploy a client VPN in <a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a>, but this post will show how to use IAP to access a new bastion host. Using IAP means that the bastion host will be accessible without having to configure a VPN or expose it to the internet. I am a massive fan of this approach, and while there are some tradeoffs to discuss, it is simpler and more secure than traditional access methods.</p>
<p>IAP can be used to access various resources, including App Engine and GKE. Accessing the bastion host over RDP (TCP port 3389) will be accomplished using <a href="https://cloud.google.com/iap/docs/using-tcp-forwarding">IAP for TCP forwarding</a>. Once configured, IAP will allow us to establish a connection to our bastion host over an encrypted tunnel on demand. Configuring this feature will require some specific IAM roles, as well as some firewall rules in your VPC. If you have <code class="language-plaintext highlighter-rouge">Owner</code> permissions in your GCP project, then you’re good to go. Otherwise, you will need the following roles assigned to complete the tasks outlined in the rest of this post:</p>
<ul>
<li>Compute Admin (<code class="language-plaintext highlighter-rouge">roles/compute.admin</code>)</li>
<li>Service Account Admin (<code class="language-plaintext highlighter-rouge">roles/iam.serviceAccountAdmin</code>)</li>
<li>Service Account User (<code class="language-plaintext highlighter-rouge">roles/iam.serviceAccountUser</code>)</li>
<li>IAP Policy Admin (<code class="language-plaintext highlighter-rouge">roles/iap.admin</code>)</li>
<li>IAP settings Admin (<code class="language-plaintext highlighter-rouge">roles/iap.settingsAdmin</code>)</li>
<li>IAP-secured Tunnel User (<code class="language-plaintext highlighter-rouge">roles/iap.tunnelResourceAccessor</code>)</li>
<li>Service Networking Admin (<code class="language-plaintext highlighter-rouge">roles/servicenetworking.networksAdmin</code>)</li>
<li>Project IAM Admin (<code class="language-plaintext highlighter-rouge">roles/resourcemanager.projectIamAdmin</code>)</li>
</ul>
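<p>If you are missing one of these roles, someone with sufficient permissions can grant it with <code class="language-plaintext highlighter-rouge">gcloud</code>. For example (the project ID and user below are placeholders):</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gcloud projects add-iam-policy-binding my-project-id \
  --member=user:jane@example.com \
  --role=roles/iap.tunnelResourceAccessor
</code></pre></div></div>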
<p>The VPC firewall will need to allow traffic sourced from <code class="language-plaintext highlighter-rouge">35.235.240.0/20</code>, which is the range that IAP uses for TCP forwarding. This rule can be further limited to specific TCP ports, like 3389 for RDP or 22 for SSH.</p>
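<p>The Terraform example below creates this rule, but for reference, the equivalent <code class="language-plaintext highlighter-rouge">gcloud</code> command looks roughly like this (the rule, network, and tag names are placeholders):</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Allow RDP to tagged instances, sourced only from the IAP forwarding range
gcloud compute firewall-rules create allow-rdp-from-iap \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:3389 \
  --source-ranges=35.235.240.0/20 \
  --target-tags=bastion
</code></pre></div></div>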
<h1 id="bastion-host-deployment-with-terraform">Bastion Host Deployment with Terraform</h1>
<p>The example Terraform code linked at the beginning of the post will do the following:</p>
<ul>
<li>Create a <a href="https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances">service account</a>, which will be associated with the bastion host</li>
<li>Create a Windows Server 2019 instance, which will be used as a bastion host</li>
<li>Create <a href="https://cloud.google.com/iap/docs/using-tcp-forwarding#create-firewall-rule">firewall rules</a> for accessing the bastion host via IAP, and accessing resources from the bastion host</li>
<li>Assign <a href="https://cloud.google.com/iap/docs/using-tcp-forwarding#grant-permission">IAM roles needed for IAP</a></li>
<li>Set a password on the bastion host using the <code class="language-plaintext highlighter-rouge">gcloud</code> tool</li>
</ul>
<p>After Terraform completes configuration, you will be able to use the <code class="language-plaintext highlighter-rouge">gcloud</code> tool to enable TCP forwarding for RDP. Once connected to the bastion host, you will be able to log into your GCVE-based vSphere portal. To get started, clone the example repo with <code class="language-plaintext highlighter-rouge">git clone https://github.com/shamsway/gcp-terraform-examples.git</code>, then change to the <code class="language-plaintext highlighter-rouge">gcve-bastion-iap</code> sub-directory. You will find these files:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">main.tf</code> – Contains the primary Terraform code to complete the steps mentioned above</li>
<li><code class="language-plaintext highlighter-rouge">variables.tf</code> – Defines the input variables that will be used in <code class="language-plaintext highlighter-rouge">main.tf</code></li>
<li><code class="language-plaintext highlighter-rouge">terraform.tfvars</code> – Supplies values for the input variables defined in <code class="language-plaintext highlighter-rouge">variables.tf</code></li>
<li><code class="language-plaintext highlighter-rouge">outputs.tf</code> – Defines the output variables to be returned from <code class="language-plaintext highlighter-rouge">main.tf</code></li>
</ul>
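<p>Before running <code class="language-plaintext highlighter-rouge">terraform apply</code>, edit <code class="language-plaintext highlighter-rouge">terraform.tfvars</code> with values for your environment. As a rough sketch (every value below is a placeholder, not a default from the repo):</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>project      = "my-project-id"
region       = "us-east4"
zone         = "us-east4-b"
network_name = "my-vpc"
subnet_name  = "my-subnet"
name         = "bastion"
members      = ["user:jane@example.com"]
</code></pre></div></div>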
<p>Let’s take a closer look at what is happening in each of these files.</p>
<h2 id="maintf-contents">main.tf Contents</h2>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">provider</span> <span class="s2">"google"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">region</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">region</span>
<span class="nx">zone</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">zone</span>
<span class="p">}</span>
<span class="k">data</span> <span class="s2">"google_compute_network"</span> <span class="s2">"network"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">network_name</span>
<span class="p">}</span>
<span class="k">data</span> <span class="s2">"google_compute_subnetwork"</span> <span class="s2">"subnet"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">subnet_name</span>
<span class="nx">region</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">region</span>
<span class="p">}</span>
</code></pre></div></div>
<p>Just like the example from my <a href="/2021/02/gcp-vpc-to-gcve/">last post</a>, <code class="language-plaintext highlighter-rouge">main.tf</code> begins with a <code class="language-plaintext highlighter-rouge">provider</code> block to define the Google Cloud project, region, and zone in which Terraform will create resources. The following data blocks, <code class="language-plaintext highlighter-rouge">google_compute_network.network</code> and <code class="language-plaintext highlighter-rouge">google_compute_subnetwork.subnet</code>, reference an existing VPC network and subnetwork. These data blocks will provide parameters necessary for creating a bastion host and firewall rules.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_service_account"</span> <span class="s2">"bastion_host"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">account_id</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">service_account_name</span>
<span class="nx">display_name</span> <span class="p">=</span> <span class="s2">"Service Account for Bastion"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The first resource block creates a new <a href="https://cloud.google.com/compute/docs/access/service-accounts">service account</a> that will be associated with our bastion host instance.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_instance"</span> <span class="s2">"bastion_host"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">machine_type</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">machine_type</span>
<span class="nx">boot_disk</span> <span class="p">{</span>
<span class="nx">initialize_params</span> <span class="p">{</span>
<span class="nx">image</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">image</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="nx">network_interface</span> <span class="p">{</span>
<span class="nx">subnetwork</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">google_compute_subnetwork</span><span class="p">.</span><span class="nx">subnet</span><span class="p">.</span><span class="nx">self_link</span>
<span class="nx">access_config</span> <span class="p">{}</span>
<span class="p">}</span>
<span class="nx">service_account</span> <span class="p">{</span>
<span class="nx">email</span> <span class="p">=</span> <span class="nx">google_service_account</span><span class="p">.</span><span class="nx">bastion_host</span><span class="p">.</span><span class="nx">email</span>
<span class="nx">scopes</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">scopes</span>
<span class="p">}</span>
<span class="nx">tags</span> <span class="p">=</span> <span class="p">[</span><span class="kd">var</span><span class="p">.</span><span class="nx">tag</span><span class="p">]</span>
<span class="nx">labels</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">labels</span>
<span class="nx">metadata</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">metadata</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">google_compute_instance.bastion_host</code> block creates the bastion host. There are a few things to take note of in this block. <code class="language-plaintext highlighter-rouge">subnetwork</code> is set based on one of the data blocks at the beginning of <code class="language-plaintext highlighter-rouge">main.tf</code>, <code class="language-plaintext highlighter-rouge">data.google_compute_subnetwork.subnet.self_link</code>. The <code class="language-plaintext highlighter-rouge">self_link</code> property provides a unique reference to the subnet that Terraform will use when submitting the API call to create the bastion host. Similarly, the service account created by <code class="language-plaintext highlighter-rouge">google_service_account.bastion_host</code> is assigned to the bastion host.</p>
<p><code class="language-plaintext highlighter-rouge">tags</code>, <code class="language-plaintext highlighter-rouge">labels</code>, and <code class="language-plaintext highlighter-rouge">metadata</code> all serve similar, but distinct, purposes. <code class="language-plaintext highlighter-rouge">tags</code> are network tags, which will be used in firewall rules. <code class="language-plaintext highlighter-rouge">labels</code> are informational data that can be used for organizational or billing purposes. <code class="language-plaintext highlighter-rouge">metadata</code> has numerous uses, the most common of which is supplying a startup script that the instance runs on first boot.</p>
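<p>As an illustration, these three variables might be set like this in <code class="language-plaintext highlighter-rouge">terraform.tfvars</code> (the values are hypothetical; <code class="language-plaintext highlighter-rouge">windows-startup-script-ps1</code> is the metadata key GCE checks for a PowerShell startup script):</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tag    = "bastion"
labels = {
  environment = "lab"
}
metadata = {
  windows-startup-script-ps1 = "Install-WindowsFeature -Name RSAT-DNS-Server"
}
</code></pre></div></div>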
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_firewall"</span> <span class="s2">"allow_from_iap_to_bastion"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">fw_name_allow_iap_to_bastion</span>
<span class="nx">network</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">network</span><span class="p">.</span><span class="nx">self_link</span>
<span class="nx">allow</span> <span class="p">{</span>
<span class="nx">protocol</span> <span class="p">=</span> <span class="s2">"tcp"</span>
<span class="nx">ports</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"3389"</span><span class="p">]</span>
<span class="p">}</span>
<span class="c1"># https://cloud.google.com/iap/docs/using-tcp-forwarding#before_you_begin</span>
<span class="c1"># This range is needed to allow IAP to access the bastion host</span>
<span class="nx">source_ranges</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"35.235.240.0/20"</span><span class="p">]</span>
<span class="nx">target_tags</span> <span class="p">=</span> <span class="p">[</span><span class="kd">var</span><span class="p">.</span><span class="nx">tag</span><span class="p">]</span>
<span class="p">}</span>
<span class="k">resource</span> <span class="s2">"google_compute_firewall"</span> <span class="s2">"allow_access_from_bastion"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">fw_name_allow_mgmt_from_bastion</span>
<span class="nx">network</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">network</span><span class="p">.</span><span class="nx">self_link</span>
<span class="nx">allow</span> <span class="p">{</span>
<span class="nx">protocol</span> <span class="p">=</span> <span class="s2">"icmp"</span>
<span class="p">}</span>
<span class="nx">allow</span> <span class="p">{</span>
<span class="nx">protocol</span> <span class="p">=</span> <span class="s2">"tcp"</span>
<span class="nx">ports</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"22"</span><span class="p">,</span> <span class="s2">"80"</span><span class="p">,</span> <span class="s2">"443"</span><span class="p">,</span> <span class="s2">"3389"</span><span class="p">]</span>
<span class="p">}</span>
<span class="c1"># Allow management traffic from bastion</span>
<span class="nx">source_tags</span> <span class="p">=</span> <span class="p">[</span><span class="kd">var</span><span class="p">.</span><span class="nx">tag</span><span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The next two blocks create firewall rules: one for accessing the bastion host via IAP, and the other for accessing resources from the bastion host. <code class="language-plaintext highlighter-rouge">google_compute_firewall.allow_from_iap_to_bastion</code> allows traffic from <code class="language-plaintext highlighter-rouge">35.235.240.0/20</code> on <code class="language-plaintext highlighter-rouge">tcp/3389</code> to instances that have the same network tag as the one assigned to the bastion host. <code class="language-plaintext highlighter-rouge">google_compute_firewall.allow_access_from_bastion</code> allows traffic from the bastion host, again matched by network tag, to anything else in our project on common management ports/protocols.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_iap_tunnel_instance_iam_binding"</span> <span class="s2">"enable_iap"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">zone</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">zone</span>
<span class="nx">instance</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">role</span> <span class="p">=</span> <span class="s2">"roles/iap.tunnelResourceAccessor"</span>
<span class="nx">members</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">members</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span><span class="nx">google_compute_instance</span><span class="p">.</span><span class="nx">bastion_host</span><span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">google_iap_tunnel_instance_iam_binding.enable_iap</code> block assigns the <code class="language-plaintext highlighter-rouge">roles/iap.tunnelResourceAccessor</code> IAM role to the accounts defined in the <code class="language-plaintext highlighter-rouge">members</code> variable. This value can be any valid IAM member, such as a specific user account (<code class="language-plaintext highlighter-rouge">user:</code> prefix) or a group (<code class="language-plaintext highlighter-rouge">group:</code> prefix). This role is required to access the bastion host via IAP.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_service_account_iam_binding"</span> <span class="s2">"bastion_sa_user"</span> <span class="p">{</span>
<span class="nx">service_account_id</span> <span class="p">=</span> <span class="nx">google_service_account</span><span class="p">.</span><span class="nx">bastion_host</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">role</span> <span class="p">=</span> <span class="s2">"roles/iam.serviceAccountUser"</span>
<span class="nx">members</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">members</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">google_service_account_iam_binding.bastion_sa_user</code> block allows accounts specified in the <code class="language-plaintext highlighter-rouge">members</code> variable to use the newly created service account via the <code class="language-plaintext highlighter-rouge">Service Account User</code> role (<code class="language-plaintext highlighter-rouge">roles/iam.serviceAccountUser</code>). This allows the users or groups defined in the <code class="language-plaintext highlighter-rouge">members</code> variable to access all of the resources that the service account has rights to access. More information on this can be found <a href="https://cloud.google.com/iam/docs/service-accounts#user-role">here</a>.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_project_iam_member"</span> <span class="s2">"bastion_sa_bindings"</span> <span class="p">{</span>
<span class="nx">for_each</span> <span class="p">=</span> <span class="nx">toset</span><span class="p">(</span><span class="kd">var</span><span class="p">.</span><span class="nx">service_account_roles</span><span class="p">)</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">role</span> <span class="p">=</span> <span class="nx">each</span><span class="p">.</span><span class="nx">key</span>
<span class="nx">member</span> <span class="p">=</span> <span class="s2">"serviceAccount:</span><span class="k">${</span><span class="nx">google_service_account</span><span class="p">.</span><span class="nx">bastion_host</span><span class="p">.</span><span class="nx">email</span><span class="k">}</span><span class="s2">"</span>
<span class="p">}</span>
</code></pre></div></div>
<p><code class="language-plaintext highlighter-rouge">google_project_iam_member.bastion_sa_bindings</code> completes the IAM-related configuration by granting roles defined in the <code class="language-plaintext highlighter-rouge">service_account_roles</code> variable to the service account. This service account is assigned to the bastion host, which defines what the bastion host can do. The default roles assigned are listed below, but they can be modified in <code class="language-plaintext highlighter-rouge">variables.tf</code>.</p>
<ul>
<li>Log Writer (<code class="language-plaintext highlighter-rouge">roles/logging.logWriter</code>)</li>
<li>Monitoring Metric Writer (<code class="language-plaintext highlighter-rouge">roles/monitoring.metricWriter</code>)</li>
<li>Monitoring Viewer (<code class="language-plaintext highlighter-rouge">roles/monitoring.viewer</code>)</li>
<li>Compute OS Login (<code class="language-plaintext highlighter-rouge">roles/compute.osLogin</code>)</li>
</ul>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"time_sleep"</span> <span class="s2">"wait_60_seconds"</span> <span class="p">{</span>
<span class="nx">create_duration</span> <span class="p">=</span> <span class="s2">"60s"</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span><span class="nx">google_compute_instance</span><span class="p">.</span><span class="nx">bastion_host</span><span class="p">]</span>
<span class="p">}</span>
<span class="k">data</span> <span class="s2">"external"</span> <span class="s2">"gcloud_set_bastion_password"</span> <span class="p">{</span>
<span class="nx">program</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"bash"</span><span class="p">,</span> <span class="s2">"-c"</span><span class="p">,</span> <span class="s2">"gcloud compute reset-windows-password </span><span class="k">${</span><span class="kd">var</span><span class="p">.</span><span class="nx">name</span><span class="k">}</span><span class="s2"> --user=</span><span class="k">${</span><span class="kd">var</span><span class="p">.</span><span class="nx">username</span><span class="k">}</span><span class="s2"> --format=json --quiet"</span><span class="p">]</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span><span class="nx">time_sleep</span><span class="p">.</span><span class="nx">wait_60_seconds</span><span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>
<p>These final two blocks are what I refer to as “cool Terraform tricks.” The point of these blocks is to set the password on the bastion host. There are a few ways to do this, but unfortunately, there is no way to set a Windows instance password with a native Terraform resource. Instead, an <code class="language-plaintext highlighter-rouge">external</code> data source is used to run the appropriate <code class="language-plaintext highlighter-rouge">gcloud</code> command, with JSON-formatted results returned (this is a requirement of the <code class="language-plaintext highlighter-rouge">external</code> data source). The password cannot be set until the bastion host is fully booted, so <code class="language-plaintext highlighter-rouge">data.external.gcloud_set_bastion_password</code> depends on <code class="language-plaintext highlighter-rouge">time_sleep.wait_60_seconds</code>, a simple 60-second timer that gives the bastion host time to boot before the <code class="language-plaintext highlighter-rouge">gcloud</code> command runs.</p>
<p>There is a chance that 60 seconds may not be long enough for the bastion host to boot. If you receive an error stating that the instance is not ready for use, you have two options:</p>
<ol>
<li>Run <code class="language-plaintext highlighter-rouge">terraform destroy</code> to remove the bastion host. Edit <code class="language-plaintext highlighter-rouge">main.tf</code>, increase the <code class="language-plaintext highlighter-rouge">create_duration</code> value, then run <code class="language-plaintext highlighter-rouge">terraform apply</code> again.</li>
<li>Run the <code class="language-plaintext highlighter-rouge">gcloud compute reset-windows-password</code> command manually.</li>
</ol>
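<p>The timer referenced in option 1 is provided by the <code class="language-plaintext highlighter-rouge">time_sleep</code> resource. As a rough sketch (the name of the bastion instance resource is my assumption), it looks something like this:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>resource "time_sleep" "wait_60_seconds" {
  # Assumes the bastion instance resource is named "bastion"
  depends_on      = [google_compute_instance.bastion]

  # Increase this value if 60 seconds is not enough time to boot
  create_duration = "60s"
}
</code></pre></div></div>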
<p>Ideally, the password reset functionality would be built into the Google Cloud Terraform provider, and I wouldn’t be surprised to see it added in the future. If you’re reading this post in 2022 or beyond, it’s probably worth a quick investigation to see if this has happened.</p>
<h2 id="outputtf-contents">output.tf Contents</h2>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">output</span> <span class="s2">"bastion_username"</span> <span class="p">{</span>
<span class="nx">value</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">external</span><span class="p">.</span><span class="nx">gcloud_set_bastion_password</span><span class="p">.</span><span class="nx">result</span><span class="p">.</span><span class="nx">username</span>
<span class="p">}</span>
<span class="k">output</span> <span class="s2">"bastion_password"</span> <span class="p">{</span>
<span class="nx">value</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">external</span><span class="p">.</span><span class="nx">gcloud_set_bastion_password</span><span class="p">.</span><span class="nx">result</span><span class="p">.</span><span class="nx">password</span>
<span class="p">}</span>
</code></pre></div></div>
<p>These two outputs expose the results of running the <code class="language-plaintext highlighter-rouge">gcloud</code> command. Once Terraform finishes running, it will display the username and password set on the bastion host. A password is sensitive data, so if you want to prevent it from being displayed, add <code class="language-plaintext highlighter-rouge">sensitive = true</code> to the <code class="language-plaintext highlighter-rouge">bastion_password</code> output block. Output values are stored in the Terraform state file, so you should take precautions to protect the state file from unauthorized access. Additional information on Terraform outputs is available <a href="https://www.terraform.io/docs/language/values/outputs.html">here</a>.</p>
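<p>For example, here is what the <code class="language-plaintext highlighter-rouge">bastion_password</code> output looks like with <code class="language-plaintext highlighter-rouge">sensitive = true</code> added:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>output "bastion_password" {
  value     = data.external.gcloud_set_bastion_password.result.password

  # Masks the value in CLI output; it is still stored
  # in plain text in the state file
  sensitive = true
}
</code></pre></div></div>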
<h2 id="terraformtfvars-contents">terraform.tfvars Contents</h2>
<p><code class="language-plaintext highlighter-rouge">terraform.tfvars</code> is the file that defines all the variables that are referenced in <code class="language-plaintext highlighter-rouge">main.tf</code>. All you need to do is supply the desired values for your environment, and you are good to go. Note that the variables below are all examples, so simply copying and pasting may not lead to the desired result.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">members</span> <span class="o">=</span> <span class="p">[</span><span class="s">"user:you@domain.com"</span><span class="p">]</span>
<span class="n">project</span> <span class="o">=</span> <span class="s">"your-gcp-project"</span>
<span class="n">region</span> <span class="o">=</span> <span class="s">"us-west2"</span>
<span class="n">zone</span> <span class="o">=</span> <span class="s">"us-west2-a"</span>
<span class="n">service_account_name</span> <span class="o">=</span> <span class="s">"bastion-sa"</span>
<span class="n">name</span> <span class="o">=</span> <span class="s">"bastion-vm"</span>
<span class="n">username</span> <span class="o">=</span> <span class="s">"bastionuser"</span>
<span class="n">labels</span> <span class="o">=</span> <span class="p">{</span> <span class="n">owner</span> <span class="o">=</span> <span class="s">"GCVE Team"</span><span class="p">,</span> <span class="n">created_with</span> <span class="o">=</span> <span class="s">"terraform"</span> <span class="p">}</span>
<span class="n">image</span> <span class="o">=</span> <span class="s">"gce-uefi-images/windows-2019"</span>
<span class="n">machine_type</span> <span class="o">=</span> <span class="s">"n1-standard-1"</span>
<span class="n">network_name</span> <span class="o">=</span> <span class="s">"gcve-usw2"</span>
<span class="n">subnet_name</span> <span class="o">=</span> <span class="s">"gcve-usw2-mgmt"</span>
<span class="n">tag</span> <span class="o">=</span> <span class="s">"bastion"</span>
</code></pre></div></div>
<p>Additional information on the variables used is available in <a href="https://github.com/shamsway/gcp-terraform-examples/blob/main/gcve-bastion-iap/README.md">README.md</a>. You can also find information on these variables, including their default values should one exist, in <code class="language-plaintext highlighter-rouge">variables.tf</code>.</p>
<h2 id="initializing-and-running-terraform">Initializing and Running Terraform</h2>
<p>Terraform will use <a href="https://cloud.google.com/sdk/gcloud/reference/auth/application-default">Application Default Credentials</a> to authenticate to Google Cloud. Assuming you have the <code class="language-plaintext highlighter-rouge">gcloud</code> CLI installed, you can set these by running <code class="language-plaintext highlighter-rouge">gcloud auth application-default login</code>. Additional information on authentication can be found in the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started">Getting Started with the Google Provider</a> Terraform documentation. To run the Terraform code, follow the steps below.</p>
<p><strong>Following these steps will create resources in your Google Cloud project, and you will be billed for them.</strong></p>
<ol>
<li>Run <code class="language-plaintext highlighter-rouge">terraform init</code> and ensure no errors are displayed</li>
<li>Run <code class="language-plaintext highlighter-rouge">terraform plan</code> and review the changes that Terraform will perform</li>
<li>Run <code class="language-plaintext highlighter-rouge">terraform apply</code> to apply the proposed configuration changes</li>
</ol>
<p>Should you wish to remove everything created by Terraform, run <code class="language-plaintext highlighter-rouge">terraform destroy</code> and answer <code class="language-plaintext highlighter-rouge">yes</code> when prompted. This will only remove the resources that Terraform created in this example. Your GCVE environment will have to be deleted using <a href="https://cloud.google.com/vmware-engine/docs/private-clouds/howto-delete-private-cloud">these instructions</a>, if desired.</p>
<h1 id="accessing-the-bastion-host-with-iap">Accessing the Bastion Host with IAP</h1>
<p>Now, you should have a fresh Windows 2019 Server running in Google Cloud to serve as a bastion host. Use this command to create a tunnel to the bastion host:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>gcloud compute start-iap-tunnel <span class="o">[</span>bastion-host-name] 3389 <span class="nt">--zone</span> <span class="o">[</span>zone]
</code></pre></div></div>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/03/20_gcloud_iap_tunnel.png" alt="" class="drop-shadow" /></p>
<p>You will see a message that says <code class="language-plaintext highlighter-rouge">Listening on port [random number]</code>. This random high port is proxied to your bastion host port 3389. Fire up your favorite RDP client and connect to <code class="language-plaintext highlighter-rouge">localhost:[random number]</code>. Login with the credentials that were output from running Terraform. Once you’re able to connect to the bastion host, install the vSphere-compatible browser of your choice, along with any other management tools you may need.</p>
<p>If you’re a Windows user, there is an IAP-enabled RDP client available <a href="https://github.com/GoogleCloudPlatform/iap-desktop">here</a>.</p>
<h1 id="accessing-gcve-resources-from-the-bastion-host">Accessing GCVE Resources from the Bastion Host</h1>
<p>Open the GCVE portal, browse to <code class="language-plaintext highlighter-rouge">Resources</code>, and click on your SDDC, then <code class="language-plaintext highlighter-rouge">vSphere Management Network</code>. This will display the hostnames for your vCenter, NSX and HCX instances. Copy the hostname for vCenter and paste it into a browser in your bastion host to verify you can access your SDDC.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/03/21_cloud_dns_forwarding_edited.png" alt="" class="drop-shadow" />
<em>Cloud DNS forwarding config to enable resolution of GCVE resources</em></p>
<p>Access to GCVE from your VPC is made possible by private service access and a DNS forwarding configuration in Cloud DNS. The DNS forwarding configuration enables name resolution from your VPC for resources in GCVE. It is automatically created in Cloud DNS when private service access is configured between your VPC and GCVE. This is a relatively new feature and a nice improvement. Previously, name resolution for GCVE required manually changing resolvers on your bastion host or configuring a standalone DNS server.</p>
<h1 id="wrap-up">Wrap Up</h1>
<p>A quick recap of everything we’ve accomplished if you’ve been following this blog series from the beginning:</p>
<ul>
<li>Deployed an SDDC in GCVE</li>
<li>Created a new VPC and configured private service access to your SDDC</li>
<li>Deployed a bastion host in your VPC, accessible via IAP</li>
</ul>
<p>Clearly, we are just getting started! My next post will look at configuring Cloud Interconnect and standing up an HCX service mesh. With that in place, we can begin migrating some workloads into our SDDC.</p>
<h1 id="terraform-documentation-links">Terraform Documentation Links</h1>
<ul>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference">Google Provider Configuration Reference</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_network">google_compute_network Data Source</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_subnetwork">google_compute_subnetwork Data Source</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account">google_service_account Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance">google_compute_instance Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_firewall">google_compute_firewall Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/iap_tunnel_instance_iam">google_iap_tunnel_instance_iam_binding Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_service_account_iam">google_service_account_iam_binding Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam">google_project_iam_member Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep">time_sleep Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source">external Data Source</a></li>
</ul>Matt ElliottWelcome back! This post will build on our previous work by deploying a Windows Server 2019 bastion host to manage our Google Cloud VMware Engine (GCVE) SDDC. Access to the bastion host will be provided with Identity-Aware Proxy (IAP). As was done in my previous post, everything will be deployed and configured with Terraform.Intro to Google Cloud VMware Engine – Connecting a VPC to GCVE2021-02-19T00:00:00+00:002021-02-19T00:00:00+00:00https://networkbrouhaha.com/2021/02/gcp-vpc-to-gcve<p>My <a href="/2021/02/gcve-sddc-with-hcx/">previous post</a> walked through deploying an SDDC in Google Cloud VMware Engine (GCVE). This post will show the process of connecting a VPC to your GCVE environment, and we will use Terraform to do the vast majority of the work. The diagram below shows the basic concept of what I will be covering in this post. Once connected, you will be able to communicate from your VPC to your SDDC and vice versa. If you would like to complete this process using the cloud console instead of Terraform, see <a href="https://cloud.google.com/vmware-engine/docs/networking/howto-setup-private-service-access">Setting up private service access</a> in the VMware Engine documentation.</p>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcve-sddc-with-hcx/">Deploying a GCVE SDDC with HCX</a></li>
<li><a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a></li>
<li><a href="/2021/03/gcve-network-overview/">Network and Connectivity Overview</a></li>
<li><a href="/2021/04/gcve-hcx-config/">HCX Configuration</a></li>
<li><a href="/2021/05/gcve-networking-scenarios/">Common Networking Scenarios</a></li>
</ul>
<p class="center"><a href="/resources/2021/02/gcve-vpc-peeing.png" class="drop-shadow"><img src="/resources/2021/02/gcve-vpc-peeing.png" alt="" /></a></p>
<p>I’m assuming you have a working SDDC deployed in VMware Engine and some basic knowledge of how Terraform works so you can use the provided Terraform examples. If you have not yet deployed an SDDC, please do so before continuing. If you need to get up to speed with Terraform, browse over to <a href="https://learn.hashicorp.com/terraform">https://learn.hashicorp.com/terraform</a>. All of the code referenced in this post will be available at <a href="https://github.com/shamsway/gcp-terraform-examples">https://github.com/shamsway/gcp-terraform-examples</a> in the <code class="language-plaintext highlighter-rouge">gcve-network</code> sub-directory. You will need to have git installed to clone the repo, and I highly recommend using <a href="https://github.com/microsoft/vscode">Visual Studio Code</a> with the Terraform add-on installed to view the files.</p>
<h1 id="private-service-access-overview">Private Service Access Overview</h1>
<p>GCVE SDDCs can establish connectivity to native GCP services with <a href="https://cloud.google.com/vpc/docs/private-services-access">private services access</a>. This feature can be used to establish connectivity from a VPC to a third-party “service producer,” but in this case, it will simply plumb connectivity between our VPC and SDDC. Configuring private services access requires allocating one or more reserved ranges that cannot be used in your local VPC network. In this case, we will supply the ranges that we have allocated for our VMware Engine SDDC networks. Doing this prevents issues with overlapping IP ranges.</p>
<h1 id="leveraging-terraform-for-configuration">Leveraging Terraform for Configuration</h1>
<p>I have provided Terraform code that will do the following:</p>
<ul>
<li>Create a VPC network</li>
<li>Create a subnet in the new VPC network that will be used to communicate with GCVE</li>
<li>Create two Global Address pools that will be used to reserve addresses used in GCVE</li>
<li>Create a private connection in the new VPC, using the two Global Address pools as reserved ranges</li>
<li>Enable import and export of custom routes for the VPC</li>
</ul>
<p>After Terraform completes configuration, you will be able to establish peering with the new VPC in GCVE. To get started, clone the example repo with <code class="language-plaintext highlighter-rouge">git clone https://github.com/shamsway/gcp-terraform-examples.git</code>, then change to the <code class="language-plaintext highlighter-rouge">gcve-network</code> sub-directory. You will find these files:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">main.tf</code> – Contains the primary Terraform code to complete the steps mentioned above</li>
<li><code class="language-plaintext highlighter-rouge">variables.tf</code> – Defines the input variables that will be used in <code class="language-plaintext highlighter-rouge">main.tf</code></li>
<li><code class="language-plaintext highlighter-rouge">terraform.tfvars</code> – Supplies values for the input variables defined in <code class="language-plaintext highlighter-rouge">variables.tf</code></li>
</ul>
<p>Let’s take a look at what is happening in <code class="language-plaintext highlighter-rouge">main.tf</code>, then we will supply the necessary variables in <code class="language-plaintext highlighter-rouge">terraform.tfvars</code> and run Terraform. You will see <code class="language-plaintext highlighter-rouge">var.[name]</code> appear over and over in the code, as this is how Terraform references variables. You may think it would be easier to place the desired values directly into <code class="language-plaintext highlighter-rouge">main.tf</code> instead of defining and supplying variables, but it is worth the time to get used to using variables with Terraform. Hardcoding values in your code is rarely a good idea, and most Terraform code that I have consumed from other authors uses variables heavily.</p>
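<p>To illustrate the pattern (this is not the verbatim contents of <code class="language-plaintext highlighter-rouge">variables.tf</code>), a variable such as <code class="language-plaintext highlighter-rouge">network_name</code> would be declared along these lines:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Declared once in variables.tf...
variable "network_name" {
  type        = string
  description = "Name of the VPC network to create"
}

# ...then referenced in main.tf as var.network_name,
# and assigned a value in terraform.tfvars
</code></pre></div></div>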
<h2 id="maintf-contents">main.tf Contents</h2>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">provider</span> <span class="s2">"google"</span> <span class="p">{</span>
<span class="nx">project</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">project</span>
<span class="nx">region</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">region</span>
<span class="nx">zone</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">zone</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The file begins with a provider block, which is common in Terraform. This block defines the Google Cloud project, region, and zone in which Terraform will create resources. The values used are specified in <code class="language-plaintext highlighter-rouge">terraform.tfvars</code>, which is the same method we will use throughout this example.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_network"</span> <span class="s2">"vpc_network"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">network_name</span>
<span class="nx">description</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">network_descr</span>
<span class="nx">auto_create_subnetworks</span> <span class="p">=</span> <span class="kc">false</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The first resource block creates a new VPC in the project specified in the provider block. Setting <code class="language-plaintext highlighter-rouge">auto_create_subnetworks</code> to <code class="language-plaintext highlighter-rouge">false</code> creates a custom-mode VPC, rather than one that automatically creates a subnet in every region.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_subnetwork"</span> <span class="s2">"vpc_subnet"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">subnet_name</span>
<span class="nx">ip_cidr_range</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">subnet_cidr</span>
<span class="nx">region</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">region</span>
<span class="nx">network</span> <span class="p">=</span> <span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">vpc_network</span><span class="p">.</span><span class="nx">id</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The next block creates a subnet in the newly created VPC. Notice that the last line references <code class="language-plaintext highlighter-rouge">google_compute_network.vpc_network.id</code> for the network value, meaning that it uses the ID value of the VPC created by Terraform.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_global_address"</span> <span class="s2">"private_ip_alloc_1"</span> <span class="p">{</span>
<span class="nx">name</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">reserved1_name</span>
<span class="nx">address</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">reserved1_address</span>
<span class="nx">purpose</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">address_purpose</span>
<span class="nx">address_type</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">address_type</span>
<span class="nx">prefix_length</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">reserved1_address_prefix_length</span>
<span class="nx">network</span> <span class="p">=</span> <span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">vpc_network</span><span class="p">.</span><span class="nx">id</span>
<span class="p">}</span>
</code></pre></div></div>
<p>This block and the following block (<code class="language-plaintext highlighter-rouge">google_compute_global_address.private_ip_alloc_2</code>) create a private IP allocation used for the private services configuration.</p>
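<p>The second allocation block is not repeated here, but based on the <code class="language-plaintext highlighter-rouge">reserved2_*</code> variables in <code class="language-plaintext highlighter-rouge">terraform.tfvars</code>, it should mirror the first:</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch of the second reserved range, mirroring private_ip_alloc_1
resource "google_compute_global_address" "private_ip_alloc_2" {
  name          = var.reserved2_name
  address       = var.reserved2_address
  purpose       = var.address_purpose
  address_type  = var.address_type
  prefix_length = var.reserved2_address_prefix_length
  network       = google_compute_network.vpc_network.id
}
</code></pre></div></div>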
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_service_networking_connection"</span> <span class="s2">"gcve-psa"</span> <span class="p">{</span>
<span class="nx">network</span> <span class="p">=</span> <span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">vpc_network</span><span class="p">.</span><span class="nx">id</span>
<span class="nx">service</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">service</span>
<span class="nx">reserved_peering_ranges</span> <span class="p">=</span> <span class="p">[</span><span class="nx">google_compute_global_address</span><span class="p">.</span><span class="nx">private_ip_alloc_1</span><span class="p">.</span><span class="nx">name</span><span class="p">,</span> <span class="nx">google_compute_global_address</span><span class="p">.</span><span class="nx">private_ip_alloc_2</span><span class="p">.</span><span class="nx">name</span><span class="p">]</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span><span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">vpc_network</span><span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>
<p>These last two blocks are where things get interesting. The block above configures the private services connection using the VPC network and private IP allocations created by Terraform. <code class="language-plaintext highlighter-rouge">service</code> must be the specific string <code class="language-plaintext highlighter-rouge">servicenetworking.googleapis.com</code>, since Google is the service producer in this scenario. This value is set in <code class="language-plaintext highlighter-rouge">terraform.tfvars</code>, as we will see in a moment. If you find this confusing, the documentation for this resource should help clarify what is happening.</p>
<div class="language-tf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"google_compute_network_peering_routes_config"</span> <span class="s2">"peering_routes"</span> <span class="p">{</span>
<span class="nx">peering</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">peering</span>
<span class="nx">network</span> <span class="p">=</span> <span class="nx">google_compute_network</span><span class="p">.</span><span class="nx">vpc_network</span><span class="p">.</span><span class="nx">name</span>
<span class="nx">import_custom_routes</span> <span class="p">=</span> <span class="kc">true</span>
<span class="nx">export_custom_routes</span> <span class="p">=</span> <span class="kc">true</span>
<span class="nx">depends_on</span> <span class="p">=</span> <span class="p">[</span><span class="nx">google_service_networking_connection</span><span class="p">.</span><span class="nx">gcve</span><span class="err">-</span><span class="nx">psa</span><span class="p">]</span>
<span class="p">}</span>
</code></pre></div></div>
<p>The final block enables the import and export of custom routes for our VPC peering configuration.</p>
<p>Note that the final two blocks contain an argument that none of the others do: <code class="language-plaintext highlighter-rouge">depends_on</code>. The Terraform documentation describes <code class="language-plaintext highlighter-rouge">depends_on</code> in-depth <a href="https://www.terraform.io/docs/language/meta-arguments/depends_on.html">here</a>; in short, it explicitly tells Terraform that one resource relies on another. Terraform can usually infer dependencies automatically, but there are occasional cases where this argument is required. Without it, running <code class="language-plaintext highlighter-rouge">terraform destroy</code> may lead to errors, as Terraform could delete the VPC before removing the private services connection or route peering configuration.</p>
<h2 id="terraformtfvars-contents">terraform.tfvars Contents</h2>
<p><code class="language-plaintext highlighter-rouge">terraform.tfvars</code> is the file that defines all the variables that are referenced in <code class="language-plaintext highlighter-rouge">main.tf</code>. All you need to do is supply the desired values for your environment, and you are good to go. Note that the variables below are all examples, so simply copying and pasting may not lead to the desired result.</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">project</span> <span class="o">=</span> <span class="s">"your-gcp-project"</span>
<span class="n">region</span> <span class="o">=</span> <span class="s">"us-west2"</span>
<span class="n">zone</span> <span class="o">=</span> <span class="s">"us-west2-a"</span>
<span class="n">network_name</span> <span class="o">=</span> <span class="s">"gcve-usw2"</span>
<span class="n">network_descr</span> <span class="o">=</span> <span class="s">"Network for testing of GCVE in USW2"</span>
<span class="n">subnet_name</span> <span class="o">=</span> <span class="s">"gcve-usw2-mgmt"</span>
<span class="n">subnet_cidr</span> <span class="o">=</span> <span class="s">"192.168.82.0/24"</span>
<span class="n">reserved1_name</span> <span class="o">=</span> <span class="s">"gcve-management-ip-alloc"</span>
<span class="n">reserved1_address</span> <span class="o">=</span> <span class="s">"192.168.80.0"</span>
<span class="n">reserved1_address_prefix_length</span> <span class="o">=</span> <span class="mi">23</span>
<span class="n">reserved2_name</span> <span class="o">=</span> <span class="s">"gcve-workload-ip-alloc"</span>
<span class="n">reserved2_address</span> <span class="o">=</span> <span class="s">"192.168.84.0"</span>
<span class="n">reserved2_address_prefix_length</span> <span class="o">=</span> <span class="mi">23</span>
<span class="n">address_purpose</span> <span class="o">=</span> <span class="s">"VPC_PEERING"</span>
<span class="n">address_type</span> <span class="o">=</span> <span class="s">"INTERNAL"</span>
<span class="n">service</span> <span class="o">=</span> <span class="s">"servicenetworking.googleapis.com"</span>
<span class="n">peering</span> <span class="o">=</span> <span class="s">"servicenetworking-googleapis-com"</span>
</code></pre></div></div>
<p>Additional information on the variables used is available in <a href="https://github.com/shamsway/gcp-terraform-examples/blob/main/gcve-vpc-peering/README.md">README.md</a>. You can also find information on these variables, including their default values should one exist, in <code class="language-plaintext highlighter-rouge">variables.tf</code>.</p>
<h2 id="initializing-and-running-terraform">Initializing and Running Terraform</h2>
<p>Terraform will use <a href="https://cloud.google.com/sdk/gcloud/reference/auth/application-default">Application Default Credentials</a> to authenticate to Google Cloud. Assuming you have the <code class="language-plaintext highlighter-rouge">gcloud</code> CLI installed, you can set these by running <code class="language-plaintext highlighter-rouge">gcloud auth application-default login</code>. Additional information on authentication can be found in the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started">Getting Started with the Google Provider</a> Terraform documentation. To run the Terraform code, follow the steps below.</p>
<p><strong>Following these steps will create resources in your Google Cloud project, and you will be billed for them.</strong></p>
<ol>
<li>Run <code class="language-plaintext highlighter-rouge">terraform init</code> and ensure no errors are displayed</li>
<li>Run <code class="language-plaintext highlighter-rouge">terraform plan</code> and review the changes that Terraform will perform</li>
<li>Run <code class="language-plaintext highlighter-rouge">terraform apply</code> to apply the proposed configuration changes</li>
</ol>
<p>Should you wish to remove everything created by Terraform, run <code class="language-plaintext highlighter-rouge">terraform destroy</code> and answer <code class="language-plaintext highlighter-rouge">yes</code> when prompted. This will only remove the VPC network and related configuration created by Terraform. Your GCVE environment will have to be deleted using <a href="https://cloud.google.com/vmware-engine/docs/private-clouds/howto-delete-private-cloud">these instructions</a>, if desired.</p>
<h1 id="review-vpc-configuration">Review VPC Configuration</h1>
<p>Once <code class="language-plaintext highlighter-rouge">terraform apply</code> completes, you can see the results in the <a href="https://console.cloud.google.com/">Google Cloud Console</a>.</p>
<p class="center"><a href="/resources/2021/02/network_allocated_ips_edited.png" class="drop-shadow"><img src="/resources/2021/02/network_allocated_ips_edited.png" alt="" /></a></p>
<p>IP ranges allocated for use in GCVE are reserved.</p>
<p class="center"><a href="/resources/2021/02/network_service_connection_edited.png" class="drop-shadow"><img src="/resources/2021/02/network_service_connection_edited.png" alt="" /></a></p>
<p>Private service access is configured.</p>
<p class="center"><a href="/resources/2021/02/network_peering_edited.png" class="drop-shadow"><img src="/resources/2021/02/network_peering_edited.png" alt="" /></a></p>
<p>Import and export of custom routes on the <code class="language-plaintext highlighter-rouge">servicenetworking-googleapis-com</code> private connection is enabled.</p>
<h1 id="complete-peering-in-gcve">Complete Peering in GCVE</h1>
<p>The final step is to create the private connection in the VMware Engine portal. You will need the following information to configure the private connection.</p>
<ul>
<li>Project ID (found under <code class="language-plaintext highlighter-rouge">Project info</code> on the console dashboard.) <code class="language-plaintext highlighter-rouge">Project ID</code> may be different than <code class="language-plaintext highlighter-rouge">Project Name</code>, so verify you are gathering the correct information.</li>
<li>Project Number (also found under <code class="language-plaintext highlighter-rouge">Project info</code> on the console dashboard.)</li>
<li>Name of the VPC (<code class="language-plaintext highlighter-rouge">network_name</code> in your <code class="language-plaintext highlighter-rouge">variables.tf</code> file.)</li>
<li>Peered project ID from VPC Network Peering screen</li>
</ul>
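<p>The first two values can also be pulled from the CLI rather than the console dashboard. A sketch, substituting your own project ID:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Project ID of the currently active configuration
gcloud config get-value project

# Project number for a given project ID
gcloud projects describe [Project ID] --format="value(projectNumber)"
</code></pre></div></div>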
<p>Save all of these values somewhere handy, and follow these steps to complete peering:</p>
<p class="center"><a href="/resources/2021/02/15b_add_private_connection_edited.png" class="drop-shadow"><img src="/resources/2021/02/15b_add_private_connection_edited.png" alt="" /></a></p>
<ol>
<li>Open the VMware Engine portal, and browse to <code class="language-plaintext highlighter-rouge">Network > Private connection</code>.</li>
<li>Click <code class="language-plaintext highlighter-rouge">Add network connection</code> and paste the required values. Supply the peered project ID in the <code class="language-plaintext highlighter-rouge">Tenant project ID</code> field, VPC name in the <code class="language-plaintext highlighter-rouge">Peer VPC ID</code> field, and complete the remaining fields.</li>
<li>Choose the region your VMware Engine private cloud is deployed in, and click <code class="language-plaintext highlighter-rouge">submit</code>.</li>
</ol>
<p class="center"><a href="/resources/2021/02/16_add_private_connection_edited.png" class="drop-shadow"><img src="/resources/2021/02/16_add_private_connection_edited.png" alt="" /></a></p>
<p>After a few moments, <code class="language-plaintext highlighter-rouge">Region Status</code> should show a status of <code class="language-plaintext highlighter-rouge">Connected</code>. Your VMware Engine private cloud is now peered with your Google Cloud VPC. You can verify peering is working by checking the routing table of your VPC.</p>
<h1 id="verify-vpc-routing-table">Verify VPC Routing Table</h1>
<p>Once peering is completed, you should see routes for networks in your GCVE SDDC in your VPC routing table. You can view these routes in the cloud console or with:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> gcloud couple networks peerings list-routes service-networking-googleapis-com –network=[VPC Name] –region=[Region name] –direction=incoming
</code></pre></div></div>
<p class="center"><a href="/resources/2021/02/19_gcloud_routes_output.png" class="drop-shadow"><img src="/resources/2021/02/19_gcloud_routes_output.png" alt="" /></a>
Verifying routes with the gcloud CLI</p>
<p class="center"><a href="/resources/2021/02/17_peering_imported_routes_edited.png" class="drop-shadow"><img src="/resources/2021/02/17_peering_imported_routes_edited.png" alt="" /></a>
Viewing routes in the console</p>
<h1 id="wrap-up">Wrap Up</h1>
<p>Well, that was fun! You should now have established connectivity between your VMware Engine SDDC and your Google Cloud VPC, but we are only getting started. My next post will cover creating a bastion host in GCP to manage your GCVE environment, and I may take a look at Cloud DNS as well.</p>
<p>This post comes at a good time, as Google has just announced <a href="https://cloud.google.com/blog/products/compute/whats-new-in-google-cloud-vmware-engine-in-february-2021">several enhancements to GCVE</a>, including multiple VPC peering. I’m planning on exploring these enhancements in future posts.</p>
<h1 id="terraform-documentation-links">Terraform Documentation Links</h1>
<ul>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/provider_reference">Google Provider Configuration Reference</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_network">google_compute_network Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_subnetwork">google_compute_subnetwork Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_global_address">google_compute_global_address Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/service_networking_connection">google_service_networking_connection Resource</a></li>
<li><a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_network_peering_routes_config">google_compute_network_peering_routes_config Resource</a></li>
</ul>Matt ElliottThis post will show the process of connecting a VPC to your GCVE environment, and we will use Terraform to do the vast majority of the work.Intro to Google Cloud VMware Engine - Deploying a GCVE SDDC with HCX2021-02-04T00:00:00+00:002021-02-04T00:00:00+00:00https://networkbrouhaha.com/2021/02/gcve-sddc-with-hcx<p>Welcome to the first post in a new series focusing on <a href="https://cloud.google.com/vmware-engine">Google Cloud VMware Engine</a> (GCVE)! This first post will walk through prerequisites, deploying an SDDC with VMware HCX, and accessing vCenter via VPN Gateway (i.e., OpenVPN).</p>
<p>Before we dive into deploying an SDDC, I want to set expectations for this blog series. My goal when working in the cloud is to create, modify and destroy resources programmatically. My tool of choice is <a href="https://www.terraform.io/">Terraform</a>, but I will also use CLI-based tools like <a href="https://cloud.google.com/sdk/gcloud">gcloud</a>. Occasionally I will inspect API calls directly and perform API calls with Python or <a href="https://github.com/curl/curl">cURL</a>. I have found that learning a product’s API is an excellent way to master it. Cloud consoles (GUIs) are adequate when getting started, but interfacing with the API, whether through Terraform or an SDK, is how these platforms are designed to work.</p>
<p>This first post will be different from the others because the GCVE API documentation is not yet public, nor is there any Terraform functionality available to create or destroy GCVE resources. API documentation and Terraform for GCVE are coming, so when they are available, I will certainly blog about it! For now, I will walk through the GCVE GUI to detail SDDC and VPN gateway creation. Have no fear – there will be plenty of Terraform in future posts.</p>
<p><strong>Other posts in this series:</strong></p>
<ul>
<li><a href="/2021/02/gcp-vpc-to-gcve/">Connecting a VPC to GCVE</a></li>
<li><a href="/2021/03/gcve-bastion/">Bastion Host Access with IAP</a></li>
<li><a href="/2021/03/gcve-network-overview/">Network and Connectivity Overview</a></li>
<li><a href="/2021/04/gcve-hcx-config/">HCX Configuration</a></li>
<li><a href="/2021/05/gcve-networking-scenarios/">Common Networking Scenarios</a></li>
</ul>
<h1 id="prerequisites-for-creating-a-gcve-sddc">Prerequisites for Creating a GCVE SDDC</h1>
<p>If you’ve read any of my previous blog posts on cloud networking, you will already know that the most important thing to do before deploying anything into the cloud is rigorous planning. Deploying an SDDC in GCVE is no different. You will need to designate several <em>unique</em> IP ranges to be used for SDDC infrastructure and workloads, ensure the proper firewall ports are allowed to manage your SDDC, and prepare your GCP environment before deploying an SDDC. All of these prerequisites are detailed in the <a href="https://cloud.google.com/vmware-engine/docs/quickstart-prerequisites">GCVE prerequisites documentation</a>, which I highly recommend reading. Google’s documentation is thorough, and there is nothing better than reading through all of the docs if you want to understand how this solution works. Here is an overview of the required steps:</p>
<ul>
<li>Plan the IP ranges you will use with GCVE. These are all <a href="https://en.wikipedia.org/wiki/Private_network">RFC 1918 private addresses</a>. You will need ranges for each of the following:
<ul>
<li><strong>vSphere and vSAN</strong> (/21 - /24 accepted). Depending on the size of the range you choose, it will be divided into additional subnets for management, vMotion, vSAN, and NSX. Details on the layout for these subnets are available <a href="https://cloud.google.com/vmware-engine/docs/concepts-vlans-subnets#management_network_cidr_range_breakdown">here</a>.</li>
<li><strong>HCX</strong> (/27 or higher)</li>
<li><strong>Edge Services</strong>, required for client VPN and internet access (/26)</li>
<li><strong>Client subnet</strong>, assigned to clients connecting via VPN Gateway (/24)</li>
<li><strong>Workload subnets</strong>, which will be configured in NSX-T after your SDDC is deployed. These are entirely up to you to determine, but my advice is to reserve plenty of IPs to use.</li>
</ul>
</li>
<li>Ensure your local firewall is configured for communication with vCenter and workload VMs. Ports used for communication are documented in the <a href="https://cloud.google.com/vmware-engine/docs/quickstart-prerequisites#firewall-port-requirements">prerequisites</a>.</li>
<li>Enable the VMware Engine API in your Google Cloud Project</li>
<li>Enable the VMware Engine <a href="https://cloud.google.com/vmware-engine/quotas">node quota</a></li>
</ul>
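<p>To make the planning step concrete, here is a hypothetical, non-overlapping IP plan expressed as Terraform variables. The names and values below are illustrative examples only — substitute ranges that do not conflict with anything else in your environment:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>variable "management_cidr" {
  description = "vSphere/vSAN management range (/21 - /24)"
  default     = "192.168.0.0/22"
}

variable "hcx_cidr" {
  description = "HCX deployment range (/27 or higher)"
  default     = "192.168.4.0/27"
}

variable "edge_services_cidr" {
  description = "Edge Services range for client VPN and internet access (/26)"
  default     = "192.168.5.0/26"
}

variable "client_subnet" {
  description = "Client subnet for VPN Gateway users (/24)"
  default     = "192.168.6.0/24"
}
</code></pre></div></div>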
<p>Once these are completed, you are ready to create your SDDC!</p>
<h1 id="creating-a-gcve-sddc">Creating a GCVE SDDC</h1>
<p>To create a GCVE SDDC, browse to <code class="language-plaintext highlighter-rouge">Compute > VMware Engine</code> in the GCP Console. This will bring you to the GCVE homepage.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/01_create_private_cloud_edited.png" alt="" class="drop-shadow" /></p>
<p>Click <code class="language-plaintext highlighter-rouge">Create a Private Cloud</code> to get started.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/02_create_private_cloud.png" alt="" class="drop-shadow" /></p>
<p>Specify your cloud name, location, node count, and predetermined network ranges. If you cannot choose your desired region, ensure you have requested VMware Engine nodes quota for that region. Your quota will also determine how many nodes you can request. The minimum node count is three nodes. After clicking <code class="language-plaintext highlighter-rouge">Review and Create</code>, you will be shown a confirmation page. Review your choices and click <code class="language-plaintext highlighter-rouge">Create</code>.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/04_create_private_cloud_edited.png" alt="" class="drop-shadow" /></p>
<p>You will be taken to a summary page for your new cluster once provisioning begins. Note that the state is <code class="language-plaintext highlighter-rouge">Provisioning</code> in the screenshot above, and it will take between 30 minutes and 2 hours to complete. My experience has been that it takes just over 30 minutes to provision an SDDC, which is pretty impressive. You can click on the <code class="language-plaintext highlighter-rouge">Activity</code> tab to view recent events, tasks, and alerts. Drilling into those will provide specifics on any activity in your SDDC, including the provisioning process.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/05_gcve_cluster_info_edited.png" alt="" class="drop-shadow" /></p>
<h1 id="setting-up-the-gcve-vpn-gateway">Setting Up the GCVE VPN Gateway</h1>
<p>There are several ways to access your GCVE environment, including Cloud Interconnect and Cloud VPN. I will explore these topics in future posts. To establish initial connectivity to GCVE, a <a href="https://cloud.google.com/vmware-engine/docs/networking/howto-vpn-configure">VPN gateway</a> can be used. This is an OpenVPN-based client VPN that will allow you to connect to your SDDC’s vCenter and perform any initial configuration that you desire.</p>
<p>Before the VPN gateway can be deployed, you will need to configure the “Edge Services” range for the region where your SDDC is deployed. To do this, browse to <code class="language-plaintext highlighter-rouge">Network > Regional</code> settings in the GCVE portal, and click <code class="language-plaintext highlighter-rouge">Add Region</code>.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/06_region_edge_services.png" alt="" class="drop-shadow" /></p>
<p>Choose the region where your SDDC is deployed and enable <code class="language-plaintext highlighter-rouge">Internet Access</code> and <code class="language-plaintext highlighter-rouge">Public IP Service</code>. Supply the Edge Services range you earmarked during planning and click <code class="language-plaintext highlighter-rouge">Submit</code>. Enabling these services will take 10-15 minutes. Once complete, they will show as <code class="language-plaintext highlighter-rouge">Enabled</code> on the Regional Settings page. Enabling these settings will allow Public IPs to be allocated to your SDDC, which is a requirement for deploying a VPN Gateway. To begin the deployment, browse to <code class="language-plaintext highlighter-rouge">Network > VPN Gateways</code> and click <code class="language-plaintext highlighter-rouge">Create New VPN Gateway</code>.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/08_create_vpn_gw.png" alt="" class="drop-shadow" /></p>
<p>Supply the name for the VPN gateway and the client subnet reserved during planning and click <code class="language-plaintext highlighter-rouge">Next</code>.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/09_create_vpn_gw_edited.png" alt="" class="drop-shadow" /></p>
<p>Choose specific users to grant VPN access, or enable <code class="language-plaintext highlighter-rouge">Automatically add all users</code>, and click <code class="language-plaintext highlighter-rouge">Next</code>.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/10_create_vpn_gw.png" alt="" class="drop-shadow" /></p>
<p>Next, specify which networks to make accessible over VPN. I opted to add all subnets automatically. Click <code class="language-plaintext highlighter-rouge">Next</code>, and a summary screen will be displayed. Verify your choices and click <code class="language-plaintext highlighter-rouge">Submit</code> to create the VPN Gateway.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/12_create_vpn_gw_edited.png" alt="" class="drop-shadow" /></p>
<p>You will be returned to the VPN Gateways page, and the new VPN gateway will have a status of <code class="language-plaintext highlighter-rouge">Creating</code>. Once the status shows as <code class="language-plaintext highlighter-rouge">Operational</code>, click on the new VPN gateway.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/13_create_vpn_gw_edited.png" alt="" class="drop-shadow" /></p>
<p>Click <code class="language-plaintext highlighter-rouge">Download my VPN configuration</code> to download a ZIP file containing pre-configured OpenVPN profiles for the VPN gateway. Profiles for connecting via UDP/1194 and TCP/443 are available. Choose whichever is your preference and import it into OpenVPN, then connect. In the GCVE portal, browse to <code class="language-plaintext highlighter-rouge">Resources</code> and click on your SDDC.</p>
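<p>If you prefer a command-line client over the OpenVPN GUI, the downloaded profile works there too. A sketch — the exact filenames inside the ZIP will differ:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>unzip vpn-configuration.zip
sudo openvpn --config [profile-name].ovpn
</code></pre></div></div>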
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/13_launch_vsphere_edited.png" alt="" class="drop-shadow" /></p>
<p>Finally, you can click <code class="language-plaintext highlighter-rouge">Launch vSphere Client</code>. Log in with username <code class="language-plaintext highlighter-rouge">cloudowner@gve.local</code> and password <code class="language-plaintext highlighter-rouge">VMwareEngine123!</code>. Huzzah! You are now free to explore your newly created SDDC in GCVE. Your first task should be updating the password for the <code class="language-plaintext highlighter-rouge">cloudowner@gve.local</code> account.</p>
<p class="center"><img src="https://networkbrouhaha.com/resources/2021/02/14_launch_vsphere_edited.png" alt="" class="drop-shadow" /></p>
<h1 id="wrap-up">Wrap Up</h1>
<p>As you can see, deploying an SDDC in GCVE is easier than setting up client VPN access. Now, a standalone SDDC is cool, but in the next post we will look at connecting it to a VPC. This will be almost entirely automated with Terraform, apart from a tiny bit of work that needs to be done in the GCVE portal. Later posts will cover creating a bastion host, connecting with Cloud VPN and Cloud Interconnect, configuring HCX for workload migration, and all sorts of other use cases. Are you using GCVE? If so, please reach out to me on Twitter (<a href="https://www.twitter.com/networkbrouhaha">@NetworkBrouhaha</a>) and let me know what topics you’d like to see covered.</p>Matt ElliottWelcome to the first post in a new series focusing on Google Cloud VMware Engine (GCVE)! This first post will walk through prerequisites, deploying an SDDC with VMware HCX, and accessing vCenter via VPN Gateway.