Horizon on Google Cloud VMware Engine Architecture

This chapter is one of a series that make up the VMware Workspace ONE and VMware Horizon Reference Architecture, a framework that provides guidance on the architecture, design considerations, and deployment of VMware Workspace ONE® and VMware Horizon® solutions. This chapter provides information about architecting VMware Horizon on Google Cloud VMware® Engine.

Introduction

VMware Horizon for Google Cloud VMware Engine (GCVE) delivers a seamlessly integrated hybrid cloud for virtual desktops and applications. It combines the enterprise capabilities of the VMware Software-Defined Data Center (SDDC), delivered as a service on Google Cloud Platform (GCP), with the market-leading capabilities of VMware Horizon for a simple, secure, and scalable solution. You can easily address use cases such as on-demand capacity, disaster recovery, and cloud co-location without buying additional data center resources.

For customers who are already familiar with Horizon or have Horizon deployed on-premises, deploying Horizon on Google Cloud VMware Engine lets you leverage a unified architecture and familiar tools. This means that you use the same expertise you know from VMware vSphere and Horizon for operational consistency and leverage the same rich feature set and flexibility you expect. By outsourcing the management of the vSphere platform to VMware, you can simplify management of Horizon deployments. For more information about Horizon for Google Cloud VMware Engine, visit Google Cloud VMware Engine.

Figure 1: Horizon on Google Cloud VMware Engine

Figure 2: Horizon on Google Cloud VMware Engine

The purpose of this design chapter is to provide a set of best practices on how to design and deploy Horizon on Google Cloud VMware Engine. This guide is designed to be used in conjunction with Horizon documentation, Google Cloud VMware Engine documentation, and Google Cloud Platform documentation. There is also a related chapter of the Reference Architecture called Horizon on GCVE Configuration that should be reviewed.

It is highly recommended that you review the design concepts covered in the Horizon Architecture chapter. This chapter builds on the Horizon Architecture chapter and only covers specific information for Horizon on Google Cloud VMware Engine.

Table 1: Horizon on Google Cloud VMware Engine Strategy

Decision

A Horizon deployment was designed and deployed on GCVE.

The environment was designed to be capable of scaling to 1,500 concurrent user connections per SDDC. This per-SDDC number applies regardless of the architecture chosen.

Justification

This strategy allowed the design, deployment, and integration to be validated and documented.

 

Deployment Options

There are two possible deployment options for Horizon on Google Cloud VMware Engine (GCVE):

  • All-in-SDDC Architecture - A consolidated design with all the Horizon components located inside each SDDC. This design scales to approximately 1,500 concurrent users per SDDC, with a maximum of two SDDCs (approximately 3,000 users).
  • Federated Architecture – A design where the Horizon management components are located in Google Compute Engine and the Horizon resources (desktops and RDS hosts for published applications) are located in the GCVE SDDCs. This design is still limited to 1,500 concurrent users per SDDC but supports multiple SDDCs. Because this design places the UAG devices in GCP, in front of the NSX edge appliances in the SDDC, standard Horizon limits apply to the size of the architecture. It is scaled and architected the same way as on-premises Horizon. See the VMware Configuration Maximums Tool for details.

All-in-SDDC Architecture

In the SDDC-based deployment model, all components, including management, are located inside the Google Cloud VMware Engine Private Cloud. Google Cloud VMware Engine allows you to create vSphere Software-Defined Data Centers (SDDCs) on Google Cloud Platform (GCP). These SDDCs include VMware vCenter Server for VM management, VMware vSAN for storage, and VMware NSX for networking. For reference, the SDDC construct is the same as the Google term Private Cloud. We use the term SDDC in this document. Due to current limitations of the NSX-T Edge gateways and the placement of the Unified Access Gateways, the SDDC-based model is only recommended for small-scale deployments (3,000 desktops or fewer).

The following figure shows the high-level logical architecture of this deployment model, with all the management components inside the Google Cloud VMware Engine SDDC. A customer-provided load balancer in Google Compute Engine forwards Horizon protocol traffic into the SDDC.

Figure 3: Logical view of the All-In-SDDC Horizon on GCVE Architecture

In this design, because all management components are located inside the SDDC, display protocol hairpinning can occur. If global entitlements are used as part of Cloud Pod Architecture, or a single namespace is used, a user could be directed to either pod. If the user's resource is not in the SDDC that they are initially directed to for authentication, their Horizon protocol traffic would enter the initial SDDC to reach the Unified Access Gateway, exit back out via the NSX edge, and then be directed to where their desktop or published application is being delivered from. This hairpinning reduces the achievable scale. For this reason, Cloud Pod Architecture is not supported with the All-in-SDDC Architecture. In this design, each SDDC has a unique namespace (for example, desktops1.company.com), and users connect directly to this FQDN with the Horizon Client or use Workspace ONE Access.

Federated Architecture

In the federated deployment model, all Horizon management components are located in Google Compute Engine. The desktop / RDS capacity is located inside each SDDC. The Federated Architecture can be scaled to include multiple SDDCs to address large-scale deployment needs. The following figure shows the high-level logical architecture of this deployment model with all the management components in Google Compute Engine. In this design, the built-in Google load balancer can be used to load balance the UAG appliances, and no customer-provided load balancer is required.


Figure 4: Logical view of the Federated Architecture of Horizon on GCVE

Architectural Overview

This section covers the management, compute, and NSX-T components, SDDCs, and resource pools.

Components

The individual server components used for Horizon, whether deployed on Google Cloud VMware Engine (GCVE) or on-premises, are the same. They are also the same whether deployed in the Federated or All-in-SDDC Architecture. The main difference is where the components are placed. See the Components section in Horizon Architecture for details on the common server components.

The components and features that are specific to Horizon on GCVE are described in this section.

Software-Defined Data Centers (SDDC)

Google Cloud VMware Engine allows you to create vSphere Software-Defined Data Centers (SDDCs) on Google Cloud Platform (GCP). These SDDCs include VMware vCenter Server® for VM management, VMware vSAN™ for storage, and VMware NSX® for networking. For reference, the SDDC construct is the same as the Google term Private Cloud.

You can connect an on-premises SDDC to a cloud SDDC and manage both from a single VMware vSphere Web Client interface. Review the VMware Engine Prerequisites document for more details on requirements. You log in to https://console.cloud.google.com/ to access both your GCP and GCVE environments. For more information, see the Google Cloud VMware Engine documentation.

Important:  If you are deploying more than one SDDC in Google Cloud VMware Engine for use with Horizon, make sure to use unique CIDR ranges (IP address blocks) for the management components. By default, SDDCs use the same CIDR range and therefore have the same IP addresses. If you do not change this default, you will not be able to use multiple vCenter instances with Horizon. For more information, see the GCVE documentation on setting up a private cloud.
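As a quick illustration, planned management ranges can be checked for overlap with Python's standard ipaddress module before the SDDCs are created. The SDDC names and CIDR values below are hypothetical.

```python
import ipaddress

# Hypothetical SDDC names and planned management CIDR ranges -- substitute
# the values you intend to enter when creating each private cloud.
planned = {
    "sddc-us-west2": "192.168.40.0/23",
    "sddc-us-east4": "192.168.40.0/23",  # default left unchanged -- the mistake to avoid
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in planned.items()}
names = sorted(nets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if nets[a].overlaps(nets[b]):
            print(f"WARNING: {a} and {b} overlap on {nets[a]} -- "
                  f"Horizon cannot use both vCenter instances")
```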

After you have deployed an SDDC on Google Cloud VMware Engine, you can deploy Horizon in that cloud environment just like you would in an on-premises vSphere environment. This enables Horizon customers to outsource the management of the SDDC infrastructure. There is no requirement to purchase new hardware, and you can choose between two pricing options for VMware Engine nodes:

  • On-demand pricing - Prices are based on your hourly usage in a particular region.
  • Commitment-based pricing - Prices are discounted in exchange for committing to continuously use VMware Engine nodes in a particular region for a one- or three-year term.

Management Components

The management components for the SDDC include vCenter Server and NSX-T. These components are managed by Google, which handles their upgrades and maintenance at the infrastructure level. The vSphere and NSX-T management interfaces remain available to customers. See the GCVE documentation for details.

Compute Components

The compute component includes the following Horizon infrastructure components:

  • Horizon Connection Servers
  • Unified Access Gateway appliances
  • App Volumes Managers
  • Virtual Desktops
  • RDSH Hosts
  • Database Servers
    • App Volumes
    • Horizon Events Database
  • Horizon Cloud Connector Appliance

NSX-T Components

VMware NSX-T is the network virtualization platform for the Software-Defined Data Center (SDDC), delivering networking and security entirely in software, abstracted from the underlying physical infrastructure.

Important: The maximum number of ports per logical network is 1000, but you can create multiple pools using different logical networks.

  • Tier-0 router – Handles internet traffic and route- or policy-based IPsec VPN, and serves as an edge firewall for the Tier-1 Compute Gateway (CGW).
  • Tier-1 Compute Gateway (CGW) – Serves as a distributed firewall for all customer internal networks.
  • Tier-1 Management Gateway (MGW) – Serves as a firewall for the Google-maintained components, such as vCenter and NSX.
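To make the port limit noted above concrete, this minimal sketch estimates how many desktop segments a design needs; the per-segment headroom is an arbitrary assumption to cover the gateway, DHCP, and provisioning churn.

```python
import math

MAX_PORTS_PER_SEGMENT = 1000  # NSX-T logical network limit noted above

def segments_needed(desktops: int, headroom: int = 50) -> int:
    """Estimate NSX-T desktop segments, leaving headroom per segment
    for the gateway, DHCP, and instant-clone provisioning churn."""
    return math.ceil(desktops / (MAX_PORTS_PER_SEGMENT - headroom))

print(segments_needed(1500))  # -> 2 segments for a full 1,500-user SDDC
```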


Figure 5: NSX-T components in the All-in-SDDC Architecture per SDDC


Figure 6: NSX-T components in the Federated Architecture per SDDC

Resource Pools

A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.

Within a Horizon pod on Google Cloud VMware Engine, you can use vSphere resource pools to separate management components from virtual desktops or published applications workloads to make sure resources are allocated correctly.

After a Google Cloud VMware Engine SDDC is provisioned, two resource pools exist: 

  • A Workload Resource Pool - Described below
  • HCX Management - Not in the scope of this Reference Architecture

All-In-SDDC Architecture strategy for Resource Pools

In the All-in-SDDC architecture, both management and user resources are deployed in the same SDDC. It is recommended to create two sub-resource pools within the Workload Resource Pool for your Horizon deployments:

  • A Horizon Management Resource Pool for your Horizon management components, such as connection servers
  • A Horizon User Resource Pool for your desktop pools and published applications.


Figure 7: Resource Pools for All-in-SDDC Architecture of Horizon on Google Cloud VMware Engine

Federated Architecture strategy for Resource Pools

In the Federated Architecture, only the desktop and RDS resources are located inside the GCVE SDDC. There is no need to create sub-resource pools; the Workload Resource Pool can be used.

Figure 8: Resource Pools for the Federated Architecture of Horizon on Google Cloud VMware Engine

Scalability and Availability

This section covers the concepts of pods and blocks, Cloud Pod architecture, Horizon Universal Broker, Workspace ONE Access, and Sizing Horizon on Google Cloud VMware Engine.

Pod and Block

A key concept of Horizon, whether deployed on Google Cloud VMware Engine or on-premises, is the use of blocks and pods. See the Pod and Block section in Horizon Architecture.

Cloud Pod Architecture

Cloud Pod Architecture (CPA) is a standard Horizon feature that allows you to connect your Horizon deployment across multiple pods and sites for federated management. It can be used to scale up your deployment, to build hybrid cloud, and to provide redundancy for business continuity and disaster recovery. CPA introduces the concept of a global entitlement (GE) that spans the federation of multiple Horizon pods and sites. Any users or user groups belonging to the global entitlement are entitled to access virtual desktops and RDS published apps on multiple Horizon pods that are part of the CPA.

Important: CPA is not a stretched deployment; each Horizon pod is distinct and all Connection Servers belonging to each of the individual pods are required to be located in a single location and run on the same broadcast domain from a network perspective.

Here is a logical overview of a basic two-site, two-pod CPA implementation. For Google Cloud VMware Engine, Site 1 and Site 2 might be different GCP regions, or Site 1 might be on-premises and Site 2 on Google Cloud VMware Engine.

Figure 9: Cloud Pod Architecture

For the full documentation on how to set up and configure CPA, refer to Administering View Cloud Pod Architecture in the Horizon documentation and the Cloud Pod Architecture section in Horizon Configuration.

In the All-in-SDDC Architecture, because all management components are located inside the SDDC, protocol hairpinning can occur (shown in Figure 10). With a global entitlement, users could be directed to either pod. If a user's resource is not in the SDDC they are directed to, their traffic is routed back out via the NSX edge and on to the other site. This hairpinning reduces the achievable scale. For this reason, Cloud Pod Architecture is not supported with the All-in-SDDC Architecture.


Figure 10: Horizon Protocol Traffic Hairpinning with All-in-SDDC design (UAGs inside of SDDC)

Using CPA to Build Hybrid Cloud and Scale for Horizon

You can deploy Horizon in a hybrid cloud environment when you use CPA to interconnect on-premises Horizon pods and Horizon pods on GCVE with the Federated Architecture. You can easily entitle your users to virtual desktops and RDS published apps on-premises, on Horizon on GCVE, or both. You can configure it such that users connect to whichever site is closest to them geographically as they roam.

You can also stretch CPA across Horizon pods in two or more Google Cloud VMware Engine SDDCs, with the same flexibility to entitle your users to one or multiple pods as desired. Of course, use of CPA is optional, and it is supported only with the Federated Architecture. There is still a risk of Horizon protocol hairpinning if Horizon pods are located behind different UAG appliances. An example would be a federation created with one pod on-premises and one pod in GCVE. With a global entitlement, a user may be directed to the GCVE environment; if the user's resource is not located in that pod, the session goes back out of the pod and the user is directed to the other pod located on-premises. This flow is illustrated below.

Figure 11: Horizon Protocol Traffic Hairpinning with Federated Design

Horizon Universal Broker

The Horizon Universal Broker is the cloud-based brokering technology used to manage and allocate virtual resources from multi-cloud assignments to end users. It allows users to access assignments by connecting to a fully qualified domain name (FQDN), which is defined in the Horizon Universal Broker configuration settings.

Currently the Universal Broker is not supported or available with Horizon on Google Cloud VMware Engine.

Table 2: Universal Broker Strategy

Decision

Universal Broker was not used.

Justification

Universal Broker is not currently supported with Horizon on GCVE.

Workspace ONE Access

Workspace ONE® Access can be used to broker Horizon resources. In this case, the entitlements and resources are synced to Workspace ONE Access, and Workspace ONE Access knows the FQDN of each pod so it can properly direct users to their desktop or application resources. When a user logs in to Workspace ONE Access, they are presented with icons for each desktop and application they are entitled to. They do not need to worry about FQDNs; they just click the resource, and they are connected. In the Federated Architecture, Cloud Pod federations can also be added to Workspace ONE Access.

For more design details, see Horizon and Workspace ONE Access Integration in the Platform Integration chapter.

Figure 12: Syncing Horizon resources into Workspace ONE Access



Figure 13: Brokering into Horizon resources using Workspace ONE Access with All-in-SDDC Architecture


Figure 14: Brokering into Horizon resources using Workspace ONE Access with the Federated Architecture

Sizing Horizon on Google Cloud VMware Engine

Similar to deploying Horizon on-premises, you must size your requirements for deploying Horizon on Google Cloud VMware Engine to determine the number of hosts you will need to deploy. Hosts are needed for the following purposes:

  • Your virtual desktop or RDS workloads.
  • Your Horizon infrastructure components, such as connection servers, Unified Access Gateways, and VMware App Volumes™ Managers.
  • SDDC infrastructure components on Google Cloud VMware Engine. These components are deployed and managed automatically for you by Google.

The methodology for sizing Horizon on Google Cloud VMware Engine is the same as for on-premises deployments. What is different (and simpler) is the fixed hardware configuration on Google Cloud VMware Engine. Work with your Google account team to determine the correct sizing and the latest guidance on scaling GCVE. Refer to the VMware Configuration Maximums Tool for details on sizing the Horizon components.

At the time of writing, the minimum number of hosts required per SDDC on Google Cloud VMware Engine for production use is 3 nodes (hosts). For testing purposes, a 1-node SDDC is also available. However, because a single node does not support HA, we do not recommend it for production use. Horizon can be deployed on a single-node or multi-node SDDC. If you are deploying on a single-node SDDC, be sure to change the vSAN FTT (failures to tolerate) policy setting from 1 (the default) to 0.
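As a first-pass illustration of the sizing arithmetic, the sketch below derives a host count from per-desktop allocations. The per-node capacity figures and overcommit ratio are placeholder assumptions, not current VMware Engine specifications; confirm both with your Google account team.

```python
import math

NODE_VCPU = 72       # placeholder per-node vCPU capacity -- verify current node specs
NODE_RAM_GB = 768    # placeholder per-node RAM -- verify current node specs

def hosts_needed(desktops: int, vcpus: float, ram_gb: float,
                 cpu_overcommit: float = 4.0, ha_spares: int = 1) -> int:
    """First-pass host count: the larger of the CPU- and RAM-driven counts,
    plus HA spares. RAM is treated as fully reserved (no overcommit), per the
    memory reservation guidance later in this chapter."""
    by_cpu = math.ceil(desktops * vcpus / (NODE_VCPU * cpu_overcommit))
    by_ram = math.ceil(desktops * ram_gb / NODE_RAM_GB)
    return max(by_cpu, by_ram, 3 - ha_spares) + ha_spares  # 3-node production minimum

print(hosts_needed(1500, vcpus=2, ram_gb=4))  # -> 12 hosts for this example
```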

Network Configuration

This section covers the network configuration for deploying Horizon on Google Cloud VMware Engine.

All-In-SDDC Architecture Networking

When the All-in-SDDC Architecture is used, all the networking for the Horizon management components is done within NSX-T inside the GCVE SDDC. However, in this design, there is no way to get the Horizon protocol traffic from GCP to the UAG devices inside GCVE without a third-party load balancer located in GCP to forward that traffic. This configuration is detailed in the Horizon on Google Cloud VMware Engine Configuration chapter.

The recommended network architecture consists of a double DMZ and a separation between the Unified Access Gateway appliances and the RDS and VDI virtual machines located inside the SDDC. You must create segments in NSX-T to use within your SDDC. You will need the segments listed below (a sketch of creating such a segment programmatically follows the list). Review the NSX-T documentation for details.

  • NSX-T Desktop / RDS Segments with DHCP Enabled
  • NSX-T Management Segment (DHCP Optional)
    • Connection Servers
    • Unified Access Gateway - Deployed as two-NIC
    • App Volumes Managers
    • File Server
    • Database Server
    • Workspace ONE Connector
    • Horizon Cloud Connector
  • NSX-T Segment for Unified Access Gateway Internal DMZ
    • NIC 2 on each Unified Access Gateway
  • NSX-T Segment for Unified Access Gateway External DMZ
    • NIC 1 on each Unified Access Gateway
  • Subnet in the VPC network in GCP for the third-party load balancer to forward Horizon Protocol Traffic.
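For illustration, creating one of the desktop segments above can be automated against the NSX-T Policy API, as in the hedged sketch below. The NSX Manager FQDN, credentials, Tier-1 gateway path, and addressing are hypothetical; verify the paths exposed in your own SDDC first.

```python
import requests

NSX_MANAGER = "nsx-01.example.gve.goog"  # hypothetical NSX-T Manager FQDN
AUTH = ("admin", "change-me")            # use real credential handling in practice

segment = {
    "display_name": "horizon-desktops-01",
    # Tier-1 Compute Gateway path -- confirm the actual path in your SDDC
    "connectivity_path": "/infra/tier-1s/cgw",
    "subnets": [{
        "gateway_address": "10.10.20.1/24",
        "dhcp_ranges": ["10.10.20.10-10.10.20.250"],  # DHCP-enabled desktop range
    }],
}

resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/horizon-desktops-01",
    json=segment,
    auth=AUTH,
)
resp.raise_for_status()
print("Segment created or updated:", resp.status_code)
```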

The following diagram illustrates this networking configuration.


Figure 15: Network Diagram with All-In-SDDC Architecture (Subnets are for Illustrative Purposes Only)

Federated Architecture Networking

In the Federated Architecture, all of the Horizon management components, including the Unified Access Gateway appliances, are located inside of GCP. This allows the use of multiple GCVE SDDCs as targets for desktop / RDS host capacity. In the Federated Architecture, you will need to create a subnet for the Horizon management components in a VPC network. You will also need to create new VPC networks for the internal and external DMZ networks to use with UAG. See Horizon on Google Cloud VMware Engine Configuration for details.

  • NSX-T - Desktop / RDS Segments with DHCP Enabled
  • Subnet in Google Cloud Platform VPC Network for Horizon Management Components
    • Connection Servers
    • Unified Access Gateway - Deployed as two-NIC
    • App Volumes Managers
    • File Server
    • Database Server
    • Workspace ONE Connector
    • Horizon Cloud Connector
  • New VPC Network for Internal DMZ
  • New VPC Network for External DMZ

The following diagram illustrates the network design of the Federated Architecture.


Figure 16: Network Diagram with Federated Architecture (Subnets are for Illustrative Purposes Only)

Load Balancing Unified Access Gateway Appliances

To ensure redundancy and scale, multiple Unified Access Gateways are deployed. To implement them in a highly available configuration and provide a single namespace, a load balancer, such as the Google Load Balancer or the NSX Advanced Load Balancer (Avi), can be used. A minimum of two Unified Access Gateways should be deployed in a two-NIC (double DMZ) configuration. See Two-NIC Deployment in the Unified Access Gateway Architecture chapter for details. To provide direct external access, a public IP address would be configured to forward traffic to the load balancer virtual IP address. In the All-in-SDDC Architecture, a third-party load balancer is required to forward Horizon protocol traffic to the Unified Access Gateways located inside the SDDC.
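Sizing the appliance count is simple arithmetic: divide the expected concurrency by the per-appliance ceiling used in this guide (2,000 sessions for a standard UAG appliance, and the same figure used for Connection Servers in the design decisions below), then add one for redundancy. A minimal sketch:

```python
import math

SESSIONS_PER_APPLIANCE = 2000  # ceiling per standard UAG / Connection Server in this guide

def appliance_count(concurrent_users: int) -> int:
    """Appliances to deploy: enough for the load, plus one for n+1 redundancy."""
    return math.ceil(concurrent_users / SESSIONS_PER_APPLIANCE) + 1

print(appliance_count(1500))  # -> 2 (matches the All-in-SDDC design decisions)
print(appliance_count(4000))  # -> 3 (matches the Federated design decisions)
```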

Unified Access Gateway Deployment in the All-in-SDDC Architecture

In the All-in-SDDC Architecture, the UAG appliances are located inside of the GCVE SDDC. There is no way to forward Horizon protocol traffic from the Google Load Balancer into the UAG devices inside of the SDDC. In this case, you will need a third-party load balancer located in GCP. The Google Load Balancer would have forwarding rules to send TCP and UDP to the third-party load balancer, which would be configured to load-balance the UAG appliances inside of GCVE. See the Horizon on GCVE Configuration document for information on configuring protocol traffic forwarding. See Load Balancing Unified Access Gateway in Horizon Architecture for details on load-balancing the UAG appliances.

In this configuration, an internal and external DMZ segment should be created in GCVE with the following firewall rules:

  • Internal DMZ – Access to segments containing desktops, connection servers, and DNS
  • External DMZ – Access to the subnet in GCP containing the third-party load balancer used for forwarding Horizon protocol traffic

Static routes might be needed on the Unified Access Gateway as follows:

  • Internal DMZ (NIC 2) – Static route to segments containing desktops and the connection servers
  • External DMZ (NIC 1) – Static route to segments in GCP containing the third-party load balancer used for forwarding Horizon protocol traffic

Table 3: Load Balancing Strategy for All-in-SDDC Architecture

Decision

A third-party load balancer was used to load balance the Unified Access Gateways.

The NSX Advanced Load Balancer (Avi) was used.

Justification

A load balancer enables the deployment of multiple management components, such as Unified Access Gateways, to allow scale and redundancy.

A third-party load balancer is required to route protocol traffic into the SDDC from GCP.

The NSX Advanced Load Balancer was used because it integrates with Google Cloud Platform for orchestration.

Unified Access Gateway Deployment in the Federated Architecture

In this architecture, the UAG appliances are located directly in GCP, which eliminates the need for a third-party load balancer to forward protocol traffic into the SDDC. It also prevents protocol hairpinning when using Cloud Pod Architecture inside of GCP. As of Unified Access Gateway 2103, running the UAG appliances directly inside of GCP is supported. See Deploying Unified Access Gateway on Google Cloud for details on deploying UAG into GCP.

In this configuration, Internal and External DMZ VPC networks should be created in GCP with the following VPC Peering and firewall rules:

  • Internal DMZ – Access to segments (VPC Peering) containing desktops, connection servers, and DNS
  • External DMZ – Access to third party load balancer (if used). If using the built-in Google load balancer, this VPC network needs no peering configured.

Static routes might be needed on the Unified Access Gateway as follows:

  • Internal DMZ (NIC 2) – Static route to segments containing desktops and the connection servers

See Change Network Settings for details on setting static routes in Unified Access Gateway. See the Horizon on GCVE Configuration documents for details on setting up the VPC networks for UAG and on configuring the Google Load Balancer for use with UAG.

Table 4: Load Balancing Strategy for Federated Architecture

Decision

A third-party load balancer was used to load balance the Unified Access Gateways.

The Google Cloud Platform Load Balancer was used.

Justification

A load balancer enables the deployment of multiple management components, such as Unified Access Gateways, to allow scale and redundancy.

The Google Cloud Platform load balancer can load balance the Unified Access Gateway appliances in GCP.

Load Balancing Connection Servers

Multiple Connection Servers are deployed for scale and redundancy. Depending on the user connectivity, you may or may not need to deploy a third-party load balancer to provide a single namespace for the Connection Servers. When all user connections originate externally and come through a Unified Access Gateway, it is not necessary to have a load balancer between the Unified Access Gateways and the Connection Servers. Each Unified Access Gateway can be defined with a specific Connection server as its destination.


Figure 17: Load Balancing when all Connections Originate Externally

Where users will be connecting from internally routed networks and their session will not go via a Unified Access Gateway, a load balancer should be used to present a single namespace for the Connection Servers. A load balancer such as the Google Load Balancer or NSX Advanced Load Balancer (Avi), should be deployed. See the Load Balancing Connection Servers section of the Horizon Architecture chapter.

When required for internally routed connections, a load balancer for the Connection Servers can be either:

  • Placed in between the Unified Access Gateways and the Connection Server and used as an FQDN target for both internal users and the Unified Access Gateways.
  • Located so that only the internal users use it as an FQDN.


Figure 18: Options for Load Balancing Connection Servers for Internal Connections

A load balancer such as the NSX Advanced Load Balancer or the Google Cloud Platform Load Balancer must be deployed to allow multiple Unified Access Gateway appliances and Connection Servers to be implemented in a highly available configuration. An example of internal users in this use case would be users connecting through Cloud Interconnect, via a shared VPC, or via VPN with split DNS. The same namespace (desktops.company.com) would then resolve externally to the public IP address in GCP and internally to the internal load balancer, as illustrated in Figure 18.

Table 5: Load Balancing Connection Servers Strategy

Decision

Connection Servers were not load balanced.

Justification

All the users will connect to the environment externally.

External Networking

When direct external access is required, configure a public IP address in GCP that will be bound to a public DNS entry; for example, desktops.company.com. In the All-in-SDDC Architecture, this IP would use GCP forwarding rules to send TCP and UDP traffic to the third-party load balancer, which would load balance the UAG appliances located inside each SDDC. In the Federated Architecture, the external IP address would be assigned to a GCP load balancer front end for TCP and UDP, pointing to the UAG appliances in GCP as a back-end resource. See Horizon on GCVE Configuration for details on how to configure this.
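For reference, the sketch below enumerates the standard Horizon protocol ports those forwarding rules must carry to the Unified Access Gateways. Treat it as a starting checklist that assumes default port usage; confirm against the Unified Access Gateway documentation for your version.

```python
# Standard Horizon ports that external forwarding rules must deliver to the
# UAG front end. Confirm against the UAG documentation for your version.
HORIZON_PORTS = [
    ("TCP", 443,  "Authentication, tunnel, HTML Access"),
    ("UDP", 443,  "Optional UDP tunnel"),
    ("TCP", 8443, "Blast Extreme"),
    ("UDP", 8443, "Blast Extreme (preferred transport)"),
    ("TCP", 4172, "PCoIP"),
    ("UDP", 4172, "PCoIP"),
]

for proto, port, purpose in HORIZON_PORTS:
    print(f"forward {proto}/{port}: {purpose}")
```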

For external management access or access to external resources, create a VPN or a direct connection within GCP; you do this in the Networking | Hybrid Connectivity section of the GCP console. See the Google Cloud how-to guides for details.

DHCP service

It is critical to ensure that all VDI-enabled desktops have properly assigned IP addresses. In most cases, you would opt for automatic IP assignment.

Horizon on GCVE supports assigning IP addresses to clients as follows:

  • NSX-T based local DHCP service, attached to the Compute Gateway (default).

Table 6: DHCP Service Strategy for both All-in-SDDC and Federated Architectures

Decision

NSX-T was used to provide DHCP assigned IP addresses for desktops.

Justification

This provides properly assigned IP addresses and integrates directly with Horizon.

DNS Service

Reliable and correctly configured name resolution is vital for a successful Horizon deployment. While designing your environment, make sure you understand Configuring DNS for Management Appliance Access. Your design choice will directly influence the configuration details.

  • VMware recommends using local DNS Server (hosted on Google Cloud VMware Engine or GCP) to reduce dependency on the connection link to on-premises.
  • For a single GCVE SDDC, you can use a conditional forwarder in your DNS server for gve.goog pointing to the IP addresses of the DNS servers in the SDDC. See Configure DNS in Horizon on GCVE Configuration.
  • For multiple GCVE SDDCs with the Federated Architecture, you need to use a DNS forward lookup zone for gve.goog, which contains the fully qualified domain names and IP addresses of the vCenter Server and NSX-T appliances. See Configure DNS in the Horizon on GCVE Configuration document. A quick resolution check is sketched after this list.
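Assuming the forwarder or zone is in place, the following minimal sketch verifies from a management VM that the management names resolve. The FQDNs are hypothetical placeholders; copy the real values from your GCVE console.

```python
import socket

# Hypothetical gve.goog management FQDNs -- copy the real ones from the
# GCVE console for your private cloud(s).
MANAGEMENT_FQDNS = [
    "vcsa-123456.abc000.gve.goog",  # vCenter Server
    "nsx-123456.abc000.gve.goog",   # NSX-T Manager
]

for fqdn in MANAGEMENT_FQDNS:
    try:
        print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
    except socket.gaierror as exc:
        print(f"{fqdn}: resolution failed ({exc}) -- check the gve.goog forwarder/zone")
```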

Important:  If you are deploying more than one SDDC in Google Cloud VMware Engine for use with Horizon, make sure to use unique CIDR ranges for the management components. By default, SDDCs use the same CIDR range and have the same IP addresses. If you do not change this default, you will not be able to use multiple vCenter instances with Horizon. For more information, see the GCVE documentation on setting up a private cloud.

Table 7: DNS Strategy for a single SDDC

Decision

Add the DNS role to a local Domain Controller located in GCVE.

Configure DNS conditional forwarding for gve.goog.

Justification

The local DNS server allows local name resolution without depending on a connection to on-premises.

Since only one SDDC is used, the conditional forwarder for gve.goog pointing to the DNS servers in that SDDC can be used.

Table 8: DNS Strategy for Multiple SDDCs

Decision

Add the DNS role to a local Domain Controller located in GCP.

Create forward lookup zone for gve.goog.

Justification

The local DNS server allows local name resolution without depending on a connection to on-premises.

Since we have more than one SDDC defined, we need to create a forward lookup zone for gve.goog.

Scaling Deployments

In this section, we discuss options for scaling out Horizon on GCVE. The following are the design decisions for both the All-in-SDDC and Federated Architectures.

All-in-SDDC - Single SDDC

This design comprises a single SDDC, which will handle up to 1,500 concurrent Horizon users. There is a single FQDN (desktops.customer.com) bound to the public IP address. A third-party load balancer is used to forward Horizon protocol traffic to the UAG appliances located inside the SDDC. Users connect to the desktops.customer.com FQDN via the Horizon Client or HTML Access, or it is used automatically with Workspace ONE Access integration. The following figure is a logical view of the All-in-SDDC Architecture showing all the Horizon components (management and user resources) located inside the SDDC.


Figure 19: Logical Architecture for a single SDDC with the All-in-SDDC Architecture

The figure below illustrates the connection flow with a single SDDC using the All-in-SDDC Architecture. Because there is no way to get Horizon protocol traffic directly from the internet to the UAG appliances located in the SDDC, a third-party load balancer is needed. The Google Load Balancer forwards TCP/UDP traffic from the public IP to a pool containing the third-party load balancer, which is configured to load balance the UAG appliances located inside the GCVE SDDC.

Figure 20: Horizon connection flow with single SDDC using the All-in-SDDC Architecture

The figure below shows the All-in-SDDC Architecture, with all networking done via NSX-T inside the SDDC and the third-party load balancer located in GCP to forward Horizon protocol traffic.


Figure 21: All-in-SDDC Architecture with a single SDDC with Networking

Table 9: Connection Server Strategy for the All-in-SDDC Architecture

Decision

Two connection servers were deployed.

These ran on dedicated Windows 2019 VMs located in the internal network segment within the SDDC.

Justification

One connection server is recommended per 2,000 concurrent connections.

A second server provides redundancy and availability (n+1).

Table 10: Unified Access Gateway Strategy for the All-in-SDDC Architecture

Decision

Two standard-size Unified Access Gateway appliances were deployed inside a GCVE SDDC as part of the Horizon All-in-SDDC Architecture.

These spanned the Internal and External DMZ networks.

Justification

UAG provides secure external access to internally hosted Horizon desktops and applications.

One standard UAG appliance is recommended for up to 2,000 concurrent Horizon connections.

A second UAG provides redundancy and availability (n+1).

Table 11: App Volumes Manager Strategy for the All-in-SDDC Architecture

Decision

Two App Volumes Managers were deployed.

The two App Volumes Managers are load balanced with the NSX Advanced Load Balancer (Avi) in GCP.

Justification

App Volumes is used to deploy applications locally.

The two App Volumes Managers provide scale and redundancy.

Table 12: Horizon Cloud Connector Strategy for the All-in-SDDC Architecture

Decision

One Horizon Cloud Connector was deployed in licensing only mode.

Justification

To license the solution via subscription licensing.

Control plane services beyond licensing are not supported at this time.

Table 13: Workspace ONE Access Connector Strategy for the All-in-SDDC Architecture

Decision

Two Workspace ONE Access connectors were deployed.

Justification

To connect back to our Workspace ONE instance to integrate entitlements and allow for seamless brokering into desktops and applications.

Two connectors deployed for resiliency.

Table 14: Dynamic Environment Manager Strategy for the All-in-SDDC Architecture

Decision

VMware Dynamic Environment Manager (DEM) was deployed on the local file server.

Justification

This is used to provide customization / configuration of the Horizon desktops.

This location contains the Configuration and local Profile shares for DEM.

Table 15: SQL Server Strategy for the All-in-SDDC Architecture

Decision

A single SQL server was deployed.

Justification

This SQL server was used for the Horizon Events Database and App Volumes.

Table 16: Load balancing strategy for the All-in-SDDC Architecture

Decision

A third-party load balancer was used to load balance the Unified Access Gateways and App Volumes Managers.

The NSX Advanced Load Balancer (Avi) was used.

Justification

A load balancer enables the deployment of multiple management components, such as Unified Access Gateways, to allow scale and redundancy.

A third-party load balancer is required to route protocol traffic into the SDDC from GCP.

The NSX Advanced Load Balancer was used because it integrates with Google Cloud Platform for orchestration.

All-in-SDDC Architecture - Two SDDCs

As the All-in-SDDC Architecture for Horizon on Google Cloud VMware Engine places all the management components inside each SDDC, there are some considerations when scaling above a single SDDC. A maximum of two SDDCs is supported in this design, for a total of approximately 3,000 concurrent users (1,500 per SDDC).


Figure 22: Logical Architecture for the All-in-SDDC Architecture with two SDDCs

With the All-in-SDDC Architecture, the recommendation is that a Horizon pod should contain only a single Google Cloud VMware Engine SDDC. As mentioned earlier, each pod has a dedicated FQDN for user access, for example, desktops1.company.com and desktops2.company.com, and each SDDC has a dedicated external IP address tied to its FQDN. A maximum of two SDDCs is supported in this architecture.

As illustrated in the figure below, each Horizon pod has a dedicated FQDN, which users connect to via the Horizon Client, HTML Access, or Workspace ONE Access integration, which shows users only the items they are entitled to. It does not matter whether a user is entitled to site 1 or site 2; they are automatically routed to the proper FQDN and directed to the pod where their resources are located. Just as in the single-SDDC design, all of the Horizon components are located inside each SDDC, and a third-party load balancer in GCP forwards Horizon protocol traffic to the UAGs for each respective pod. The same third-party load balancer is used for both SDDCs, with two external IPs (VIPs), each tied to an external FQDN that users connect to for desktops and apps. The two sites are set up the same way.

Figure 23: Horizon connection flow for All-in-SDDC Architecture with two SDDCs

The figure below shows the All-in-SDDC Architecture, with all networking done via NSX-T inside the SDDCs and the third-party load balancer located in GCP to forward Horizon protocol traffic. This load balancer is configured with the public IP addresses of both SDDCs and load balances the UAG appliances located in each SDDC.

Figure 24: All-in-SDDC Architecture with two SDDCs and Networking

Federated Architecture

In the Federated Architecture, all of the Horizon management components are located inside of native GCP. There is still a limit of 1,500 concurrent connections per SDDC, but you can create a Horizon pod consisting of multiple SDDCs. This solution is architected the same way as on-premises Horizon, and the front-end management components just need to be scaled to handle the concurrent connections from the back-end GCVE desktop resources. See VMware Configuration Maximums or the product documentation for the version of Horizon you are deploying for the maximums of the Horizon components. Once DNS resolution is set up as described in DNS Service in this document, you can simply add the additional vCenter Servers in Horizon. Notice in the following figure that the Deployment Type shows as Google Cloud VMware Engine.


Figure 25: Two vCenter Servers from two GCVE SDDCs in the Federated Architecture added to Horizon

The figure below illustrates the Federated Architecture, with all of the Horizon components located inside of GCP and multiple SDDCs used for desktop/RDS capacity to create a Horizon pod. In this example, the SDDCs could handle up to 6,000 concurrent connections, and the front-end Horizon management resources would be sized accordingly. In the Federated Architecture, a single Horizon pod comprises one or more SDDCs and scales up to the Horizon maximums for the pod. As of Horizon 2103, this is 12,000 concurrent connections, so a single Horizon pod can contain up to eight (8) Google Cloud VMware Engine SDDCs for desktop/RDS capacity. Beyond this, a new Horizon pod should be created with new Horizon management components. The two pods can be joined into a Cloud Pod federation, and global entitlements can be used.
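Combining the two ceilings (1,500 connections per SDDC and 12,000 per pod as of Horizon 2103), the scaling arithmetic can be sketched as follows:

```python
import math

USERS_PER_SDDC = 1500     # per-SDDC concurrency guidance in this chapter
POD_MAX_SESSIONS = 12000  # Horizon pod maximum as of Horizon 2103

def scale_out(concurrent_users: int) -> tuple[int, int]:
    """Return (pods, sddcs) needed for a target concurrency."""
    pods = math.ceil(concurrent_users / POD_MAX_SESSIONS)
    sddcs = math.ceil(concurrent_users / USERS_PER_SDDC)
    return pods, sddcs

print(scale_out(6000))   # -> (1, 4): one pod fronting four SDDCs
print(scale_out(15000))  # -> (2, 10): two pods joined with Cloud Pod Architecture
```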

Figure 26: Multiple GCVE SDDCs within one Horizon POD in the Horizon Federated Architecture

The following figure illustrates the connection flow for a user. The user connects via the FQDN, which directs them to the Google Load Balancer in GCP hosting the public IP. This load balancer is set up to load balance the UAG appliances. The user is challenged for authentication by one of the UAG appliances via a Connection Server, and once authenticated, a Horizon protocol session is established to the desktop or published application in one of the SDDCs in the Horizon pod.


Figure 27: Multiple GCVE SDDCs with one Horizon POD in the Federated Architecture

The figure below illustrates the Federated Architecture with the Horizon POD management resources located in GCP and the desktop/RDS resources scaled out into multiple SDDCs. Networking is done both in the GCP VPC and inside the GCVE SDDC with NSX-T for the desktop/RDS segments.


Figure 28: Federated Architecture with two SDDCs Networking

Table 17: SDDC Strategy for the Federated Architecture

Decision


Two GCVE SDDCs were deployed in different GCP regions.

Justification

This allowed testing of the Federated Architecture with the Horizon management components located in GCP and the desktop/RDS resources located in vCenter instances in two separate SDDCs.

Table 18: Connection Server Strategy for the Federated Architecture

Decision

Three connection servers were deployed.

These ran on dedicated Windows 2019 VMs located in the GCP Management network.

Justification

One connection server is recommended per 2,000 concurrent connections.

A third server provides redundancy and availability (n+1).

Table 19: Unified Access Gateway Strategy for the Federated Architecture

Decision

Three standard-size Unified Access Gateway appliances were deployed as part of the Horizon solution.

These spanned the Internal and External DMZ VPC networks.

Justification

UAG provides secure external access to internally hosted Horizon desktops and applications.

One standard UAG appliance is recommended for up to 2,000 concurrent Horizon connections.

A third UAG provides redundancy and availability (n+1).

Table 20: App Volumes Manager Strategy for the Federated Architecture

Decision

Three App Volumes Managers were deployed.

The three App Volumes Managers are load balanced with the built-in load balancer in GCP.

Justification

App Volumes is used to deploy applications locally.

The three App Volumes Managers provide scale and redundancy.

Table 21: Horizon Cloud Connector Strategy for the Federated Architecture

Decision

One Horizon Cloud Connector (1.10) was deployed in licensing only mode.

This Connector was deployed directly into GCP.

Justification

To license the solution via subscription licensing.

Control plane services beyond licensing are not supported at this time.

Table 22: Workspace ONE Access Connector Strategy for the Federated Architecture

Decision

Two Workspace ONE Access connectors were deployed.

Justification

To connect back to our Workspace ONE instance to integrate entitlements and allow for seamless brokering into desktops and applications.

Two connectors were deployed for resiliency.

Table 23: Dynamic Environment Manager Strategy for the Federated Architecture

Decision

VMware Dynamic Environment Manager (DEM) was deployed on the local file server.

Justification

This is used to provide customization / configuration of the Horizon desktops.

This location contains the Configuration and local Profile shares for DEM.

Table 24: SQL Server Strategy for the Federated Architecture

Decision

A single SQL server was deployed.

Justification

This SQL server was used for the shared App Volumes database.


Table 25: PostgreSQL Server Strategy for the Federated Architecture

Decision

A single PostgreSQL 12.x server was deployed.

Justification

This PostgreSQL server was used for the Horizon Events Database.

Table 26: Load balancing strategy for the Federated Architecture

Decision

A load balancer was used to load balance the Unified Access Gateways and App Volumes Managers.

The Google Load Balancer was used.

Justification

A load balancer enables the deployment of multiple management components, such as Unified Access Gateways, to allow scale and redundancy.

Multiple Horizon Pods

When deploying environments with multiple Horizon Pods, there are different options available for administrating and entitling users to resources across the pods. This is applicable for Horizon environments on Google Cloud VMware Engine, on other cloud platforms, or on-premises.

  • The Horizon pods can be managed separately, with users entitled separately to each pod and directed to use the unique FQDN for the correct pod.
  • Alternatively, you can manage and entitle the Horizon environments by linking them using Cloud Pod Architecture (CPA).
    • Important: CPA is only supported in the Federated Architecture.

Protocol Traffic Hairpinning

One consideration to be aware of is potential hairpinning of the Horizon protocol traffic through another Horizon pod or SDDC. This can occur if the user's session is initially sent to the wrong SDDC for authentication. With each SDDC, all traffic into and out of the SDDC goes via the NSX edge gateways, and there is a limit to the traffic throughput. In the All-in-SDDC Architecture, because the management components, including the Unified Access Gateways, are located inside the SDDC, authentication traffic must enter an SDDC and pass through the NSX edge gateway.

If the authentication traffic is not precisely directed by using a unique namespace for each Horizon pod, the user could be directed to any Horizon pod in its respective SDDC. If the user's Horizon resource is not in the SDDC they are initially directed to for authentication, their Horizon protocol traffic would go into the initial SDDC to a Unified Access Gateway, back out via the NSX edge gateways, and then be directed to where their desktop or published application is being delivered from.

This hairpinning causes a reduction in achievable scale. For this reason, Cloud Pod Architecture is not supported in the All-in-SDDC Architecture, and caution should be exercised when using Cloud Pod Architecture in the Federated design. If you create a Cloud Pod federation that contains both a Horizon pod on-premises and a Horizon pod in GCVE, you could run into a similar issue where a user gets directed to one pod that does not contain their assigned resource and is then routed back out to the other pod. This flow is illustrated in the following diagram.


Figure 29: Horizon Protocol Traffic Hairpinning with Federated Design (UAGs in GCP)

Networking to On-Premises

With Horizon on GCVE, you can connect to your on-premises resources, such as a Horizon Cloud Pod federation (Federated Architecture only), expand your Active Directory into the GCVE environment, or replicate DEM data. Either a Cloud Interconnect or a Cloud VPN can be established between on-premises sites and GCP.

Figure 30: GCVE Federated Architecture with Networking to On-Premises

Table 27: Virtual Private Network Strategy

Decision

A Virtual Private Network (VPN) was established between GCP and our on-premises environment.

Justification

Allow for extension of on-premises services into the GCP/GCVE environment.

Table 28: Domain Controller Strategy

Decision

A single domain controller was deployed as a member of the on-premises domain.

This domain controller is used as a DNS server for the SDDC.

Justification

A site was created to keep authentication traffic local to the site.

This allows Group Policy settings to be applied locally.

This allows for local DNS queries.

In the event of an issue with the local domain controller, authentication requests can traverse the VPN back to our on-premises domain controllers.

Table 29: File Server Strategy

Decision

A single file server was deployed with DFS-R replication to on-premises file servers.

Justification

General file services for the local site.

Allow the common DEM configuration data to be replicated to this site.

Keep the Profile data for DEM local to the site, with a backup to on-premises.

Licensing

Enabling Horizon to run on Google Cloud VMware Engine requires two separate licenses: a capacity license for Google Cloud VMware Engine and a Horizon subscription license.

For a POC or pilot deployment of Horizon on Google Cloud VMware Engine, you can use a temporary evaluation license or your existing perpetual license. However, to enable Horizon for production deployment on Google Cloud VMware Engine, you must purchase a Horizon subscription license. To obtain a Horizon subscription license or for more information on how to upgrade your existing perpetual license to a subscription license and associated discounts, contact your VMware representative.

For more information on the features and packaging of Horizon subscription licenses, see VMware Horizon Subscription - Feature comparison.

You can use different licenses (including perpetual licenses) on different Horizon pods, regardless of whether the pods are connected by CPA. You cannot mix different licenses within a pod because each pod only takes one type of license. For example, you cannot use both a perpetual license and a subscription license for a single pod. You also cannot use both the Horizon Apps Universal Subscription license and the Horizon Universal Subscription license in a single pod.

Horizon Cloud Connector

Regardless of whether you are deploying Horizon on-premises or on Google Cloud VMware Engine, if you are using any of the subscription licenses, you must install the Horizon Cloud Connector to enable subscription license management for Horizon. The Horizon Cloud Connector is a virtual appliance that connects a Horizon pod with Horizon Cloud Service features. With the All-in-SDDC Architecture, the Horizon Cloud Connector is installed inside of the SDDC. In the Federated Architecture, the Horizon Cloud Connector is installed directly into GCP. This requires version 1.10 or later of the Horizon Cloud Connector. See the Horizon on GCVE Configuration for details on how to deploy the Horizon Cloud Connector 1.10 or later into GCP. Horizon control plane services beyond license management are not supported on GCVE at this time.

A My VMware account from https://my.vmware.com is required for a Horizon subscription license. After you purchase the subscription license, a record is created in the Horizon Cloud Service using your My VMware email address, and your subscription license information becomes visible in the Horizon Administrator console.

As part of the subscription license fulfillment process, you will receive an email with a link to download the Horizon Cloud Connector as an OVA (open virtual appliance) file. Follow the instructions in the email to deploy the Cloud Connector, using the vSphere web client, alongside your new or existing Horizon pods. After the Cloud Connector is deployed, it is paired with a Connection Server in the Horizon pod, and this pod is connected to the Horizon Cloud Service. The Horizon Cloud Service manages the Horizon subscription license between connected Horizon pods.

Unlike the Horizon perpetual license, with a subscription license, you do not need to retrieve or manually enter a license key for Horizon product activation. However, supporting component license keys, such as the license keys for App Volumes, and others, will be delivered separately, and the administrator must manually enter them to activate the product.

Review the Horizon documentation for more details on Enabling VMware Horizon for Subscription Licenses and Horizon Control Plane Services. You will need a separate Cloud Connector for each Horizon pod.

Preparing Active Directory

Horizon requires Active Directory services. For supported Active Directory Domain Services (AD DS) domain functional levels, see the VMware Knowledge Base (KB) article: Supported Operating Systems and MSFT Active Directory Domain Functional Levels for VMware Horizon 8 2006 (78652).

If you are deploying Horizon in a hybrid cloud environment by linking an on-premises environment with a GCVE Horizon pod, you should extend the on-premises Microsoft Active Directory (AD) to GCP or GCVE.

Although you could use on-premises Active Directory services without locating new domain controllers in GCP/GCVE, this could introduce undesirable latency and reduce service resiliency and availability.

A site should be created in Active Directory Sites and Services and associated with the subnets containing the domain controllers for GCP/GCVE. This keeps the Active Directory services traffic local.

Table 30: Active Directory Strategy

Decision

An Active Directory domain controller was installed into GCP.

Justification

Locating domain controllers in GCP reduces latency for Active Directory queries, DNS, and KMS.

Shared Content Library

Content libraries are container objects for VM templates, vApp templates, and other types of files, such as OVF templates, ISO images, and text files. vSphere administrators can use the templates in the library to deploy virtual machines and vApps in the vSphere inventory. Sharing golden images across multiple vCenter Server instances, whether in multiple Google Cloud VMware Engine SDDCs or on-premises, ensures consistency, compliance, efficiency, and automation when deploying workloads at scale.

For more information, see Using Content Libraries in the vSphere Virtual Machine Administration guide in the VMware vSphere documentation.

Resource Sizing

When sizing resources, make sure to take memory, CPU, and VM-level reservations into consideration, as well as leveraging CPU shares for different workloads.

Memory Reservations

Because physical memory cannot be shared between virtual machines, and because swapping or ballooning should be avoided at all costs, be sure to reserve all memory for all Horizon virtual machines, including management components, virtual desktops, and RDS hosts.

CPU Reservations

A CPU reservation specifies the guaranteed minimum allocation for a virtual machine; reserved CPU cycles that are not in use are shared with other workloads. For the management components, the reservation should equal the number of vCPUs multiplied by the CPU frequency. Any CPU reservation not actively used by the management components remains available for virtual desktops and RDS hosts when they are not deployed to a separate cluster.
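For example, the reservation for a 4-vCPU management server works out as follows; the 2.6 GHz per-core frequency is an assumed value, so substitute the actual base clock of your nodes.

```python
# Reservation for a management VM = vCPU count x per-core frequency.
vcpus = 4
core_ghz = 2.6  # assumed per-core base frequency -- use your node's actual value
reservation_mhz = int(vcpus * core_ghz * 1000)
print(f"Reserve {reservation_mhz} MHz for this VM")  # -> Reserve 10400 MHz
```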

Virtual Machine–Level Reservations

As well as setting a reservation on the resource pool, be sure to set a reservation at the virtual machine level. This ensures that any VMs that might later get added to the resource pool will not consume resources that are reserved and required for HA failover. These VM-level reservations do not remove the requirement for reservations on the resource pool. Because VM-level reservations are taken into account only when a VM is powered on, the reservation could be taken by other VMs when one VM is powered off temporarily.

Leveraging CPU Shares for Different Workloads

Because RDS hosts can facilitate more users per vCPU than virtual desktops can, a higher share should be given to them. When desktop VMs and RDS host VMs are run on the same cluster, the share allocation should be adjusted to ensure relative prioritization.
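As an illustration of applying these VM-level settings programmatically, here is a hedged pyVmomi sketch that sets CPU/memory reservations and a CPU shares level. The vCenter FQDN and credentials are hypothetical, and the helper assumes you have already retrieved the vim.VirtualMachine object (for example, via a container view).

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical GCVE vCenter details -- substitute your own.
si = SmartConnect(host="vcsa-123456.abc000.gve.goog",
                  user="CloudOwner@gve.local", pwd="change-me",
                  disableSslCertValidation=True)  # requires a recent pyVmomi

def set_vm_allocation(vm: vim.VirtualMachine, cpu_mhz: int, mem_mb: int,
                      shares_level=vim.SharesInfo.Level.normal):
    """Apply VM-level CPU/memory reservations and a CPU shares level
    (for example, 'high' for RDS hosts sharing a cluster with desktops)."""
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        reservation=cpu_mhz, shares=vim.SharesInfo(level=shares_level))
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=mem_mb)
    return vm.ReconfigVM_Task(spec=spec)  # returns a vim.Task to monitor
```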

Deploying Desktops

With Horizon on Google Cloud VMware Engine, both instant clones and full clones can be used. Instant clones, coupled with App Volumes and Dynamic Environment Manager, help accelerate the delivery of user-customized and fully personalized desktops.

Connection Server

When deploying the first Connection Server in the SDDC, make sure to choose “Google Cloud” as the deployment type. This sets the proper configuration and permissions on the Connection Server and vCenter Server.


Figure 31: Horizon Deployment Methods Option

At the time of writing, Horizon 8 (2006) and later, and Horizon 7.13 are supported on GCVE. For details on supported features, see the KB: VMware Horizon on Google Cloud VMware Engine (GCVE) Support (81922).

Instant Clones

Dramatically reduce infrastructure requirements while enhancing security by delivering brand-new personalized desktop and application services to end users every time they log in:

  • Reap the economic benefits of stateless, non-persistent virtual desktops served up to date upon each login.
  • Deliver a pristine, high-performance personalized desktop every time a user logs in.
  • Improve security by destroying desktops every time a user logs out.
  • On the golden image VM, add the domain’s DNS to avoid customization failures.

When you install and configure Horizon for instant-clone deployment on Google Cloud VMware Engine, do the following:

Important: CBRC is not supported or needed on Google Cloud VMware Engine, and it is disabled on vCenter when deploying Horizon on GCVE. Do NOT enable CBRC on vCenter when creating an instant-clone pool. Because CBRC is disabled on vCenter, you will see the following message when creating an instant-clone pool.

Figure 32: Warning about turning on View Storage Accelerator (CBRC)

Make sure to choose Ignore on this dialog message.

Multi-VLAN is not yet supported when creating Horizon instant-clone pools on Google Cloud VMware Engine.

For more information on Instant clones, see Instant Clone Smart Provisioning.

App Volumes

App Volumes provides real-time application delivery and management, both on-premises and now on Google Cloud VMware Engine:

  • Quickly provisions applications at scale
  • Dynamically attaches applications to users, groups, or devices, even when users are already logged in to their desktop
  • Provisions, delivers, updates, and retires applications in real time
  • Provides a user-writable volume, allowing users to install applications that follow them across desktops
  • Provides end users with quick access to a Windows workspace and applications, with a personalized and consistent experience across devices and locations
  • Simplifies end-user profile management by providing organizations with a single and scalable solution that leverages the existing infrastructure
  • Speeds up the login process by applying configuration and environment settings in an asynchronous process instead of all at login
  • Provides a dynamic environment configuration, such as drive or printer mappings, when a user launches an application

For design guidance, see App Volumes Architecture.

Transfer App Volumes from vSphere to Google Cloud VMware Engine

For migration or BCDR purposes, you can transfer your AppStacks or user-writable volumes from on-premises to the Google Cloud VMware Engine environment using your vSphere client in a two-step process.

From the vSphere Client:

  1. Create a VM with thin provisioning and attach the volume that you want to transfer to the VM.
  2. Select the VM and export it as an OVF template from File > Export to OVF Template.

From the Google Cloud VMware Engine web client:

  1. Click Actions > Deploy OVF Template.
  2. Follow the on-screen instructions and when prompted to select the storage format, select Thin provision.

After the VM is created, browse the datastore where the VM was deployed and move the VMDK file with its metadata to the appvolumes/packages directory.

Ensure that you change the template location in the metadata file to point to the new datastore.
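If you prefer to script the move, a hedged pyVmomi sketch using the FileManager API is shown below. The datastore, folder, and file names are illustrative, and App Volumes metadata file naming varies by version, so adjust the paths to match what you actually deployed.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical GCVE vCenter details -- substitute your own.
si = SmartConnect(host="vcsa-123456.abc000.gve.goog",
                  user="CloudOwner@gve.local", pwd="change-me",
                  disableSslCertValidation=True)
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]  # first datacenter in the SDDC
file_manager = content.fileManager

# File names are illustrative -- match them to your exported AppStack/package.
for ext in (".vmdk", ".vmdk.metadata"):
    file_manager.MoveDatastoreFile_Task(
        sourceName=f"[vsanDatastore] imported-appstack/appstack{ext}",
        sourceDatacenter=datacenter,
        destinationName=f"[vsanDatastore] appvolumes/packages/appstack{ext}",
        destinationDatacenter=datacenter)
```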

Dynamic Environment Manager

Use VMware Dynamic Environment Manager for application personalization and dynamic policy configuration across any virtual, physical, and cloud-based environment. Install and configure Dynamic Environment Manager on Google Cloud VMware Engine just like you would install it on-premises.

See Dynamic Environment Manager Architecture.

What’s Next?

Now that you have come to the end of this chapter, you can return to the landing page and search or scroll to select your next chapter in one of the following sections:

  • Overview chapters provide understanding of business drivers, use cases, and service definitions.
  • Architecture chapters explore the products you are interested in including in your platform, including Workspace ONE UEM, Workspace ONE Access, Workspace ONE Assist, Workspace ONE Intelligence, Horizon, App Volumes, Dynamic Environment Manager, and Unified Access Gateway.
  • Integration chapters cover the integration of components and services you need to create the platform capable of delivering what you want.
  • Configuration chapters provide reference for specific tasks as you build your platform, such as installation, deployment, and configuration processes for Horizon, App Volumes, Dynamic Environment Management, and more.

 

 

 

 
