Solution

  • Workspace ONE
  • Horizon

Type

  • Document

Level

  • Advanced

Category

  • Reference Architecture

Product

  • App Volumes
  • Dynamic Environment Manager
  • Horizon 7
  • Horizon Cloud Service
  • NSX for Horizon
  • Unified Access Gateway
  • Workspace ONE Access
  • Workspace ONE Intelligence
  • Workspace ONE UEM

OS/Platform

  • Azure
  • Linux
  • Windows 10

Phase

  • Design
  • Deploy

Use-Case

  • Identity / Access Management
  • Secure Remote Access

VMware Workspace ONE and VMware Horizon Reference Architecture


Executive Summary

Use this reference architecture as you build services to create an integrated digital workspace—one that addresses your unique business requirements and use cases. You will integrate the components of VMware Workspace ONE®, including VMware Horizon® 7 Enterprise Edition and VMware Horizon® Cloud Service™ on Microsoft Azure.

This reference architecture provides a framework and guidance for architecting an integrated digital workspace using Workspace ONE and Horizon. Design guidance is given for each product—with a corresponding component design chapter devoted to each product—followed by chapters that provide best practices for integrating the components into a complete platform. For validation, an example environment was built. The design decisions made for this environment are listed throughout the document, along with the rationale for each decision and descriptions of the design considerations.

Workspace ONE combines identity and mobility management to provide frictionless and secure access to all the apps and data that employees need to do their work, wherever, whenever, and from whatever device they choose. Mobile device and identity services are delivered through VMware Workspace ONE® Unified Endpoint Management (UEM), powered by AirWatch, and VMware Workspace ONE Access™ (formerly called VMware Identity Manager).

Additionally, Workspace ONE integrates with VMware Horizon® virtual desktops and published applications delivered through Horizon 7 and Horizon Cloud Service on Microsoft Azure. This integration provides fast single-sign-on (SSO) access to a Windows desktop or set of Windows applications for people who use the service.


Figure 1: User Workspace with VMware Workspace ONE

The example architecture and deployment described in this guide address key business drivers. The approach taken is, as with any technology solution, to start by defining those business drivers and then identify use cases that need to be addressed. Each use case will entail a set of requirements that need to be fulfilled to satisfy the use case and the business drivers.

Once the requirements are understood, the solutions can be defined and blueprints outlined for the services to be delivered. This step allows us to identify and understand the products, components, and parts that need to be designed, built, and integrated.


Figure 2: Design Approach

To deliver a Workspace ONE and Horizon solution, you build services efficiently from several reusable components. This modular, repeatable design approach combines components and services to customize the end-user experience without requiring specific configurations for individual users. The resultant environment and services can be easily adapted to address changes in the business and use case requirements.


Figure 3: Sample Service Blueprint

This reference architecture underwent validation of design, environment adaptation, component and service build, integration, user workflow, and testing to ensure that all the objectives were met, that the use cases were delivered properly, and that real-world application is achievable.

This VMware Workspace ONE and VMware Horizon Reference Architecture illustrates how to architect and deliver a modern digital workspace that meets key business requirements and common use cases for the increasingly mobile workplace, using Workspace ONE and Horizon.

Workspace ONE and Horizon Solution Overview

VMware Workspace ONE® is a simple and secure enterprise platform that delivers and manages any app on any device. Workspace ONE integrates identity, application, and enterprise mobility management while also delivering feature-rich virtual desktops and applications. It is available either as a cloud service or for on-premises deployment. The platform is composed of several components—VMware Workspace ONE® UEM (powered by VMware AirWatch®), Workspace ONE Access™, VMware Horizon®, and the Workspace ONE productivity apps, which are supported on most common mobile platforms.

VMware Reference Architectures

VMware reference architectures are designed and validated by VMware to address common use cases, such as enterprise mobility management, enterprise desktop replacement, remote access, and disaster recovery.

This Workspace ONE and Horizon reference architecture is a framework intended to provide guidance on how to architect and deploy Workspace ONE and Horizon solutions. It presents high-level design and low-level configuration guidance for the key features and integration points of Workspace ONE and Horizon. The result is a description of cohesive services that address typical business use cases.

VMware reference architectures offer customers:

  • Standardized, validated, repeatable components
  • Scalable designs that allow room for future growth
  • Validated and tested designs that minimize implementation and operational risks
  • Quick implementation and reduced costs

This reference architecture does not provide performance data or stress-testing metrics. However, it does provide a structure and guidance on architecting in repeatable blocks for scale. The principles followed include the use of high availability and load balancing to ensure that there are no single points of failure and to provide a production-ready design.

Design Tools

The VMware Digital Workspace Designer is a companion and aid for planning and sizing a Workspace ONE and Horizon deployment, maintaining current VMware best practices and assisting you with key design decisions. The tool is aimed at establishing an initial high-level design for any planned deployment and is intended to complement a proper planning and design process.

The VMware Digital Workspace Topology Tool allows you to create a logical architectural diagram by selecting the Workspace ONE and Horizon components. It generates a diagram that shows the selected components and the links between them. The Topology Tool can also be launched from the Digital Workspace Designer to automatically create an architectural diagram with the components generated as part of a design. Both tools can be found at https://techzone.vmware.com/tools.

Design Decisions

As part of the creation of this reference architecture guide, full environments are designed, built, and tested. Throughout this guide, design decisions are listed that describe the choices we made for our implementation. 

Table 1: Design Decision Regarding the Purpose of This Reference Architecture

Decision: Full production-ready environments were architected, deployed, and tested.

Justification: This allows the content of this guide, including the design, deployment, integration, and delivery, to be verified, validated, and documented.

Each implementation of Workspace ONE and Horizon is unique and poses distinct requirements. The implementation described in this reference architecture addresses common use cases, decisions, and challenges in a manner that can be adapted to differing circumstances.

Audience

This reference architecture guide helps IT architects, consultants, and administrators involved in the early phases of planning, designing, and deploying Workspace ONE, VMware Horizon® 7, and VMware Horizon® Cloud Service™ solutions.

You should have:

  • A solid understanding of the mobile device landscape
  • Deep experience regarding the capabilities and configuration of mobile operating systems
  • Familiarity with device-management concepts
  • Knowledge of identity solutions and standards, such as SAML authentication
  • Understanding of enterprise communication and collaboration solutions, including Microsoft Office 365, Exchange, and SharePoint
  • A good understanding of virtualization, in particular any platform used to host services, such as VMware vSphere® or Microsoft Azure
  • A solid understanding of desktop and application virtualization
  • A solid understanding of firewall policy and load-balancing configurations
  • A good working knowledge of networking and infrastructure, covering topics such as Active Directory, DNS, and DHCP

Workspace ONE Features

Workspace ONE features provide enterprise-class security without sacrificing convenience and choice for end users:

  • Real-time app delivery and automation – Taking advantage of new capabilities in Windows, Workspace ONE allows desktop administrators to automate application distribution and updates. This automation, combined with virtualization technology, helps ensure application access as well as improve security and compliance. Provision, deliver, update, and retire applications in real time.
  • Self-service access to cloud, mobile, and Windows apps – After end users are authenticated through either the Workspace ONE app or the VMware Workspace ONE® Intelligent Hub app, they can instantly access mobile, cloud, and Windows applications with one-touch mobile single sign-on (SSO).
  • Choice of any device, employee or corporate owned – Administrators can facilitate adoption of bring-your-own-device (BYOD) programs by putting choice in the hands of end users. Give the level of convenience, access, security, and management that makes sense for their work style.
  • Device enrollment – The enrollment process allows a device to be managed in a Workspace ONE UEM environment so that device profiles and applications can be distributed and content can be delivered or removed. Enrollment also allows extensive reporting based on the device’s check-in to the Workspace ONE UEM service.
  • Adaptive management – For some applications, end users can log in to Workspace ONE and access the applications without first enrolling their device. For other applications, device enrollment is required, and the Workspace ONE app can prompt the user to initiate enrollment.

    Administrators can enable flexible application access policies, allowing some applications to be used prior to enrollment in device management, while requiring full enrollment for apps that need higher levels of security.

  • Conditional access – Both Workspace ONE Access and Workspace ONE UEM have mechanisms to evaluate compliance. When users register their devices with Workspace ONE, data samples from the device are sent to the Workspace ONE UEM cloud service on a scheduled basis to evaluate compliance. This regular evaluation ensures that the device meets the compliance rules set by the administrator in the Workspace ONE UEM Console. If the device goes out of compliance, corresponding actions configured in the Workspace ONE UEM Console are taken.

    Workspace ONE Access includes an access policy option that administrators can configure to check the Workspace ONE UEM server for device compliance status when users sign in. The compliance check ensures that users are blocked from signing in to an application or using SSO to the Workspace ONE Access self-service catalog if the device is out of compliance. When the device is compliant again, the ability to sign in is restored.

    Actions can be enforced based on the network that users are on, the platform they are using, or the applications being accessed. In addition to checking Workspace ONE UEM for device compliance, Workspace ONE Access can evaluate compliance based on network range of the device, type of device, operating system of the device, and credentials.

  • Unified application catalog – The Workspace ONE Access and Workspace ONE UEM application catalogs are combined and presented on either the Workspace ONE app’s Catalog tab or the VMware Workspace ONE Intelligent Hub app, depending on which is being used.
  • Secure productivity apps: VMware Workspace ONE® Boxer, Web, Content, Notebook, People, Verify, and PIV-D Manager – End users can use the included mail, calendar, contacts, browser, content, organization, and authentication capabilities, while policy-based security measures protect the organization from data leakage by restricting the ways in which attachments and files are edited and shared.
  • Mobile SSO – One-touch SSO technology is available for all supported platforms. The implementation on each OS is based on features provided by the underlying OS. For iOS, one-touch SSO uses technology known as the key distribution center (KDC). For Android, the authentication method is called mobile SSO for Android. And for Windows 10, it is called cloud certificate.
  • Secure browsing – Using VMware Workspace ONE® Web instead of a native browser or third-party browser ensures that access to sensitive web content is secure and manageable.
  • Data loss prevention (DLP) – This feature forces documents or URLs to open only in approved applications to prevent accidental or purposeful distribution of sensitive information.
  • Resource types – Workspace ONE supports a variety of applications exposed through the Workspace ONE Access and Workspace ONE UEM catalogs, including SaaS-based SAML apps, VMware Horizon apps and desktops, Citrix virtual apps and desktops, VMware ThinApp® packaged apps delivered through Workspace ONE Access, and native mobile applications delivered through Workspace ONE UEM.
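The conditional-access behavior described above amounts to a policy decision: scheduled device samples are evaluated against administrator-defined compliance rules, and sign-in or SSO is blocked while the device is out of compliance. A minimal sketch of that decision logic follows; the field names and thresholds are illustrative assumptions, not the actual Workspace ONE UEM data model or API.

```python
from dataclasses import dataclass

# Hypothetical device sample; field names are illustrative only,
# not the real Workspace ONE UEM schema.
@dataclass
class DeviceSample:
    compromised: bool         # jailbroken / rooted
    os_version: tuple         # e.g. (10, 0)
    last_check_in_days: int   # days since last check-in

# Illustrative compliance rules, analogous to those an administrator
# would configure in the Workspace ONE UEM Console.
def is_compliant(sample: DeviceSample) -> bool:
    return (
        not sample.compromised
        and sample.os_version >= (10, 0)
        and sample.last_check_in_days <= 7
    )

# Sketch of the access-policy check made at sign-in: SSO is allowed
# only while the device remains compliant.
def allow_sso(sample: DeviceSample) -> bool:
    return is_compliant(sample)

ok = DeviceSample(compromised=False, os_version=(10, 0), last_check_in_days=1)
bad = DeviceSample(compromised=True, os_version=(10, 0), last_check_in_days=1)
print(allow_sso(ok))   # True
print(allow_sso(bad))  # False
```

When the device returns to compliance, the same evaluation restores the ability to sign in, matching the behavior described above.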

Workspace ONE Platform Integration

Workspace ONE UEM delivers the enterprise mobility management portion of the solution. Workspace ONE UEM allows device enrollment and uses profiles to enforce configuration settings and management of users’ devices. It also enables a mobile application catalog to publish public and internally developed applications to end users.

Workspace ONE Access provides the solution’s identity-related components. These components include authentication using username and password, two-factor authentication, certificate, Kerberos, mobile SSO, and inbound SAML from third-party Workspace ONE Access systems. Workspace ONE Access also provides SSO to entitled web apps and Windows apps and desktops delivered through either VMware Horizon or Citrix.

Figure 4: Workspace ONE Logical Architecture Overview
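Several of the authentication methods above rely on SAML. As a hedged illustration of what SP-initiated SAML SSO involves at the wire level (this is the standard SAML 2.0 HTTP-Redirect binding, not a Workspace ONE Access API), the service provider deflates and base64-encodes an AuthnRequest and sends the user to the identity provider. The issuer and IdP URL below are placeholders.

```python
import base64
import urllib.parse
import zlib

# Standard SAML 2.0 HTTP-Redirect binding: the AuthnRequest XML is
# raw-DEFLATE compressed, base64-encoded, and URL-encoded into the
# SAMLRequest query parameter.
def build_redirect_url(idp_sso_url: str, authn_request_xml: str) -> str:
    # zlib.compress adds a 2-byte header and 4-byte checksum; the
    # Redirect binding requires raw DEFLATE, so strip both.
    deflated = zlib.compress(authn_request_xml.encode())[2:-4]
    encoded = base64.b64encode(deflated).decode()
    query = urllib.parse.urlencode({"SAMLRequest": encoded})
    return f"{idp_sso_url}?{query}"

# Placeholder request and IdP endpoint for illustration.
xml = ('<samlp:AuthnRequest '
       'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" ID="_1"/>')
url = build_redirect_url("https://idp.example.com/sso", xml)
print(url.startswith("https://idp.example.com/sso?SAMLRequest="))  # True
```

In a real deployment the request would also carry an issuer, a signature, and a RelayState, and the identity provider would return a signed assertion for the service provider to validate.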

VMware Workspace ONE Intelligence

VMware Workspace ONE® Intelligence is a service that gives organizations visualization tools and automation to help them make data-driven decisions for operating their Workspace ONE environment.

By aggregating, analyzing, and correlating device, application, and user data, Workspace ONE Intelligence provides extensive ways to filter and reveal key performance indicators (KPIs) at speed and scale across the entire digital workspace environment. After information of interest has been surfaced by Workspace ONE Intelligence, IT administrators can:

  • Use the built-in decision engine to create rules that take actions based on an extensive set of parameters.
  • Create policies that take automated remediation actions based on context.
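Conceptually, each of the decision-engine rules above pairs a condition over aggregated device and app data with an automated action. The sketch below is an illustrative model of that pattern only; the record fields and the tagging action are assumptions, not the Workspace ONE Intelligence rule syntax.

```python
# Hypothetical rule model: a condition over a device/app data record
# paired with a remediation action, as in the decision engine above.
def make_rule(condition, action):
    """Return a rule that fires the action when the condition holds."""
    def evaluate(record):
        return action(record) if condition(record) else None
    return evaluate

# Example rule: tag devices that have been inactive for 30+ days
# (field names and the tag format are illustrative).
rule = make_rule(
    condition=lambda d: d["days_inactive"] >= 30,
    action=lambda d: f"tag:{d['device_id']}:inactive",
)

print(rule({"device_id": "abc", "days_inactive": 45}))  # tag:abc:inactive
print(rule({"device_id": "def", "days_inactive": 2}))   # None
```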

With Workspace ONE Intelligence, organizations can easily manage complexity and security without compromising a great user experience.


Figure 5: Workspace ONE Intelligence Overview

Horizon 7 Platform

With Horizon 7 Enterprise Edition, VMware offers simplicity, security, speed, and scale in delivering on-premises virtual desktops and applications with cloud-like economics and elasticity of scale. With this latest release, customers can now enjoy key features such as:

  • JMP (Next-Generation Desktop and Application Delivery Platform) – JMP (pronounced jump), which stands for Just-in-Time Management Platform, represents capabilities in VMware Horizon 7 Enterprise Edition that deliver Just-in-Time Desktops and Apps in a flexible, fast, and personalized manner. JMP allows components of a desktop or RDSH server to be decoupled and managed independently in a centralized manner, yet reconstituted on demand to deliver a personalized user workspace when needed. JMP is supported with both on-premises and cloud-based Horizon 7 deployments, providing a unified and consistent management platform regardless of your deployment topology. This approach provides several key benefits, including simplified desktop and RDSH image management, faster delivery and maintenance of applications, and elimination of the need to manage “full persistent” desktops. JMP is composed of the following VMware technologies:
    • VMware Instant Clone Technology for fast desktop and RDSH provisioning
    • VMware App Volumes™ for real-time application delivery
    • VMware Dynamic Environment Manager™ for contextual policy management
  • Just-in-Time Desktops – Leverages Instant Clone Technology coupled with App Volumes to accelerate the delivery of user-customized and fully personalized desktops. Dramatically reduce infrastructure requirements while enhancing security by delivering a brand-new personalized desktop and application services to end users every time they log in.
    • Reap the economic benefits of stateless, nonpersistent virtual desktops served up to date upon each login.
    • Deliver a pristine, high-performance personalized desktop every time a user logs in.
    • Improve security by destroying desktops every time a user logs out.
  • App Volumes – Provides real-time application delivery and management.
    • Quickly provision applications at scale.
    • Dynamically attach applications to users, groups, or devices, even when users are already logged in to their desktop.
    • Provision, deliver, update, and retire applications in real time.
    • Provide a user-writable volume, allowing users to install applications that follow them across desktops.

      Note: App Volumes is not currently supported as part of VMware Horizon® Cloud Service™ on Microsoft Azure.

  • Dynamic Environment Manager – Offers personalization and dynamic policy configuration across any virtual, physical, and cloud-based environment.
    • Provide end users with quick access to a Windows workspace and applications, with a personalized and consistent experience across devices and locations.
    • Simplify end-user profile management by providing organizations with a single and scalable solution that leverages the existing infrastructure.
    • Speed up the login process by applying configuration and environment settings in an asynchronous process instead of all at login.
    • Provide a dynamic environment configuration, such as drive or printer mappings, when a user launches an application.
  • vSphere Integration – Horizon 7 Enterprise Edition extends the power of virtualization with virtual compute, virtual storage, and virtual networking and security to drive down costs, enhance the user experience, and deliver greater business agility.
    • Leverage native storage optimizations from vSphere, including SE Sparse, VAAI, and storage acceleration, to lower storage costs while delivering a superior user experience.
    • Horizon 7 Enterprise Edition with VMware vSAN™ for Desktop Advanced automates storage provisioning and leverages direct-attached storage resources to reduce storage costs for desktop workloads. Horizon 7 supports all-flash capabilities to better support more end users at lower costs across distributed locations.

Horizon Cloud Service on Microsoft Azure Platform

Horizon Cloud Service on Microsoft Azure provides customers with the ability to pair their existing Microsoft Azure infrastructure with the Horizon Cloud Service to deliver feature-rich virtual desktops and applications.

Horizon Cloud uses a purpose-built cloud platform that is scalable across multiple deployment options, including fully managed infrastructure from VMware and public cloud infrastructure from Microsoft Azure. The service supports a cloud-scale architecture that makes it easy to deliver virtualized Windows desktops and applications to any device, anytime.


Figure 6: Horizon Cloud Service on Microsoft Azure Overview

Reference Architecture Design Methodology

To ensure a successful Workspace ONE and Horizon deployment, it is important to follow proper design methodology. To start, you need to understand the business requirements, reasons, and objectives for undertaking the project. From there, you can identify the needs of the users and organize these needs into use cases with understood requirements. You can then align and map those use cases to a set of integrated services provided by Workspace ONE and Horizon.


Figure 7: Reference Architecture Design Methodology

A Workspace ONE and Horizon design uses a number of components to provide the services that address the identified use cases. Before you can assemble and integrate these components to form a service, you must first design and build the components in a modular and scalable manner to allow for change and growth. You also must consider integration into the existing environment. Then you can bring the parts together to deliver the integrated services to satisfy the use cases, business requirements, and the user experience.

As with any design process, the steps are cyclical, and any previous decision should be revisited to make sure a subsequent one has not impacted it.

Business Drivers and Use Cases

An end-user-computing (EUC) solution based on VMware Workspace ONE®, VMware Horizon® 7, and VMware Horizon® Cloud Service™ on Microsoft Azure can address a wide-ranging set of business requirements and use cases. In this reference architecture, the solution targets the most common requirements and use cases seen in customer deployments to date.

Addressing Business Requirements

A technology solution should directly address the critical business requirements that justify the time and expense of putting a new set of capabilities in place. Every design choice should center on a specific business requirement. Business requirements can be driven by the end user or by the team deploying EUC services.

The following are sample common key business drivers that can be addressed by the Workspace ONE solution.

Mobile Access

Requirement definition: Provide greater business mobility by providing mobile access to modern and legacy applications on laptops, tablets, and smartphones.

Workspace ONE and Horizon solution: Workspace ONE provides a straightforward, enterprise-secure method of accessing all types of applications that end users need from a wide variety of platforms.

  • It is the first solution that brings together identity, device and application management, a unified application catalog, and mobile productivity.
  • VMware Horizon® Client™ technology supports all mobile and laptop devices as well as common operating systems.
  • VMware Unified Access Gateway™ virtual appliances provide secure external access to internal resources without the need for a VPN.

Fast Provisioning and Access

Requirement definition: Allow fast provisioning of and secure access to line-of-business applications for internal users and third-party suppliers, while reducing physical device management overhead.

Workspace ONE and Horizon solution: Workspace ONE can support a wide range of device access scenarios, simplifying the onboarding of end-user devices.

  • Adaptive management allows a user to download an app from a public app store and access some published applications. If a user needs to access more privileged apps or corporate data, they are prompted to enroll their device from within the app itself rather than through an agent, such as the VMware Workspace ONE® Intelligent Hub app.
  • Horizon 7 Enterprise Edition can provision hundreds of desktops in minutes using Instant Clone Technology. Horizon 7 provides the ability to entitle groups or users to pools of desktops quickly and efficiently. Applications are delivered on a per-user basis using VMware App Volumes™.
  • Horizon Cloud Service on Microsoft Azure delivers feature-rich virtual desktops and applications using a purpose-built cloud platform. This makes it easy to deliver virtualized Windows desktops and applications to any device, anytime. IT can save time getting up and running with an easy deployment process, simplified management, and a flexible subscription model.
  • Unified Access Gateway appliances provide a secure and simple mechanism for external users to access virtual desktops or published applications customized using VMware Dynamic Environment Manager™.

Reduced Application Management Effort

Requirement definition: Reduce application management overhead and reduce application provisioning time.

Workspace ONE and Horizon solution: Workspace ONE provides end users with a single application catalog for native mobile, SaaS, and virtualized applications and improves application management.

  • Workspace ONE provides a consolidated view of all applications hosted across different services with a consistent user experience across all platforms.
  • With Horizon 7 and Horizon Cloud Service on Microsoft Azure, Windows-based applications are delivered centrally, either through virtual desktops or as RDSH-published applications. These can be centrally managed, allowing for access control, fast updates, and version control.
  • VMware Workspace ONE® Intelligence™ gives IT administrators insights into app deployments and app engagement. Analysis of user behavior combined with automation capabilities allow for quick resolution of issues, reduced escalations, and increased employee productivity.
  • App Volumes provides a simple solution to managing and deploying applications. Applications can be deployed “once” to a single central file and accessed by thousands of desktops. This simplifies application maintenance, deployment, and upgrades.
  • VMware ThinApp® provides additional features to isolate or make Windows applications portable across platforms.

Centralized and Secure Data and Devices

Requirement definition: Centralize management and security of corporate data and devices to meet compliance standards.

Workspace ONE and Horizon solution: All components are designed with security as a top priority.

  • VMware Workspace ONE® UEM (powered by AirWatch) provides aggregation of content repositories, including SharePoint, network file shares, and cloud services. Files from these repositories can be synced to the VMware Workspace ONE® Content app for viewing and secure editing.
  • Workspace ONE UEM policies can also be established to prevent distribution of corporate files, control where files can be opened and by which apps, and prevent such functions as copying and pasting into other apps, or printing.
  • Horizon 7 is a virtual desktop solution where user data, applications, and desktop activity do not leave the data center. Additional Horizon 7 and Dynamic Environment Manager policies restrict and control user access to data.
  • VMware NSX® provides network-based services such as security and network virtualization, and can provide network least-privilege trust and VM isolation using micro-segmentation and identity-based firewalling for the Horizon 7 management, RDSH, and desktop environments.
  • Horizon Cloud Service on Microsoft Azure is the platform for delivering virtual desktops or published applications where user data, applications, and desktop activity do not leave the data center. Additional Horizon Cloud and VMware Dynamic Environment Manager policies restrict and control user access to data.
  • Workspace ONE Intelligence detects and remediates security vulnerabilities at scale. Quickly identify out-of-compliance devices and automate access control policies based on user behavior.

Comprehensive and Flexible Platform for Corporate-Owned or BYOD Strategies

Requirement definition: Allow users to access applications, especially the Microsoft Office 365 suite, and corporate data from their own devices.

Workspace ONE and Horizon solution: Workspace ONE can meet the device-management challenges introduced by the flexibility demands of BYOD.

  • Workspace ONE and features like adaptive management simplify end-user enrollment and empower application access in a secure fashion to drive user adoption.
  • With Horizon 7 and Horizon Cloud Service on Microsoft Azure, moving to a virtual desktop and published application solution removes the need to manage client devices, applications, or images. A thin client, zero client, or employee-owned device can be used in conjunction with Horizon Client. IT now has the luxury of managing single images of virtual desktops in the data center.
  • Get insights into device and application usage over time with Workspace ONE Intelligence to enable optimizing resource allocation and license renewals. The built-in automation capabilities can tag devices that have been inactive for specific periods of time or notify users when their devices need to be replaced.

Reduced Support Calls and Improved Time to Resolution

Requirement definition: Simplify and secure access to applications to speed up root-cause analysis and resolution of user issues.

Workspace ONE and Horizon solution: Workspace ONE provides single-sign-on (SSO) capabilities to a wide range of platforms and applications. SSO technology greatly reduces the need for password resets.

  • Workspace ONE Access™ provides a self-service single point of access to all applications and, in conjunction with True SSO, provides a platform for SSO. Users no longer need to remember passwords or request applications through support calls.
  • Both Workspace ONE UEM and Workspace ONE Access include dashboards and analytics to help administrators understand what a profile of application access and device deployment looks like in the enterprise. With greater knowledge of which applications users are accessing, administrators can more quickly identify issues with licensing or potential attempted malicious activities against enterprise applications.
  • Workspace ONE Intelligence ensures that end users get the best mobile application experience by keeping an eye on app performance, app engagement, and user behavior. With detailed insights around devices, networks, operating systems, geolocation, connectivity state, and current app version, LOB owners can optimize their apps for their unique audience and ensure an optimal user experience.
  • Horizon 7 Enterprise Edition includes the Horizon Help Desk Tool, which gives insights into users’ sessions and aids in troubleshooting and maintenance operations.
  • VMware vRealize® Operations Manager™ for Horizon provides a single pane of glass for monitoring and predicting the health of any entire Horizon 7 infrastructure. From display protocol performance to storage and compute utilization, vRealize Operations Manager for Horizon accelerates root-cause analysis of issues that arise.

Multi-site Deployment Business Drivers 

There are many ways and reasons to implement a multi-site solution, especially when deploying components on-premises. The most typical setup and requirement is for a two-data-center strategy. The aim is to provide disaster recovery, with the lowest possible recovery time objective (RTO) and recovery point objective (RPO); that is, to keep the business running with the shortest possible time to recovery and with the minimum amount of disruption.

The overall business driver for disaster recovery is straightforward:

  • Keep the business operating during an extended or catastrophic technology outage.
  • Provide continuity of service.
  • Allow staff to carry out their day-to-day responsibilities.

With services, applications, and data delivered by Workspace ONE and Horizon, that means providing continuity of service and mitigating component failures, all the way up to a complete data center outage.

With respect to business continuity and disaster recovery, this reference architecture addresses the following key business drivers:

  • Cope with differing levels and types of outages and failures.
  • Develop predictable steps to recover functionality in the event of failures.
  • Provide essential services and access to applications and data delivered by Workspace ONE and Horizon during outages.
  • Minimize interruptions during outages.
  • Provide the same or similar user experience during outages.
  • Provide secure mobile access.

The following table describes the strategy used for responding to each of these business drivers. In this table, the terms active/passive and active/active are used.

  • Active/passive recovery mode – Requires that the passive instance of the service be promoted to active status in the event of a service outage.
  • Active/active recovery mode – Means that the service is available from multiple data centers without manual intervention.
Table 2: Meeting Business Requirements with Multi-site Deployments 
Business Driver  Comments 

Provide essential services and access to applications and data delivered by Workspace ONE and Horizon 7 during outages.

Minimize interruptions during outages.

The highest possible service level is delivered, and downtime is minimized, when all intra-site components are deployed in pairs and all services are made highly available. These services must be capable of being delivered from multiple sites, either in an active/active or active/passive manner.

Provide a familiar user experience during outages.

To maintain personalized environments for end users, replicate the parts that a user considers persistent (profile, user configuration, applications, and more). Reconstruct the desktop in the second data center using those parts.

Workspace ONE Access provides a common entry point to all types of applications, regardless of which data center is actively being used.

Cope with differing levels and types of outages and failures.

This reference architecture details a design for multi-site deployments to cope with catastrophic failures all the way up to a site outage. The design ensures that there is no single point of failure within a site.

Develop predictable steps to recover functionality in the event of failures.

The services are constructed from several components and designed in a modular fashion. A proper design methodology, as followed in this reference architecture, allows each component to be designed for availability, redundancy, and predictability.

With an effective design in place, you can systematically plan and document the whole end-user service and the recovery steps or processes for each component of the service.

Provide secure mobile access.

Desktop mobility is a core capability of the Horizon 7 platform. As end users move from device to device and across locations, the solution reconnects them to the virtual desktop instances that they are already logged in to, even when they access the enterprise from a remote location through the firewall. VMware Unified Access Gateway virtual appliances provide secure external access without the need for a VPN.

Use Cases

Use cases drive the design for any end-user computing (EUC) solution and dictate which technologies are deployed to meet user requirements. Use cases can be thought of as common user scenarios. For example, a finance or marketing user might be considered a “normal office worker” use case.

Designing an environment includes building out the functional definitions for the use cases and their requirements. We define typical use cases that are also adaptable to cover most scenarios. We also define services to deliver the requirements of those use cases.

Workspace ONE Use Cases

This reference architecture includes the following common Workspace ONE use cases.

Table 3: Workspace ONE Common Use Cases 
Use Case Description

Mobile Task-Based Worker

Users who typically use a mobile device for a single task through a single application.

  • Mobile device is highly managed and used for only a small number of tasks, such as inventory control, product delivery, or retail applications.
  • Communications tools, such as email, might be restricted to only sending and receiving email with internal parties.
  • Device is typically locked down from accessing unnecessary applications. Access to public app stores is restricted or removed entirely.
  • Device location tracking, full device wipe, and other features are typically used.

Mobile Knowledge Worker

Many roles fit this profile, such as a hospital clinician or an employee in finance, marketing, HR, health benefits, approvals, or travel.

  • These workers use their own personal device (BYOD), a corporate device they personally manage, or a managed corporate device with low restrictions.
  • Users are typically allowed to access email, including personal email, along with public app stores for personal apps.
  • Device is likely subject to information controls over corporate data, such as data loss prevention (DLP) controls, managed email, managed content, and secure browsing.
  • Users need access to SaaS-based applications for HR, finance, health benefits, approvals, and travel, as well as native applications where those applications are available.
  • Device is a great candidate for SSO because managing passwords for many diverse applications becomes an issue for both users and the help desk.
  • Privacy is typically a concern that might prevent device enrollment, so adaptive management and clear communication regarding the data gathered and reported to the Workspace ONE UEM service are important to encourage adoption.

Contractor

Contractors might require access to specific line-of-business applications, typically from a remote or mobile location.

  • Users likely need access to an organization’s systems for performing specific functions and applications, but access might be for a finite time period or to a subset of resources and applications.
  • When the contractor is no longer affiliated with the organization, all access to systems must be terminated immediately and completely, and all corporate information must be removed from the device.
  • Users typically need access to published applications or VDI-based desktops, and might use multiple devices not under company control to do so. Devices include mobile devices as well as browser-based devices.

VMware Horizon Use Cases

This reference architecture includes the following Horizon 7 or Horizon Cloud Service on Microsoft Azure use cases.

Table 4: VMware Horizon Use Cases  
Use Case Description

Static Task Worker

These workers are typically fixed to a specific location with no remote access requirement. Some examples include call center worker, administration worker, and retail user.

A static task worker:

  • Uses a small number of Microsoft Windows applications.
  • Does not install their own applications and does not require SaaS application access.
  • Might require location-aware printing.

Mobile Knowledge Worker

This worker could be a hospital clinician or an employee in finance, marketing, or another corporate role. This is a catch-all corporate use case.

A mobile knowledge worker:

  • Mainly uses applications from a corporate location but might access applications from mobile locations.
  • Uses a large number of core and departmental applications but does not install their own applications. Requires SaaS application access.
  • Requires access to USB devices.
  • Might require location-aware printing.
  • Might require two-factor authentication when accessing applications remotely.

Software Developer / IT (Power User)

Power users require administrator privileges to install applications. The operating system could be either Windows or Linux, with many applications, some of which could require extensive CPU and memory resources.

A power user:

  • Mainly uses applications from a corporate location but might access applications from mobile locations.
  • Uses a large number of core and departmental applications and installs their own applications. Requires SaaS application access.
  • Requires the ability to view video and Flash content.
  • Requires two-factor authentication when accessing applications remotely.

Multimedia Designer / Engineer

These users might require GPU-accelerated applications, intensive CPU or memory workloads, or both. Examples are CAD/CAM designers, architects, video editors and reviewers, graphic artists, and game designers.

A multimedia designer:

  • Has a GPU requirement with API support for DirectX 10+, video playback, and Flash content.
  • Mainly uses applications from a corporate location but might access applications from mobile locations.
  • Might require two-factor authentication when accessing applications remotely.

Contractor

External contractors usually require access to specific line-of-business applications, typically from a remote or mobile location.

A contractor:

  • Mainly uses applications from a corporate location but might access applications from mobile locations.
  • Uses a subset of core and departmental applications based on the project they are working on. Might require SaaS application access.
  • Has restricted access to the clipboard, USB devices, and so on.
  • Requires two-factor authentication when accessing applications remotely.

Recovery Use Case Requirements 

When disaster recovery is being considered, the main emphasis falls on the availability and recoverability requirements of the differing types of users. For each of the previously defined use cases and their requirements, we can define the recovery requirements.

When using the cloud-based versions of services, such as Workspace ONE UEM, Workspace ONE Access, and Workspace ONE Intelligence, availability is delivered as part of the overall service SLA.

With solutions that have components deployed on-premises, you must consider the availability of both the platform delivering the service and the data users expect to use. For VMware Horizon–based services, the availability portion of the solution might have dependencies on applications, personalization, and user data to deliver a full experience to the user in a recovery site. Consider carefully what type of experience will be offered in a recovery scenario and how that matches the business and user requirements.

This reference architecture discusses two common disaster recovery classifications: active/passive and active/active. When choosing between these recovery classifications, which are described in the following table, be sure to view the scenario from the user’s perspective.

Table 5: Disaster Recovery Classifications 
Use Case and Recoverability Objective  Description 

Active/Passive 

RTO = Medium 

RPO = Medium 

  • Users normally work in a single office location.
  • Service consumed is pinned to a single data center.
  • Failover of the service to the second data center ensures business continuity.

Active/Active 

RTO = Low 

RPO = Low 

  • Users require the lowest possible recovery time for the service (for example, health worker).
  • Mobile users might roam from continent to continent.
  • Users must be served from the geographically nearest data center on each continent.
  • Service consumed is available in both primary and secondary data centers without manual intervention.
  • Timely data replication between data centers is extremely important.

With a VMware Horizon–based service, the recovery service should aim to offer an equivalent experience to the user. Usually the service at the secondary site is constructed from the same or similar parts and components as the primary site service. Consideration must be given to data replication and the speed and frequency at which data from the primary site can be replicated to the recovery site. This can influence which type of recovery service is offered, how quickly a recovery service can become available to users, and how complete that recovery service might be.

The RTO (recovery time objective) is defined as the time it takes to recover a given service. The RPO (recovery point objective) is the maximum period in which data might be lost. Low targets are estimated at 30–60 seconds; medium targets at 45–60 minutes. These targets depend on the environment and the components included in the recovery service.
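As an illustration of how these target bands combine, the sketch below classifies a service’s overall RTO. The component names and estimates are hypothetical examples for illustration only, not VMware-defined values.

```python
# Illustrative sketch only: classify an estimated service RTO against
# the low/medium target bands described above. Component names and
# estimates are hypothetical, not VMware-defined values.

LOW_RTO_MAX = 60          # "low" band: roughly 30-60 seconds
MEDIUM_RTO_MAX = 60 * 60  # "medium" band: roughly 45-60 minutes

def classify_rto(rto_seconds: float) -> str:
    """Map an estimated recovery time (in seconds) to a target band."""
    if rto_seconds <= LOW_RTO_MAX:
        return "low"
    if rto_seconds <= MEDIUM_RTO_MAX:
        return "medium"
    return "exceeds target"

# A service is recovered only when all of its components are, so the
# overall service RTO is driven by the slowest component.
components = {
    "IT settings replication": 45,     # ~30-60 seconds
    "desktop pool failover": 50 * 60,  # ~45-60 minutes
}
service_rto = max(components.values())
print(classify_rto(service_rto))  # prints "medium"
```

The point of the `max()` is that a single slow component (here, the hypothetical 50-minute pool failover) determines the recovery class of the whole service, regardless of how fast the other components recover.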

Service Definitions

From our business requirements, we outlined several typical use cases and their requirements. Taking the business requirements and combining them with one or more use cases enables the definition of a service.

The service, for a use case, defines the unique requirements and identifies the technology or feature combinations that satisfy those unique requirements. After the service has been defined, you can define the service quality to be associated with that service. Service quality takes into consideration the performance, availability, security, and management and monitoring requirements to meet SLAs.

The detail required to build out the products and components comes later, after the services are defined and the required components are understood.

Do not treat the list of services as exclusive or prescriptive; each environment is different. Adapt the services to your particular use cases. In some cases, that might mean adding components, while in others it might be possible to remove some that are not required.

You could also combine multiple services to address more complex use cases. For example, you could combine a VMware Workspace ONE® service with a VMware Horizon® 7 or VMware Horizon® Cloud Service™ and a recovery service.

Example of Combining Multiple Services for a Complex Use Case

Figure 8: Example of Combining Multiple Services for a Complex Use Case

Workspace ONE Use Case Services

A use case service identifies the features required for a specific type of user. For example, a mobile task worker might use a mobile device for a single task through a single application. The Workspace ONE use case service for this worker could be called the mobile device management service. This service uses only a few of the core Workspace ONE components, as described in the following table.

Table 6: Core Components of Workspace ONE 
Component Function

VMware Workspace ONE® UEM

Enterprise mobility management

Workspace ONE Access™

Identity platform

VMware Workspace ONE® Intelligence™

Integrated insights, app analytics, and automation

Workspace ONE app

End-user access to apps

VMware Horizon

Virtual desktops and Remote Desktop Services (RDS) published applications delivered either through Horizon Cloud or Horizon 7

VMware Workspace ONE® Boxer

Secure email client

VMware Workspace ONE® Web

Secure web browser

VMware Workspace ONE® Content

Mobile content repository

VMware Workspace ONE® Tunnel

Secure and effective method for individual applications to access corporate resources

VMware AirWatch Cloud Connector

Directory sync with enterprise directories

Workspace ONE Access Connector

Directory sync with enterprise directories and sync of Horizon resources

VMware Unified Access Gateway™

Gateway that provides secure edge services

VMware Workspace ONE® Secure Email Gateway

Email proxy service

Enterprise Mobility Management Service

Overview: Many organizations have deployed mobile devices with lightweight management capabilities, such as simple email deployment and device policies (for example, a PIN requirement, device timeouts, and device wiping). However, they lack a comprehensive management practice to enable a consumer-simple, enterprise-secure model for devices.

Use case: Mobile Task-Based Workers

Table 7: Unique Requirements of Mobile Task Workers  
Unique Requirements Components

Provide device management beyond simple policies

  • Workspace ONE native app
  • Workspace ONE Access authentication
  • AirWatch Cloud Connector

Enable adaptive management capabilities

  • Workspace ONE native app
  • Adaptive management
  • Workspace services device enrollment

Blueprint

The following figure shows a high-level blueprint of a Workspace ONE Standard deployment and the available components.

Enterprise Mobility Management Service Blueprint

Figure 9: Enterprise Mobility Management Service Blueprint

Enterprise Productivity Service

Overview: Organizations with a more evolved device management strategy are often pushed by end users to enable more advanced mobility capabilities in their environment. Requested capabilities include single sign-on (SSO), multi-factor authentication, and access to productivity tools. However, from an enterprise perspective, providing this much access to corporate information means instituting a greater degree of control, such as blocking native email clients in favor of managed email, requiring content to sync with approved repositories, and managing which apps can be used to open files.

Use cases: Mobile Knowledge Workers, Contractors

Table 8: Unique Requirements of Mobile Knowledge Workers and Contractors  
Unique Requirements Components

Multi-factor authentication

VMware Workspace ONE® Verify

SSO

Workspace ONE Access and Workspace ONE UEM

Managed email

Workspace ONE Boxer

Enterprise content synchronization

Workspace ONE Content

Secure browsing

VMware Workspace ONE® Web

Per-application VPN

Workspace ONE Tunnel

Blueprint

The following figure shows a high-level blueprint of a Workspace ONE Advanced deployment and the available components.

Enterprise Productivity Service Blueprint

Figure 10: Enterprise Productivity Service Blueprint

Enterprise Application Workspace Service

Overview: Recognizing that some applications are not available as native apps on mobile platforms and that some security requirements dictate on-premises application access, virtualized applications and desktops become a core part of a mobility strategy. Building on the mobile productivity service and adding access to VMware Horizon–based resources enables this scenario.

Many current VMware Horizon users benefit from adding the Workspace ONE catalog capabilities as a single, secure point of access for their virtual desktops and applications.

Use cases: Contractors, Mobile Knowledge Workers

Table 9: Unique Requirements of Contractors and Mobile Knowledge Workers 
Unique Requirements Components

Access to virtual apps and desktops

  • Horizon Cloud or Horizon 7
  • Workspace ONE Access Connector

Blueprint

The following figure shows a high-level blueprint of a Workspace ONE Enterprise Edition deployment and the available components.

Enterprise Application Workspace Service Blueprint

Figure 11: Enterprise Application Workspace Service Blueprint

Horizon 7 Use Case Services

Horizon 7 use case services address a wide range of user needs. For example, a Published Application service can be created for static task workers, who require only a few Windows applications. In contrast, a GPU-Accelerated Desktop service can be created for multimedia designers who require graphics drivers that use hardware acceleration.

The following components are used across the various use cases.

Table 10: Core Components of Horizon 7 
Component Function

Horizon 7

Virtual desktops and RDSH-published applications

VMware App Volumes™

Application deployment

Dynamic Environment Manager™

User profile, IT settings, and configuration for environment and applications

VMware vRealize® Operations for Horizon®

Management and monitoring

VMware vSphere®

Infrastructure platform

VMware vSAN™

Storage platform

VMware NSX®

Networking and security platform

Horizon 7 Published Application Service

Overview: Windows applications are delivered as published applications provided by farms of RDSH servers. The RDSH servers are created using instant clones to provide space and operational efficiency. Applications are delivered through App Volumes. Individual or conflicting applications are packaged with VMware ThinApp® and are available through the Workspace ONE Access catalog. Dynamic Environment Manager applies profile settings and folder redirection.

Use case: Static Task Worker

Table 11: Unique Requirements of Static Task Workers  
Unique Requirements Components

Small number of Windows applications

  • Horizon 7 RDSH-published applications (a good fit for a small number of applications)
  • App Volumes packages

Requires location-aware printing

  • ThinPrint
  • Dynamic Environment Manager
Table 12: Service Qualities of the Horizon 7 Published Application Service 
Performance Availability Security Management and Monitoring

Basic

Medium

Basic

(no external access)

Basic

Blueprint

Horizon 7 Published Application Service Blueprint

Figure 12: Horizon 7 Published Application Service Blueprint

Horizon 7 GPU-Accelerated Application Service

Overview: Similar to the Horizon 7 Published Application service but has more CPU and memory, and uses hardware-accelerated rendering with NVIDIA GRID graphics cards installed in the vSphere servers (vGPU).

Use case: Occasional Graphic Application Users

Table 13: Unique Requirements of Occasional Graphic Application Users 
Unique Requirements Components

GPU accelerated

NVIDIA vGPU-powered

Small number of Windows applications

  • Horizon 7 RDSH-published applications (a good fit for a small number of applications)
  • App Volumes packages

Hardware H.264 encoding

Blast Extreme

Table 14: Service Qualities of the Horizon 7 GPU-Accelerated Application Service

Performance Availability Security Management and Monitoring

Basic

Medium

Medium

Medium

Blueprint

Horizon 7 GPU-Accelerated Application Service Blueprint

Figure 13: Horizon 7 GPU-Accelerated Application Service Blueprint

Horizon 7 Desktop Service

Overview: The core Windows 10 desktop is an instant clone, which is kept to a plain Windows OS, allowing it to address a wide variety of users.

The majority of applications are delivered through App Volumes, with core and different departmental versions. Individual or conflicting applications are packaged with ThinApp and are available through the Workspace ONE Access catalog.

Dynamic Environment Manager applies profile settings and folder redirection. Although Windows 10 was used in this design, Windows 7 could be substituted.

Use cases: Mobile Knowledge Worker, Contractors

Table 15: Unique Requirements of Mobile Knowledge Workers and Contractors 
Unique Requirements Components

Large number of core and departmental applications

  • Horizon 7 instant-clone virtual desktop (a good fit for larger numbers of applications)
  • App Volumes packages for core applications and departmental applications

Require access from mobile locations

Unified Access Gateway, Blast Extreme

Two-factor authentication when remote

Unified Access Gateway, True SSO

Video content and Flash playback

URL content redirection, Flash redirection

  • Access to USB devices
  • Restricted access to clipboard, USB, and so on (for example, for contractors)

Dynamic Environment Manager, Smart Policies, application blocking

Table 16: Service Qualities of the Horizon 7 Desktop Service 
Performance Availability Security Management and Monitoring

Medium

High

Medium high (contractors)

Medium

Blueprint

Horizon 7 Desktop Service Blueprint

Figure 14: Horizon 7 Desktop Service Blueprint

Horizon 7 Desktop with User-Installed Applications Service

Overview: Similar to the construct of the Horizon 7 Desktop service, with the addition of an App Volumes writable volume. Writable volumes allow users to install their own applications and have them persist across sessions.

Use case: Software Developer / IT (Power User)

Table 17: Unique Requirements of Software Developers and Power Users
Unique Requirements Components

Windows with extensive CPU and memory

Horizon 7 instant-clone virtual desktop

User-installed applications

App Volumes writable volume

Table 18: Service Qualities of the Horizon 7 Desktop with User-Installed Applications Service 
Performance Availability Security Management and Monitoring

Medium

High

High

Medium

Blueprint

Horizon 7 Desktop with User-Installed Applications Service Blueprint

Figure 15: Horizon 7 Desktop with User-Installed Applications Service Blueprint

Horizon 7 GPU-Accelerated Desktop Service

Overview: Similar to the Horizon 7 Desktop Service or the Horizon 7 Desktop with User-Installed Applications service but has more CPU and memory, and can use hardware-accelerated rendering with NVIDIA GRID graphics cards installed in the vSphere servers (vGPU).

Use case: Multimedia Designer

Table 19: Unique Requirements of Multimedia Designers
Unique Requirements Components

GPU accelerated

NVIDIA vGPU-powered

User-installed applications

App Volumes writable volume

Hardware H.264 encoding

Blast Extreme

Table 20: Service Qualities of the Horizon 7 GPU-Accelerated Desktop Service
Performance Availability Security Management and Monitoring

High

High

Medium

High

Blueprint

Horizon 7 GPU-Accelerated Desktop Service Blueprint

Figure 16: Horizon 7 GPU-Accelerated Desktop Service Blueprint

Horizon 7 Linux Desktop Service

Overview: The core desktop is a Linux instant clone. Applications can be preinstalled in the master VM.

Use case: Linux User

Table 21: Unique Requirements of Linux Users
Unique Requirements Components

Linux with extensive CPU and memory

Horizon 7 for Linux instant clone

Table 22: Service Qualities of the Linux Desktop Service
Performance Availability Security Management and Monitoring

Medium

Medium

Medium

Basic

Blueprint

Linux Desktop Service Blueprint

Figure 17: Linux Desktop Service Blueprint

Horizon Cloud Service on Microsoft Azure Use Case Services

These services address a wide range of user needs. For example, a published application service can be created for static task workers, who require only a few Windows applications. In contrast, a secure desktop service could be created for users who need a larger number of applications that are better suited to a Windows desktop–based offering.

The following core components are used across the various use cases.

Table 23: Core Components of VMware Horizon® Cloud Service™ on Microsoft Azure 
Component Function

Horizon Cloud Service on Microsoft Azure

Virtual desktops and RDSH-published applications

VMware Dynamic Environment Manager

User profile, IT settings, and configuration for environment and applications

Microsoft Azure

Infrastructure platform

Horizon Cloud Published Application Service

Overview: Windows applications are delivered as published applications provided by farms of RDSH servers. These applications are optionally available in the catalog and through the Workspace ONE app or web application. Dynamic Environment Manager applies profile settings and folder redirection.

Use case: Static Task Worker

Table 24: Unique Requirements of Static Task Workers 
Unique Requirements Components

Small number of Windows applications

  • Horizon Cloud on Microsoft Azure RDSH-published applications (a good fit for a small number of applications)

(Optional) location-aware printing

  • ThinPrint
  • Dynamic Environment Manager

Blueprint

Horizon Cloud Published Application Service Blueprint

Figure 18: Horizon Cloud Published Application Service Blueprint

Horizon Cloud GPU-Accelerated Application Service

Overview: Similar to the Horizon Cloud Published Application service, but this service uses hardware-accelerated rendering with NVIDIA GRID graphics cards available through Microsoft Azure. The Windows applications are delivered as published applications provided by farms of RDSH servers.  

Use case: Multimedia Designer/Engineer

Table 25: Unique Requirements of Multimedia Designers
Unique Requirements Components

GPU-accelerated rendering

NVIDIA-backed GPU RDSH VM

Hardware H.264 encoding

Blast Extreme

Blueprint

Horizon Cloud GPU-Accelerated Application Service Blueprint

Figure 19: Horizon Cloud GPU-Accelerated Application Service Blueprint

Horizon Cloud Desktop Service

Overview: This service uses a standard Windows 10 desktop that is cloned from a master VM image. Dynamic Environment Manager applies the user’s Windows environment settings, application settings, and folder redirection. Desktop and application entitlements are optionally made available through the Workspace ONE Access catalog.

Use cases: Mobile Knowledge Worker, Contractors

Table 26: Unique Requirements of Mobile Knowledge Workers and Contractors 
Unique Requirements Components

Large number of core and departmental applications

Horizon virtual desktop running Windows 10 (a good fit for larger numbers of applications)

Access from mobile locations

Unified Access Gateway, Blast Extreme

Two-factor authentication when remote

Unified Access Gateway, True SSO

Video content and Flash playback

URL content redirection, HTML5 redirection, Flash redirection

  • Access to USB devices
  • Restricted access to clipboard, USB, and so on (for example, for contractors)

Dynamic Environment Manager, Horizon Smart Policies, application blocking

Blueprint

Horizon Cloud Desktop Service Blueprint

Figure 20: Horizon Cloud Desktop Service Blueprint

Recovery Services 

To ensure availability, recoverability, and business continuity, the design of the services also needs to consider disaster recovery. We can define recovery services and map them to the previously defined use-case services.

Recovery services can be designed to operate in either an active/active or an active/passive mode and should be viewed from the users’ perspective.

  • In active/passive mode, loss of an active data center instance requires that the passive instance of the service be promoted to active status for the user.
  • In active/active mode, the loss of a data center instance does not impact service availability because the remaining instance or instances continue to operate independently and can deliver the service to the user.

In the use cases, a user belongs to a home site and can have an alternative site available to them. Where user pinning is required, an active/passive approach results in a named user having a primary site they always connect to or get redirected to during normal operations.
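The home-site pinning and active/passive failover behavior described above can be sketched as simple site-selection logic. This is a hypothetical illustration, not a VMware API: the site names, users, and health model are invented for the example.

```python
# Illustrative sketch (not a VMware API): active/passive site selection
# with user pinning. Sites, users, and health checks are hypothetical.

SITES = {"site-a": True, "site-b": True}          # site -> is_healthy
HOME_SITE = {"alice": "site-a", "bob": "site-b"}  # user pinning

def select_site(user: str) -> str:
    """Send a pinned user to their home site; fail over to a healthy
    alternative site only when the home site is down (active/passive)."""
    home = HOME_SITE[user]
    if SITES.get(home):
        return home
    for site, healthy in SITES.items():
        if healthy and site != home:
            return site
    raise RuntimeError("no healthy site available")

print(select_site("alice"))  # home site while healthy: "site-a"
SITES["site-a"] = False      # simulate a site-a outage
print(select_site("alice"))  # failover to "site-b"
```

During normal operations the named user always lands on the pinned home site; only an outage of that site triggers redirection, which is what distinguishes this mode from active/active, where any healthy site can serve the user without intervention.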

Also, a number of components are optional for a service, depending on what is required. Blueprints for multi-site Workspace ONE Access, App Volumes, and Dynamic Environment Manager data are detailed after the main active/passive and active/active recovery services.

VMware Workspace ONE UEM Recovery Service (On-Premises)

Workspace ONE UEM can be consumed as a cloud-based service or deployed on-premises. When deployed on-premises, it is important to provide resilience and failover capability both within and between sites to ensure business continuity. Workspace ONE UEM can be architected in an active/passive manner, with a failover process recovering the service in the standby site.

VMware Workspace ONE UEM Recovery Blueprint

Figure 21: VMware Workspace ONE UEM Recovery Blueprint

Workspace ONE Access Recovery Service (On-Premises)

Workspace ONE Access can also be consumed as a cloud-based service or deployed on-premises. When deployed on-premises, it is important to provide resilience and failover capability both within and between sites to ensure business continuity. Workspace ONE Access can be architected in an active/passive manner, with a failover process recovering the service in the standby site.

Workspace ONE Access Recovery Blueprint

Figure 22: Workspace ONE Access Recovery Blueprint

Horizon 7 Active/Passive Recovery Service 

Requirement: The use case service is run from a specific data center but can be failed over to a second data center in the event of an outage.

Overview: The core Windows desktop is an instant clone or linked clone, which is preferably kept to a vanilla Windows OS, allowing it to address a wide variety of users. The core could also be a desktop or session provided from an RDSH farm of linked clones or instant clones.

Although applications can be installed in the master image OS, the preferred method is to have applications delivered through App Volumes, with core and department-specific applications included in various packages. Individual or conflicting applications are packaged with VMware ThinApp and are available through the Workspace ONE Access catalog.

If the use case requires the ability for users to install applications themselves, App Volumes writable volumes can be assigned.

Dynamic Environment Manager applies the profile, IT settings, user configuration, and folder redirection.

The following table details the recovery requirements and the corresponding Horizon 7 component that addresses each requirement.

Table 27: Active/Passive Recovery Service Requirements 
Requirement  Comments 

Windows desktop or RDSH available in both sites 

  • Horizon 7 pools or farms are created in both data centers.
  • Master VM can be replicated to ease creation.
  • Cloud Pod Architecture (CPA) is used for user entitlement and to control consumption.

Native applications 

Applications are installed natively in the base Windows OS. No replication is required because native applications exist in both data center pools.

Attached applications

(optional)

Applications contained in App Volumes packages are replicated.

User-installed applications

(optional)

App Volumes writable volumes.

  • RTO = 60–90 minutes 
  • RPO = 1–2 hours (array dependent) 

IT settings 

Dynamic Environment Manager IT configuration is replicated to another data center.

  • RTO = 30–60 seconds 
  • RPO = Approximately 5 minutes 

User data and configuration 

Dynamic Environment Manager user data is replicated to another data center.

  • RTO = 30–60 seconds 
  • RPO = Approximately 2 hours 

SaaS applications 

Workspace ONE Access is used as a single-sign-on workspace and is present in both locations to ensure continuity of access.

Mobile access 

Unified Access Gateway, Blast Extreme 

At a high level, this service consists of a Windows environment delivered by either an instant- or linked-clone desktop or RDSH server, with identical pools created at both data centers. With this service, applications can be natively installed in the OS, provided by App Volumes packages, or some combination of the two. User profile and user data files are made available at both locations and are also recovered in the event of a site outage.
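The per-component figures in the table above determine the recovery characteristics of the service as a whole. A minimal sketch in Python, using illustrative numbers from Table 27 (component names and the parallel-recovery assumption are simplifications for the example):

```python
# Worst-case recovery estimate for the active/passive service, assuming
# components are recovered in parallel: the service RTO is bounded by the
# slowest component, and the effective RPO by the largest potential data loss.

# Illustrative figures from Table 27, expressed in minutes
components = {
    "writable_volumes": {"rto": 90, "rpo": 120},  # array-replication dependent
    "it_settings":      {"rto": 1,  "rpo": 5},
    "user_data":        {"rto": 1,  "rpo": 120},
}

service_rto = max(c["rto"] for c in components.values())
service_rpo = max(c["rpo"] for c in components.values())

print(f"Service RTO: {service_rto} min, service RPO: {service_rpo} min")
```

In this example, the writable volumes dominate both figures, which is why an operational decision to fail over without them can materially reduce the service RTO.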

Blueprint

Horizon 7 Active/Passive Recovery Service Blueprint

Figure 23: Horizon 7 Active/Passive Recovery Service Blueprint 

Horizon 7 Active/Active Recovery Service 

Requirement: This use case service is available from multiple data centers without manual intervention.

Overview: Windows applications are delivered as natively installed applications in the Windows OS, and there is little to no reliance on the Windows profile in case of a disaster. Dynamic Environment Manager provides company-wide settings during a disaster. Optionally, applications can be delivered through App Volumes packages, with core and department-specific applications included in various packages.

This service generally requires the lowest possible RTO, and the focus is to present users with a desktop from the data center closest to their geographical location. For example, when traveling in Europe, a user gets a desktop from a European data center; when traveling in the Americas, the same user gets a desktop from a data center in the Americas.

The following table details the recovery requirements and the corresponding Horizon 7 component that addresses each requirement.

Table 28: Active/Active Recovery Service Requirements 
Requirements Products, Solutions, and Settings

Lowest possible RTO during a disaster 

No reliance on services that cannot be immediately failed over.

Windows desktop or RDSH server available in both sites 

  • Horizon 7 desktop and application pools are created in both data centers.
  • Master VM can be replicated to ease creation.
  • Cloud Pod Architecture (CPA) is used to ease user entitlement and consumption.

Native applications 

Applications are installed natively in the base Windows OS. No replication is required because native applications exist in both data center pools.

Attached applications

(optional)

Applications contained in App Volumes packages are replicated using App Volumes storage groups.

IT settings 

Dynamic Environment Manager IT configuration is replicated to another data center. The following RTO and RPO targets apply during a data center outage when a recovery process is required:

  • RTO = 30–60 seconds 
  • RPO = 30–60 seconds 

User data and configuration 

(optional)

Dynamic Environment Manager user data is replicated to another data center. The following RTO and RPO targets apply during a data center outage when a recovery process is required:

  • RTO = 30–60 seconds 
  • RPO = Approximately 2 hours 

Mobile access 

Unified Access Gateway, Blast Extreme 

At a high level, this service consists of a Windows environment delivered by a desktop or an RDSH server available at both data centers. With this service, applications can be natively installed in the OS, attached using App Volumes packages, or some combination of the two. If required, the user profile and user data files can be made available at both locations and can also be recovered in the event of a site outage.

Horizon 7 Active/Active Recovery Service Blueprint

Figure 24: Horizon 7 Active/Active Recovery Service Blueprint

App Volumes Active/Passive Recovery Service

Although applications can be installed in the base OS, they can alternatively be delivered by App Volumes packages. A package is used to attach applications to either the Horizon 7 desktop or the RDSH server that provides Horizon 7 published applications.

Applications are attached either to the desktop, at user login, or to the RDSH server as it boots. Because packages are read-only to users and are infrequently changed by IT, packages can be replicated to the second, and subsequent, locations and are available for assignment and mounting in those locations as well.

App Volumes writable volumes are, by contrast, used for content such as user-installed applications, and are written to by the end user. Writable volumes must be replicated and made available at the second site. Because users can update writable volumes frequently, these updates can affect the RPO and RTO achievable for the overall service. Operational decisions can be made as to whether to activate the service in Site 2 with or without the writable volumes to potentially reduce the RTO.
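Before declaring the passive site ready, operators can compare package inventories between sites. A minimal sketch in Python (package names are hypothetical; real inventories would come from the App Volumes Manager in each site):

```python
# Sketch: confirm every App Volumes package in the active site has been
# replicated to the passive site before declaring failover readiness.
# Package names are hypothetical; real inventories would come from the
# App Volumes Manager in each site.

site1_packages = {"Core-Apps-v3", "Finance-Apps-v2", "Dev-Tools-v1"}
site2_packages = {"Core-Apps-v3", "Finance-Apps-v2"}

missing = site1_packages - site2_packages  # packages not yet replicated
failover_ready = not missing

print(f"Failover ready: {failover_ready}; missing: {sorted(missing)}")
```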

App Volumes Active/Passive Recovery Blueprint

Figure 25: App Volumes Active/Passive Recovery Blueprint

App Volumes Active/Active Recovery Service

As can be seen in the active/passive App Volumes blueprint, App Volumes packages can be replicated from one site to another and made available, actively, in both because packages require read-only permissions for the user. The complication comes with writable volumes because these require both read and write permissions for the user. If a service does not include writable volumes, the App Volumes portion of the service can be made active/active.

App Volumes Active/Active Recovery Blueprint

Figure 26: App Volumes Active/Active Recovery Blueprint

Dynamic Environment Manager Profile Data Recovery Service

Dynamic Environment Manager provides profile management by capturing user settings for the operating system, applications, and user personalization. The captured settings are stored on file shares that need to be replicated to ensure site redundancy.

Although profile data can be made available to both data centers, there is a failover process in the event of the loss of Site 1 that can impact the RTO and RPO.

Operational decisions can be made in these scenarios as to whether the service in Site 2 would be made available with reduced functionality (for example, available with the Windows base, the applications, and the IT configuration but without the user-specific settings).

Dynamic Environment Manager Profile Recovery Blueprint

Figure 27: Dynamic Environment Manager Profile Recovery Blueprint

Horizon Cloud on Microsoft Azure Active/Passive Recovery Service 

Requirement: The use case service is run from a specific Azure region. An equivalent service can be provided from a second Azure region.

Overview: The core Windows desktop or RDSH server is a clone of a master VM image. Dynamic Environment Manager applies the profile, IT settings, user configuration, and folder redirection.

Table 29: Active/Passive Recovery Service Requirements 
Requirement  Comments 

Windows desktop or RDSH server available in both sites 

Horizon desktop pools or RDSH server farms are created in both data centers.

Native applications 

Applications are installed natively in the base Windows OS.

IT settings 

Dynamic Environment Manager IT configuration is replicated to ensure availability in the event that the primary Azure region becomes unavailable.

User data and configuration 

Dynamic Environment Manager user data is replicated to ensure availability in the event that the primary Azure region becomes unavailable.

At a high level, this service consists of a Windows environment delivered by either a desktop or an RDSH server, with equivalent resources created at both data centers. User profile and user data files are made available at both locations and are also recovered in the event of a site outage.

Horizon Cloud Active/Passive Recovery Service Blueprint 

Figure 28: Horizon Cloud Active/Passive Recovery Service Blueprint 

Dynamic Environment Manager provides profile management by capturing user settings for the operating system, applications, and user personalization. The captured settings are stored on file shares that need to be replicated to ensure site redundancy.

Although profile data can be made available to both regions, there is a failover process in the event of the loss of Region 1 that can impact the RTO and RPO.

Operational decisions can be made in these scenarios as to whether the service in Region 2 should be made available with reduced functionality (for example, available with the Windows base, the applications, and the IT configuration but without the user-specific settings).

Architectural Overview

A VMware Workspace ONE® design uses several complementary components and provides a variety of highly available services to address the identified use cases. Before we can assemble and integrate these components to form the desired service, we first need to design and build the infrastructure required.

The components in Workspace ONE, such as Workspace ONE Access™, VMware Workspace ONE® UEM (powered by VMware AirWatch®), and VMware Horizon® are available as on-premises and cloud-hosted products.

For this reference architecture, both cloud-hosted and on-premises Workspace ONE UEM and Workspace ONE Access are used separately to prove the functionality of both approaches. These are shown in the cloud-based and on-premises logical architecture designs described in this chapter.

Note that other components, such as VMware Horizon® 7 or VMware Horizon® Cloud Service™ on Microsoft Azure, can be combined with either a cloud-based or an on-premises Workspace ONE deployment.

Workspace ONE Logical Architecture

The Workspace ONE platform is composed of Workspace ONE Access and Workspace ONE UEM. Although each product can operate independently, integrating them is what enables the Workspace ONE product to function.

Workspace ONE Access and Workspace ONE UEM provide tight integration between identity and device management. This integration has been simplified in recent versions to ensure that configuration of each product is relatively straightforward.

Although Workspace ONE Access and Workspace ONE UEM are the core components in a Workspace ONE deployment, you can deploy a variety of other components, depending on your business use cases. For example, and as shown in the figure in the next section, you can use VMware Unified Access Gateway™ to provide the VMware Workspace ONE® Tunnel or VPN-based access to on-premises resources.

For more information about the full range of components that might apply to a deployment, refer to the VMware Workspace ONE UEM documentation.

Cloud-Based Logical Architecture

With a cloud-based architecture, Workspace ONE is consumed as a service requiring little or no infrastructure on-premises.

  • VMware Workspace ONE UEM SaaS tenant – Cloud-hosted instance of the Workspace ONE UEM service. Workspace ONE UEM acts as the mobile device management (MDM), mobile content management (MCM), and mobile application management (MAM) platform.
  • Workspace ONE Access SaaS tenant – Cloud-hosted instance of Workspace ONE Access. Workspace ONE Access acts as an identity provider by syncing with Active Directory to provide single sign-on (SSO) across SAML-based applications, VMware Horizon–based apps and desktops, and VMware ThinApp® packaged apps. It is also responsible for enforcing authentication policy based on networks, applications, or platforms.

Sample Workspace ONE Cloud-Based Logical Architecture

Figure 29: Sample Workspace ONE Cloud-Based Logical Architecture

On-Premises Logical Architecture

With an on-premises deployment of Workspace ONE, both Workspace ONE UEM and Workspace ONE Access are deployed in your data centers.

  • VMware Workspace ONE UEM – On-premises installation of Workspace ONE UEM. Workspace ONE UEM consists of several core components, which can be installed on a single server. Workspace ONE UEM acts as the MDM, MCM, and MAM platform.
  • Workspace ONE Access – Acts as an identity provider by syncing with Active Directory to provide SSO across SAML-based applications, VMware Horizon–based applications and desktops, and VMware ThinApp packaged apps. Workspace ONE Access is also responsible for enforcing authentication policy based on networks, applications, or platforms.

Workspace ONE Sample On-Premises Logical Architecture

Figure 30: Workspace ONE Sample On-Premises Logical Architecture

Common Components

A number of optional components in a Workspace ONE deployment are common to both a cloud-based and an on-premises deployment.

  • AirWatch Cloud Connector (ACC) – Runs in the internal network, acting as a proxy that securely transmits requests from Workspace ONE UEM to the organization’s critical backend enterprise infrastructure components. Organizations can leverage the benefits of Workspace ONE® UEM MDM, running in any configuration, together with those of their existing LDAP, certificate authority, email, and other internal systems.
  • Workspace ONE Access Connector – Performs directory sync and authentication between an on-premises Active Directory and the Workspace ONE Access service.
  • Workspace ONE native mobile app – OS-specific versions of the native app are available for iOS, Android, and Windows 10. The Workspace ONE app presents a unified application catalog across Workspace ONE Access resources and native mobile apps, allows users to easily find and install enterprise apps, and provides an SSO experience across resource types.
  • Secure email gateway – Workspace ONE UEM supports integration with email services, such as Microsoft Exchange, GroupWise, IBM Notes (formerly Lotus Notes), and G Suite (formerly Google Apps for Work). You have three options for integrating email:
    • VMware Secure Email Gateway – Requires a server to be configured in the data center.
    • PowerShell integration – Communicates directly with Exchange ActiveSync on Exchange 2010 or later, or with Microsoft Office 365.
    • G Suite integration – Integrates directly with the Google Cloud services and does not need additional servers.
  • Content integration – The Workspace ONE UEM MCM solution helps organizations address the challenge of securely deploying content to a wide variety of devices using a few key actions. An administrator can leverage the Workspace ONE UEM Console to create, sync, or enable a file repository. After configuration, this content deploys to end-user devices with VMware Workspace ONE® Content. Access to content can be either read-only or read-write.
  • VMware Unified Access Gateway – Virtual appliance that provides secure edge services and allows external access to internal resources. Unified Access Gateway provides:
    • Workspace ONE UEM Per-App Tunnels and the Tunnel Proxy to allow mobile applications secure access to internal services
    • Access from Workspace ONE Content to internal file shares or SharePoint repositories by running the Content Gateway service
    • Reverse proxying of web servers
    • SSO access to on-premises legacy web applications by identity bridging from SAML or certificates to Kerberos
    • Secure external access to Horizon 7 desktops and applications

Horizon Virtual Desktops and Published Applications

Both Horizon 7 and Horizon Cloud Service can be combined and integrated into a Workspace ONE deployment, regardless of whether you use a cloud-based or on-premises deployment.

  • Horizon 7 – Manages and delivers virtualized or hosted desktops and applications to end users.
    • Connection Servers – Broker instances that securely connect users to desktops and published applications running on VMware vSphere® VMs, physical PCs, blade PCs, or RDSH servers. Connection Servers authenticate users through Windows Active Directory and direct the request to the appropriate and entitled resource.
    • Horizon Administrative Console – An administrative console that allows configuration, deployment, management, and entitlement of users to resources.
  • Horizon Cloud Service – A multi-tenant, cloud-scale architecture that enables you to choose where virtual desktops and apps reside: VMware-managed cloud, BYO cloud, or both.
    • Horizon Cloud Control Plane – A control plane that VMware hosts in the cloud for central orchestration and management of VDI desktops, RDSH-published desktops, and RDSH-published applications. Because VMware hosts the service, feature updates and enhancements are consistently provided for a software-as-a-service experience.
    • Horizon Cloud Administration Console – The cloud control plane also hosts a common management user interface, which runs in industry-standard browsers. This console provides IT administrators with a single location for management tasks involving user assignments to and management of VDI desktops, RDSH-published desktops, and RDSH-published applications.
    • Horizon Cloud pod – VMware software deployed to a supported capacity environment, such as Microsoft Azure cloud. Along with access to the Horizon Cloud Administration Console, the service includes the software necessary to pair the deployed pod with the cloud control plane and deliver virtual desktops and applications.

General Multi-site Best Practices 

There are numerous ways to implement a disaster recovery architecture, but some items can be considered general best practices.

Components That Must Always Run with a Primary Instance

Even with an active/active usage model across two data centers, meaning that the service is available from both data centers without manual intervention, one of the data centers holds certain roles that are not multi-master. The following components must run with a primary instance in a given site:

  • On-premises Workspace ONE UEM
  • On-premises Workspace ONE Access
  • User profile and data shares containing VMware Dynamic Environment Manager™ (formerly called User Environment Manager) user data
  • Active Directory flexible single master operations (FSMO) roles, specifically the Primary Domain Controller (PDC) Emulator, which is required to make changes to domain-based DFS namespaces
  • Microsoft SQL Server Always On availability groups (if used)

Be sure to secure those resources that are not multi-master by nature or that cannot be failed over automatically. Procedures must be put in place to define the steps required to recover these resources.

For this reference architecture design, we chose to place the primary availability group member, as well as all AD FSMO roles (held by a domain controller), in Site 1. We made this choice because we had a good understanding of the failover steps required if either Site 1 or Site 2 failed.

Component Replication and Traveling Users

Use Workspace ONE and Horizon components to create effective replication strategies and address the needs of users who travel between sites:

  • Create a disaster plan up front that defines what a disaster means in your organization. The plan should specify whether you require a 1:1 mapping in terms of resources, or what portion of the workforce is required to keep the organization operational.
  • Understand what user data will need to be replicated between sites to allow users to be productive. The quantity, speed, and frequency of replication will affect the time it takes to present a complete service to a user from another site.
  • Replicate Horizon desktop and server (RDSH) master image templates between sites to avoid having to build the same templates on both sites. You can use a vSphere content library or perform a manual replication of the resources needed across the whole implementation.
  • With Horizon 7, use Cloud Pod Architecture and avoid using a metro-cluster with VMware vSAN™ stretched cluster unless you have a persistent desktop model in the organization that cannot easily be transformed into a nonpersistent-desktop use case.
  • With regard to initial user placement, even with a traveling worker use case, a given user must be related to user profile data (Dynamic Environment Manager user data), meaning that a relationship must be established between a user account and a data center. This also holds true when planning how users in the same part of the organization (such as sales) should be split between sites to avoid an entire function of the company being unable to work should a disaster strike.
  • For a traveling worker use case, where Dynamic Environment Manager is used to control the user profile data, VMware recommends that FlexEngine be used whenever possible in combination with folder redirection. This keeps the core profile to a minimum size and optimizes login times in the case where a profile is loaded across the link between the two data centers.
  • Use Microsoft SQL Server failover cluster instances and Always On availability groups for on-premises Workspace ONE UEM and Workspace ONE Access where possible. This is not required for VMware vCenter Server®, the Connection Server event database, and VMware vSphere® Update Manager™.
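The placement guidance above (relating each user account to a home data center while splitting each function across sites) can be sketched as follows; user, department, and site names are hypothetical:

```python
from itertools import cycle

# Sketch: assign each user a home site, alternating within a department so
# that losing one data center never idles an entire business function.
# User, department, and site names are hypothetical.

SITES = ["Site1", "Site2"]

def assign_home_sites(users_by_department):
    """Round-robin each department's users across the available sites."""
    placement = {}
    for department in sorted(users_by_department):
        site_cycle = cycle(SITES)
        for user in sorted(users_by_department[department]):
            placement[user] = next(site_cycle)
    return placement

placement = assign_home_sites({
    "sales": ["amy", "bob", "carol", "dan"],
    "finance": ["erin", "frank"],
})
print(placement)
```

Sorting before assignment keeps the mapping deterministic, so the same user always resolves to the same home site, which is the relationship the Dynamic Environment Manager user data placement depends on.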

Component Design: Workspace ONE UEM Architecture

VMware Workspace ONE® UEM (powered by AirWatch) is responsible for device enrollment, a mobile application catalog, policy enforcement regarding device compliance, and integration with key enterprise services, such as email, content, and social media.

Workspace ONE Unified Endpoint Management (UEM) features include:

  • Device management platform – Allows full life-cycle management of a wide variety of devices, including phones, tablets, Windows 10, and rugged and special-purpose devices.
  • Application deployment capabilities – Provides automatic deployment or self-service application access for employees.
  • User and device profile services – Ensures that configuration settings for users and devices:
    • Comply with enterprise security requirements
    • Simplify end-user access to applications
  • Productivity tools – Includes an email client with secure email functionality, a content management tool for securely storing and managing content, and a web browser to ensure secure access to corporate information and tools.

Workspace ONE UEM can be implemented using an on-premises or a cloud-based (SaaS) model. Both models offer the same functionality.

To avoid repetition, an overview of the product, its architecture, and the common components is given in the cloud-based architecture section, which follows. The on-premises architecture section then builds on this information for deployments hosted on-premises.

Table 30: Strategy of Using Both Deployment Models

Decision

Both a cloud-based and an on-premises Workspace ONE UEM deployment were carried out separately.

Deployments were sized for 50,000 devices, which allows for additional growth over time without a redesign.

Justification

This strategy allows both architectures to be validated and documented independently.

Cloud-based Architecture

With a cloud-based implementation, the Workspace ONE UEM software is delivered as a service (SaaS). To synchronize Workspace ONE UEM with internal resources, such as Active Directory or a certificate authority, you use the AirWatch Cloud Connector. The connector runs within the internal network in an outbound-only connection mode, meaning the connector receives no incoming connections from the DMZ.

The simple implementation usually consists of:

  • A Workspace ONE UEM tenant
  • VMware AirWatch Cloud Connector

Cloud-Based Workspace ONE UEM Logical Architecture

Figure 31: Cloud-Based Workspace ONE UEM Logical Architecture

The main components of Workspace ONE UEM are described in the following table.

Table 31: Workspace ONE UEM Components 
Component Description
Workspace ONE UEM Console

Administration console used to configure policies within Workspace ONE UEM and to monitor and manage devices and the environment.

This service is hosted in the cloud and is managed for you as a part of the SaaS offering.

Workspace ONE UEM Device Services

Services that communicate with managed devices. Workspace ONE UEM relies on this component for:

  • Device enrollment
  • Application provisioning
  • Delivering device commands and receiving device data
  • Hosting the Workspace ONE UEM self-service catalog

This service is hosted in the cloud and is managed for you as a part of the SaaS offering.

API endpoint

Collection of RESTful APIs, provided by Workspace ONE UEM, that allows external programs to use the core product functionality by integrating the APIs with existing IT infrastructures and third-party applications.

Workspace ONE APIs are also used by various Workspace ONE UEM services, such as Secure Email Gateway for interactions and data gathering.

This service is hosted in the cloud and is managed for you as a part of the SaaS offering.

AirWatch Cloud Connector

Component that performs directory sync and authentication using an on-premises resource such as Active Directory or a trusted Certificate Authority.

This service is hosted in your internal network in outbound-only mode and can be configured for automatic updates.

AirWatch Cloud Messaging service (AWCM)

Service used in conjunction with the AirWatch Cloud Connector to provide secure communication to your backend systems. AirWatch Cloud Connector also uses AWCM to communicate with the Workspace ONE UEM Console.

AWCM also streamlines the delivery of messages and commands from the Workspace ONE UEM Console by eliminating the need for end users to access the public Internet or utilize consumer accounts, such as Google IDs.

It serves as a comprehensive substitute for Google Cloud Messaging (GCM) for Android devices and is the only option for providing mobile device management (MDM) capabilities for Windows rugged devices. Also, Windows desktop devices that use the VMware Workspace ONE® Intelligent Hub use AWCM for real-time notifications.

This service is hosted in the cloud and is managed for you as a part of the SaaS offering.

VMware Tunnel

The VMware Tunnel™ provides a secure and effective method for individual applications to access corporate resources hosted in the internal network. The VMware Tunnel uses a unique X.509 certificate (delivered to enrolled devices by Workspace ONE) to authenticate and encrypt traffic from applications to the tunnel.

VMware Tunnel has two components – Proxy and Per-App VPN. The Proxy component is responsible for securing traffic from endpoint devices to internal resources through the VMware Workspace ONE® Web app and through enterprise apps that leverage the Workspace ONE SDK. The Per-App Tunnel component enables application-level tunneling (as opposed to full device-level tunneling) for managed applications on iOS, macOS, Android, and Windows devices.
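The REST API endpoint listed in the table above is consumed by external programs over HTTPS. A minimal sketch in Python that builds (but does not send) a device-search request; the host, credentials, and tenant code are placeholders, and the exact path should be confirmed against the UEM API reference:

```python
import base64
import urllib.request

# Sketch: build (but do not send) a Workspace ONE UEM REST API request.
# The host, credentials, and tenant code below are placeholders; the
# device-search path and aw-tenant-code header follow UEM API conventions.

host = "https://uem.example.com"  # placeholder tenant URL
credentials = base64.b64encode(b"apiadmin:password").decode("ascii")

request = urllib.request.Request(
    url=f"{host}/API/mdm/devices/search?pagesize=50",
    headers={
        "Authorization": f"Basic {credentials}",
        "aw-tenant-code": "EXAMPLE-TENANT-KEY",  # placeholder API key
        "Accept": "application/json",
    },
)

# The request object is now ready to pass to urllib.request.urlopen()
print(request.full_url)
```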

Table 32: Implementation Strategy for Cloud-Based Workspace ONE UEM 

Decision

A cloud-based deployment of Workspace ONE UEM and the components required were architected for 50,000 devices, which allows for additional growth over time without a redesign.

Justification

This strategy provides validation of design and implementation of a cloud-based instance of Workspace ONE UEM.


AirWatch Cloud Connector

Even when utilizing cloud solutions, such as Workspace ONE UEM, you might want to use some in-house components and resources, for example, email relay, directory services (LDAP/AD), Certificate Authority, and PowerShell Integration with Exchange. These resources are usually secured by strict firewall rules in order to avoid any unintended or malicious access. Even though these components are not exposed to public networks, they offer great benefits when integrated with cloud solutions such as Workspace ONE.

The AirWatch Cloud Connector allows seamless integration of on-premises resources with the Workspace ONE UEM deployment, whether cloud-based or on-premises. This allows organizations to leverage the benefits of Workspace ONE UEM, running in any configuration, together with those of their existing LDAP, Certificate Authority, email relay, PowerShell Integration with Exchange, and other internal systems.

The AirWatch Cloud Connector (ACC) runs in the internal network, acting as a proxy that securely transmits requests from Workspace ONE UEM to the organization’s enterprise infrastructure components. The ACC always works in an outbound-only mode, which protects it from targeted inbound attacks and allows it to work with existing firewall rules and configurations.

Workspace ONE UEM and the ACC communicate by means of AirWatch Cloud Messaging (AWCM). This communication is secured through certificate-based authentication, with the certificates generated from a trusted Workspace ONE UEM Certificate Authority.

The ACC integrates with the following internal components:

  • Email relay (SMTP)
  • Directory services (LDAP/AD)
  • Exchange 2010 (PowerShell)
  • Syslog (event log data)

The ACC also allows the following PKI integration add-ons:

  • Microsoft Certificate Services (PKI)
  • Simple Certificate Enrollment Protocol (SCEP PKI)
  • Third-party certificate services (on-premises only)
    • OpenTrust CMS Mobile
    • Entrust PKI
    • Symantec MPKI

There is no need to go through AirWatch Cloud Connector for cloud certificate services. You use the ACC only when the PKI is on-premises, not in the cloud (SaaS).

Table 33: Deployment Strategy for the AirWatch Cloud Connector

Decision

The AirWatch Cloud Connector was deployed.

Justification

The ACC provides integration of Workspace ONE UEM with Active Directory.

Scalability

You can configure multiple instances of ACC by installing them on additional dedicated servers using the same installer. The traffic is automatically load-balanced by the AWCM component and does not require a separate load balancer.

Multiple ACC instances can receive traffic (that is, run in an active-active configuration) as long as the instances are in the same organization group and connect to the same AWCM server for high availability. Traffic is routed by AWCM using a least recently used (LRU) algorithm, which examines all available connections to decide which ACC node should handle the next request.
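The routing behavior described above can be sketched as follows. This is a simplified, illustrative model of least-recently-used selection, not the actual AWCM implementation; node names are hypothetical.

```python
import time

class LruRouter:
    """Toy model of LRU routing across connected ACC nodes."""

    def __init__(self, nodes):
        # Track when each node last handled a request;
        # never-used nodes (timestamp 0.0) always sort first.
        self.last_used = {node: 0.0 for node in nodes}

    def pick_node(self):
        # Choose the node whose connection was used least recently,
        # then mark it as just used.
        node = min(self.last_used, key=self.last_used.get)
        self.last_used[node] = time.time()
        return node

# Hypothetical ACC node names for illustration.
router = LruRouter(["acc-01", "acc-02", "acc-03"])
first, second = router.pick_node(), router.pick_node()
# Consecutive requests are spread across different nodes.
```

Because every available connection is examined on each request, adding another ACC instance automatically brings it into the rotation with no load balancer changes.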

For recommendations on the number of ACC instances required, and for hardware requirements, see On-Premises Architecture Hardware Assumptions. Note that the documentation shows only the number of connectors required for each sizing scenario to cope with the load demand. It does not include additional servers in those numbers to account for redundancy.

Table 34: Strategy for Scaling the ACC Deployment

Decision

Three instances of AirWatch Cloud Connector were deployed in the internal network.

These instances were installed on Windows Server 2016 VMs.

Justification

Two ACC instances are required based on load, and a third is added for redundancy.

AirWatch Cloud Connector Installation

Refer to the latest VMware Workspace ONE UEM documentation for full details on the AirWatch Cloud Connector installation process.

On-Premises Architecture

Workspace ONE UEM is composed of separate services that can be installed on a single- or multiple-server architecture to meet security and load requirements. Service endpoints can be spread across different security zones, with those that require external, inbound access located in a DMZ and the administrative console located in a protected, internal network, as shown in the following figure.

Syncing with internal resources such as Active Directory or a Certificate Authority can be achieved directly from the core components (Device Services and Admin Console) or using an AirWatch Cloud Connector. The separate connector can run within the LAN in outbound-only connection mode, meaning the connector receives no incoming connections from the DMZ.

The implementation is separated into three main components:

  • Workspace ONE UEM Admin Console
  • Workspace ONE UEM Device Services
  • AirWatch Cloud Connector

The AirWatch Cloud Messaging Service can be installed as part of the Workspace ONE UEM Device Services server, and the API Endpoint is installed as part of the Admin Console server. Depending on the scale of the environment, these can also be deployed on separate servers.

In addition to the components already described for this cloud-based architecture, there are additional components required for an on-premises deployment.

Table 35: Additional On-Premises Workspace ONE UEM Components 
Component Description

Database

Microsoft SQL Server database that stores Workspace ONE UEM device and environment data.

All relevant application configuration data, such as profiles and compliance policies, persist and reside in this database. Consequently, the majority of the application’s backend workload is processed here.

Memcached Server

A distributed data caching application that reduces the workload on the Workspace ONE UEM database. This server is intended for deployments of more than 5,000 devices.

On-Premises Simple Workspace ONE UEM Architecture

Figure 32: On-Premises Simple Workspace ONE UEM Architecture

Table 36: Implementation Strategy for an On-Premises Deployment of Workspace ONE UEM

Decision

An on-premises deployment of Workspace ONE UEM and its required components was architected, scaled, and deployed to support 50,000 devices and to allow additional growth over time without a redesign.

Justification

This provides validation of design and implementation of an on-premises instance of Workspace ONE UEM.

Database

All critical data and configurations for Workspace ONE UEM are stored in the database. This is the data tier of the solution. Workspace ONE UEM databases are based on the Microsoft SQL Server platform. Application servers receive requests from the console and device users and then process the data and results. No persistent data is maintained on the application servers (device and console services), but user and device sessions are maintained for a short time.

In this reference architecture, Microsoft SQL Server 2016 was used with its Always On availability groups feature, which is supported with Workspace ONE UEM. This allows the deployment of multiple instances of each of the Workspace ONE UEM components, all pointing to the same database and protected by an availability group. An availability group listener is the connection target for all instances.

Windows Server Failover Clustering (WSFC) can also be used to improve local database availability and redundancy. In a WSFC cluster, two Windows servers are clustered together to run one instance of SQL Server, which is called a SQL Server failover cluster instance (FCI). Failover of the SQL Server services between these two Windows servers is automatic.

Workspace ONE UEM runs on an external SQL database. Prior to running the Workspace ONE UEM database installer, you must have your database administrator prepare an empty external database and schema. Licensed users can use a Microsoft SQL Server 2012, SQL Server 2014, or SQL Server 2016 database server to set up a high-availability database environment.
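Because the availability group listener is the single connection target, application servers connect to the listener FQDN rather than to an individual SQL node. The following sketch builds an ODBC-style connection string for such a setup; the listener and database names are illustrative assumptions, not values from this design.

```python
def build_connection_string(listener_fqdn, database, port=1433):
    """Build an ODBC connection string targeting an Always On AG listener.

    MultiSubnetFailover speeds up client reconnection when the
    availability group fails over between subnets (for example, sites).
    """
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server=tcp:{listener_fqdn},{port};"
        f"Database={database};"
        "Trusted_Connection=yes;"
        "MultiSubnetFailover=yes;"
    )

# Hypothetical listener and database names for illustration.
conn_str = build_connection_string("uem-ag-listener.corp.local", "AirWatch")
```

After a failover, the listener name resolves to the new primary replica, so the application servers reconnect without any configuration change.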

For guidance on hardware sizing for Microsoft SQL Servers, see On-Premises Recommended Architecture Hardware Sizing.

Table 37: Implementation Strategy for the On-Premises Workspace ONE UEM Database

Decision

An external Microsoft SQL database was implemented for this design.

Justification

An external SQL database is recommended for production and allows for scale and redundancy.

Memcached

Memcached is a distributed data-caching application available for use with Workspace ONE UEM environments. It reduces the workload on the database. Memcached replaces the previous caching solution, AW Cache, and is recommended for deployments of more than 5,000 devices.

Once enabled in the Workspace ONE UEM Console, Memcached begins storing system settings and organization group tree information as they are accessed by Workspace ONE UEM components. When a request for data is sent, Workspace ONE UEM automatically checks for results stored in memory by Memcached before checking the database, thereby reducing the database workload. If the data is not found in the cache, it is retrieved from the database and stored in Memcached for future queries. As new values are added and existing values are changed, they are written to both Memcached and the database.

Note: All key/value pairs in Memcached expire after 24 hours.
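This read and write flow is the classic cache-aside pattern. The following minimal sketch models it with plain dictionaries standing in for Memcached and the SQL database; the key and data are hypothetical.

```python
import time

CACHE_TTL = 24 * 3600  # key/value pairs expire after 24 hours

cache = {}    # stands in for Memcached: key -> (value, expiry_time)
database = {"og:tree": ["Global", "Corp", "BYOD"]}  # stands in for SQL

def read(key):
    """Cache-aside read: check the cache first, fall back to the database."""
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                       # cache hit
    value = database[key]                     # cache miss: query the database
    cache[key] = (value, time.time() + CACHE_TTL)  # cache for future queries
    return value

def write(key, value):
    """Changed values are written to both the database and the cache."""
    database[key] = value
    cache[key] = (value, time.time() + CACHE_TTL)
```

The first `read` of a key hits the database and populates the cache; subsequent reads within the 24-hour TTL are served from memory.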

You can deploy multiple Memcached servers, with each caching a portion of the data, to limit the impact of a single server failure on the service. With two servers, 50 percent of the data resides on server 1 and 50 percent on server 2, with no replication across servers. A hash table tells the services which server stores which data.

If server 1 experiences an outage for any reason, only 50 percent of the cache is impacted. As services fall back to the database for the missing entries, those items are re-cached and the tables are rebuilt.
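Conceptually, the hash table maps each key to exactly one server, so every service instance routes a given key the same way. A minimal sketch of that key-to-server mapping, with hypothetical server names (this models the idea, not the actual client library):

```python
import hashlib

SERVERS = ["memcached-01", "memcached-02"]  # hypothetical server names

def server_for_key(key):
    """Map a cache key to one Memcached server.

    A stable hash ensures every service instance routes a given key to
    the same server; with two servers, roughly half the keys land on each,
    with no replication across servers.
    """
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]
```

If one server fails, only the keys that hash to it are lost; requests for those keys fall through to the database until the cache is repopulated.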

For guidance on hardware sizing for Memcached servers, see On-Premises Recommended Architecture Hardware Sizing.

Table 38: Implementation Strategy for Memcached Servers

Decision

Two Memcached servers were deployed in the internal network.

Justification

Memcached servers are recommended for environments with more than 5,000 devices. Memcached servers reduce the load on the SQL database.

Load Balancing

To remove a single point of failure, you can deploy more than one instance of the different Workspace ONE UEM components behind an external load balancer. This strategy not only provides redundancy but also allows the load and processing to be spread across multiple instances of the component. To ensure that the load balancer itself does not become a point of failure, most load balancers allow for setup of multiple nodes in a high-availability (HA) or active/passive configuration.

The AirWatch Cloud Connector traffic is load-balanced by the AirWatch Cloud Messaging component. It does not require a separate load balancer. Multiple AirWatch Cloud Connectors in the same organization group that connect to the same cloud messaging server for high availability can all expect to receive traffic (an active-active configuration). How traffic is routed is determined by the component and depends on the current load.

For more information on load balancing recommendations and HA support for the different Workspace ONE UEM components, see On-Premises Architecture Load Balancer Considerations and High Availability Support for Workspace ONE UEM Components.

Scalability and Availability

Workspace ONE UEM core components can be deployed in a single, shared-server design, but this is recommended only for proof-of-concept engagements. For production use, to satisfy load demands and meet most network architecture designs, the core application components are usually installed on two separate, dedicated servers (Admin Console and Device Services).

For a high-availability environment and to meet load demands of large deployments, multiple instances of each one of these components can be deployed on dedicated servers behind a load balancer.

Table 39: Implementation Strategy for the Workspace ONE UEM Device Services

Decision

Four instances of the Workspace ONE UEM Device Services servers were deployed in the DMZ.

Justification

Three servers are required to handle the load of 50,000 devices. A fourth server is added for redundancy.

These servers will include the following components: Workspace ONE UEM Device Services.

Table 40: Implementation Strategy for Workspace ONE UEM Console Servers

Decision

Three instances of the Workspace ONE UEM Console servers were installed in the internal network.

Justification

Two servers are required to handle the load of 50,000 devices. A third server is added for redundancy.

These servers include the following component: Workspace ONE UEM Admin Console.

In larger environments, which generally include 50,000 devices or more, the API and AWCM services should also be located on separate, dedicated servers to remove their load from the Device Services and Admin Console servers.

Table 41: Implementation Strategy for AWCM Servers

Decision

Two instances of the AWCM servers were deployed in the internal network.

Justification

To support deployments of 50,000 devices or more, VMware recommends that you separate the AWCM function from the Device Services function.

Table 42: Implementation Strategy for API Servers

Decision

Two instances of the API servers were deployed in the internal network.

Justification

To support deployments of 50,000 devices or more, VMware recommends using separate servers for the API server and Device Services functions.

Multiple instances of the AirWatch Cloud Connector (ACC) can be deployed in the internal network for a high-availability environment. The load for this service is balanced without the need for an external load balancer.

Table 43: Implementation Strategy for the ACC

Decision

Three instances of the AirWatch Cloud Connector were deployed.

Justification

Two ACC instances are required based on load, and a third is added for redundancy.

Workspace ONE UEM can be scaled horizontally to meet demands regardless of the number of devices. For server numbers, hardware sizing, and recommended architectures for deployments of varying sizes, see On-Premises Recommended Architecture Hardware Sizing. Note that the guide shows only the number of application server components required for each sizing scenario to cope with the load demand. It does not include additional servers in those numbers to account for redundancy.

Due to the amount of data flowing in and out of the Workspace ONE UEM database, proper sizing of the database server is crucial to a successful deployment. For guidance on sizing the database server resources, CPU, RAM, and disk IO requirements, see On-Premises Recommended Architecture Hardware Sizing.

This reference architecture is designed to accommodate up to 50,000 devices, allowing additional growth over time without a redesign. Multiple nodes of each of the components (Device Services, Admin Consoles, API servers, AWCM servers, AirWatch Cloud Connectors) are recommended to meet the demand. To guarantee the resilience of each service within a single site, additional application servers are added. For example, four Device Services nodes are used instead of the three that would be required to meet only the load demand.
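The N+1 sizing approach above can be expressed as a simple calculation. The per-node capacity used here is an assumed figure chosen so the arithmetic matches this design; for real values, use the sizing guidance in the product documentation.

```python
from math import ceil

def nodes_required(devices, devices_per_node, redundancy=1):
    """N+1 sizing: enough nodes to carry the load, plus spares.

    devices_per_node is an assumed per-node capacity for illustration only.
    """
    return ceil(devices / devices_per_node) + redundancy

# 50,000 devices at an assumed 20,000 devices per Device Services node:
# ceil(50,000 / 20,000) = 3 nodes for load, + 1 spare = 4 nodes,
# matching the four Device Services nodes in this design.
total = nodes_required(50_000, 20_000)  # 4
```

The same formula applies to the Console, API, and AWCM tiers, each with its own per-node capacity figure.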

On-Premises Single-Site Scaled Workspace ONE UEM Components

Figure 33: On-Premises Single-Site Scaled Workspace ONE UEM Components

This figure shows a scaled environment suitable for up to 50,000 devices. It will also allow additional growth over time without a redesign because it uses dedicated API servers and AWCM servers.

  • Workspace ONE UEM Device Services servers are located in the DMZ, and a load balancer distributes the load.
  • Workspace ONE UEM Admin Console Services, Memcached, AWCM servers, and API servers are hosted in the internal network with a load balancer in front of them.
  • AirWatch Cloud Connector servers are hosted in the internal network and can use an outbound-only connection without the need for an external load balancer.

For this reference architecture, split DNS was used; that is, the same fully qualified domain name (FQDN) was used both internally and externally for user access to the Workspace ONE UEM Device Services server. Split DNS is not a strict requirement for a Workspace ONE UEM on-premises deployment, but it does improve the user experience.

Multi-site Design

Workspace ONE UEM servers are the primary endpoint for management and provisioning of end-user devices. These servers should be deployed to be highly available within a site and also deployed in a secondary data center for failover and redundancy. A robust backup policy for application servers and database servers can minimize the steps required to restore a Workspace ONE UEM environment in another location.

You can configure disaster recovery (DR) for your Workspace ONE UEM solution using whatever procedures and methods meet your DR policies. Workspace ONE UEM has no dependency on your DR configuration, but we strongly recommend that you develop some type of failover procedures for DR scenarios. Workspace ONE UEM components can be deployed to accommodate most of the typical disaster recovery scenarios.

Workspace ONE UEM consists of the following core components, which need to be designed for redundancy:

  • Workspace ONE UEM Device Services
  • Workspace ONE UEM Admin Console
  • Workspace ONE UEM AWCM server
  • Workspace ONE UEM API server
  • AirWatch Cloud Connector
  • Memcached server
  • SQL database server

Table 44: Site Resilience Strategy for Workspace ONE UEM

Decision

A second site was set up with Workspace ONE UEM.

Justification

This strategy provides disaster recovery and site resilience for the on-premises implementation of Workspace ONE UEM.

Workspace ONE UEM Application Servers and AirWatch Cloud Connectors

To provide site resilience, each site requires its own group of Workspace ONE UEM application and connector servers to allow the site to operate independently, without reliance on another site. One site runs as an active deployment, while the other has a passive deployment.

Within each site, sufficient application servers must be installed to provide local redundancy and withstand the load on its own. The Device Services servers are hosted in the DMZ, while the Admin Console server resides in the internal network. Each site has a local load balancer that distributes the load between the local Device Services servers, and a failure of an individual server is handled with no outage to the service or requirement to fail over to the backup site.

A global load balancer is used in front of each site’s load balancer.

At each site, AirWatch Cloud Connector servers are hosted in the internal network and can use an outbound-only connection.

For recommendations on server quantities and hardware sizing of Device Services and Admin Console servers, see On-Premises Recommended Architecture Hardware Sizing.

Table 45: Disaster Recovery Strategy for Workspace ONE UEM Application Servers

Decision

A second set of servers was installed in a second data center. The number and function of the servers was the same as sized for the primary site.

Justification

This strategy provides full disaster recovery capacity for all Workspace ONE UEM on-premises services.

Multi-site Console Servers

When deploying multiple Console servers, certain Workspace ONE UEM services must be active on only one primary Console server to ensure maximum performance. These services must be disabled on non-primary servers after Workspace ONE UEM installation is complete. 

Workspace ONE UEM services that must be active on only one server are:

  • AirWatch Device Scheduler 
  • AirWatch GEM Inventory Service 
  • Directory Sync
  • Content Delivery Service

When you upgrade the Workspace ONE UEM Console servers, the Content Delivery Service automatically restarts. You must then manually disable the applicable services again on all extra servers to maintain best performance.

Multi-site Database

As previously stated, Workspace ONE UEM supports Microsoft SQL Server 2012 (and later) and its cluster offering Always On availability groups. This allows the deployment of multiple instances of Device Services servers and Workspace ONE UEM Console servers that point to the same database. The database is protected by an availability group, with an availability group listener as the single database connection target for all instances.

For this design, an active/passive database instance was configured using SQL Server Always On. This allows the failover to the secondary site if the primary site becomes unavailable. Depending on the configuration of SQL Server Always On, inter-site failover of the database can be automatic, though not instantaneous.

For this reference architecture, we chose an Always On implementation with the following specifications:

  • No shared disks were used.
  • The primary database instance ran in Site 1 during normal production.

Within a site, Windows Server Failover Clustering (WSFC) was used to improve local database availability and redundancy. In a WSFC cluster, two Windows servers are clustered together to run one instance of SQL Server, which is called a SQL Server failover cluster instance (FCI). Failover of the SQL Server services between these two Windows servers is automatic. For details of the implementation we used, see Appendix D: Workspace ONE UEM Configuration.

Table 46: Strategy for Multi-site Deployment of the On-Premises Database

Decision

A Microsoft SQL Always-On database was used.

Justification

This strategy provides replication of the database from the primary site to the recovery site and allows for recovery of the database functionality.

Failover to a Second Site

A Workspace ONE UEM multi-site design allows administrators to maintain constant availability of the different Workspace ONE UEM services in case a disaster renders the original active site unavailable. The following diagram shows a sample multi-site architecture.

On-Premises Multi-site Workspace ONE UEM Components

Figure 34: On-Premises Multi-site Workspace ONE UEM Components

To achieve failover to a secondary site, manual intervention might be required for two main layers of the solution:

  • Database – Depending on the configuration of SQL Server Always On, inter-site failover of the database can be automatic. If necessary, steps should be taken to manually control which site has the active SQL node.
  • Device Services – The global load balancer controls which site traffic is directed to. During normal operation, the global load balancer directs traffic to the local load balancer in front of the Device Service servers in Site 1. In a failover scenario, the global load balancer should be changed to direct traffic to the equivalent local load balancer in Site 2.
  • Console servers – When multiple Console servers are deployed, ensure the Workspace ONE UEM services mentioned in Multi-site Console Servers are active only on the primary servers and are disabled on the non-primary servers for Site 2.
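The global load balancer's site-selection decision can be modeled as a simple health-based preference. This sketch is a conceptual model with hypothetical site names, not the configuration syntax of any particular load balancer product.

```python
def route_site(site_health, preferred="site1"):
    """Direct traffic to the preferred site while it is healthy;
    otherwise fail over to the first healthy alternative."""
    if site_health.get(preferred):
        return preferred
    for site, healthy in site_health.items():
        if healthy:
            return site
    raise RuntimeError("no healthy site available")

# Normal operation: Device Services traffic goes to the Site 1 local
# load balancer. During a Site 1 outage, traffic is redirected to
# the equivalent local load balancer in Site 2.
normal = route_site({"site1": True, "site2": True})    # "site1"
failover = route_site({"site1": False, "site2": True}) # "site2"
```

Whether this switch is automatic (health-check driven) or manual depends on the global load balancer configuration, which is why the failover procedure above may require intervention.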

Prerequisites

This section details the prerequisites for the Workspace ONE UEM configuration:

  • Network Configuration – Verify that the following requirements are met:
    • Static IP address and DNS Forward (A) are used.
    • Inbound firewall port 443 is open so that external users can connect to the Workspace ONE UEM instance or the load balancer.
  • Active Directory – Workspace ONE UEM supports Active Directory configurations on Windows Server 2008 R2, 2012, 2012 R2, and 2016, including:
    • Single AD domain
    • Multidomain, single forest
    • Multi-forest with trust relationships
    • Multi-forest with untrusted relationships (requires external connector configuration)
    • Active Directory Global Catalog optional for Directory Sync

For this reference architecture, Windows Server 2016 Active Directory was used.

Installation and Initial Configuration

Workspace ONE UEM is delivered as separate installers for the database and the application servers. The database installer must be run before any of the application servers are installed. For more information on installing Workspace ONE UEM, see the VMware Workspace ONE UEM Installation Guide.

At a high level, the following tasks should be completed:

  • Database:
    • Create the Workspace ONE UEM database.
    • Run the Workspace ONE UEM database installer.
  • Application Servers (Console, Device Services, API, and AWCM):
    • Run the application installer on each application server.
    • Select the appropriate services for the component you are installing.
  • Run the Secure Channel installer on each AWCM server, and restart the AWCM service after installation is complete.
  • Install and configure the Memcached servers.
  • Install the AirWatch Cloud Connector.
  • Configure Active Directory:
    • Create a connection to Active Directory.
    • Select a bind account with permission to read from AD.
    • Choose groups and users to sync.
    • Initiate a directory sync.
  • Configure email (SMTP) (if applicable) at the company organizational group level.
  • Upload the SSL certificate for the iOS signing-profile certificate at the global organizational group level.
  • Set up Apple Push Notification service (APNs) for iOS devices and a notification service for Android.

Integration with Workspace ONE Access

Integrating Workspace ONE UEM and Workspace ONE Access into your Workspace ONE environment provides several benefits. Workspace ONE uses Workspace ONE Access for authentication, SaaS, and VMware Horizon® application access. Workspace ONE uses Workspace ONE UEM for device enrollment and management.

The integration process between the two solutions is detailed in Integrating Workspace ONE UEM With Workspace ONE Access.

Also see Platform Integration for more detail.

Resource Types

A Workspace ONE implementation can include the following types of application resources.

Native Mobile Apps

Native mobile apps from the Apple App Store, Google Play, and the Microsoft Windows Store have brought about new ways of easily accessing tools and information to make users more productive. A challenge has been making the available apps easy to find, install, and control. Workspace ONE UEM has long provided a platform for distribution, management, and security for these apps. Apps can be published from the app stores themselves, or internally developed apps can be uploaded to the Workspace ONE UEM service for distribution to end users.

VMware Native Mobile Apps

Figure 35: VMware Native Mobile Apps

Unified App Catalog

When Workspace ONE UEM and Workspace ONE Access are integrated so that apps from both platforms can be enabled for end users, the option to use the unified catalog in Workspace ONE Access is enabled. This catalog pulls entitlements from both platforms and displays them appropriately in the Workspace ONE native app on a mobile device. The Workspace ONE client determines which apps to display on which platform. For example, iOS apps appear only on devices running iOS, and Android apps appear only on Android devices. 

Unified Catalog in Workspace ONE Access

Figure 36: Unified Catalog in Workspace ONE Access

Conditional Access

With the Workspace ONE conditional access feature, administrators can create access policies that go beyond the evaluation of user identity and valid credentials. Combining Workspace ONE UEM and Workspace ONE Access, administrators can evaluate the target resource being accessed, the source network from which the request originated, and the type and compliance status of the device. With these criteria, access policies can provide a more sophisticated authentication challenge only when needed or deny access when secure conditions are not met.

Using the Workspace ONE UEM Console to Create Access Policies

Configuration of compliance starts in the Workspace ONE UEM Console. Compliance policies are created by determining:

  1. A criterion to check, such as a jailbroken or rooted device
  2. An action to take, such as an email to an administrator or a device wipe
  3. An escalation to further actions if the device is not returned to compliance within a set time
  4. An assignment to devices or users
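The four parts of a compliance policy can be pictured as a simple data structure. This is an illustrative model only; the field names and example values are hypothetical, not the schema used by the Workspace ONE UEM Console.

```python
from dataclasses import dataclass, field

@dataclass
class CompliancePolicy:
    """Illustrative model of the four parts of a UEM compliance policy."""
    criterion: str                                     # 1. condition to check
    actions: list = field(default_factory=list)        # 2. immediate actions
    escalations: list = field(default_factory=list)    # 3. follow-up actions
    assignment: list = field(default_factory=list)     # 4. target devices/users

# Hypothetical example: act on a jailbroken or rooted device.
policy = CompliancePolicy(
    criterion="compromised_status == jailbroken_or_rooted",
    actions=["email_administrator"],
    escalations=["enterprise_wipe_after_24h"],
    assignment=["smart_group:All Corporate iOS"],
)
```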

Examples of rules are listed in the following table.

Table 47: Examples of Access Policy Rules
Compliance Criterion Policy Description

Application list

A device is out of compliance with the policy for one or more of the following reasons:

  • Denylisted apps are installed on the device.
  • Non-allowlisted apps are installed on the device.
  • Required apps are not installed.
  • The version of the installed app is different from the one defined in the policy.

Last compromised scan

A device complies with this policy if the device was last scanned for compliance within the timeframe defined in the policy.

Passcode

A device complies with this policy if a passcode is set in the device by the user. A corresponding rule provides information on the passcode and encryption status of the device.

Device roaming

A device is out of compliance with this policy if the device is roaming.

Refer to the section Compliance Policy Rules Descriptions for the complete list. Because not all of the options apply to all platforms, also see Compliance Policy Rules by Platform.

Using the Workspace ONE UEM REST API to Extend Device Compliance Parameters

With the Workspace ONE UEM REST API, the definition of a device’s compliance status can be extended beyond what is available within the Workspace ONE UEM Console by leveraging an integration with one or more partners from the extensive list of VMware Mobile Security Alliance (MSA) partners. For more information, see Mitigate Mobile Threats with Best-of-Breed Security Solutions.
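As an illustration of how such an integration is wired up, the following sketch assembles (but does not send) an authenticated request for device details. The endpoint path, header names, and credentials here are assumptions for illustration; consult the Workspace ONE UEM REST API reference for the exact resources and authentication schemes your version exposes.

```python
import base64

def build_device_request(api_host, device_udid, api_key, user, password):
    """Assemble a REST request to query device details by UDID.

    Returns a dict describing the request rather than sending it, so the
    sketch stays self-contained. Host, UDID, and credentials are
    hypothetical placeholders.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "method": "GET",
        "url": f"https://{api_host}/api/mdm/devices/udid/{device_udid}",
        "headers": {
            "Authorization": f"Basic {token}",
            "aw-tenant-code": api_key,   # UEM REST API key header
            "Accept": "application/json",
        },
    }

req = build_device_request(
    "uem.corp.local", "ABC123", "example-api-key", "apiadmin", "password"
)
```

An MSA partner integration would use calls like this to read device posture and write extended compliance attributes back into Workspace ONE UEM.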

To use the device posture from Workspace ONE UEM with Workspace ONE Access, you must enable the Device Compliance option when configuring the Workspace ONE UEM–Workspace ONE Access integration. The Compliance Check function must also be enabled.

Enable Compliance Check

Figure 37: Enable Compliance Check

After you enable the compliance check through Workspace ONE UEM, you can add a rule that defines what kind of compliance parameters are checked and what kind of authentication methods are used.

Device Compliance Policy

Figure 38: Device Compliance Policy

The device’s unique device identifier (UDID) must also be captured in Workspace ONE UEM and used in the compliance configuration. This feature works with mobile SSO for iOS, mobile SSO for Android, and certificate cloud deployment authentication methods.

Note: Before you use the Device Compliance authentication method, you must use a method that obtains the device UDID. Evaluating device compliance before obtaining the UDID does not result in a positive validation of the device’s status.

Multi-factor Authentication

Workspace ONE Access supports chained, two-factor authentication. The primary authentication methods can be username and password or mobile SSO. You can combine these authentication methods with RADIUS, RSA Adaptive Authentication, and VMware Workspace ONE Verify as secondary authentication methods to achieve additional security for access control.

BYOD and Mobile Application Management (MAM)

Bring your own device (BYOD) refers to employees using personal devices to access corporate resources that contain potentially sensitive information. Personal devices could include smartphones, personal computers, or tablets.

Mobile Application Management (MAM) is the ability for administrators to secure and enable IT control of enterprise apps on a mobile device, such as by requiring full device management to access an app, or restricting copy-paste functionality within an app.

Workspace ONE supports a variety of device and application management approaches based on the ownership of the device and the level of security required by an organization. Corporate-owned devices, or devices used within a regulated industry, will likely require a greater level of management than employee-owned devices. However, employees will expect more privacy and fewer restrictions on the devices they own.

Each mobile platform has slightly different terminology to describe management methods, but here are the high-level options:

  • Full mobile device management (MDM) – Requires the user to enroll the device into management to access any work applications. Enrollment involves downloading a management, or MDM, profile to the device. This method provides the most device control for the administrator, including full policies and restrictions, attestation and compliance, conditional access, and device remediation. Usually this management method is used for corporate-owned devices; however, some organizations require this level of management for BYOD as well.
  • OS partitioned – Management options that are made possible through device manufacturers. This includes iOS User Enrollment and Android Enterprise Work Profile. This configuration separates work apps and data from personal apps and data. See the Managing Android Devices Operational Tutorial or the Guide to Apple’s User Enrollment for more details on these mobile platforms. Partitioning the OS is a common management option for both BYOD and corporate-owned devices because it provides a user-friendly method to distinguish between personal and work apps.
  • Registered mode – If configured in the Workspace ONE UEM console, users are able to log in to the Workspace ONE Intelligent Hub app and access applications without full, device-level management (MDM profile). This management method provides the most privacy for the user and is a common option for BYOD. For instructions on enabling access to Intelligent Hub without full device management, see Enable Intelligent Hub Without Requiring Full Management.
  • Containerized apps – Across any of the above-mentioned device management options, Workspace ONE provides secure apps that can be accessed by the user regardless of whether a device is fully managed or uses registered mode. Containerized apps include Workspace ONE productivity apps (Intelligent Hub, Boxer, Content, and so on) and apps that include the Workspace ONE SDK. Using containerized apps on a device in registered mode is the most common configuration to achieve BYOD MAM. Review the Mobile Application Management learning path to learn more about the Workspace ONE productivity apps.

Workspace ONE Device Management Options Along the Continuum

Figure 39: Workspace ONE Device Management Options Along the Continuum

Enabling Adaptive Management for iOS

The Workspace ONE UEM administrator can allow users to log in to Workspace ONE Intelligent Hub without requiring full device management. In other words, the user can access the catalog of corporate applications without installing the iOS MDM profile on their device. This option is called registered mode: the user’s device is registered but not fully managed.

However, if an iOS user attempts to access a restricted corporate application in the catalog that requires MDM enrollment, the user is prompted to install the iOS MDM profile. This is referred to as adaptive management.

Adaptive management can be enabled on an application-by-application basis within the Workspace ONE UEM Console. Within an application profile, an administrator can choose to require device-level management prior to allowing use of a specific app, as shown in the following screenshot.

Mobile Application Deployment in the Workspace ONE UEM Console

Figure 40: Mobile Application Deployment in the Workspace ONE UEM Console

When adaptive management is required for an app, the app has an indicator in the catalog, so the end user understands that the app has specific requirements.

Adaptive management is supported only on iOS; the Android platform no longer supports this functionality. The new standard for app deployment on Android is Android Enterprise, as described in the Application Management for Android section of Integrating Workspace ONE UEM with Android.

Mobile Single Sign-On

One of the hallmark features of the Workspace ONE experience is mobile SSO technology, which provides the ability to sign in once and gain access to all entitled applications, including SaaS apps. This core capability helps address security concerns such as password-cracking attempts and vastly simplifies the end-user experience on mobile devices. A number of methods enable this capability across Workspace ONE Access and Workspace ONE UEM. SAML serves as a bridge to the apps, but each native mobile platform requires different technologies to enable SSO.

For configuration of mobile SSO for iOS and Android devices, see the Guide to Deploying VMware Workspace ONE with Workspace ONE Access.

Mobile SSO for iOS

Kerberos-based SSO is the recommended SSO experience on managed iOS devices. Workspace ONE Access offers a built-in Kerberos adapter, which can handle iOS authentication without the need for device communication to your internal Active Directory servers. In addition, Workspace ONE UEM can distribute identity certificates to devices using a built-in Workspace ONE UEM Certificate Authority, eliminating the requirement to maintain an on-premises CA.

Alternatively, enterprises can use an internal key distribution center (KDC) for SSO authentication, but this typically requires provisioning an on-demand VPN. Either option can be configured in the Standard Deployment model, but the built-in KDC must be used in the Simplified Deployment model referenced in Implementing Mobile Single Sign-On Authentication for Workspace ONE UEM-Managed iOS Devices.

Mobile SSO for Android

Workspace ONE offers universal Android mobile SSO, which allows users to sign in to enterprise apps securely without a password. Android mobile SSO technology requires device enrollment and the use of Workspace ONE Tunnel to authenticate users against SaaS applications.

Refer to Implementing Mobile Single Sign-On Authentication for Managed Android Devices.

Windows 10 and macOS SSO

Certificate-based SSO is the recommended experience for managed Windows and Mac desktops and laptops. An Active Directory Certificate Services or other CA is required to distribute certificates. Workspace ONE UEM can integrate with an on-premises CA through AirWatch Cloud Connector or an on-demand VPN.

For guidance on Workspace ONE UEM integration with a Certificate Authority, see Certificate Management.

Email Integration

Workspace ONE offers a great number of choices when it comes to devices and email clients. Although this flexibility benefits users, it also potentially exposes the enterprise to data leakage due to a lack of control after email messages reach the device.

Another challenge is that many organizations are moving to cloud-based email services, such as Microsoft Office 365 and G Suite (formerly Google Apps for Work). These services provide fewer email control options than the on-premises models that an enterprise might be accustomed to. 

This section looks at the email connectivity models and the pros and cons of each.

Workspace ONE UEM Secure Email Gateway Proxy Model

The Workspace ONE UEM Secure Email Gateway proxy server is a separate server installed in-line with your existing email server to proxy all email traffic going to mobile devices. Based on the settings you define in the Workspace ONE UEM Console, the Workspace ONE UEM Secure Email Gateway proxy server allows or blocks email for every mobile device it manages, and it relays traffic only from approved devices. With some additional configuration, no devices are allowed to communicate directly with the corporate email server.

Workspace ONE UEM Secure Email Gateway Architecture

Figure 41: Workspace ONE UEM Secure Email Gateway Architecture

Workspace ONE UEM Secure Email Gateway runs as a service on Unified Access Gateway. For architecture and sizing considerations, see Component Design: Unified Access Gateway Architecture.

Direct PowerShell Model

In this model, Workspace ONE UEM adopts a PowerShell administrator role and issues commands to the Exchange ActiveSync infrastructure to permit or deny email access based on the policies defined in the Workspace ONE UEM Console. PowerShell deployments do not require a separate email proxy server, and the installation process is simpler. In the case of an on-premises Exchange server, AirWatch Cloud Connector (ACC) can be leveraged to prevent inbound traffic flow.
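In practice, Workspace ONE UEM issues these commands on your behalf through its PowerShell integration; you do not script them yourself. As a rough illustration of what the integration does behind the scenes, the following sketch uses standard Exchange Online cmdlets. The service account, mailbox, and device ID values are hypothetical placeholders, and the sketch assumes the ExchangeOnlineManagement module is installed.

```powershell
# Connect as the service account used for the PowerShell integration
# (hypothetical account name).
Connect-ExchangeOnline -UserPrincipalName svc-uem@example.com

# Quarantine unknown devices by default, so only approved devices receive mail.
Set-ActiveSyncOrganizationSettings -DefaultAccessLevel Quarantine

# Allow a specific, compliant device for a user
# (device ID as reported by Exchange ActiveSync; placeholder value).
Set-CASMailbox -Identity user@example.com `
    -ActiveSyncAllowedDeviceIDs @{Add='ApplDNQXXXXXXXXX'}

# Block the same device if it later falls out of compliance.
Set-CASMailbox -Identity user@example.com `
    -ActiveSyncBlockedDeviceIDs @{Add='ApplDNQXXXXXXXXX'}

# Review the mobile devices partnered with a mailbox and their access state.
Get-MobileDevice -Mailbox user@example.com |
    Select-Object DeviceId, DeviceAccessState
```

Because the allow and block lists live on each mailbox, access decisions made in the Workspace ONE UEM Console take effect the next time the compliance sync runs, which is why this model does not offer real-time compliance enforcement.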

Microsoft Office 365 Email Architecture

Figure 42: Microsoft Office 365 Email Architecture

Supported Email Infrastructure and Models

Use the following table to compare these models and the mail infrastructures they support.

Table 48: Supported Email Deployment Models
Proxy model – Workspace ONE UEM Secure Email Gateway (proxy)

  • Microsoft Exchange 2010, 2013, and 2016
  • IBM Domino with Lotus Notes
  • Novell GroupWise (with EAS)
  • G Suite
  • Office 365 (for attachment encryption)

Direct model – PowerShell

  • Microsoft Exchange 2010, 2013, and 2016
  • Microsoft Office 365

Direct model – Google

  • G Suite

Microsoft Office 365 requires additional configuration for the Workspace ONE UEM Secure Email Gateway proxy model. VMware recommends the direct model of integration with cloud-based email servers unless encryption of attachments is required.

The following table summarizes the pros and cons of the deployment features of Workspace ONE UEM Secure Email Gateway and PowerShell to help you choose which deployment is most appropriate.

Table 49: Workspace ONE UEM Secure Email Gateway and PowerShell Feature Comparison
Workspace ONE UEM Secure Email Gateway

Pros:

  • Real-time compliance
  • Attachment encryption
  • Hyperlink transformation

Cons:

  • Additional servers needed
  • Office 365 must be federated with Workspace ONE to prevent users from directly connecting to Office 365

PowerShell

Pros:

  • No additional on-premises servers required for email management

Cons:

  • No real-time compliance sync
  • Not recommended for deployments larger than 100,000 devices
  • VMware Workspace ONE® Boxer is required to containerize attachments and hyperlinks in Workspace ONE Content and Workspace ONE Web

Key Design Considerations

VMware recommends using Workspace ONE UEM Secure Email Gateway for all on-premises email infrastructures with deployments of more than 100,000 devices. For smaller deployments or cloud-based email, PowerShell is another option.

For more information on design considerations for mobile email management, see the most recent Workspace ONE UEM Mobile Email Management Guide.

Table 50: Email Deployment Model for This Reference Architecture

Decision

The PowerShell model was used with Workspace ONE Boxer.

Justification

This design includes Microsoft Office 365 email. Although this decision limits employee choice of mail client and removes native email access in the Mobile Productivity service, it provides the best protection available against data leakage.

Next Steps

  • Configure Microsoft Office 365 email through PowerShell.
  • Configure Workspace ONE Boxer as an email client for deployment as part of device enrollment.

Conditional Access Configured for Microsoft Office 365 Basic Authentication

By default, Microsoft Office 365 basic authentication is vulnerable because credentials are entered in the app itself rather than being submitted to an identity provider (IdP) in a browser, as with modern authentication. With Workspace ONE, however, you can easily enhance security and control over Microsoft Office 365 active flows.

You can now control access to Office 365 active flows based on the following access policies in Workspace ONE Access:

  • Network range
  • Device OS type
  • Group membership
  • Email protocol
  • Client name

Microsoft Office 365 Active Flow Conditional Access Policies

Figure 43: Microsoft Office 365 Active Flow Conditional Access Policies

Content Integration

Mobile content management (MCM) can be critical to device deployment, ensuring that content is safely stored in enterprise repositories and available to end users when and where they need it with the appropriate security controls. The MCM features in Workspace ONE UEM provide users with the content they need while also providing the enterprise with the security control it requires.

Content Management Overview

  1. Workspace ONE UEM managed content repository – Workspace ONE UEM administrators with the appropriate permissions can upload content to the repository and have complete control over the files that are stored in it. 
    The synchronization process involves two components:
    • VMware Content Gateway – This on-premises node provides secure access to content repositories or internal file shares. You can deploy it as a service on a VMware Unified Access Gateway™ virtual appliance. This gateway supports both cascade mode (formerly known as relay-endpoint) and basic (formerly known as endpoint-only) deployment models.
    • Corporate file server – This preexisting repository can reside within an organization’s internal network or on a cloud service. Depending on an organization’s structure, the Workspace ONE UEM administrator might not have administrative permissions for the corporate file server.
  2. VMware Workspace ONE Content – After this app is deployed to end-user devices, users can access content that conforms to the configured set of parameters.

Mobile Content Management with Workspace ONE UEM

Figure 44: Mobile Content Management with Workspace ONE UEM

You can integrate Workspace ONE Content with a large number of corporate file services, including Box, Google Drive, network shares, various Microsoft services, and most websites that support Web Distributed Authoring and Versioning (WebDAV). It is beyond the scope of this document to list all of them.

For full design considerations for mobile content management, see the most recent Workspace ONE UEM Mobile Content Management documentation.

Content Gateway

VMware Content Gateway provides a secure and effective method for end users to access internal repositories. Through Workspace ONE Content, users are granted access only to their approved files and folders, based on the access control lists defined in the internal repository. To prevent security vulnerabilities, Content Gateway servers support only Server Message Block (SMB) v2.0 and SMB v3.0; SMB v2.0 is the default. Content Gateway offers basic and cascade mode (formerly known as relay-endpoint) architecture models for deployment.

Content Gateway can be deployed as a service within VMware Unified Access Gateway 3.3.2 and later. For guidance on deployment and configuration of Content Gateway service, see Content Gateway on Unified Access Gateway.

For step-by-step instructions, see Configuring Content Gateway Edge Services on Unified Access Gateway.

Scalability

Unified Access Gateway can be used to provide edge and gateway services for VMware Content Gateway, Secure Email Gateway, and VMware Tunnel functionality. For architecture and sizing guidance, see Component Design: Unified Access Gateway Architecture.

Data Protection in Workspace ONE Content

Workspace ONE Content provides considerable control over the types of activities that a user can perform with documents that have been synced to a mobile device. Applications must be developed using Workspace ONE SDK features or must be wrapped to use these restrictions. The following table lists the data loss prevention features that can be controlled.

Table 51: Data Loss Prevention Features
Feature Name Description

Enable Copy and Paste

Allows an application to copy and paste on devices

Enable Printing

Allows an application to print from devices

Enable Camera

Allows applications to access the device camera

Enable Composing Email

Allows an application to use the native email client to send email

Enable Data Backup

Allows wrapped applications to sync data with a storage service such as iCloud

Enable Location Services

Allows wrapped applications to receive the latitude and longitude of the device

Enable Bluetooth

Allows applications to access Bluetooth functionality on devices

Enable Screenshot

Allows applications to access screenshot functionality on devices

Enable Watermark

Displays watermark text in documents viewed in Workspace ONE Content

Limit Documents to Open Only in Approved Apps

Controls the applications used to open resources on devices

Allowed Applications List

Lists the applications that are allowed to open documents

Key Design Considerations

Because this environment is configured with Microsoft Office 365, SharePoint-based document repositories are configured as part of the Workspace ONE Content implementation. Data loss prevention (DLP) controls are used in the Mobile Productivity service and Mobile Application Workspace profiles to protect corporate information.

Table 52: Implementation Strategy for Providing Content Gateway Services

Decision

Unified Access Gateway was used to provide Content Gateway services.

Justification

Unified Access Gateway was chosen as the standard edge gateway appliance for Workspace ONE services, including VMware Horizon and content resources.

VMware Tunnel

VMware Tunnel leverages unique certificates deployed from Workspace ONE UEM to authenticate and encrypt traffic from the mobile device to resources on the internal network. It consists of the following two components:

  1. Proxy – This component secures the traffic between the mobile device and the backend resources through the Workspace ONE Web application. To leverage the proxy component with an internally developed app, you must embed the Workspace ONE SDK in the app.
    The proxy component, when deployed, supports SSL offloading.
  2. Per-App Tunnel – This component allows designated applications on a device to communicate with backend resources. Unlike a device-level VPN, it restricts tunnel access to approved applications only. The Per-App Tunnel supports TCP, UDP, and HTTP(S) traffic and works for both public and internally developed apps. It requires the Workspace ONE Tunnel application to be installed and managed by Workspace ONE UEM.

    Note: The Per-App Tunnel does not support SSL offloading.

VMware Tunnel Service Deployment

The VMware Tunnel service can be deployed as a service within VMware Unified Access Gateway 3.3.2 and later (the preferred method) or as a standalone Linux server. Both deployments support the Proxy and Per-App Tunnel modules.

For guidance on deployment and configuration of the VMware Tunnel service, see Deploying VMware Tunnel on Unified Access Gateway. For step-by-step instructions, see Configuring VMware Tunnel Edge Services on Unified Access Gateway.

Architecture

The Per-App Tunnel component is recommended because it provides most of the functionality with easier installation and maintenance. It leverages native APIs offered by Apple, Google, and Windows to provide a seamless end-user experience and, unlike the Proxy component, does not require additional configuration.

The VMware Tunnel service can reside in:

  • DMZ (single-tier, basic mode)
  • DMZ and internal network (multi-tier, cascade mode)

Both configurations support load balancing and high availability.

VMware Tunnel and Content Deployment Modes

Figure 45: VMware Tunnel and Content Deployment Modes

For guidance on deployment modes, see Deploying VMware Tunnel on Unified Access Gateway.

Scalability

Unified Access Gateway can be used to provide edge and gateway services for VMware Content Gateway and VMware Tunnel functionality. For architecture and sizing guidance, see Component Design: Unified Access Gateway Architecture.

Installation

For installation prerequisites, see System Requirements for Deploying VMware Tunnel with Unified Access Gateway.

After the installation is complete, configure the VMware Tunnel by following the instructions in VMware Tunnel Core Configuration.

Table 53: Strategy for Providing Tunnel Services

Decision

Unified Access Gateway was used to provide tunnel services.

Justification

Unified Access Gateway was chosen as the standard edge gateway appliance for Workspace ONE services, including VMware Horizon and content resources.

Data Loss Prevention

Applications built using the Workspace ONE SDK or wrapped by the Workspace ONE UEM App Wrapping engine can integrate with the SDK settings in the Workspace ONE UEM Console to apply policies, control security and user behavior, and retrieve data for specific mobile applications without changing the application itself. The application can also take advantage of controls designed to make accidental, or even purposeful, distribution of sensitive information more difficult. DLP settings include the ability to disable copy and paste, prevent printing, disable the camera or screenshot features, or require adding a watermark to content when viewed on a device. You can configure these features at a platform level with iOS- or Android-specific profiles applied to all devices, or you can associate a specific application for which additional control is required.

Workspace ONE UEM applications, including Workspace ONE Boxer and Workspace ONE Content, are built with the Workspace ONE SDK, conform to the Workspace ONE platform, and can natively take advantage of these capabilities. Other applications can be wrapped to include such functionality but typically are not enabled for it out of the box.

Workspace ONE UEM Data Loss Prevention Settings

Figure 46: Workspace ONE UEM Data Loss Prevention Settings

Another set of policies can restrict actions a user can take with email. For managed email clients such as Workspace ONE Boxer, restrictions can be set to govern copy and paste, prevent attachments from being accessed, or force all hyperlinks in email to use a secure browser, such as Workspace ONE Web.

Workspace ONE Boxer Content Restriction Settings

Figure 47: Workspace ONE Boxer Content Restriction Settings

Component Design: Workspace ONE Access Architecture

VMware Workspace ONE Access™ (formerly called VMware Identity Manager) is a key component of VMware Workspace ONE®. Among the capabilities of Workspace ONE Access are:

  • Simple application access for end users – Provides access to different types of applications, including internal web applications, SaaS-based web applications (such as Salesforce, Dropbox, Concur, and more), native mobile apps, native Windows and macOS apps, VMware ThinApp® packaged applications, VMware Horizon®–based applications and desktops, and Citrix-based applications and desktops, all through a unified application catalog.
  • Self-service app store – Allows end users to search for and select entitled applications in a simple way, while providing enterprise security and compliance controls to ensure that the right users have access to the right applications.
    Users can customize the Favorites tab for fast, easy access to frequently used applications, and place the apps in a preferred order. IT can optionally push entries onto the Favorites tab using automated application entitlements.
  • Enterprise single sign-on (SSO) – Simplifies business mobility with an included Identity Provider (IdP) or integration with existing on-premises identity providers so that you can aggregate SaaS, native mobile, macOS, and Windows 10 apps into a single catalog. Users have a single sign-on experience regardless of whether they log in to an internal, external, or virtual-based application.
  • Conditional access – Includes a comprehensive policy engine that allows the administrator to set different access policies based on the risk profile of the application. An administrator can use criteria such as network range, user group, application type, method of authentication, or device operating system to determine if the user should have access or not.
  • Productivity tools – Enables the Hub Services suite of productivity tools such as People Search, Notifications, Mobile Flow, Assistant, and more.

In addition, Workspace ONE Access can validate the compliance status of the device in VMware Workspace ONE® UEM (powered by AirWatch). Failure to meet the compliance standards blocks a user from signing in to an application or accessing applications in the catalog until the device becomes compliant. By integrating Workspace ONE Access and VMware Workspace ONE® Intelligence™, you can add user behavior and risk scoring into the access decision.

  • Enterprise identity management with adaptive access – Establishes trust between users, devices, and applications for a seamless user experience and powerful conditional access controls that leverage Workspace ONE UEM device enrollment and SSO adapters.
  • Workspace ONE native mobile apps – Includes native apps for iOS, Android, macOS, and Windows 10 to simplify finding and installing enterprise apps and to provide an SSO experience across resource types.
  • VMware Horizon / Citrix – Workspace ONE Access can also be integrated with VMware Horizon, VMware Horizon® Cloud Service™, and Citrix published applications and desktops. Workspace ONE Access handles authentication and provides SSO services for these applications and desktops.

User Workspace Delivered by Workspace ONE Access

Figure 48: User Workspace Delivered by Workspace ONE Access

To leverage the breadth of the Workspace ONE experience, you must integrate Workspace ONE UEM and Workspace ONE Access into Workspace ONE. After integration, Workspace ONE UEM can use Workspace ONE Access for authentication and access to native applications, web, SaaS, Citrix, ThinApp, and VMware Horizon applications. Workspace ONE can use Workspace ONE UEM for device enrollment and management.

See the Guide to Deploying VMware Workspace ONE with Workspace ONE Access for more details.

Workspace ONE Access can be implemented using either an on-premises or a cloud-based (SaaS) implementation model.

To avoid repetition, an overview of the product, its architecture, and the common components are described in the cloud-based architecture section, which follows. The on-premises architecture section then adds to this information if your preference is to build on-premises.

Table 54: Strategy of Using Both Deployment Models

Decision

Both a cloud-based and an on-premises Workspace ONE Access deployment were carried out separately.

Both deployments were sized for 50,000 users.

Justification

This strategy allows both architectures to be validated and documented independently.

Cloud-Based Architecture

In a cloud-based implementation, the Workspace ONE Access Connector service synchronizes user accounts from Active Directory to the Workspace ONE Access tenant service. Applications can then be accessed from a cloud-based entry point.

Cloud-based Workspace ONE Access Logical Architecture

Figure 49: Cloud-based Workspace ONE Access Logical Architecture

The main components of a cloud-based Workspace ONE Access implementation are described in the following table.

Table 55: Workspace ONE Access Components
Component Description

Workspace ONE Access tenant

Hosted in the cloud and runs the main Workspace ONE Access service.

Workspace ONE Access Connector

Responsible for directory synchronization and for some of the authentication methods, bridging on-premises resources such as Active Directory, VMware Horizon, and Citrix with the Workspace ONE Access service.

You deploy the connector by running the Windows-based installer.

 

Table 56: Implementation Strategy for Cloud-Based Workspace ONE Access

Decision

A cloud-based deployment of Workspace ONE Access and the components required were architected for 50,000 users.

Justification

This strategy provides validation of design and implementation of a cloud-based instance of Workspace ONE Access.

Workspace ONE Access Tenant Installation and Initial Configuration

Because the Workspace ONE Access tenant is cloud-based, you do not have to make design decisions regarding database, network access, or storage. The Workspace ONE Access service scales to accommodate virtually any size of organization.

Connectivity to the Workspace ONE Access service is through outbound port 443. This connection is used for directory synchronization, a subset of the supported authentication methods, and syncing entitlements for resources, such as Horizon desktops and apps. Organizations can take advantage of this configuration with no additional inbound firewall ports opened to the Internet.
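Because all communication is outbound over port 443, a quick pre-flight check from a prospective connector host can confirm that the path to the tenant is open. The sketch below uses the built-in Windows PowerShell Test-NetConnection cmdlet; the tenant hostname is a hypothetical example following the URL pattern described in this section.

```powershell
# Verify outbound TCP 443 from the Windows host that will run the
# Workspace ONE Access Connector (tenant hostname is a placeholder).
Test-NetConnection -ComputerName company.vmwareidentity.com -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```

A TcpTestSucceeded value of True indicates that no additional outbound firewall changes are needed for that host.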

Initial configuration involves logging in to the Workspace ONE Access service with the provided credentials at a URL similar to https://<company>.vmwareidentity.com.

Workspace ONE Access Connector

The Workspace ONE Access Connector can synchronize resources such as Active Directory, Horizon Cloud, VMware Horizon, and Citrix virtual apps and desktops. The connector also enables authentication methods that depend on on-premises infrastructure, such as RSA SecurID, RSA Adaptive Authentication, RADIUS, and Active Directory Kerberos authentication. The connector typically runs inside the LAN and connects to the hosted Workspace ONE Access service using an outbound-only connection, so there is no need to expose the connector to the Internet.

Deploying a Workspace ONE Access Connector provides the following capabilities:

  • Synchronization with an enterprise directory (Active Directory/LDAP) to import directory users to Workspace ONE components
  • Workspace ONE Access Connector–based authentication methods such as username and password, certificate, RSA Adaptive Authentication, RSA SecurID, RADIUS, and Active Directory Kerberos authentication for internal users
  • Integration with the following resources:
    • On-premises Horizon desktop and application pools
    • Horizon Cloud Service desktops and applications
    • Citrix-published desktops and applications

With the release of Workspace ONE Access version 20.01, the architecture of the connector changed. The 20.01 connector is separated into multiple microservices rather than one monolithic service. Currently, this means that the different connector versions do not have feature parity.

The Workspace ONE Access Connector version 20.01 is separated into three services: Directory Sync, User Authentication, and Kerberos Authentication Service.

Microservices Included in Workspace ONE Access Connector Version 20.01

Figure 50: Microservices Included in Workspace ONE Access Connector Version 20.01

Choose and deploy the connector version based on the use case and required functionality.

Table 57: Workspace ONE Access Connector Support Matrix
Version Required Functionality
Connector version 3.3 (2018.8.1.0) Required for ThinApp application repository synchronization.
Connector version 19.03 Required for Horizon and Citrix applications and virtual desktops support.
Connector version 20.01 Recommended to be used only if the use case does not require ThinApp, Citrix, or Horizon support.

For more information about the connector and the different versions, watch video VMware Workspace ONE Access 20.01: Overview and Connector Architecture.

Table 58: Implementation Strategy for the Workspace ONE Access Connector

Decision

The Workspace ONE Access Connector version 19.03 was deployed.

Justification

This strategy supports the requirements of Workspace ONE Access directory integration and allows a wide range of authentication methods.

This connector also enables synchronization of resources from VMware® Horizon 7 and Horizon Cloud Service into the Workspace ONE Hub catalog.

Connector Sizing and Availability

Workspace ONE Access Connector can be set up for high availability and failover by adding multiple connector instances in a cluster. If one of the connector instances becomes unavailable for any reason, other instances will still be available.

To create a cluster, you install new connector instances and configure the authentication methods in exactly the same way as you set up the first connector. You then associate all the connector instances with the built-in identity provider. The Workspace ONE Access service automatically distributes traffic among all the connectors associated with the built-in identity provider, so you do not need an external load balancer. If one of the connectors becomes unavailable, the service does not direct traffic to it until connectivity is restored.

See Configuring High Availability for the VMware Identity Manager Connector for more detail.

Note: Active Directory Kerberos authentication has different requirements than other authentication methods with regards to clustering the Workspace ONE Access Connector. See Adding Kerberos Authentication Support to Your VMware Identity Manager Connector Deployment for more detail.

After you set up the connector cluster, the authentication methods that you have enabled on the connectors are highly available. If one of the connector instances becomes unavailable, authentication is still available. Next, you should configure directory synchronization to also be highly available. For instructions, see Configure High Availability for Directory Sync.

Sizing guidance and the recommended number of Workspace ONE Access Connectors are presented in the online documentation. Sizing differs between the connector versions, so take the version you deploy into consideration.

Table 59: Strategy for Scaling the Workspace ONE Access Connector Service

Decision

Two instances of Workspace ONE Access Connectors were deployed in the internal network.

Justification

Two connectors are recommended to support an environment with 50,000 users.

Workspace ONE Access Connector Installation and Configuration

For prerequisites, including system and network configuration requirements, see Preparing to Install the VMware Identity Manager Connector on Windows.

For installation instructions, see Installing the VMware Identity Manager Connector on Windows.

Be sure to configure the Workspace ONE Access Connector authentication methods in outbound-only mode. This removes any requirement for organizations to change their inbound firewall rules and configurations. See Enable Outbound Mode for the VMware Identity Manager Connector.

On-Premises Architecture

For the on-premises deployment, we use the Linux-based virtual appliance version of the Workspace ONE Access service. This appliance is often deployed to the DMZ. There are use cases for LAN deployment, but they are rare, and we focus on the most common deployment method in this guide.

Syncing resources such as Active Directory, Citrix apps and desktops, and Horizon desktops and published apps is done by using a separate Workspace ONE Access Connector. The Workspace ONE Access Connector runs inside the LAN using an outbound-only connection to the Workspace ONE Access service, meaning the connector receives no incoming connections from the DMZ or from the Internet.

On-Premises Workspace ONE Access Logical Architecture

Figure 51: On-Premises Workspace ONE Access Logical Architecture

Table 60: Strategy for an On-Premises Deployment of Workspace ONE Access

Decision

An on-premises deployment of Workspace ONE Access and the components required were architected, scaled, and deployed for 50,000 users.

Justification

This strategy provides validation of design and implementation of an on-premises instance of Workspace ONE Access.

The implementation is separated into three main components.

Table 61: Workspace ONE Access Components
Component Description

Workspace ONE Access appliance

Runs the main Workspace ONE Access Service.

The Workspace ONE Access Service is a virtual appliance (OVA file) that you deploy in a VMware vSphere® environment.

Workspace ONE Access Connector

Performs directory synchronization and authentication between on-premises resources such as Active Directory, VMware Horizon, and the Workspace ONE Access service.

You deploy the connector by running a Windows-based installer.

Database

Stores and organizes server-state data and user account data.

Database

Workspace ONE Access can be set up with an internal or external database to store and organize server data and user accounts. A PostgreSQL database is embedded in the Workspace ONE Access virtual appliance, but this internal database is not recommended for use with production deployments.

To use an external database, have your database administrator prepare an empty external database and schema before you use the Workspace ONE Access web-based setup wizard to connect to the external database. Licensed users can use an external Microsoft SQL Server 2012, 2014, or 2016 database server to set up a high-availability external database environment. For more information, see Create the Workspace ONE Access Service Database.

The database requires 100 GB of disk space for the first 100,000 users. Add another 10 MB disk space for each 1,000 users brought into the system, plus an additional 1 MB for each 1,000 entitlements. For example, if you had 5,000 users and each user was entitled to 5 apps, you would have 25,000 entitlements in total. Therefore, the additional space required would be 50 MB + 25 MB = 75 MB.
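The sizing rule above can be expressed as a small helper function. This is a sketch of the stated arithmetic only, not a VMware-supplied tool:

```python
def additional_db_space_mb(users: int, entitlements: int) -> float:
    """Disk space needed beyond the 100 GB base allocation:
    10 MB per 1,000 users plus 1 MB per 1,000 entitlements."""
    return 10 * users / 1000 + 1 * entitlements / 1000

# The document's worked example: 5,000 users, each entitled to 5 apps.
users = 5000
entitlements = users * 5                            # 25,000 entitlements
print(additional_db_space_mb(users, entitlements))  # 75.0 (MB)
```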

For more guidance on hardware sizing for Microsoft SQL Servers, see System and Network Configuration Requirements.

Table 62: Implementation Strategy for the On-Premises Workspace ONE Access Database

Decision

An external Microsoft SQL database was implemented for this design.

Justification

An external SQL database is recommended for production because it provides scalability and redundancy.

Scalability and Availability

Workspace ONE Access has been tested to 100,000 users per single virtual appliance installation. To achieve failover and redundancy, deploy multiple Workspace ONE Access virtual appliances in a cluster. If one of the appliances has an outage, Workspace ONE Access will still be available.

A cluster should contain three Workspace ONE Access service appliance nodes to avoid split-brain scenarios. See Recommendations for VMware Identity Manager Cluster for more information. After initial configuration, the first virtual appliance is cloned twice and deployed with new IP addresses and host names.

In this reference architecture, Microsoft SQL Server 2016 was used along with its cluster offering Always On availability groups, which is supported with Workspace ONE Access. This allows the deployment of multiple instances of Workspace ONE Access service appliances, pointing to the same database and protected by an availability group. An availability group listener is the single Java Database Connectivity (JDBC) target for all instances.

Windows Server Failover Clustering (WSFC) can also be used to improve local database availability and redundancy. In a WSFC cluster, two Windows servers are clustered together to run one instance of SQL Server, which is called a SQL Server failover cluster instance (FCI). Failover of the SQL Server services between these two Windows servers is automatic.

On-Premises Scaled Workspace ONE Access Architecture

Figure 52: On-Premises Scaled Workspace ONE Access Architecture

For more information on how to set up Workspace ONE Access in a high-availability configuration, see Using a Load Balancer or Reverse Proxy to Enable External Access to Workspace ONE Access and Appendix C: Workspace ONE Access Configuration for Multi-site Deployments.

For guidance on server quantities and hardware sizing of Workspace ONE Access servers and Workspace ONE Access Connectors, as well as TCP port requirements, see System and Network Configuration Requirements.

Network Latency

There are multiple connectivity points between Workspace ONE Access service nodes, connectors, and the backend identity store (that is, AD domain controllers). The latency between nodes and components within a site cluster must not exceed 4 milliseconds (ms).

Table 63: Latency Requirements for Various Workspace ONE Access Connections
Source Destination Latency Target

Workspace ONE Access service nodes

Microsoft SQL Server

<= 4 ms

Workspace ONE Access Connector

Domain controller (AD)

<= 4 ms

 

Table 64: Implementation Strategy for On-Premises Workspace ONE Access Appliances

Decision

Three instances of the Workspace ONE Access appliance were deployed in the DMZ.

Justification

Three servers are required to support high availability for 50,000 users.

 

Table 65: Implementation Strategy for Workspace ONE Access Connectors

Decision

Two instances of Workspace ONE Access Connectors version 19.03 were deployed in the internal network.

Justification

Two connectors are recommended to support an environment with 50,000 users.

 

Table 66: Cluster Strategy for SQL Servers

Decision

SQL Server 2016 database server was installed on a two-node Windows Server Failover Cluster (WSFC), which uses a SQL Server Always On availability group.

Justification

The WSFC provides local redundancy for the SQL database service.

The use of SQL Server Always On allows for the design of a disaster-recovery scenario in a second site.

Load Balancing

To remove a single point of failure, we can deploy the Workspace ONE Access service in a cluster configuration and use a third-party load balancer. Most load balancers can be used with Workspace ONE Access. The load balancer must, however, support long-lived connections and WebSockets, which are required for the Workspace ONE Access Connector communication channel.

Deploying Workspace ONE Access in a cluster not only provides redundancy but also allows the load and processing to be spread across multiple instances of the service. To ensure that the load balancer itself does not become a point of failure, most load balancers allow for the setup of multiple nodes in an HA or active/passive configuration.

The following figure illustrates how load balancers distribute the load to a cluster of Workspace ONE Access appliances in the DMZ. Workspace ONE Access Connector virtual appliances are hosted in the internal network. The connectors communicate to the Workspace ONE Access service nodes through the service URL using an outbound-only connection.

On-Premises Workspace ONE Access Load Balancing and External Access

Figure 53: On-Premises Workspace ONE Access Load Balancing and External Access

In this example, the Workspace ONE Access service URL is my.vmweuc.com, and this hostname is resolved in the following ways:

  • External clients resolve this name to 80.80.80.80.
  • All internal components and clients resolve this name to 192.168.2.50.

Split DNS is not a requirement for Workspace ONE Access but is recommended. Workspace ONE Access supports only one namespace; that is, the same fully qualified domain name (FQDN) for Workspace ONE Access must be used both internally and externally for user access. This FQDN is referred to as the Workspace ONE Access service URL.
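This split-DNS arrangement means the same service URL resolves to different addresses depending on where the query originates. Sketched as DNS zone records using the example values above:

```text
; External (public) zone - answers queries from Internet clients
my.vmweuc.com.    IN  A  80.80.80.80    ; VIP on the external load balancer

; Internal zone - answers queries from connectors, appliances, and LAN clients
my.vmweuc.com.    IN  A  192.168.2.50   ; VIP on the internal load balancer
```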

You might decide to use two load balancer instances: one for external access and one that handles internal traffic. This is optional but provides an easy way to block access from the Internet to the management console of the Workspace ONE Access web interface. Blocking access to the management console is most easily done by blocking traffic to https://[ServiceURL]/SAAS/admin/ from external clients.
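One way to sketch this blocking rule, assuming an HAProxy-based external load balancer (the backend name and certificate path are illustrative, not part of this design):

```text
frontend external_https
    bind :443 ssl crt /etc/haproxy/my.vmweuc.com.pem
    # Deny the Workspace ONE Access management console for external clients
    acl is_admin_console path_beg /SAAS/admin
    http-request deny if is_admin_console
    default_backend access_service_nodes
```

The internal load balancer instance would omit the deny rule so that administrators on the LAN retain console access.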

Although Workspace ONE Access does support configuring load balancers for TLS pass-through, it is often easier to deploy using TLS termination (re-encrypt) on the load balancer. This way, each Workspace ONE Access service node can keep its default self-signed certificate.

Certificate Restrictions

Workspace ONE Access has the following requirements for certificates to be used on the load balancer and, if using pass-through, also on each node.

  • Only SHA-256 (and above) based certificates are supported. SHA-1-based certificates are not supported due to security concerns.
  • The required key size is 2048 bits.
  • The full certificate chain must be available.
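You can verify a candidate certificate against these restrictions with OpenSSL. The sketch below generates a throwaway self-signed certificate that meets the requirements and then inspects its signature algorithm and key size; with a real certificate, you would run only the inspection steps against your own file:

```shell
# Generate a throwaway 2048-bit, SHA-256-signed certificate (example only)
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -keyout /tmp/lb-key.pem -out /tmp/lb-cert.pem \
  -subj "/CN=my.vmweuc.com" -days 1 2>/dev/null

# Confirm the signature algorithm is SHA-256 based (not SHA-1)
openssl x509 -in /tmp/lb-cert.pem -noout -text | grep "Signature Algorithm"

# Confirm the key size is 2048 bits
openssl x509 -in /tmp/lb-cert.pem -noout -text | grep "Public-Key"
```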

Note: Workspace ONE Access Connectors must be able to use TCP 443 (HTTPS) to communicate with the Workspace ONE Access service URL.

It can be beneficial to configure the load balancer to redirect HTTP to HTTPS for the Workspace ONE Access service URL. This way, end users do not have to specify https:// when accessing Workspace ONE Access.
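As a sketch, assuming an HAProxy-based load balancer, such a redirect is a single rule on the HTTP listener:

```text
frontend http_redirect
    bind :80
    # Send plain-HTTP requests for the service URL to HTTPS
    http-request redirect scheme https code 301
```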

The following features must be supported by the load balancer:

  • TLS 1.2
  • Sticky sessions
  • WebSockets
  • X-Forwarded-For (XFF) headers
  • Cipher support with forward secrecy
  • SSL pass-through/termination
  • Configurable request time-out value
  • Layer 4 support if using iOS Mobile SSO
Table 67: Implementation Strategy for Global and Local Load Balancing

Decision

In this reference architecture, we deployed both local data center load balancers and a global load balancer. We used split DNS for the Workspace ONE Access service URL.

Justification

Our load balancer supports the global load-balancing functionality required for the design. Split DNS allows for the most efficient traffic flow.

Multi-site Design

Workspace ONE Access is the primary entry point for end users to consume all types of applications, including SaaS, web, VMware Horizon virtual desktops and published applications, Citrix XenApp and XenDesktop, and mobile apps. Therefore, when deployed on-premises, it should be highly available within a site, and also deployed in a secondary data center for failover and redundancy.

The failover process that makes the secondary site’s Workspace ONE Access appliances active requires a change at the global load balancer to direct the traffic of the service URL to the desired instance. For more information see Deploying Workspace ONE Access in a Secondary Data Center for Failover and Redundancy.

Workspace ONE Access consists of the following components, which need to be designed for redundancy:

  • Workspace ONE Access appliances and the service URL
  • Workspace ONE Access Connectors

  • Database

Table 68: Site Resilience Strategy for On-Premises Workspace ONE Access

Decision

A second site was set up with Workspace ONE Access.

Justification

This strategy provides disaster recovery and site resilience for the on-premises implementation of Workspace ONE Access.

Workspace ONE Access Appliances and Connectors

To provide site resilience, each site requires its own group of Workspace ONE Access virtual appliances to allow the site to operate independently, without reliance on another site. One site runs as the active Workspace ONE Access cluster, while the second site has a passive cluster group. The determination of which site has the active Workspace ONE Access is usually controlled by the global load balancer’s namespace entry or a DNS entry, which sets a given instance as the target for the namespace in use by users.

You can achieve this architecture using one of two methods:

  • Traditional multi-site redundancy with a passive second cluster in Site 2, powered on and ready to handle requests upon failover
  • Using VMware Site Recovery Manager™ (SRM)

Using Site Recovery Manager greatly simplifies setting up disaster recovery in Workspace ONE Access and is the recommended method for multi-site deployment. For details, see Performing Disaster Recovery for Workspace ONE Access Using Site Recovery Manager.

Both failover methods take about the same amount of time. The traditional method often requires performing manual tasks to achieve failover to the second site, whereas Site Recovery Manager takes some time powering up the replicated virtual machines. In this reference architecture, we focus on the traditional multi-site disaster recovery architecture, mainly because it is the most complex to set up and operate.

For the traditional multi-site architecture, within each site, Workspace ONE Access must be installed with a minimum of three appliances. This provides local redundancy and ensures that services such as Elasticsearch function properly.

A local load balancer distributes the load between the local Workspace ONE Access instances, and a failure of an individual appliance is handled with no outage to the service or failover to the second site. Each local site load balancer is also load-balanced with a global load balancer.

At each site, two Workspace ONE Access Connector servers are hosted in the internal network and use an outbound-only connection to the Workspace ONE Access service appliances. These connectors connect over TCP 443 (HTTPS) to the global load balancer and to the Workspace ONE Access service URL. It is therefore critical that the Workspace ONE Access Connectors be able to resolve the Workspace ONE Access service URL.

Table 69: Disaster Recovery Strategy for On-Premises Workspace ONE Access

Decision

A second set of servers was installed in a second data center following the traditional multi-site architecture. The number and function of the servers was the same as sized for the primary site.

Justification

This strategy provides the same disaster recovery capacity for all Workspace ONE Access on-premises services as using the VMware Site Recovery Manager method. But due to the extra complexity in setting up the traditional method, it is beneficial to explain the process in this reference architecture.

Multi-site Database

You must ensure that the database remains accessible in the event of failover to the second site. Supported approaches include using Site Recovery Manager, replicating the Microsoft SQL database server, or using native Microsoft SQL Server functionality.

VMware Identity Manager 2.9 (and later) supports Microsoft SQL Server 2012 (and later) and its cluster offering Always On availability groups. This allows us to deploy multiple instances of Workspace ONE Access that point to the same database. The database is protected by an availability group, with an availability group listener as the single Java Database Connectivity (JDBC) target for all instances.

Workspace ONE Access is supported with an active/passive database instance with failover to the secondary site if the primary site becomes unavailable. Depending on the configuration of SQL Server Always On, inter-site failover of the database can be automatic, though not instantaneous.

For this reference architecture, we chose an Always On implementation with the following specifications: 

  • No shared disks were used.
  • The primary database instance ran in Site 1 during normal production.

Again, this decision was based on our desire to address the extra complexity in setting up and maintaining SQL Always On in comparison to using the Site Recovery Manager method.

Within a site, Windows Server Failover Clustering (WSFC) was used to improve local database availability and redundancy. In a WSFC cluster, two Windows servers are clustered together to run one instance of SQL Server, which is called a SQL Server failover cluster instance (FCI). Failover of the SQL Server services between these two Windows servers is automatic. This architecture is depicted in the following figure.

On-Premises Multi-site Workspace ONE Access Architecture

Figure 54: On-Premises Multi-site Workspace ONE Access Architecture

For this design, Workspace ONE Access was configured as follows:

  • It uses a hot standby deployment.
  • Workspace ONE Access nodes in Site 1 form an Elasticsearch cluster and an Ehcache cluster. Nodes in Site 2 form a separate Elasticsearch cluster and Ehcache cluster.
    Elasticsearch and Ehcache are embedded in the Workspace ONE Access virtual appliance.
    Note: Elasticsearch is a search and analytics engine used for auditing, reports, and directory sync logs. Ehcache provides caching capabilities.
  • Only the active site can service user requests.
  • An active Workspace ONE Access group exists in the same site as the primary replica for the Always On availability group.

Note: To implement this strategy, you must perform all the tasks described in Deploying Workspace ONE Access in a Secondary Data Center for Failover and Redundancy. One step that is easily overlooked is the editing of the runtime-config.properties file in the secondary data center. For more information, see Edit runtime-config.properties File in Secondary Data Center to Set Read-Only Mode.

All JDBC connection strings for Workspace ONE Access appliances should point to the SQL Server availability group listener (AGL) and not directly to an individual SQL Server node. For detailed instructions about deploying and configuring the Workspace ONE Access, creating SQL Server failover cluster instances, creating an Always On availability group, and configuring Workspace ONE Access appliances to point to the AGL, see Appendix C: Workspace ONE Access Configuration for Multi-site Deployments.
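As a hedged illustration (the listener host and database name are placeholders, not values from this design), a connection string that targets the AGL follows the standard SQL Server JDBC URL format:

```python
def jdbc_url_for_agl(listener_host: str, port: int = 1433,
                     database: str = "saas") -> str:
    """Build a SQL Server JDBC URL that points at the availability
    group listener (AGL) rather than an individual SQL node."""
    return f"jdbc:sqlserver://{listener_host}:{port};databaseName={database}"

# Every Workspace ONE Access appliance uses the same listener target.
print(jdbc_url_for_agl("agl.example.com"))
# jdbc:sqlserver://agl.example.com:1433;databaseName=saas
```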

If your organization has already deployed Always On availability groups, consult with your database administrator about the requirements for the database used with Workspace ONE Access.

The SQL Server Always On setup can be configured to automatically fail over and promote the remaining site’s database to become the primary.

Table 70: Strategy for Multi-site Deployment of the On-Premises Database

Decision

Microsoft SQL Server was deployed in both sites.

A SQL Always On availability group was used.

Justification

This strategy provides replication of the SQL database to the second site and a mechanism for recovering the SQL database service in the event of a site outage. We chose this option over the simpler Site Recovery Manager one because it needs extra explanation.

Prerequisites

This section details the prerequisites for the Workspace ONE Access configuration.

vSphere and ESXi

Although several versions are supported, we used VMware vSphere® version 6.7.0.42200 build 15679281, and VMware ESXi™ hosts ran version 6.7.0 build 15160138. See the VMware Product Interoperability Matrices for more details about supported versions.

NTP

Your entire Workspace ONE Access implementation must be correctly time-synchronized. By default, the Workspace ONE Access appliances pick up the time from the underlying ESXi host, so the Network Time Protocol (NTP) must be correctly configured on all hosts and synchronized to an NTP server. Turn on time sync at the ESXi host level, using an NTP server, to prevent time drift between virtual appliances. If you deploy multiple virtual appliances on different hosts, make sure all ESXi hosts are time-synced.

You can override the default behavior by specifying NTP server settings in the Virtual Appliance Configuration settings on each appliance. For more information see Configuring Time Synchronization for the Workspace ONE Access Service.
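As an illustrative fragment (the server names are placeholders, and the file path is an assumption about the host's NTP daemon configuration), each ESXi host is pointed at the same time sources so that all appliances derive consistent time:

```text
# NTP daemon configuration on each ESXi host (/etc/ntp.conf)
# Placeholder servers - use your organization's time sources
server ntp1.example.com
server ntp2.example.com
```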

Network Configuration

  • Static IP addresses and DNS Forward (A) and Reverse (PTR) records are required for all servers and the Workspace ONE Access service URL.
  • Inbound firewall port 443 must be open so that users outside the network can connect to the Workspace ONE Access service URL load balancer.
  • Other inbound ports might be required, depending on which authentication methods your use cases require.

Active Directory

Workspace ONE Access 20.01 supports Active Directory configurations on Windows 2008 R2, 2012, 2012 R2, 2016, and 2019, with a domain functional level and forest functional level of Windows 2003 and later, including:

  • Single AD domain
  • Multidomain, single forest
  • Multiforest with trust relationships
  • Multiforest with untrusted relationships
  • Active Directory Global Catalog (optional for Directory Sync)

For this reference architecture, Windows Server 2016 Active Directory was used.

Virtual Machine Build

Specifications are detailed in Appendix A: VM Specifications. Each server was deployed with a single network card, and static IP address information was allocated for each server. Windows Server 2016 was used for the Workspace ONE Access Connector servers. Workspace ONE Access virtual appliances included the required SUSE Linux Enterprise Server (SLES) operating system.

Installation and Initial Configuration

The major steps for on-premises installation and initial configuration of Workspace ONE Access using Connector version 19.03 are depicted in the following diagram.

Workspace ONE Access Installation and Configuration Steps

Figure 55: Workspace ONE Access Installation and Configuration Steps

Workspace ONE Access OVA

The Workspace ONE Access service appliance is delivered as an Open Virtualization Format (OVF) template and deployed using the VMware vSphere® Web Client. For information on deploying the Workspace ONE Access service appliance, see Deploying Workspace ONE Access. Before you deploy the appliance, it is important to have DNS records (A and PTR) and network configuration specified. As you complete the OVF Deployment Wizard, you will be prompted for this information.

Note: In the OVF Deployment Wizard, you must specify the appliance’s FQDN in the Host Name field (not only the host name), as shown in the following figure.

Workspace ONE Access OVF Wizard

Figure 56: Workspace ONE Access OVF Wizard

After deployment and on the first boot, you must enter passwords for the SSHUSER, ROOT, and ADMIN users. By default, SSH is disabled for the ROOT user. If you want to SSH into the appliance, you must do so using the SSHUSER account, and you can then switch user to ROOT. The ADMIN user is your local administrator in the Workspace ONE Access web console.

After you configure directory sync, VMware recommends that you promote at least one synced user to administrator and use this account for your everyday operations. The local ADMIN password is also used to access the appliance settings page. You can later change these two passwords so that they are not the same. For more information, see Manage Your Appliance Passwords.

You will also be prompted to complete database setup. Here, you enter the JDBC connection string, username, and password. For more information, see Appendix C: Workspace ONE Access Configuration for Multi-site Deployments.

Workspace ONE Access Configuration

After initial setup, you can access the Workspace ONE Access web console. Because Workspace ONE Access depends heavily on the Workspace ONE Access service URL, VMware recommends that you configure this service URL first. For more information, see Modifying the Workspace ONE Access Service URL. If you have issues changing the service URL, see the troubleshooting tips in Troubleshooting FQDN Updates: VMware Workspace ONE Access Operational Tutorial.

After you have changed the service URL, do not forget to enable the New User Portal UI. Enter the license key, and generate activation codes for your Workspace ONE Access Connectors. Now is also a good time to turn on logging to an external Syslog server.

Workspace ONE Access Connector Configuration

The Workspace ONE Access Connector is delivered as a Windows installer and is deployed by installing it on an existing Windows machine. For more information about deploying the Workspace ONE Access Connector version 19.03 (the version used in this document), see Installing and Configuring VMware Identity Manager Connector 19.03.0.0 (Windows).

On first boot, you are prompted for the local admin user’s password. This password is used to access the appliance configuration of the Workspace ONE Access Connector. As the final step of the Workspace ONE Access Connector Setup Wizard, you are prompted for the connector activation code generated in the previous step.

Cluster Configuration

The procedure to create a Workspace ONE Access cluster is described in Configuring Failover and Redundancy in a Single Datacenter. Make sure to start the original appliance first and allow it to fully start up all services before powering on the other nodes.

Verify that the Elasticsearch cluster health is green. After the cluster is operational, VMware recommends always powering down the Elasticsearch master last. When you power the cluster back on, try to always start the Elasticsearch master first. You can see which appliance is currently the Elasticsearch master by visiting the System Diagnostics page in the admin console; under the Integrated Components section, the Elasticsearch master node is listed. For troubleshooting Elasticsearch cluster health issues, see Troubleshooting Elasticsearch Cluster Health: VMware Workspace ONE Access Operational Tutorial.

Directory Configuration

Although using local users (rather than syncing users from an existing directory) is supported, most implementations of Workspace ONE Access do synchronize users from a Microsoft Active Directory. Active Directory configuration involves creating a connection to Active Directory, selecting a bind account with permission to read from AD, choosing groups and users to sync, and initiating a directory sync. You can specify what attributes users in Workspace ONE Access should have. See Set User Attributes at the Global Level for more information.

Note: The required flag on an attribute means only that if a user in Active Directory does not have that attribute populated, the user will not be synced to Workspace ONE Access. If the attribute is populated and Workspace ONE Access has it mapped to an internal attribute, the value is synced whether or not the required flag is set. Therefore, most of the time, you do not need to mark attributes as required.

Connector Updates

Configure all connectors with the same authentication methods and add them to the Workspace IdP. The Workspace ONE Access Connector supports adding an external Syslog server for external log collection.

Service Updates

Make sure all the Workspace ONE Access Connectors are added to the built-in IdP and verify that authentication methods are configured in outbound-only mode. Configuring network ranges, authentication methods, and access policies is an important task to complete before allowing users access to Workspace ONE Access.

Application Catalog

Finally, you configure application integration and publish applications in the Workspace ONE Access user catalog. You can specify application-specific access policies.

Integration with Workspace ONE Unified Endpoint Management (UEM)

To leverage the breadth of the Workspace ONE experience, you should integrate Workspace ONE UEM and Workspace ONE Access into Workspace ONE. After integration:

  • Workspace ONE UEM can use Workspace ONE Access for authentication and access to SaaS and VMware Horizon applications.
  • Workspace ONE can use Workspace ONE UEM for device enrollment and management.

See About Deploying VMware Workspace ONE and Working with Hub Services and Intelligent Hub App for more details.

Access to Resources Through Workspace ONE Access

Workspace ONE Access powers the Workspace ONE Hub catalog, providing self-service access to company applications for business users. Workspace ONE Access is responsible for the integration with web-based SaaS applications, internal web applications, Citrix, and VMware Horizon for the delivery of virtual desktops and published applications. All these desktops and apps are displayed to the user in the catalog based on directory entitlements.

Based on the types of applications to be delivered to end users, the catalog is configured to integrate with the relevant services.

Workspace ONE Native Mobile Apps

For many users, their first experience with Workspace ONE is through the Workspace ONE native mobile application, VMware Workspace ONE® Intelligent Hub, which displays a branded self-service catalog. The catalog provides the necessary applications for the user to do their job, and also offers access to other company resources, such as a company directory lookup. Native operating features, such as Apple Touch ID on an iOS device or Windows Hello on Windows 10, can be used to enhance the user experience.

The Workspace ONE Intelligent Hub app:

  • Delivers a unified application catalog of web, mobile, Windows, macOS, and virtual applications to the user.
    Through integration, Workspace ONE Access applications are aggregated with Workspace ONE UEM–delivered applications.
  • Provides a launcher to access the web, SaaS apps, and Horizon and Citrix virtual desktops and apps to give a consolidated and consistent way of discovering and launching all types of applications.
  • Gives the user the ability to search across an enterprise’s entire deployment of application resources.
  • Offers SSO technology for simple user access to resources without requiring users to remember each site’s password.
  • Can search the company’s user directory, retrieving employees’ phone numbers, email addresses, and position on the org chart.

Workspace ONE App

Figure 57: Workspace ONE App

The Workspace ONE native app is available from the various app stores and can be deployed through Workspace ONE UEM as part of the device enrollment process. Platforms supported are iOS, Android, macOS, and Windows 10.

SaaS Apps

SaaS applications, such as Concur and Salesforce, are often authenticated through federation standards, such as Security Assertion Markup Language (SAML), to offload authentication to an identity provider. These applications are published through Workspace ONE Access and allow users seamless SSO access while being protected by the rich access policies within Workspace ONE Access.

The cloud application catalog in Workspace ONE Access includes templates with many preconfigured parameters to make federating with the SaaS provider easier. For SaaS providers where there is no template, a wizard guides you through configuring the application and entitling users. Workspace ONE Access supports SAML and OpenID Connect protocols for federation. Workspace ONE Access also supports WS-Fed for integration with Microsoft Office 365.

Administrator Adding a New SaaS Application to the Catalog

Figure 58: Administrator Adding a New SaaS Application to the Catalog

 

The Intelligent Hub Application Catalog for End Users

Figure 59: The Intelligent Hub Application Catalog for End Users

VMware Horizon Apps and Desktops

The capability to deliver virtual apps and desktops continues to be a significant value for Workspace ONE users. Workspace ONE Access can be integrated with a VMware Horizon implementation to expose the entitled apps and desktops to end users. Through VMware Horizon® Client™ for native mobile platforms, access to these resources can be easily extended to mobile devices.

You must deploy the Workspace ONE Access Connector version 19.03 to provide access to Horizon resources from the Workspace ONE Access cloud-based or on-premises service. The connector enables you to synchronize entitlements to the service.

Note: Workspace ONE Access does not proxy or tunnel traffic to the resource, whether that resource is a web app or a Horizon or Citrix desktop or app. The end user’s device must be able to connect to the resource directly. Access can be established in many ways, for example, through VPN, Per-App VPN, or by publishing the resource on the Internet.

Refer to Setting Up Resources in VMware Workspace ONE Access (Cloud) or Setting Up Resources in VMware Workspace ONE Access (On-Premises) for more details on how to add applications and other resources to the Workspace ONE Hub catalog.

Component Design: Workspace ONE Intelligence Architecture

The shift from traditional mobile device management (MDM) and PC management to a digital workspace presents its own challenges.

  • Data overload – When incorporating identity into device management, IT departments are deluged by an overwhelming volume of data from numerous sources.
  • Visibility silos – From a visibility and management standpoint, working with multiple unintegrated modules and solutions often results in security silos.
  • Manual processes – Traditional approaches such as using spreadsheets and scripting create bottlenecks and require constant monitoring and corrections.
  • Reactive approach – The process of first examining data for security vulnerabilities and then finding solutions can introduce delays. These delays significantly reduce the effectiveness of the solution. A reactive approach is not the best long-term strategy.

VMware Workspace ONE® Intelligence™ is designed to simplify user experience without compromising security. The intelligence service aggregates and correlates data from multiple sources to give complete visibility into the entire environment. It produces the insights and data that will allow you to make the right decisions for your VMware Workspace ONE® deployment. Workspace ONE Intelligence has a built-in automation engine that can create rules to take automatic action on security issues.

Workspace ONE Intelligence Logical Overview

Figure 60: Workspace ONE Intelligence Logical Overview

Table 71: Implementation Strategy for Workspace ONE Intelligence

Decision

Workspace ONE Intelligence was implemented.

Justification

The intelligence service aggregates and correlates data from multiple sources to optimize resources and strengthen security and compliance across the entire digital workspace.

Architecture Overview

Workspace ONE Intelligence is a cloud-only service, hosted on Amazon Web Services (AWS), that offers the following advantages:

  • Reduces the overhead of infrastructure and network management, which allows users to focus on utilizing the product.
  • Complements the continuous integration and continuous delivery approach to software development, allowing new features and functionality to be released with greater speed and frequency.
  • Helps with solution delivery by maintaining only one version of the software without any patching.
  • AWS is an industry leader in cloud infrastructure, with a global footprint that enables the service to be hosted in different regions around the world.
  • AWS offers a variety of managed services out-of-the-box for high availability and easy monitoring.
  • Leveraging these services allows VMware to focus on product feature development and security rather than infrastructure management.

Workspace ONE Intelligence includes the following components.

Table 72: Components of Workspace ONE Intelligence
Component Description

Workspace ONE Intelligence Connector

An ETL (Extract, Transform, Load) service responsible for collecting data from the Workspace ONE database and feeding it to the Workspace ONE Intelligence cloud service.

Intelligence Cloud Service

Aggregates all the data received from an Intelligence Connector and generates and schedules reports.

Populates the Workspace ONE Intelligence dashboard with different data points, in the format of your choice.

Consoles

Workspace ONE Intelligence currently leverages two consoles:

  • Workspace ONE UEM Console
  • Workspace ONE Intelligence Console

Data sources

VMware Workspace ONE® UEM, Workspace ONE Access™, Workspace ONE Intelligence SDK, Common Vulnerability and Exposures (CVE), and Workspace ONE Trust Network.

Scalability and Availability

The Workspace ONE Intelligence service is currently hosted in six production regions, including Oregon (two locations), Ireland, Frankfurt, Tokyo, and Sydney. It leverages the same auto-scaling and availability principles as those described in AWS Auto Scaling and High Availability (Multi-AZ) for Amazon RDS.

Database Design

Workspace ONE Intelligence uses a variety of databases, depending on the data type and purpose. These databases are preconfigured and offered out-of-the-box as part of the cloud service offering; no additional configuration is necessary.

Table 73: Workspace ONE Intelligence Databases
Database Type Description

Amazon S3

  • Ultimate source of truth
  • Cold storage for all data required for database recovery if needed
  • Also used actively for scenarios such as app analytics loads and usage

Dynamo DB

  • Managed service of AWS
  • Stores arbitrary key-value pairs for different data types
  • Data resource for reports for dashboard and subscriptions

Elasticsearch – History

  • Historical charts
  • Historical graphs

Elasticsearch – Snapshot

  • Report previews
  • Current counts

Data Sources for Workspace ONE Intelligence

The following figure shows how the various data sources contribute to Workspace ONE Intelligence.

Workspace ONE Intelligence Data Sources

Figure 61: Workspace ONE Intelligence Data Sources

Workspace ONE Unified Endpoint Management

After a device is enrolled with Workspace ONE UEM, it starts reporting a variety of data points to the Workspace ONE UEM database, such as device attributes, security posture, and application installation status. Along with this, Workspace ONE UEM also gathers information about device users and user attributes from local databases and from Active Directory.

After the administrator opts in to Workspace ONE Intelligence, the Workspace ONE Intelligence Connector starts sending data. The data is aggregated and correlated by the platform for display purposes and to perform automated actions that enhance security and simplify user experience.

Workspace ONE Intelligence Components for UEM

Figure 62: Workspace ONE Intelligence Components for UEM

The Workspace ONE Intelligence Connector service is responsible for aggregating the data from Workspace ONE UEM and feeding it to Workspace ONE Intelligence. After the data is extracted, the Workspace ONE Intelligence service processes it to populate dashboards and to generate reports based on the attributes selected by the intelligence administrator.

Your Workspace ONE Intelligence region is assigned based on your Workspace ONE UEM SaaS deployment location. No additional configuration is required to leverage Workspace ONE Intelligence. Find your shared and dedicated SaaS Workspace ONE UEM location and see its corresponding Workspace ONE Intelligence region at Workspace ONE UEM SaaS Environment Location Mapped to a Workspace ONE Intelligence Region.

Workspace ONE Intelligence Connector Design for On-Premises Deployment

When deploying Workspace ONE UEM on-premises, you will be asked to select a region to send data to during the installation of the Workspace ONE Intelligence Connector service. The service is available across multiple regions. To see a complete list of regions, see URLs to Whitelist for On-Premises by Region.

The Workspace ONE Intelligence Connector service supports high availability and disaster recovery scenarios in either active/active or active/passive mode. VMware highly recommends deploying the Intelligence Connector service in a highly available configuration to ensure continued access to Workspace ONE UEM data through Workspace ONE Intelligence, and to ensure continued execution of automations.

For high availability, deploy and enable at least two Intelligence Connectors. Both must be connected to the same SQL Server Always-On Listener. The synchronization of Workspace ONE UEM data with Workspace ONE Intelligence is performed by only one connector at a time—the active connector. If the active connector fails, one of the other connectors becomes the active connector and continues synchronization. This architecture is shown in the following figure.

Multi-site Architecture for Workspace ONE Intelligence Connector

Figure 63: Multi-site Architecture for Workspace ONE Intelligence Connector

The Sync Status page in the Workspace ONE Intelligence Console reports the hostname of the Intelligence Connector that is actively synchronizing data.
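The active/passive behavior described above can be sketched as follows. This is a hypothetical illustration of how an active connector might be selected and failed over, not the connector’s actual implementation; all names are made up.

```python
from dataclasses import dataclass

@dataclass
class Connector:
    hostname: str
    healthy: bool

def pick_active(connectors, current_active=None):
    """Return the hostname that should synchronize data.

    Keeps the current active connector while it is healthy; otherwise
    fails over to the first healthy standby. Returns None when no
    connector is healthy. (Illustrative logic only.)
    """
    by_name = {c.hostname: c for c in connectors}
    current = by_name.get(current_active)
    if current and current.healthy:
        return current.hostname
    for c in connectors:
        if c.healthy:
            return c.hostname
    return None

# Site A connector is active until it fails; the Site B standby takes over.
pool = [Connector("intel-conn-a1", True), Connector("intel-conn-b1", True)]
assert pick_active(pool, "intel-conn-a1") == "intel-conn-a1"
pool[0].healthy = False
assert pick_active(pool, "intel-conn-a1") == "intel-conn-b1"
```

Only one connector synchronizes at a time, which matches the single-active-connector model the Sync Status page reports on.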

For multi-site designs, the same HA principle applies. Deploy and enable the additional Workspace ONE Intelligence Connectors on the secondary site, and have them connect to the same SQL Server Always-On Listener. If your DR strategy for Workspace ONE UEM is based on active/passive mode, you can keep the Intelligence Connectors disabled on the secondary site and enable them only when executing the DR strategy.

For more information on how to deploy Workspace ONE Intelligence Connector service for on-premises Workspace ONE UEM, see Workspace ONE UEM and Workspace ONE Intelligence Integration.

Table 74: Implementation Strategy for On-Premises Workspace ONE UEM

Decision

Two instances of Workspace ONE Intelligence Connector were deployed on each site and configured to aggregate data from the Workspace ONE UEM instance.

Justification

The Workspace ONE Intelligence Connector is required to send on-premises Workspace ONE UEM data to Workspace ONE Intelligence.

Common Vulnerabilities and Exposures (CVE)

CVE is a list of entries for publicly known cybersecurity vulnerabilities. With regard to Windows 10 managed devices, the CVE integration in Workspace ONE Intelligence performs a daily import of CVE details, as well as risk scores derived from the Common Vulnerability Scoring System (CVSS) defined by the National Vulnerability Database (NVD).

Because Workspace ONE UEM provides an update service for Windows 10 managed devices based on KBs released by Microsoft, Workspace ONE Intelligence is able to correlate its imported CVE details and risk scores with the Microsoft KBs.

The CVE information allows IT administrators and security teams to prioritize which vulnerabilities to fix first and helps them gauge the impact of vulnerabilities on their systems. This can be achieved through daily or even hourly reporting to security teams of all devices that are deemed vulnerable based on CVSS score.
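The correlation described above can be sketched as a simple join between imported CVE records and per-device KB inventories. The record shapes, field names, and sample data here are hypothetical; the real correlation happens inside the Intelligence service.

```python
# Hypothetical records: CVE entries carry an NVD CVSS score and the
# Microsoft KB that remediates them; devices report installed KBs.
cves = [
    {"id": "CVE-2021-0001", "cvss": 9.8, "fixed_by_kb": "KB5004945"},
    {"id": "CVE-2021-0002", "cvss": 4.3, "fixed_by_kb": "KB5004237"},
]
devices = [
    {"name": "WIN10-01", "installed_kbs": {"KB5004945"}},
    {"name": "WIN10-02", "installed_kbs": set()},
]

def vulnerable_devices(cves, devices, min_cvss=7.0):
    """Report devices missing the KB that remediates a high-severity CVE."""
    report = {}
    for cve in cves:
        if cve["cvss"] < min_cvss:
            continue  # only CVEs at or above the chosen CVSS threshold
        missing = [d["name"] for d in devices
                   if cve["fixed_by_kb"] not in d["installed_kbs"]]
        if missing:
            report[cve["id"]] = missing
    return report

# Only the CVSS 9.8 entry qualifies, and only WIN10-02 lacks its KB.
assert vulnerable_devices(cves, devices) == {"CVE-2021-0001": ["WIN10-02"]}
```

The `min_cvss=7.0` default mirrors the common practice of prioritizing CVSS scores of 7 or higher, as in the dashboard widget mentioned later in this chapter.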

Custom dashboards can then provide insights and real-time visibility into the security risks affecting all managed devices. The Workspace ONE Intelligence rules engine can take automated remediation actions, such as applying patches to the impacted devices.

CVE Metrics Based on Workspace ONE Intelligence

Figure 64: CVE Metrics Based on Workspace ONE Intelligence

So long as Workspace ONE UEM is integrated with Workspace ONE Intelligence through the Intelligence Connector service, no additional configuration is required to obtain and correlate CVE data.

Table 75: Strategy for Monitoring Security Risks to Windows 10 Devices

Decision

The Workspace ONE Intelligence dashboard was configured to provide real-time visibility into the impact of CVE entries on Windows 10 managed devices.

Justification

Workspace ONE Intelligence increases security and compliance across the environment by providing integrated insights and automating remediation actions.

Risk Analytics

Risk analytics is a feature of Workspace ONE Intelligence that assesses user risk by identifying practices that hinder security, such as employee negligence. Risk analytics provides a risk score for every device and user in an organization.

Each device gets a calculated risk score (low, medium, or high) on a daily basis, and for users with multiple devices, the user gets a risk score that is an aggregate of their scored devices. An organization needs to have at least 100 devices on the same platform for risk analytics to provide an accurate score. The risk analytics workflow consists of the following steps:

  1. Ingesting device data from Workspace ONE UEM
  2. Assessing risky user behaviors
  3. Computing a personalized risk score for every user and device, leveraging machine learning
  4. Conducting an automated response to mitigate the risks associated with the risky behaviors

The results are then available in the User Risk dashboard, as shown in the following figure.

User Risk Dashboard Showing Risk Behavior Over Time in Workspace ONE Intelligence

Figure 65: User Risk Dashboard Showing Risk Behavior Over Time in Workspace ONE Intelligence

A personalized risk score is calculated once a day for each device and user in an organization only when:

  • Workspace ONE Intelligence is integrated with Workspace ONE UEM, as devices and applications are the main source of data for the machine learning models used by risk analytics.
  • The environment contains at least 100 devices per platform that have been active in the past 14 days.

Note: Users with more than 6 devices are discarded because those devices are considered shared devices.

A user and device can get a score of low, medium, or high:

  • Low – Indicates little potential to introduce threats and vulnerabilities to the network and internal resources.
  • Medium – Indicates a moderate potential to introduce threats and vulnerabilities to the network and internal resources.
  • High – Indicates a great potential to introduce threats and vulnerabilities to the network and internal resources.

A score is assigned to each device based on the risk indicators identified. The user score is the aggregation of device scores for all the devices owned by the user. The risk indicators that contribute to risk scoring are organized into device and application categories.
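The device-to-user aggregation can be sketched as follows. VMware does not publish the exact aggregation formula, so this sketch assumes the most conservative plausible rule: a user is as risky as their riskiest device.

```python
# Ordering of the three risk levels used by Workspace ONE Intelligence.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def user_risk(device_scores):
    """Aggregate per-device risk levels into a user-level score.

    Assumption for illustration: the user score is the worst (highest)
    of the user's device scores. Returns None for a user with no
    scored devices.
    """
    if not device_scores:
        return None
    return max(device_scores, key=lambda s: LEVELS[s])

assert user_risk(["low", "medium"]) == "medium"
assert user_risk(["low", "high", "low"]) == "high"
```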

The device risk indicators are:

  • Laggard Update – A person who lags behind others when it comes to updating software to the latest release.

  • Risky Setting – A person who is reluctant to enable stricter security settings on a device.

Application risk indicators evaluate only mobile unmanaged apps when Workspace ONE UEM privacy settings allow access to the Personal Applications list. The application risk indicators are:

  • Compulsive App Download – A person who downloads and installs an unusually large number of applications within a 14-day period and after 30 days of enrollment.

  • Unusual App Download – A person who downloads and installs rare applications in a 14-day period and after 30 days of enrollment.

  • App Collector – A person who has an unusually large number of applications on the device, no matter the timeframe.

  • Rare App Collector – A person who has an unusually large number of rare applications on the device, no matter the timeframe.
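One of the application indicators above, Compulsive App Download, can be sketched as a windowed count over install dates. The threshold of 20 installs is invented for illustration; the service derives what counts as “unusually large” from its machine learning models, not from a fixed number.

```python
from datetime import date, timedelta

def compulsive_app_download(install_dates, enrollment_date, today,
                            threshold=20):
    """Flag an unusually large number of app installs in the last
    14 days, ignoring the first 30 days after enrollment (when bulk
    installs are expected). The threshold is a made-up value.
    """
    if today - enrollment_date < timedelta(days=30):
        return False
    window_start = today - timedelta(days=14)
    recent = [d for d in install_dates if d >= window_start]
    return len(recent) >= threshold

enroll = date(2021, 1, 1)
today = date(2021, 6, 30)
installs = [today - timedelta(days=2)] * 25  # burst of recent installs
assert compulsive_app_download(installs, enroll, today) is True
# Within 30 days of enrollment, bulk installs are expected, not risky.
assert compulsive_app_download(installs, today - timedelta(days=10), today) is False
```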

Administrators can access the calculated score through out-of-the-box dashboards, custom dashboards, and reports. The User Risk and Device Risk dashboards allow IT administrators to identify risk trends at the organization level, showing the high-risk devices and users over time. With these dashboards, administrators can also access detailed information to identify why a user or device is getting flagged as high risk.

Device Risk Dashboard with Detailed View of Each Device

Figure 66: Device Risk Dashboard with Detailed View of Each Device

Support for automation is available based on device risk-score events. Automation allows administrators to mitigate risk based on device score and risk indicator changes, taking action against the device and third-party systems.

Risk analytics and automation can also be used to nudge (influence) users’ behaviors and lead to greater compliance across an organization. For instance, social proof feedback could be implemented to help users identify desirable behaviors and motivate them to take action. For example, a user who receives a Laggard Update risk indicator can be nudged with a simple social proof intervention, such as the following message: “Dear user, your device is running an old operating system; 99% of the devices in your organization are running a newer version. Here are the instructions to update your device...”

The following screenshot shows an example of how an administrator might configure a Slack message to be sent when the system finds a certain risk score in combination with a certain number of risk indicators.

Automation Based on Device Risk Events

Figure 67: Automation Based on Device Risk Events

Risk analytics integrates with Workspace ONE Access (cloud only), enhancing conditional access beyond mere device compliance. Risk analytics adds risk-based conditional access, allowing Workspace ONE Access to assess risk in real time before allowing users to access their business applications.

Integration between Workspace ONE Intelligence and Workspace ONE Access is required to allow the administrator to enable the Risk Score Adapter in Workspace ONE Access. The administrator can then select the type of actions that identify low-, medium-, and high-risk users when those users attempt to access their business applications through Workspace ONE.

When enabling the Risk Score Adapter, administrators must configure the type of action to apply to the score. The action associated with the risk score determines the user experience:

  • Allow Access – The user can log in, and access policy rules are followed.

  • Step-Up Authentication – The user cannot log in with only the credentials that were entered. The next authentication method configured in the access policy is presented to the user.

  • Deny Access – The user cannot log in and no other login option is presented to the user.

This configuration will enable the risk-score authentication method on the access policy, allowing the use of risk-based conditional access.

Important: The risk score can be used only after the first authentication method is applied because the user must be identified before the risk score can be looked up.
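The mapping from risk score to action can be sketched as a small lookup. The policy mapping shown is an example configuration, not a recommendation, and the function names are hypothetical; the real configuration is done in the Workspace ONE Access console.

```python
def apply_risk_policy(risk_score, policy):
    """Map a user's looked-up risk score to the configured
    conditional-access action (allow, step-up, or deny)."""
    action = policy[risk_score]
    if action == "allow":
        return "login permitted; access policy rules are followed"
    if action == "step_up":
        return "additional authentication method required"
    return "login denied"

# Example policy: step up medium-risk users, block high-risk users.
example_policy = {"low": "allow", "medium": "step_up", "high": "deny"}
assert apply_risk_policy("low", example_policy).startswith("login permitted")
assert apply_risk_policy("high", example_policy) == "login denied"
```

Note that, as stated above, this lookup can only run after the first authentication method has identified the user.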

Workspace ONE Access Policy Configuration That Uses a Risk-Score Authentication Method

Figure 68: Workspace ONE Access Policy Configuration That Uses a Risk-Score Authentication Method

For an in-depth understanding of risk analytics, instructions for integration with Workspace ONE Access, and a demonstration of integration in action, watch the video Workspace ONE Intelligence: Understanding Risk Analytics - Deep Dive.

Workspace ONE Access

Integrating Workspace ONE Access with Workspace ONE Intelligence allows administrators to track login and logout events for applications in the Workspace ONE catalog. This integration also captures application launches in the Workspace ONE catalog for both Service Provider (SP)–initiated and Identity Provider (IdP)–initiated workflows. This information is available for web, native, and virtual applications and is presented in both preconfigured and custom dashboards.

IT administrators can gather insight into:

  • Application adoption – By determining how many unique users have launched a particular application
  • Application engagement – By collecting user-experience statistics about the most-used applications
  • Security issues – By examining data about failed login attempts

User login events are represented by the following types:

  • Login – A user attempts to access an app listed in the Workspace ONE catalog.

    Note: Logging in to the Workspace ONE catalog by itself does not count as a user login.

  • Logout – A user manually logs out of the Workspace ONE catalog. A logout event is not generated when:
    • A user logs out of a particular app, because the user is still authenticated to the catalog.
    • The session times out or the user closes the browser.
  • Login failures – A user enters an incorrect password, the second factor of two-factor authentication is incorrect, the certificate is missing, and so on.
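Tallying these event types into metrics such as daily unique users can be sketched as a simple aggregation. The event dictionary fields are hypothetical; the real event schema belongs to the Intelligence data model.

```python
def summarize_login_events(events):
    """Tally Workspace ONE Access events by type and count the unique
    users who logged in. Field names are illustrative."""
    counts = {"login": 0, "logout": 0, "login_failure": 0}
    unique_users = set()
    for e in events:
        counts[e["type"]] += 1
        if e["type"] == "login":
            unique_users.add(e["user"])
    return counts, len(unique_users)

events = [
    {"type": "login", "user": "alice"},
    {"type": "login", "user": "alice"},          # repeat login, same user
    {"type": "login_failure", "user": "bob"},    # e.g., wrong password
    {"type": "logout", "user": "alice"},
]
counts, daily_unique_users = summarize_login_events(events)
assert counts == {"login": 2, "logout": 1, "login_failure": 1}
assert daily_unique_users == 1
```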

Daily Unique Users of Workspace ONE represented by Widgets in Workspace ONE Intelligence

Figure 69: Daily Unique Users of Workspace ONE represented by Widgets in Workspace ONE Intelligence

App launch events are captured under two scenarios:

  • A user launches an app from the Workspace ONE catalog (IdP-initiated).
  • A user navigates directly to a web app, so that SSO occurs through Workspace ONE (SP-initiated).

App launch events are captured for web, SaaS, and virtual apps, and for any other type of app configured as part of the Workspace ONE catalog. To provide insights about these apps, Workspace ONE Intelligence displays information about app events through widgets in the Apps dashboard.

Apps Dashboard for the Workday Web App Launched from Workspace ONE Access

Figure 70: Apps Dashboard for the Workday Web App Launched from Workspace ONE Access

To add Workspace ONE Access as a data source to Workspace ONE Intelligence, navigate to Intelligence Settings in the Intelligence dashboard and select Workspace ONE Access. Enter the tenant URL for the Workspace ONE Access cloud-based tenant and select Authorize. For more information, see Workspace ONE Access and Workspace ONE Intelligence Integration.

Only cloud-based instances of Workspace ONE Access can be integrated with Workspace ONE Intelligence. On-premises deployments of Workspace ONE Access cannot be integrated into Workspace ONE Intelligence.

Table 76: Implementation Strategy for Integrating Workspace ONE Access

Decision

Workspace ONE Intelligence was configured to collect data from Workspace ONE Access.

Justification

This strategy collects user data around events and users from Workspace ONE Access and integrates this data with Workspace ONE Intelligence. Web application data displays on the Apps dashboard, allowing the visualization of both Workspace ONE logins and application load events.

Workspace ONE Trust Network

Integrating VMware Workspace ONE® Trust Network with Workspace ONE Intelligence provides insight into threats detected by each of the Trust Network components configured in the environment. With this information, administrators can get insights through predefined dashboards and custom widgets, and administrators can create automations based on threat events.

Workspace ONE Intelligence integrates with VMware Carbon Black Cloud™ (formerly CB Defense) to determine threat activities in real time from Windows and macOS endpoints, allowing IT administrators to define automated actions against managed devices when a threat is identified.

Additional security solutions, such as Lookout for Mobile Threat Defense (MTD) and Netskope for Cloud Security Broker (CSB), can be leveraged to enhance endpoint and data protection. Both products are also part of the Workspace ONE Trust Network and integrate with Workspace ONE Intelligence.

Consolidated Threat View Reported by Trust Network Solutions Over Time in Workspace ONE Intelligence

Figure 71: Consolidated Threat View Reported by Trust Network Solutions Over Time in Workspace ONE Intelligence

For more information on how to integrate Trust Network solutions with Workspace ONE, see Workspace ONE Intelligence and Trust Network Integration.

Table 77: Implementation Strategy for Integrating Trust Network

Decision

Workspace ONE Intelligence was configured to collect data by using both Carbon Black Cloud (formerly CB Defense) and Lookout for Work.

Justification

This strategy collects threats from Windows and macOS devices by using the CB Defense agent, and from iOS and Android devices by using Lookout for Work.

This strategy gives the Workspace ONE Intelligence administrator a consolidated view across the four major device platforms. The administrator can also create automated actions to address threats against managed devices.

App Analytics with Workspace ONE Intelligence SDK

Workspace ONE Intelligence SDK (formerly known as the Apteligent SDK) monitors, prioritizes, helps troubleshoot, and reveals trends of your native mobile app performance issues in real time.

Integrating the Workspace ONE Intelligence SDK with Workspace ONE Intelligence provides insight into app and user behavior analytics. App analytics is available as part of Workspace ONE Intelligence Enterprise apps and Consumer Apps. Both use the Intelligence SDK, and the integration process is the same for both.

Workspace ONE Intelligence SDK is available for iOS and Android platforms, and can be downloaded from the VMware Code downloads page for Workspace ONE Intelligence.

After the Workspace ONE Intelligence SDK is embedded in an app and that app is registered with Workspace ONE Intelligence, a unique app ID is generated, which must be added to the application. The app ID can be hard-coded, but for managed apps, VMware recommends using App Config, which allows the application to receive the app ID from Workspace ONE UEM. This approach simplifies the redeployment of applications without requiring an additional configuration update.

The SDK initialization process requires only a minimal addition of code: you add a method call that passes the app ID generated in Workspace ONE Intelligence and starts reporting App Load events. To learn more about how to initialize the SDK, see the SDK documentation for your platform.
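The app ID lookup order (App Config first, hard-coded fallback) can be sketched conceptually. The real SDK is initialized in Swift or Kotlin, and the App Config key name used here is hypothetical; this Python sketch only illustrates the resolution logic.

```python
def resolve_app_id(managed_config, fallback_app_id=None):
    """Resolve the Workspace ONE Intelligence app ID at startup.

    Prefers the value pushed via App Config by Workspace ONE UEM
    (the key name "IntelligenceAppID" is invented for this example)
    and falls back to a hard-coded ID for unmanaged builds.
    """
    app_id = managed_config.get("IntelligenceAppID") or fallback_app_id
    if app_id is None:
        raise ValueError("no app ID available; SDK cannot be initialized")
    return app_id

# Managed device: UEM supplies the ID through App Config.
assert resolve_app_id({"IntelligenceAppID": "abc123"}) == "abc123"
# Unmanaged build: fall back to the hard-coded ID.
assert resolve_app_id({}, fallback_app_id="dev456") == "dev456"
```

Because the ID arrives through App Config on managed devices, the app binary can be redeployed without rebuilding it with a new hard-coded value.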

As the applications get deployed and are launched by end users on their devices, the SDK sends the data to Workspace ONE Intelligence Cloud Service. The Apps dashboard starts populating the relevant data and correlates data from the Workspace ONE Intelligence SDK and Workspace ONE UEM.

User Flow Metrics on the Apps Dashboard

Figure 72: User Flow Metrics on the Apps Dashboard

The Apps dashboard displays app data fed from the Workspace ONE Intelligence SDK. The data is organized as shown in the following table.

Table 78: App Dashboard Summary

Widget

Description

Overview

Includes metrics for daily active users (DAU), monthly active users (MAU), app loads, and deployment status based on correlated data from the Workspace ONE Intelligence SDK and Workspace ONE UEM.

For the UEM-correlated data, administrators can identify:

  • Total installs – Total number of installations of the application

  • Devices missing app – Number of devices that do not have a specific app

  • App install status – Installation status of the app; for example, installing, failed, pending removal, and managed
  • App version over time – Version of the app for the selected amount of time

  • Installs over time – Number of times the application was installed

User Flows

Includes metrics for key interactions in the app such as login, account registration, in-app purchases, and more. The App Load user flow is available out of the box. Other key user flows that are specific to the apps give detailed visibility into the user flows that succeeded or failed (that is, that crashed or had user-defined failures). These user flows are easy to implement.

Network Insights

Includes metrics to monitor service and API metrics for apps on endpoints. See data concerning error rates, status and error codes, response times, number of calls, and data bandwidth for network calls.

Crashes and Handled Exceptions

Monitors crashes and handled exceptions, giving visibility into error rates, stack traces, diagnostic details, and the users affected. This helps teams prioritize fixes, troubleshoot, and find the root cause so that issues can be resolved.

Breadcrumbs

Includes automatic network events, handled exception events, crash events, and system events such as session start or app foregrounded/backgrounded. These events are stitched together to form a breadcrumb trail before the crash or handled exception occurs. Breadcrumbs are also available for successful and failed user flows.

Settings

Provides the ability to upload symbol (.dSYM) files for iOS and mapping.txt files for Android. Download the Workspace ONE Intelligence SDK for iOS and Android platforms.

Use a custom dashboard to create your own visualization tool and to manipulate data about user behaviors, app loads, crashes, handled exceptions, user flows, network insights, and so on. Use the Workspace ONE Intelligence SDK category to select metrics concerning your SDK-integrated apps.

  • App Loads

  • Custom Events

  • Network Errors

  • Application Events

  • User Flows

  • iOS Crashes

  • Android Crashes

  • Processed iOS Crashes

  • Processed Android Crashes

Processed iOS Crashes and Processed Android Crashes include only symbolicated crash reports. To process crashes successfully, you must upload symbol (.dSYM) files for iOS apps and ProGuard mapping.txt files for Android apps. Workspace ONE Intelligence App analytics requires these files to process crashes successfully, to symbolicate them, and to group them.

The prerequisites for app analytics are that enterprise applications have the Workspace ONE Intelligence SDK embedded in them and that applications are managed by Workspace ONE UEM. For business-to-consumer apps, there is no need for managed devices. However, those types of apps must be registered with Workspace ONE Intelligence for Consumer Apps, which is a separate intelligence tenant that contains only the app analytics features.

The Workspace ONE Intelligence SDK supports the Apple (iOS and tvOS) and Android platforms.

Mobile apps that integrate with the Workspace ONE Intelligence SDK must be registered in Workspace ONE Intelligence by following the instructions in Register Consumer Apps, or through the step-by-step video VMware Workspace ONE Intelligence: Integration with Apteligent - Feature Walk-through. The registration process enables the visualization of app analytics through the Apps dashboard and available widgets.

App Analytics for a Native Mobile App Integrated with the Workspace ONE Intelligence SDK

Figure 73: App Analytics for a Native Mobile App Integrated with the Workspace ONE Intelligence SDK

To take full advantage of app analytics, embed the Workspace ONE Intelligence SDK in your applications. For more information, see the Workspace ONE Intelligence Dev Center.

Insights and Automation

All data collected from the data sources is aggregated and correlated by the Workspace ONE Intelligence service. The data is then made available for visualization from a business, process, and security standpoint. Also, the Workspace ONE Intelligence service can perform automatic actions based on the rules defined in the Intelligence Console.

Dashboards

Dashboards present the historical or latest snapshot of information about the selected attributes, such as devices, users, operating systems, and applications. These dashboards are populated using widgets that are fully customizable, including, for example, layout tools, editing filters, and other options. Information can be displayed in the form of horizontal or vertical bar charts, donuts, and tables. You can also choose a specific date range to visualize historical data. All the widgets can be added as part of My Dashboard.

Following is a summary of some of the predefined widgets.

Table 79: Examples of Out-of-the-Box Dashboard Widgets
Widget Category Metrics

Devices

Number of enrolled devices, operating system breakdowns, compromised status

Apps

Most popular apps, agent installed (by version)

OS Updates

Top-ten KBs installed, devices with a CVSS risk score higher than 7

User Logins

Trend of user logins, login failures (by authentication method)

App Launches

Top-five apps launched, according to both unique user count and total number of launches

You can extend the filters and data points for the out-of-the-box widgets or create new widgets from scratch.

In addition to My Dashboard, Workspace ONE Intelligence includes three additional predefined dashboards (Security Risk, OS Updates, and Apps), allowing IT administrators to quickly gather insights into their environment and make data-driven decisions.

Device Passcode Risk Over Time, Displayed in the Security Risk Dashboard

Figure 74: Device Passcode Risk Over Time, Displayed in the Security Risk Dashboard

Dashboards are available as a part of Workspace ONE Intelligence cloud offerings. No additional configuration is needed for this feature.

Reports

Reports are generated based on data fetched from Workspace ONE UEM, giving administrators real-time information about the deployment. The data is extracted from devices, applications, OS updates, and user data points.

Workspace ONE Intelligence offers a set of predefined report templates. You can customize these templates or create new templates from scratch to generate reports on specific data points. Using the reports dashboard of Workspace ONE Intelligence, you can run, subscribe to, edit, copy, delete, and download reports in CSV format.

Reports are available as a part of Workspace ONE Intelligence cloud offerings. No additional configuration is needed for this feature when you use cloud-based Workspace ONE UEM. For an on-premises deployment of Workspace ONE UEM, you must deploy the Workspace ONE Intelligence Connector. Reports are available only to groups whose organization group type is Customer.

Automation Capabilities

Automation in Workspace ONE Intelligence acts across categories that include devices, apps, and OS updates. Administrators can specify the conditions under which automatic actions will be performed. Automation removes the need for constant monitoring and manual processing to react to a security vulnerability. Configuring automation involves setting up the trigger, condition, and automated action, such as sending out a notification or installing or removing a certain profile or app.

Automation is facilitated by automation connectors, which use Workspace ONE UEM REST APIs to communicate with Workspace ONE UEM and third-party services. Out-of-the-box connectors are provided for Workspace ONE UEM, ServiceNow, and Slack. Workspace ONE Intelligence Custom Connectors extend automation to any other system that exposes a REST API interface.
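As a rough illustration of the trigger-condition-action pattern a custom connector follows, the sketch below assembles an HTTP action definition in Python. The endpoint path, field names, and the `build_connector_action` helper are all hypothetical, not the documented Workspace ONE UEM API contract; consult the UEM REST API reference for real payloads.

```python
import json

def build_connector_action(api_host: str, device_uuid: str, tag: str) -> dict:
    """Assemble an HTTP action definition that a custom connector could
    invoke when an automation rule fires. The URL path and body fields
    below are illustrative placeholders, not the real UEM API contract."""
    return {
        "method": "POST",
        # Hypothetical endpoint: tag a device flagged by an automation rule
        "url": f"https://{api_host}/api/mdm/devices/{device_uuid}/addtag",
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        "body": json.dumps({"tag": tag}),
    }

action = build_connector_action("uem.example.com", "1234-abcd", "cvss-above-7")
print(action["url"])
```

In a real deployment, the connector registration in the Intelligence Console would capture an equivalent definition, and the Intelligence service would execute it when the rule's condition matches.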

Workspace ONE Intelligence Connectors

Figure 75: Workspace ONE Intelligence Connectors

To learn more about how to integrate the Workspace ONE Intelligence Connector with Workspace ONE UEM and third-party services, see Automation Connections, API Communications, and Third-Party Connections.

Getting Started with Workspace ONE Intelligence

Workspace ONE Intelligence is offered as a 30-day free trial and can be purchased as an add-on; it is also included with the Workspace ONE Enterprise bundle. The first time you log in to the Workspace ONE Intelligence dashboard, you must opt in to Workspace ONE Intelligence by selecting a check box.

For more information, see the VMware Workspace ONE Intelligence guide.

Component Design: Workspace ONE Assist Architecture

VMware Workspace ONE® Assist allows VMware Workspace ONE® UEM administrators to remotely access and troubleshoot devices in real time while respecting end-user privacy.

Workspace ONE Assist features include:

  • Screen sharing capabilities – Allows remote devices to share their screens and relinquish device control to an administrator for guided support. Administrators can also capture images and video remotely.
  • File system capabilities – Exposes the device’s file system and allows folders or files to be added, edited, or deleted remotely.
  • Run commands – Automates issue resolution and common tasks by remotely sending commands to the device.

Workspace ONE Assist can be implemented using either an on-premises or a cloud-based (SaaS) model. Both models offer the same functionality.

Note: Workspace ONE Assist features and capabilities are platform dependent. See Capabilities by Platform for a comprehensive list.

To avoid repeating information, an overview of the product, its architecture, and the common components is given in the cloud-based architecture section, which follows. The on-premises architecture section then adds to this information for deployments built on-premises.

Cloud-Based Architecture

With a cloud-based implementation, the Workspace ONE Assist software is delivered using a software-as-a-service (SaaS) model. The integration between your Workspace ONE UEM SaaS tenant and your Workspace ONE Assist SaaS deployment is configured for you.

If you are integrating Workspace ONE Assist SaaS with an on-premises Workspace ONE UEM tenant, see Integrate Deployment Model, On-Prem UEM with SaaS Assist.

For additional Workspace ONE Assist SaaS details, such as regional fully qualified domain names (FQDN) and IP addresses for allowlisting, see SaaS Configurations, Network and Security Requirements.

Cloud-Based Workspace ONE Assist Logical Architecture

Figure 76: Cloud-Based Workspace ONE Assist Logical Architecture

Workspace ONE Assist includes the following components:

Table 80: Workspace ONE Assist Components

Component

Description

Workspace ONE Assist Core Services

Services responsible for coordinating communication and providing service discovery for all other Workspace ONE Assist services. All database communication is handled through these services.

Workspace ONE Assist Portal Services

Services that host the Workspace ONE Assist administration portal that manages remote device sessions and registration.

 

Workspace ONE Assist Application Services

Services responsible for communicating with devices available for remote management.

Workspace ONE Assist Connection Proctor

Proctor for managing device connections to the Workspace ONE Assist server. Simultaneously handles multiple requests for remote management sessions.

For additional details on these components, see Workspace ONE Assist Components.

The Workspace ONE UEM SaaS and AirWatch Cloud Connector components are shown in the figure only to illustrate the typical Workspace ONE SaaS deployment model. For more information on those components, see Component Design: Workspace ONE UEM Architecture.

On-Premises Architecture

Workspace ONE Assist is composed of separate services that can be installed on a single- or multiple-server architecture to meet security and load requirements. Service endpoints can be spread across different network security zones, with Portal and Connection Proctor components located in a DMZ to allow external, inbound access to the Application, Core, and Database services located in a protected, internal network. See Deployments Across Public and Private Security Zones.

The network and security requirements for single- and multiple-server architectures differ and should be considered before deployment. See On-Premises Configurations, Network and Security Requirements for a list of port and firewall rule requirements for both single- and multiple-server architectures.

The single-server architecture is also referred to as an all-in-one server, meaning the Core, Application, Portal, and Connection Proctor components are installed on a single server. 

In addition to the components already described for the cloud-based architecture, an on-premises deployment requires additional components.

Table 81: Additional On-Premises Workspace ONE Assist Components

Component

Description

Database

Microsoft SQL Server database that stores the Workspace ONE Assist system and tenant configuration, operations, and logging, such as the accrual of historical data showing when a device was enrolled in remote management.

The Workspace ONE Assist system is composed of eight databases. See Workspace ONE Assist Components for additional details on the eight databases.

All Workspace ONE Assist Core Service servers, Connection Proctor servers, and remote management registration details persist and reside in this database.

You may use the same Microsoft SQL Server that supports your Workspace ONE UEM deployment for your Workspace ONE Assist deployment.

 

On-Premises Workspace ONE Assist Logical Architecture

Figure 77: On-Premises Workspace ONE Assist Logical Architecture

Table 82: On-Premises Simple Workspace ONE Assist Architecture

Decision

An on-premises deployment of Workspace ONE Assist and the components required were architected, scaled, and deployed to support 50,000 devices and up to 50 concurrent remote management sessions with an active/passive setup.

Justification

This provides validation of design and implementation of an on-premises instance of Workspace ONE Assist.

Database

All Workspace ONE Assist system, tenant, and data configurations required for remote management operation and device registration are stored across eight databases on the SQL Server. For more details about how data is partitioned across these eight databases, see Workspace ONE Assist Components. The Workspace ONE Assist Core Services provide communication to the database for the Portal, Application, and Connection Proctor services.

In this reference architecture, Microsoft SQL Server 2016 was used with its Always On availability groups feature, which is supported with Workspace ONE Assist. This allows two all-in-one Workspace ONE Assist servers to be deployed in an active/passive pair that points to the same database, protected by an availability group. An availability group listener serves as the connection target for both instances.
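To make the listener concept concrete, the sketch below builds an ODBC-style connection string that targets the availability group listener rather than an individual SQL node. The listener FQDN and database name are placeholders for your environment; the `MultiSubnetFailover` keyword is a standard SQL Server driver option that speeds up reconnection after a listener failover.

```python
def assist_db_connection_string(listener_fqdn: str, database: str) -> str:
    """Build an illustrative ODBC connection string pointing at the
    availability group listener. Both Assist all-in-one servers would use
    the same listener, so a database failover is transparent to them.
    Placeholder names: replace listener_fqdn and database with your own."""
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server=tcp:{listener_fqdn},1433;"      # listener, not a node
        f"Database={database};"
        "MultiSubnetFailover=Yes;"               # fast cross-subnet failover
        "Trusted_Connection=Yes;"                # Windows integrated auth
    )

print(assist_db_connection_string("assist-ag-listener.corp.local", "AssistDB"))
```

Because both instances of the active/passive pair resolve the same listener name, no application-side reconfiguration is needed when the availability group fails over.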

Windows Server Failover Clustering (WSFC) can also be used to improve local database availability and redundancy. In a WSFC cluster, two Windows servers are clustered together to run one instance of SQL Server, which is called a SQL Server failover cluster instance (FCI). Failover of the SQL Server services between these two Windows servers is automatic.

Workspace ONE Assist runs on an external SQL database and can be installed alongside your existing SQL database for Workspace ONE UEM. Licensed users can use a Microsoft SQL Server 2012, SQL Server 2014, or SQL Server 2016 database server to set up a high-availability database environment.

The Workspace ONE Assist installer will automatically create the necessary server roles, users, user mappings, and databases. You must have a server administrator account (or equivalent) for these elements to be created. See Database Settings Created Automatically During Installation.

Although Workspace ONE Assist supports using a local SQL Express database, it is not recommended for production and redundancy. For guidance on hardware sizing for Microsoft SQL Servers, see Hardware Scaling Requirements.

Table 83: Implementation Strategy for the On-Premises Workspace ONE Assist Database

Decision

An external Microsoft SQL database with Always On availability groups was implemented for this design.

Justification

An external SQL database is recommended for production and allows for scale and redundancy.

Load Balancing

To remove a single point of failure, you can deploy more than one instance of a Workspace ONE Assist all-in-one server behind an external load balancer. This provides redundancy across the multiple all-in-one Workspace ONE Assist instances by routing traffic to the currently active service. 

To ensure that the load balancer itself does not become a point of failure, most load balancers allow for setup of multiple nodes in a high-availability (HA) or active/passive configuration.

SSL/TLS passthrough is required for all Workspace ONE Assist server configurations on the load balancers. SSL/TLS offloading is not supported for Workspace ONE Assist components. To address persistence, you must configure the load balancer to use IP or SSL/TLS session persistence. 

For more information on load balancing, see Integrate a Load Balancer to Your Deployment.
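Because TLS passthrough prevents the load balancer from inspecting session cookies, persistence must key on something visible at the connection level, such as the source IP. The sketch below mimics that behavior with a deterministic hash; the server names and the `pick_backend` helper are illustrative, not part of any load balancer product.

```python
import hashlib

def pick_backend(client_ip: str, backends: list[str]) -> str:
    """Map a client IP to a backend deterministically, mimicking the
    source-IP persistence a load balancer applies under TLS passthrough:
    the same client always lands on the same Assist server while the
    backend pool is unchanged."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

servers = ["assist-01.corp.local", "assist-02.corp.local"]  # placeholders
print(pick_backend("10.0.0.5", servers))
```

Real load balancers layer health checks on top of this, so a client is re-mapped only when its pinned backend goes out of service.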

Scalability and Availability

Workspace ONE Assist components can be deployed in a single- or multiple-server architecture to support load and concurrency requirements. Single-server architectures can meet production high-availability requirements by deploying multiple all-in-one servers in an active/passive configuration behind a load balancer.

For more information on scaling a single- or multiple-server architecture, see Hardware Scaling Requirements.

Table 84: Implementation Strategy for the Workspace ONE Assist Services

Decision

Two Workspace ONE Assist all-in-one servers were deployed in the DMZ behind an external load balancer.

Justification

One all-in-one server can support 50,000 devices and 50 concurrent remote management sessions.

An additional all-in-one server is deployed in an active/passive configuration for redundancy.

 

On-Premises Workspace ONE Assist Architecture

Figure 78: On-Premises Workspace ONE Assist Architecture

This figure shows an environment suitable for up to 50,000 devices and 50 concurrent remote management sessions.

  • The Workspace ONE Assist all-in-one servers are located in the DMZ because the Connection Proctor and Portal components must be accessible from devices.
  • The Workspace ONE UEM administration console servers reside in the internal network with a load balancer in front of them. Administrators can access Workspace ONE Assist Portal services for remote management sessions from the Workspace ONE UEM administration console.

For this reference architecture, split DNS was used; that is, the same FQDN was used both internally and externally for user access to the Workspace ONE Assist active/passive server. Split DNS is not a strict requirement for a Workspace ONE Assist on-premises design, but it does improve the user experience.

See Appendix H: Registering Failover for Active/Passive Workspace ONE Assist Deployments.

Multi-site Design

The Workspace ONE Assist all-in-one servers are responsible for providing device registration and administering remote management sessions. These servers should be deployed to be highly available within a site and also deployed in a secondary data center for failover and redundancy. A robust backup policy for application servers and database servers can minimize the steps required to restore a Workspace ONE Assist environment in another location.

You can configure disaster recovery (DR) for your Workspace ONE Assist solution using whatever procedures and methods meet your DR policies. Workspace ONE Assist has no dependency on your DR configuration, but we strongly recommend that you develop failover procedures for DR scenarios. Workspace ONE Assist components can be deployed to accommodate most of the typical disaster recovery scenarios.

Workspace ONE Assist consists of the following core components, which need to be designed for redundancy:

  • Workspace ONE Assist Core Services
  • Workspace ONE Assist Portal Services
  • Workspace ONE Assist Application Services
  • Workspace ONE Assist Connection Proctors
  • SQL database server

Table 85: Site Resilience Strategy for Workspace ONE Assist

Decision

A second site was set up with Workspace ONE Assist.

Justification

This strategy provides disaster recovery and site resilience for the on-premises implementation of Workspace ONE Assist.

Multi-site All-in-One Assist Servers

To provide site resilience, each site requires its own group of Workspace ONE Assist all-in-one servers deployed in an active/passive pair to allow the site to operate independently. One site runs as an active deployment, while the other has a passive deployment.

The Workspace ONE Assist all-in-one servers are hosted in the DMZ in each site. Each site has a local load balancer that directs traffic to the currently active Workspace ONE Assist all-in-one server in your active/passive pair. For more information, see Appendix H: Registering Failover for Active/Passive Workspace ONE Assist Deployments.

A global load balancer is used in front of each site’s load balancer.

Table 86: Strategy for Multi-site Deployment of the Workspace ONE Assist All-in-One Active/Passive Pairs

Decision

A second active/passive pair of Workspace ONE Assist all-in-one servers was installed in a second data center. The number and function of the servers matched the sizing of the primary site.

Justification

This strategy provides full disaster recovery capacity for all the Workspace ONE Assist services.

Multi-site Database

Workspace ONE Assist supports Microsoft SQL Server 2012 and later, including its Always On availability groups feature. This allows multiple Workspace ONE Assist all-in-one servers to point to the same database so that remote management device registration and system configuration details remain highly available in the case of component failure or maintenance.

It is recommended to deploy the Workspace ONE Assist databases on the same Workspace ONE UEM SQL Server machine. Due to this shared dependency, see Multi-site Database for configuration and design details of the multi-site database.

Table 87: Strategy for Multi-site Deployment of the On-Premises Database

Decision

A Microsoft SQL Server Always On database was used.

Justification

This strategy provides replication of the database from the primary site to the recovery site and allows for recovery of the database functionality.

Failover to a Second Site

A Workspace ONE Assist multi-site design allows administrators to maintain constant availability of the different Workspace ONE Assist services in case a disaster renders the original active site unavailable. The following diagram shows a sample multi-site architecture.

On-Premises Multi-Site Workspace ONE Assist Architecture

Figure 79: On-Premises Multi-Site Workspace ONE Assist Architecture

To achieve failover to a secondary site, manual intervention might be required for two main layers of the solution:

  • Database – Depending on the configuration of the SQL Server Always On availability group, inter-site failover of the database can be automatic. If necessary, steps should be taken to manually control which site has the active SQL node.
  • All-in-one servers – The global load balancer controls which site traffic is directed to. During normal operation, the global load balancer directs traffic to the local load balancer in front of the Workspace ONE Assist all-in-one servers in Site 1. In a failover scenario, the global load balancer is changed, either manually or automatically, to direct traffic to the equivalent local load balancer in Site 2.
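The site-selection logic described above can be sketched as a simple health-driven decision: prefer the primary site while it is healthy, otherwise fail over. The site names and the `select_active_site` helper are placeholders, not features of any particular global load balancer.

```python
def select_active_site(health: dict, preferred: str = "site1") -> str:
    """Choose which site the global load balancer should target.
    Prefer the primary site while it reports healthy; otherwise fail
    over to the first healthy alternative. Site names are placeholders."""
    if health.get(preferred):
        return preferred
    for site, ok in health.items():
        if ok:
            return site
    raise RuntimeError("no healthy site available")

# Normal operation: Site 1 active; failover: traffic shifts to Site 2.
print(select_active_site({"site1": True, "site2": True}))   # -> site1
print(select_active_site({"site1": False, "site2": True}))  # -> site2
```

Whether this decision runs automatically (via health probes) or is made manually depends on the global load balancer and your DR runbook, as noted above for the database layer as well.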

Prerequisites for Network Configuration

This section details the prerequisites for the Workspace ONE Assist network configuration. Verify that the following requirements are met:

  • A static IP address and a DNS A record are used for each Workspace ONE Assist all-in-one server.
  • Inbound firewall ports 443 and 8443 are open so that external devices can connect to the active Workspace ONE Assist Portal service and Connection Proctor service, respectively, through the load balancer.
    Note: 443 and 8443 are the default ports but can be customized if required.
  • The external load balancer must direct traffic to the active Workspace ONE Assist all-in-one server using SSL/TLS passthrough.
  • The external load balancer must support IP or SSL/TLS persistence for traffic directed to the active Workspace ONE Assist all-in-one server.
    For a comprehensive list of requirements, see Network and Security Requirements.
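The port prerequisites above can be preflight-checked with a small TCP probe before devices are pointed at the environment. This is a generic connectivity sketch, not a VMware tool; the FQDN in the commented usage line is a placeholder.

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the
    timeout -- a quick preflight for the Assist Portal (443) and
    Connection Proctor (8443) listeners behind the load balancer."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder FQDN):
# for port in (443, 8443):
#     print("assist.example.com", port, port_reachable("assist.example.com", port))
```

Note that a successful TCP connection only confirms the firewall and load balancer path; it does not validate the SSL/TLS passthrough or persistence configuration, which still need to be verified against the requirements listed above.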

Installation and Initial Configuration

Workspace ONE Assist is delivered as a single installer and deploys the Core, Application, Portal, Connection Proctor, and Database services. For information on installing Workspace ONE Assist, see Install Workspace ONE Assist. For the all-in-one server installation, see Standard (Basic) Installation of Workspace ONE Assist.

At a high level, installation and configuration involve the following tasks:

  1. Generate the Workspace ONE Assist Certificates using the RemoteManagementCertificateGenerator utility included in the installer. See Generate the Workspace ONE Assist Certificates.
  2. Run the Workspace ONE Assist installer:
    1. Select the Standard – Basic (that is, “all-in-one”) configuration.
    2. Configure the database details.
    3. Configure the Application service details.
    4. Configure the Portal and Connection Proctor service bindings.
    5. When the installer finishes, leave the Run Resource Pack option enabled. If you complete the installer without automatically running the included resource pack, see Import Device Profiles with Resource Pack Utility.

For full details, see Standard (Basic) Installation of Workspace ONE Assist. For troubleshooting articles, see Troubleshooting Workspace ONE Assist.

Integration with Workspace ONE UEM

Integrating Workspace ONE UEM and Workspace ONE Assist allows your administrators to launch Remote Management sessions for eligible devices directly from the Workspace ONE UEM administration console.

The integration process between the two solutions is detailed in Configure the Workspace ONE UEM Console.

See Workspace ONE UEM and Workspace ONE Assist Integration for full integration details.

Remote Management Client Tools

The Workspace ONE Assist client provides support tools to facilitate troubleshooting and remotely controlling end-user devices. These client tools provide effective troubleshooting options such as remote screen sharing and control, remote file system management, remotely issuing commands to the device, inspecting running tasks, and more.

Note: Not all client tools are available on all OS platforms. See Capabilities by Platform.

You can also assign tool-specific role permissions to your administrators from the Workspace ONE UEM console for granular control over which administrators can interact with specific Workspace ONE Assist client tools. See Assign Role Permissions for Workspace ONE Assist Client Tools for more details.

End-user privacy is an important aspect when allowing your administrators to remotely access, view, and control managed devices. See Privacy Notices and End-User Prompts for more information on the end-user experience.

Share Screen Tool

The Share Screen tool allows an administrator to view and control the end-user device remotely. The administrator can capture images or video while the Share Screen session is active. A virtual keyboard is available to the administrator, who can also use the physical device buttons by interacting with the device shell presented in the Share Screen view.

End users can pause the Share Screen session at any time if they have privacy concerns. Active Share Screen sessions are clearly indicated to the end user: the screen is highlighted with a blue outline, and the Assist icon shows whether the session is active or paused.

Administrator View of Device Using Share Screen Tool

Figure 80: Administrator View of Device Using Share Screen Tool

See Share Screen, Assist Client Tool for more details.

Important: When using Restriction Profiles in Workspace ONE UEM, be aware that disabling Allow Screen Capture will prevent Workspace ONE Assist from remotely viewing or controlling any device with this profile. See Workspace ONE UEM Screen Capture Restriction Profiles for more details.

See Troubleshooting Workspace ONE Assist for more troubleshooting articles.

Manage Files Tool

The Manage Files tool exposes the device’s file system to the administrator and allows administrators to upload, download, rename, delete, move, cut, copy, and paste files and folders.

Manage Files Tool Showing the File System on an End User’s Device

Figure 81: Manage Files Tool Showing the File System on an End User’s Device

See Manage Files, Assist Client Tool for more details.

Remote Commands Tools

Administrators can leverage the Remote Shell client tool for Windows 10 and the Command-Line Interface client tool for Android devices to send commands remotely. The Remote Shell client tool for Windows 10 connects to a PowerShell interface, while the Command-Line client tool for Android connects to a command-line interface.

Example of Retrieving Device Configuration Information Using the Command-Line Interface Client Tool for Android

Figure 82: Example of Retrieving Device Configuration Information Using the Command-Line Interface Client Tool for Android

See Remote Shell Assist Client Tool for Windows 10 and Command-Line Interface, Android for additional details.

Workspace ONE Assist Client Tools

Additional Workspace ONE Assist client tools are available for your administrators based on your device platform. See Client Tools for a comprehensive list.

Getting Started with Workspace ONE Assist

Workspace ONE Assist is available as an add-on to any Workspace ONE environment. On-premises deployments require the Workspace ONE Advanced Deployment Add-On. The shared SaaS version is available to all customers, including those with on-premises and dedicated SaaS environments. For additional information, reach out to your VMware sales representative.

Workspace ONE Assist is automatically provisioned and available for trial in Workspace ONE UEM Shared SaaS Free Trial and UAT environments. Workspace ONE Assist is not available for trial in Workspace ONE UEM On-Premises environments. If you wish to try Workspace ONE Assist in an on-premises deployment, request a new Workspace ONE UEM Shared SaaS Free Trial or UAT environment.

Workspace ONE UEM Shared SaaS Free Trial environments are available on the Try Workspace ONE Powered by AirWatch page.

For more information, see the Workspace ONE Assist product documentation.

Component Design: Horizon 7 Architecture

VMware Horizon® 7 is a platform for managing and delivering virtualized or hosted desktops and applications to end users. Horizon 7 allows you to create and broker connections to Windows virtual desktops, Linux virtual desktops, Remote Desktop Server (RDS)–hosted applications and desktops, and physical machines.

A successful deployment of Horizon 7 depends on good planning and a robust understanding of the platform. This section discusses the design options and details the design decisions that were made to satisfy the design requirements.

Table 88: Horizon 7 Environment Setup Strategy

Decision

A Horizon 7 deployment was designed, deployed, and integrated with the VMware Workspace ONE® platform.

The environment was designed to be capable of scaling to 8,000 concurrent user connections.

Justification

This strategy allowed the design, deployment, and integration to be validated and documented.

Architectural Overview

The core components of Horizon 7 include a VMware Horizon® Client™ authenticating to a Connection Server, which brokers connections to virtual desktops and apps. The Horizon Client then forms a protocol session connection to a Horizon Agent running in a virtual desktop or RDSH server.

Horizon 7 Core Components

Figure 83: Horizon 7 Core Components

External access includes the use of VMware Unified Access Gateway™ to provide secure edge services. The Horizon Client authenticates to a Connection Server through the Unified Access Gateway. The Horizon Client then forms a protocol session connection, through the gateway service on the Unified Access Gateway, to a Horizon Agent running in a virtual desktop or RDSH server. This process is covered in more detail in External Access.

Horizon 7 Core Components for External Access

Figure 84: Horizon 7 Core Components for External Access

The following figure shows the high-level logical architecture of the Horizon 7 components with other Horizon 7 Enterprise Edition components shown for illustrative purposes.

Horizon 7 Enterprise Edition Logical Components

Figure 85: Horizon 7 Enterprise Edition Logical Components

Components

The components and features of Horizon 7 are described in the following table.

Table 89: Components of Horizon 7
Component Description

Connection Server

An enterprise-class desktop management server that securely brokers and connects users to desktops and published applications running on VMware vSphere® VMs, physical PCs, blade PCs, or RDSH servers.

Authenticates users through Windows Active Directory and directs the request to the appropriate and entitled resource.

Horizon Agent

A software service installed on the guest OS of all target VMs, physical systems, or RDSH servers. This allows them to be managed by Connection Servers and allows a Horizon Client to form a protocol session to the target VM.

Horizon Client

Client-device software that allows a physical device to access a virtual desktop or RDSH-published application in a Horizon 7 deployment. You can optionally use an HTML client for devices for which installing software is not possible.

Unified Access Gateway

Virtual appliance that provides a method to secure connections in access scenarios requiring additional security measures, such as over the Internet. (See Component Design: Unified Access Gateway Architecture for design and implementation details.)

Horizon Console

A web application that is part of the Connection Server, allowing administrators to configure the server, deploy and manage desktops, control user authentication, initiate and examine system and user events, carry out end-user support, and perform analytical activities.

VMware Instant Clone Technology

VMware technology that provides single-image management with automation capabilities. You can rapidly create automated pools or farms of instant-clone desktops or RDSH servers from a master image.

The technology reduces storage costs and streamlines desktop management by enabling automatic updating and patching of hundreds of images from the master image. Instant Clone Technology accelerates the process of creating cloned VMs over the previous Composer linked-clone technology. In addition, instant clones require less storage and are less expensive to manage and update.

RDSH servers

Microsoft Windows Servers that provide published applications and session-based remote desktops to end users.

Enrollment Server

Server that delivers True SSO functionality by ensuring that a user can use single sign-on to access a Horizon resource launched from Workspace ONE Access™, regardless of the authentication method.

The Enrollment Server is responsible for receiving certificate signing requests from the Connection Server and then passing them to the Certificate Authority to sign.

True SSO requires Microsoft Certificate Authority services, which it uses to generate unique, short-lived certificates to manage the login process.

See the True SSO section for more information.

JMP Server

JMP (pronounced jump), which stands for Just-in-Time Management Platform, represents capabilities in VMware Horizon 7 Enterprise Edition that deliver Just-in-Time Desktops and Apps in a flexible, fast, and personalized manner.

The JMP server enables the use of JMP workflows by providing a single console to define and manage desktop workspaces for users or groups of users.

A JMP assignment can be defined that includes information about:

  • Operating system, by assigning a desktop pool
  • Applications, delivered by VMware App Volumes™ packages
  • Application and environment configuration, with VMware Dynamic Environment Manager™ settings

The JMP automation engine communicates with the Connection Server, App Volumes Managers, and Dynamic Environment Manager systems to entitle the user to a desktop. For more information, see the Quick-Start Tutorial for VMware Horizon JMP Integrated Workflow.

Cloud Connector

(not pictured)

The Horizon 7 Cloud Connector is required when using Horizon 7 subscription licenses and the management features hosted in the VMware Horizon® Cloud Service™.

The Horizon 7 Cloud Connector is a virtual appliance that connects a Connection Server in a pod with the Horizon Cloud Service.

You must have an active My VMware account to purchase a Horizon 7 license from https://my.vmware.com.

Composer

The Composer server is required only when using linked clones.

This is the legacy method that enables scalable management of virtual desktops by provisioning clones from a single master image. The Composer service works with the Connection Servers and a VMware vCenter Server®.

vSphere and vCenter Server

The vSphere product family includes VMware ESXi™ and vCenter Server, and it is designed for building and managing virtual infrastructures. The vCenter Server system provides key administrative and operational functions, such as provisioning, cloning, and VM management features, which are essential for VDI.

From a data center perspective, several components and servers must be deployed to create a functioning Horizon 7 Enterprise Edition environment to deliver the desired services.

Horizon 7 Enterprise Edition Logical Architecture

Figure 86: Horizon 7 Enterprise Edition Logical Architecture

In addition to the core components and features, other products can be used in a Horizon 7 Enterprise Edition deployment to enhance and optimize the overall solution:

  • Workspace ONE Access – Provides enterprise single sign-on (SSO), securing and simplifying access to apps with the included identity provider or by integrating with existing identity providers. It provides application provisioning, a self-service catalog, conditional access controls, and SSO for SaaS, web, cloud, and native mobile applications. (See Component Design: Workspace ONE Access Architecture for design and implementation details.)
  • App Volumes Manager – Orchestrates application delivery by managing assignments of application volumes (packages and writable volumes) to users, groups, and target computers. (See Component Design: App Volumes Architecture for design and implementation details.)
  • Dynamic Environment Manager – Provides profile management by capturing user settings for the operating system and applications. (See Component Design: Dynamic Environment Manager Architecture for design and implementation details.)
  • Microsoft SQL Servers – Microsoft SQL database servers are used to host several databases used by the management components of Horizon 7 Enterprise Edition.
  • VMware vRealize® Operations Manager for Horizon® – Provides end-to-end visibility into the health, performance, and efficiency of virtual desktop and application environments from the data center and the network, all the way through to devices.
  • VMware vSAN storage – Delivers high-performance, flash-optimized, hyper-converged storage using server-attached flash devices or hard disks to provide a flash-optimized, highly resilient, shared datastore.
  • VMware NSX® Data Center for vSphere® – Provides network-based services such as security, virtualization networking, routing, and switching in a single platform. With micro-segmentation, you can set application-level security policies based on groupings of individual workloads, and you can isolate each virtual desktop from all other desktops as well as protect the Horizon 7 management servers.

    Note: NSX Data Center for vSphere is licensed separately from Horizon 7 Enterprise Edition.

Horizon 7 Pod and Block

One key concept in a Horizon 7 environment design is the use of pods and blocks, which gives us a repeatable and scalable approach.

The numbers, limits, and recommendations given in this section were correct at time of writing. For the most current numbers, see the VMware Knowledge Base article VMware Horizon 7 Sizing Limits and Recommendations (2150348).

A pod is made up of a group of interconnected Connection Servers that broker connections to desktops or published applications. A pod can broker up to 20,000 sessions (10,000 recommended), including desktop and RDSH sessions. Multiple pods can be interconnected using Cloud Pod Architecture (CPA) for a maximum of 200,000 sessions. For numbers above that, separate CPAs can be deployed.

A pod is divided into multiple blocks to provide scalability. Each block is made up of one or more vSphere resource clusters, and each block has its own vCenter Server, Composer server (where linked clones are to be used), and VMware NSX® Manager™ (where NSX is being used). The number of virtual machines (VMs) a block can typically host depends on the type of Horizon 7 VMs used. See vCenter Server for details.

Horizon 7 Pod and Block Design

Figure 87: Horizon 7 Pod and Block Design

To add more resource capacity, we simply add more resource blocks. We also add an additional Connection Server for each additional block to add the capability for more session connections.

Depending on the types of VMs (instant clones, linked clones, full clones, using App Volumes) a resource block could host a different number of VMs (see Scalability and Availability). Typically, we have multiple resource blocks and up to seven Connection Servers in a pod capable of hosting 10,000 sessions. For numbers above that, we deploy additional pods.

As you can see, this approach allows us to design a single block capable of thousands of sessions that can then be repeated to create a pod capable of handling 10,000 sessions. Multiple pods grouped using Cloud Pod Architecture can then be used to scale the environment as large as needed.
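The scaling arithmetic described above can be expressed as a short sketch. This is illustrative only, using the recommended figures from this section; always verify current limits against the VMware Knowledge Base article cited above, and note that in practice a pod may be split into more blocks than the minimum for failure-domain reasons.

```python
import math

# Recommended figures from this section (verify against VMware KB 2150348):
SESSIONS_PER_POD = 10_000         # recommended sessions per pod
INSTANT_CLONES_PER_BLOCK = 8_000  # tested instant-clone VMs per vCenter (block)

def pods_required(total_sessions: int) -> int:
    """Number of pods needed, to be federated with Cloud Pod Architecture."""
    return math.ceil(total_sessions / SESSIONS_PER_POD)

def blocks_required(sessions_in_pod: int,
                    vms_per_block: int = INSTANT_CLONES_PER_BLOCK) -> int:
    """Minimum number of resource blocks (vCenter Servers) within a pod.
    Real designs often use more blocks to shrink the failure domain."""
    return math.ceil(sessions_in_pod / vms_per_block)

# Example: 25,000 desktop sessions -> 3 pods joined with CPA;
# a full 10,000-session instant-clone pod needs at least 2 blocks.
```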

Important: A single pod and the Connection Servers in it must be located within a single data center and cannot span locations. Multiple Horizon 7 pods and locations must be interconnected using Cloud Pod Architecture. See Multi-site Architecture and Cloud Pod Architecture for more detail.

Options regarding the location of management components, such as Connection Servers, include:

  • Co-located on the same vSphere hosts as the desktops and RDSH servers that will serve end users
  • On a separate vSphere cluster

In large environments, for scalability and operational efficiency, it is normally best practice to have a separate vSphere cluster to host the management components. This keeps the VMs that run services such as vCenter Server, NSX Manager, Connection Server, Unified Access Gateway, and databases separate from the desktop and RDSH server VMs.

Management components can be co-hosted on the same vSphere cluster as the end-user resources, if desired. This architecture is more typical in smaller environments or where converged hardware is used and the cost of providing dedicated hosts for management is too high. If you place everything on the same vSphere cluster, you must configure the setup to ensure resource prioritization for the management components. Sizing of resources (for example, virtual desktops) must also take into account the overhead of the management servers. See vSphere Resource Management for more information.

Table 90: Pod and Block Design for This Reference Architecture

Decision

A pod was formed in each site.

Each pod contained one or more resource blocks.

Justification

This allowed the design and deployment of the block, pod, and Cloud Pod Architecture (CPA) to be validated and documented.

Scalability and Availability

One key design principle is to remove single points of failure in the deployment. The numbers, limits, and recommendations given in this section were correct at time of writing. For the most current numbers, see the VMware Knowledge Base article VMware Horizon 7 Sizing Limits and Recommendations (2150348).

Connection Server

A single Connection Server supports a maximum of 4,000 sessions (using the Blast Extreme or PCoIP display protocol), although 2,000 is recommended as a best practice. Up to seven Connection Servers are supported per pod with a recommendation of 10,000 sessions in total per pod.

To satisfy the requirements that the proposed solution be robust and able to handle failure, deploy one more server than is required for the number of connections (n+1).
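This n+1 sizing rule reduces to a simple calculation. The sketch below uses the figures stated in this section; it is illustrative, and current limits should always be confirmed against the sizing Knowledge Base article.

```python
import math

RECOMMENDED_SESSIONS_PER_CS = 2_000  # best-practice load per Connection Server
MAX_CS_PER_POD = 7                   # supported Connection Servers per pod

def connection_servers_required(concurrent_sessions: int) -> int:
    """Connection Servers needed for the load, plus one spare (n+1)."""
    n = math.ceil(concurrent_sessions / RECOMMENDED_SESSIONS_PER_CS) + 1
    if n > MAX_CS_PER_POD:
        raise ValueError("Exceeds supported Connection Servers per pod; "
                         "deploy an additional pod instead.")
    return n

# Example from this design: 8,000 users -> 4 for load + 1 spare = 5
```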

Table 91: Strategy for Deploying Connection Servers

Decision

Five Horizon Connection Servers were deployed.

These ran on dedicated Windows 2016 VMs located in the internal network.

Justification

One Connection Server is recommended per 2,000 concurrent connections.

Four Connection Servers are required to handle the load of the target 8,000 users.

A fifth server provides redundancy and availability (n+1).

For more information, see Appendix B: VMware Horizon Configuration.

vCenter Server

vCenter Server defines the boundary of a resource block.

The recommended number of VMs that a vCenter Server can typically host depends on the type of Horizon 7 VMs used. The following limits have been tested.

  • 8,000 instant-clone VMs
  • 4,000 linked-clone or full-clone VMs

Just because VMware publishes these configuration maximums does not mean you should necessarily design to them. Using a single vCenter Server does introduce a single point of failure that could affect too large a percentage of the VMs in your environment. Therefore, carefully consider the size of the failure domain and the impact should a vCenter Server become unavailable.

A single vCenter Server might be capable of supporting your whole environment, but to reduce risk and minimize the impact of an outage, you will probably want to include more than one vCenter Server in your design. You can increase the availability of vCenter Server by using VMware vSphere® High Availability (HA), which restarts the vCenter Server VM in the case of a vSphere host outage. vCenter High Availability can also be used to provide an active-passive deployment of vCenter Server appliances, although caution should be used to weigh the benefits against the added complexity of management.

Sizing can also have performance implications because a single vCenter Server could become a bottleneck if too many provisioning tasks run at the same time. Do not just size for normal operations but also understand the impact of provisioning tasks and their frequency.

For example, consider instant-clone desktops, which are deleted after a user logs off and are provisioned when replacements are required. Although a floating desktop pool can be pre-populated with spare desktops, it is important to understand how often replacement VMs are being generated and when that happens. Are user logoff and the demand for new desktops spread throughout the day? Or are desktop deletion and replacement operations clustered at certain times of day? If these events are clustered, can the number of spare desktops satisfy the demand, or do replacements need to be provisioned? How long does provisioning desktops take, and is there a potential delay for users?
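One rough way to reason about these questions is to size the spare pool to cover the logoffs that arrive while a replacement clone is being provisioned. The sketch below is a simplified estimate with hypothetical example rates, not measured values.

```python
import math

def spare_desktops_needed(logoffs_per_minute: float,
                          provision_minutes: float) -> int:
    """Rough steady-state estimate: while a replacement instant clone is
    provisioning, logoffs keep consuming spares, so the spare pool must
    cover the demand arriving during one provisioning cycle."""
    return math.ceil(logoffs_per_minute * provision_minutes)

# Hypothetical example: a lunchtime peak of 30 logoffs per minute with a
# 2-minute provisioning time needs roughly 60 spare desktops to avoid
# making users wait for a new desktop.
```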

Table 92: Implementation Strategy for vCenter Server

Decision

Two resource blocks were deployed per site, each with their own vCenter Server virtual appliance, located in the internal network.

Justification

A single resource block and a single vCenter Server are supported for the intended target of 8,000 instant-clone VMs; however, having a single vCenter Server for the entire user environment presents too large a failure domain.

Splitting the environment across two resource blocks, and therefore across two vCenter Servers, reduces the impact of any potential outage.

This approach also allows each resource block to scale to a higher number of VMs and allows for growth, up to the pod recommendation, without requiring us to rearchitect the resource blocks.

JMP Server

The JMP Server enables the use of JMP workflows by providing a single console to define assignments that can include information about the desktop pool, the App Volumes packages, and Dynamic Environment Manager settings. The JMP automation engine communicates with the Connection Server, App Volumes Managers, and Dynamic Environment Manager systems to entitle the user to a desktop.

A single JMP Server is supported per pod. High availability is provided by vSphere High Availability (HA), which restarts the JMP Server VM in the case of a vSphere host outage. VM monitoring with vSphere HA can also attempt to restart the VM in the case of an operating system crash.

If the JMP Server is unavailable, the only functionality affected is the administrator’s ability to create new JMP workflow assignments.

Table 93: Implementation Strategy for the JMP Server

Decision

One JMP Server was deployed per pod.

The JMP Servers ran on dedicated Windows Server 2016 VMs located in the internal network zones.

Justification

This allows for the use of the Horizon Console and workflows to create JMP assignments that include Horizon desktops, App Volumes packages, and Dynamic Environment Manager configuration settings.

Only one JMP Server per pod is supported.

Cloud Connector

The Horizon 7 Cloud Connector is deployed as a virtual appliance from VMware vSphere® Web Client and paired to one of the Connection Servers in the pod. As part of the pairing process, the Horizon 7 Cloud Connector virtual appliance connects the Connection Server to the Horizon Cloud Service to manage the subscription license. With a subscription license for Horizon 7, you do not need to retrieve or manually enter a license key for Horizon 7 product activation. However, license keys are still required for supporting the components, which include vSphere, vSAN, and vCenter Server. These keys are emailed to the https://my.vmware.com contact.

You must have an active My VMware® account to purchase a Horizon 7 license from https://my.vmware.com. You then receive a subscription email with the link to download the Horizon 7 Cloud Connector as an OVA (Open Virtual Appliance) file.

A single Cloud Connector VM is supported per pod. High availability is provided by vSphere HA, which restarts the Cloud Connector VM in the case of a vSphere host outage.

Table 94: Implementation Strategy for the Horizon Cloud Connector

Decision

One Cloud Connector per pod was deployed in the internal network.

Justification

The environment uses subscription licensing.

Composer Server

The Composer server is required only when using linked clones. Instant clones do not require a Composer server.

Each Composer server is paired with a vCenter Server. For example, in a block architecture where we have one vCenter Server per 4,000 linked-clone VMs, we would also have one Composer server.

High availability is provided by vSphere HA, which restarts the Composer VM in the case of a vSphere host outage. VM monitoring with vSphere HA can also attempt to restart the VM in the case of an operating system crash.

If the VMware View Composer service becomes unavailable, all existing desktops continue to function. While vSphere HA is restarting the Composer VM, the only impact is on provisioning tasks within that block, such as image refreshes, recomposes, or the creation of new linked-clone pools.

Table 95: Decision Regarding Composer

Decision

A Composer server was not deployed in this environment.

Justification

Instant clones satisfy all use cases, which means that linked clones and the Composer service are not required.

If the requirements change, a separate server running the Composer service can easily be added to the design.

Load Balancing of Connection Servers

For high availability and scalability, VMware recommends that multiple Connection Servers be deployed in a load-balanced replication cluster.

Connection Servers broker client connections, authenticate users, and direct incoming requests to the correct endpoint. Although the Connection Server helps form the connection for authentication, it typically does not act as part of the data path after a protocol session has been established.

The load balancer serves as a central aggregation point for traffic flow between clients and Connection Servers, sending clients to the best-performing and most available Connection Server instance. Using a load balancer with multiple Connection Servers also facilitates greater flexibility by enabling IT administrators to perform maintenance, upgrades, and changes in the configuration without impacting users. To ensure that the load balancer itself does not become a point of failure, most load balancers allow for setup of multiple nodes in an HA or active/passive configuration.

Connection Server Load Balancing

Figure 88: Connection Server Load Balancing

Connection Servers require the load balancer to have a session persistence setting. This is sometimes referred to as persistent connections or sticky connections, and ensures data stays directed to the relevant Connection Server. For more information, see the VMware Knowledge Base article Load Balancing for VMware Horizon View (2146312).
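The following sketch illustrates the concept behind source-IP persistence. Real load balancers implement this natively; this is only a conceptual model showing how hashing the client address yields a consistent Connection Server choice while the server list is unchanged.

```python
import hashlib

def pick_connection_server(client_ip: str, servers: list) -> str:
    """Source-IP affinity: hash the client address so the same client is
    consistently directed to the same Connection Server, mimicking the
    'sticky' or persistent connections described above."""
    # Use a stable hash (not Python's salted built-in hash) so the
    # mapping is deterministic across processes.
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]

servers = ["cs01", "cs02", "cs03", "cs04", "cs05"]
# The same source IP always resolves to the same Connection Server.
```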

Table 96: Strategy for Using Load Balancers with Connection Servers

Decision

A third-party load balancer was used in front of the Connection Servers.

Source IP was configured as the persistence or affinity type.

Justification

This provides a common namespace for the Connection Servers, which allows for ease of scale and redundancy.

External Access

Secure external access for users accessing resources is provided through the integration of Unified Access Gateway (UAG) appliances. We also use load balancers to provide scalability and allow for redundancy. A Unified Access Gateway appliance can be used in front of Connection Servers to provide access to on-premises Horizon 7 desktops and published applications.

For design detail, see Component Design: Unified Access Gateway Architecture.

External Access Through Unified Access Gateway

Figure 89: External Access Through Unified Access Gateway

Table 97: Implementation Strategy for External Access

Decision

Five standard-size Unified Access Gateway appliances were deployed as part of the Horizon 7 solution.

These were located in the DMZ network.

Justification

UAG provides secure external access to internally hosted Horizon 7 desktops and applications.

One standard UAG appliance is recommended per 2,000 concurrent Horizon connections.

Four UAG appliances are required to handle the load of the target 8,000 users.

A fifth UAG provides redundancy and availability (n+1).

For the full detail and diagrams of all the possible ports for different display protocols and between all Horizon 7 components, see Network Ports in VMware Horizon 7.

Authentication

One of the methods of accessing Horizon 7 desktops and applications is through Workspace ONE Access. This requires integration between Connection Servers and Workspace ONE Access using the SAML 2.0 standard to establish mutual trust, which is essential for single sign-on (SSO) functionality.

When SSO is enabled, users who log in to Workspace ONE Access with Active Directory credentials can launch remote desktops and applications without having to go through a second login procedure. If you set up the True SSO feature, users can log in using authentication mechanisms other than AD credentials.

See Using SAML Authentication and see Setting Up True SSO for details.

Table 98: Strategy for Authenticating Users Through Workspace ONE Access

Decision

SAML authentication was configured to be allowed on the Connection Servers.

Justification

With this configuration, Connection Servers allow Workspace ONE Access to be a dynamic SAML authenticator. This strategy facilitates the launch of Horizon resources from Workspace ONE Access.

True SSO

Many user authentication options are available for logging in to Workspace ONE Access or Workspace ONE. Active Directory credentials are only one of these many authentication options. Ordinarily, using anything other than AD credentials would prevent a user from using single sign-on to access a Horizon 7 virtual desktop or published application. After selecting the desktop or published application from the catalog, the user would be prompted to authenticate again, this time with AD credentials.

True SSO provides users with SSO to Horizon 7 desktops and applications regardless of the authentication mechanism used. True SSO uses SAML, where Workspace ONE is the Identity Provider (IdP) and the Horizon 7 server is the Service Provider (SP). True SSO generates unique, short-lived certificates to manage the login process.

True SSO Logical Architecture

Figure 90: True SSO Logical Architecture

Table 99: Implementation Strategy for SSO

Decision

True SSO was configured and enabled.

Justification

This feature allows SSO to Horizon resources when launched from Workspace ONE Access, even when the user does not authenticate with Active Directory credentials.

True SSO requires the Enrollment Server service to be installed using the Horizon 7 installation media.

Design Overview

For True SSO to function, several components must be installed and configured within the environment. This section discusses the design options and details the design decisions that satisfy the requirements.

Note: For more information on how to install and configure True SSO, see Setting Up True SSO in the Horizon 7 Administration documentation and the Setting Up True SSO for Horizon 7 section in Appendix B: VMware Horizon Configuration.

The Enrollment Server is responsible for receiving certificate signing requests (CSRs) from the Connection Server. The Enrollment Server then passes the CSRs to the Microsoft Certificate Authority to sign using the relevant certificate template. The Enrollment Server is a lightweight service that can be installed on a dedicated Windows Server 2016 instance, or it can co-exist with the Microsoft Certificate Authority service. It cannot be co-located on a Connection Server.

Scalability

A single Enrollment Server can easily handle all the requests from a single pod of 10,000 sessions. The constraining factor is usually the Certificate Authority (CA). A single CA can generate approximately 70 certificates per second (based on a single vCPU). This rate usually increases to over 100 per second when multiple vCPUs are assigned to the CA VM.
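As a back-of-the-envelope check on these throughput figures, consider the worst case of every user in a pod logging in at once, each needing a fresh short-lived certificate. This is illustrative arithmetic only; in practice logins are spread over time.

```python
def login_storm_seconds(sessions: int, certs_per_second: float = 70.0) -> float:
    """Time for a CA to clear a queue of simultaneous certificate
    requests, using the ~70 certs/second single-vCPU rate cited above."""
    return sessions / certs_per_second

# A full 10,000-session pod hitting one single-vCPU CA all at once would
# take roughly 143 seconds to clear, which is why a single CA per
# Enrollment Server is normally sufficient for a pod.
```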

To ensure availability, a second Enrollment Server should be deployed per pod (n+1). Additionally, deploy the Certificate Authority service in a highly available manner to achieve complete solution redundancy.

True SSO High Availability

Figure 91: True SSO High Availability

With two Enrollment Servers, and to achieve high availability, it is recommended to:

  • Co-host the Enrollment Server service with a Certificate Authority service on the same machine.
  • Configure the Enrollment Server to prefer to use the local Certificate Authority service.
  • Configure the Connection Servers to load-balance requests between the two Enrollment Servers.

Table 100: Implementation Strategy for Enrollment Servers

Decision

Two Enrollment Servers were deployed per pod.

These ran on dedicated Windows Server 2016 VMs located in the internal network.

These servers also had the Microsoft Certificate Authority service installed.

Justification

One Enrollment Server is capable of supporting a pod of 10,000 sessions.

A second server provides availability (n+1).

True SSO High Availability Co-located

Figure 92: True SSO High Availability Co-located

Load Balancing of Enrollment Servers

Two Enrollment Servers were deployed in the environment, and the Connection Servers were configured to communicate with both deployed Enrollment Servers. The Enrollment Servers can be configured to communicate with two Certificate Authorities.

By default, the Enrollment Servers use an active/failover method of load balancing. It is recommended to change this to round robin when configuring two Enrollment Servers per pod to achieve high availability.
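The difference between the two distribution modes can be sketched as follows. This is a conceptual model only, not the actual Horizon implementation, and the server names are hypothetical.

```python
import itertools

class EnrollmentServerSelector:
    """Conceptual sketch of the two request-distribution modes above."""

    def __init__(self, servers, mode="failover"):
        self.servers = list(servers)
        self.mode = mode
        self._rr = itertools.cycle(self.servers)

    def next_server(self, primary_healthy=True):
        if self.mode == "failover":
            # Active/failover: always use the first server while it is
            # healthy; only fail over when it is not.
            return self.servers[0] if primary_healthy else self.servers[1]
        # Round robin: alternate requests across both servers, so both
        # stay active and a failure affects only half the in-flight load.
        return next(self._rr)

rr = EnrollmentServerSelector(["es01", "es02"], mode="round-robin")
# Successive requests alternate: es01, es02, es01, ...
```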

Table 101: Strategy for Load Balancing Between the Enrollment Servers

Decision

The Connection Servers were configured to load-balance requests between the two Enrollment Servers using round robin.

Justification

With two Enrollment Servers per pod, this is the recommendation when designing for availability.

vSphere HA and VMware vSphere® Distributed Resource Scheduler™ (DRS) can be used to ensure the maximum availability of the Enrollment Servers. DRS anti-affinity rules are configured to ensure that these VMs do not reside on the same vSphere host.

Scaled Single-Site Architecture

The following diagram shows the server components and the logical architecture for a single-site deployment of Horizon 7. For clarity, the focus in this diagram is to illustrate the core Horizon 7 server components, so it does not include additional and optional components such as App Volumes, Dynamic Environment Manager, and Workspace ONE Access.

Note: In addition to Horizon 7 server components, the following diagram shows database components, including Microsoft availability group (AG) listeners.

On-Premises Single-Site Horizon 7 Architecture

Figure 93: On-Premises Single-Site Horizon 7 Architecture

Multi-site Architecture

This reference architecture documents and validates the deployment of all features of Horizon 7 Enterprise Edition across two data centers.

The architecture has the following primary tenets:

  • Site redundancy – Eliminate any single point of failure that can cause an outage in the service.
  • Data replication – Ensure that every layer of the stack is configured with built-in redundancy or high availability so that the failure of one component does not affect the overall availability of the desktop service.

To achieve site redundancy:

  • Services built using Horizon 7 are available in two data centers that are capable of operating independently.
  • Users are entitled to equivalent resources from both the primary and the secondary data centers.
  • Some services are available from both data centers (active/active).
  • Some services require failover steps to make the secondary data center the live service (active/passive).

To achieve data replication:

  • Any component, application, or data required to deliver the service in the second data center is replicated to a secondary site.
  • The service can be reconstructed using the replicated components.
  • The type of replication depends on the type of components and data, and the service being delivered.
  • The mode of the secondary copy (active or passive) depends on the data replication and service type.

Cloud Pod Architecture

A key component in this reference architecture, and what makes Horizon 7 Enterprise Edition truly scalable and able to be deployed across multiple locations, is Cloud Pod Architecture (CPA).

CPA introduces the concept of a global entitlement (GE) through joining multiple pods together into a federation. This feature allows us to provide users and groups with a global entitlement that can contain desktop pools or RDSH-published applications from multiple different pods that are members of this federation construct.

This feature provides a solution for many different use cases, even though they might have different requirements in terms of accessing the desktop resource.

The following figure shows a logical overview of a basic two-site CPA implementation, as deployed in this reference architecture design.

Cloud Pod Architecture 

Figure 94: Cloud Pod Architecture 

For the full documentation on how to set up and configure CPA, refer to Administering Cloud Pod Architecture in Horizon 7.

Important: This type of deployment is not a stretched deployment. Each pod is distinct, and all Connection Servers belong to a specific pod and are required to reside in a single location and run on the same broadcast domain from a network perspective.

In addition to allowing desktop pools from different pods to be members of a global entitlement, this architecture provides a property called scope. Scope allows us to define where new sessions should or could be placed, and it also allows users to reconnect to existing sessions (those in a disconnected state) when connecting to any of the pod members in the federation.

CPA can also be used within a site:

  • To use global entitlements that span multiple resource blocks and pools
  • To federate multiple pods on the same site, when scaling above the capabilities of a single pod

Table 102: Implementation Strategy for Using Cloud Pod Architecture

Decision

Separate pods were deployed in separate sites.

Cloud Pod Architecture was used to federate the pods.

Justification

This provides site redundancy and allows an equivalent service to be delivered to the user from an alternate location.

Active/Passive Architecture

Active/passive architecture uses two or more pods of Connection Servers, with at least one pod located in each data center. Pods are joined together using Cloud Pod Architecture configured with global entitlements.

Active/passive service consumption should be viewed from the perspective of the user. A user is assigned to a given data center with global entitlements, and user home sites are configured. The user actively consumes Horizon 7 resources from that pod and site and will only consume from the other site in the event that their primary site becomes unavailable.

Active/Passive Architecture

Figure 95: Active/Passive Architecture

Active/Active Architecture

Active/active architecture also uses two or more pods of Connection Servers, with at least one pod located in each data center. The pods are joined using Cloud Pod Architecture, which is configured with global entitlements.

As with an active/passive architecture, active/active service consumption should also be viewed from the perspective of the user. A user is assigned global entitlements that allow the user to consume Horizon 7 resources from either pod and site. No preference is given to which pod or site they consume from. The challenges with this approach are usually related to replication of user data between sites.

Active/Active Architecture

Figure 96: Active/Active Architecture

Stretched Active/Active Architecture (Unsupported)

This architecture is unsupported and is only shown here to stress why it is not supported. Connection Servers within a given site must always run on a well-connected LAN segment and therefore cannot be running actively in multiple geographical locations at the same time.

Unsupported Stretched Pod Architecture

Figure 97: Unsupported Stretched Pod Architecture

Multi-site Global Server Load Balancing

A common approach is to provide a single namespace for users to access Horizon pods deployed in separate locations. A Global Server Load Balancer (GSLB) or DNS load balancer solution can provide this functionality and can use placement logic to direct traffic to the local load balancer in an individual site. Some GSLBs can use information such as the user’s location to determine connection placement.

The use of a single namespace makes access simpler for users and allows for administrative changes or implementation of disaster recovery and failover without requiring users to change the way they access the environment.

Note the following features of a GSLB:

  • GSLB is similar to a Domain Name System (DNS) service in that it resolves a name to an IP address and directs traffic.
  • Compared to a DNS service, GSLB can usually apply additional criteria when resolving a name query.
  • Traffic does not actually flow through the GSLB to the end server.
  • Similar to a DNS server, the GSLB does not provide any port information in its resolution.
  • GSLB should be deployed in multiple nodes in an HA or active/passive configuration to ensure that the GSLB itself does not become a point of failure.
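The placement logic described above can be sketched conceptually. This is a simplified model with hypothetical site names and VIP addresses; real GSLB products add health probes, weighting, and topology databases.

```python
def resolve_namespace(client_region: str, site_vips: dict,
                      site_healthy: dict) -> str:
    """GSLB-style resolution sketch: answer a query for the single
    namespace with the VIP of the client's preferred healthy site,
    falling back to another healthy site if the preferred one is down."""
    healthy = [site for site, ok in site_healthy.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy sites available")
    preferred = client_region if client_region in healthy else healthy[0]
    return site_vips[preferred]

# Hypothetical sites and local load-balancer VIPs:
vips = {"site-a": "203.0.113.10", "site-b": "198.51.100.10"}
# A client near site-a normally receives site-a's VIP; if site-a is
# marked down, the same name resolves to site-b with no user change.
```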

Table 103: Strategy for Global Load Balancing

Decision

A global server load balancer was deployed.

Justification

This provides a common namespace so that users can access both sites.

Multi-site Architecture Diagram

The following diagram shows the server components and the logical architecture for a multi-site deployment of Horizon 7. For clarity, the focus in this diagram is to illustrate the core Horizon 7 server components, so it does not include additional and optional components such as App Volumes, Dynamic Environment Manager, and Workspace ONE Access.

On-Premises Multi-site Horizon 7 Architecture

Figure 98: On-Premises Multi-site Horizon 7 Architecture

Virtual Machine Build

Connection Servers and Composer servers run as Windows services. Specifications are detailed in Appendix A: VM Specifications. Each server is deployed with a single network card, and static IP address information is required for each server.

Table 104: Operating System Used for Server Components

Decision

Windows Server 2016 was used for the OS build.

IP address information was allocated for each server.

Justification

As a best practice, server VMs use the latest supported OS.

Physical Hosting

The Connection Server and Enrollment Server VMs are hosted on vSphere servers. vSphere HA and DRS can be used to ensure maximum availability.

Display Protocol

Horizon 7 is a multi-protocol solution. Three remoting protocols are available when creating desktop pools or RDSH-published applications: Blast Extreme, PCoIP, and RDP.

Table 105: Display Protocol for Virtual Desktops and RDSH-Published Apps

Decision

For this design, we leveraged Blast Extreme.

Justification

This display protocol supports multiple codecs (JPEG/PNG and H.264), both TCP and UDP as transport protocols, and hardware encoding with NVIDIA GRID vGPU.

This protocol has full feature and performance parity with PCoIP and is optimized for mobile devices, which can decode video using the H.264 codec in the device hardware.

Blast Extreme is configured through Horizon 7 when creating a pool. The display protocol can also be selected directly on the Horizon Client side when a user selects a desktop pool.

See the Blast Extreme Display Protocol in VMware Horizon 7 document for more information, including optimization tips.

VMware vRealize Operations for Horizon

Traditionally, management and monitoring of enterprise environments involved monitoring a bewildering array of systems, requiring administrators to switch between multiple consoles to support the environment.

VMware vRealize® Operations for Horizon® facilitates proactive monitoring and management of a Horizon environment and can also proactively monitor vSphere and display all information, alerts, and warnings for compute, storage, and networking.

vRealize Operations for Horizon provides end-to-end visibility into Horizon 7 and its supporting infrastructure, enabling administrators to:

  • Meet service-level agreements (SLAs)
  • Reduce the first time to resolution (FTR)
  • Improve user satisfaction
  • Proactively monitor the environment and resolve issues before they affect users
  • Optimize resources and lower management costs
  • Monitor reporting
  • Create custom dashboards

Architectural Components

vRealize Operations for Horizon consists of multiple components. These components are described here, and design options are discussed and determined.

vRealize Operations for Horizon Logical Architecture

Figure 99: vRealize Operations for Horizon Logical Architecture

vRealize Operations for Horizon consists of the following components:

  • vRealize Operations Manager
  • Horizon adapter
  • Broker agent
  • Desktop agent

Other adapters can be added to gather information from other sources; for example, the VMware vSAN management pack can be used to display vSAN storage metrics within the vRealize Operations Manager dashboards.

See VMware vRealize Operations for Horizon Installation for more detail.

Table 106: Implementation Strategy for vRealize Operations for Horizon

Decision

The latest versions of the vRealize Operations Manager and vRealize Operations for Horizon were deployed.

Justification

This meets the requirements for monitoring Horizon 7.

vRealize Operations Manager

vRealize Operations Manager can be deployed as a single node, as part of a cluster, or as a cluster with remote nodes.

  • Single node – A single-node deployment does not provide high availability and is limited in the number of objects it can support.
  • Cluster – A cluster consists of multiple nodes (appliances). This provides flexibility and the ability to scale to suit most enterprise deployments while providing high availability.
  • Cluster + remote collector node – Remote collector nodes are deployed in the data center or on a remote site to capture information before compressing and passing it back to the cluster.

vRealize Operations Manager appliances can perform various node roles, as described in the following table.

Table 107: vRealize Operations Manager Node Roles
Role Description

Cluster Management

A cluster consists of a master node and an optional replica node to provide high availability for cluster management. It can also have additional data nodes and optional remote collector nodes.

Deploy nodes to separate vSphere hosts to reduce the chance of data loss in the event that a physical host fails. You can use DRS anti-affinity rules to ensure that VMs remain on separate hosts.

Master Node

The initial, required node in vRealize Operations Manager. All other nodes are managed by the master node.

In a single-node installation, the master node manages itself, has adapters installed on it, and performs all data collection and analysis.

Replica Node

When high availability is enabled on a cluster, one of the data nodes is designated as a replica of the master node and protects the analytics cluster against the loss of a node.

Enabling HA within vRealize Operations Manager is not a disaster recovery solution. When you enable HA, you protect vRealize Operations Manager from data loss in the event that a single node is lost by duplicating data. If two or more nodes are lost, there might be permanent data loss.

Data Analytics Node

Additional nodes in a cluster that can perform data collection and analysis. Larger deployments usually have adapters on the data nodes so that master and replica node resources can be dedicated to cluster management.

Remote Collector Node

If vRealize Operations Manager is monitoring resources in additional data centers, you must use remote collectors and deploy the remote collectors in the remote data centers. Because of latency issues, you might need to modify the intervals at which the configured adapters on the remote collector collect information.

VMware recommends that latency between sites not exceed 200ms.

Remote collectors can also be used within the same data center as the cluster. Adapters can be installed on these remote collectors instead of the cluster nodes, freeing the cluster nodes to handle the analytical processing.

Collector Group

A collector group is a collection of nodes (analytic nodes and remote collectors). You can assign adapters to a collector group rather than to a single node.

If the node running the adapter fails, the adapter is automatically moved to another node in the collector group.

Sizing

vRealize Operations for Horizon can scale to support very high numbers of Horizon sessions. For enterprise deployments of vRealize Operations Manager, deploy all nodes as large or extra-large deployments, depending on sizing requirements and your available resources.

To assess the requirements for your environment, see the VMware Knowledge Base article vRealize Operations Manager 7.0 Sizing Guidelines (57903). Use the spreadsheet attached to this KB to assist with sizing.

Additionally, review the Reference Architecture Overview and Scalability Considerations in the vRealize Operations Manager documentation.

Table 108: Implementation Strategy for vRealize Operations Manager

Decision

Two large-sized nodes of vRealize Operations Manager were deployed, forming a cluster. The VM appliances were deployed in the internal network.

Justification

Two large cluster nodes support the number of Horizon VMs (8,000), and meet requirements for high availability.

Although medium-sized nodes would suffice for the current number of VMs, deploying large-sized nodes follows best practice for enterprise deployments and allows for growth in the environment without needing to rearchitect.

Horizon Adapter

The Horizon adapter obtains inventory information from the broker agent and collects metrics and performance data from desktop agents. The adapter passes this data to vRealize Operations Manager for analysis and visualization.

The Horizon adapter runs on the master node or a remote collector node in vRealize Operations Manager. Adapter instances are paired with one or more broker agents to receive communications from them.

Creating a Horizon adapter instance on a remote collector node is recommended in the following scenarios:

  • With large-scale environments of over 5,000 desktops, to give better scalability and to offload processing from cluster data nodes.
  • With remote data centers to minimize network traffic across WAN or other slow connections.
    • Deploy a remote collector node in each remote data center.
    • Create an adapter instance on each remote collector node and pair each instance with the broker agent that is located in the same data center.
  • Creating the Horizon adapter instance on a collector group is not supported.
    • If a failover occurs and the Horizon adapter instance is moved to a different collector in the group, it cannot continue to collect data.
    • To prevent communication interruptions, create the adapter instance on a remote collector node.

Creating more than one Horizon adapter instance per collector is not supported. You can pair the broker agents installed in multiple pods with a single Horizon adapter instance, as long as the total number of desktops in those pods does not exceed 10,000. If you need to create multiple adapter instances, you must create each instance on a different node.

Table 109: Implementation Strategy for Horizon Adapters

Decision

A large remote collector node was deployed for each site.

A Horizon adapter instance was created on each of these nodes to collect data from their local Horizon pods.

Justification

Separating the Horizon adapter onto a remote collector is recommended in environments of more than 5,000 desktops. The environment is designed for 8,000 users.

Horizon adapter instances are not supported on a collector group. Using remote collectors for remote sites allows for efficient data collection.

Broker Agent

The broker agent is a Windows service that runs on a Horizon Connection Server host. It collects Horizon 7 inventory information and sends that information to the vRealize Operations for Horizon adapter.

  • The broker agent is installed on one Connection Server host in each Horizon 7 pod.
  • Only one broker agent exists in each Horizon 7 pod.

Table 110: Implementation Strategy for the Broker Agent

Decision

The broker agent was configured to collect information from the event database.

The agent was deployed to a single Connection Server within each pod.

Justification

The broker agent is a required component to allow vRealize Operations for Horizon to collect data from the Horizon environment. Only a single broker per pod is supported.

Desktop Agent

The vRealize Operations for Horizon desktop agent runs on each remote desktop or RDSH server VM in the Horizon 7 environment.

It collects metrics and performance data and sends that data to the Horizon adapter. Metrics collected by the desktop agent include:

  • Desktop and application objects
  • Users’ login time and duration
  • Session duration
  • Resource and protocol information

The vRealize Operations for Horizon desktop agent can be installed as a part of the Horizon Agent installation. See the table in Desktop Agent to find the version included with the version of Horizon Agent being used and determine whether you need to install a newer version separately.

Table 111: Implementation Strategy for the vRealize Operations for Horizon Desktop Agent

Decision

The desktop agent was installed as part of the standard Horizon Agent and was enabled during installation.

Justification

With Horizon Agent 7.7, the included version of the vRealize Operations for Horizon desktop agent supports the selected version of vRealize Operations for Horizon.

 

Component Design: Horizon Cloud Service on Microsoft Azure

VMware Horizon® Cloud Service™ is available using a software-as-a-service (SaaS) model. This service comprises multiple software components.

Horizon Cloud Service on Microsoft Azure

Figure 100: Horizon Cloud Service on Microsoft Azure

Horizon Cloud Service provides a single cloud control plane, run by VMware, that enables the central orchestration and management of remote desktops and applications in your Microsoft Azure capacity, in the form of one or multiple subscriptions in Microsoft Azure.

VMware is responsible for hosting the Horizon Cloud Service control plane and providing feature updates and enhancements for a software-as-a-service experience. The Horizon Cloud Service is an application service that runs in multiple Amazon Web Services (AWS) regions. 

The cloud control plane also hosts a common management user interface called the Horizon Cloud Administration Console, or Administration Console for short. The Administration Console runs in industry-standard browsers. It provides you with a single location for management tasks involving user assignments, virtual desktops, RDSH-published desktop sessions, and applications. This service is currently hosted in three AWS regions: United States, Germany, and Australia. The Administration Console is accessible from anywhere at any time, providing maximum flexibility.

Horizon Cloud Service on Microsoft Azure Deployment Overview

A successful deployment of VMware Horizon® Cloud Service™ on Microsoft Azure depends on good planning and a robust understanding of the platform. This section discusses the design options and details the design decisions that were made to satisfy the design requirements of this reference architecture.

The core elements of Horizon Cloud Service include:

  • Horizon Cloud control plane
  • Horizon Cloud Manager VM, which hosts the Administration Console UI
  • VMware Unified Access Gateway™
  • Horizon Agent
  • VMware Horizon® Client™

The following figure shows the high-level logical architecture of these core elements. Other components are shown for illustrative purposes.

Horizon Cloud Service on Microsoft Azure Logical Architecture

Figure 101: Horizon Cloud Service on Microsoft Azure Logical Architecture

This figure demonstrates the basic logical architecture of a Horizon Cloud Service pod on your Microsoft Azure capacity.

  • Your Microsoft Azure infrastructure as a service (IaaS) provides capacity.
  • Your Horizon Cloud Service control plane is granted permission to create and manage resources with the use of a service principal in Microsoft Azure.
  • You provide additional required components, such as Active Directory, as well as optional components, such as a Workspace ONE Connector or RDS license servers.
  • The Horizon Cloud Service control plane initiates the deployment of the Horizon Cloud Manager VM, Unified Access Gateway appliances for secure remote access, and other infrastructure components that assist with the configuration and management of the Horizon Cloud Service infrastructure.
  • After the Horizon Cloud Service pod is deployed, you can connect the pod to your own corporate AD infrastructure or create a new AD configuration in your Microsoft Azure subscription. You deploy VMs from the Microsoft Azure marketplace, which are sealed into images, and can be used in RDSH server farms.
  • With the VDI functionality, you can also create Windows 10 assignments of both dedicated and floating desktops.

Horizon Cloud Service on Microsoft Azure includes the following components and features.

Table 112: Components of Horizon Cloud on Microsoft Azure
Component Description

Jump box

The jump box is a temporary Linux-based VM used during environment buildout and for subsequent environment updates and upgrades.

One jump box is required per Azure pod only during platform buildout and upgrades.

Management VM

The management VM appliance provides access for administrators and users to operate and consume the platform.

One management VM appliance is constantly powered on; a second is required during upgrades.

Horizon Cloud control plane

This cloud-based control plane is the central location for conducting all administrative functions and policy management. From the control plane, you can manage your virtual desktops and RDSH server farms and assign applications and desktops to users and groups from any browser on any machine with an Internet connection.

The cloud control plane provides access to manage all Horizon Cloud pods deployed to your Microsoft Azure infrastructure in a single, centralized user interface, no matter which regional data center you use.

Horizon Cloud Administration Console

This component of the control plane is the web-based UI that administrators use to provision and manage Horizon Cloud desktops and applications, resource entitlements, and VM images.

The Administration Console provides full life-cycle management of desktops and Remote Desktop Session Host (RDSH) servers through a single, easy-to-use web-based console. Organizations can securely provision and manage desktop models and entitlements, as well as native and remote applications, through this console.

The console also provides usage and activity reports for various user, administrative, and capacity-management activities.

Horizon Agent

This software service, installed on the guest OS of all virtual desktops and RDSH servers, allows them to be managed by Horizon Cloud pods.

Horizon Client

This software, installed on the client device, allows a physical device to access a virtual desktop or RDSH-published application in a Horizon deployment. You can optionally use an HTML client on devices for which installing software is not possible.

Unified Access Gateway

This gateway is a hardened Linux virtual appliance that allows for secure remote access to the Horizon Cloud environment. This appliance is part of the Security Zone (for external Horizon Cloud access) and the Services Zone (for internal Horizon Cloud access).

The Unified Access Gateway appliances deployed as part of a Horizon Cloud pod are load balanced by an automatically deployed and configured Microsoft Azure load balancer. The design decisions for load balancing within a pod are already made for you.

RDSH servers

These Windows Server VMs provide published applications and session-based remote desktops to end users.

Table 113: Implementation Strategy for Horizon Cloud Service on Microsoft Azure

Decision

A Horizon Cloud Service on Microsoft Azure deployment was designed and integrated with the Workspace ONE platform.

This design accommodates an environment capable of scaling to 6,000 concurrent connections or users.

Justification

This strategy allowed the design, deployment, and integration to be validated and documented.

Availability and Scalability

When creating your design, keep in mind that you want an environment that can scale up when necessary and also remain highly available. Design decisions must account for certain limitations of both Microsoft Azure and Horizon Cloud.

The term Horizon Cloud pod describes a deployment of Horizon Cloud in a Microsoft Azure subscription. The primary component of a pod is the Horizon Cloud Manager VM, which is supported by several additional functional components, such as a jump box and Unified Access Gateway virtual appliances.

Several Microsoft Azure platform components and services are used in a pod, such as Microsoft Azure Database for PostgreSQL Service, Microsoft Azure load balancers, and Microsoft Azure virtual networks (VNets). A full list of platform requirements can be found in the Horizon Cloud Service on Microsoft Azure Requirements Checklist for New Pod Deployments.

Availability

One key design principle is to remove single points of failure in the deployment. The Horizon Cloud pod has a few components to protect against a single point of failure.

  • Two Unified Access Gateway virtual appliances are deployed by default, along with a Microsoft Azure load balancer configured to route traffic to the primary Unified Access Gateway. The load balancer tests availability with a health-monitoring HTTP GET request for /favicon.ico on port 80.

  • You can optionally deploy a secondary pod manager VM with a Microsoft Azure load balancer configured to route traffic to the currently active pod manager VM. 
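
The load balancer's health check can be mimicked with a minimal probe: an HTTP GET for /favicon.ico on port 80 against each appliance. This is a hedged sketch; the host name below is purely illustrative, and the real check is performed by the Azure load balancer itself.

```python
# Minimal sketch of the health check described above (illustrative only).
from urllib.request import urlopen

def uag_healthy(host, timeout=5):
    """Return True if the appliance answers the favicon health probe."""
    try:
        with urlopen(f"http://{host}:80/favicon.ico", timeout=timeout) as resp:
            return resp.getcode() == 200
    except OSError:
        # Covers DNS failures, refused connections, and HTTP errors.
        return False

# An unreachable or unresolvable appliance simply fails the probe.
print(uag_healthy("uag1.invalid", timeout=2))  # False
```
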

Horizon Cloud on Microsoft Azure leverages cloud-based software components that provide functionality for the Horizon Cloud pod, such as monitoring, image creation, and an administrative interface. To maintain the health and function of the pod, it must have line-of-sight visibility to several cloud-based services. A full list of the DNS names that must be reachable is documented in DNS Requirements for a Horizon Cloud Pod in Microsoft Azure.
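
The line-of-sight requirement can be verified with a quick resolvability check from the pod's network. The endpoint list here is a placeholder; consult the DNS requirements document referenced above for the authoritative list.

```python
# Illustrative check that required control-plane DNS names resolve from
# this network. The name list is a placeholder, not the official list.
import socket

def resolvable(hostname):
    """Return True if the given DNS name resolves from this network."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

required = ["cloud.horizon.vmware.com"]  # placeholder entry
unreachable = [name for name in required if not resolvable(name)]
if unreachable:
    print("Missing line-of-sight to:", unreachable)
```
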

Horizon Cloud on Microsoft Azure operates using Microsoft Azure infrastructure components. Subscriptions are hosted in Azure regions (data centers) located throughout the world. Outages and service degradations on the Microsoft Azure platform can result in problems with the operations of a Horizon Cloud pod. Furthermore, Microsoft has regular maintenance windows for upgrades to the platform, and although most maintenance activities do not affect the operations of VMs, some may. For more information, see the Microsoft documents Maintenance for virtual machines in Azure and SLA summary for Azure services. You can view the current status of Azure regions on the Azure status page.

Scalability

The way to expand a Horizon Cloud on Microsoft Azure environment is to deploy additional pods.

Horizon Cloud on Microsoft Azure has certain configuration maximums you must consider when making design decisions:

  • Up to 2,000 concurrent active connections are supported per Horizon Cloud pod.
  • Up to 2,000 desktop and RDSH server VMs are supported per Horizon Cloud pod.
  • Up to 2,000 desktop and RDSH server VMs are supported per Microsoft Azure region or subscription.
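
Given the 2,000-connection per-pod maximum listed above, the number of pods required for a target concurrency is simple ceiling division, as this sketch shows:

```python
import math

MAX_CONNECTIONS_PER_POD = 2000  # concurrent active connections per pod

def pods_needed(concurrent_users):
    """Minimum number of Horizon Cloud pods for the target concurrency."""
    return math.ceil(concurrent_users / MAX_CONNECTIONS_PER_POD)

# The 6,000-user design in this reference architecture requires three pods.
print(pods_needed(6000))  # 3
```
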

To handle larger user environments, you can deploy multiple Horizon Cloud pods, but take care to follow the accepted guidelines for segregating the pods from each other. For example, under some circumstances, you might deploy one pod in each of two different Microsoft Azure regions, or you might deploy two pods in the same subscription and region, as long as the IP address space is large enough to handle multiple deployments.

For more information, see VMware Horizon Cloud Service on Microsoft Azure Service Limits.

For information about creating subnets and address spaces, see Configure the Required Virtual Network in Microsoft Azure.

Table 114: Implementation Strategy for Horizon Cloud Pods

Decision

Three Horizon Cloud pods were deployed.

Justification

This design meets the requirements for scaling to 6,000 concurrent connections or users.

Configuration Maximums for Microsoft Azure Subscriptions

Horizon Cloud on Microsoft Azure leverages Microsoft Azure infrastructure to deliver desktops and applications to end users. Each Microsoft Azure region can have different infrastructure capabilities. You can leverage multiple Microsoft Azure regions for your infrastructure needs.

A Microsoft Azure region is a set of data centers deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.

These deployments are a part of your Microsoft Azure subscription or subscriptions. A subscription is a logical segregation of Microsoft Azure capacity that you are responsible for. You can have multiple Microsoft Azure subscriptions as a part of the organization defined for you in Microsoft Azure.

A Microsoft Azure subscription is an agreement with Microsoft to use one or more Microsoft cloud platforms or services, for which charges accrue based either on a per-user license fee or on cloud-based resource consumption. For more information on Microsoft Azure subscriptions, see Subscriptions, licenses, accounts, and tenants for Microsoft's cloud offerings.

Some of the limitations for individual Microsoft Azure subscriptions might impact designs for larger Horizon Cloud on Microsoft Azure deployments. For details about Microsoft Azure subscription limitations, see Azure subscription and service limits, quotas, and constraints. Microsoft Azure has a maximum of 10,000 vCPUs that can be allotted for any given Microsoft Azure subscription per region.

If you plan to deploy 2,000 concurrent VDI user sessions in a single deployment of Horizon Cloud on Microsoft Azure, consider the VM configurations you require. If necessary, you can leverage multiple Microsoft Azure subscriptions for a Horizon Cloud on Microsoft Azure deployment.

Note: You might need to request increases in quota allotment for your subscription in any given Microsoft Azure region to accommodate your design.
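
The subscription math used in this design works out as follows: concurrent sessions multiplied by vCPUs per desktop, compared against the 10,000-vCPU default cap per subscription per region. A sketch of the arithmetic:

```python
import math

VCPU_QUOTA_PER_SUBSCRIPTION = 10_000  # vCPU cap per subscription per region

def subscriptions_needed(sessions, vcpus_per_desktop):
    """Total vCPUs required and the minimum subscription count."""
    total_vcpus = sessions * vcpus_per_desktop
    return total_vcpus, math.ceil(total_vcpus / VCPU_QUOTA_PER_SUBSCRIPTION)

# 6,000 VDI sessions at 2 vCPUs each need 12,000 vCPUs, which exceeds one
# subscription's quota, so at least two subscriptions are required.
total, subs = subscriptions_needed(6000, 2)
print(total, subs)  # 12000 2
```
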

Table 115: Implementation Strategy Regarding Microsoft Azure Subscriptions

Decision

Multiple Microsoft Azure subscriptions were used.

Justification

This strategy provides an environment capable of scaling to 6,000 concurrent connections or users, where each session involves a VDI desktop with 2 vCPUs (or cores), making a total requirement of 12,000 vCPUs.

Because the requirement for 12,000 vCPUs exceeds the maximum number of vCPUs allowed per individual subscription, multiple subscriptions must be used.

Other Design Considerations

Several cloud- and SaaS-based components are included in a Horizon Cloud on Microsoft Azure deployment. The operation and design of these services are considered beyond the scope of this reference architecture because it is assumed that no design decisions you make will impact the nature of the services themselves. Microsoft publishes a Service Level Agreement for individual components and services provided by Microsoft Azure. 

Horizon Cloud on Microsoft Azure uses Azure availability sets for some components included in the Horizon Cloud pod—specifically for the two Unified Access Gateways that are deployed as a part of any Internet-enabled deployment. 

You can manually build and configure Horizon Cloud pods to provide applications and desktops in the event that you have an issue accessing a Microsoft Azure regional data center. Microsoft has suggestions for candidate regions for disaster recovery. For more information, see Business continuity and disaster recovery (BCDR): Azure Paired Regions.

Horizon Cloud on Microsoft Azure has no built-in functionality to handle business continuity or regional availability issues. In addition, the Microsoft Azure availability services and features are not supported by Horizon Cloud on Microsoft Azure.

Network Design

Horizon Cloud on Microsoft Azure is a simple solution for providing desktops and streamed applications to your end users. The deployment is straightforward: you prepare a Microsoft Azure subscription and provide its details to VMware, and the Horizon Cloud Service deploys a Horizon Cloud on Microsoft Azure pod into that subscription on your behalf.

However, some companies need more flexibility in pod deployment options. For example, with the new custom deployment options available in Horizon Cloud on Microsoft Azure, you can configure segregated development, testing, and production environments, yet allow your application development team to access them all from the same Horizon Cloud on Microsoft Azure pod. 

For a basic Horizon Cloud on Microsoft Azure deployment, all components of the pod are deployed into the same Microsoft Azure VNet in the same subscription.

Basic Horizon Cloud on Microsoft Azure Deployment – Same VNet and Subscription for All Components

Figure 102: Basic Horizon Cloud on Microsoft Azure Deployment – Same VNet and Subscription for All Components

Starting with Horizon Cloud on Microsoft Azure 1.5, two deployment options have been added to facilitate these architectures.

  • Use a Different Subscription for External Gateway
  • Use a Different Virtual Network

Using these two deployment options allows you to deploy Horizon Cloud on Microsoft Azure to accommodate a hub and spoke architecture built within Microsoft Azure. 

When you choose either of these new deployment options, the Unified Access Gateway configuration components are deployed into a separate VNet or Azure subscription from the rest of the Horizon Cloud on Microsoft Azure pod. These options require you to follow an amended set of prerequisites to make Horizon Cloud on Microsoft Azure function properly.  

Important: For both of these deployment options, if you plan to create a network peering to provide visibility between your VNets, be sure to create the required subnets before running the deployment wizard. See In Advance of Pod Deployment, Create the Horizon Cloud Pod's Required Subnets on your VNet in Microsoft Azure.

Table 116: Implementation Strategy for Horizon Cloud on Microsoft Azure Networks
Decision

The default network configuration was used for all deployments. That is, the same subscription and VNet were used for all components.

Justification

This strategy provided the simplest method of deployment. For our test environment, there was no need to segregate the environment due to security concerns or areas of responsibility.

Using a Different Subscription for an External Gateway

You can choose to deploy your Unified Access Gateway appliances into a separate subscription by toggling this option in the Horizon Cloud on Microsoft Azure deployment wizard. 

Selecting a Separate Subscription for Unified Access Gateway Appliances

Figure 103: Selecting a Separate Subscription for Unified Access Gateway Appliances

This option allows you to deploy the Unified Access Gateway (UAG) components into a separate subscription, as depicted in the following figure.

Additional Azure Subscription Used for the External Gateway

Figure 104: Additional Azure Subscription Used for the External Gateway

With this configuration, you must make sure that the two subscriptions have line-of-sight visibility to each other, for example through network peering between their VNets.

Also see Prerequisites for Running the Pod Deployment Wizard, in the Horizon Cloud Deployment Guide.

Deploying Unified Access Gateways into a Different VNet

You can choose to deploy your Unified Access Gateway appliances into a separate virtual network by toggling this option in the Horizon Cloud on Microsoft Azure deployment wizard. 

Selecting a Separate Virtual Network for Unified Access Gateway Appliances

Figure 105: Selecting a Separate Virtual Network for Unified Access Gateway Appliances

This option allows you to deploy the Unified Access Gateway components into a separate VNet, as depicted in the following figure.

Unified Access Gateway in a Separate VNet – Same Subscription

Figure 106: Unified Access Gateway in a Separate VNet – Same Subscription

With this configuration, you must make sure that the two VNets have line-of-sight visibility to each other, for example through network peering.

Also see Prerequisites for Running the Pod Deployment Wizard, in the Horizon Cloud Deployment Guide.

Using the Options for a Separate Subscription or VNet in Other Configurations

The use of these two deployment options opens the door to a number of deployment configurations that were not available until now. The diagrams that follow describe examples of typical configurations that companies have used for more complex deployments. For example, the deployment configuration can make external traffic flow through a separate VNet, where network virtual appliances are deployed to provide a DMZ. This DMZ could manage Internet-based traffic prior to allowing access to the Unified Access Gateway appliances. This strategy follows the example described in the Microsoft document Hub-Spoke network topology with shared services in Azure.

Using a Separate VNet for Trusted UAG-NVA Traffic to Horizon Cloud Components

The configurations depicted in the following diagrams show how you can deploy the components using a separate VNet for network virtual appliances (NVAs), with a trusted connection to the Horizon Cloud on Microsoft Azure components.

Figure 107: Trusted Unified Access Gateway Traffic Using a Separate VNet – Same Subscription, with NVA

Figure 108: Trusted Unified Access Gateway Traffic Using a Separate VNet – Separate Subscriptions, with NVA

Using a Separate VNet for Untrusted NVA-UAG Traffic to Horizon Cloud Components

The configurations depicted in the following diagrams show how you can deploy the components using a separate VNet for NVAs. In this example, traffic from the Unified Access Gateway appliances is considered untrusted. This configuration introduces another layer of security by routing traffic back to the NVAs to handle trusted and untrusted connections.

Figure 109: Untrusted Unified Access Gateway Traffic Using a Separate VNet – Separate Subscriptions, with Front-End and Backend Interfaces on NVA

Figure 110: Trusted Unified Access Gateway Traffic Using a Separate VNet – Separate Subscriptions, with Front-End and Backend Interfaces on NVA

Using a Separate VNet for Untrusted UAG-NVA Traffic with Front-End and Backend NVAs

The configurations depicted in the diagrams that follow show how you can deploy the components using an additional separate VNet for more NVAs. In this example, traffic from the Unified Access Gateway appliances is still considered untrusted. This configuration introduces another layer of security by routing traffic into additional NVAs to handle trusted and untrusted connections.

Figure 111: Untrusted Unified Access Gateway Traffic Using a Separate VNet – Same Subscription, with Front-End and Backend NVAs

Figure 112: Untrusted Unified Access Gateway Traffic Using a Separate VNet – Separate Subscriptions, with Front-End and Backend NVAs

Single-Site Design

Microsoft Azure is a cloud-based service platform that is deployed in many Azure data centers throughout the world, organized into regions. As the Microsoft Azure regions page states: “A region is a set of data centers deployed within a latency-defined perimeter and connected through a low-latency network.” 

It is good practice to select an Azure region near your primary user groups or applications to achieve low network latency between your users’ VDI desktops or RDSH server farms and the applications they use. Doing so will have a positive impact on your users’ experience with Horizon Cloud on Microsoft Azure. Also make sure that the region you select has access to all the Azure products and services you plan to use. Azure products are not distributed uniformly across all Azure regions. For example, some virtual machine types may not be available in any given region. See Products available by region.

Table 117: Implementation Strategy for Microsoft Azure Regions

Decision

We decided to use the Azure regions East US and East US 2 for data centers.

Justification

Our current on-premises data center is in Atlanta, GA (USA). The East US and East US 2 Azure regions are in different parts of Virginia (USA). The Azure regional data centers are roughly between 300 and 350 miles from Atlanta and provide relatively low-latency connections (< 30 ms) to the Atlanta area.

East US and East US 2 have less than 10 ms latency between them. This setup provides an opportunity to distribute workloads geographically across three separate locations and still have a low-latency connection from any given site to the others.

You can deploy multiple Horizon Cloud on Microsoft Azure pods into a single Azure region, which enables you to serve more than 2,000 users in any given locality. To accomplish this, you must use multiple Azure subscriptions in the same region.
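
The 2,000-user pod limit translates into simple capacity arithmetic. The following sketch is illustrative only (the constant and function names are not product APIs); it shows how the pod count, and therefore the subscription count, grows with concurrency.

```python
import math

POD_SESSION_LIMIT = 2000  # sessions per Horizon Cloud pod, per the guidance above

def pods_required(concurrent_sessions: int) -> int:
    """Minimum pods (each in its own subscription) needed in one region."""
    return max(1, math.ceil(concurrent_sessions / POD_SESSION_LIMIT))

print(pods_required(5000))  # 3 pods, and therefore 3 Azure subscriptions
```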

Figure 113: Logical Diagram of Horizon Cloud Deployments – Multiple Pods in a Single Azure Region

Multi-site Design

You can deploy Horizon Cloud pods to multiple Microsoft Azure regions and manage them all through the Horizon Cloud Administration Console. Each Horizon Cloud pod is a separate entity and is managed individually. VM master images, assignments, and users must all be managed within each pod. No cross-pod entitlement or resource sharing is available.

Figure 114: Logical Diagram Showing Horizon Cloud Deployments – Multiple Pods in Multiple Azure Regions

Table 118: Implementation Strategy for Multi-site Deployments
Decision

Three Horizon Cloud pods were deployed to Microsoft Azure regions:

  • Two pods were deployed to the East US region of Microsoft Azure.
  • One pod was deployed to the East US 2 region of Microsoft Azure.

Each region used a different subscription.

Justification

The use of separate Microsoft Azure regions illustrates how to scale and deploy Horizon Cloud for multi-site deployments.

Note: A split-horizon DNS configuration might be required for a multi-site deployment, depending on how you want users to access the Horizon Cloud on Microsoft Azure environment. You can combine both approaches (multiple pods in a region and pods in multiple regions) to scale across multiple subscriptions in the same and other regions as required.

External Access

You can configure each pod to provide access to desktops and applications for end users located outside of your corporate network. By default, Horizon Cloud pods allow users to access the Horizon Cloud environment from the Internet. When the pod is deployed with this ability configured, the pod includes a load balancer and Unified Access Gateway instances to enable this access.

If you do not select Internet Enabled Desktops for your deployment, clients must connect directly to the pod and not through Unified Access Gateway. In this case, you must perform some post-deployment steps to create the proper internal network routing rules so that users on your corporate network have access to your Horizon Cloud environment.

If you decide to implement Horizon Cloud on Microsoft Azure so that only internal connections are allowed, you must configure your DNS correctly with a split-horizon DNS configuration.
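
The effect of a split-horizon DNS configuration can be sketched with a toy resolver: the same FQDN returns different answers depending on which horizon serves the query. The zone data, FQDN, and addresses below are hypothetical.

```python
# Hypothetical zone data: internal clients resolve to the pod's internal
# load balancer; external clients resolve to the public Unified Access
# Gateway load balancer.
INTERNAL_ZONE = {"desktops.example.com": "10.0.8.10"}
EXTERNAL_ZONE = {"desktops.example.com": "52.170.12.34"}

def resolve(fqdn: str, internal_client: bool) -> str:
    zone = INTERNAL_ZONE if internal_client else EXTERNAL_ZONE
    return zone[fqdn]

print(resolve("desktops.example.com", internal_client=True))   # 10.0.8.10
print(resolve("desktops.example.com", internal_client=False))  # 52.170.12.34
```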

Entitlement to Multiple Pods

You can manually spread users across multiple Horizon Cloud pods. However, each Horizon Cloud pod is managed individually, and there is no way to cross-entitle users to multiple pods. Although the same user interface is used to manage multiple Horizon Cloud pods, you must deploy separate VM images, RDSH server farms, and assignments on each pod individually. 

You can mask this complexity from a user’s point of view by implementing Workspace ONE Access™ so that end users must use VMware Workspace ONE® to access resources. For example, you could entitle different user groups to have exclusive access to different Horizon Cloud on Microsoft Azure deployments, and then join each pod to the same Active Directory.

Note: Although this method works, there is currently no product support for automatically balancing user workloads across Horizon Cloud pods.

Optional Components for a Horizon Cloud Service on Microsoft Azure Deployment

You can implement optional components to provide additional functionality and integration with other VMware products:

  • Workspace ONE Access – Implement and integrate the deployment with Workspace ONE Access so that end users can access all their apps and virtual desktops from a single unified catalog.
  • VMware Dynamic Environment Manager™ – Leverage Dynamic Environment Manager to provide a wide range of capabilities such as personalization of Windows and applications, contextual policies for enhanced user experience, and privilege elevation so that users can install applications without having administrator privileges.
  • True SSO Enrollment server – Deploy a True SSO Enrollment Server to integrate with Workspace ONE Access and enable single-sign-on features in your deployment. Users will be automatically logged in to their Windows desktop when they open a desktop from the Workspace ONE user interface.

Shared Services Prerequisites

The following shared services are required for a successful Horizon Cloud on Microsoft Azure deployment:

  • DNS – DNS is used to provide name resolution for both internal and external computer names. For more information, see Configure the Virtual Network’s DNS Server.
  • Active Directory – There are multiple configurations you can use for an Active Directory deployment. You can choose to host Active Directory completely on-premises, completely in Microsoft Azure, or in a hybrid (on-premises and in Microsoft Azure) deployment of Active Directory for Horizon Cloud on Microsoft Azure. For supported configurations, see Active Directory Domain Configurations.
  • RDS licensing – For connections to RDSH servers, each user or device requires an RDS Client Access License (CAL). RDS licensing infrastructure can be deployed either on-premises or in a Microsoft Azure region, based on your organization’s needs. For details, see License your RDS deployment with client access licenses (CALs).
  • DHCP – In a Horizon environment, desktops and RDSH servers rely on DHCP to get IP addressing information. Microsoft Azure provides DHCP services as a part of the platform. You do not need to set up a separate DHCP service for Horizon Cloud Service on Microsoft Azure. For information on how DHCP works in Microsoft Azure, see Address Types in Add, change, or remove IP addresses for an Azure network interface.
  • Certificate services – The Unified Access Gateway capability in your pod requires SSL/TLS for client connections. To serve Internet-enabled desktops and published applications, the pod deployment wizard requires a PEM-format file. This file provides the SSL/TLS server certificate chain to the pod’s Unified Access Gateway configuration. The single PEM file must contain the entire certificate chain, including the SSL/TLS server certificate, any necessary intermediate CA certificates, the root CA certificate, and the private key.

    For additional details about certificate types used in Unified Access Gateway, see Selecting the Correct Certificate Type. Also see Environment Infrastructure Design for details on how certificates impact your Horizon Cloud on Microsoft Azure deployment.
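
The single-file ordering described above can be illustrated with a small helper. This is a sketch only — `build_pem_bundle` is not a VMware tool, and the PEM blocks are placeholders, not real certificates.

```python
def build_pem_bundle(server_cert: str, intermediates: list,
                     root_cert: str, private_key: str) -> str:
    """Concatenate the chain in the order listed above: server certificate,
    intermediate CA certificates, root CA certificate, then private key."""
    parts = [server_cert, *intermediates, root_cert, private_key]
    return "\n".join(p.strip() for p in parts) + "\n"

# Placeholder blocks stand in for real PEM-encoded material.
server = "-----BEGIN CERTIFICATE-----\nserver\n-----END CERTIFICATE-----"
inter = "-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----"
root = "-----BEGIN CERTIFICATE-----\nroot\n-----END CERTIFICATE-----"
key = "-----BEGIN PRIVATE KEY-----\nkey\n-----END PRIVATE KEY-----"

bundle = build_pem_bundle(server, [inter], root, key)
assert bundle.index("server") < bundle.index("intermediate") < bundle.index("root")
```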

Authentication

One method of accessing Horizon desktops and applications is through Workspace ONE Access. This requires integration between the Horizon Cloud Service and Workspace ONE Access using the SAML 2.0 standard to establish mutual trust, which is essential for single sign-on (SSO) functionality.

  • When SSO is enabled, users who log in to Workspace ONE Access with Active Directory credentials can launch remote desktops and applications without having to go through a second login procedure when they access a Horizon desktop or application.
  • When users authenticate to Workspace ONE Access with mechanisms other than AD credentials, True SSO can be used to provide SSO to Horizon resources.

For details, see Integrate a Horizon Cloud Node with a Workspace ONE Access Environment and Configure True SSO for Use with Your Horizon Cloud Environment.

See the chapter Platform Integration for more detail on integrating Horizon Cloud with Workspace ONE Access.

True SSO

Many user authentication options are available for logging in to Workspace ONE Access or Workspace ONE; Active Directory credentials are only one of them. Ordinarily, using anything other than AD credentials would prevent single sign-on to a Horizon virtual desktop or published application through Horizon Cloud on Microsoft Azure: after selecting the desktop or published application from the catalog, the user would be prompted to authenticate again, this time with AD credentials.

True SSO provides users with SSO to Horizon Cloud on Microsoft Azure desktops and applications regardless of the authentication mechanism used. True SSO uses SAML, where Workspace ONE is the Identity Provider (IdP) and the Horizon Cloud pod is the Service Provider (SP). True SSO generates unique, short-lived certificates to manage the login process. This enhances security because no passwords are transferred within the data center.

Figure 115: True SSO Logical Architecture

True SSO requires a new service—the Enrollment Server—to be installed. 

Table 119: Implementation Strategy for SSO Using Authentication Mechanisms Other Than AD Credentials

Decision

True SSO was implemented.

Justification

This strategy allows for SSO to Horizon Cloud Service on Microsoft Azure desktops and applications through Workspace ONE Access, even when the user does not authenticate with Active Directory credentials.

Design Overview

For True SSO to function, several components must be installed and configured within the environment. This section discusses the design options and details the design decisions that satisfy the requirements.

The Enrollment Server is responsible for receiving certificate-signing requests from the Connection Server and passing them to the Certificate Authority to sign using the relevant certificate template. The Enrollment Server is a lightweight service that can be installed on a dedicated Windows Server 2016 VM, or it can run on the same server as the Microsoft Certificate Authority service.

Scalability for True SSO

A single Enrollment Server can easily handle all the requests from a single pod. The constraining factor is usually the Certificate Authority (CA). A single CA can generate approximately 70 certificates per second (based on a single vCPU). This usually increases to over 100 when multiple vCPUs are assigned to the CA VM.

To ensure availability, deploy a second Enrollment Server per pod (n+1). Additionally, deploy the Certificate Authority service in a highly available manner to achieve complete solution redundancy.
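
The signing rate above gives a quick way to estimate login-storm behavior. This is a back-of-the-envelope sketch using the approximate figures from the text; the function is illustrative, not a sizing tool.

```python
CERTS_PER_SECOND = 70  # approximate single-vCPU CA signing rate cited above

def login_storm_seconds(users: int, ca_count: int = 1,
                        rate: float = CERTS_PER_SECOND) -> float:
    """Time for every user in a storm to receive a short-lived True SSO certificate."""
    return users / (ca_count * rate)

print(round(login_storm_seconds(2000), 1))  # ~28.6 s of signing for a 2,000-user storm
```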

Figure 116: True SSO Availability and Redundancy

To achieve high availability with two Enrollment Servers, it is recommended to co-host the Enrollment Server service and a Certificate Authority service on the same machine.

Table 120: Implementation Strategy for Enrollment Servers

Decision

Two Enrollment Servers were deployed in the same Microsoft Azure region as the Horizon Cloud pod.

These ran on dedicated Windows Server 2016 VMs.

These servers also had the Microsoft Certificate Authority service installed.

Justification

Having two servers satisfies the requirements of handling 2,000 sessions and provides high availability.

For information on how to install and configure True SSO, see Configure True SSO for Use with Your Horizon Cloud Environment. Also see Setting Up True SSO for Horizon Cloud Service on Microsoft Azure in Appendix B: VMware Horizon Configuration.

Component Design: App Volumes Architecture

The VMware App Volumes™ just-in-time application model separates IT-managed applications and application suites into administrator-defined application containers. App Volumes also introduces an entirely different container used for persisting user changes between sessions.

Figure 117: App Volumes Just-in-Time Application Model

This version of the App Volumes reference architecture was built using App Volumes 4. App Volumes 4 architecture is similar to that of the earlier App Volumes 2.x versions, but there are some notable differences in components, lifecycle management, and terminology. The App Volumes 4 Feature Review is an interactive demo that will help you quickly familiarize yourself with the new concepts. If you are planning to upgrade from App Volumes 2.x to 4, see VMware App Volumes 4 Installation and Upgrade Considerations to learn about the various upgrade paths available, including App Volumes 2.18 and App Volumes 4 co-existence.

App Volumes serves two functions. The first is delivery of software programs that are not in the master VM image for VDI and RDSH. App Volumes groups one or more programs into packages based on the requirements of each use case. A package is a virtual disk containing one or more programs that are captured together.

The packages are added to applications. Applications are used to assign packages to AD entities such as user, group, organizational unit (OU), or machine. The packages can be mounted each time the user logs in to a desktop, or at machine startup. For VDI use cases, packages can be mounted at login. With RDSH use cases, because packages are assigned to the machine account, the packages are mounted when the App Volumes service starts.

App Volumes also provides user-writable volumes, which can be used in specific use cases. Writable volumes provide a mechanism to capture user profile data, user-installed applications that are not or cannot be delivered by packages, or both. This reduces the likelihood that persistent desktops would be required for a use case. User profile data and user-installed applications follow the user as they connect to different virtual desktops.

Table 121: Implementation Strategy for App Volumes

Decision

App Volumes was deployed and integrated into the VMware Horizon® 7 on-premises environment.

This design was created for an environment capable of scaling to 8,000 concurrent user connections.

Justification

This strategy allows the design, deployment, and integration to be validated and documented.

Note: If you are new to App Volumes, VMware provides resources to help you familiarize yourself with the product.

For additional hands-on learning, consider the three-day course on implementing App Volumes and Dynamic Environment Manager on Horizon 7.

Architecture Overview

The App Volumes Agent is installed in the guest operating system of nonpersistent VMs. The agent communicates with the App Volumes Manager instances to determine package and writable volumes entitlements. Packages and writable volumes virtual disks are attached to the guest operating system in the VM, making applications and personalized settings available to end users.

Figure 118: App Volumes Logical Components

The components and features of App Volumes are described in the following table.

Table 122: App Volumes Components and Concepts

App Volumes Manager

  • Console for management of App Volumes, including configuration, creation of applications and packages, and assignment of packages and writable volumes
  • Broker for App Volumes Agent for the assignment of packages and writable volumes

App Volumes Agent

  • Runs on virtual desktops or RDSH servers
  • File system and registry abstraction layer running on the target system
  • Virtualizes file system writes as appropriate (when used with an optional writable volume)

Application

  • Logical component containing one or more packages
  • Used to assign AD entities to packages
  • Supports marker and package assignment types

Package

  • Read-only volume containing applications
  • Virtual disk file that attaches to deliver apps to VDI or RDSH
  • One or more packages may be assigned per user or machine

Program

  • Represents a piece of software captured in a package
  • One or more programs may be captured in a package

Marker

  • Attribute of an application used to designate the current package
  • Simplifies application lifecycle management tasks

Writable volume

  • Read-write volume that persists changes written in the session, including user-installed applications and user profile
  • One writable volume per user
  • Only available with user or group assignments
  • User writable volumes are not applicable to RDSH

Database

  • Microsoft SQL database that contains configuration information for applications, packages, writable volumes, and user entitlements
  • Should be highly available

Active Directory

  • Environment used to assign and entitle users to packages and writable volumes

VMware vCenter Server®

  • App Volumes uses vCenter Server to connect to resources within the VMware vSphere® environment
  • Manages vSphere hosts for attaching and detaching packages and writable volumes to target VMs

Packaging VMs

  • Clean Windows VM with App Volumes Agent
  • Used to capture software programs to packages for distribution

Storage group
(not shown)

  • Group of datastores used to replicate packages and distribute writable volumes

The following figure shows the high-level logical architecture of the App Volumes components, scaled out with multiple App Volumes Manager servers using a third-party load balancer.

Figure 119: App Volumes Logical Architecture

Key Design Considerations

  • Always use at least two App Volumes Manager servers, preferably configured behind a load balancer.
    Note: This setup requires a shared SQL Server.
  • An App Volumes instance is bounded by the SQL database.
  • Any kernel-mode applications should reside in the base image and not in a package.
  • Use storage groups (if you are not using VMware vSAN™) to aggregate load and IOPS.

    Note: Packages are very read intensive.

  • Storage groups may still be applicable to vSAN customers for replicating packages. See Multi-site Design Using Separate Databases for more information.
  • Place packages on storage that is optimized for read (100 percent read).
  • Place writable volumes on storage optimized for random IOPS (50/50 read/write).
  • Assign as few packages as possible per user or device. See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354) for the recommended number of packages per VM.

    Note: This KB article was written for App Volumes 2.x AppStacks. Although most of the content is applicable to App Volumes 4, the maximum number of package attachments tested has increased.

  • App Volumes 4 defaults to an optimized Machine Managers configuration. Use the default configuration and make changes only when necessary.

Figure 120: Default Machine Managers Configuration

Note: With previous versions of App Volumes, configuring the Mount ESXi option (mount on host) was recommended to reduce the load on vCenter Server and improve App Volumes performance. App Volumes 4 and later provide new optimizations in the communication with vCenter Server, so most implementations will no longer benefit from enabling the Mount ESXi option.

You can enable the Mount Local storage option in App Volumes to check local storage first and then check central storage. Packages are mounted faster if stored locally to the ESXi (vSphere) host. Place VMDKs on local storage and, as a safeguard, place duplicates of these VMDKs on central storage in case the vSphere host fails. Then the VMs can reboot on other hosts that have access to the centrally stored VMDKs.

If you choose to enable Mount ESXi or Mount Local, all vSphere hosts must have the same user credentials. Root-level access is not required. See Create a Custom vCenter Role for more information.

Network Ports for App Volumes

A detailed discussion of network requirements for App Volumes is outside of the scope of this guide. See Network connectivity requirements for VMware App Volumes.

See Network Ports in VMware Horizon 7 for a comprehensive list of port requirements for VMware Horizon®, App Volumes, and much more.

App Volumes in a Horizon 7 Environment

One key concept in a VMware Horizon® 7 environment design is the use of pods and blocks, which provides a repeatable and scalable approach. See the Horizon Pod and Block section of Component Design: Horizon 7 Architecture for more information on pod and block design.

Consider the Horizon 7 block design and scale when architecting App Volumes.

Table 123: Strategy for Deploying App Volumes in Horizon 7 Pods

Decision

An App Volumes Manager instance was deployed in each pod in each site.

The App Volumes machine manager was configured for communication with the vCenter Server in each resource block.

Justification

Standardizing on the pod and block approach simplifies the architecture and streamlines administration. 

In a production Horizon 7 environment, it is important to adhere to the best practices described in the following sections.

Scalability and Availability

As with all server workloads, it is strongly recommended that enterprises host App Volumes Manager servers as vSphere virtual machines. vSphere availability features such as cluster HA, VMware vSphere® Replication™, and VMware Site Recovery Manager™ can all complement App Volumes deployments and should be considered for a production deployment.

In production environments, avoid deploying only a single App Volumes Manager server. It is far better to deploy an enterprise-grade load balancer to manage multiple App Volumes Manager servers connected to a central, resilient SQL Server database instance.

As with all production workloads that run on vSphere, underlying host, cluster, network, and storage configurations should adhere to VMware best practices with regard to availability. See the vSphere Availability Guide for more information.

App Volumes Managers

App Volumes Managers are the primary point of management and configuration, and they broker volumes to agents. For a production environment, deploy at least two App Volumes Manager servers. App Volumes Manager is stateless—all of the data required by App Volumes is located in a SQL database. Deploying at least two App Volumes Manager servers ensures the availability of App Volumes services and distributes the user load.

For more information, see the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354).

Although two App Volumes Managers might support the 8,000-concurrent-user design, additional managers are necessary to accommodate periods of heavy concurrent usage, such as logon storms.

Table 124: Strategy for Scaling App Volumes

Decision

Four App Volumes Manager servers were deployed with a load balancer.

Justification

This strategy satisfies the requirements for load and provides redundancy.

Multiple-vCenter-Server Considerations

Configuring multiple vCenter Servers is a way to achieve scale for a large Horizon 7 pod, for multiple data centers, or for multiple sites.

With machine managers, you can use different credentials for each vCenter Server, but vSphere host names and datastore names must be unique across all vCenter Server environments. After you have enabled multi-vCenter-Server support in your environment, reverting to a single-vCenter-Server configuration is not recommended.

Note: In a multiple-vCenter-Server environment, a package is tied to storage that is available to each vCenter Server. It is possible that a package visible in App Volumes Manager could be assigned to a VM that does not have access to the storage. To avoid this issue, use storage groups to replicate packages across vCenter Servers. For instructions, see Configure Storage Groups.

Multiple-AD-Domain Considerations

App Volumes supports environments with multiple Active Directory domains, both with and without the need for trust types configured between them. See Configuring and Using Active Directory for more information.

An administrator can add multiple Active Directory domains through the Configuration > Active Directories tab in App Volumes Manager. An account with a minimum of read-only permissions for each domain is required. You must add each domain that will be accessed for App Volumes by any computer, group, or user object. In addition, non-domain-joined entities can be allowed by enabling the setting shown in the following figure.

Figure 121: Enabling Non-Domain-Joined Entities

vSphere Considerations

Host configurations have a significant impact on performance at scale. Consider all ESXi best practices during each phase of scale-out. To support optimal performance of packages and writable volumes, give special consideration to the following host storage elements:

  • Host storage policies
  • Storage network configuration
  • HBA or network adapter (NFS) configuration
  • Multipath configuration
  • Queue-depth configuration

For best results, follow the recommendations of the relevant storage partner when configuring hosts and clusters.

For more information, see the vSphere Hardening Guide.

Load Balancing

Use at least two App Volumes Managers in production and configure each App Volumes Agent to point to a load balancer, or use a DNS server that resolves to each App Volumes Manager in a round-robin fashion.

For high performance and availability, an external load balancer is required to balance connections between App Volumes Managers.

The main concern with App Volumes Managers is handling login storms. During the login process, user-based packages and writable volumes must be attached to the guest OS in the VMs. The greater the number of concurrent attachment operations, the more time it might take to get all users logged in.

For App Volumes 4, the exact number of users each App Volumes Manager can handle will vary, depending on the load and the specifics of each environment. See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354) for tested limits of users per App Volumes Manager server and login rates.

VMware recommends that you test the load and then size the number of App Volumes Manager servers appropriately. To size this design, we assumed each App Volumes Manager was able to handle 2,000 users.
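
That assumption reduces manager sizing to simple arithmetic, with a floor of two servers for availability. The sketch below uses the 2,000-users-per-manager assumption stated above; validate the figure with load testing in your own environment.

```python
import math

USERS_PER_MANAGER = 2000  # design assumption from the text; confirm by load testing

def managers_required(concurrent_users: int, minimum: int = 2) -> int:
    """At least two App Volumes Managers for availability, more as load grows."""
    return max(minimum, math.ceil(concurrent_users / USERS_PER_MANAGER))

print(managers_required(8000))  # 4 managers for the 8,000-user design
```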

Table 125: Strategy for App Volumes Scalability and Availability

Decision

A third-party load balancer was placed in front of the App Volumes Manager servers.

Justification

The load balancer properly distributes load and keeps the services available in the event of an issue with one of the managers.

The following figure shows how virtual desktops and RDSH-published applications can point to an internal load balancer that distributes the load to two App Volumes Managers.

Figure 122: App Volumes Managers Load Balancing

In the following list, the numbers correspond to numbers in the diagram.

  1. No additional configuration is required on the App Volumes Manager servers.
  2. Load balancing of App Volumes Managers should use the following settings:
  • Ports = 80, 443
  • Persistence or session stickiness = Hash all Cookies
  • Timeout = 6 minutes
  • Scheduling method = round robin
  • HTTP headers = X-Forwarded-For
  • Real server check = HTTP
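
If you script the load balancer configuration, the settings above can be captured as structured data and sanity-checked. The key names below are illustrative, not a specific vendor's API.

```python
APP_VOLUMES_VIRTUAL_SERVICE = {
    "ports": [80, 443],
    "persistence": "hash all cookies",   # session stickiness method
    "persistence_timeout_minutes": 6,
    "scheduling": "round robin",
    "insert_header": "X-Forwarded-For",  # pass the client's original source IP
    "health_check": "http",              # real-server check against each manager
}

def validate(cfg: dict) -> bool:
    """Check the configuration against the guidance above."""
    return (443 in cfg["ports"]
            and cfg["persistence_timeout_minutes"] >= 6
            and cfg["scheduling"] == "round robin"
            and cfg["health_check"] == "http")

assert validate(APP_VOLUMES_VIRTUAL_SERVICE)
```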

Database Design

App Volumes 4 uses a Microsoft SQL Server database to store configuration settings, assignments, and metadata. This database is a critical aspect of the design, and it must be accessible to all App Volumes Manager servers.

An App Volumes instance is defined by the SQL database. Multiple App Volumes Manager servers may be connected to a single SQL database.

For nonproduction App Volumes environments, you can use the Microsoft SQL Server Express database option, which is included in the App Volumes Manager installer. Do not use SQL Server Express for large-scale deployments or for production implementations.

App Volumes works well with both SQL Server failover cluster instances (FCI) and SQL Server Always On availability groups. Consult with your SQL DBA or architect to decide which option better fits your environment.

Table 126: Implementation Strategy for the SQL Server Database

Decision

A SQL database was placed on a highly available Microsoft SQL Server. This database server was installed on a Windows Server Failover Cluster, and an Always On availability group was used to provide high availability.

Justification

An Always On availability group achieves automatic failover. Both App Volumes Manager servers point to the availability group listener for the SQL Server.

Storage

A successful implementation of App Volumes requires several carefully considered design decisions with regard to disk volume size, storage IOPS, and storage replication.

Package and Writable Volume Template Placement

When new packages and writable volumes are deployed, predefined templates are used as the copy source. Administrators should place these templates on a centralized shared storage platform. As with all production shared storage objects, the template storage should be highly available, resilient, and recoverable. See Configuring Storage to get started.

Free-Space Considerations

Package sizing and writable volume sizing are critical for success in a production environment. Package volumes should be large enough to allow programs to be installed and should also allow for software updates. Packages should always have at least 20 percent free disk space available so administrators can easily update programs without having to resize the package volumes.
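As a quick arithmetic sketch of the 20 percent headroom guideline (the helper name is ours, not part of App Volumes), the minimum package size for a given application footprint can be computed as follows:

```python
import math

def min_package_size_gb(installed_gb, free_fraction=0.20):
    """Smallest whole-GB package size that leaves at least
    `free_fraction` of the volume free after installation:
    size * (1 - free_fraction) >= installed_gb."""
    return math.ceil(installed_gb / (1.0 - free_fraction))

# 12 GB of installed programs needs at least a 15-GB package volume
# to preserve 20 percent free space for future updates.
print(min_package_size_gb(12))  # 15
```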

Writable volumes should also be sufficiently sized to accommodate all users’ data. Storage platforms that allow for volume resizing are helpful if the total number of writable volume users is not known at the time of initial App Volumes deployment.

Because packages and writable volumes are thin-provisioned virtual disks on VMware vSphere® VMFS, the clustered Virtual Machine File System from VMware, storage space is not immediately consumed. Follow VMware best practices when managing thin-provisioned storage environments. Free-space monitoring is essential in large production environments.
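To illustrate the kind of free-space monitoring referred to above, a minimal threshold check might look like this sketch (the function name and the 25 percent warning threshold are assumptions, not VMware guidance):

```python
def needs_capacity_alert(capacity_gb, used_gb, warn_free_pct=25.0):
    """Return True when free space on a datastore drops below the
    warning threshold, expressed as a percentage of total capacity."""
    free_pct = 100.0 * (capacity_gb - used_gb) / capacity_gb
    return free_pct < warn_free_pct

# A 2 TB datastore with 1.6 TB consumed is at 20 percent free,
# which is below the example 25 percent warning threshold.
print(needs_capacity_alert(2048, 1638.4))  # True
```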

Writable Volumes Delay Creation Option

Two policy options can complicate free-space management for writable volumes:

  • The option to create writable volumes on the user’s next login means that storage processes and capacity allocation are impacted by user login behavior.
  • The option to restrict writable volume access (and thus initial creation) to a certain desktop or group of desktops can also mean that user login behavior dictates when a writable volume template is copied.

In a large App Volumes environment, it is not usually a good practice to allow user behavior to dictate storage operations and capacity allocation. For this reason, VMware recommends that you create writable volumes at the time of entitlement, rather than deferring creation.

Storage Groups

App Volumes uses a construct called storage groups. A storage group is a collection of datastores that are used to serve packages or distribute writable volumes.

The two types of storage groups are:

  • Package storage groups – Used for replication.
  • Writable volume storage groups – Used for distribution.

In App Volumes 4, the packages within a storage group can be replicated among its peers to ensure all packages are available. Having a common datastore presented to all hosts in all vCenter Servers allows packages to be replicated across vCenter Servers and datastores.

Two automation options for package storage groups are available:

  • Automatic replication – Any package placed on any datastore in the storage group is replicated across all datastores in the group.
  • Automatic import – After replication, the package is imported into App Volumes Manager and is available for assignment from all datastores in the storage group.

When using package storage groups, the App Volumes Manager manages the connection to the relevant package, based on location and number of attachments across all the datastores in the group.

Storage Groups for Scaling App Volumes

Once created, packages are read-only. As more and more users are entitled to and begin using a given package, the number of concurrent read operations increases. With enough users reading from a single package, performance can be negatively impacted. Performance can be improved by creating one or more copies of the package on additional datastores, and spreading user access across them.

Packages can be automatically replicated to multiple datastores in a storage group. This replication creates multiple copies of packages. Access is spread across the datastores, ensuring good performance as App Volumes scales to serve more end users. See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354) for the recommended number of concurrent attachments per package.

Storage Groups for Multi-site App Volumes Implementations

Storage groups can also be used to replicate packages from one site to another in multi-site App Volumes configurations. By using a non-attachable datastore available to hosts in each site, packages created at one site can be replicated to remote sites to serve local users.

A datastore configured as non-attachable is ignored by the App Volumes Manager while mounting volumes, and the storage can be used solely for replication of packages. This means you can use a datastore on slow or inexpensive storage for replication, and use high-speed, low-latency storage for storing mountable volumes.

This non-attachable datastore can also be used as a staging area for package creation before deploying to production storage groups. This topic is covered in more detail in the Multi-site Design Using Separate Databases section and in Appendix E: App Volumes Configuration.

Storage Groups for Writable Volumes

Writable volume storage groups are used to distribute volumes across datastores to ensure good performance as writable volumes are added. See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354) for the recommended number of writable volumes per datastore.

Table 127: Implementation Strategy for Storage Groups

Decision

Storage groups were set up to replicate packages between datastores.

An NFS datastore was used as a common datastore between the different vSphere clusters.

Justification

This strategy allows the packages to be automatically replicated between VMFS datastores, between vSAN datastores, and between vSphere clusters.

Packages

This section provides guidance about creating, sizing, scaling, provisioning, configuring, and updating packages.

Package Templates

By default, a single 20-GB package template is deployed in an App Volumes environment. This template is thin-provisioned and is provided in both VMDK and VHD formats. The template can be copied and customized, depending on how large the package needs to be for a given deployment scenario. For more information, see the VMware Knowledge Base article Creating a new App Volumes package template VMDK smaller than 20 GB (2116022).

Note: Although this KB article was written for AppStacks, it is applicable to App Volumes 4 packages as well.

If you have packages from a previous App Volumes 4 release, they will continue to work. However, additional features or fixes included in later versions are not applied to packages created with earlier versions.

Packages at Scale

The number of packages that can be attached to a given VM is technically limited by the maximum number of possible drive attachments in Windows and vSphere. For example, ESXi has a limit of 59 VMDKs plus 1 OS disk. See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354) for guidance. In practice, the number of packages attached to a VM will likely be considerably lower than these maximums.
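Using the limit cited above (59 data VMDKs in addition to the OS disk), a back-of-the-envelope calculation of the remaining attachment slots might look like this sketch (the helper name is hypothetical):

```python
# Per the text above, ESXi allows up to 59 VMDKs in addition to the OS disk.
ESXI_MAX_DATA_DISKS = 59

def remaining_package_slots(writable_volumes, other_disks=0):
    """Upper bound on package VMDKs still attachable to a VM, given
    writable volumes and other non-OS disks already in use. Practical
    limits are far lower; see KB 67354."""
    used = writable_volumes + other_disks
    return max(ESXI_MAX_DATA_DISKS - used, 0)

# A VM with one writable volume and one extra data disk could
# technically still attach up to 57 package VMDKs.
print(remaining_package_slots(1, 1))  # 57
```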

Attaching packages involves the following processes:

  • The disk mount (mounting the package VMDK to the VM)
  • The virtualization process applied to the content in the package (merging files and registry entries with the guest OS)

The time required to complete the virtualization process varies greatly, depending on the programs contained in a given package. The more packages that need to be attached, the longer this operation might take to complete.

Packages may be assigned to a number of Active Directory objects, which has implications for the timing and specifics of which volumes are attached. See Working with Applications.

Recommended Practices for Packages in Production Environments

The size of the default package template is 20 GB. The default writable volume template is 10 GB. In some environments, it might make sense to add larger or smaller templates. For information on creating multiple, custom-sized templates, see the VMware Knowledge Base article Creating a New App Volumes package template VMDK smaller than 20 GB (2116022).

App Volumes 4 includes enhancements to the agent compared with 2.x agents. The result is improved logon times and better performance with many packages attached. Although you can likely attach more App Volumes 4 packages than App Volumes 2.x AppStacks, we recommend keeping the total number of packages assigned to a given user or computer relatively small. This can be accomplished by adding multiple programs to each package. Group apps in such a way as to simplify distribution.

The following is a simple example for grouping App Volumes programs into packages:

  • Create a package containing core programs (apps that most or all users should receive). This package can be assigned to a large group or OU.
  • Create a package for departmental programs (apps limited to a department). This package can be assigned at a group or departmental level.

For traditional storage (VMFS, NFS, and so on):

  • Do not place packages and VMs on the same datastore.
  • Use storage groups for packages when packages are assigned to a large population of users or desktops. This helps to distribute the aggregated I/O load across multiple datastores, while keeping the assignments consistent and easy to manage.

For vSAN:

  • Packages and VMs can be placed on a single datastore.
  • Storage groups for packages are not applicable in a vSAN implementation.

See the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354).

Recommended Practices for Packaging Applications

Consider the following best practices when creating and packaging applications:

  • The following characters cannot be used when naming packages: & " ' < >
  • For the packaging VM, use a clean master image that resembles as closely as possible the target environment where the package is to be deployed. For example, the packaging VM and target should be at the same OS patch and service pack level and, if programs are included in the master image, they should also be in the packaging VM.
  • Do not use a packaging machine where you have previously installed and then uninstalled any of the programs that you will capture. Uninstalling a program might not clean up all remnants of the software, and the subsequent App Volumes package capture might not be complete.
  • Always take a snapshot of your packaging VM before packaging or attaching any packages to it. If any packages have been assigned to the VM, or if the VM has been used previously for packaging, revert that VM to the clean snapshot before creating a new package.

Recommended Practices for Updating and Assigning Updated Packages

App Volumes 4 introduced two assignment types, marker and package, to improve the administrative process of updating application packages. Using the CURRENT marker distributes the current package to your end-user population, whereas the package assignment type distributes test versions to a subset of users for validation.

Once a new package has been tested and approved, you can simply change the CURRENT marker to point to the new package. As end users log off their desktop sessions, the old version of the package is detached. When they log on again, the new version is attached.

The following illustration shows the 7-Zip application, which contains three packages with different versions of the software.

Portion of the Application Tab Detailing the Three Packages on the Right

Figure 123: Portion of the Application Tab Detailing the Three Packages on the Right

7-Zip 16.04 has the CURRENT marker, so it is distributed to the general population of end users. 7-Zip 19.0 is an updated package that contains a newer version of the 7-Zip program.

Portion of the Application Tab Detailing the Two Assignments on the Right

Figure 124: Portion of the Application Tab Detailing the Two Assignments on the Right

7-Zip 19.0 is using a package assignment type to directly assign that specific package to a group of test users for validation.

To learn more about assignment types, refer to Assign Application Package in App Volumes 4 Feature Review.

To initiate an update to programs in an existing package, use the App Volumes Manager console to invoke the update process. This process clones the original package for you to work with and apply updates. End users continue to work from the original package to prevent user downtime. The new package with the updated programs is distributed by simply moving the CURRENT marker once it has been approved.

Consider the following best practices when updating and assigning updated packages:

  • When creating and updating packages, use the Stage drop-down list to select an appropriate value. This makes it easy for you and other App Volumes admins to manage the lifecycle of the package.

    Stage dropdown list

  • Use the marker assignment type to simplify updates for your general population of users.
  • Use the Unset CURRENT option to disable delivery of a package without modifying assignments.
  • Use the package assignment type for one-off, explicit assignments of a specific version.
  • Note: If both package and marker assignments are made, the package assignment is used.
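The precedence between marker and package assignments can be illustrated with a small model. This is illustrative only and is not the App Volumes API:

```python
def resolve_package(marker, direct_assignments, user):
    """Pick the package version a user receives. Per App Volumes 4
    behavior, an explicit package assignment wins over the CURRENT
    marker when both exist for the same user."""
    return direct_assignments.get(user, marker)

current = "7-Zip 16.04"                   # package holding the CURRENT marker
direct = {"test-user": "7-Zip 19.0"}      # explicit package assignment

print(resolve_package(current, direct, "test-user"))  # 7-Zip 19.0
print(resolve_package(current, direct, "alice"))      # 7-Zip 16.04

# Promote the update by moving the marker; the general population
# picks up the new version at next logon.
current = "7-Zip 19.0"
print(resolve_package(current, {}, "alice"))          # 7-Zip 19.0
```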

Horizon Integration

Although not required, App Volumes is often implemented in a Horizon environment. Consider the following when integrating App Volumes and Horizon.

  • Do not attempt to include the Horizon Agent in an App Volumes package. The Horizon Agent should be installed in the master image.
  • Do not use a Horizon VM (guest OS with Horizon Agent installed) as a clean packaging VM. You must uninstall the Horizon Agent if it is present. Dependencies previously installed by the Horizon Agent, such as Microsoft side-by-side (SxS) shared libraries, are not reinstalled, and therefore are not captured by the App Volumes packaging process.
  • See Installation order of End User Computing Agents for Horizon View, Dynamic Environment Manager, and App Volumes (2118048) for information on agent installation order.

Performance Testing for Packages

Test packages immediately after packaging to determine their overall performance. Using a performance analytics tool, such as VMware vRealize® Operations Manager™, gather virtual machine, host, network, and storage performance information for use when packages are operated on a larger scale. Do not neglect user feedback, which can be extremely useful for assessing the overall performance of an application.

Because App Volumes provides an application container and brokerage service, storage performance is very important in a production environment. Packages are read-only. Depending on utilization patterns, the underlying shared storage platform might have significant read I/O activity. Consider using flash and hybrid-flash storage technologies for packages.

This evaluation can be time-consuming for the administrator, but it is necessary for any desktop-transformation technology or initiative.

ThinApp Integration with Packages

Network latency is often the limiting factor for scalability and performance when deploying ThinApp packages in streaming mode. Yet ThinApp provides exceptional application-isolation capabilities. With App Volumes, administrators can present ThinApp packages as dynamically attached applications that are located on storage rather than as bits that must traverse the data center over the network.

Using App Volumes to deliver ThinApp packages removes network latency due to Windows OS and environmental conditions. It also allows for the best of both worlds—real-time delivery of isolated and troublesome applications alongside other applications delivered on packages.

With App Volumes in a virtual desktop infrastructure, enterprises can take advantage of local deployment mode for ThinApp packages. ThinApp virtual applications can be provisioned inside a package using all the storage options available for use with packages. This architecture permits thousands of virtual desktops to share a common ThinApp package through packages without the need to stream or copy the package locally.

Microsoft Office Application Packages

For deploying Microsoft Office applications through App Volumes, see the VMware Knowledge Base article VMware App Volumes 2.x with Microsoft Office Products (2146035).

Office Plug-Ins and Add-Ons

The most straightforward method is to include Microsoft Office plug-ins or add-ons in the same package as the Microsoft Office installation.

However, if necessary, you can include plug-ins or add-ons in packages that are separate from the packages that contain the Microsoft applications to which they apply. Before packaging the plug-in or add-on, install the primary application natively in the OS of the packaging VM.

Note: Ensure the plug-in or add-on is at the same version as the Microsoft Office application in the package. This includes any patches or updates.

Recommended Practices for Installing Office

VMware recommends that you install core Microsoft Office applications in the base virtual desktop image, and create one package for non-core Microsoft Office applications, such as Visio, Project, or Visio and Project together.

To create the package with Visio and Project, use a packaging machine with the same core Microsoft Office applications as on the base image. After the package is created, you can assign the package to only the users who require these non-core Microsoft Office applications.

RDSH Integration with Packages

App Volumes supports package integration with Microsoft RDSH-published desktops and published applications. Packages are assigned to RDSH servers rather than directly to users. Packages are attached to the RDSH server when the machine is powered on and the App Volumes service starts. Users are then entitled to the RDSH-published desktops or applications through the Horizon 7 entitlement process.

Note: Writable volumes are not supported with RDSH assignments.

Consider associating packages at the OU level in Active Directory, rather than to individual computer objects. This practice reduces the number of package entitlements and ensures packages are always available as new hosts are created and existing hosts are refreshed.

Entitling packages to an OU where Horizon 7 instant-clone RDSH server farms are provisioned ensures that all hosts are configured exactly alike, which supports dynamic growth of farms with minimal administrative effort.

Create dedicated packages for RDSH servers. Do not reuse a package that was originally created for a desktop OS.

When creating the package, install programs on a packaging machine that has the same operating system as that used on the deployed RDSH servers. Before installing software, switch the RDSH server to RD-Install mode. For more information, see Learn How To Install Applications on an RD Session Host Server.

See Infrastructure and Networking Requirements to verify that the Windows Server version you want to use for RDSH is supported for the App Volumes Agent.

For information about using App Volumes in a Citrix XenApp shared-application environment, see Implementation Considerations for VMware App Volumes in a Citrix XenApp Environment.

Application Suitability for Packages

Most Windows applications, including those with services and drivers, work well with App Volumes and require little to no additional interaction. If you need an application to continue to run after the user logs out, it is best to install that application natively on the desktop or in the desktop image.

The following sections briefly describe situations and application types where App Volumes might need special attention to work properly or where the application would work best when installed in the master image, rather than in a package.

Applications That Work Best in the Master Image

Applications that should be available to the OS in the event that a package or writable volume is not present should remain in the master image and not in an App Volumes virtual disk. These types of applications include antivirus, Windows updates, and OS and product activations, among others. Applications that should be available to the OS when no user is logged in should also be placed in the master image.

Similarly, applications that integrate tightly with the OS should not be virtualized in a package. If these apps are removed from the OS in real time, they can cause issues with the OS. Again, if the application needs to be present when the user is logged out, it must be in the master image and not in a package. Applications that start at boot time or need to perform an action before a user is completely logged in, such as firewalls, antivirus, and Microsoft Internet Explorer, fall into this category.

Applications that use the user profile as part of the application installation should not be virtualized in a package. App Volumes does not capture the user profile space (C:\users\<username>). If, as part of its installation process, an application places components into this space, those components are not recorded during the packaging process, and the application might misbehave or fail when delivered from a package.

Applications Whose Components Are Not Well Understood

In the rare event that an issue with an application does present itself, it is important to have a thorough understanding of how the application functions. Understanding the processes that are spawned, file and registry interactions, and where files are created and stored is useful troubleshooting information.

App Volumes is a delivery mechanism for applications. It is important to understand that App Volumes does an intelligent recording of an installation during the packaging process and then delivers that installation. If the installation is not accurate or is configured incorrectly, the delivery of that application will also be incorrect (“garbage in, garbage out”). It is important to verify and test the installation process to ensure a consistent and reliable App Volumes delivery.

App Volumes Agent Altitude and Interaction with Other Mini-Filter Drivers

The App Volumes Agent is a mini-filter driver. Microsoft assigns each filter driver an altitude; the larger the number, the “higher” the driver sits in the I/O stack. An I/O request travels down the stack from the highest altitude to the lowest, so a mini-filter driver sees only the requests that higher-altitude drivers have passed down to it, and the actions of lower-altitude drivers are not visible to those above them.

The higher-altitude mini-filter drivers are therefore the first to interact with a request from the OS or other applications. Generally speaking, after a driver finishes processing a request, the request is handed to the next mini-filter driver down the stack (the next lower number). However, this is not always the case, because a mini-filter driver can choose not to pass the request on and instead complete (“close”) it.

When a request is completed in this way, the remaining mini-filter drivers never see it. If this happens in a driver running at a higher altitude than App Volumes, the App Volumes mini-filter driver never gets a chance to process the request, and so cannot virtualize the I/O as expected.

This is the primary reason that certain applications that use a mini-filter driver should be disabled or removed from the OS while you install applications with App Volumes. In some scenarios, the App Volumes Agent itself should be disabled to allow other applications to install correctly in the base OS.
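As a toy model (not Windows internals) of why a conflicting filter driver can hide I/O from the App Volumes Agent, consider a chain in which any driver may complete a request and stop it from propagating further:

```python
def process_request(drivers, request):
    """Pass an I/O request through a chain of filter drivers in
    processing order. A driver that completes ("closes") the request
    stops propagation, so drivers later in the chain never see it.
    Returns the names of the drivers that saw the request."""
    seen = []
    for name, completes in drivers:
        seen.append(name)
        if completes(request):
            break  # request completed here; later drivers never see it
    return seen

# A conflicting filter ahead of App Volumes that completes every
# request hides the I/O from the App Volumes mini-filter driver:
chain = [("conflicting-filter", lambda r: True),
         ("app-volumes", lambda r: False)]
print(process_request(chain, "open C:\\app\\config.ini"))  # ['conflicting-filter']
```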

Other Special Considerations

The following guidelines will also help you determine whether an application requires special handling during the virtualization process or whether virtualization in a package is even possible:

  • Additional application virtualization technologies – Other application virtualization technologies (Microsoft App-V, ThinApp, and others) should be disabled during packaging because the filter drivers could potentially conflict and cause inconsistent results in the packaging process.
  • Mixing of 32- and 64-bit OS types – The OS type (32- or 64-bit) of the machine that the package is attached to should match the OS type that applications were packaged on. Mixing of application types in App Volumes environments follows the same rules as Windows application types—that is, if a 32-bit application is certified to run in a 64-bit environment, then App Volumes supports that configuration also.
  • Exceptional applications – Some software apps just do not work when installed on an App Volumes package. There is no list of such applications, but an administrator might discover an issue where an application simply does not work with App Volumes.

In summary, most applications work well with App Volumes, with little to no additional interaction needed. However, you can save time and effort by identifying potential problems early, by looking at the application type and use case before deciding to create a package.

Writable Volumes

Writable volumes can be used to persist a variety of data as users roam between nonpersistent desktop sessions. As is described in App Volumes 2.14 Technical What’s New Overview, Outlook OST and Windows Search Index files are automatically redirected to writable volumes, improving search times for customers using these technologies. See Working with Writable Volumes for information on creating and managing writable volumes.

Writable volumes are often complemented by VMware Dynamic Environment Manager™ to provide a comprehensive profile management solution. For technical details on using App Volumes with Dynamic Environment Manager, see the VMware blog post VMware User Environment Manager with VMware App Volumes.

Note the key differences between packages and writable volumes:

  • Package VMDKs are mounted as read-only and can be shared among all desktop VMs within the data center.
  • Writable volumes are dedicated to individual users and are mounted as the user authenticates to the desktop. Writable volumes are user-centric and roam with the user for nonpersistent desktops.

Writable Volume Templates

Several writable volume templates are available to suit different use cases. See Configuring Storage for options.

The UIA (user-installed applications)-only template provides persistence for user-installed applications. After a writable volume with the UIA-only template is created and assigned to a user, that user can install and configure applications as they normally would. The installation is automatically redirected to the writable volume, and persisted between desktop sessions.

Note: For this functionality to work properly, users require account permissions in Windows that allow application installation. You may also use Dynamic Environment Manager Privilege Elevation to complement UIA-only writable volumes.

Table 128: Implementation Strategy for App Volumes Writable Volumes

Decision

Writable volumes were created for and assigned to end users who required the ability to install their own applications.

The UIA-only writable volume template was used.

Justification

Writable volumes provide added flexibility for end users who are permitted to install software outside of the IT-delivered set of applications.

The UIA-only template ensures application installation and configuration data is stored, while profile data is managed using other technologies.

If a writable volume becomes corrupt, applications can be reinstalled without the risk of data loss.

Performance Testing for Writable Volumes

Writable volumes are read-write. Storage utilization patterns are largely influenced by user behavior with regard to desktop logins and logouts, user-installed applications, and changes to local user profiles. Group each set of similar users into use cases, and evaluate performance based on both peak and average use.

Additional Writable Volumes Operations

See the section Next Steps: Additional Configuration Options for Writable Volumes of Appendix E: App Volumes Configuration.

Multi-site Design Using Separate Databases

VMware recommends designing multi-site implementations using a separate-databases (multi-instance) model. This option uses a separate SQL Server database at each site, is simple to implement, and allows for easy scaling if you have more than two sites. Additionally, latency or bandwidth restrictions between sites have little impact on the design.

In this model, each site works independently, with its own set of App Volumes Managers and its own database instance. During an outage, the remaining site can provide access to packages with no intervention required. For detailed information on the failover steps required and the order in which they need to be executed, refer to Failover with Separate Databases.

App Volumes Multi-site Separate-Databases Option

Figure 125: App Volumes Multi-site Separate-Databases Option

This strategy makes use of the following components:

  • App Volumes Managers – At least two App Volumes Manager servers are used in each site for local redundancy and scalability.
  • Load balancers – Each site has its own namespace for the local App Volumes Manager servers. This is generally a local load balancer virtual IP that targets the individual managers.
    Note: The App Volumes Agent, which is installed in virtual desktops and RDSH servers, must be configured to use the appropriate local namespace.
  • Separate databases – A separate database is used for each site; that is, you have a separate Windows Server Failover Clustering (WSFC) cluster and an SQL Server Always On availability group listener for each site, to achieve automatic failover within a site.
  • vCenter Server machine managers – The App Volumes Manager servers at each site point to the local database instance and have machine managers registered only for the vCenter Servers from their own site.
  • Storage groups – Storage groups containing a common, non-attachable datastore can be used to automatically replicate packages from one site to the other. This common datastore must be visible to at least one vSphere host from each site.
    Note: In some environments, network design might prevent the use of storage group replication between sites. See Copying an AppStack to another App Volumes Manager instance for more information about manually copying AppStacks and packages.
  • Entitlement replication – To make user-based entitlements for packages available between sites, you can reproduce entitlements at each site, either manually or by using the App Volumes Entitlements Sync tool, provided on the VMware Flings site. See Appendix E: App Volumes Configuration. Manually reproducing entitlements can be streamlined by entitling packages to groups and OUs, rather than to individuals.

Table 129: Strategy for Deploying App Volumes in Multiple Sites

Decision

App Volumes was set up in the second site used for Horizon 7 (on-premises).

A separate database and App Volumes instance deployment option was used.

An NFS datastore was used as a common datastore among the storage groups to facilitate cross-site package replication.

Justification

This strategy provides App Volumes capabilities in the second site.

The separate-databases option is the most resilient, provides true redundancy, and can also scale to more than two sites.

With packages replicated between sites, the packages are available for use at both locations.

Configuration with Separate Databases

When installing and configuring the App Volumes Managers in a setup like this, each site uses a standard SQL Server installation.

  1. Install the first App Volumes Manager in Site 1. If using Always On availability groups to provide a local highly available database, use the local availability group listener for Site 1 when configuring the ODBC connection.
    Important: For step-by-step instructions on this process, see Appendix E: App Volumes Configuration.
  2. Complete the App Volumes Manager wizard and add the vCenter Servers for Site 1 as machine managers, including mapping their corresponding datastores.
  3. Continue with installing the subsequent App Volumes Managers for Site 1. Add them as targets to the local load balancer virtual IP.
  4. Repeat steps 1–3 for Site 2 so that the App Volumes Managers in Site 2 point to the local availability group listener for Site 2, and register the local vCenter Servers for Site 2 as machine managers.
  5. For details on setting up storage groups for replicating packages from site to site, see the Recovery Service Integration section of Service Integration Design.
  6. Replicate package entitlements between sites, as described in Appendix E: App Volumes Configuration.
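
The ODBC connection from step 1 can also be created with the built-in PowerShell Add-OdbcDsn cmdlet. This is a sketch under assumptions: the listener name avsql-s1-listener.example.com, the database name app_volumes, and the driver name are placeholders for your environment.

```powershell
# Illustrative only: a system DSN pointing at the Site 1 availability
# group listener, so App Volumes Manager fails over within the site.
Add-OdbcDsn -Name "AppVolumes" -DsnType System -Platform "64-bit" `
    -DriverName "ODBC Driver 17 for SQL Server" `
    -SetPropertyValue @("Server=avsql-s1-listener.example.com", "Database=app_volumes")
```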

With this design, the following is achieved: 

  • Packages are made available in both sites.
  • Packages are replicated from site to site through storage groups defined in App Volumes Manager and through the use of a common datastore that is configured as non-attachable.
  • User-based entitlements for packages are replicated between sites.
  • A writable volume is normally active in one site for a given user.
  • Writable volumes can be replicated from site to site using processes such as array-based replication. An import operation might be required on the opposite site. The order and details for these steps are outlined in Failover with Separate Databases.
  • Entitlements for writable volumes are available between sites.

Failover with Separate Databases

In this model, each site works independently, with its own set of App Volumes Managers and its own database instance. During an outage, the remaining site can provide access to packages with no intervention required.

  • The packages have previously been copied between sites using non-attachable datastores that are members of both sites’ storage groups.
  • The entitlements to the packages have previously been reproduced, either manually or through an automated process.

In use cases where writable volumes are being used, there are a few additional steps:

  1. Mount the replicated datastore that contains the writable volumes.
  2. Perform a rescan of that datastore. If the datastore was the default writable volume location, App Volumes Manager automatically picks up the user entitlements after the old assignment information has been cleaned up.
  3. (Optional) If the datastore is not the default writable volume location, perform an Import Writable Volumes operation from the App Volumes Manager at Site 2.

All writable volume assignments are restored, now pointing to the new, valid location.

Recommended Practices for Production Environments

The following recommendations pertain to master image optimization, desktop pool setup, and security best practices.

Master Image Optimization

Master images should be optimized for VDI or RDSH to ensure the best possible performance in a virtualized environment. Consider using the instructions in Creating an Optimized Windows Image for a VMware Horizon Virtual Desktop when building your master images. That guide references the VMware OS Optimization Tool, which helps optimize Windows desktop and server operating systems for use with VMware Horizon.

The OS Optimization Tool includes customizable templates to enable or disable Windows system services and features, per VMware recommendations and best practices, across multiple systems. Because most Windows system services are enabled by default, the OS Optimization Tool can be used to easily disable unnecessary services and features to improve performance.

Desktop Pool Recommendations

When setting up desktop pools, consider the following best practices:

  • When reverting a desktop VM that is running the App Volumes Agent to a previous snapshot, make sure that the VM is gracefully shut down, to avoid synchronization issues. This is primarily relevant to the packaging desktop and master VMs for Horizon linked- and instant-clone desktop pools.
  • If you are using a Horizon desktop pool, the App Volumes Agent should be installed on the master VM for linked- and instant-clone pools, or on the VM template for full-clone pools, for ease of distribution.
  • If using a Horizon 7 linked-clone pool, make sure the Delete or Refresh machine on logoff policy in Desktop Pool Settings is set to Refresh Immediately. This policy ensures that the VMs stay consistent across logins.

App Volumes Security Recommendations

To support a large production environment, there are some important security configurations that administrators should put in place:

  • Open only essential, required firewall ports on App Volumes Manager and SQL. Consider using an advanced firewall solution, such as VMware NSX®, to dynamically assign virtual machine firewall policies based on server role.
  • Replace the default self-signed TLS/SSL certificate with a certificate for App Volumes Manager signed by a reliable certificate authority. See Replacing the Self-Signed Certificate in VMware App Volumes 2.12.
  • Verify that App Volumes Manager can accept the vCenter Server certificate. App Volumes Manager communicates with vCenter Server over TLS/SSL. The App Volumes Manager server must trust the vCenter Server certificate.

    Note: It is possible to configure App Volumes Manager to accept an unverifiable (self-signed) certificate. Navigate to Configuration > Machine Managers in the management console. Each machine manager (vCenter Server) has a Certificate option that shows the current status of the certificate and allows an administrator to explicitly accept an unverifiable certificate.

  • Consider using ThinApp to package applications, to take advantage of the security benefits of isolation modes, when required. Each ThinApp package can be isolated from the host system, and any changes, deletions, or additions made by the application to the file system or registry are recorded in the ThinApp sandbox instead of in the desktop operating system.
  • For writable volumes, determine which end users require ongoing administrator privileges. Writable volumes with user-installed applications require that each desktop user be assigned local computer administrator privileges to allow the installation and configuration of applications.

    Some use cases could benefit from temporary, request-based elevated privileges to allow incidental application installation for a specific user or user group. Carefully consider the security risks associated with granting users these elevated privileges.

  • Create and use an AD user service account specifically for App Volumes. This is good security hygiene and a forensics best practice. It is never a good idea to use a blanket or general-purpose AD user account for multiple purposes within AD.
  • Consider creating an administrative role in vCenter Server to apply to the App Volumes service account.

Installation and Initial Configuration

Installation prerequisites are covered in more detail in the System Requirements section of the VMware App Volumes Installation Guide. The following table lists the versions used in this reference architecture.

Table 130: App Volumes Components and Version
Component Requirement

Hypervisor

VMware vSphere 6.7.0 Update 3

Virtual Center

VMware vCenter 6.7.0 Update 3

App Volumes Manager

Windows Server 2016

Active Directory

2016 Functional Level

SQL Server

SQL Server 2016

OS for App Volumes Agent

Windows 10 and Windows Server 2016

Refer to the VMware App Volumes Installation Guide for installation procedures. This document outlines the initial setup and configuration process.

After installation is complete, you must perform the following tasks to start using App Volumes:

  • Complete the App Volumes Initial Configuration Wizard (https://avmanager).
  • Install the App Volumes Agent on one or more clients and point the agent to the App Volumes Manager address (load-balanced address).
  • Select a clean packaging system and create an application package. See Working with Applications and Working with Packages in the VMware App Volumes Administration Guide for instructions.
  • Assign the package to a test user and verify it is connecting properly.
  • Assign a writable volume to a test user and verify it is connecting properly.

Component Design: Dynamic Environment Manager Architecture

VMware Dynamic Environment Manager™ (formerly called User Environment Manager) provides profile management by capturing user settings for the operating system and applications. Unlike traditional application profile management solutions, Dynamic Environment Manager does not manage the entire profile. Instead it captures settings that the administrator specifies. This reduces login and logout time because less data needs to be loaded. The settings can be dynamically applied when a user launches an application, making the login process more asynchronous. User data is managed through folder redirection.

Dynamic Environment Manager

Figure 126: Dynamic Environment Manager

Note: VMware App Volumes™ applications are not currently supported on VMware Horizon® Cloud Service™ on Microsoft Azure.

Dynamic Environment Manager Components and Key Features

Dynamic Environment Manager is a Windows-based application that consists of the following components.

Table 131: Dynamic Environment Manager Components
Component Description

Active Directory Group Policy

  • Mechanism for configuring Dynamic Environment Manager.
  • ADMX template files are provided with the product.

NoAD mode XML file

An alternative to using Active Directory Group Policy for configuring Dynamic Environment Manager. With NoAD mode, you do not need to create a GPO, write logon and logoff scripts, or configure Windows Group Policy settings.

IT configuration share

  • A central share (SMB) on a file server, which can be a replicated share (DFS-R) for multi-site scenarios, as long as the path to the share is the same for all client devices.
  • Is read-only to users.
  • If using DFS-R, it must be configured as hub and spoke. Multi-master replication is not supported.

Profile archive share

  • File shares (SMB) to store the users’ profile archives and profile archive backups.
  • Is used for read and write by end users.
  • For best performance, place archives on a share near the computer where the Dynamic Environment Manager FlexEngine (desktop agent) runs.

FlexEngine

The Dynamic Environment Manager agent that resides on the virtual desktop or RDSH server VM being managed.

Flex configuration file

Files that contain data describing how a given application or Windows setting is stored in the registry or file system. FlexEngine uses these Flex configuration files to read and store user settings.

Application Profiler

Utility that creates a Dynamic Environment Manager Flex configuration file from an application by determining where the application stores configuration data in the registry and file system. Dynamic Environment Manager can manage settings for applications that have a valid Flex configuration file in the configuration share.

Helpdesk Support Tool

  • Allows support personnel to reset or restore user settings.
  • Enables administrators to open or edit profile archives.
  • Allows analysis of profile archive sizes.
  • Includes a log file viewer.

Self-Support

Optional self-service tool that allows users to manage and restore their own configuration settings for an application or environment setting.

SyncTool

Optional component designed to support physical PCs working offline or in limited-bandwidth scenarios.

DirectFlex

Feature that provides the option to import application settings at application startup rather than at user login.

The following figure shows how these components interact.

Dynamic Environment Manager Logical Architecture

Figure 127: Dynamic Environment Manager Logical Architecture

See Network Ports in VMware Horizon 7 for a comprehensive list of port requirements for VMware Horizon®, Dynamic Environment Manager, and more. For the ports that SMB uses, see Server Message Block. For the ports required by GPOs, see the Microsoft article Configure Firewall Port Requirements for Group Policy.

Additional Features and Capabilities

In addition to the main components described above, Dynamic Environment Manager also includes the following features that are helpful for optimizing Windows settings, improving login time, personalizing the Start menu, troubleshooting, and more.

 

Table 132: Additional Dynamic Environment Manager Features
Component Description

Windows common settings

A number of Windows Common Settings config file templates are included with Dynamic Environment Manager to improve management of Windows 10. Examples include file-type associations, personal certificates, and Windows Explorer and view settings. These config files can be added using the Easy Start configuration and the Config File Creation wizard.

Login time

With the release of Windows 10, the time it takes to log in for the first time on a device has increased dramatically. In Windows 8, Microsoft introduced Metro apps, which were later called Modern, Store, or Universal Windows Platform (UWP) apps. These apps are installed in the user profile on first login, and they have a negative impact on login time.

The easiest way to improve the login time is to use the VMware Operating System Optimization Tool for the virtual desktop infrastructure (VDI) master image. This tool removes many of the UWP apps, which are typically not used or missed in an enterprise environment. This tool also applies other optimizations that have a positive impact on the overall performance and speed.

For more information about building an optimized Windows 10 image, see Creating an Optimized Windows Image for a VMware Horizon Virtual Desktop.

To further improve login times, consider enabling DirectFlex whenever possible.

Note: The current recommendation is to avoid using mandatory profiles, and instead use local profiles. The behavior of mandatory profiles is unpredictable and has the potential to break when Windows is updated. Also, the login time with an optimized local profile is roughly the same as that for a mandatory profile.

Start menu

With the release of Windows 10 feature update 1703, the Start menu is stored in the registry, making it easier to roam the user’s personalized Start menu.

To customize the Start menu before first use, you can create a default Start menu layout for all users. The default Windows 10 Start menu may display tiles for UWP apps that are not relevant. Even after removing any unwanted UWP apps from the VDI master image using the VMware OS Optimization Tool, the default Start menu may still show unnecessary tiles. The best solution for most circumstances is to define a new default Start menu for your users in the following way:

  1. Start a clean Windows 10 VM, log in with a test user, and make all layout changes to your Start menu. This will become your default layout for all users.
  2. Export the Start menu layout with this PowerShell command that requires elevated permissions:

    Export-StartLayout -Path C:\StartLayout.xml

  3. Finally, import the layout into the Default User account by using this PowerShell command:

    Import-StartLayout -LayoutPath C:\StartLayout.xml -MountPath $env:SystemDrive\

All users logging in will receive this default Start menu.

Customize the Start menu

To make sure that any customization by users is captured and roams with the Dynamic Environment Manager profile, do the following:

  1. Add the following exclusion to the Dynamic Environment Manager configuration file called Windows Explorer.

    This exclusion prevents the Start menu icons from appearing blank or disappearing.

    [ExcludeIndividualRegistryValues]
    HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\StartMenuInit

  2. Create a new Dynamic Environment Manager configuration file called Windows 10 Start Menu to roam the Start menu.

    This allows Dynamic Environment Manager to export and import the Start menu layout at logoff and logon. Use the Config File Creation wizard to add the template.

FTAs and Protocols

Dynamic Environment Manager supports file-type associations (FTA) and protocols. The required configuration for managing FTAs and protocols can be achieved by adding the default applications config file after upgrading.

This captures the personal FTA and protocol preferences from users and allows those settings to roam between sessions.

You can prevent the message How do you want to open this file? from appearing by enabling the Group Policy setting Do not show the ‘new application installed’ notification. This Group Policy can be configured at Computer Configuration > Policies > Administrative Templates > Windows Components > File Explorer.
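
If you prefer to apply this policy outside the Group Policy editor, the setting maps to a registry value. The value name below is believed correct for current Windows 10 builds, but verify it against your build before relying on it.

```powershell
# Illustrative registry equivalent of the 'new application installed'
# notification policy described above (machine-wide).
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Explorer" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Explorer" `
    -Name "NoNewAppAlert" -Type DWord -Value 1
```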

Dynamic Environment Manager also supports the central management of FTAs on Windows 10. Managing file types and protocols works the same as in previous versions. Within the Dynamic Environment Manager Management Console, go to the User Environment tab and use the File Type Associations feature.

For more detail, see Managing File Type Associations (FTA) natively using Dynamic Environment Manager.

Microsoft Edge Browser

For the Edge browser, a ready-made Dynamic Environment Manager configuration file is included in the Easy Start configuration and the Config File Creation wizard.

Internet Explorer 11 Browser

With the introduction of Internet Explorer 10, Microsoft decided to store all browsing information (cookies, history, and so on) in a central database in the user profile, called WebCache.

For the Internet Explorer - WebCache, a Dynamic Environment Manager configuration file has been included in the Easy Start configuration and the Config File Creation wizard.

Dynamic Environment Manager uses compression, so the size of the database is optimized. However, managing this WebCache still adds to login and logout times because of the large file size. For this reason, the template for roaming the Internet Explorer – WebCache is included in the Easy Start configuration but is disabled by default.

 

Table 133: Implementation Strategy for Dynamic Environment Manager

Decision

Dynamic Environment Manager was implemented to support both VMware Horizon® 7 and VMware Horizon® Cloud Service™ environments.

Justification

Dynamic Environment Manager enables configuration of IT settings such as Horizon Smart Policies, predefined application settings, and privilege elevation rules, while providing user personalization for Windows and applications.

Applied across all types of Horizon environments, this strategy provides consistency and a persistent experience for the users.

DirectFlex

DirectFlex improves usage efficiency. If you configure an application for DirectFlex, the application’s settings are read when the user starts the application rather than when the user logs in to the operating system. Changes to application settings are written back to profile archives when the user exits the application instead of when the user logs out of the operating system.

FlexEngine, which is the Dynamic Environment Manager agent running in the virtual desktop or RDSH server, starts when a user logs in from a client device, and it runs until the user logs out. When a user logs in, the Active Directory GPO or NoAD.xml file configures FlexEngine. FlexEngine starts at login, imports user environment settings from the configuration share, and imports personalization settings (for those applications not configured with DirectFlex) from the profile archive share.

Once logged in, when a user opens an application, FlexEngine uses DirectFlex to dynamically load and apply the related configurations such as personalization and predefined application settings. When the user closes the application, FlexEngine uses DirectFlex to copy the changes back to the user profile archive share. When the user logs out, FlexEngine writes the remaining Windows personalization back to the user profile archive share. The following figure illustrates this process.

Typical Workflow of FlexEngine

Figure 128: Typical Workflow of FlexEngine

If an IT administrator makes changes while a user is logged in, the changes are not applied until the next time the user logs in to a session. Changes made by the user are applied to the current session and the following sessions.

Without DirectFlex, all settings are read during the login process and written back during the logout process. For example, a user could have 10 applications on the desktop but use only 2 applications in one session. If DirectFlex is not enabled, settings for all 10 applications are loaded, which can slow down the login and logout process if there are many settings.

Take the following guidelines into consideration when enabling DirectFlex:

  • To enable DirectFlex, FlexEngine must be configured to run at login. See GPO Mandatory Settings.
  • Do not enable DirectFlex for configuration files containing Windows settings such as the wallpaper, keyboard, and regional settings. These settings must always be processed during login and logout.
  • Best practice is to not enable DirectFlex for applications that act as middleware and use many plug-ins, such as Microsoft Office and Internet browsers.
Table 134: Implementation Strategy for DirectFlex

Decision

DirectFlex was enabled for appropriate application configuration files.

Justification

DirectFlex improves login and logout times by reducing the amount of application configuration data copied from and to the profile archive share.

Flex Configuration Files for Applications

Dynamic Environment Manager provides you, the administrator, with granular control over which parts of the user profile are managed. Given this design approach, you must specify which applications and settings will be managed. Flex configuration files are imported or created for each application you want to manage with Dynamic Environment Manager.

A number of Flex configuration files, or templates, for common Windows settings and applications such as Microsoft Office are included when you install the Dynamic Environment Manager Management Console. Additional templates can be downloaded from the VMware Marketplace. See Deploying Templates from the Marketplace for a walk-through of this feature.

The Dynamic Environment Manager community is routinely updated with new templates you can download and customize as needed.

An included utility called the Application Profiler can be used to create your own Flex configuration files and predefined settings templates. For more information, see Introduction to VMware Dynamic Environment Manager Application Profiler. Also see the Profiling Applications operational tutorial for advanced profiling techniques.

Triggers

Dynamic Environment Manager relies on triggers to invoke a variety of actions. Events such as login, logout, application start, and application exit are triggers used by FlexEngine to dynamically import and export Windows and application settings as they are needed. Several additional triggers, such as workstation lock, session reconnect, and All AppStacks attached, are available and may be used to create triggered tasks.

Note: The All AppStacks attached trigger functions in the same way for both App Volumes 2 AppStacks and App Volumes 4 packages.

Triggered Task Settings

Figure 129: Triggered Task Settings

Triggered tasks consist of a trigger and an action to perform. For example, creating a triggered task that uses the trigger Session reconnect with the action DirectFlex refresh will cause DirectFlex settings to be refreshed when a Horizon user connects to a previously disconnected virtual desktop or application session.

A variety of user environment settings such as file-type associations, drive mappings, and printer mappings can be refreshed using the User environment refresh trigger. Combining triggers with conditions supports advanced capabilities such as location-aware printing.

For example, when a user connects to a Horizon session from an endpoint in building A, Dynamic Environment Manager can map the appropriate printers based on the endpoint device’s IP range. If the user disconnects, moves to building B, and reconnects to the same Horizon session, Dynamic Environment Manager can dynamically map the new printers based on the new location of the endpoint device.

See Configure Triggered Tasks for more information.

SyncTool for Offline Scenarios

SyncTool lets you use Dynamic Environment Manager when Windows computers are working offline or have unreliable or slow WAN connections. SyncTool is not suitable for VDI and RDSH users.

SyncTool synchronizes the Dynamic Environment Manager configuration share and the personal archives to a local cache folder, so the user can always log in, even when the WAN connection is unreliable or unavailable. SyncTool is completely configurable and can generate detailed log files that provide troubleshooting assistance for IT.

You can limit network traffic by configuring SyncTool to replicate data only at specified intervals.

The following figure shows the SyncTool architecture and how the components work together.

SyncTool Architecture

Figure 130: SyncTool Architecture

See the VMware Dynamic Environment Manager SyncTool Administration Guide for more information.

User Profile Strategy

A Windows user profile is made up of multiple components, including profile folders, user data, and the user registry. See About User Profiles for more information about Windows user profiles.

There are a number of user profile types, such as local, roaming, and mandatory. Dynamic Environment Manager complements each user profile type, providing a consistent user experience as end users roam from device to device. Dynamic Environment Manager is best-suited to run long-term with local and mandatory profile types. See Dynamic Environment Manager Scenario Considerations for more information and considerations when using roaming profiles.

Folder redirection can be used to abstract user data from the guest OS, and can be configured through GPO or using the Dynamic Environment Manager user environment settings.

See Folder Redirection for folder redirection options and recommended practices.

Dynamic Environment Manager User Profile Strategy

Figure 131: Dynamic Environment Manager User Profile Strategy

Table 135: User Profile Strategy with Dynamic Environment Manager

Decision

Local user profiles and folder redirection were used in this reference architecture. A local user profile is automatically created when a user logs on to the Windows VM.

Justification

With local user profiles, a user can modify their desktop during a session. Dynamic Environment Manager persists those Windows and application customizations you specified with Flex configuration files. Extraneous changes are simply discarded when the VM is refreshed.

Note: Previous versions of this reference architecture recommended the use of mandatory user profiles to improve logon times. In our testing, mandatory profiles do not work reliably with Windows 10 version 1809 and later. By following the process outlined in Creating an Optimized Windows Image for a VMware Horizon Virtual Desktop, you can achieve comparable logon times with local profiles.

Optimized local profiles are strongly recommended with Dynamic Environment Manager. If you must use mandatory profiles, see the blog post VMware User Environment Manager, Part 2: Complementing Mandatory Profiles with VMware User Environment Manager to learn more about how Dynamic Environment Manager can help.

Infrastructure

Dynamic Environment Manager requires little infrastructure. AD GPOs are used to specify Dynamic Environment Manager settings, and SMB shares are used to host the configuration data and profile data. Administrators use the Dynamic Environment Manager Management Console to configure settings.

Dynamic Environment Manager Infrastructure

Figure 132: Dynamic Environment Manager Infrastructure

FlexEngine Agent Configuration

The FlexEngine agent must be installed on any physical, virtual, or cloud-hosted Windows device you wish to manage with Dynamic Environment Manager. See Installing VMware Dynamic Environment Manager for information about manual and automated installation options.

Once FlexEngine is installed, it must be enabled and configured. You accomplish this either by creating a GPO using the provided ADMX templates or by creating an XML-based configuration file for use with NoAD mode.

NoAD Mode for FlexEngine

NoAD mode enables configuration of FlexEngine with no dependency on Active Directory. You do not need to create a GPO or logon and logoff scripts. For organizations where Active Directory is not available or GPO configuration is highly restricted, NoAD mode may be the better choice. For proof-of-concept or test environments, NoAD mode may enable you to make changes faster than going through formal AD change control processes.

To use NoAD mode, FlexEngine must be installed using the NOADCONFIGFILEPATH property on the MSI installer. See Install FlexEngine in NoAD mode for details. If FlexEngine is installed for NoAD mode, any previous GPO-based deployment settings are ignored.

Be sure to configure your Dynamic Environment Manager configuration share before installing the FlexEngine agent. You must specify the path to the configuration share as part of the NoAD-mode installation process.
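
The NoAD-mode installation described above can be sketched as a single command line. The MSI filename and share path are placeholders; check the installation guide for the exact property names supported by your Dynamic Environment Manager version.

```powershell
# Illustrative only: install FlexEngine in NoAD mode, pointing at the
# read-only configuration share. Omitting NOADCONFIGFILEPATH installs
# the agent for GPO-based configuration instead.
msiexec /i "VMware Dynamic Environment Manager Enterprise x64.msi" /qn `
    NOADCONFIGFILEPATH=\\fileserver\DEMConfig
```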

Note: To disable NoAD mode, uninstall FlexEngine, and reinstall it without the NOADCONFIGFILEPATH MSI property.

If you use the Import Image wizard from the Azure Marketplace with Horizon Cloud Service on Microsoft Azure, the FlexEngine agent is automatically installed for use with GPOs. To use NoAD mode instead, you must reinstall the agent.

Table 136: Strategy for Configuring Dynamic Environment Manager Settings

Decision

Active Directory Group Policy is chosen over NoAD mode.

Justification

This provides the flexibility to apply different user environment configuration settings for different users. An ADMX template is provided to streamline configuration.

See Group Policy Settings for Dynamic Environment Manager for additional information on Group Policy configuration options.

Scalability and Availability

The Dynamic Environment Manager architecture does not rely on dedicated servers or a database. The primary infrastructure components are the configuration share and profile archive share, which should be hosted on SMB shares. This simple architecture makes scaling Dynamic Environment Manager easy and has enabled numerous companies to manage production environments with more than 100,000 devices without running into scale limits.

Server Sizing

A general recommendation is to use Windows file servers for the SMB shares because they have proven to be faster and more reliable than SMB implementations from SAN and NAS devices. Use the latest Windows version for the best SMB performance. Do not use a Windows version earlier than Windows Server 2012, which introduced SMB 3.0.

Ensure file servers have sufficient resources. A single Windows file server can scale to support 10,000 Dynamic Environment Manager users. We recommend four CPUs and 16 GB RAM for a dedicated Windows file server supporting 10,000 users.

A single, dedicated Windows file server could suffice for a target of 8,000 users but would create a large fault domain. Consider deploying multiple smaller file servers instead to reduce the fault domain and recover more easily from hardware failures. Internal testing has shown that a single Windows file server with four vCPUs and 10 GB RAM can provide excellent performance for 2,000 users.
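As a rough planning aid, the sizing figures above can be expressed as a small calculation. This is a minimal sketch, not VMware tooling: the function name and the fault-domain cap parameter are illustrative, and the 10,000-users-per-server figure is the rule of thumb from this section.

```python
import math

# Rule of thumb from this section (assumption, validate for your workload):
# one dedicated Windows file server (4 CPUs, 16 GB RAM) supports ~10,000 users.
MAX_USERS_PER_SERVER = 10_000

def file_servers_needed(total_users, users_per_fault_domain=MAX_USERS_PER_SERVER):
    """Estimate file server count; a lower cap per server shrinks the fault domain."""
    capacity = min(users_per_fault_domain, MAX_USERS_PER_SERVER)
    return math.ceil(total_users / capacity)

# 8,000 users fit on one server, but that is a single large fault domain:
print(file_servers_needed(8_000))                                # 1
# Capping each server at 2,000 users spreads the risk across four servers:
print(file_servers_needed(8_000, users_per_fault_domain=2_000))  # 4
```

Spreading the same population over more servers trades hardware count for a smaller blast radius when any single server fails.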

In addition to allocating enough CPU and RAM, make sure the Windows file servers have access to a performant disk subsystem. Dynamic Environment Manager will be reading from and writing to the file servers throughout the user session. The faster these operations are, the better the user experience will be. Consider placing the configuration share on storage optimized for read operations. Place the profile archive share on storage optimized for reads and writes.

Configuration Share Sizing and Recommended Practices

The configuration share is accessed during login and logout and during startup and shutdown of DirectFlex-enabled applications. Because Dynamic Environment Manager reads only small bits of configuration data as needed, bandwidth consumption on the configuration share is low. Actual bandwidth utilization varies with the number of configuration elements, such as Flex configuration files and predefined settings. Keeping the configuration share on servers near the user desktops ensures the best performance and logon times.

While your capacity requirements may vary, 1 GB per 5,000 users is sufficient for typical deployments.

Profile Archive Share Sizing and Recommended Practices

The profile archive share contains personal settings for all users stored as ZIP files. A unique subfolder is created for each user. The personal user settings are read from this share at login or application startup, and are written back at logout or application exit. To ensure the best performance, place this folder in the same data center or network location as the users. Configuring FlexEngine to use the correct folder can be achieved by using multiple GPOs, for instance, a separate GPO per Active Directory site or per organizational unit (OU).

Consider the following best practices when creating the profile archive share:

  • Configure Dynamic Environment Manager to store all user profile archives, profile archive backups, and log files in the same share.
  • Use a dedicated share and not the home drive.

The size of the profile archive folder per user depends on the following:

  • Number of application and Windows settings used for personalization – When an application is configured for personalization, registry settings, INI files, or other repositories are used to capture configuration data, which is stored on the profile archive share. The amount of configuration data stored for most applications is very small. The following are examples from our testing environment:
    • VLC Media Player: 30 KB
    • Notepad++: 145 KB
    • Mozilla Firefox: 1.14 MB
  • Number of backups configured – When user settings change for an application configured for personalization, a backup copy of the old profile archive ZIP file is created. This backup can be used to restore user configuration to an earlier state. Maintaining several backup copies provides more flexibility if you need to restore settings, but maintaining more copies consumes more space. We recommend maintaining five backup copies.
  • Types of applications – Applications store configuration data in various ways, including as registry keys, INI files, and databases, and may use a combination of options. It is important to thoroughly test applications you profile to ensure only the necessary configuration data is being persisted on the profile archive share. Keeping profile archives small not only reduces capacity consumption on the share but reduces the amount of data transferred between the virtual desktop or RDSH server and the file server. See Combating Profile Archive Growth in the Dynamic Environment Manager Application Profiler operational tutorial for an example and guidance. 

While your capacity requirements may vary, 100 MB per user is sufficient for typical deployments.
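The two rules of thumb in this chapter (1 GB of configuration share capacity per 5,000 users, and 100 MB of profile archive share capacity per user, which covers the archive ZIP files, the recommended five backup copies, and log files) can be combined into a quick capacity estimate. This Python sketch is illustrative only; the function names and defaults are assumptions, not product guidance beyond the stated rules of thumb.

```python
import math

def config_share_gb(users):
    """Configuration share: ~1 GB per 5,000 users (rule of thumb)."""
    return math.ceil(users / 5_000)

def profile_archive_share_gb(users, mb_per_user=100):
    """Profile archive share: ~100 MB per user, covering profile archive
    ZIP files, the recommended five backup copies, and FlexEngine logs."""
    return math.ceil(users * mb_per_user / 1024)

# Example: an 8,000-user deployment.
print(config_share_gb(8_000))           # 2 (GB)
print(profile_archive_share_gb(8_000))  # 782 (GB)
```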

Disaster Recovery

The next section, Multi-site Design, describes how Microsoft DFS was used to create highly available SMB shares, which can be failed over in the case of a disaster. Alternatively, Windows failover clustering may be used to create a highly available file server cluster. See High Availability with Windows Failover Clustering.

Because Dynamic Environment Manager uses the existing file servers and domain controllers, ensure that those servers are highly available and that a disaster recovery plan is in place.

We recommend integrating the Management Console into an existing disaster recovery plan. You can install the Management Console on as many computers as required. If the Management Console is not available after a system failure, you can install it on a new management server or administrator workstation and point that installation to the Dynamic Environment Manager configuration share.

Multi-site Design

Dynamic Environment Manager data consists of the following types. This data is typically stored on separate shares and can be treated differently to achieve high availability: 

  • IT configuration data – IT-defined settings that give predefined configuration for the user environment or applications
    Note: A Dynamic Environment Manager instance is defined by the configuration data share.
  • Profile archive (user settings and configuration data) – The individual end user’s customization or configuration settings

It is possible to have multiple sets of shares to divide the user population into groups. This can provide separation, distribute load, and give more options for recovery. By creating multiple Dynamic Environment Manager configuration shares, you create multiple environments. You can use a central installation of the Management Console to switch between these environments and to export and import settings between environments. You can also use Dynamic Environment Manager group policies to target policy settings to specific groups of users, such as users within a particular Active Directory OU. See Multiple Environments for example configuration options when using multiple environments.

To meet the requirements of having Dynamic Environment Manager IT configuration data and user settings data available across two sites, this design uses Distributed File System Namespace (DFS-N) for mapping the file shares. 

Although we used DFS-N, you are not required to use DFS-N. Many different types of storage replication and common namespaces can be used. The same design rules apply.

Configuration Share 

For configuration file shares, having multiple file server copies active at the same time with DFS-N is fully supported because end users have read-only permissions to these shares, so write conflicts cannot occur.

There are two typical models for the layout of the configuration share.

  • Centralized configuration share – Designing a multi-site Dynamic Environment Manager instance using a centralized configuration share streamlines administration for centralized IT. Changes to the configuration share are made to a primary copy, which is then replicated to one or more remote sites.
  • Separate configuration share at each site – Another option is to implement multiple Dynamic Environment Manager sites by creating a configuration share at each site. This model supports decentralized IT, as IT admins at each site can deploy and manage their own Dynamic Environment Manager instances.

Note: Only administrators should have permissions to make changes to the content of the IT configuration share. To avoid conflicts, have all administrators use the same file server for all the writes, connecting using the server URL rather than with DFS-N.

IT Configuration Share – Supported DFS Topology

Figure 133: IT Configuration Share – Supported DFS Topology

Table 137: Strategy for Managing Configuration Shares

Decision

The configuration shares were replicated to at least one server in each site using DFS-R.

Each server was enabled with DFS-N to allow each server to be used as a read target.

Justification

This strategy provides replication of the configuration data and availability in the event of a server or site outage.

Aligned with Active Directory sites, this can also direct usage to the local copy to minimize cross-site traffic.

This strategy provides centralized administration for multiple sites, while configuration data is read from a local copy of the configuration share.

Profile Archive Shares 

For user settings file shares, DFS-N is supported and can be used to create a unified namespace across sites. Because the content of these shares will be read from and written to by end users, it is important that the namespace links have only one active target. Configuring the namespace links with multiple active targets can result in data corruption. See the Microsoft KB article Microsoft’s Support Statement Around Replicated User Profile Data for more information.

Configuring the namespace links with one active and one or more inactive (passive) targets provides you the ability to quickly, albeit manually, fail over to a remote site in case of an outage.

Profile Archive Shares – Supported DFS Topology

Figure 134: Profile Archive Shares – Supported DFS Topology

Switching to another file server in the event of an outage requires a few simple manual steps:

  1. If possible, verify that data replication from the active folder target to the desired folder target is complete.
  2. Disable the DFS-N referral status for the active folder target.
  3. Enable the DFS-N referral status on the desired folder target.
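Steps 2 and 3 can be scripted with the DFSN PowerShell module (available on Windows Server 2012 and later). This is a sketch only: the namespace path and folder target paths below are hypothetical placeholders for your environment.

```powershell
# Sketch only: paths are placeholders for your DFS namespace and file servers.

# Step 2: disable referrals to the folder target in the failed site.
Set-DfsnFolderTarget -Path '\\corp.example.com\DEM\ProfileArchives' `
    -TargetPath '\\fs01-siteA\ProfileArchives' -State Offline

# Step 3: enable referrals to the replicated folder target in the surviving site.
Set-DfsnFolderTarget -Path '\\corp.example.com\DEM\ProfileArchives' `
    -TargetPath '\\fs01-siteB\ProfileArchives' -State Online
```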

Profile Archive Shares – Failover State

Figure 135: Profile Archive Shares – Failover State

Table 138: Strategy for Managing Profile Archive Shares

Decision

The profile archive shares were replicated to at least one server in each site using DFS-R.

DFS-N was configured, but only one server was set as an active referral target. The rest were set as disabled targets.

Justification

This strategy provides replication of the profile archive data and availability in the event of a server or site outage.

A disabled target can be enabled in the event of a server or site outage to provide access to the data.

User configuration data is accessed or modified on a local copy of the profile archive share, ensuring good performance for end users.


Recommended Practices for Deployment and Management

Installation is a straightforward process, as outlined in the next section. After installation, be sure to follow the recommendations for initial configuration and ongoing management, as described in the sections that follow.

Installation

You can install and configure Dynamic Environment Manager in a few easy steps:

  1. Create SMB file shares for configuration data and user data.
  2. Import ADMX templates for Dynamic Environment Manager.
  3. Create Group Policy settings for Dynamic Environment Manager.

    Note: When applying the GPO settings to computer objects, use loopback processing. For more information, see Circle Back to Loopback.

  4. Install the FlexEngine agent on the virtual desktop or RDSH server VMs to be managed.
  • If you manually create a master VM, install the FlexEngine agent according to the VMware Dynamic Environment Manager documentation.
  • If you use the Import Image wizard to import from the Azure Marketplace, the FlexEngine agent is automatically installed when the image is created.
    The installation directory defaults to C:\Program Files\VMware\Horizon Agents\User Environment Manager.
  5. Install the Dynamic Environment Manager Management Console and point it to the configuration share.

Refer to Installing and Configuring Dynamic Environment Manager for detailed installation procedures. Also see the Quick-Start Tutorial for User Environment Manager. In this reference architecture, we used Dynamic Environment Manager 9.11.

Initial Configuration

When implementing Dynamic Environment Manager, consider the following best practices:

  • To optimize logon speed and the user experience, use DirectFlex as much as possible. Application Profiler enables DirectFlex by default for all created Dynamic Environment Manager configuration files. Do not enable DirectFlex for applications that act as middleware and use many plug-ins, such as Microsoft Office and Internet browsers.
  • Using the provided ADMX template to create a GPO that configures FlexEngine is recommended. If you use a GPO to configure FlexEngine, do not use a logon script to start FlexEngine at logon. Rather, enable the Run FlexEngine as Group Policy client-side extension GPO setting to start FlexEngine at logon. Using the Group Policy client-side extension optimizes logon times and supports more Windows settings.
  • When using the Group Policy client-side extension, ensure that the extension runs during each logon by enabling the Always wait for the network at computer startup and logon Group Policy setting.