Systems and topology overview#

HCL™ Launch comprises several systems, including a server and one or more agents. You can configure various topologies to meet your needs, including topologies that use high availability, disaster recovery, and the blueprint designer.

The following sections show several system topologies, with diagrams. For an explanation of the system components, see Description of systems.

Core topology#

The core installation of HCL Launch includes a server, agents, and a license server. Clients access the server through web browsers, the REST API, or the command-line client. Agents can be installed on cloud environments, virtual machines (VMs), containers, or physical systems; the following diagrams show agents on cloud systems, but you can install them on many other types of systems.

With this topology, the server can create environments on clouds that use virtual system patterns (VSPs), such as IBM Cloud Orchestrator and IBM PureApplication® System. To create environments on other clouds such as Amazon Web Services, SoftLayer®, VMware vCenter, and Microsoft™ Azure, you must install the blueprint design server and at least one engine, as described in the blueprint design topologies.

A simple topology that consists of the server, agents, a license server, a cloud, and the interfaces to the server, including web browsers, the command-line client, and the REST API

To install the core components, see the installation documentation.

Multi-region topology#

If your environment has multiple security zones that are separated by firewalls, you can use agent relays to connect agents to the server through those firewalls. For example, if your HCL Launch server is inside a firewall but your target environments are outside it, the agents on those target environments cannot connect directly to the server. In this case, you install an agent relay outside the firewall, in the same zone as the agents. The agents connect to the relay, and only the relay needs to communicate with the server through the firewall, as shown in the following diagram.

A topology that includes an agent relay; the relay allows agents to communicate with the server through firewalls

To install and configure an agent relay, see Installing agent relays.
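Before you configure the relay, it can help to confirm that each hop in the path is reachable at the TCP level. The following sketch is purely illustrative; the host names and port numbers are placeholders, not HCL Launch defaults, so substitute the values from your own installation.

```python
# Illustrative only: check TCP reachability along the agent -> relay -> server path.
# Host names and ports are placeholders; substitute the values from your installation.
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run the first check from an agent host, the second from the relay host.
checks = [
    ("relay.example.com", 20080),           # agent -> agent relay (placeholder port)
    ("launch-server.example.com", 7918),    # agent relay -> server (placeholder port)
]

for host, port in checks:
    status = "reachable" if can_connect(host, port) else "NOT reachable"
    print(f"{host}:{port} is {status}")
```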

High-availability clustered topologies#

High-availability topologies use multiple servers. These servers can all run at the same time to share the load (as in a clustered topology), or they can wait for another server to fail (as in a cold standby topology). The following diagram shows a clustered topology in which a load balancer distributes connections to three servers. Users connect to the load balancer, which routes them to an active server. Agents connect to the load balancer over HTTP and HTTPS, but they connect directly to the servers over JMS. The servers share a database and a network file system.

A clustered high-availability topology, in which most communication to the multiple servers goes through a load balancer

To configure a cluster of servers, see Setting up clusters of servers.
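The key point of this topology is that browser, REST, and agent HTTP(S) traffic is addressed to the load balancer, while agent JMS connections bypass it and go to every server directly. The following sketch is a toy round-robin TCP forwarder that only illustrates the load balancer's role for HTTP(S) traffic; it is not a production load balancer, and the host names and ports are hypothetical placeholders.

```python
# Illustrative only: a toy round-robin TCP forwarder that mimics what the load
# balancer does for browser, REST, and agent HTTP(S) traffic. Agent JMS traffic
# goes directly to each server and is not represented here.
import asyncio
import itertools

# Back-end HCL Launch servers (placeholders for your own cluster members).
BACKENDS = [
    ("launch-server-1.example.com", 8443),
    ("launch-server-2.example.com", 8443),
    ("launch-server-3.example.com", 8443),
]
backend_cycle = itertools.cycle(BACKENDS)

async def pipe(reader, writer):
    """Copy bytes from reader to writer until end of stream."""
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    """Forward one incoming connection to the next back end in round-robin order."""
    host, port = next(backend_cycle)
    backend_reader, backend_writer = await asyncio.open_connection(host, port)
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8443)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```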

You can also cluster blueprint design servers and engines. In this case, you install one or more blueprint design servers and engines and configure each of them to access the same database and shared file system. As with the server cluster, a load balancer distributes traffic to the blueprint design servers and engines. The following diagram shows a clustered topology with three engines and three blueprint design servers that connect to one or more non-OpenStack clouds:

A clustered high-availability topology with multiple blueprint design servers and engines

The topology for OpenStack-based clouds is different because the Heat engines are installed through the OpenStack server, not through HCL Launch:

A clustered high-availability topology with multiple blueprint design servers that connect to OpenStack-based clouds

To configure a cluster of blueprint design servers and engines, see Setting up clusters of blueprint design servers and Setting up clusters of engines.

Disaster recovery topologies#

One way to prepare for disaster recovery is to have a cold standby system, including a stopped server and a replicated copy of the database and file system. The following diagram shows a simple topology with a cold standby server and related resources on standby.

A topology that includes a cold standby system

To configure a cold standby system for the server, see Adding cold standby servers.
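Because a cold standby server stays stopped until the primary fails, the only moving part outside HCL Launch is whatever notices the failure. The following sketch is a hypothetical watchdog, not part of HCL Launch; the host name, port, check interval, and notification step are all placeholders for your own monitoring and runbook.

```python
# Illustrative only: a hypothetical watchdog that notices when the primary server
# stops responding so that an operator (or automation) can start the cold standby.
# The host name and port are placeholders; the "notify" step is just a print call.
import socket
import time

PRIMARY = ("launch-server.example.com", 8443)  # placeholder address
CHECK_INTERVAL = 30        # seconds between checks
FAILURES_BEFORE_ALERT = 3  # require consecutive failures to avoid false alarms

def is_up(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
while True:
    if is_up(*PRIMARY):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURES_BEFORE_ALERT:
            # Placeholder notification: in practice, page an operator or trigger
            # the runbook that starts the standby server against the replicated
            # database and file system.
            print("Primary server is unreachable; start the cold standby.")
    time.sleep(CHECK_INTERVAL)
```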

You can also configure a disaster recovery system for the blueprint design server and engine. The following diagram shows a disaster recovery topology for the blueprint design server and Heat engine where the cold standby servers are located in a different data center. In this diagram, the shared systems and services should be configured for high availability.

A topology that includes a disaster recovery system for the blueprint design server and the Heat engine

To configure disaster recovery for the blueprint designer, see Configuring disaster recovery for the blueprint design server.

Default ports#

The following diagram summarizes the default port numbers that HCL Launch uses for communication. Most of these ports can be changed at installation time, so the diagram shows only the defaults.

A topology that shows the ports that each part of HCL Launch uses for communication

For more information about ports, see System requirements and performance considerations.

Description of systems#

Parent topic: Overview of HCL Launch