Cluster Installation Patterns


FastX is designed to fit a variety of network configurations. The following are common patterns for setting up a FastX cluster.

Homogeneous Cluster

Use Case

All systems in the cluster are identical. Users have login access to every system and can launch sessions on any of them. Typical use is a lab environment where all users are on a LAN.

Set Up

COMPUTE_1, COMPUTE_2, … COMPUTE_N — Systems where users will log in and run sessions

Installation Instructions

  • for COMPUTE_1 … COMPUTE_N
    Install the webserver, user, manager, launcher, xorg
  • Point Client to any COMPUTE node.
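The loop above can be sketched as a small shell script. Here `fastx-install` is a hypothetical placeholder for whatever installer command your FastX version provides, and the node names are examples:

```shell
#!/bin/sh
# Homogeneous cluster: every node gets the full component set.
# "fastx-install" is a hypothetical placeholder; substitute the real
# installer command for your FastX version. Node names are examples.
COMPONENTS="webserver user manager launcher xorg"

plan_install() {
    # Print the plan for each node; in production this would run
    # remotely instead, e.g.:  ssh "$node" fastx-install $COMPONENTS
    for node in "$@"; do
        echo "$node: install $COMPONENTS"
    done
}

plan_install compute1 compute2 compute3
```

Because every node is identical, clients can be pointed at any node (or at a round-robin DNS name covering all of them).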

Gateway Cluster

Use Case

All clients connect to a WEBSERVER (head) node. All users log in and run sessions on COMPUTE nodes. This configuration creates a gateway: the WEBSERVER nodes have no user accounts, allowing centralized client access.

Set Up

WEBSERVER_1, WEBSERVER_2, … WEBSERVER_N — Systems where clients will connect. No user logins on this system
COMPUTE_1, COMPUTE_2, … COMPUTE_N — Systems where users will log in and run sessions

Installation Instructions

  • for WEBSERVER_1 … WEBSERVER_N
    Install the webserver
  • for COMPUTE_1 … COMPUTE_N
    Install the user, manager, launcher, xorg
  • Point Client to any WEBSERVER node.
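The two-role split above can be sketched as a shell function that maps each node to its component set. `fastx-install` is again a hypothetical placeholder, and the hostname prefixes are assumptions for illustration:

```shell
#!/bin/sh
# Gateway cluster: web-facing nodes get only the webserver; compute
# nodes get everything else. "fastx-install" is hypothetical and the
# "web*"/"compute*" naming convention is an assumption.
install_role() {
    node="$1"
    case "$node" in
        web*)     components="webserver" ;;
        compute*) components="user manager launcher xorg" ;;
        *)        components="" ;;
    esac
    # In production, e.g.:  ssh "$node" fastx-install $components
    echo "$node: install $components"
}

for node in web1 web2 compute1 compute2; do
    install_role "$node"
done
```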

Three Tier Architecture

Use Case

A variation of the classic three-tier architecture: Client, Server, and Database.

Configuration moves to a central location on a dedicated set of servers, where the MANAGER nodes act as a de facto database. You may want to install the MANAGER nodes on the same systems as the TRANSPORTER nodes.

Users have login access to all COMPUTE nodes. Typical use is a lab environment where all users are on a LAN.

Set Up

COMPUTE_1, COMPUTE_2, … COMPUTE_N — Systems where users will log in and run sessions
MANAGER_1, MANAGER_2, … MANAGER_N — Systems to store configuration and database info

Installation Instructions

  • for COMPUTE_1, … COMPUTE_N
    Install the user, launcher, xorg
  • for MANAGER_1, … MANAGER_N
    Install webserver, manager
  • Point Client to any MANAGER node
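The three-tier split can be sketched the same way, with MANAGER nodes carrying the webserver and configuration store and COMPUTE nodes carrying the session components. `fastx-install` and the hostname prefixes remain hypothetical:

```shell
#!/bin/sh
# Three-tier layout: MANAGER nodes hold the webserver and configuration
# store; COMPUTE nodes run user sessions. "fastx-install" is a
# hypothetical placeholder; the "manager*"/"compute*" names are examples.
install_role() {
    case "$1" in
        manager*) components="webserver manager" ;;
        compute*) components="user launcher xorg" ;;
        *)        components="" ;;
    esac
    # In production, e.g.:  ssh "$1" fastx-install $components
    echo "$1: install $components"
}

for node in manager1 manager2 compute1 compute2; do
    install_role "$node"
done
```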

Standard Distributed, Fault Tolerant Cluster

Use Case

For administrators who require full microservice architecture. Allows for maximum scalability, fault tolerance, and redundancy. Useful when components are installed as containers.

Set Up

WEBSERVER_1, WEBSERVER_2, … WEBSERVER_N — Systems where clients will connect
MANAGER_1, MANAGER_2, … MANAGER_N — Systems to store configuration info
COMPUTE_1, COMPUTE_2, … COMPUTE_N — Systems where users will log in and run sessions

Installation Instructions

  • for WEBSERVER_1, … WEBSERVER_N
    Install the webserver
  • for MANAGER_1, … MANAGER_N
    Install the manager
  • for COMPUTE_1, … COMPUTE_N
    Install user, launcher, xorg
  • Point Client to any WEBSERVER node
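With three separate tiers, a data-driven role table keeps the mapping in one place; in a containerized deployment each role would typically become its own image. The role and node names below are illustrative assumptions:

```shell
#!/bin/sh
# Distributed, fault-tolerant cluster: one component set per tier.
# Each entry is "role:node"; in a container setup each role would
# usually be a separate image. All names are illustrative.
role_components() {
    case "$1" in
        webserver) echo "webserver" ;;
        manager)   echo "manager" ;;
        compute)   echo "user launcher xorg" ;;
    esac
}

for spec in webserver:web1 webserver:web2 manager:mgr1 compute:node1; do
    role="${spec%%:*}"
    node="${spec#*:}"
    echo "$node: install $(role_components "$role")"
done
```

Because each tier scales independently, adding capacity is a matter of adding nodes to one role without touching the others.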

Advanced Distributed, Fault Tolerant Cluster

Use Case

For administrators who require full microservice architecture. Allows for maximum scalability, fault tolerance, and redundancy. Useful when components are installed as containers.

Note: Only separate the USER and COMPUTE nodes if absolutely necessary. In most cases you should use the Standard Distributed, Fault Tolerant Cluster pattern instead.

This use case applies when the COMPUTE nodes do not have user accounts and all sessions run under the same username. This configuration adds extra security considerations and should only be used if you fully understand the implications.

Set Up

WEBSERVER_1, WEBSERVER_2, … WEBSERVER_N — Systems where clients will connect
MANAGER_1, MANAGER_2, … MANAGER_N — Systems to store configuration info
USER_1, USER_2, … USER_N — Systems where users will log in
COMPUTE_1, COMPUTE_2, … COMPUTE_N — Systems where users run sessions

Installation Instructions

  • for WEBSERVER_1, … WEBSERVER_N
    Install the webserver
  • for MANAGER_1, … MANAGER_N
    Install manager
  • for USER_1, … USER_N
    Install user
  • for COMPUTE_1, … COMPUTE_N
    Install launcher, xorg
  • Point Client to any WEBSERVER node
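The four-role layout extends the same role table, splitting the login (USER) tier from the session-host (COMPUTE) tier. As above, the role and node names are illustrative assumptions rather than FastX-defined values:

```shell
#!/bin/sh
# Advanced layout: the user tier (logins) is split from the compute
# tier (session hosts). Per the note above, only use this split when
# necessary. All names are illustrative.
role_components() {
    case "$1" in
        webserver) echo "webserver" ;;
        manager)   echo "manager" ;;
        user)      echo "user" ;;
        compute)   echo "launcher xorg" ;;
    esac
}

for spec in webserver:web1 manager:mgr1 user:login1 compute:node1; do
    echo "${spec#*:}: install $(role_components "${spec%%:*}")"
done
```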