4.X FastX Clustering Guide


FastX is a platform for creating, managing, and connecting to Virtual Display Sessions (sessions) on remote Linux systems.  Multiple systems can be run individually as standalone systems, but the real power comes when those systems are linked together into a cluster.  Clusters allow administrators to centrally manage multiple systems, create single entry points into the cluster even behind a firewall, load balance systems to distribute work evenly, and use many other features that simplify and improve the user experience.

Cluster Components

The FastX cluster is divided into separate components which can be enabled on the same or different systems to allow for load balancing and fault tolerance. Each component serves a specific purpose. This separation of concerns reduces the attack surface, making the cluster more secure, scalable, and fault tolerant.

  • Transporter — Communication plane of the cluster
  • Webserver — Endpoint where clients connect
  • Manager — Holds cluster state information and configuration
  • User — Holds user login, permissions, and preferences information. Also enables SSH logins
  • Launcher — Launches sessions. Running sessions connect back to the launcher to advertise themselves

Installation

Requirements

  • FastX Advanced License Key
  • RHEL 8 or higher
  • Shared $FX_CONFIG_DIR, $FX_VAR_DIR, and $HOME directories
  • Each node needs its own $FX_LOCAL_DIR and $FX_TEMP_DIR
    • $FX_LOCAL_DIR may be mounted
    • $FX_TEMP_DIR must not be mounted
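
As a quick sanity check before installing, you can confirm how each of these directories is backed on a node. A minimal sketch, assuming the $FX_* variables are already defined in the current shell; the paths themselves come from your own configuration:

# Shared directories should resolve to a network filesystem (e.g. nfs4)
findmnt -n -o FSTYPE,SOURCE --target "$FX_CONFIG_DIR"
findmnt -n -o FSTYPE,SOURCE --target "$FX_VAR_DIR"
findmnt -n -o FSTYPE,SOURCE --target "$HOME"

# $FX_TEMP_DIR must report a local filesystem (e.g. xfs, ext4, tmpfs), never a network mount
findmnt -n -o FSTYPE,SOURCE --target "$FX_TEMP_DIR"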

Setup

Each system in the cluster is a node. Nodes are divided into two categories: cluster nodes and transporter nodes. Cluster nodes do the work of the FastX cluster (authentication, launching sessions, web server endpoint, etc.). Transporter nodes provide the communication channels between services in the cluster.

You can install both a cluster node and a transporter node on the same system.

On every system

  • Download and install the FastX repository installation script
  • Download and install the FastX advanced setup script (setup-fastx-advanced.sh)
    • You will be prompted for your advanced activation key
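
As a sketch of the order of operations, both scripts are run as root; the repository script filename below is a placeholder (only setup-fastx-advanced.sh is named in this guide):

# Placeholder name for the FastX repository installation script
sudo bash ./install-fastx-repo.sh

# Advanced setup script; prompts for the advanced activation key
sudo bash ./setup-fastx-advanced.sh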

On cluster nodes

Install fastx4-advanced

  • RHEL/CentOS: dnf install -y fastx4-advanced
  • Ubuntu/Debian: apt install -y fastx4-advanced

Enable specific services on each node

By default, all services are enabled on each node. Optionally, you can set up specific cluster nodes for different purposes. For example, you may want some nodes to act as gateways while others act as compute nodes, or you may want to disable logins on some nodes.

See Cluster Installation Patterns for common node setups.

On Transporter nodes

Install fastx4-nats

  • RHEL/CentOS: dnf install -y fastx4-nats
  • Ubuntu/Debian: apt install -y fastx4-nats

/etc/fastx/nats-server.conf

The transporter uses the NATS protocol for fast and efficient transport. The /etc/fastx/nats-server.conf file configures the NATS transporter. The authorization token must match the token in /etc/fastx/transporter-secret.ini.

authorization {
    token: "$2a$11$PWIFAL8RsWyGI3jVZtO9Nu8.6jOxzxfZo7c/W0eLk017hjgUKWrhy"
}
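
One way to confirm that the tokens line up is to print the token lines from both files and compare them by eye. This assumes nothing about the ini layout beyond the token appearing on a line containing the word "token":

# Print token lines from the NATS config and the FastX transporter secret
grep -i token /etc/fastx/nats-server.conf /etc/fastx/transporter-secret.ini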

Distribute /etc/fastx/transporter-secret.ini

Installing the fastx4-nats package creates the file /etc/fastx/transporter-secret.ini.
Copy transporter-secret.ini to /etc/fastx/transporter-secret.ini on each cluster node.

On each cluster node

chown fastx.fastx /etc/fastx/transporter-secret.ini
chmod 600 /etc/fastx/transporter-secret.ini
systemctl restart fastx4.service

Note: Each installation of fastx4-nats on a transporter node creates its own /etc/fastx/transporter-secret.ini and /etc/fastx/nats-server.conf with a new, unique token. Make sure the tokens match across the cluster before distributing them.
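
Distribution can also be scripted from the transporter node. A minimal sketch, assuming root SSH access and hypothetical hostnames node1 through node3; adapt it to your own fleet tooling (Ansible, Puppet, etc.):

# Hypothetical cluster node hostnames
for node in node1 node2 node3; do
    scp /etc/fastx/transporter-secret.ini root@"$node":/etc/fastx/transporter-secret.ini
    ssh root@"$node" 'chown fastx.fastx /etc/fastx/transporter-secret.ini &&
                      chmod 600 /etc/fastx/transporter-secret.ini &&
                      systemctl restart fastx4.service'
done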

High Availability Setup

Cluster nodes

High availability is built in by default. Simply add more cluster nodes configured according to Cluster Installation Patterns. The services will distribute the load evenly across the HA nodes.

Transporter nodes

NATS supports clustering and high availability. Edit /etc/fastx/nats-server.conf to enable clustering according to the NATS configuration documentation (see the links below the example).

# Client port of 4222 on all interfaces
port: 4222

authorization {
   token: thisisasecret
}
tls {
  cert_file: /etc/pki/tls/certs/fedora-self.nats.crt
  key_file: /etc/pki/tls/private/fedora-self-key.nats.txt
}
# This is for clustering multiple servers together.
cluster {
  # Route connections to be received on any interface on port 6222
  port: 6222
  # Routes are protected, so need to use them with --routes flag
  authorization {
    user: ruser
    password: T0pS3cr3t
    timeout: 2
  }
  # Routes are actively solicited and connected to from this server.
  routes = [
    nats://10.211.55.9:6222,
    nats://10.211.55.4:6222,
    nats://10.211.55.8:6222
  ]
}

https://docs.nats.io/running-a-nats-service/configuration/clustering/cluster_config
https://docs.nats.io/running-a-nats-service/configuration/clustering
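
If you also enable the NATS HTTP monitoring port in nats-server.conf (for example, http_port: 8222, which is an assumption and not something this guide configures), you can check from any transporter node that the routes have formed:

# List the active cluster routes via the NATS monitoring endpoint
curl -s http://localhost:8222/routez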

Load Balancing

Load balancing distributes FastX sessions across multiple compute nodes according to the administrator's setup. Load balancing is done via custom configuration scripts, enabling an admin to balance according to their use case.

See Load Balancing
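
As a purely illustrative sketch of the kind of policy such a script encodes (not the actual FastX script interface, which is described in the Load Balancing documentation), the following chooses the node with the fewest sessions from hypothetical "<node> <session_count>" lines on stdin:

#!/bin/bash
# Illustrative only: pick the node with the lowest session count.
# Input format and output contract are assumptions, not the FastX interface.
sort -k2 -n | head -n1 | awk '{print $1}'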

Job Scheduling

Job scheduling defers the launch of a session to a later time via a special job scheduling script. Instead of calling the default start script, the FastX launcher calls a job scheduler template or an admin-supplied custom script that uses a custom job scheduler to launch the session. The session shows up in a pending state in the user's UI.

See Job Scheduling
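
As a purely illustrative sketch (not the actual FastX template or launcher contract, which is described in the Job Scheduling documentation), a custom script might hand the session start command off to a scheduler such as Slurm; here the start command is assumed to arrive as the script's arguments:

#!/bin/bash
# Illustrative only: submit the assumed session start command to Slurm.
sbatch --job-name=fastx-session --wrap="$*"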

Long Term Storage and Configuration

All data that must be stored across restarts is saved in flat human readable files in directories on the different cluster components.

  • Storing information in flat files removes the need for setting up a fault tolerant database.
  • It allows administrators to easily modify configuration.
  • It seamlessly integrates with configuration management systems like Puppet, Ansible, and Kubernetes ConfigMaps/Secrets.
  • It is self-documenting.

NOTE: When building a fault-tolerant cluster, administrators should place the configuration directories on a shared mountpoint (for example, NFS or etcd).
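
A minimal sketch of such a shared mountpoint, using NFS; the server name and export paths are placeholders, and the mount targets are whatever $FX_CONFIG_DIR and $FX_VAR_DIR point to on your nodes:

# Hypothetical NFS server and export paths; substitute your own
mount -t nfs nfs.example.com:/exports/fastx/config "$FX_CONFIG_DIR"
mount -t nfs nfs.example.com:/exports/fastx/var    "$FX_VAR_DIR"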