Multi-Machine¶
Ericom Shield can be deployed in a variety of ways, according to the exact customer requirements. For smaller-scale deployments, Shield can be installed on a single machine and provide full functionality.
For larger-scale deployments, with greater loads or where redundancy is required, a multi-machine environment is recommended. This ensures high availability (HA) and scalability, and improves load balancing.
Scaling can be done during the initial installation or at a later stage, according to need (e.g., an increased number of users in the system). Scaling can be done both horizontally (by adding extra nodes) and vertically (by adding extra resources to existing machines).
Introduction to Node Types¶
Ericom Shield includes an orchestration platform. The orchestration platform includes two types of machines (nodes): Manager and Worker. The node type is defined by using a switch when setting up the node; more details follow below. The manager nodes are responsible for maintaining the containers, e.g. creating and destroying browser containers as the system is being used.
The orchestrator also plays an important role in fault tolerance. The orchestrator treats all system components as a single cluster and constantly checks that all elements are functioning properly. If a problem is detected in one of the components, the orchestrator shuts down the damaged component, creates a new instance of that container, joins the new container to the cluster and thus restores the system to normal operation. This process happens automatically, without user intervention.
For orchestration to function correctly, it is important to maintain an odd number of manager nodes. This ensures that the quorum is retained in the event of a node failure. The table below shows the level of fault tolerance provided for a given number of manager nodes. For example, if there are 5 manager nodes, the system remains operational even if 2 of them are lost.
Nodes | Majority | Fault Tolerance |
---|---|---|
1 | 1 | 0 |
2 | 2 | 0 |
3 | 2 | 1 |
4 | 3 | 1 |
5 | 3 | 2 |
6 | 4 | 2 |
7 | 4 | 3 |
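The values in this table follow a simple rule: the majority (quorum) is ⌊N/2⌋ + 1 and the fault tolerance is ⌊(N-1)/2⌋, where N is the number of manager nodes. A minimal shell sketch of this calculation (the node count of 5 is only an example):
# Quorum calculation for N manager nodes (example value; adjust as needed)
NODES=5
MAJORITY=$(( NODES / 2 + 1 ))
FAULT_TOLERANCE=$(( (NODES - 1) / 2 ))
echo "Managers: $NODES  Majority: $MAJORITY  Fault tolerance: $FAULT_TOLERANCE"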
Among the manager nodes, one is always defined as the leader, and this node is responsible for orchestrating the other machines in the cluster. If the current leader node fails, one of the other manager nodes is promoted to leader (provided a majority of managers still exists among the remaining nodes).
Note
Up to 7 manager nodes is considered the optimum in terms of performance. A higher number of manager nodes adds overhead.
Introduction to Ericom Components¶
In a multi-machine environment, Ericom Shield has three types of servers:
- Management - includes all components related to managing the system: administration, orchestration, log collection, etc. These servers are located inside the domain (in a protected area). To ensure HA, it is recommended to have 3-7 management servers (must be an odd number).
- Browsers Farm - includes all the remote browser servers. These servers are located in the DMZ. The number of servers varies according to system load.
- Core - includes all the core components. These servers are located in the DMZ. To ensure HA, it is recommended to have a minimum of 2 core servers.
The Ericom Shield environment contains various containers (Core / Management / Browsers, as described above). Management components are installed on manager nodes.
Core and Browsers farm components are installed on worker nodes. There is no special recommendation regarding the number of worker machines in the cluster; this number is determined by the scale of the system and the number of users/sessions that need to be supported. It is important that all browser machines have the same specifications (CPU and memory).
Design The System Deployment¶
It is highly important to properly design the system deployment before setting it up. While doing so, consider the number of locations/sites, the number of users, the workload, connection options, available resources, etc.
Ericom has developed a scaling model which can assist in estimating the number of browser nodes (machines) an organization will require based on the type of usage that is expected. Contact the Shield Professional Services team for further details.
Prerequisites¶
Prepare the required number of Linux Ubuntu 16.04 or 18.04 Server (64-bit, not workstation) machines. Ensure each machine (a quick verification sketch follows this list):
- Has a fixed IP address
- Has an SSH server installed
- Has the same username/password as the other machines (unified credentials for all system machines, required for update purposes)
- Has an internet connection (DNS and proxy settings are configured properly)
- Meets the hardware requirements specified here
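As a quick sanity check (not part of the official installation procedure), the following commands can be run on each Ubuntu machine to verify most of these prerequisites; the target URL is only an example:
# Show the machine's IP address (confirm it matches the fixed IP assigned to it)
hostname -I
# Confirm the SSH server is installed and running
systemctl status ssh
# Confirm DNS resolution and internet connectivity (subject to proxy settings)
getent hosts ubuntu.com
curl -sI https://www.ubuntu.com | head -n 1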
Note
In a Shield multi-machine system, connections between machines use passwords by default. Using SSH keys is much safer. Most cloud providers work with SSH keys, meaning certificates (public and private key pairs) are used to connect to machines. Shield supports connecting using a certificate. To enable this, create the certificate (public and private keys) and copy the public key to the machine that is about to be connected. For more information on SSH keys and how to set them up, go here.
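For example, a key pair can be generated and its public key copied to a target machine with the standard OpenSSH tools (the username and address below are placeholders):
# Generate a key pair (accept the default file location when prompted)
ssh-keygen -t rsa -b 4096
# Copy the public key to the machine that is about to be connected
ssh-copy-id <username>@<machine-address>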
System Deployment¶
After properly designing the system and configuring the machines according to the prerequisites, set up the system by following the instructions specified here.
Post Deployment Steps¶
Clustered Address¶
When the Shield Multi-Machine System is up and running, create a clustered DNS name (e.g., shield.company.com). The new DNS name should load balance proxy connections on port 3128 across the servers running the Management components. This DNS name should be used as the <ProxyHostname> in the browser configuration detailed here.
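As a simple connectivity check (shield.company.com is the example name used above and the target URL is arbitrary), a request can be sent from a client machine through the clustered proxy address:
# Send a test request through the load-balanced proxy address on port 3128
curl -x http://shield.company.com:3128 -sI https://example.com | head -n 1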
Ports¶
When deploying in the DMZ, refer to Ports and DNS for full details of the required ports between each of the nodes within the Shield Multi-Machine System.
Multi-Machine Useful Services¶
For a summary of the nodes in the cluster, their status and availability, and which node is the leader, run:
sudo ./status.sh -n
The output is displayed in a table, specifying the nodes in the cluster, their IP, status, role and labels.
For a more specific view of the cluster, use the following command:
sudo ./status.sh -s
The output is displayed in a table, with a column for each node, and entries for each service within the cluster. The table shows which services are running on which node within the cluster and the deployment of these services according to the node type (Management/Browsers/Core).
Another useful service, which allows further management of individual nodes, is ./nodes.sh.
This service includes the following options:
To display all the current labels of a node and its role (e.g. Manager/Worker):
sudo ./nodes.sh -show-labels <NodeName>
To add a label to an active node (e.g. management, shield-core or browser):
sudo ./nodes.sh -add-label <NodeName> <LabelName>
To remove a label from an active node (e.g. management, shield-core or browser):
sudo ./nodes.sh -remove-label <NodeName> <LabelName>
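For example, to move the browser role from one worker node to another (the node names below are hypothetical), remove the label from the first node and then add it to the second:
# Remove the browser label from the node being retired from the farm
sudo ./nodes.sh -remove-label worker-node-1 browser
# Add the browser label to the replacement node
sudo ./nodes.sh -add-label worker-node-2 browser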