# Redundancy
The Ignition Helm Chart can deploy a redundant pair of gateways as a 2-replica StatefulSet, where pod index 0 is assigned the primary role and pod index 1 is assigned the backup role.
With this arrangement, the startup of Ignition is also customized such that role-specific values are applied correctly. Examples include:
- 8-digit leased activation licensing
- Public Address Settings (address, http port, https port)
- Unique Ingress routes for primary/backup gateways
Once enabled, the chart will deploy two pods: the first pod (index 0) will be the Primary role Ignition Gateway, and the second pod (index 1) will be the Backup role. Since they're deployed as a StatefulSet with the default `OrderedReady` pod management policy, pod index 0 (Primary) will start and become ready before pod index 1 is created. Similarly, when changes to the definition require recreating the pods, the Backup gateway will be shut down first.
## Setting up Redundancy
Setting up redundancy with the Ignition Helm Chart is easy; a single value enables it:
```yaml
# testing.yaml
gateway:
  redundancy:
    enabled: true
```
Once both pods become ready, you'll need to perform some additional steps within the web UI:
1. Go to the Backup gateway and approve the Primary gateway certificate in the outgoing connections settings.
2. Go to the Primary gateway and approve the Backup gateway certificate in the incoming connections settings.
3. Finally, on the Primary gateway, approve the inbound connection.
## Automating Gateway Network Connectivity
Enabling redundancy was easy enough. It gets even better with cert-manager! You can enable cert-manager integration in combination with redundancy for a fully-automated deployment.
```yaml
# testing.yaml
gateway:
  redundancy:
    enabled: true
    certManager:
      enabled: true
```
Using the values above will set up:
- Creation of a dedicated cert-manager certificate `Issuer` for the Gateway Network (see the sketch after this list).
- Generation of a Gateway Network certificate with a PKCS12 keystore for Ignition, signed by the Gateway Network issuer.
- Integrated trust of the issuer certificate across the Primary and Backup gateways.
- Require SSL and Require Two Way Auth enabled by default.
- Gateway Network Connection Security policy set to `Unrestricted`, because only gateways with a certificate signed by our issuer can connect.
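To make the first item concrete: a dedicated Gateway Network issuer of this kind is typically a cert-manager CA issuer backed by a CA keypair stored in a Secret. The following is a minimal illustrative sketch of such a resource, not the chart's actual rendered output; the resource and secret names are hypothetical:

```yaml
# Illustrative sketch only -- names are hypothetical, not chart output.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: gateway-network-issuer  # hypothetical name
spec:
  ca:
    # CA keypair stored in a Secret; Gateway Network certificates
    # are signed by this CA.
    secretName: gateway-network-ca
```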
The result is a fully operational, freshly installed redundant pair of Ignition Gateways with best-in-class security configured out-of-the-box! 🎉
## Role-specific options
There are many values in the Ignition Helm Chart that can be customized by role. The values file will indicate where these customizations are possible.
For example, consider how you'd customize the public address settings for a standalone Ignition gateway:
```yaml
# testing.yaml
gateway:
  # ...
  publicAddress:
    host: "my-ignition.example.com"
    http: 80
    https: 443
```
When redundancy is enabled, the default behavior is to apply a `-primary` or `-backup` suffix to the main host. If you want to override these, you can use the modified forms shown below:
```yaml
# testing.yaml
gateway:
  # ...
  primaryPublicAddress:
    host: "my-main-ignition.example.com"
  backupPublicAddress:
    host: "my-backup-ignition.example.com"
```
Many of these primary/backup overrides can be combined with the original configuration value: the base value supplies shared configuration, and the role-specific values overlay it. Applying annotations to service or ingress resources is one example, as sketched below.
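As a hypothetical illustration of this shared-plus-overlay pattern (the key names here, such as `service.annotations` and the role-specific `primaryService`/`backupService` blocks, are assumptions modeled on the `primaryPublicAddress`/`backupPublicAddress` naming; check the chart's values file for the actual keys):

```yaml
# testing.yaml
# Hypothetical sketch: key names are assumptions, not confirmed chart values.
service:
  annotations:
    example.com/team: "controls"    # shared by both gateways
primaryService:
  annotations:
    example.com/role: "primary"     # applied only to the Primary service
backupService:
  annotations:
    example.com/role: "backup"      # applied only to the Backup service
```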
## Networking
The networking configuration for a redundant Ignition pair is also different from a standalone gateway. Perspective sessions and Vision clients both need to be able to connect to both the Primary and Backup nodes. For this, two ingress rules exist (with `-primary` and `-backup` suffixes), and the public address settings are configured accordingly.
You can specify unique ingress hostnames using the `ingress.primaryHostOverride` and `ingress.backupHostOverride` values to override the default suffix behavior.
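For example (the hostnames here are placeholders):

```yaml
# testing.yaml
ingress:
  primaryHostOverride: "ignition-primary.example.com"
  backupHostOverride: "ignition-backup.example.com"
```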
## Scheduling Primary and Backup Pods to different nodes
By default, the individual pods may be scheduled to the same node as a convenience in testing. In a production setting, you'll want to set `podAntiAffinity` to `true` at a minimum to ensure the pods get scheduled on different nodes.
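A minimal sketch (assuming, as with the `affinity` example below, that this is a top-level value in the chart):

```yaml
# testing.yaml
# Assumption: podAntiAffinity is a top-level value, alongside affinity.
podAntiAffinity: true
```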
In clusters that have nodes spread across availability zones, you may want to be more specific and configure the `affinity` value more completely. The `affinity` value in the chart supports rendering the supplied value as a Helm template. For example, you can use the `affinity` setting below to specify a pod anti-affinity that requires your redundant gateways to be scheduled in separate availability zones:
```yaml
# testing.yaml
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            {{- include "ignition.selectorLabels" . | nindent 10 }}
        topologyKey: topology.kubernetes.io/zone
```
The pipe (`|`) character after `affinity:` in the above example is important: it leverages YAML multiline (literal block) behavior to preserve the text block as written. The chart then runs the value through the templating engine. See YAML Multiline for a helpful reference on multiline string handling in YAML.
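To see why the literal block matters, compare how the value parses with and without it (a sketch, trimmed down from the example above):

```yaml
# With the pipe, affinity is a single multi-line *string*, which the
# chart can safely pass through the templating engine:
affinity: |
  podAntiAffinity: {}

# Without the pipe, YAML would parse the content as a nested mapping,
# and embedded {{ ... }} template syntax would fail to parse at all:
# affinity:
#   podAntiAffinity: {}
```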
Refer to the Kubernetes documentation on Assigning Pods to Nodes for more information.
## Other Considerations
### Manual control of upgrades
See the Upgrading section for more information on upgrading a redundant pair.
### Storage Constraints
Certain cloud storage providers may have additional constraints on scheduling workloads against their target backing storage. For example, in Amazon EKS, EBS volumes are created in a given availability zone (by default wherever their associated pod is first successfully scheduled). If you don't have your affinity settings configured properly, you could encounter a situation where you've got available compute in your cluster, but none available in the availability zone required for that pod's storage.
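This zonal pinning comes from the storage class's volume binding mode. For reference, a typical EBS-backed class looks roughly like the sketch below (names are illustrative; this is not a resource the chart creates):

```yaml
# storageclass-sketch.yaml -- illustrative only, not created by the chart
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-zonal  # hypothetical name
provisioner: ebs.csi.aws.com
# The volume is provisioned in whichever zone the pod is first scheduled,
# and the pod is pinned to that zone thereafter.
volumeBindingMode: WaitForFirstConsumer
```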