The Solace Event Broker can be deployed in two modes:
1. A single standalone instance, suitable for getting started quickly in developer environments.
2. A High Availability deployment of 3 nodes (a.k.a. "a triplet"), consisting of primary and backup messaging nodes, plus a single monitor node to prevent split-brain activation.
Container Runtime, Image Repository and Restart Policy
Solace supports Docker and Podman as container runtimes, and the image can be pulled from Docker Hub.
It is important for Solace containers to restart automatically after a crash or host reboot, so set a restart policy accordingly.
The default user ID in a Solace container is 1000001. If required, you can override this with a specific UID available on the host machine.
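As a minimal sketch, a standalone developer instance with an automatic restart policy might be started as follows (the image name assumes the Standard Edition on Docker Hub; adjust the UID override to suit your host):

```shell
# Pull the Standard Edition image from Docker Hub and start it with a
# restart policy so the broker comes back after a crash or host reboot.
docker pull solace/solace-pubsub-standard:latest

docker run -d \
  --name solace-dev \
  --restart unless-stopped \
  --user 1000001 \
  --shm-size=1g \
  solace/solace-pubsub-standard:latest
```

--restart unless-stopped restarts the container after crashes and host reboots but respects an explicit docker stop. Note that with rootless podman, surviving a host reboot additionally requires a systemd unit for the container.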
Important Note
macOS reserves port 55555, which conflicts with the Solace container's default SMF port. In this case the container's port 55555 is mapped to host port 55554 instead. As Solace samples refer to SMF port 55555 by default, adjust your usage to 55554 accordingly. For the same reason, host mode networking cannot be selected on macOS.
Important Note
The Enterprise Edition of the Solace broker is the purchased edition and is not available from Docker Hub.
It is provided as a compressed tar archive containing a Docker repository of a single image.
You can download it using your customer credentials from https://products.solace.com/
The image can be loaded into the container repository used by your enterprise, so that its full path can then be provided above.
Alternatively, use the docker or podman load command to load it into local machine container storage.
e.g. docker load -i /path/to/downloaded/solace-pubsub-enterprise-10.25.x.x-docker.tar.gz
Networking Configuration
The choice of networking mode is an important consideration when deploying Solace containers.
1. bridge networking: the container is deployed in a separate network namespace from the host, with port forwarding rules.
2. host networking: the container is deployed in the host’s network namespace.
3. slirp4netns networking: suitable if running rootless containers with podman.
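The difference between these modes shows up in the run command; a sketch, with the image name and port selection assumed for illustration:

```shell
# Bridge networking (the default): the container gets its own network
# namespace, and each service port must be explicitly published.
docker run -d --name solace-bridged \
  -p 55555:55555 -p 8080:8080 \
  solace/solace-pubsub-standard:latest

# Host networking: the container shares the host's network namespace,
# so no -p port forwarding rules are needed (or allowed).
docker run -d --name solace-hostnet \
  --network host \
  solace/solace-pubsub-standard:latest

# Rootless podman uses slirp4netns-style user-mode networking by
# default; ports are still published with -p.
podman run -d --name solace-rootless \
  -p 55555:55555 \
  solace/solace-pubsub-standard:latest
```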
Container Storage Configuration
The state information associated with a Solace container should be externalised to a filesystem mount on the host machine, both for performance and for ease of management later (some of this data is long lived).
The storage path on the host will be mounted to /var/lib/solace in the container's filesystem. It must be formatted with XFS and be writable by the container user ID.
Important Note
If running rootless containers (with podman), take extra care to manage the directory permissions of the above path.
The container's user ID is remapped to an unprivileged subordinate UID on the host, so writes by the container to a host directory owned by your own user will fail.
Use the command
"podman unshare chown -R <container-UID>:<container-GID> /path/to/host/storage"
to change ownership inside the user namespace so the container can write to it. See the documentation for more information.
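Putting the storage steps together as a sketch (the device name, mount point, and group ID here are assumptions for illustration):

```shell
# Assumption: /dev/sdb1 is a spare partition on the host.
mkfs.xfs /dev/sdb1
mkdir -p /opt/solace/storage
mount -t xfs /dev/sdb1 /opt/solace/storage

# Rootless podman only: remap ownership into the container's user
# namespace so the container UID (1000001 by default) can write to it.
podman unshare chown -R 1000001:0 /opt/solace/storage

# Mount the host path at /var/lib/solace inside the container.
podman run -d --name solace-dev \
  -v /opt/solace/storage:/var/lib/solace \
  solace/solace-pubsub-standard:latest
```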
Password Configuration
When a Solace container is started, a default user with admin access can be created, allowing immediate use of the CLI or the Management UI.
This user is called admin, and its initial password can be supplied in one of three ways:
1. Plain text string: suitable for quick-start developer environments.
2. Password file path: a file on the host filesystem to read for the password.
3. Encrypted password: provide a previously encrypted password.
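For the quick-start plain-text option, the password is typically supplied as a configuration key at start-up; a sketch (the username_admin_* key names follow the Solace container configuration-key convention, but verify them against the current documentation):

```shell
# Create the default admin user with a plain-text initial password.
# Suitable only for quick-start developer environments.
docker run -d --name solace-dev \
  --shm-size=1g \
  --env username_admin_globalaccesslevel=admin \
  --env username_admin_password=admin-password \
  solace/solace-pubsub-standard:latest
```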
Important Note
When using the password file option, load the file into your container runtime's secret filesystem first.
/run/secrets is a temporary, in-memory filesystem location inside a container where sensitive data (secrets) are mounted at runtime instead of being baked into images or passed via environment variables.
e.g. docker secret create solace-secrets admin-pass.txt to add the file into a secrets filesystem called solace-secrets
Then when running the container, include the extra argument --secret solace-secrets
Important Note
The encrypted password must be a SHA-512 hash, generated using a random string salt.
Provide it above in the format: $6$<salt string>$<password-hash> e.g. $6$uE7L+w62wAI4$A0A5D01...1F65E441134AE01256
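One way to produce a hash in this format is OpenSSL's crypt-style password tool; a sketch using a random 16-character salt (the password here is a placeholder):

```shell
# Generate a random salt (16 hex characters) and a SHA-512 crypt hash
# in the $6$<salt>$<hash> format shown above.
salt=$(openssl rand -hex 8)
openssl passwd -6 -salt "$salt" 'my-admin-password'
```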
As Solace is a multi-protocol broker, the services are provided over a number of different ports.
Select the protocols you intend to use so that the port forwarding rules can be constructed between the container's network namespace and the host network.
Solace Messaging Protocol (SMF)
  SMF (Plain):                  55555
  SMF (Secure TLS):             55443
Solace Messaging (SMF) over WebSocket
  WebSocket (Plain):            8008
  WebSocket (Secure TLS):       1443
AMQP
  AMQP (Plain):                 5672
  AMQP (Secure TLS):            5671
MQTT
  MQTT (Plain):                 1883
  MQTT (Secure TLS):            8883
  MQTT WebSocket (Plain):       8000
  MQTT WebSocket (Secure TLS):  8443
REST/HTTP Messaging
  REST (Plain):                 9000
  REST (Secure TLS):            9443
SEMP (Management API)
  SEMP (Plain):                 8080
  SEMP (Secure TLS):            1943
Command Line Interface over SSH
  CLI (SSH):                    2222
Multi-Node Routing (MNR)
  MNR Control Port:             55556
Note: Additional ports related to communication between HA nodes are automatically included when HA mode is selected and are not explicitly shown here.
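With bridge networking, each selected service port becomes a -p forwarding rule in the run command; a sketch publishing the plain and TLS ports listed above (image name assumed):

```shell
# SMF: 55555/55443, SMF WebSocket: 8008/1443, AMQP: 5672/5671,
# MQTT: 1883/8883 (WebSocket 8000/8443), REST: 9000/9443,
# SEMP: 8080/1943, CLI over SSH: 2222.
docker run -d --name solace-dev --shm-size=1g \
  -p 55555:55555 -p 55443:55443 \
  -p 8008:8008 -p 1443:1443 \
  -p 5672:5672 -p 5671:5671 \
  -p 1883:1883 -p 8883:8883 -p 8000:8000 -p 8443:8443 \
  -p 9000:9000 -p 9443:9443 \
  -p 8080:8080 -p 1943:1943 \
  -p 2222:2222 \
  solace/solace-pubsub-standard:latest
```

Omit the rules for any protocols you do not intend to use; on macOS, replace -p 55555:55555 with -p 55554:55555 as noted above.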
Scaling Parameters
A Solace broker can be deployed in multiple scaling tiers, with capacity limits and host resource usage matching the intended use-case.
Ensure the minimum resource requirements are met for a successful container start. The default scaling levels set below are for the minimal size developer instance.
Parameter                         Default (developer)  Scaling tier options
Max Message Spool Usage (GB)      (set as required)    (set as required)
Max Connections                   100                  100 / 1,000 / 10,000 / 100,000 / 200,000
Max Queue Messages                100M                 100M / 240M / 3000M
Max Kafka Bridges                 0                    0 / 10 / 50 / 200
Max Kafka Broker Connections      0                    0 / 300 / 2,000 / 10,000
Max Bridges                       25                   25 / 500 / 5,000
Max Subscriptions                 50,000               50,000 / 500,000 / 5,000,000
Max Guaranteed Message Size (MB)  10                   10 / 30
Note: tiers above the default require Enterprise Edition.
Important Note
Standard Edition has lower message spool and scaling limits. If you need higher limits, select Enterprise Edition.
Standalone Broker Configuration
For the simple standalone instance, provide a suitable name for the container; it will be used in the generated run command.
High Availability Configuration
When deploying a High Availability triplet of Solace containers, each node is configured with the names and network locations of the others.
Provide a suitable name for each node to deploy, along with an IP address or fully qualified domain name (FQDN) through which each node can connect to the other two.
Node
Broker and Container Name
Host IP address or FQDN
Primary node
Backup node
Monitor node
Every node is also configured with a pre-shared authentication key that allows the nodes to authenticate to one another.
The key is 44 to 344 characters long (32 to 256 bytes of binary data, base64 encoded) and can be generated below.
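A key in this range can be produced with OpenSSL; for example, 96 random bytes encode to 128 base64 characters, well within the required bounds:

```shell
# Generate 96 bytes of random data, base64 encoded (128 characters,
# within the required 44-344 character range), and save it to a file.
openssl rand -base64 96 | tr -d '\n' > solace-presharedkey.txt
```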
Important Note
Load the pre-shared key file into your container runtime's secret filesystem (/run/secrets) first.
e.g. podman secret create solace-secrets solace-presharedkey.txt to add the file into a secrets filesystem called solace-secrets
Then when running the container, include the extra argument --secret solace-secrets
TLS Server Certificate
If setting up your broker to use TLS connections, provide a server certificate file path, and optionally a passphrase file path if the certificate is encrypted.
Important Note
Load the certificate file (and passphrase file, if used) into your container runtime's secret filesystem (/run/secrets) first.
e.g. podman secret create solace-secrets server-certificate.p12 to add the file into a secrets filesystem called solace-secrets
Then when running the container, include the extra argument --secret solace-secrets
Generated Output
Standalone node container start command
Primary node container start command
Backup node container start command
Monitor node container start command
Important Note
After deploying a High Availability broker group, once all nodes are running successfully you need to designate one of the brokers as the config-sync 'leader'.