Multi-Node Architecture on Google Cloud
Deploy BindPlane OP in a multi-node configuration on Google Cloud
BindPlane OP on Google Cloud
Google Cloud can be used to host a scalable BindPlane OP architecture by leveraging multiple BindPlane OP instances in combination with Compute Engine, Cloud Load Balancer, and Pub/Sub.
The following requirements must be met:
- You must have access to a Google Cloud Project
- You must have a BindPlane OP Enterprise license
- You must be comfortable working with the Google Cloud services used in this guide (Compute Engine, Cloud Load Balancer, Pub/Sub, and Cloud SQL)
The following diagram displays a basic implementation.
- Cloud Load Balancer receives traffic from agents and users, and distributes that traffic to the BindPlane OP instances.
- BindPlane OP is deployed to two compute instances. Additional instances can be added.
- Pub/Sub is used for communication and coordination between BindPlane OP instances.
- PostgreSQL (Cloud SQL) is used as the storage backend, allowing storage to be shared and in sync between instances.
Create a firewall rule that allows connections to BindPlane OP on TCP/3001.
- Target Tags:
- Source Filters: IP ranges
- IP ranges:
- Protocols and Ports: tcp:3001
*Allowing access from all IP ranges will allow anyone on the internet to access BindPlane OP. This firewall rule should be restricted to allow access only from networks you trust.
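If you prefer the gcloud CLI over the console, the firewall rule described above can be sketched as follows. The rule name, network, target tag, and source range are assumptions for illustration; substitute values for your environment.

```shell
# Sketch: an ingress rule allowing TCP/3001 to instances tagged "bindplane".
# The rule name, network, target tag, and source range are placeholders.
gcloud compute firewall-rules create bindplane-op \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:3001 \
  --target-tags=bindplane \
  --source-ranges=203.0.113.0/24   # restrict to networks you trust
```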
In this guide, we will create two compute instances, starting with bindplane-1. See the prerequisites for information on sizing your instances individually. We expect this deployment to handle 200 agents, so we will select the n2-standard-2 instance type, which has exactly the required core count and more than enough memory.
- 2 cores
- 8 GB memory
- 60 GB persistent ssd
- Static public IP addresses
- Cloud API access scopes: set Pub/Sub to "Enabled"
- Network Tags:
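The instance settings above can also be sketched with gcloud. The zone and network tag are assumptions for illustration; the `pubsub` scope corresponds to the Pub/Sub access scope enabled above.

```shell
# Sketch: create one BindPlane OP instance; repeat with a second name
# for the other node. Zone and tag values are placeholders.
gcloud compute instances create bindplane-1 \
  --zone=us-central1-a \
  --machine-type=n2-standard-2 \
  --boot-disk-size=60GB \
  --boot-disk-type=pd-ssd \
  --tags=bindplane \
  --scopes=default,pubsub
```

A static public IP can be reserved separately with `gcloud compute addresses create` and attached to the instance with the `--address` flag.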
PostgreSQL is used as a shared storage backend for BindPlane OP. Google has many options available for production use cases, such as replication and private VPC peering.
In this guide, we will deploy a basic configuration with:
- 4 cores
- 16GB memory
- 250GB SSD for storage
- Authorized Networks (Under "connections") set to the public IP addresses of the previously deployed compute instances
All other options are left unconfigured or set to their default values.
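A Cloud SQL instance roughly matching this sizing can be sketched with gcloud. The instance name, region, and PostgreSQL version are assumptions; adjust them for your environment.

```shell
# Sketch: Cloud SQL for PostgreSQL with 4 vCPUs, 16 GB memory, 250 GB SSD.
# Instance name, region, and version are placeholders; authorized networks
# are the public IPs of the previously deployed compute instances.
gcloud sql instances create bindplane-postgres \
  --database-version=POSTGRES_14 \
  --tier=db-custom-4-16384 \
  --storage-type=SSD \
  --storage-size=250GB \
  --region=us-central1 \
  --authorized-networks=198.51.100.10/32,198.51.100.11/32
```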
Once the Cloud SQL instance is deployed, we need to create a database and a database user. On the instance's Databases page, select "Create database" and name it bindplane. On the Users page, add a new user named bindplane and use a secure password, or choose the "generate password" option. Note the password; it will be required when BindPlane OP is configured.
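The same database and user can be created from the CLI. The instance name is an assumption carried from the example above; replace the password placeholder with a secure value.

```shell
# Sketch: create the bindplane database and user on the Cloud SQL instance.
gcloud sql databases create bindplane --instance=bindplane-postgres
gcloud sql users create bindplane --instance=bindplane-postgres --password='REPLACE_ME'
```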
Google Pub/Sub is used by BindPlane OP to share information between instances. Create a new topic named bindplane and uncheck the "Add a default subscription" option. All other options can be kept at their default values.
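From the CLI, the topic creation is a single command; gcloud does not create a default subscription, which matches the unchecked option described above.

```shell
# Create the Pub/Sub topic used for coordination between instances.
gcloud pubsub topics create bindplane
```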
Cloud Load Balancer
In order to distribute connections between multiple BindPlane OP instances, a TCP load balancer is required. This guide will use an internet-facing load balancer; however, an internal load balancer is also supported.
Create a load balancer with the following options:
- From Internet to my VMs
- Single region only
- Target Pool or Target Instance
Configure the Backend with the following options:
- Region: The region used for your compute instances, Pub/Sub topic, and Cloud SQL instance
- Backends: "Select Existing Instances"
- Select your BindPlane OP instances
- Health check: Choose "create new health check"
- Request Path:
- Health criteria: Use default values
Configure the Frontend with the following options:
- New Frontend IP and Port:
Review and Create
Review the configuration and choose "Create". Once created, the load balancer's health checks will fail, because BindPlane OP is not yet installed and configured.
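The console steps above can be approximated with gcloud using a target-pool-based regional load balancer. All names, the region, and the zone are assumptions; adjust the health check to match the request path you configured, and the instance names to match your deployment.

```shell
# Sketch: regional TCP load balancer built from a target pool.
# Names, region, and zone are placeholders for this guide.

# Legacy HTTP health check against the BindPlane port.
gcloud compute http-health-checks create bindplane-health --port=3001

# Target pool containing the BindPlane OP instances.
gcloud compute target-pools create bindplane-pool \
  --region=us-central1 \
  --http-health-check=bindplane-health

gcloud compute target-pools add-instances bindplane-pool \
  --instances=bindplane-1,bindplane-2 \
  --instances-zone=us-central1-a

# Forwarding rule that exposes TCP/3001 on a public IP.
gcloud compute forwarding-rules create bindplane-fr \
  --region=us-central1 \
  --ports=3001 \
  --target-pool=bindplane-pool
```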
Install BindPlane OP
With Cloud SQL, Pub/Sub, and the load balancer configured, BindPlane OP can be installed on the previously deployed compute instances.
Connect to both instances using SSH and issue the installation command:
```shell
curl -fsSL https://github.com/observiq/bindplane-op/releases/latest/download/install-linux.sh | bash -s -- --enterprise
```
Once the script finishes, run the init server command:

```shell
sudo BINDPLANE_CONFIG_HOME=/var/lib/bindplane /usr/local/bin/bindplane init server --config /etc/bindplane/config.yaml
```
- License Key: Paste your license key
- Server Host: 0.0.0.0 to listen on all interfaces
- Server Port: 3001
- Remote URL: The IP address of your load balancer.
- Session secret: Use the generated UUID, or create your own UUID
- Enable Multi Account: Yes
- Auth Type: Single User*
- Accept EULA: Choose yes if you agree.
📘 You can select LDAP or Active Directory if you do not wish to use basic auth. External authentication is outside the scope of this guide.
The init command does not configure Pub/Sub or Postgres; the configuration file must be edited manually.
Stop BindPlane OP:
1sudo systemctl stop bindplane
Open the configuration file with your favorite editor. Make sure to use sudo or the root user, as the configuration file is owned by the bindplane system account. Find the store section and modify or add the following:
- store.postgres.host: The IP address of the Cloud SQL instance
- store.postgres.password: The database password used during user creation
```yaml
store:
  type: postgres
  maxEvents: 100
  bbolt:
    path: /var/lib/bindplane/storage/bindplane.db
  postgres:
    host: 18.104.22.168
    port: '5432'
    database: bindplane
    sslmode: disable
    maxConnections: 100
    username: bindplane
    password: redacted
```
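Before restarting BindPlane, it can be worth verifying connectivity and credentials from one of the compute instances. This is a sketch using the host, database, and user from the configuration above; it assumes the psql client is installed on the instance.

```shell
# Sketch: confirm the instance can reach Cloud SQL and authenticate.
psql "host=18.104.22.168 port=5432 dbname=bindplane user=bindplane sslmode=disable" -c 'SELECT 1;'
```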
Configure Pub Sub
Find the eventBus section and modify it to look like the following:
```yaml
eventBus:
  type: googlePubSub
  googlePubSub:
    projectID: <your project id>
    topic: bindplane
```
Authentication for the Pub/Sub integration is handled automatically because the compute instances were created with the Pub/Sub access scope enabled.
After configuring the storage backend and event bus, restart BindPlane:
1sudo systemctl restart bindplane
Once BindPlane starts, the Pub/Sub subscriptions are configured automatically.
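The automatically created subscriptions can be inspected from the CLI:

```shell
# List the subscriptions attached to the bindplane topic.
gcloud pubsub topics list-subscriptions bindplane
```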
After a few moments, the load balancer health checks will begin to pass.
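If the load balancer was built on a target pool, backend health can be checked directly. The pool name and region are assumptions; use the values from your deployment.

```shell
# Sketch: report per-instance health for the target pool.
gcloud compute target-pools get-health bindplane-pool --region=us-central1
```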
Cloud SQL activity can be monitored by enabling Query Insights.
On the Agents page, choose "Install Agent" and inspect the installation command. The -e flag should be set to the load balancer address; if it is not, this indicates a misconfiguration of BindPlane's remoteURL option in /etc/bindplane/config.yaml.

```shell
sudo sh -c "$(curl -fsSL https://github.com/observiq/bindplane-agent/releases/download/v1.32.0/install_unix.sh)" install_unix.sh -e ws://22.214.171.124:3001/v1/opamp -s redacted -v 1.32.0
```
To quickly test, deploy an agent to each of the BindPlane compute instances.