
Multi-Node Architecture on Google Cloud

Deploy BindPlane OP in a multi-node configuration on Google Cloud

Google Cloud can be used to host a scalable BindPlane OP architecture by leveraging multiple BindPlane OP instances in combination with Compute Engine, Cloud Load Balancer, and Pub/Sub.

Prerequisites

The following requirements must be met:

Architecture

See the High Availability documentation for details on the architecture that is used in this guide.

Deployment

Firewall

Create a firewall rule that allows connections to BindPlane on TCP/3001 (a gcloud sketch follows the rule settings below).

  • Name: bindplane
  • Target Tags: bindplane
  • Source Filters:
    • IP ranges: 0.0.0.0/0*
  • Protocols and Ports: TCP/3001

*Allowing access from all IP ranges will allow anyone on the internet access to BindPlane OP. This firewall rule should be restricted to allow access only from networks you trust.
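
If you prefer the gcloud CLI to the console, the rule above can be created with a command like the following sketch. The "default" network name is an assumption; restrict the source ranges to networks you trust.

bash
# Sketch: equivalent firewall rule created with gcloud.
# The "default" network is an assumption; tighten --source-ranges for production.
gcloud compute firewall-rules create bindplane \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:3001 \
  --target-tags=bindplane \
  --source-ranges=0.0.0.0/0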


Compute Engine

In this guide, we will create three compute instances, bindplane-0, bindplane-1, and bindplane-prometheus. See the prerequisites for information on individually sizing your instances.

We expect this deployment to handle 200 agents, so we will select the n2-standard-2 instance type, which provides the required core count and more than enough memory. We will use the same instance settings for Prometheus.

  • 2 cores
  • 8 GB memory
  • 60 GB persistent SSD

For the BindPlane instances, use the following additional configuration (a gcloud sketch for all three instances follows this list).

  • Static public IP addresses
  • Scopes
    • Set Cloud Platform to "enabled"
    • Set Pub/Sub to "enabled"
  • Network Tags: bindplane
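
As a rough gcloud equivalent, the three instances can be created as sketched below. The zone, image family, and boot disk settings are assumptions; adjust them to match your environment.

bash
# Sketch: create the BindPlane and Prometheus instances with gcloud.
# Zone and image are assumptions; the cloud-platform scope covers Pub/Sub access.
gcloud compute instances create bindplane-0 bindplane-1 \
  --zone=us-central1-a \
  --machine-type=n2-standard-2 \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=60GB \
  --boot-disk-type=pd-ssd \
  --tags=bindplane \
  --scopes=cloud-platform

# Prometheus uses the same sizing but does not need the scopes or network tag.
gcloud compute instances create bindplane-prometheus \
  --zone=us-central1-a \
  --machine-type=n2-standard-2 \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=60GB \
  --boot-disk-type=pd-ssd

# Static external IPs for the BindPlane instances can be reserved separately
# with `gcloud compute addresses create` and attached to the instances.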

Prometheus

Prometheus is used as a shared storage backend for BindPlane OP's agent throughput measurements. Connect to the bindplane-prometheus instance and follow our Self-Managed Prometheus documentation.
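
Once Prometheus is running, you can confirm it is reachable from one of the BindPlane instances. This quick check assumes internal DNS resolves the instance name and Prometheus is listening on its default port.

bash
# Quick reachability check from a BindPlane instance.
# Assumes internal DNS resolves "bindplane-prometheus" and the default port 9090.
curl -s http://bindplane-prometheus:9090/-/ready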

Cloud SQL

PostgreSQL is used as a shared storage backend for BindPlane OP. Google has many options available for production use cases, such as replication and private VPC peering.

Deploy

In this guide, we will deploy a basic configuration with:

  • 4 cores
  • 16 GB memory
  • 250 GB SSD for storage
  • Authorized Networks (Under "connections") set to the public IP addresses of the previously deployed compute instances

All other options are left unconfigured or set to their default values.
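
The same instance can be created with gcloud, as sketched below. The instance name, PostgreSQL version, and region are assumptions, and the authorized networks must be replaced with the public IPs of your BindPlane instances.

bash
# Sketch: create a comparable Cloud SQL instance with gcloud.
# Name, version, and region are assumptions; replace the authorized
# networks with the public IPs of bindplane-0 and bindplane-1.
gcloud sql instances create bindplane \
  --database-version=POSTGRES_14 \
  --region=us-central1 \
  --cpu=4 \
  --memory=16GB \
  --storage-type=SSD \
  --storage-size=250GB \
  --authorized-networks=203.0.113.10/32,203.0.113.11/32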


Configure

Once the Cloud SQL instance is deployed, we need to create a database and a database user.

On the Databases page, select "Create database" and name it bindplane.


On the Users page, add a new user named bindplane and use a secure password, or choose the "generate password" option. Note the password; it will be required when you configure BindPlane OP.
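
Both steps can also be performed from the CLI. The sketch below assumes the Cloud SQL instance itself is named bindplane and uses a placeholder password.

bash
# Sketch: create the database and user with gcloud (instance name assumed).
gcloud sql databases create bindplane --instance=bindplane

# Replace the placeholder with a secure password and note it for later.
gcloud sql users create bindplane \
  --instance=bindplane \
  --password='REPLACE_WITH_A_SECURE_PASSWORD'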


Pub/Sub

Google Pub/Sub is used by BindPlane OP to share information between instances. Create a new topic named bindplane. Uncheck the "add a default subscription" option. You can keep all other options set to their default value.
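
The equivalent gcloud command is a one-liner; it does not create a default subscription, which matches the console settings above.

bash
# Create the topic; no default subscription is added.
gcloud pubsub topics create bindplane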


Cloud Load Balancer

To distribute connections between multiple BindPlane OP instances, a TCP load balancer is required. This guide uses an internet-facing load balancer; however, an internal load balancer is also supported.

Create a load balancer with the following options:

  • From the internet to my VMs
  • Single region only
  • Pass-through
  • Target Pool or Target Instance

Backend Configuration

Configure the Backend with the following options (a gcloud sketch follows this list):

  • Name: bindplane
  • Region: The region used for your compute instances, Pub/Sub topic, and Cloud SQL instance
  • Backends: "Select Existing Instances"
    • Select your BindPlane OP instances
  • Health check: Choose "Create new health check"
    • Name: bindplane
    • Protocol: HTTP
    • Port: 3001
    • Request Path: /health
    • Health criteria: Use default values
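
A rough gcloud equivalent of this backend uses a legacy HTTP health check and a target pool. The region and zone below are assumptions.

bash
# Sketch: legacy HTTP health check and target pool backend.
# Region and zone are assumptions; adjust to your deployment.
gcloud compute http-health-checks create bindplane \
  --port=3001 \
  --request-path=/health

gcloud compute target-pools create bindplane \
  --region=us-central1 \
  --http-health-check=bindplane

gcloud compute target-pools add-instances bindplane \
  --region=us-central1 \
  --instances-zone=us-central1-a \
  --instances=bindplane-0,bindplane-1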

Frontend Configuration

Configure the Frontend with the following options (a gcloud sketch follows this list):

  • New Frontend IP and Port:
    • Name: bindplane
    • Port: 3001
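
The frontend corresponds to a regional external forwarding rule that points at the target pool; the region is again an assumption.

bash
# Sketch: pass-through forwarding rule on TCP/3001 pointing at the target pool.
gcloud compute forwarding-rules create bindplane \
  --region=us-central1 \
  --ports=3001 \
  --target-pool=bindplane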

Review and Create

Review the configuration and choose "Create". Once created, the load balancer's health checks will fail because BindPlane OP is not yet installed and configured.


Install BindPlane OP

With Cloud SQL, Pub/Sub, and the load balancer configured, BindPlane OP can be installed on the previously deployed compute instances.

Install Script

Connect to both BindPlane instances (bindplane-0 and bindplane-1) using SSH and run the installation command:

bash
curl -fsSlL https://storage.googleapis.com/bindplane-op-releases/bindplane/latest/install-linux.sh -o install-linux.sh && bash install-linux.sh && rm install-linux.sh

Initial Configuration

Once the script finishes, run the init server command on one of the instances. You will copy the generated configuration file to the second instance after configuring the first.

bash
sudo BINDPLANE_CONFIG_HOME=/var/lib/bindplane /usr/local/bin/bindplane init server --config /etc/bindplane/config.yaml
  1. License Key: Paste your license key.
  2. Server Host: 0.0.0.0 to listen on all interfaces.
  3. Server Port: 3001
  4. Remote URL: The IP address of your load balancer.
    1. Example: http://35.238.177.64:3001
  5. Enable Multi Project: Yes
  6. Auth Type: Single User*
  7. Storage Type: postgres
  8. Host: Public IP address of the Cloud SQL instance.
  9. Port: 5432
  10. Database Name: bindplane
  11. SSL Mode: require
  12. Maximum Number of Database Connections: 100
  13. PostgreSQL Username: bindplane
  14. PostgreSQL Password: The password you configured during the Cloud SQL setup.
  15. Event Bus Type: Google PubSub
  16. PubSub Project ID: Your Google Cloud project ID
  17. PubSub Credentials File: Leave this blank, authentication will be handled automatically.
  18. PubSub Topic: bindplane
  19. PubSub Subscription: Leave blank, subscriptions will be managed by each BindPlane instance.
  20. Accept Eula: Choose yes if you agree.
  21. Restart the server?: no

note

📘 You can select LDAP or Active Directory if you do not wish to use basic auth. External authentication is outside the scope of this guide.

Copy the contents from the file /etc/bindplane/config.yaml to the same location on the second instance. This will ensure both instances have an identical configuration. Specifically, both instances require the same value for auth.sessionSecret.
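
One way to copy the file is with gcloud, as sketched below. The zone is an assumption, sudo is required because the file is not readable by the SSH user, and the "bindplane" service user name is also an assumption.

bash
# Sketch: copy the configuration from bindplane-0 to bindplane-1.
# Zone is an assumption; the file is staged in /tmp because /etc/bindplane
# is not writable by the SSH user, and the "bindplane" service user is assumed.
gcloud compute ssh bindplane-0 --zone=us-central1-a \
  --command='sudo cat /etc/bindplane/config.yaml' > config.yaml
gcloud compute scp config.yaml bindplane-1:/tmp/config.yaml --zone=us-central1-a
gcloud compute ssh bindplane-1 --zone=us-central1-a \
  --command='sudo mv /tmp/config.yaml /etc/bindplane/config.yaml && sudo chown bindplane: /etc/bindplane/config.yaml'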

Configure Remote Prometheus

BindPlane OP uses Prometheus to store agent throughput metrics. When operating with multiple nodes, a shared Prometheus instance is required.

Stop BindPlane OP:

bash
sudo systemctl stop bindplane

Open the configuration file with your favorite editor. Make sure to use sudo or the root user, as the configuration file is owned by the bindplane system user.

bash
sudo vim /etc/bindplane/config.yaml

Find the Prometheus section. It will look like this:

yaml
prometheus:
  localFolder: /var/lib/bindplane/prometheus
  host: localhost
  port: '9090'
  remoteWrite:
    endpoint: /api/v1/write
  auth:
    type: none

Make two changes.

  1. Add enableRemote: true
  2. Update host: bindplane-prometheus

The final configuration will look like this:

yaml
prometheus:
  enableRemote: true
  localFolder: /var/lib/bindplane/prometheus
  host: bindplane-prometheus
  port: '9090'
  remoteWrite:
    endpoint: /api/v1/write
  auth:
    type: none

These changes will instruct BindPlane to use a remote Prometheus instance.

Start BindPlane

Restart all BindPlane instances to pick up the latest configuration.

bash
sudo systemctl restart bindplane
sudo systemctl status bindplane

Once BindPlane starts, the Pub/Sub subscriptions are configured automatically.
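
You can confirm this from the CLI; expect one subscription per BindPlane instance on the bindplane topic.

bash
# List the subscriptions attached to the topic; expect one per BindPlane instance.
gcloud pubsub topics list-subscriptions bindplane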


After a few moments, the load balancer health checks will begin to pass.
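
You can also probe the health endpoint directly through the load balancer; replace the placeholder address with your frontend IP.

bash
# Probe BindPlane's health endpoint through the load balancer frontend.
# 203.0.113.20 is a placeholder; use your forwarding rule's IP address.
curl -s http://203.0.113.20:3001/health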


Cloud SQL activity can be monitored by enabling Query Insights.


Use BindPlane OP

Connect to BindPlane OP

Browse to http://<load balancer address>:3001 and sign in to BindPlane using the username and password you set during the configuration step.

Install Agents

On the agents page, choose "Install Agent" and inspect the installation command. The -e flag should be set to the load balancer address. If it is not, the remoteURL option in /etc/bindplane/config.yaml is misconfigured.

To quickly test, deploy an agent to each of the BindPlane compute instances.
