Overview
Pinpoint is deployed on six standalone servers (no Docker Swarm), each running its own Pinpoint stack:
- n01.pinpoint
- n02.pinpoint
- n03.pinpoint
- n04.pinpoint
- n05.pinpoint
- internal-tools-pinpoint (internal-only; same architecture, but dedicated to internal services)
- Route53 points to the Hetzner load balancer
- The Hetzner load balancer distributes traffic to the public Pinpoint nodes (n01–n05)
Servers & environments
- Production
- Staging (legacy)
Pinpoint production runs on five public nodes behind the Hetzner load balancer:
- n01.pinpoint.devops.arabiaweather.com
- n02.pinpoint.devops.arabiaweather.com
- n03.pinpoint.devops.arabiaweather.com
- n04.pinpoint.devops.arabiaweather.com
- n05.pinpoint.devops.arabiaweather.com
A sixth node, internal-tools-pinpoint.devops.arabiaweather.com, is internal-only and is not behind the public load balancer.
External traffic flows: Route53 → Hetzner Load Balancer → Pinpoint nodes.
How we deploy
Pinpoint is managed with Docker Compose, not Docker Swarm services; deployments are performed per node with Docker Compose. Each node also runs an autoheal container, which automatically restarts unhealthy containers (including Pinpoint replicas) based on their health checks.
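The exact deployment commands are not recorded here; a typical per-node Docker Compose deployment would look roughly like the following sketch (commands and the service name are assumptions based on the stack layout described below, not a recorded runbook):

```bash
# On the node (e.g. n04.pinpoint), where /root/docker-compose.yml lives
cd /root

# Pull the new Pinpoint image referenced in docker-compose.yml,
# then recreate only the containers whose image changed
docker compose pull pinpoint
docker compose up -d

# Confirm all replicas report healthy; autoheal restarts any that do not
docker compose ps
```

Because autoheal watches container health, a replica that fails its health check after the rollout is restarted automatically rather than left serving errors.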
Node-level deployment (per server)
Each node (for example n04.pinpoint) is managed as a standalone host with a small stack defined in docker-compose.yml under /root:
- haproxy: front proxy for Pinpoint, using /root/haproxy.cfg
- pinpoint: main API service (multiple replicas on the node)
- redis-replica: local Redis instance for overrides (replica of the central Redis cluster)
- node-exporter and cadvisor: node and container metrics for Grafana
- A separate promtail stack under /root/promtail for log shipping
- autoheal: watches container health and automatically restarts unhealthy containers
Key files on each node:
- /root/docker-compose.yml – services on the node (Pinpoint, HAProxy, Redis replica, exporters)
- /root/haproxy.cfg – HAProxy frontend/backends pointing at Pinpoint containers
- /root/promtail/docker-compose.yml and /root/promtail/promtail-config.yml – Promtail setup for shipping logs to Grafana / Loki
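As a rough sketch of what /root/promtail/promtail-config.yml might contain, assuming the standard Promtail Docker-log scrape pattern (the Loki URL, host label, and log path here are illustrative, not the node's real values):

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  # Illustrative Loki endpoint; the real URL lives in the node's config
  - url: http://loki.example.internal:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    static_configs:
      - targets: [localhost]
        labels:
          job: docker
          host: n04.pinpoint          # per-node label, assumed
          __path__: /var/lib/docker/containers/*/*-json.log
```

This shape ships both the Pinpoint container logs and the HAProxy JSON logs (both written to Docker's json-file log driver) into Loki with a per-node label.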
HAProxy configuration (per node)
Each node runs an HAProxy container listening on:
- :80 – main HTTP entry point, routing to the local pinpoint containers on port 3000
- :8404 – Prometheus metrics endpoint for HAProxy
- Uses Docker DNS (resolvers docker) to discover Pinpoint containers by the pinpoint service name
- Performs HTTP health checks on /api/health (expects status 200)
- Exposes structured JSON logs to stdout (picked up by Promtail)
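The behaviour above maps onto a haproxy.cfg along these lines; this is a minimal sketch, not the node's actual file (the replica count in server-template, timings, and section names are assumptions):

```haproxy
resolvers docker
    nameserver dns 127.0.0.11:53   # Docker's embedded DNS
    resolve_retries 3
    hold valid 10s

frontend http_in
    bind :80
    default_backend pinpoint

frontend stats
    bind :8404
    http-request use-service prometheus-exporter if { path /metrics }

backend pinpoint
    option httpchk GET /api/health
    http-check expect status 200
    # server-template resolves the `pinpoint` service name via Docker DNS,
    # so replicas are discovered without listing container IPs by hand
    server-template pinpoint- 16 pinpoint:3000 check resolvers docker init-addr none
```

The server-template/resolvers combination is what lets HAProxy track Pinpoint replicas as Compose recreates them, instead of pinning to stale container IPs.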
Pinpoint service (per node)
On each node:
- The pinpoint service is run from a Docker image such as registry.docker.devops.arabiaweather.com/pinpoint:v2025.12.08-hotfix-cache-yazan
- Multiple replicas (e.g. replicas: 14) are started on the same host
- Health checks call http://localhost:3000/api/health from inside the container
- A local volume location_cache provides /code/data (elevation file, location cache)
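Put together, the pinpoint service definition in /root/docker-compose.yml would look something like this sketch (the image tag is taken from above; the health-check command, intervals, and autoheal label are assumptions about a typical setup):

```yaml
services:
  pinpoint:
    image: registry.docker.devops.arabiaweather.com/pinpoint:v2025.12.08-hotfix-cache-yazan
    deploy:
      replicas: 14                  # Compose v2 honours this with `docker compose up`
    healthcheck:
      # Assumes curl is available inside the image
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
      interval: 15s
      timeout: 5s
      retries: 3
    volumes:
      - location_cache:/code/data   # elevation file, location cache
    labels:
      - autoheal=true               # opt this service into autoheal restarts

volumes:
  location_cache:
```

Note that `deploy.replicas` works with plain Docker Compose here; no Swarm mode is involved, so all 14 replicas run on the same host behind the local HAProxy.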
Each Pinpoint instance depends on:
- Central Redis (e.g. node03.cluster.devops.arabiaweather.com:6379) for cache read/write
- A local redis-replica service for overrides
- ModMS at modms.devops.arabiaweather.com
- The logging stack (Promtail → Loki → Grafana) for application logs and dashboards
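The local redis-replica service could be defined as simply as the following fragment; this is a sketch assuming a stock Redis image replicating from the central cluster node named above (the image version and restart policy are assumptions):

```yaml
  redis-replica:
    image: redis:7
    command:
      - redis-server
      - --replicaof
      - node03.cluster.devops.arabiaweather.com
      - "6379"
    restart: unless-stopped
```

Running the replica next to Pinpoint keeps override reads local to the node even if the central cluster is briefly unreachable.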
Staging Environment
There is a staging Pinpoint instance available via Portainer that still uses the legacy deployment pattern. Use staging to:
- Validate new releases before production
- Test configuration changes
- Perform smoke tests for critical endpoints
Caveats:
- Staging still runs on the older stack (a Docker Swarm-style deployment with Nginx in the stack)
- It does not fully match the new six-node HAProxy + Docker Compose architecture
- It should be treated as a legacy environment while production runs on the new pattern
Observability
Pinpoint is integrated with Grafana dashboards for:
- Metrics (request rates, latency, errors, per-node health)
- Logs (via Promtail sending container and HAProxy logs into Loki, visualized in Grafana)
Use the Grafana dashboards to:
- Verify deployments
- Monitor traffic and performance across all six nodes
- Investigate incidents and errors quickly
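Since every node exposes HAProxy metrics on :8404 (and node-exporter/cadvisor for host and container metrics), the Prometheus side of this would be a scrape config along these lines; a sketch only, assuming default exporter ports and illustrative job names:

```yaml
scrape_configs:
  - job_name: pinpoint-haproxy
    static_configs:
      - targets:
          - n01.pinpoint.devops.arabiaweather.com:8404
          - n02.pinpoint.devops.arabiaweather.com:8404
          - n03.pinpoint.devops.arabiaweather.com:8404
          - n04.pinpoint.devops.arabiaweather.com:8404
          - n05.pinpoint.devops.arabiaweather.com:8404
          - internal-tools-pinpoint.devops.arabiaweather.com:8404

  - job_name: pinpoint-node-exporter
    static_configs:
      # 9100 is node-exporter's default port (assumed); list all six nodes
      - targets:
          - n01.pinpoint.devops.arabiaweather.com:9100
```

Scraping all six nodes under one job is what makes the per-node health and traffic comparisons in Grafana straightforward.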

