# CloudNativePG with Node.js and Express
Everyone and their mother has heard the advice: do not run a database on Kubernetes. But it's almost 2025, and Kubernetes, along with the CNCF (Cloud Native Computing Foundation) ecosystem, has come a long way. CloudNativePG, besides being a Kubernetes-native way to host a database, brings some great benefits too.
“Additionally, CloudNativePG provides cloud native capabilities like self-healing, high availability, rolling updates, scale up/down of read-only replicas, affinity/anti-affinity/tolerations for scheduling, resource management, and so on.” - Taken from the official website
In this small blog we will set up a bare-minimum but usable Postgres cluster, along with a Node.js and Express app to test it. Hopefully this will be followed by a series of in-depth tutorials on managing things like backups, restores, and scaling.
## Let’s start by installing CNPG

This will install the CNPG operator:

```shell
kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.24/releases/cnpg-1.24.1.yaml
```
> **Note:** An operator, in Kubernetes for the uninitiated, is just a set of custom definitions built from fundamental Kubernetes building blocks to abstract away complexity.
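To check that the operator actually came up before moving on, you can wait on its deployment. This assumes the default installation namespace and deployment name that the manifest above creates (`cnpg-system` / `cnpg-controller-manager`):

```shell
# Blocks until the operator deployment finishes rolling out
kubectl rollout status deployment -n cnpg-system cnpg-controller-manager
```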
## Database Configuration
Create your CloudNativePG cluster manifest (`postgres-cluster.yaml`):
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-cluster
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      database: appdb
      owner: appuser
      secret:
        name: pg-credentials
---
apiVersion: v1
kind: Secret
metadata:
  name: pg-credentials
type: Opaque
stringData:
  username: appuser
  password: your-secure-password
```
Now apply it:

```shell
kubectl apply -f postgres-cluster.yaml
```
After a few seconds you will see three pods up, and also three services:

```
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
pg-cluster-r    ClusterIP   10.108.51.255    <none>        5432/TCP   1m
pg-cluster-ro   ClusterIP   10.103.160.182   <none>        5432/TCP   1m
pg-cluster-rw   ClusterIP   10.109.250.51    <none>        5432/TCP   1m
```
These three services each have a distinct use case: `pg-cluster-rw` always points to the primary, `pg-cluster-ro` to the read-only replicas, and `pg-cluster-r` to any instance; the official docs cover them in detail. For our use case we will only use the `pg-cluster-rw` service, which allows both reads and writes.
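The naming scheme is just `<cluster-name>-<mode>`, so picking a service from application code can be a one-liner. The helper below is purely illustrative (it is not part of `pg` or CNPG); it builds the connection options you would hand to a `pg` Pool:

```javascript
// Hypothetical helper: build `pg` Pool options for one of the three
// CNPG services. mode is 'rw' (primary), 'ro' (replicas) or 'r' (any).
function cnpgConfig(cluster, mode, database) {
  return {
    host: `${cluster}-${mode}`, // e.g. pg-cluster-rw
    port: 5432,
    database,
  };
}

console.log(cnpgConfig('pg-cluster', 'rw', 'appdb').host); // pg-cluster-rw
```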
## Node.js Application
Here's a simple `app.js` with just one health-check endpoint for database connectivity.
```javascript
const express = require('express');
const { Pool } = require('pg');

const app = express();
const port = 3000;

const pool = new Pool({
  host: 'pg-cluster-rw',
  database: 'appdb',
  user: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  port: 5432,
});

// Health check endpoint
app.get('/health', async (req, res) => {
  try {
    const result = await pool.query('SELECT NOW()');
    res.json({ status: 'healthy', timestamp: result.rows[0].now });
  } catch (err) {
    res.status(500).json({ status: 'error', error: err.message });
  }
});

app.listen(port, () => {
  console.log(`Server running on http://localhost:${port}`);
});
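One thing worth keeping in mind on Kubernetes: right after a rollout the database pods may not be ready yet, so the first few queries can fail transiently. A tiny retry wrapper smooths that over. This is a hypothetical helper, not part of the app above or of `pg`:

```javascript
// Hypothetical helper: retry a flaky async operation a few times,
// pausing between attempts, and rethrow the last error on failure.
async function withRetry(fn, attempts = 3, delayMs = 100) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // wait a bit before the next attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr;
}
```

It would wrap the health-check query as `await withRetry(() => pool.query('SELECT NOW()'))`.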
Since we will need a container image, here’s a simple Dockerfile:
```dockerfile
FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```
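The deployment below references the image as `node-pg-app:latest` with `imagePullPolicy: Never`, so the image has to exist on the node itself. With minikube, one way to get it there (assuming you build against minikube's Docker daemon) is:

```shell
# Point docker at minikube's daemon so the kubelet can find the image
eval $(minikube docker-env)
docker build -t node-pg-app:latest .
```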
## Kubernetes Deployment for the Node App
Deploy your application with this `deployment.yaml`:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  type: NodePort
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 30000
  selector:
    app: node-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: node-pg-app:latest
          imagePullPolicy: Never # Important for local images
          ports:
            - containerPort: 3000
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: pg-credentials
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pg-credentials
                  key: password
```
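Apply it the same way as the cluster manifest:

```shell
kubectl apply -f deployment.yaml
```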
## Voilà

Your app should now be up and accessible. You can check by curling the `/health` endpoint, or, if you are using minikube, by running `minikube service node-app`.
## Next Steps
Consider implementing:
- Regular backup schedules
- Monitoring with Prometheus
- Connection pooling
- Read replicas for scaling
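As a small taste of the first item, CNPG ships a `ScheduledBackup` resource. A minimal sketch might look like the following; note it assumes the cluster's `spec.backup` section has already been configured with an object store, which we have not done in this post:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: pg-cluster-daily
spec:
  # six-field cron expression (seconds first): every day at 02:00
  schedule: "0 0 2 * * *"
  cluster:
    name: pg-cluster
```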