Deploy Plausible Analytics on Thalassa Cloud Kubernetes

Plausible Analytics is a privacy-friendly, open-source web analytics platform that provides website traffic insights without using cookies or collecting personal data. By deploying Plausible on Thalassa Cloud Kubernetes using Cloud Native PostgreSQL and ClickHouse, you can run a self-hosted analytics solution that respects user privacy while providing detailed website statistics.

This guide walks you through deploying Plausible with its dual-database architecture: PostgreSQL for application data and ClickHouse for event storage.

Plausible Architecture

Plausible uses a dual-database architecture optimized for analytics workloads:

  • PostgreSQL: Stores application data including sites, users, and configuration. Cloud Native PostgreSQL provides high availability and automated backups for this critical data.
  • ClickHouse: Stores event data (page views, custom events) in an optimized format for fast analytics queries. ClickHouse’s columnar storage and compression make it ideal for time-series analytics data.

Prerequisites

Before deploying Plausible, ensure you have the following in place:

  1. Kubernetes Cluster: A running Kubernetes cluster in Thalassa Cloud. If you’re new to Thalassa Cloud Kubernetes, see the Getting Started guide for cluster creation and basic setup.

  2. Cluster Access: Configure cluster access using kubectl. Use tcloud kubernetes connect to configure access, or set up kubeconfig manually. You’ll need cluster administrator permissions to create namespaces and deploy resources.

  3. Cloud Native PostgreSQL: Cloud Native PostgreSQL must be installed in your cluster. If you haven’t installed it yet, follow the Cloud Native PostgreSQL guide to set it up. This guide assumes you have Cloud Native PostgreSQL installed and ready to use.

  4. Cert Manager: For TLS certificates, you’ll need Cert Manager installed with Let’s Encrypt configured. See the Cert Manager and Let’s Encrypt guide for installation and configuration instructions.

  5. Ingress Controller: An ingress controller (such as Ingress NGINX or Traefik) must be installed in your cluster to expose Plausible externally.

  6. Resources: Ensure your cluster has sufficient resources. Plausible requires CPU, memory, and storage for the application and both databases. Plan for at least one node with adequate resources.

Setting Up the Namespace

Create a namespace for Plausible resources:

kubectl create namespace plausible

Alternatively, create a namespace with pod security labels:

apiVersion: v1
kind: Namespace
metadata:
  name: plausible
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted

Save the manifest to namespace.yaml and apply it:

kubectl apply -f namespace.yaml

Setting Up PostgreSQL Database

Plausible requires PostgreSQL for storing application data. We’ll use Cloud Native PostgreSQL to create a reliable, managed PostgreSQL cluster.

Step 1: Create PostgreSQL Cluster

Create a PostgreSQL cluster for Plausible. This example creates a single-instance cluster, which keeps resource usage low and is sufficient for most small deployments; increase the instances value for high availability in production:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: plausible
  namespace: plausible
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql:17
  primaryUpdateStrategy: unsupervised
  enablePDB: false
  postgresql:
    parameters:
      max_connections: '100'
  bootstrap:
    initdb:
      database: plausible
      owner: plausible
      dataChecksums: true
  storage:
    size: 5Gi
    storageClass: tc-block
  resources:
    requests:
      memory: "512Mi"
      cpu: "50m"
    limits:
      memory: "1Gi"
      cpu: "1000m"
  monitoring:
    enablePodMonitor: false

The bootstrap.initdb section tells CloudNativePG to automatically create a database named plausible with an owner plausible. CloudNativePG will automatically generate a Kubernetes Secret containing the database credentials during cluster initialization.

Automatic Secret Generation

CloudNativePG automatically creates a Secret named <cluster-name>-app (in this case, plausible-app) containing the database username and password. You don’t need to create this secret manually.

Save this to postgres-cluster.yaml and apply it:

kubectl apply -f postgres-cluster.yaml

Step 2: Wait for Cluster to be Ready

Wait for the cluster to be ready:

kubectl wait --for=condition=Ready cluster/plausible -n plausible --timeout=300s

CloudNativePG automatically creates the database and user during cluster initialization. The credentials are automatically generated and stored in a Secret named plausible-app, which you’ll reference when configuring the Plausible application.
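
To confirm the secret is in place, decode the generated username; it should match the owner defined in bootstrap.initdb:

kubectl get secret plausible-app -n plausible -o jsonpath='{.data.username}' | base64 -d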

Database Connection String

Cloud Native PostgreSQL provides a read-write service at <cluster-name>-rw.<namespace>.svc.cluster.local. For this deployment, the service will be available at plausible-rw.plausible.svc.cluster.local.
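
If you want to verify connectivity before deploying the application, you can run a one-off psql pod against the read-write service using the generated credentials. This check is optional and assumes your cluster can pull the public postgres:17 image:

# Read the generated password from the plausible-app secret
PGPASSWORD=$(kubectl get secret plausible-app -n plausible -o jsonpath='{.data.password}' | base64 -d)

# Connect through the read-write service and run a test query
kubectl run psql-check --rm -it --restart=Never --image=postgres:17 -n plausible \
  --env="PGPASSWORD=$PGPASSWORD" -- \
  psql -h plausible-rw.plausible.svc.cluster.local -U plausible -d plausible -c 'SELECT version();'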

Setting Up ClickHouse for Events

Plausible uses ClickHouse to store event data (page views, custom events) in an optimized format for analytics queries. We’ll deploy ClickHouse as a StatefulSet with persistent storage.

Step 1: Create ClickHouse Configuration

Create a ConfigMap with ClickHouse configuration optimized for resource-constrained environments:

apiVersion: v1
kind: ConfigMap
metadata:
  name: plausible-events-db-config
  namespace: plausible
data:
  low-resources.xml: |
    <!-- https://clickhouse.com/docs/en/operations/tips#using-less-than-16gb-of-ram -->
    <clickhouse>
        <!-- https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings#mark_cache_size -->
        <mark_cache_size>524288000</mark_cache_size>

        <profiles>
            <default>
                <!-- https://clickhouse.com/docs/en/operations/settings/settings#max_threads -->
                <max_threads>1</max_threads>
                <!-- https://clickhouse.com/docs/en/operations/settings/settings#max_block_size -->
                <max_block_size>8192</max_block_size>
                <!-- https://clickhouse.com/docs/en/operations/settings/settings#max_download_threads -->
                <max_download_threads>1</max_download_threads>
                <!-- https://clickhouse.com/docs/en/operations/settings/settings#input_format_parallel_parsing -->
                <input_format_parallel_parsing>0</input_format_parallel_parsing>
                <!-- https://clickhouse.com/docs/en/operations/settings/settings#output_format_parallel_formatting -->
                <output_format_parallel_formatting>0</output_format_parallel_formatting>
            </default>
        </profiles>
    </clickhouse>
  ipv4-only.xml: |
    <clickhouse>
        <listen_host>0.0.0.0</listen_host>
    </clickhouse>
  logs.xml: |
    <clickhouse>
        <logger>
            <level>warning</level>
            <console>true</console>
        </logger>

        <query_log replace="1">
            <database>system</database>
            <table>query_log</table>
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
            <engine>
                ENGINE = MergeTree
                PARTITION BY event_date
                ORDER BY (event_time)
                TTL event_date + interval 30 day
                SETTINGS ttl_only_drop_parts=1
            </engine>
        </query_log>

        <!-- Stops unnecessary logging -->
        <metric_log remove="remove" />
        <asynchronous_metric_log remove="remove" />
        <query_thread_log remove="remove" />
        <text_log remove="remove" />
        <trace_log remove="remove" />
        <session_log remove="remove" />
        <part_log remove="remove" />
    </clickhouse>

This configuration optimizes ClickHouse for low-resource environments and reduces unnecessary logging. Save it to clickhouse-config.yaml and apply it:

kubectl apply -f clickhouse-config.yaml

Step 2: Create ClickHouse Service

Create a Service for ClickHouse:

apiVersion: v1
kind: Service
metadata:
  name: plausible-events-db
  namespace: plausible
  labels:
    app.kubernetes.io/name: clickhouse
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: plausible
spec:
  type: ClusterIP
  ports:
    - name: db
      port: 8123
      targetPort: 8123
      protocol: TCP
  selector:
    app.kubernetes.io/name: clickhouse
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: plausible

Save this to clickhouse-service.yaml and apply it:

kubectl apply -f clickhouse-service.yaml

Step 3: Create ClickHouse StatefulSet

Create a StatefulSet for ClickHouse with persistent storage:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: plausible-events-db
  namespace: plausible
  labels:
    app.kubernetes.io/name: clickhouse
    app.kubernetes.io/component: database
    app.kubernetes.io/part-of: plausible
spec:
  replicas: 1
  serviceName: plausible-events-db
  selector:
    matchLabels:
      app.kubernetes.io/name: clickhouse
      app.kubernetes.io/component: database
      app.kubernetes.io/part-of: plausible
  template:
    metadata:
      annotations:
        backup.velero.io/backup-volumes: data
      labels:
        app.kubernetes.io/name: clickhouse
        app.kubernetes.io/component: database
        app.kubernetes.io/part-of: plausible
    spec:
      restartPolicy: Always
      # see https://github.com/ClickHouse/ClickHouse/blob/master/docker/server/Dockerfile
      securityContext:
        runAsUser: 101
        runAsGroup: 101
        fsGroup: 101
      containers:
        - name: plausible-events-db
          image: clickhouse/clickhouse-server:24.12-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 8123
          volumeMounts:
            - name: data
              mountPath: /var/lib/clickhouse
            - name: config
              mountPath: /etc/clickhouse-server/config.d/logs.xml
              subPath: logs.xml
              readOnly: true
            - name: config
              mountPath: /etc/clickhouse-server/config.d/ipv4-only.xml
              subPath: ipv4-only.xml
              readOnly: true
            - name: config
              mountPath: /etc/clickhouse-server/users.d/low-resources.xml
              subPath: low-resources.xml
              readOnly: true
          env:
            - name: CLICKHOUSE_DB
              value: plausible
            - name: CLICKHOUSE_SKIP_USER_SETUP
              value: "1"
          securityContext:
            allowPrivilegeEscalation: false
          resources:
            limits:
              memory: 1Gi
              cpu: 1500m
            requests:
              memory: 80Mi
              cpu: 10m
          readinessProbe:
            httpGet:
              path: /ping
              port: 8123
            initialDelaySeconds: 20
            failureThreshold: 6
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /ping
              port: 8123
            initialDelaySeconds: 30
            failureThreshold: 3
            periodSeconds: 10
      volumes:
        - name: config
          configMap:
            name: plausible-events-db-config
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          app.kubernetes.io/name: clickhouse
          app.kubernetes.io/component: database
          app.kubernetes.io/part-of: plausible
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: tc-block
        resources:
          requests:
            storage: 5Gi

Save this to clickhouse-statefulset.yaml and apply it:

kubectl apply -f clickhouse-statefulset.yaml

Wait for ClickHouse to be ready:

kubectl wait --for=condition=ready pod/plausible-events-db-0 -n plausible --timeout=300s
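
You can optionally confirm that ClickHouse answers queries and that the plausible database was created from the CLICKHOUSE_DB environment variable:

# The output should include the "plausible" database
kubectl exec plausible-events-db-0 -n plausible -- clickhouse-client --query "SHOW DATABASES"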

Deploying Plausible Application

Now we’ll deploy the Plausible application itself, connecting it to both PostgreSQL and ClickHouse.

Step 1: Add Secret Key Base to Application Secret

CloudNativePG automatically created a Secret named plausible-app containing the database credentials (username and password). We need to add the SECRET_KEY_BASE to this secret for Plausible to use.

Secret Key Base

The SECRET_KEY_BASE must be at least 64 bytes long. Generate a secure random string for production use; the openssl command below produces a 64-character value that meets this requirement.

Generate a secret key base and add it to the existing secret:

# Generate a secure 64-character secret key base
SECRET_KEY=$(openssl rand -hex 32)

# Add it to the existing plausible-app secret
kubectl patch secret plausible-app -n plausible --type='json' \
  -p="[{\"op\": \"add\", \"path\": \"/data/secret-key-base\", \"value\": \"$(echo -n $SECRET_KEY | base64)\"}]"

Verify the secret contains both keys:

kubectl get secret plausible-app -n plausible -o jsonpath='{.data}' | jq 'keys'

You should see secret-key-base alongside the keys CloudNativePG generated, including username and password.

Step 2: Create Plausible Service

Create a Service for Plausible:

apiVersion: v1
kind: Service
metadata:
  name: plausible
  namespace: plausible
  labels:
    app.kubernetes.io/name: plausible
    app.kubernetes.io/component: server
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 8000
      targetPort: 8000
      protocol: TCP
  selector:
    app.kubernetes.io/name: plausible
    app.kubernetes.io/component: server

Save this to plausible-service.yaml and apply it:

kubectl apply -f plausible-service.yaml

Step 3: Create Plausible Deployment

Create the Plausible Deployment. This deployment connects to both PostgreSQL and ClickHouse:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: plausible
  namespace: plausible
  labels:
    app.kubernetes.io/name: plausible
    app.kubernetes.io/component: server
spec:
  # Plausible is not designed to run clustered; do not increase the replica count of this deployment.
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: plausible
      app.kubernetes.io/component: server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: plausible
        app.kubernetes.io/component: server
    spec:
      restartPolicy: Always
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: plausible
          image: ghcr.io/plausible/community-edition:v3.1.0
          command:
            - "/bin/sh"
            - "-c"
          args:
            - /entrypoint.sh db createdb && /entrypoint.sh db migrate && /entrypoint.sh run
          imagePullPolicy: Always
          ports:
            - name: http
              protocol: TCP
              containerPort: 8000
          env:
            - name: POSTGRES_USER
              value: plausible
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: plausible-app
                  key: password
            - name: DATABASE_URL
              value: postgres://$(POSTGRES_USER):$(POSTGRES_PASSWORD)@plausible-rw:5432/plausible
            - name: CLICKHOUSE_DATABASE_URL
              value: http://plausible-events-db:8123/plausible
            - name: SMTP_HOST_ADDR
              value: <smtp-relay-addr>
            - name: MAILER_EMAIL
              value: noreply@example.com
            - name: ADMIN_USER_EMAIL
              value: support@example.com
            - name: ADMIN_USER_NAME
              value: admin
            - name: BASE_URL
              value: https://analytics.example.com
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: plausible-app
                  key: secret-key-base
          securityContext:
            allowPrivilegeEscalation: false
          resources:
            limits:
              memory: 2Gi
              cpu: 1500m
            requests:
              memory: 140Mi
              cpu: 10m
          readinessProbe:
            httpGet:
              path: /api/health
              port: 8000
            initialDelaySeconds: 35
            failureThreshold: 6
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/health
              port: 8000
            initialDelaySeconds: 45
            failureThreshold: 3
            periodSeconds: 10

Environment Variables

Update the following environment variables to match your setup (see the example after this list for changing them on a running deployment):

  • SMTP_HOST_ADDR: Your SMTP server address
  • MAILER_EMAIL: Email address for sending emails
  • ADMIN_USER_EMAIL: Email address for the admin user
  • ADMIN_USER_NAME: Name for the admin user
  • BASE_URL: Your Plausible instance URL
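
If you prefer to adjust these values on a running deployment rather than editing the manifest, kubectl set env updates them in place and triggers a rolling restart. The values below are the same placeholders used in the manifest; substitute your own:

kubectl set env deployment/plausible -n plausible \
  BASE_URL=https://analytics.example.com \
  MAILER_EMAIL=noreply@example.com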

Save this to plausible-deployment.yaml and apply it:

kubectl apply -f plausible-deployment.yaml

Wait for Plausible to be ready:

kubectl wait --for=condition=available deployment/plausible -n plausible --timeout=600s
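
Before configuring ingress, you can smoke-test the application through a port-forward. The /api/health endpoint is the same one used by the readiness and liveness probes:

# In one terminal: forward the service port
kubectl port-forward svc/plausible 8000:8000 -n plausible

# In another terminal: check the health endpoint
curl -s http://localhost:8000/api/health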

Configuring Ingress and TLS

Expose Plausible externally using an Ingress resource with TLS certificates managed by Cert Manager.

Creating Ingress Resource

Create an Ingress resource to expose Plausible:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: plausible
  namespace: plausible
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  rules:
  - host: analytics.example.com
    http:
      paths:
      - backend:
          service:
            name: plausible
            port:
              number: 8000
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - analytics.example.com
    secretName: plausible-tls

Domain Configuration

Replace analytics.example.com with your actual domain name. Ensure the domain DNS record points to your ingress controller’s load balancer IP.

Save this to plausible-ingress.yaml and apply it:

kubectl apply -f plausible-ingress.yaml
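
Once applied, the ADDRESS column shows the load balancer endpoint your DNS record should point to (it can take a minute to populate):

kubectl get ingress plausible -n plausible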

Cert Manager will automatically create and manage the TLS certificate. Verify the certificate is issued:

kubectl get certificate -n plausible

Once the certificate is ready, you can access Plausible at your configured domain.
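
After DNS resolves and the certificate is issued, a quick end-to-end check confirms the site is served over TLS (replace the hostname with your domain):

curl -sI https://analytics.example.com | head -n 5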

Initial Setup

Once Plausible is accessible, complete the initial setup:

  1. Navigate to your Plausible URL (e.g., https://analytics.example.com) in a web browser
  2. You’ll be prompted to create an admin account. Use the email address you configured in ADMIN_USER_EMAIL
  3. After creating the account, you can start adding websites to track

Admin Account

The admin account is created automatically using the ADMIN_USER_EMAIL and ADMIN_USER_NAME environment variables. You’ll still need to set a password through the web interface on first login.

Configuration Options

Plausible supports many configuration options through environment variables. The following sections cover commonly used configuration options. For a complete reference, see the official Plausible Community Edition configuration wiki.

Required Configuration

The following environment variables are required for Plausible to function:

  • BASE_URL: The base URL for your Plausible instance (e.g., https://analytics.example.com)
  • SECRET_KEY_BASE: A secret key at least 64 bytes long, used for sessions and generating other secrets

Both are already configured in the deployment example above.

Email Configuration

Plausible sends transactional emails (account activation, password resets) and non-transactional emails (weekly/monthly reports). Configure SMTP settings:

env:
  # SMTP server configuration
  - name: SMTP_HOST_ADDR
    value: your-smtp-server.example.com
  - name: SMTP_HOST_PORT
    value: "587"  # Default: 587
  - name: SMTP_USER_NAME
    value: your-smtp-username
  - name: SMTP_USER_PWD
    valueFrom:
      secretKeyRef:
        name: plausible-smtp
        key: password
  - name: SMTP_HOST_SSL_ENABLED
    value: "false"  # Set to true for port 465
  
  # Sender configuration
  - name: MAILER_EMAIL
    value: noreply@example.com
  - name: MAILER_NAME
    value: "Plausible Analytics"  # Optional display name

Web Server Configuration

Configure HTTP/HTTPS ports:

env:
  - name: HTTP_PORT
    value: "8000"  # Default: 8000
  - name: HTTPS_PORT
    value: "8443"  # Optional: enables HTTPS server

Let’s Encrypt Integration

If you set HTTP_PORT=80 and HTTPS_PORT=443, Plausible will attempt to issue and maintain TLS certificates from Let’s Encrypt. However, when using an ingress controller (as in this guide), you should handle TLS at the ingress level instead.

Database Configuration

The database URLs are already configured in the deployment. The format follows Ecto.Repo URL parameters:

  • DATABASE_URL: PostgreSQL connection string (format: postgres://user:password@host:port/database)
  • CLICKHOUSE_DATABASE_URL: ClickHouse connection string (format: http://host:port/database)
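
To see the values the running pod actually resolved, print them from inside the container. Note that DATABASE_URL contains the password in plain text, so treat the output as sensitive:

kubectl exec deployment/plausible -n plausible -- \
  sh -c 'echo "$DATABASE_URL"; echo "$CLICKHOUSE_DATABASE_URL"'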

Complete Configuration Reference

For a complete list of all configuration options, default values, and detailed explanations, see the official Plausible Community Edition configuration wiki.

Monitoring and Maintenance

Monitor your Plausible deployment to ensure it’s running smoothly:

# Check pod status
kubectl get pods -n plausible

# View Plausible logs
kubectl logs -f deployment/plausible -n plausible

# Check ClickHouse logs
kubectl logs -f statefulset/plausible-events-db -n plausible

# Monitor resource usage
kubectl top pods -n plausible

For production-grade monitoring, consider setting up Prometheus Operator. See the Prometheus Operator guide for details.

Upgrading Plausible

To upgrade Plausible to a newer version, update the image in the Deployment:

spec:
  template:
    spec:
      containers:
      - name: plausible
        image: ghcr.io/plausible/community-edition:v3.2.0  # Updated version

Apply the updated deployment:

kubectl apply -f plausible-deployment.yaml
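
Watch the rollout and the startup logs; the new pod runs the database migrations before starting the server:

kubectl rollout status deployment/plausible -n plausible
kubectl logs -f deployment/plausible -n plausible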

Backup Before Upgrades

Always create backups of both the PostgreSQL database and ClickHouse data before upgrading Plausible. This ensures you can roll back if the upgrade causes issues.
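
As a minimal sketch, you can take a logical dump of the PostgreSQL application database with pg_dump. This assumes the CloudNativePG primary pod is named plausible-1; check the actual pod name with kubectl get pods -n plausible:

# Dump the application database to a local file (pod name is an assumption)
kubectl exec plausible-1 -n plausible -- pg_dump -U postgres plausible > plausible-backup.sql

For the ClickHouse data volume, the backup.velero.io/backup-volumes annotation already present on the StatefulSet lets Velero, if you run it in your cluster, include the volume in its backups.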

Troubleshooting

Plausible Pod Not Starting

Check the pod logs for errors:

kubectl logs deployment/plausible -n plausible

Common issues:

  • Database connection failures: Verify the PostgreSQL cluster is ready and the credentials are correct (see the commands after this list)
  • ClickHouse connection failures: Verify ClickHouse pod is running and accessible
  • Secret key base too short: Ensure SECRET_KEY_BASE is at least 64 characters
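
To rule out the PostgreSQL side, check the cluster resource directly; the cnpg kubectl plugin, if installed, gives a more detailed status view:

kubectl get cluster plausible -n plausible

# With the cnpg plugin installed (optional):
kubectl cnpg status plausible -n plausible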

ClickHouse Connection Issues

Verify ClickHouse is accessible:

kubectl exec -it plausible-events-db-0 -n plausible -- wget -qO- http://localhost:8123/ping

Check ClickHouse logs:

kubectl logs statefulset/plausible-events-db -n plausible

Database Migration Issues

If database migrations fail, you can run them manually:

kubectl exec -it deployment/plausible -n plausible -- /entrypoint.sh db migrate