Postgres connection pool on Kubernetes in 1 minute

JM Robles
2 min read · Apr 25, 2021

TL;DR: How to deploy pgbouncer on Kubernetes to support many more database connections

PGBouncer on Kubernetes in just one minute!


If you develop apps using frameworks such as Django or RoR, you have probably faced the following problem/error:

FATAL: sorry, too many clients already

As you know, these kinds of frameworks use a database connection pool to reduce database connection latency.

That's great as long as your database is optimized to handle a lot of connections.

As you can infer, that's not the case with Postgres.

Each Postgres connection uses about 10 MB, although most of the time connections just sit idle.

With the microservices boom and gRPC streams, things get worse: we end up with a lot of idle connections, and Postgres spends a lot of resources for nothing, just to keep them in an idle state.
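To see this for yourself, you can group connections by state in `pg_stat_activity` (a quick diagnostic sketch; run it with `psql` against your own database):

```sql
-- Count connections by state; the "idle" rows are paying
-- the per-connection memory cost while doing no work.
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;
```

If the `idle` count dominates, a connection pooler in front of Postgres will pay off quickly.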

PgBouncer to the rescue

There are several options to solve the multiple-connections problem, but all of them use the same pattern: a proxy in the middle.

The idea is that you connect your consumer software to a proxy which allows a lot of “cheap” connections, and this proxy connects to the Postgres database only when your app really needs to perform a database action.

One of these solutions is PgBouncer.

It's the oldest solution and is widely adopted.

Pgbouncer in your K8S cluster

Putting pgbouncer in your cluster is easy as pie.

We use the following Docker image/project: edoburu/pgbouncer

To use it, we just need to define config and secret K8S resources, providing the following Postgres database connection settings:

  • DB_HOST: the K8S service name of your Postgres pod/deployment.
  • DB_USER: the Postgres database user.
  • DB_PASSWORD: the Postgres database user's password (you should store it as a Secret).
  • POOL_MODE: a pgbouncer-specific parameter. We use “transaction” mode because we only want to hold a Postgres connection while a transaction actually runs.
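Putting the pieces together, the manifests could look like the sketch below. All names (`pgbouncer-secret`, the `postgres` Service, the `myuser` account) are placeholders; adjust them to your cluster, and note that the edoburu/pgbouncer image also accepts other variables (e.g. a database name) that you may need to set:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: pgbouncer-secret
type: Opaque
stringData:
  DB_PASSWORD: "changeme"   # placeholder; use your real password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgbouncer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgbouncer
  template:
    metadata:
      labels:
        app: pgbouncer
    spec:
      containers:
        - name: pgbouncer
          image: edoburu/pgbouncer
          ports:
            - containerPort: 5432
          env:
            - name: DB_HOST
              value: postgres        # K8S service name of your Postgres pod/deployment
            - name: DB_USER
              value: myuser          # your Postgres user
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgbouncer-secret
                  key: DB_PASSWORD
            - name: POOL_MODE
              value: transaction
---
apiVersion: v1
kind: Service
metadata:
  name: pgbouncer
spec:
  selector:
    app: pgbouncer
  ports:
    - port: 5432
      targetPort: 5432
```

After applying this, point your apps at `pgbouncer:5432` instead of the Postgres service: they can open many cheap connections to the proxy while Postgres only sees the small pool pgbouncer maintains.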



JM Robles

Backend / Platform engineer #go #python #kubernetes #terraform #devops