Simplifying Kubernetes Service Access with OpenVPN - A Complete Production Guide


Howdy, folks!

I have a question for you: how do you expose your Kubernetes services (or sometimes pods) outside your cluster?

Just to clarify, I am not talking about making these objects public/internet-facing; otherwise, we would end up making everything in our cluster public, which is not a good choice at all.

One option could be kubectl port-forward (see the documentation here). It works well, but I can see various cons to it, such as:

  1. You need to run a command to expose the resource and then keep the terminal session open.

  2. You need to assign a local port for the port-forwarding, which in some cases can conflict with other networking tools or apps.

  3. Running kubectl port-forward means you need access and authorization to the Kube API server. But suppose you don't want to give your devs kubectl access; then how would they test their apps privately or check monitoring/logging/APM dashboards?

  4. If your machine is not configured properly, it can potentially expose Kubernetes resources to the internet when the connection is not secured. This can result in unauthorized access or data breaches.

  5. The port-forward command is not designed for high traffic or load. It is primarily intended for debugging or testing purposes, and may not be able to handle large amounts of traffic or requests.

  6. The port-forward command creates a direct connection between your local machine and the Kubernetes resource, which can tie up resources and affect performance, especially if multiple users are using the same connection.
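For reference, the workflow being criticized looks like this (the service name, namespace, and ports below are illustrative, not from this cluster):

```shell
# Forward local port 8080 to port 80 of a service; the terminal
# session must stay open for the tunnel to keep working
kubectl port-forward svc/my-service 8080:80 -n my-namespace
```

Every developer needs their own terminal session, their own free local port, and their own kubectl credentials, which is exactly the pain described above.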

Hmm... now it seems pretty bad. So what do we have here that can help us mitigate these pain points?

OpenVPN to the rescue!


Yes, you heard right: this blog is all about using OpenVPN in our environments to deal with the problems listed above. But let's start from the basics.

What is VPN?

VPN stands for Virtual Private Network. It is a technology that allows users to connect to a private network over the internet securely. VPNs create a secure and encrypted connection between the user's device and the private network, which can be located anywhere in the world.

Remote Access VPN:

A Remote Access VPN (Virtual Private Network) is a type of VPN that allows users to securely access a private network from a remote location over the internet. It is designed to provide secure connectivity for remote workers who need access to company resources from outside the office network, such as files, applications, or databases.

With a Remote Access VPN, users can connect to the company network as if they were physically present in the office, by creating a secure and encrypted connection between their device and the company's VPN gateway. This connection can be established using a variety of VPN protocols, such as OpenVPN, PPTP, L2TP/IPSec, and SSTP.

Remote Access VPNs are typically used by companies to provide secure access to corporate resources for employees who work remotely or travel frequently. They can also be used by individual users to access their home network or to connect to a public Wi-Fi hotspot securely.

I hope these definitions cover the basics and help you understand the next part of the blog.

What is OpenVPN?

OpenVPN is an open-source VPN protocol that provides secure and private connections over the Internet. It is one of the most widely used VPN protocols due to its flexibility, security, and ease of use.

What will we achieve by installing OpenVPN in a Kubernetes Cluster?

Once we are done with the setup, we can access Kubernetes endpoints directly, for example:

  1. Pod IP Addresses

  2. K8s Service FQDN

  3. K8s Service IP

Bonus! - You can also access your VPC resources by using private IPs provided by your cloud.

That way, only authorized personnel can access your private services w/o making any of those public.

Let's Start 🏃🏻‍♂️


  1. A Kubernetes cluster, preferably on a cloud like AWS, GCP, Azure, etc. If you are on an on-prem bare-metal cluster or a local one, you can use MetalLB. (check this example for minikube)


Step-by-Step Guide

FYI - We are running a self-hosted K8s cluster using kOps on AWS.

OpenVPN in a Container

To understand how OpenVPN works in a container (or in general), let's observe some popular Dockerfiles available on GitHub:




Basic flow of OpenVPN Dockerfile:

  1. Pick a base image (alpine recommended)

  2. Install OpenVPN with all the other required binaries - OpenSSL, iptables, easy-rsa (for automated cert generation and the CA setup process), google-authenticator (for OTP support)

  3. Add some scripts containing the code for setting up the OpenVPN server and generating certs.

  4. Define /etc/openvpn as a VOLUME so it can be made persistent later on.

  5. EXPOSE some ports, e.g. 1194/UDP for OpenVPN

  6. Add CMD instructions to run those scripts so that whenever our container starts, it sets everything up to run an OpenVPN server.
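A minimal sketch of such a Dockerfile might look like the one below. Treat every detail here as an assumption: the package names, the setup/ directory, and the configure.sh entrypoint are illustrative, not the exact image used in this post.

```dockerfile
FROM alpine:3.17

# OpenVPN plus the helper binaries mentioned above
# (package names are assumptions; check your distro's repos)
RUN apk add --no-cache openvpn openssl iptables easy-rsa bash

# Scripts that set up the CA, certs, and server config (hypothetical path)
COPY setup/ /etc/openvpn/setup/

# Keep certs and config across container restarts
VOLUME ["/etc/openvpn"]

EXPOSE 1194/udp

# Hypothetical entrypoint script that bootstraps the server on start
CMD ["/etc/openvpn/setup/configure.sh"]
```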

As things are running in containers with no manual intervention, we need to automate the entire setup process. If you are doing it manually, you can follow this official guide. Here, we use a binary called easyrsa, a CLI utility to build and manage a PKI CA.

Commonly Used Commands (and terms):

  1. easyrsa init-pki : Initialises your PKI.

  2. easyrsa build-ca : Creates the ca.crt and ca.key. The nopass arg can be used to skip password-locking the CA.

  3. easyrsa gen-dh : Generates the DH parameters used during the TLS handshake with connecting clients. DH (Diffie-Hellman) parameters are produced by selecting a large prime number and a generator value and running a series of mathematical calculations on them. Caution! - It can take a while.

  4. openvpn --genkey --secret SECRET_NAME.key generates a secret key on the server.

  5. easyrsa build-server-full : Builds a server certificate and key. You also need to pass the server's common name to be used in the certs.

  6. easyrsa gen-crl : Generates the CRL(Certificate Revocation List - a list of digital certificates that have been revoked) for client/server certificates revocation.

  7. /dev/net/tun : /dev/net/tun in Linux is a virtual device file that represents a network tunneling device called TUN (network TUNnel). TUN is a software interface that allows users to create virtual network devices that tunnel network traffic over a network connection.

  8. mknod : creates a device file in the file system. More info on mknod is here.
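Putting these commands together, the automated bootstrap runs a sequence roughly like the following. This is a sketch only; the exact flags, common names, and file paths live in the chart's scripts:

```shell
easyrsa init-pki                          # fresh PKI directory
easyrsa --batch build-ca nopass           # CA cert + key, no password prompt
easyrsa gen-dh                            # DH params (this is the slow step)
openvpn --genkey --secret pki/ta.key      # shared TLS-auth secret key
easyrsa build-server-full server nopass   # server cert/key with CN=server
easyrsa gen-crl                           # empty CRL for future revocations
```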

How is all this bundled in the Helm chart?

As we are ultimately going to deploy all this on Kubernetes, we need to understand how the automation is set up.

Repo Link:

While deploying OpenVPN on Kubernetes, all the scripts are configured in the config-openvpn.yaml ConfigMap and mounted into the running pod. Those scripts are:

  • : This will set up CA and other components(dh params, ta.key, crl) for OpenVPN using the easy-rsa utility.

  • : Used for generating config files for OpenVPN clients.

  • : To revoke any existing certificates.

  • : This is the entrypoint script located at /etc/openvpn/setup/

Values.yaml file

Below are some values you should consider changing as per your cluster setup. It is also good to know what they do.

  • OVPN_K8S_POD_NETWORK: Kubernetes pod network IP

  • OVPN_K8S_POD_SUBNET : CIDR Range for Pod network

  • OVPN_K8S_SVC_NETWORK : Kubernetes Service network IP

  • OVPN_K8S_SVC_SUBNET : CIDR Range for K8s Service network
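For illustration, on a default kOps cluster these values might look like the following. The CIDRs below are assumptions based on kOps defaults; always check your own cluster's pod and service ranges:

```yaml
# Illustrative values only -- match these to your cluster's actual networks
OVPN_K8S_POD_NETWORK: "100.96.0.0"    # kOps default pod network
OVPN_K8S_POD_SUBNET: "255.224.0.0"    # netmask for /11
OVPN_K8S_SVC_NETWORK: "100.64.0.0"    # kOps default service network
OVPN_K8S_SVC_SUBNET: "255.248.0.0"    # netmask for /13
```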

serverConf: To append additional config to the server configuration file.

  • example :

        serverConf: |
          push "route"
          push "route"
    • here, the route is the IP of our VPC network followed by its subnet mask. We want all the traffic for the VPC's subnet IPs to go via the OpenVPN server (this will give us the power to access AWS resources via private IPs 😎).

    • here is for node-local-dns (an addon that we are using).

    • duplicate-cn is for allowing the same client to connect more than once.

Deploy Helm Chart

I am assuming you have made changes in the values.yaml file as per your requirements.

# clone the repo 
git clone

cd k8s-openvpn/deploy/openvpn/

helm upgrade --install openvpn ./ -n k8s-openvpn  -f values.yaml --create-namespace

It will create the deployment and a LoadBalancer type service. You can tail logs to see what's happening.

โš ๏ธ Generation of DH params can take 5-10 mins.

When the pod is successfully set up and the LB for your OpenVPN service is up, you can move to the next part.
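To follow along, commands like these can help (the resource names assume the release and namespace used in the helm command above):

```shell
# Tail the pod logs to watch the PKI bootstrap happen
kubectl -n k8s-openvpn logs -f deploy/openvpn

# Check that the LoadBalancer service got an external address
kubectl -n k8s-openvpn get svc
```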

Create our first OpenVPN user

cd to the manage dir and run the below command:

cd k8s-openvpn/manage/
bash opevpn-user-1

It will give you a file with your config and certs embedded.

See VPN in action!

Now you have a client file; use any OpenVPN-supported client (e.g. Tunnelblick), connect using it, and see if you can connect or not.

"Tunnelblick mode - Do not set nameserver" - START

Note! - select Do Not Set Nameserver in the Tunnelblick config of your VPN profile.

Disclaimer - You do not need to do the above step, but to show you one more alternative and how things work internally, for the sake of this tutorial let's move on as stated.

Quickly add an NGINX pod and service to our cluster to test our VPN:

kubectl run nginx --image nginx:alpine
kubectl expose po nginx --port 80

Keep your fingers crossed 🤞🏻 : open Chrome and enter the ClusterIP of your NGINX service.

Woohoo! It worked! 🥳 🥳 🥳

Testing the K8s-specific DNS

According to the official guide, let's form our DNS name, which is an A record that should map to the IP of the service:

nginx.default.svc.cluster.local

  • nginx -> name of service

  • default -> name of the namespace

  • svc -> represents k8s svc

  • cluster.local -> your clusterDomain

So, Let's try:

Wait, what? Why did it fail? 😭


There is no mechanism on your PC for resolving those Kubernetes FQDNs. Inside k8s networking (in a pod) they would resolve fine, but as you are on an external network on your PC, you need something in between to do the magic for you!

Kube-DNS to the rescue!

Logic 🤔 : Currently we are able to connect to the K8s network by using IPs, with the help of our VPN server. There is a service already running in our cluster called kube-dns; if we ask this service to resolve those FQDNs, it will resolve them.

So we will use another bundled script called , and make sure you have everything defined there correctly.

Run that script and it will create a file named svc.cluster.local in the /etc/resolver dir. This file will have the IP of kube-dns as a nameserver.

So all our cluster DNS queries as above will use kube-dns as the nameserver.
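In essence, the script generates something like the file below. This is a simplified sketch: the kube-dns IP shown is illustrative, and the bundled script discovers the real ClusterIP and writes the file to /etc/resolver for you.

```shell
# Illustrative kube-dns ClusterIP -- look up your own with:
#   kubectl -n kube-system get svc kube-dns
KUBE_DNS_IP=100.64.0.10

# Build the macOS resolver entry; the real script places it at
# /etc/resolver/svc.cluster.local (which needs sudo)
printf 'domain svc.cluster.local\nnameserver %s\n' "$KUBE_DNS_IP" > /tmp/svc.cluster.local
cat /tmp/svc.cluster.local
```

macOS consults per-domain files under /etc/resolver, so only *.svc.cluster.local queries are sent to kube-dns; everything else keeps using your normal nameserver.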

cd k8s-openvpn/manage/

Let's Try Again

🎉 🥳 🎉 🥳 🎉 🥳

But how? Let's observe the newly created file /etc/resolver/svc.cluster.local:

cat /etc/resolver/svc.cluster.local

# output
domain svc.cluster.local
nameserver # IP of kube-dns svc

Here is a rough diagram explaining the flow:

"Tunnelblick mode - Do not set nameserver" - END

Remember the disclaimer about selecting Do Not Set Nameserver. Now that you have the understanding, you can select Set Nameserver in Tunnelblick instead, and w/o the need of running the script you can see k8s FQDNs working fine in your browser.

Revoking a user

Suppose one person left your team and now you want to revoke that person's certificate.

Note! - here CERT_NAME is the common_name you used when generating the certificate.

cd k8s-openvpn/manage/

A note on k8s pod's /etc/resolv.conf

In Kubernetes, the /etc/resolv.conf file in a pod is populated with the DNS configuration specified in the cluster-wide DNS policy. This configuration is typically set up by the cluster administrator and can vary depending on the cluster's networking configuration.

The cluster-wide DNS policy is specified in the kubelet configuration file on each node in the cluster. This policy specifies the DNS server IP addresses and search domains that should be used for name resolution in the cluster.

When a pod is created, the kubelet on the node where the pod is scheduled generates a DNS configuration file on the host (based on the DNS settings in the kubelet configuration) and mounts it into the pod at /etc/resolv.conf.

$ kubectl exec -it YOUR_POD_NAME -- cat /etc/resolv.conf

# output
Defaulted container "openvpn" out of: openvpn, sysctl (init)
search default.svc.cluster.local svc.cluster.local cluster.local ap-south-1.compute.internal
options ndots:5

How does OpenVPN make the clients use it for certain domains?

By default, the OpenVPN server does not set the DHCP options for the clients. In our config, we use dhcp-option DOMAIN-SEARCH to achieve this kind of behavior.

dhcp-option DOMAIN-SEARCH is a DHCP option that specifies the DNS search domain that the DHCP client should use for name resolution. When a client receives this option, it adds the specified domain name to its list of DNS search domains.
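In server-config terms, that means push lines like these (the domains shown are illustrative; the chart derives the real list from the pod's resolv.conf, as the next section explains):

```
# Pushed to every client on connect; each domain is appended to the
# client's DNS search list (illustrative entries)
push "dhcp-option DOMAIN-SEARCH svc.cluster.local"
push "dhcp-option DOMAIN-SEARCH cluster.local"
```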

A note on config generation

Suppose an OpenVPN server has started and is now bootstrapping itself, generating the OpenVPN config file. The script lives in the config-openvpn.yaml ConfigMap.

Note the below part in the script:

      SEARCH=$(cat /etc/resolv.conf | grep -v '^#' | grep search | awk '{$1=""; print $0}')
      for DOMAIN in $SEARCH; do
        # ... (loop body elided: formats each domain into FORMATTED_SEARCH) ...
      cp -f /etc/openvpn/setup/openvpn.conf /etc/openvpn/
      sed 's|OVPN_K8S_SEARCH|'"${FORMATTED_SEARCH}"'|' -i /etc/openvpn/openvpn.conf

I tried to get the value of the SEARCH variable, and it came out as:

openvpn-6779bdf87f-r85q2:/# SEARCH=$(cat /etc/resolv.conf | grep -v '^#' | grep search | awk '{$1=""; print $0}')

openvpn-6779bdf87f-r85q2:/# echo $SEARCH
# output
default.svc.cluster.local svc.cluster.local cluster.local ap-south-1.compute.internal

So now I think you can understand: from the pod's /etc/resolv.conf file, the script gets the list of domains for the DOMAIN-SEARCH DHCP option and adds them to the OpenVPN server's config file. Now, whenever clients connect, traffic for these domains goes via OpenVPN, and thus they are able to reach the k8s network.
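You can reproduce the substitution step locally. The template placeholder matches the snippet above, but the exact shape of FORMATTED_SEARCH is an assumption for illustration:

```shell
# A template line like the one in openvpn.conf (assumed placeholder)
echo 'OVPN_K8S_SEARCH' > /tmp/openvpn.conf

# Suppose the loop formatted a search domain into a push directive
FORMATTED_SEARCH='push "dhcp-option DOMAIN-SEARCH svc.cluster.local"'

# Same sed invocation as the bootstrap script
sed 's|OVPN_K8S_SEARCH|'"${FORMATTED_SEARCH}"'|' -i /tmp/openvpn.conf
cat /tmp/openvpn.conf
```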



  1. Your AWS SG rule should allow access to your desired ports if you are accessing AWS VPC resources.

    1. Tip! - We have added a rule for the NodePort service port range (30000-32768), and only the VPC subnet CIDR is allowed.
  2. Here we are using CN=server and CN=client for the server conf and client conf respectively.

  3. We are using EASYRSA_BATCH=1 in our scripts so it doesn't ask/wait for user inputs in prompts.

  4. We are using remote-cert-tls to mitigate MITM attacks.
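The corresponding client-side directive (standard OpenVPN syntax) looks like this:

```
# Reject the connection unless the server's certificate carries the
# "server" key usage, which blocks a rogue client cert posing as the
# server (MITM)
remote-cert-tls server
```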

Bonus! - OpenVPN Monitor

We came across an amazing project called OpenVPN Monitor. It gives you a GUI showing the list of all connected clients, with some more info.

For that, you need to add the below line to the OpenVPN config:

management 5555

This will expose port 5555 on the OpenVPN pod to listen for incoming management client requests.

Our helm chart is already configured for this, make sure you supply appropriate values.


# clone the repo
git clone
cd openvpn-monitor

# copy the config and make sure you replace the host1 in VPN1
cp openvpn-monitor.conf.example openvpn-monitor.conf

# create a virtual env for python
python3.8 -m venv openvpn

# activate it
source ./openvpn/bin/activate

# install the dependency
pip install -r requirements.txt

# install gunicorn
pip install gunicorn

# run the server
gunicorn openvpn-monitor -b

Now you can open your browser and have a look at the cool console with data provided by the OpenVPN server.

Possibilities are limitless - you can also bundle this tool as part of the helm chart and run it in the same cluster.


🥳 Shout out to Harshit @ Kutumb for setting this up in the very first place!