Valid SSL/TLS certificates are a core requirement of the modern application landscape. Unfortunately, managing certificate (or cert) renewals is often an afterthought when deploying an application. Certificates have a limited lifetime, ranging from roughly 13 months for certificates from DigiCert to 90 days for Let’s Encrypt certificates. To maintain secure access, these certificates need to be renewed/reissued prior to their expiration. Given the substantial workload of most Ops teams, cert renewal sometimes falls through the cracks, resulting in a scramble as certificates near – or worse, pass – their expiration date.
It doesn’t need to be like this. With some planning and preparation, cert management can be automated and streamlined. Here, we will look at a solution for Kubernetes using three technologies:
- Let’s Encrypt
- cert-manager
- NGINX Ingress Controller
In this blog, you’ll learn to simplify cert management by providing unique, automatically renewed and updated certificates to your endpoints.
Certificates in a Kubernetes Environment
Before we get into technical details, we need to define some terminology. The term “TLS certificate” refers to two components required to enable HTTPS connections on our Ingress controller:
- The certificate
- The private key
The certificate is issued by Let’s Encrypt, while the private key is generated locally by cert-manager. For a full explanation of how TLS certificates work, please see DigiCert’s post How TLS/SSL Certificates Work.
In Kubernetes, these two components are stored as Secrets. Kubernetes workloads – such as the NGINX Ingress Controller and cert-manager – can write and read these Secrets, which can also be managed by users who have access to the Kubernetes installation.
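As a sketch, a TLS Secret of this kind looks like the following; the name and the base64 payloads here are placeholders, not values from this post:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-tls        # hypothetical name
  namespace: default
type: kubernetes.io/tls    # dedicated Secret type for certificate/key pairs
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

The `tls.crt` and `tls.key` keys are required by the `kubernetes.io/tls` Secret type; this is the shape that both cert-manager writes and the Ingress controller reads.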
Introducing cert-manager
The cert-manager project is a certificate controller that works with Kubernetes and OpenShift. When deployed in Kubernetes, cert-manager will automatically issue certificates required by Ingress controllers and will ensure they are valid and up-to-date. Additionally, it will track expiration dates for certificates and attempt renewal at a configured time interval. Although it works with numerous public and private issuers, we will be showing its integration with Let’s Encrypt.
Two Challenge Types
When using Let’s Encrypt, all cert management is handled automatically. While this provides a great deal of convenience, it also presents a problem: How does the service ensure that you own the fully-qualified domain name (FQDN) in question?
This problem is solved using a challenge, which requires you to answer a verification request that only someone with access to the specific domain’s DNS records can provide. Challenges take one of two forms:
- HTTP-01: This challenge can be answered by having a DNS record for the FQDN for which you are issuing a certificate. For example, if your server is at IP address www.xxx.yyy.zzz and your FQDN is cert.example.com, the challenge mechanism will expose a token on the server at www.xxx.yyy.zzz, and the Let’s Encrypt servers will attempt to reach it via cert.example.com. If successful, the challenge is passed and the certificate is issued.
HTTP-01 is the simplest way to generate a certificate, as it does not require direct access to the DNS provider. This type of challenge is always conducted over Port 80 (HTTP). Note that when using HTTP-01 challenges, cert-manager will utilize the Ingress controller to serve the challenge token.
- DNS-01: This challenge creates a DNS TXT record with a token, which is then verified by the issuer. If the token is recognized, you have proved ownership of that domain and can now issue certificates for its records. Unlike the HTTP-01 challenge, when using the DNS-01 challenge the FQDN does not need to resolve to your server’s IP address (nor even exist). Additionally, DNS-01 can be used when Port 80 is blocked. The trade-off for this ease of use is that you must provide the cert-manager installation with access to your DNS infrastructure via an API token.
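To make the two flows concrete, here is roughly what each challenge looks like on the wire; the token values are hypothetical:

```
# HTTP-01: the Let’s Encrypt servers fetch a token over HTTP at a well-known path:
#   http://cert.example.com/.well-known/acme-challenge/<token>
#
# DNS-01: cert-manager publishes a TXT record that the Let’s Encrypt servers query:
#   _acme-challenge.cert.example.com.  300  IN  TXT  "<token>"
```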
Ingress Controllers
An Ingress controller is a specialized service for Kubernetes that brings traffic from outside the cluster, load balances it to internal Pods (a group of one or more containers), and manages egress traffic. Additionally, the Ingress controller is controlled through the Kubernetes API and will monitor and update the load balancing configuration as Pods are added, removed, or fail.
To learn more about Ingress controllers, read the following blogs:
- Kubernetes Networking 101
- A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options
In the examples below, we will use the NGINX Ingress Controller that is developed and maintained by F5 NGINX.
Certificate Management Examples
These examples assume that you have a working Kubernetes installation that you can test with, and that the installation can assign an external IP address (Kubernetes LoadBalancer object). Additionally, it assumes that you can receive traffic on both Port 80 and Port 443 (if using the HTTP-01 challenge) or solely Port 443 (if using the DNS-01 challenge). These examples are illustrated using Mac OS X, but can be used on Linux or WSL as well.
You will also need a DNS provider and FQDN that you can adjust the A record for. If you are using the HTTP-01 challenge, you only need the ability to add an A record (or have one added for you). If you are using the DNS-01 challenge, you will need API access to a supported DNS provider or a supported webhook provider.
Deploy NGINX Ingress Controller
The easiest way to deploy NGINX Ingress Controller is via Helm. This deployment allows you to use both the Kubernetes Ingress and the NGINX Virtual Server CRD.
- Add the NGINX repo.
- Update the repository.
- Deploy the Ingress controller.
- Check the deployment and retrieve the IP address of the egress for the Ingress controller. Note that you cannot continue without a valid IP address.
$ helm repo add nginx-stable https://helm.nginx.com/stable
"nginx-stable" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm install nginx-kic nginx-stable/nginx-ingress \
  --namespace nginx-ingress --set controller.enableCustomResources=true \
  --create-namespace --set controller.enableCertManager=true
NAME: nginx-kic
LAST DEPLOYED: Thu Sep 1 15:58:15 2022
NAMESPACE: nginx-ingress
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The NGINX Ingress Controller has been installed.

$ kubectl get deployments --namespace nginx-ingress
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
nginx-kic-nginx-ingress   1/1     1            1           23s

$ kubectl get services --namespace nginx-ingress
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
nginx-kic-nginx-ingress   LoadBalancer   10.128.60.190   www.xxx.yyy.zzz   80:31526/TCP,443:32058/TCP   30s
Add Your DNS A Record
The process here will depend on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require that you wait for the record to propagate before it will work. For more information on this, please see the SiteGround article What Is DNS Propagation and Why Does It Take So Long?
Once you can resolve your chosen FQDN you are ready to move on to the next step.
$ host cert.example.com
cert.example.com has address www.xxx.yyy.zzz
Deploy cert-manager
The next step is to deploy the most recent version of cert-manager. Again, we will be using Helm for our deployment.
- Add the Helm repository.
- Update the repository.
- Deploy cert-manager.
- Validate the deployment.
$ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx-stable" chart repository
...Successfully got an update from the "jetstack" chart repository
Update Complete. ⎈Happy Helming!⎈

$ helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.9.1 --set installCRDs=true
NAME: cert-manager
LAST DEPLOYED: Thu Sep 1 16:01:52 2022
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.9.1 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.io/docs/usage/ingress/

$ kubectl get deployments --namespace cert-manager
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           4m30s
cert-manager-cainjector   1/1     1            1           4m30s
cert-manager-webhook      1/1     1            1           4m30s
Deploy the NGINX Cafe Example
We are going to be using the NGINX Cafe example to provide our backend deployment and Services. This is a common example used within the documentation provided by NGINX. We will not be deploying Ingress as part of this.
- Clone the NGINX Ingress GitHub project.
- Change to the examples directory. This directory contains several examples that demonstrate various configurations of the Ingress controller. We are using the example provided under the complete-example directory.
- Deploy the NGINX Cafe example.
- Validate the deployment and Services using the kubectl get command. You are looking to ensure that the Pods are showing as READY and that the Services are present. The example below shows a representative sample of what you are looking for. Note that the kubernetes service is a system service running in the same namespace (default) as the NGINX Cafe example.
$ git clone https://github.com/nginxinc/kubernetes-ingress.git
Cloning into 'kubernetes-ingress'...
remote: Enumerating objects: 44979, done.
remote: Counting objects: 100% (172/172), done.
remote: Compressing objects: 100% (108/108), done.
remote: Total 44979 (delta 87), reused 120 (delta 63), pack-reused 44807
Receiving objects: 100% (44979/44979), 60.27 MiB | 27.33 MiB/s, done.
Resolving deltas: 100% (26508/26508), done.

$ cd ./kubernetes-ingress/examples/ingress-resources/complete-example

$ kubectl apply -f ./cafe.yaml
deployment.apps/coffee created
service/coffee-svc created
deployment.apps/tea created
service/tea-svc created

$ kubectl get deployments,services --namespace default
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coffee   2/2     2            2           69s
deployment.apps/tea      3/3     3            3           68s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/coffee-svc   ClusterIP   10.128.154.225   <none>        80/TCP    68s
service/kubernetes   ClusterIP   10.128.0.1       <none>        443/TCP   29m
service/tea-svc      ClusterIP   10.128.96.145    <none>        80/TCP    68s
Deploy the ClusterIssuer
Within cert-manager, the ClusterIssuer can be used to issue certificates. This is a cluster-scoped object that can be referenced from any namespace and used to fulfill certificate requests against the defined certificate-issuing authority. In this example, any certificate requests for Let’s Encrypt certificates can be handled by this ClusterIssuer.
Deploy the ClusterIssuer for the challenge type you have selected. Although it is out of scope for this post, there are advanced configuration options that allow you to specify multiple resolvers (chosen based on selector fields) in your ClusterIssuer.
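Although the details are out of scope here, such a multi-solver configuration might look roughly like the sketch below; the dnsZones value and the Secret name are assumptions for illustration:

```yaml
# Fragment of a ClusterIssuer spec with two solvers: requests for names within
# example.com use DNS-01 via Cloudflare; everything else falls back to HTTP-01.
solvers:
- selector:
    dnsZones:
    - example.com
  dns01:
    cloudflare:
      apiTokenSecretRef:
        name: cloudflare-api-token-secret   # assumed Secret name
        key: api-token
- http01:
    ingress:
      class: nginx
```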
ACME Challenge Basics
The Automated Certificate Management Environment (ACME) protocol is used to determine whether you own a domain name and can therefore be issued a Let’s Encrypt certificate. For this challenge, these are the parameters that need to be passed:
- metadata.name: The ClusterIssuer name, which needs to be unique within the Kubernetes installation. This name will be used later in the example when we are issuing a certificate.
- spec.acme.email: This is the email address you are registering with Let’s Encrypt for the purpose of generating certificates. This should be your email.
- spec.acme.privateKeySecretRef: This is the name of the Kubernetes secret you will use to store your private key.
- spec.acme.solvers: This should be left as-is – it notes the type of challenge (or, as ACME refers to it, solver) you are using (HTTP-01 or DNS-01) as well as what Ingress class it should be applied to, which in this case will be nginx.
Using HTTP-01
This example shows how to set up a ClusterIssuer to use the HTTP-01 challenge to prove domain ownership and receive a certificate.
- Create the ClusterIssuer using HTTP-01 for challenges.
- Validate the ClusterIssuer (it should show as ready).
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: prod-issuer
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: prod-issuer-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
clusterissuer.cert-manager.io/prod-issuer created
$ kubectl get clusterissuer
NAME          READY   AGE
prod-issuer   True    34s
Using DNS-01
This example shows how to set up a ClusterIssuer to use the DNS-01 challenge to authenticate your domain ownership. Depending on your DNS provider, you will likely need to use a Kubernetes Secret to store your token. This example uses Cloudflare. Note the use of the namespace: the cert-manager application, which is deployed into the cert-manager namespace, needs to have access to the Secret.
For this example, you will need a Cloudflare API token, which you can create from your account. This will need to be put in the <API Token> line below. If you are not using Cloudflare you will need to follow the documentation for your provider.
- Create a Secret for the API token.
- Create the issuer using DNS-01 for challenges.
- Validate the issuer (it should show as ready).
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token-secret
  namespace: cert-manager
type: Opaque
stringData:
  api-token: <API Token>
EOF
$ cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: prod-issuer
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: prod-issuer-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token-secret
            key: api-token
EOF
$ kubectl get clusterissuer
NAME          READY   AGE
prod-issuer   True    31m
Deploy the Ingress
This is the point we’ve been building towards – the deployment of the Ingress resource for our application. This will route traffic into the NGINX Cafe application we deployed earlier.
Using the Kubernetes Ingress
If you are using the standard Kubernetes Ingress resource, you will use the following deployment YAML to configure the Ingress and request a certificate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    cert-manager.io/cluster-issuer: prod-issuer
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cert.example.com
    secretName: cafe-secret
  rules:
  - host: cert.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80
It’s worth reviewing some key parts of the manifest:
- The API being called is the standard Kubernetes Ingress.
- A key part of this configuration is under metadata.annotations, where we set acme.cert-manager.io/http01-edit-in-place to "true". This value is required and adjusts the way that the challenge is served. For more information, see the Supported Annotations document. This can also be handled by using a master/minion setup.
- The spec.ingressClassName refers to the NGINX Ingress Controller that we installed and will be using.
- The spec.tls.secretName field names the Kubernetes Secret resource that stores the certificate and key returned when the certificate is issued by Let’s Encrypt.
- Our hostname of cert.example.com is specified for spec.tls.hosts and spec.rules.host. This is the hostname for which our ClusterIssuer issues the certificate.
- The spec.rules.http section defines the paths and the backend Services that will service requests on those paths. For example, traffic to /tea will be directed to Port 80 on tea-svc.
- Modify the above manifest for your installation. At minimum, this will involve changing the spec.rules.host and spec.tls.hosts values, but you should review all parameters in the configuration.
- Apply the manifest.
- Wait for the certificate to be issued. You are looking for a value of “True” for the READY field.
$ kubectl apply -f ./cafe-ingress.yaml
ingress.networking.k8s.io/cafe-ingress created
$ kubectl get certificates
NAME                                      READY   SECRET        AGE
certificate.cert-manager.io/cafe-secret   True    cafe-secret   37m
Using the NGINX Virtual Server / Virtual Routes
If you are using the NGINX CRDs, you will need to use the following deployment YAML to configure your Ingress.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cert.example.com
  tls:
    secret: cafe-secret
    cert-manager:
      cluster-issuer: prod-issuer
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    action:
      pass: coffee
Once again, it’s worth reviewing some key parts of the manifest:
- The API being called is the NGINX-specific k8s.nginx.org/v1 for the VirtualServer resource.
- The spec.tls.secret Kubernetes Secret resource stores the certificate and key returned when the certificate is issued by Let’s Encrypt.
- Our hostname of cert.example.com is specified for spec.host. This is the hostname for which our ClusterIssuer issues the certificate.
- The spec.upstreams values point to our backend Services, including the ports.
- The spec.routes section defines both the routes and the actions to be taken when those routes are hit.
- Modify the above manifest for your installation. At minimum, this will involve changing the spec.host value, but you should review all parameters in the configuration.
- Apply the manifest.
- Wait for the certificate to be issued. You should see a status of Valid.
$ kubectl apply -f ./cafe-virtual-server.yaml
virtualserver.k8s.nginx.org/cafe created
$ kubectl get virtualservers
NAME   STATE   HOST               IP                PORTS      AGE
cafe   Valid   cert.example.com   www.xxx.yyy.zzz   [80,443]   51m
View the Certificate
You can view the certificate via the Kubernetes API. This will show you details about the certificate, including the sizes of the certificate and its associated private key.
$ kubectl describe secret cafe-secret
Name:         cafe-secret
Namespace:    default
Labels:       <none>
Annotations:  cert-manager.io/alt-names: cert.example.com
              cert-manager.io/certificate-name: cafe-secret
              cert-manager.io/common-name: cert.example.com
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group:
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: prod-issuer
              cert-manager.io/uri-sans:

Type:  kubernetes.io/tls

Data
====
tls.crt:  5607 bytes
tls.key:  1675 bytes
If you’d like to see the actual certificate and key, you can do so by running the following command. (Note: This does illustrate a weakness of Kubernetes Secrets: namely, they can be read by anyone with the necessary access permissions.)
$ kubectl get secret cafe-secret -o yaml
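If you want to go a step further and read the certificate itself, you can decode tls.crt and inspect it with openssl. The sketch below is self-contained: it generates a throwaway self-signed certificate rather than reading from a cluster, and the file names are hypothetical; against a real cluster you would use the commented kubectl line instead.

```shell
# Against a real cluster, extract the certificate from the Secret like this:
#   kubectl get secret cafe-secret -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# For a self-contained illustration, generate a throwaway self-signed certificate:
openssl req -x509 -newkey rsa:2048 -keyout tls.key -out tls.crt \
  -days 90 -nodes -subj "/CN=cert.example.com" 2>/dev/null

# Print the subject, issuer, and validity window of the certificate
openssl x509 -noout -subject -issuer -dates -in tls.crt
```

The same inspection works on the real Let’s Encrypt certificate once it has been extracted from the Secret.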
Test the Ingress
Test the certificates. You can use any method you wish here; the example below uses cURL. Success is indicated by a block similar to the one shown below, which includes the server name, internal address of the server, date, the URI (route) chosen (coffee or tea), and the request ID. Failures take the form of HTTP error codes, most likely 400 or 301.
$ curl https://cert.example.com/tea
Server address: 10.2.0.6:8080
Server name: tea-5c457db9-l4pvq
Date: 02/Sep/2022:15:21:06 +0000
URI: /tea
Request ID: d736db9f696423c6212ffc70cd7ebecf

$ curl https://cert.example.com/coffee
Server address: 10.2.2.6:8080
Server name: coffee-7c86d7d67c-kjddk
Date: 02/Sep/2022:15:21:10 +0000
URI: /coffee
Request ID: 4ea3aa1c87d2f1d80a706dde91f31d54
Certificate Renewals
At the start, we promised that this approach would eliminate the need to manage certificate renewals, but we have yet to explain how. Why? Because renewal is a core, built-in part of cert-manager. When cert-manager notices that a certificate is absent, expired, or within 15 days of expiry, or when the user requests a new certificate via the CLI, it automatically requests a new one. It doesn’t get much easier than that.
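The expiry check cert-manager performs can be approximated with openssl. The sketch below is self-contained, assuming nothing from the cluster: it generates a throwaway 90-day self-signed certificate (the file names and CN are hypothetical) and then asks whether it falls inside a 15-day renewal window.

```shell
# Generate a throwaway 90-day self-signed certificate for illustration
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
  -days 90 -nodes -subj "/CN=cert.example.com" 2>/dev/null

# -checkend exits 0 if the certificate is still valid N seconds from now;
# a non-zero exit means it is inside the renewal window (here, 15 days)
if openssl x509 -checkend $((15 * 86400)) -noout -in demo.crt >/dev/null; then
  echo "certificate ok"
else
  echo "certificate needs renewal"
fi
```

A freshly generated 90-day certificate is well outside the window, so this prints "certificate ok"; cert-manager runs the equivalent check continuously and triggers reissuance on your behalf.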
Common Questions
What About NGINX Plus?
If you are an NGINX Plus subscriber, the only difference for you will involve installing the NGINX Ingress Controller. Please see the Installation Helm section of the NGINX Docs for instructions on how to modify the Helm command given above to accomplish this.
What Challenge Type Should I Use?
This largely depends on your use case.
The HTTP-01 challenge method requires that Port 80 is open to the Internet and that the DNS A record has been properly configured for the IP address of the Ingress controller. This approach does not require access to the DNS provider other than to create the A record.
The DNS-01 challenge method can be used when you cannot expose Port 80 to the Internet; it only requires that cert-manager has egress access to the DNS provider. However, this method does require that you have access to your DNS provider’s API, although the level of access required varies by provider.
How Do I Troubleshoot Problems?
Since Kubernetes is so complex, it’s difficult to provide targeted troubleshooting information. If you do run into issues, we’d like to invite you to ask us on NGINX Community Slack (NGINX Plus subscribers can use their normal support options).
Get Started Today
Get started by requesting your free 30-day trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, and download the always‑free NGINX Service Mesh.