Internal Ingress Controllers
What is this?
Cloud Platform now provides internal NGINX ingress controller classes.
These controller classes use dedicated Route53 domains to provision DNS records that resolve and route to Cloud Platform services internally.
Why might I want to use this?
There are a number of reasons why you might want to use an internal ingress controller, for example:

- You want to restrict access to your Cloud Platform hosted service to only users / services running on Transit Gateway routed networks (for example, Modernisation Platform environments).
- You have a Cloud Platform hosted service that talks to another service on the cluster, and you need to restrict access from the internet but want https to be handled in the same way external ingress does.
- You currently use NetworkPolicies for internal cluster communication between namespaced services, but would like to have the rich logging that ingress controllers provide.
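For comparison, the kind of NetworkPolicy this last point refers to typically looks like the sketch below. All namespace, pod, and label names here are illustrative, not taken from any real configuration; an internal ingress can replace this plumbing while also providing access logs:

```yaml
# Hypothetical example: allow traffic into "shiny-app" pods in one namespace
# from pods in another namespace. Names and labels are illustrative only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-other-namespace
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: shiny-app
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: other-namespace
```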
How do I use this?
We have two internal ingress controller classes available:

- non-production internal
  - class name: internal-non-prod
  - domain: *.internal-non-prod.cloud-platform.service.justice.gov.uk
- production internal
  - class name: internal
  - domain: *.internal.cloud-platform.service.justice.gov.uk
To set up your service to use an internal ingress controller, simply set your ingressClassName to the appropriate class name, and use the matching domain in your ingress rules.
Certificate issuing and lifecycle management is handled automatically by each ingress controller's default certificate issuer for its domain.
NOTE: We kindly ask that you use the non-prod class for lower environments (dev, test, staging, etc.) and the production class for production environments only. This keeps nginx config reload disruptions to a minimum.
Example Ingress configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: [namespace]
  annotations:
    external-dns.alpha.kubernetes.io/aws-weight: "100"
    external-dns.alpha.kubernetes.io/set-identifier: my-ingress-[namespace]-green
spec:
  ingressClassName: internal
  rules:
  - host: my-shiny-app.internal.cloud-platform.service.justice.gov.uk
    http:
      paths:
      - backend:
          service:
            name: shiny-app-service
            port:
              number: 8080
        path: /
        pathType: ImplementationSpecific
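For a lower environment, the equivalent configuration would use the non-production class and domain; only the class name and host change. The hostname below is illustrative:

```yaml
# Non-production variant of the example above: only ingressClassName and the
# host domain differ. The "my-shiny-app" hostname is illustrative only.
spec:
  ingressClassName: internal-non-prod
  rules:
  - host: my-shiny-app.internal-non-prod.cloud-platform.service.justice.gov.uk
```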
Once the ingress is created, you can confirm the configuration is internal with:
$ nslookup my-shiny-app.internal.cloud-platform.service.justice.gov.uk
..
..
Name: my-shiny-app.internal.cloud-platform.service.justice.gov.uk
Address: 172.20.120.31
Name: my-shiny-app.internal.cloud-platform.service.justice.gov.uk
Address: 172.20.136.38
Name: my-shiny-app.internal.cloud-platform.service.justice.gov.uk
Address: 172.20.94.23
It may seem surprising that you can look up these addresses from your local machine. This is because the Route53 domains are public; however, the ingress controller load balancers are deployed into private subnets, so these IPs are not reachable from outside the VPC (AWS itself does the same for internally resolved services such as RDS).
Logging
Internal ingress logs are available in the kubernetes_ingress index on OpenSearch.