Building an API Gateway using NGINX and Kubernetes
This tutorial will guide you through building a robust API Gateway using NGINX inside a Kubernetes cluster. We will create two Node.js microservices, deploy them to Kubernetes, and configure NGINX as the API Gateway. This setup will help you understand how an API Gateway manages routing, security, and inter-service communication within Kubernetes.
Overview of API Gateway in Kubernetes
An API Gateway acts as the entry point for client requests in microservices architectures. It handles:
- Routing: Directs requests to the appropriate backend services.
- Security: Manages authentication, authorization, and rate-limiting.
- Load Balancing: Distributes requests evenly across instances.
- Monitoring: Logs and tracks request metrics.
Kubernetes provides service discovery and scaling capabilities, making it an ideal platform for running API Gateways. NGINX is a popular choice for API Gateways because of its flexibility, performance, and ability to handle a large number of requests.
Prerequisites
- Kubernetes cluster: You can use Minikube, Docker Desktop, or a cloud provider like AWS EKS, GCP GKE, or Azure AKS.
- kubectl: Installed and configured to interact with your cluster.
- Node.js: Version 18 or later installed locally (the Order Service relies on Node's built-in fetch API).
- Docker: Installed locally to build the container images.
Architecture Overview
- Order Service and User Service: Node.js microservices exposed inside the cluster through ClusterIP Services.
- NGINX: Acts as the API Gateway, routing external requests to the backend services.
- CoreDNS: Handles DNS-based service discovery, resolving service names to IP addresses within the cluster.
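At a glance, traffic flows like this (a simplified sketch of the setup we build below):

Client ──> nginx-gateway (LoadBalancer, port 80)
             ├─ /orders ──> order-service (ClusterIP) ──> Order Service pods (port 3000)
             └─ /users  ──> user-service (ClusterIP)  ──> User Service pods (port 3001)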
Step 1: Create Node.js Microservices
We’ll create two simple services: Order Service and User Service.
1.1. Order Service
File: order-service/index.js
// Node 18+ ships a global fetch, so no extra HTTP client dependency is needed.
const express = require('express');

const app = express();
const PORT = process.env.PORT || 3000;
const userServiceURL = process.env.USER_SERVICE_URL || 'http://user-service.default.svc.cluster.local';

app.get('/orders', (req, res) => {
  res.send('Order Service: List of orders');
});

// Calls the User Service through its Kubernetes DNS name to demonstrate
// inter-service communication.
app.get('/order-user', async (req, res) => {
  try {
    const response = await fetch(`${userServiceURL}/users`);
    const data = await response.text();
    res.send(`Order service connected to user service: ${data}`);
  } catch (error) {
    res.status(500).send('Failed to connect to user service');
  }
});

app.listen(PORT, () => console.log(`Order Service running on port ${PORT}`));
1.2. User Service
File: user-service/index.js
const express = require('express');

const app = express();
const PORT = process.env.PORT || 3001;

app.get('/users', (req, res) => {
  res.send('User Service: List of users');
});

app.listen(PORT, () => console.log(`User Service running on port ${PORT}`));
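Each service also needs a package.json, since the Dockerfiles in the next step copy it and run npm install. A minimal sketch for the Order Service (the User Service version is identical apart from the name; the express version is indicative, not a pinned recommendation):

File: order-service/package.json

{
  "name": "order-service",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "express": "^4.18.2"
  }
}

Because we rely on Node's built-in fetch, express is the only dependency.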
Step 2: Containerize Microservices
To deploy the services to Kubernetes, we need Docker images.
2.1. Create Dockerfiles
Order Service Dockerfile
# Node 18+ provides the built-in fetch API used by index.js
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
User Service Dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "index.js"]
2.2. Build Docker Images
docker build -t order-service:1.0 ./order-service
docker build -t user-service:1.0 ./user-service
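The deployments below reference these images by tag, so the images must be available inside the cluster; otherwise the pods will fail with ImagePullBackOff. On Minikube, for example, you can load the locally built images directly:

minikube image load order-service:1.0
minikube image load user-service:1.0

(On kind, the equivalent is kind load docker-image.)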
Step 3: Deploy Microservices to Kubernetes
3.1. Deployment for Order Service
order-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: order-service:1.0
          ports:
            - containerPort: 3000
          env:
            - name: USER_SERVICE_URL
              value: "http://user-service.default.svc.cluster.local"
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: ClusterIP
3.2. Deployment for User Service
user-service-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user-service:1.0
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3001
  type: ClusterIP
Step 4: Deploy NGINX as API Gateway in Kubernetes
4.1. Create NGINX ConfigMap
nginx-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {}

    http {
      upstream order_service {
        server order-service.default.svc.cluster.local:80;
      }

      upstream user_service {
        server user-service.default.svc.cluster.local:80;
      }

      server {
        listen 80;

        location /orders {
          proxy_pass http://order_service;
        }

        location /users {
          proxy_pass http://user_service;
        }
      }
    }
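The overview listed rate limiting as one of the gateway's responsibilities, and it can be enforced at this layer with NGINX's limit_req module. A minimal sketch of how the config above could be extended (the zone name, size, and rate are illustrative, not tuned values):

http {
  # Track clients by IP address; allow 10 requests/second with a short burst.
  limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

  server {
    listen 80;

    location /orders {
      limit_req zone=api_limit burst=20 nodelay;
      proxy_pass http://order_service;
    }
  }
}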
4.2. NGINX Deployment and Service
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-gateway
  template:
    metadata:
      labels:
        app: nginx-gateway
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: nginx-config-volume
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx-config-volume
          configMap:
            name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-gateway
spec:
  selector:
    app: nginx-gateway
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
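Before routing traffic through the gateway, it is worth confirming that NGINX picked up the mounted configuration. One way, using the deployment name above:

kubectl exec deploy/nginx-gateway -- nginx -t

nginx -t validates the configuration file and reports whether the syntax is OK.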
Step 5: Deploy All Resources to Kubernetes
Apply the YAML files to the cluster:
kubectl apply -f order-service-deployment.yaml
kubectl apply -f user-service-deployment.yaml
kubectl apply -f nginx-configmap.yaml
kubectl apply -f nginx-deployment.yaml
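Then verify that everything came up:

kubectl get pods
kubectl get svc

All pods should reach the Running state, and you should see the order-service, user-service, and nginx-gateway Services listed.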
Step 6: Accessing the API Gateway
- Find the external IP of the NGINX service:
kubectl get svc nginx-gateway
- Use the external IP to access the services:
  - Order Service: http://<external-ip>/orders
  - User Service: http://<external-ip>/users
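On Minikube, a LoadBalancer Service stays in the Pending state until you expose it; running minikube tunnel in a separate terminal (or minikube service nginx-gateway --url) gives you a reachable address. Once you have one, a quick smoke test:

curl http://<external-ip>/orders
curl http://<external-ip>/users

Each request should return the corresponding service's response string.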
FAQs
Q1: How does NGINX communicate with services in Kubernetes?
- NGINX uses DNS-based service discovery to communicate with Kubernetes services. Service names such as order-service.default.svc.cluster.local are automatically resolved to internal service IPs by CoreDNS.
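You can watch this resolution happen from inside the cluster with a throwaway pod (the pod name and image here are arbitrary):

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup order-service.default.svc.cluster.local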
Q2: What role does CoreDNS play?
- CoreDNS manages DNS resolution within the Kubernetes cluster. It ensures that service names are mapped to the correct internal IP addresses, allowing NGINX and other services to communicate seamlessly.
Q3: Can I scale microservices independently?
- Yes, you can scale each service independently using Kubernetes' scaling features:
kubectl scale deployment order-service --replicas=5
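If you would rather scale on load than on a fixed replica count, a HorizontalPodAutoscaler can do it automatically (this assumes the metrics-server add-on is installed in your cluster):

kubectl autoscale deployment order-service --min=2 --max=10 --cpu-percent=80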
Q4: How does the API Gateway handle failures?
- Open-source NGINX can handle basic failure scenarios such as retrying the next upstream server and enforcing timeouts. Full circuit breaking and richer resilience patterns typically require NGINX Plus or a service mesh such as Istio or Linkerd.
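In this setup, those knobs live in the gateway config. A minimal sketch of retry and timeout directives you could add to a location block (the values are illustrative):

location /orders {
  proxy_pass http://order_service;
  # Try the next upstream server on connection errors, timeouts, or 502/503 responses.
  proxy_next_upstream error timeout http_502 http_503;
  proxy_connect_timeout 2s;
  proxy_read_timeout 5s;
}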
Q5: Is this configuration suitable for handling secure connections?
- For secure connections (HTTPS), you need to configure SSL certificates in NGINX. You can use Let's Encrypt or other certificate management tools in Kubernetes.
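At the NGINX level, TLS termination comes down to a server block like the following sketch, assuming a certificate and key have been mounted into the pod (for example from a Kubernetes Secret managed by cert-manager; the paths are placeholders):

server {
  listen 443 ssl;
  ssl_certificate     /etc/nginx/tls/tls.crt;
  ssl_certificate_key /etc/nginx/tls/tls.key;

  location /orders {
    proxy_pass http://order_service;
  }
}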
Conclusion
In this tutorial, we deployed an API Gateway using NGINX in a Kubernetes environment. We built two Node.js microservices, wired them together with DNS-based service discovery, and routed external traffic through NGINX. The result provides routing, load balancing, and scalability within the Kubernetes ecosystem, and gives you a foundation to extend with TLS, rate limiting, and autoscaling.