
Artificial Intelligence is showing up everywhere, including Kubernetes! The K8sGPT project is an official CNCF sandbox project, first announced at KubeCon Amsterdam in 2023.
Why K8sGPT?
K8sGPT uses AI to analyze Kubernetes resources (Pods, Services, Deployments, etc.) and delivers clear, actionable insights. No more deciphering cryptic errors manually. It’s like having a Kubernetes expert on speed dial.
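If you already have k8sgpt installed and an Ollama server running locally, the core workflow is just two commands (the localhost URL assumes Ollama's default port; the same flags appear in the pipeline later in this post):
k8sgpt auth add --backend localai --model llama3 --baseurl http://localhost:11434/v1
k8sgpt analyze --explain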
Setup in a Nutshell
- Build a Custom Image: I created a Docker image with K8sGPT and Ollama (running Llama3 for AI) to keep things local and cost-free.
- GitLab CI/CD Pipeline: Set up a pipeline to run K8sGPT scans on my EKS cluster. It pulls the Llama3 model, analyzes the cluster, and spits out a report.
- Filter Resources: Used a k8sgpt.yaml to focus on key resources like Pods, Ingresses, and StatefulSets.
- Run & Review: Pushed the code, triggered the pipeline, and got a neat k8sgpt-report.txt with issues and AI explanations.
What I Found
K8sGPT caught a pod failing to pull an invalid image tag (nginx:1.a.b.c). The AI explained the error and suggested fixes—way faster than my usual debugging slog. It also flagged a misconfigured Ingress and a stuck CronJob.
Cool Bits
• Local AI with Ollama: No cloud AI costs, and it’s privacy-friendly.
• Plain English Explanations: Even non-experts can understand the issues.
• Pipeline Integration: Automates scans in GitLab, saving time.
• Operator Option: For continuous monitoring, you can deploy K8sGPT as a Kubernetes operator.
Gotchas
• Setup takes some effort (Docker, pipeline config, kubeconfig).
• Anonymization (for sensitive data) isn’t fully supported yet.
• Resource coverage is solid but not exhaustive—custom analyzers can help.
Verdict
K8sGPT is a game-changer for Kubernetes observability. It’s like an AI sidekick that spots problems and explains them clearly. Integrating it into a CI/CD pipeline makes it even sweeter for automated cluster health checks. Want to try it? Check the K8sGPT docs and give it a spin.
Technicals
Steps to Replicate
- Create a Custom Docker Image with Ollama and K8sGPT
o Build a Docker image to run K8sGPT and Ollama (local AI model).
o Use the following Dockerfile:
# Base image; consider pinning a specific tag (e.g., ubuntu:22.04) for reproducible builds
FROM ubuntu
ENV DEBIAN_FRONTEND=noninteractive
# Make the Ollama server listen on all interfaces so the CI job can reach the sidecar
ENV OLLAMA_HOST=0.0.0.0
RUN apt-get update && apt-get install -y curl
# Install Ollama, which will serve the Llama3 model locally
RUN curl -fsSL https://ollama.com/install.sh | sh
# Install a pinned K8sGPT release
RUN curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.3.24/k8sgpt_amd64.deb && \
    dpkg -i k8sgpt_amd64.deb && \
    rm k8sgpt_amd64.deb
o Build and push to your registry (e.g., GitLab registry):
docker build -t <image-registry-url>/k8sgpt:latest .
docker push <image-registry-url>/k8sgpt:latest
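As an optional sanity check before wiring up the pipeline, you can confirm both tools made it into the image (assuming the build above succeeded):
docker run --rm <image-registry-url>/k8sgpt:latest k8sgpt version
docker run --rm <image-registry-url>/k8sgpt:latest ollama --version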
Set Up GitLab CI/CD Pipeline
• Create a .gitlab-ci.yml file in your repository to define the pipeline.
• Example pipeline configuration:
stages:
  - analyze

analyze:
  stage: analyze
  image: <image-registry-url>/k8sgpt:latest
  timeout: 2h 30m
  variables:
    OLLAMA_MODELS: $CI_PROJECT_DIR/.ollama/models
    CLUSTER_NAME: some-eks-cluster
  services:
    # Run the same image as a sidecar that serves the Llama3 model
    - name: <image-registry-url>/k8sgpt:latest
      alias: ollama
      entrypoint: ["/usr/local/bin/ollama"]
      command: ["serve"]
  script:
    - mkdir -p $OLLAMA_MODELS
    # Pull the model only if the cache is empty
    - test -n "$(ls -A $OLLAMA_MODELS)" || ollama pull llama3
    - ollama list
    # Point K8sGPT at the sidecar's OpenAI-compatible endpoint
    - k8sgpt auth add --backend localai --model llama3 --baseurl http://ollama:11434/v1
    # Read the context from the same kubeconfig the analysis uses
    - CONTEXT=$(kubectl config --kubeconfig $HOME/.kube/$CLUSTER_NAME current-context)
    - k8sgpt analyze --explain --config k8sgpt.yaml --kubeconfig $HOME/.kube/$CLUSTER_NAME --kubecontext $CONTEXT | tee k8sgpt-${CLUSTER_NAME}-report.txt
  cache:
    key: ollama
    paths:
      - $OLLAMA_MODELS
  artifacts:
    paths:
      - k8sgpt-*.txt
Explanation:
- The image keyword runs the job in your custom K8sGPT/Ollama image.
- The services section runs Ollama as a sidecar to serve the Llama3 model.
- The script pulls the Llama3 model if it isn't cached, authenticates K8sGPT against the local Ollama backend, and runs k8sgpt analyze to scan the cluster.
- Artifacts store the analysis report (k8sgpt-<cluster-name>-report.txt).
- Cache persists the Llama3 model to speed up future runs.
Configure K8sGPT Filters
• Create a k8sgpt.yaml file to specify which Kubernetes resources to analyze.
• Place this file in your GitLab repository so the pipeline can reference it.
filters:
- ValidatingWebhookConfiguration
- PersistentVolumeClaim
- StatefulSet
- Node
- MutatingWebhookConfiguration
- Service
- Ingress
- CronJob
- Pod
- Deployment
- ReplicaSet
- Log
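If you prefer, the same filters can also be managed with the k8sgpt CLI instead of the config file; for example:
k8sgpt filters list          # show active and available analyzers
k8sgpt filters add CronJob   # activate an optional analyzer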
Set Up Kubernetes Access
• Ensure the GitLab runner has access to your Kubernetes cluster.
• Store the kubeconfig file securely (e.g., in $HOME/.kube/$CLUSTER_NAME on the runner or as a GitLab CI/CD variable).
• Verify kubectl can connect to the cluster:
kubectl config use-context <context-name>
kubectl get nodes
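One minimal way to do this, assuming you store the kubeconfig as a GitLab file-type CI/CD variable (the variable name EKS_KUBECONFIG is illustrative, not from my setup):
# A file-type variable expands to the path of a temp file holding its contents
mkdir -p $HOME/.kube
cp "$EKS_KUBECONFIG" $HOME/.kube/$CLUSTER_NAME
chmod 600 $HOME/.kube/$CLUSTER_NAME
kubectl --kubeconfig $HOME/.kube/$CLUSTER_NAME get nodes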
Run the Pipeline
• Push the .gitlab-ci.yml, Dockerfile, and k8sgpt.yaml to your GitLab repository.
• Trigger the pipeline via GitLab’s CI/CD interface or a git push.
• The pipeline will:
o Pull your custom K8sGPT/Ollama image (built and pushed in step 1).
o Run the K8sGPT analysis.
o Output a report (k8sgpt-<cluster-name>-report.txt) with issues and AI-generated explanations/suggestions.
Review Results
• Download the artifact (k8sgpt-<cluster-name>-report.txt) from the GitLab pipeline.
• Example output might include:
0 default/broken-pod(broken-pod) - Error: Back-off pulling image "nginx:1.a.b.c"
Explanation: The pod cannot pull the specified image due to an invalid tag. Ensure the image tag is correct and accessible.
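You can cross-check a finding like this with kubectl (broken-pod is the pod name from the report above):
kubectl describe pod broken-pod -n default   # look for ErrImagePull / ImagePullBackOff events
kubectl get events -n default --field-selector involvedObject.name=broken-pod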
Optional: Deploy K8sGPT Operator
For continuous monitoring, deploy the K8sGPT Operator in your cluster:
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace
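Once the operator is running, you create a K8sGPT custom resource to tell it which AI backend to use. A minimal sketch, reusing the localai/Llama3 setup from this post (the in-cluster Ollama URL is an assumption; point baseUrl at wherever your Ollama endpoint actually lives):
cat <<EOF | kubectl apply -f -
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    backend: localai
    model: llama3
    baseUrl: http://ollama.ollama.svc.cluster.local:11434/v1
  version: v0.3.24
EOF
The operator then records its findings as Result resources, which you can read with kubectl get results -n k8sgpt-operator-system.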
Notes
AI Backend: This post uses Ollama with Llama3 for local AI processing, avoiding external providers like OpenAI. Adjust the --backend and --model flags if you use a different provider (e.g., OpenAI, Azure).
Security: Enable the --anonymize flag to mask sensitive data before it is sent to the AI backend (not yet fully implemented for all analyzers); see the example after these notes.
Supported Resources: K8sGPT supports resources like Pods, Services, Deployments, etc. (see k8sgpt.yaml filters). Custom analyzers can be added for specific needs.
Documentation: Refer to the official K8sGPT docs (https://docs.k8sgpt.ai/) for advanced configurations.
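For example, here is the same analysis with masking enabled, and an auth command pointed at a hosted backend instead of Ollama (the model name and API-key variable are illustrative):
k8sgpt analyze --explain --anonymize
k8sgpt auth add --backend openai --model gpt-4 --password "$OPENAI_API_KEY"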