{"id":2412,"date":"2025-04-21T17:12:27","date_gmt":"2025-04-21T17:12:27","guid":{"rendered":"https:\/\/spirezen.com\/blog\/?p=2412"},"modified":"2025-04-21T17:12:28","modified_gmt":"2025-04-21T17:12:28","slug":"ai-for-kubernetes-k8sgpt","status":"publish","type":"post","link":"https:\/\/spirezen.com\/blog\/ai-for-kubernetes-k8sgpt\/","title":{"rendered":"AI For Kubernetes &#8211; K8sGPT"},"content":{"rendered":"\n<p class=\"\">Artificial Intelligence is showing up everywhere, including Kubernetes! The K8sGPT project is an official CNCF sandbox project, first announced at KubeCon Amsterdam in 2023.<\/p>\n\n\n\n<p class=\"\"><strong>Why K8sGPT?<\/strong><br>K8sGPT uses AI to analyze Kubernetes resources (Pods, Services, Deployments, etc.) and delivers clear, actionable insights. No more deciphering cryptic errors manually. It\u2019s like having a Kubernetes expert on speed dial.<\/p>\n\n\n\n<p class=\"\"><strong>Setup in a Nutshell<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"\">Build a Custom Image: I created a Docker image with K8sGPT and Ollama (running Llama3 for AI) to keep things local and cost-free.<\/li>\n\n\n\n<li class=\"\">GitLab CI\/CD Pipeline: Set up a pipeline to run K8sGPT scans on my EKS cluster. It pulls the Llama3 model, analyzes the cluster, and spits out a report.<\/li>\n\n\n\n<li class=\"\">Filter Resources: Used a k8sgpt.yaml to focus on key resources like Pods, Ingresses, and StatefulSets.<\/li>\n\n\n\n<li class=\"\">Run &amp; Review: Pushed the code, triggered the pipeline, and got a neat k8sgpt-report.txt with issues and AI explanations.<\/li>\n<\/ol>\n\n\n\n<p class=\"\"><strong>What I Found<\/strong><br>K8sGPT caught a pod failing to pull an invalid image tag (nginx:1.a.b.c). The AI explained the error and suggested fixes\u2014way faster than my usual debugging slog. 
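<\/p>\n\n\n\n<p class=\"\">For illustration, a minimal (hypothetical) manifest that reproduces this kind of image-pull failure might look like the following; the broken-pod name matches the sample report excerpt later in this post:<\/p>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>apiVersion: v1\nkind: Pod\nmetadata:\n  name: broken-pod\n  namespace: default\nspec:\n  containers:\n    - name: nginx\n      image: nginx:1.a.b.c  # invalid tag; kubelet reports ErrImagePull \/ ImagePullBackOff<\/code><\/pre>\n\n\n\n<p class=\"\">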
It also flagged a misconfigured Ingress and a stuck CronJob.<\/p>\n\n\n\n<p class=\"\"><strong>Cool Bits<\/strong><br>\u2022 Local AI with Ollama: No cloud AI costs, and it\u2019s privacy-friendly.<br>\u2022 Plain English Explanations: Even non-experts can understand the issues.<br>\u2022 Pipeline Integration: Automates scans in GitLab, saving time.<br>\u2022 Operator Option: For continuous monitoring, you can deploy K8sGPT as a Kubernetes operator.<\/p>\n\n\n\n<p class=\"\"><strong>Gotchas<\/strong><br>\u2022 Setup takes some effort (Docker, pipeline config, kubeconfig).<br>\u2022 Anonymization (for sensitive data) isn\u2019t fully supported yet.<br>\u2022 Resource coverage is solid but not exhaustive\u2014custom analyzers can help.<\/p>\n\n\n\n<p class=\"\"><strong>Verdict<\/strong><br>K8sGPT is a game-changer for Kubernetes observability. It\u2019s like an AI sidekick that spots problems and explains them clearly. Integrating it into a CI\/CD pipeline makes it even sweeter for automated cluster health checks. Want to try it? 
Check the K8sGPT docs and give it a spin.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Technicals<\/strong><\/h2>\n\n\n\n<p class=\"\"><strong>Steps to Replicate<\/strong><\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"\">Create a Custom Docker Image with Ollama and K8sGPT<br>o Build a Docker image to run K8sGPT and Ollama (local AI model).<br>o Use the following Dockerfile:<\/li>\n\n\n\n<li class=\"\">Build and push to your registry (e.g., GitLab registry):<\/li>\n<\/ol>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>FROM ubuntu\nENV DEBIAN_FRONTEND=noninteractive\nENV OLLAMA_HOST=0.0.0.0\nRUN apt-get update &amp;&amp; apt-get install -y curl\nRUN curl -fsSL https:\/\/ollama.com\/install.sh | sh\nRUN curl -LO https:\/\/github.com\/k8sgpt-ai\/k8sgpt\/releases\/download\/v0.3.24\/k8sgpt_amd64.deb &amp;&amp; \\\n    dpkg -i k8sgpt_amd64.deb &amp;&amp; \\\n    rm k8sgpt_amd64.deb<\/code><\/pre>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>docker build -t &lt;image-registry-url>\/k8sgpt:latest .\ndocker push &lt;image-registry-url>\/k8sgpt:latest<\/code><\/pre>\n\n\n\n<p class=\"\"><strong>Set Up GitLab CI\/CD Pipeline<\/strong><br>\u2022 Create a .gitlab-ci.yml file in your repository to define the pipeline.<br>\u2022 Example pipeline configuration:<\/p>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>analyze:\n  stage: analyze\n  image: &lt;image-registry-url>\/k8sgpt:latest\n  timeout: 2h 30m\n  variables:\n    OLLAMA_MODELS: $CI_PROJECT_DIR\/.ollama\/models\n    CLUSTER_NAME: some-eks-cluster\n  services:\n    - name: &lt;image-registry-url>\/k8sgpt:latest\n      alias: ollama\n      entrypoint: &#91;\"\/usr\/local\/bin\/ollama\"]\n      command: &#91;\"serve\"]\n  script:\n    - mkdir -p $OLLAMA_MODELS\n    - test -n \"$(ls -A $OLLAMA_MODELS)\" || ollama pull llama3\n    - ollama list\n    - k8sgpt auth add --backend localai --model llama3 --baseurl 
http:\/\/ollama:11434\/v1\n    - CONTEXT=$(kubectl config current-context)\n    - k8sgpt analyze --explain --config k8sgpt.yaml --kubeconfig $HOME\/.kube\/$CLUSTER_NAME --kubecontext $CONTEXT | tee k8sgpt-${CLUSTER_NAME}-report.txt\n  cache:\n    key: ollama\n    paths:\n      - $OLLAMA_MODELS\n  artifacts:\n    paths:\n      - k8sgpt-*.txt<\/code><\/pre>\n\n\n\n<p class=\"\">Explanation:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"\">The image keyword pulls your custom K8sGPT\/Ollama image.<\/li>\n\n\n\n<li class=\"\">The services section runs Ollama as a sidecar to serve the Llama3 model.<\/li>\n\n\n\n<li class=\"\">The script pulls the Llama3 model if not cached, authenticates K8sGPT with the local Ollama backend, and runs the k8sgpt analyze command to scan the cluster.<\/li>\n\n\n\n<li class=\"\">Artifacts store the analysis report (k8sgpt-${CLUSTER_NAME}-report.txt).<\/li>\n\n\n\n<li class=\"\">Cache persists the Llama3 model to speed up future runs.<\/li>\n<\/ul>\n\n\n\n<p class=\"\"><strong>Configure K8sGPT Filters<\/strong><br>\u2022 Create a k8sgpt.yaml file to specify which Kubernetes resources to analyze.<\/p>\n\n\n\n<p class=\"\">Place this file in your GitLab repository so the pipeline can reference it.<\/p>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>filters:\n  - ValidatingWebhookConfiguration\n  - PersistentVolumeClaim\n  - StatefulSet\n  - Node\n  - MutatingWebhookConfiguration\n  - Service\n  - Ingress\n  - CronJob\n  - Pod\n  - Deployment\n  - ReplicaSet\n  - Log<\/code><\/pre>\n\n\n\n<p class=\"\"><strong>Set Up Kubernetes Access<\/strong><br>\u2022 Ensure the GitLab runner has access to your Kubernetes cluster.<br>\u2022 Store the kubeconfig file securely (e.g., in $HOME\/.kube\/$CLUSTER_NAME on the runner or as a GitLab CI\/CD variable).<br>\u2022 Verify kubectl can connect to the cluster:<\/p>\n\n\n\n<pre class=\"wp-block-code 
has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>kubectl config use-context &lt;context-name>\nkubectl get nodes<\/code><\/pre>\n\n\n\n<p class=\"\"><strong>Run the Pipeline<\/strong><br>\u2022 Push the .gitlab-ci.yml, Dockerfile, and k8sgpt.yaml to your GitLab repository.<br>\u2022 Trigger the pipeline via GitLab\u2019s CI\/CD interface or a git push.<br>\u2022 The pipeline will:<br>o Build the Docker image (if not already built).<br>o Run the K8sGPT analysis.<br>o Output a report (k8sgpt-${CLUSTER_NAME}-report.txt) with issues and AI-generated explanations\/suggestions.<\/p>\n\n\n\n<p class=\"\"><strong>Review Results<\/strong><br>\u2022 Download the artifact (k8sgpt-${CLUSTER_NAME}-report.txt) from the GitLab pipeline.<br>\u2022 Example output might include:<\/p>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>0 default\/broken-pod(broken-pod) - Error: Back-off pulling image \"nginx:1.a.b.c\"\nExplanation: The pod cannot pull the specified image due to an invalid tag. Ensure the image tag is correct and accessible.<\/code><\/pre>\n\n\n\n<p class=\"\"><strong>Optional: Deploy K8sGPT Operator<\/strong><br>For continuous monitoring, deploy the K8sGPT Operator in your cluster:<\/p>\n\n\n\n<pre class=\"wp-block-code has-very-light-gray-to-cyan-bluish-gray-gradient-background has-background\"><code>helm repo add k8sgpt https:\/\/charts.k8sgpt.ai\/\nhelm repo update\nhelm install release k8sgpt\/k8sgpt-operator -n k8sgpt-operator-system --create-namespace<\/code><\/pre>\n\n\n\n<p class=\"\"><strong>Notes<\/strong><br>AI Backend: This post uses Ollama with Llama3 for local AI processing, avoiding external providers like OpenAI. 
Adjust the --backend and --model flags if using other providers (e.g., OpenAI, Azure).<br>Security: Enable the --anonymize flag to mask sensitive data before sending it to AI backends (not fully implemented for all analyzers).<br>Supported Resources: K8sGPT supports resources like Pods, Services, Deployments, etc. (see the k8sgpt.yaml filters). Custom analyzers can be added for specific needs.<br>Documentation: Refer to the official K8sGPT docs (https:\/\/docs.k8sgpt.ai\/) for advanced configurations.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence is showing up everywhere, including Kubernetes! The K8sGPT project is an official CNCF sandbox project, first announced at KubeCon Amsterdam in 2023 Why K8sGPT?K8sGPT<span class=\"excerpt-hellip\"> [\u2026]<\/span><\/p>\n","protected":false},"author":2,"featured_media":2413,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-2412","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/posts\/2412","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/comments?post=2412"}],"version-history":[{"count":1,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/posts\/2412\/revisions"}],"predecessor-version":[{"id":2414,"href":"https:\/\/spirezen
.com\/blog\/wp-json\/wp\/v2\/posts\/2412\/revisions\/2414"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/media\/2413"}],"wp:attachment":[{"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/media?parent=2412"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/categories?post=2412"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/spirezen.com\/blog\/wp-json\/wp\/v2\/tags?post=2412"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}