# general
a
Hey all 👋 - wanted to share an open-source project I've been working on that is relevant to platform engineering: Kubently. It's a tool for troubleshooting Kubernetes agentically - debug clusters through natural conversation with any major LLM. Built it because kubectl output is verbose, debugging is manual, and honestly agents debug faster than I can half the time.

Quick highlights:
- ~50ms command delivery via SSE
- Read-only operations by default
- Native A2A protocol support
- LangGraph/LangChain support
- Runs on any K8s cluster - EKS, GKE, AKS, bare metal
- Multi-cluster from day one

Docs: https://kubently.io
GitHub: https://github.com/kubently/kubently

Still early days - lots of room for improvement. Feedback, bug reports, or feature ideas all welcome. Happy to answer questions here too.
👍 3
s
Thx for sharing @Adam Dickinson, looks promising. Do you plan to support Ollama for the LLM? A central Ollama server running gpt-oss (20b or 120b) should be capable of supporting the tool well too, I believe.
a
Will look into it. Thanks for the suggestion
👍 1
I actually think it supports it already - or maybe. I was looking at the package I use for the LLM factory, and if you're using something OpenAI-API-compatible, you could probably just set:

```yaml
api:
  env:
    LLM_PROVIDER: "openai"
    OPENAI_ENDPOINT: "http://ollama-server:11434/v1"
    OPENAI_MODEL_NAME: "llama3"
    OPENAI_API_KEY: "ollama"
```

and it should work.
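For anyone wanting to sanity-check that idea outside Kubently: Ollama serves an OpenAI-compatible API under `/v1`, so any client that can build a standard chat-completions request should work against it. A minimal sketch of that request shape (the `llama3` model name and the host are assumptions, and `build_chat_request` is just an illustrative helper, not part of Kubently):

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Build an OpenAI-compatible /v1/chat/completions payload.

    This is the same schema Ollama accepts at http://<host>:11434/v1,
    which is why pointing OPENAI_ENDPOINT at an Ollama server can work.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Example payload you could POST to http://ollama-server:11434/v1/chat/completions
payload = build_chat_request("llama3", "Why is my pod in CrashLoopBackOff?")
print(payload)
```

The `OPENAI_API_KEY: "ollama"` value in the config above is just a placeholder - Ollama doesn't validate the key, but most OpenAI client libraries refuse to start without one set.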