# general
Lynn
Hey all, happy to share this open source project we launched today: Appilot! It helps you operate applications using GPT-like LLMs, which means you can chat to deploy and manage applications on any infrastructure.

Appilot features:
- Application management
- Environment management
- Diagnosis
- Safeguard: any action involving state changes requires human approval
- Hybrid infrastructure
- Multi-language support
- Pluggable backends: it supports multiple backends, including Walrus and Kubernetes, and is extensible

More details here: https://github.com/seal-io/appilot
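To make the "pluggable backends" idea concrete, here is a minimal sketch of what a backend plugin interface could look like. This is an illustrative assumption, not the actual Appilot API; the `Backend` and `KubernetesBackend` names and the `deploy` method are hypothetical.

```python
# Hypothetical sketch of a pluggable backend interface for an
# Appilot-style tool. Class and method names are illustrative only.
from abc import ABC, abstractmethod


class Backend(ABC):
    """A deployment target the agent can operate on."""

    @abstractmethod
    def deploy(self, app: str, config: dict) -> str:
        """Apply the given config and return a status message."""


class KubernetesBackend(Backend):
    def deploy(self, app: str, config: dict) -> str:
        # A real implementation would call the Kubernetes API
        # or shell out to helm/kubectl; this just reports intent.
        return f"deployed {app} to Kubernetes with {len(config)} settings"


# New targets (Walrus, a cloud API, ...) would subclass Backend the same way.
backend: Backend = KubernetesBackend()
print(backend.deploy("demo", {"replicas": 2}))
```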
Hugo Pinheiro
OK, I've got to say that is pretty cool. The only thing that's a bit confusing: at the top it says you can use Kubernetes as a backend, but then at the bottom it says you need Walrus? Walrus is cool, but from a community perspective Argo CD, Flux, Crossplane and others already exist, so I would rather use an existing ecosystem for app deployment.
Romaric Philogène
@Lynn Out of curiosity: how realistically can this kind of tool be used in production? Is there a way to preview the prompt before applying any config? What does the workflow/experience look like?
Lynn
@Hugo Pinheiro Of course, you can integrate Appilot into any existing ecosystem to fit your routine. Walrus is just an example; here is a Kubernetes use case: https://github.com/seal-io/appilot/blob/main/examples/k8s_helm.md. There is NO vendor lock-in, so feel free to try~😉
Lynn
@Romaric Philogène There are some key issues that need to be addressed before it's production-ready, such as reliability, cost, and privacy. Using pre-trained models like GPT-4 may not be sufficient to tackle these problems; a fine-tuned, dedicated model could be the key. As LLM infrastructure evolves, we anticipate seeing new solutions in the short term.
It uses a workflow called ReAct (reason & act). Prompts do not apply config directly; they help generate the input that applies config. So users can review every input before any config change reaches the system.