# general
Human developers will always need CI/CD. CI is required whenever two or more devs work together, to find bugs faster by testing more rapidly. CD is required to avoid elaborate manual provision/upgrade workflows. Both are a direct technical response to one common, basic element: us, humans. It's about protecting ourselves from ourselves. A portal is just an abstraction layer over the built-in UI of your CI/CD tool that lets you add more context to the action.
What is going to change is how the pipeline itself gets developed: something along the lines of Dagger to overcome the explosion of tools that don't speak the same language. So we created a DSL to replace other DSLs 🙂 That, plus some new configuration languages like CUE (which Dagger is based on) and even the new one from Apple. Last would be more AI-based tooling around the pipeline: basically enhanced DSLs that hook into more knowledgeable systems to write/augment/fix pipelines.
Great points
I actually see CI/CD + scaffolding as the quick way to address the itch that Platform Engineering tries to scratch. I led the effort at my current employer to create what we call "Starter Kits." Want to build a Spring Boot API and deploy it to Kubernetes? Fill in some basic fields (like where your OpenAPI spec is) and we'll scaffold the boilerplate of your code, drop in a battle-tested CI/CD pipeline, deploy it using common base images and Helm charts, and get your app running in your dev environment on your first commit. At the end of the day, it's just code generation/templating (à la Cookiecutter) + CI/CD pipeline + related assets (base images, Dockerfiles that use those images, and Helm charts).
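A starter-kit flow like this usually boils down to a small set of template inputs. As a sketch (the field names here are hypothetical, not the actual kit Chris describes), the dev-facing form might be no more than:

```yaml
# starter-kit.yaml -- hypothetical inputs for a Spring Boot API kit
kit: spring-boot-api
project_name: orders-service
openapi_spec: api/openapi.yaml   # drives controller/model scaffolding
kubernetes:
  namespace: orders-dev          # target dev environment
```

Everything else (pipeline include, base image, Helm chart) is dropped in by the generator, so the dev's first commit already has the full paved road.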
I agree with @Chris Chandler in that we can make current CI/CD systems much more approachable to devs by thinking about the design and implementation of pipelines within a platform engineering effort (after talking to them about what they need to do in their automation) and exposing a simpler (but not restricted) interface to devs. This can be done via pipeline templating, default Helm charts, preconfigured integrations, provisioned secrets, etc. This way, a dev can build a CI/CD pipeline from preconfigured blocks. I think it's an illusion to think that you can hide all the inherent complexity of CI/CD systems from devs (e.g. just provide a Dockerfile and you are good to go), nor should you want to unless you are willing to force them onto a severely restricted default path. Better to provide them with building blocks that leave the option open to extend them or create custom config. As far as syntax goes, we always seem to be replacing it with another, better DSL, but even when it's in the same programming language (like Dagger) it requires learning how to (best) use it and not shoot yourself in the foot or reinvent the wheel multiple times. Again, preconfigured building blocks/patterns can help here.
cc @Solomon Hykes
There are two ways to do pipelines, @Ivo: you either go fully text-based, a.k.a. YAML, so you keep it in the repo, or you create a UI-based approach that lets the end user orchestrate the steps while enforcing some guardrails from a security perspective (so they can't remove mandatory steps).
Using something like Dagger, or a configuration language that crosses service boundaries, addresses the incompatible DSLs and schemas between the different services.
@Chris Chandler @Ivo @Arie Heinrich How much abstraction do you think should be in the templates? How abstract should the Helm chart, or the Dockerfiles, or even the CI/CD pipelines be? When do you decide to centrally manage what the teams create? I know Platform Engineering is, today, mostly about the golden path, and starting points are pretty straightforward: you create a template and people start from it. But what happens when they start to deviate from the template (i.e. they start overwriting the CI/CD pipeline, or not keeping their Dockerfile up to date)?
Think sane defaults over abstractions. Regardless of how special devs think their apps are, there’s not a ton to be opinionated about when it comes to containerizing and deploying them. Sure, the code and config are very unique, but how you package and ship it should be standardized. In our case, we have a common pipeline that is maintained centrally and is pulled into the project’s pipeline via an include. That allows us to iterate the pipeline logic centrally and everyone gets the change on their next commit. Their pipeline just becomes config at that point. More importantly, as you called out, they can’t change the pipeline (at least not without an MR my team approves 😁). Ditto with Helm charts. We drop in values file templates and the pipeline pulls in a centralized Helm chart at deploy time. Same story; changes are realized on the next deploy. Config in those has safe and sane defaults. Example: no HPA for dev, but enabled and set to 3-5 min/max for prod. All of this comes with docs in the repo explaining all the available nerd knobs so they can grow into more advanced config/options as their skills grow. All of the common bits are innersourced, so they can see - and contribute - all they like.
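The include pattern Chris describes can be sketched in GitLab CI terms (the project path and file names here are hypothetical, not his actual setup):

```yaml
# .gitlab-ci.yml in the app repo -- the project's pipeline is mostly config
include:
  - project: platform/pipeline-templates   # centrally maintained, MR-gated
    ref: stable
    file: spring-boot.gitlab-ci.yml

variables:
  APP_NAME: orders-service
  HELM_VALUES: deploy/values.yaml          # values file template dropped in by the kit
```

Because the repo only pins `ref: stable`, iterating the template centrally rolls the change out on everyone's next commit, exactly as described above.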
To be clear, what I describe above is currently opt-in, not mandated. Folks who roll their own pipelines, charts, etc. have the issues you called out - especially as Cyber ratchets up the security requirements. We're finding our best adoption on new projects, with some teams approaching 50+ repos built on the patterns above. Whether it's people preferring what they know, simply not having time to make a change, or having PTSD from previously trusting a centralized team for this, we definitely see folks hesitant to switch.
That's one way of doing it: one team owns all the pipelines and offers them "as a service." However, that makes the devs slightly less responsible for what they build, since a good, experienced team backs up the abstractions and maintains any issues. Another way is to let the devs own and maintain the pipeline, but build your abstractions on the same logic their software is built on. The central team can create Python libraries, NuGet packages, Go libraries, etc. that match the application code, and let the teams use those libraries as dependencies. If you don't want to do that at the level of the software, do it at the level of the CI/CD pipelines: create a shared library in Jenkins, your own tasks in Azure DevOps, your own GitHub Actions, etc., and even enforce some of them so devs can't bypass them.
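In GitHub Actions terms, the "own tasks devs can't bypass" idea maps to reusable workflows. A sketch (org/repo names and the `language` input are hypothetical):

```yaml
# .github/workflows/ci.yml in the app repo
name: ci
on: [push]
jobs:
  build:
    # the central team owns this workflow; devs consume it like a dependency
    uses: acme-platform/workflows/.github/workflows/build.yml@v2
    with:
      language: go
    secrets: inherit
```

Enforcement then comes from branch protection requiring the shared job, rather than from trusting every repo's YAML.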
We have a team of just 3 engineers maintaining the stuff I mentioned (in addition to other PE-related bits). Think of it in the context of Netflix's Full Cycle Developer philosophy. The devs can - and should - rely on the paved roads provided to them, but they're ultimately still responsible for running and owning their app. The challenge when you try to shift too much left is that you're putting what I call "high risk, low value" decisions in the hands of devs. Example: which base image to use. There are a surprising number of folks still using the OpenJDK image as their base, despite the project being deprecated in 2022. This is a (somewhat) innocent mistake and easy to miss if you're not keeping up with that - in addition to the other 5000 things on your plate as a dev. Why not remove that decision from the devs and lighten the cognitive load? It's risky to the company to allow potential vulns to creep in due to lack of awareness about the security posture of your base image, and it's not something most devs care about (hence low value to them). They just want a JRE to run their app. Sure, you'll catch that (hopefully) in your pipeline... we are running container scans, right? 😄 - but why make extra work and interrupts for the devs when it's so simple to start with a quality base image in the first place? Ultimately, it's the same underlying philosophy as Dagger: allow smart folks who focus on bits of the CI/CD process to build common bits for the devs and make them easy to consume.
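The base-image example is concrete: the `openjdk` Docker Hub image was deprecated in 2022, and Eclipse Temurin is a commonly recommended replacement. A paved-road Dockerfile (a sketch; the jar path is hypothetical) removes that decision entirely:

```dockerfile
# Dockerfile generated by the starter kit -- devs rarely need to touch it
FROM eclipse-temurin:21-jre    # curated, patched JRE base chosen by the platform team
WORKDIR /app
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Swapping the `FROM` line centrally then fixes the whole fleet on the next build, instead of relying on every team to notice a deprecation.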
Chris for the most part basically just described our approach to CI/CD configuration. It all starts with the golden path idea and we've modelled a CI/CD pipeline around that, which contains all the common steps most teams need. For us, it's not one single pipeline but we've broken it down into several templates for different steps of the pipeline. Teams construct the pipeline from these building blocks and adjust as necessary. Our pipeline templates are not owned by a single team per se, it's more of a community effort where several people from different teams, with affinity for CI/CD configuration, collaborate to improve the templates so that they are usable for all teams. None of this is mandatory but in practice we see that the devs are happy that they can just compose a pipeline from standardised building blocks. Helm chart is also just a single chart, maintained by a community effort, that is flexible enough to accommodate most use-cases and has sane defaults. Maybe the term should be composition instead of abstraction. Our devs can go all the way to the nitty-gritty details (of course we have certain guardrails in place to prevent any real disasters) and use none of our building blocks but then the burden is on them to show that they are secure, compliant, etc. As Chris said, there is a lot of value in providing templates/building blocks/whatever by knowledgeable people that solve common concerns all the teams face, versus letting every team reinvent the wheel (poorly).
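The composition model Ivo describes can also be sketched in GitLab CI terms (the template names are hypothetical): teams assemble a pipeline from several smaller community-owned templates instead of one monolith, and can still add their own jobs alongside them.

```yaml
# app repo pipeline composed from community-maintained building blocks
include:
  - project: community/ci-templates
    file:
      - blocks/build-maven.yml
      - blocks/container-scan.yml    # guardrail block
      - blocks/helm-deploy.yml

# teams remain free to append their own jobs next to the blocks
integration-test:
  stage: test
  script: ./run-integration-tests.sh
```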
100%, Ivo! We call ours “wrapper templates” as they are groups of LEGO for a given pattern. The sub templates are composable and reused across the various wrapper templates.