# general
g
👋 I'm curious how your teams approach Java-centric pipelines when Docker is involved. We're seeing that many pipelines for Java projects (built with Maven or Gradle) involve scripting container builds, image pushes, and ECS or K8s deploys using shell scripts or CI glue. I'm trying to understand how platform teams are solving for these gaps:
1. Do you expose image build/push as part of a developer platform?
2. Are developers responsible for Docker logic, or do you abstract it?
3. Do you treat Docker image construction as part of the app build or as a separate platform concern?
4. How do you handle failure-prone Docker interactions (image flakiness, ECS inconsistencies, etc.)?
Any patterns, anti-patterns, or platform designs you’ve seen work (or fail) would be super helpful. Especially from teams abstracting infra for Java devs. Thanks!
c
The main issue I have always had with Java containers is that every dev team tried to have their own arcane way of constructing the container. Even if everything goes well, you end up with a fleet of running containers that each needs its own tooling to track vulnerabilities etc. The pattern I have seen work in the past is to replace the arcane build process with a platform-provided standard. The easiest option is to simply use buildpacks (they do have their challenges for Java projects with very specific JVM configs/requirements, but they generally work well for at least 90% of cases). Paketo is the concrete implementation I have used successfully in the past - I am not currently tracking the project, so please explore with a pinch of salt --> https://paketo.io/
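For reference, a minimal sketch of what that platform-provided buildpacks path can look like with the pack CLI. The image name is a placeholder and builder tags change over time, so treat the exact names as illustrative rather than a recommendation:

```sh
# Build an OCI image from a Maven/Gradle project with Cloud Native Buildpacks,
# no Dockerfile required. The Paketo builder detects the JVM app and layers in a JRE.
pack build registry.example.com/payments-service:1.4.2 \
  --builder paketobuildpacks/builder-jammy-base \
  --env BP_JVM_VERSION=17

# Spring Boot projects can produce the same kind of image straight from the build tool:
./mvnw spring-boot:build-image \
  -Dspring-boot.build-image.imageName=registry.example.com/payments-service:1.4.2

docker push registry.example.com/payments-service:1.4.2
```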
d
My experience and view from an SRE/SecOps perspective is that you can do this a couple of ways. At one of the orgs I worked at we had ArgoCD and Rancher 1.4 deployed for the microservices platform, which ran predominantly Java with some Spring Boot apps. As the SRE I got the requirements from the devs in terms of what they needed to run their apps and built a custom-spec Dockerfile - iirc there were two, one for Spring Boot and one for plain Java with the JRE, both on lightweight Alpine base images. So basically just install the spec of what they want. I created a couple of schematics to identify/validate which packages were needed for which app(s) on each container, and set up two jobs in ArgoCD as a container image build pipeline [I pushed the built containers to the Artifactory registry] to create each 'application container' for the devs to use in their "app" build pipeline. I would regularly patch these as part of the Dockerfile.

To follow, there was another step in the image build where I'd use Sysdig to scan the container to ensure no vulns had been overlooked during the package build, as a box tick - "aka shift-left DevSecOps". I found this was more helpful to the devs than having them try to do this themselves. I subsequently learned, from when I worked at Aquasec, that a lot of other customers tend to do something like this: essentially they have a repository with container images [public repos] that's "dirty" and a registry with built/scanned images that are considered clean [post patching/updating the container images]. If you have a tool that can do runtime protection/CWPP - like Aquasec - then that's super helpful because you get both benefits. This is quite a good gated approach in any environment. You can use trivy, which is open source, to do a lot of the scanning, and it also has a bunch of other capabilities. hth
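A rough sketch of that gated build-then-scan step, assuming Docker and Trivy are on the build agent. The base image, jar path, and "clean" registry name below are placeholders, not the actual setup described above:

```sh
# Build the platform-provided Java image; the Dockerfile is fed inline here for brevity.
docker build -t registry.example.com/clean/java-app:1.0.0 -f- . <<'EOF'
FROM eclipse-temurin:17-jre-alpine
RUN addgroup -S app && adduser -S app -G app
COPY target/app.jar /opt/app/app.jar
USER app
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
EOF

# Gate on the scan: fail the pipeline if HIGH/CRITICAL vulns are found,
# and only push to the "clean" registry when the scan passes.
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/clean/java-app:1.0.0 \
  && docker push registry.example.com/clean/java-app:1.0.0
```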
@Gili Do you currently have any DevSecOps tools in your org/team?
j
My last two jobs used a combo of golden templates (such as those created from Backstage). This included a Dockerfile for Java using a Maven build, yes, but teams found ways to use Gradle or sbt instead. The model was always "bring your own Dockerfile" plus "bring a Makefile with (test, build, clean)"; the CI/CD would then run through those and eventually execute a Docker build/push, or use Kaniko for the same. Ultimately, this was never Java-specific until teams started wanting various build options like Jib, fabric8, or buildpacks.
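As an illustration of that model (not anyone's actual pipeline - the Makefile targets are the convention from the template, while the registry, repo name, and GIT_SHA variable are made up), the CI stage roughly amounts to:

```sh
# The golden template assumes each repo ships a Makefile with these targets,
# so this step stays language-agnostic.
make clean test build

# The platform-owned step then builds and pushes the image from the repo's own Dockerfile.
docker build -t registry.example.com/team-a/orders:"${GIT_SHA}" .
docker push registry.example.com/team-a/orders:"${GIT_SHA}"

# In clusters without a Docker daemon, Kaniko does the equivalent in-cluster:
# /kaniko/executor --context "$PWD" --dockerfile Dockerfile \
#   --destination registry.example.com/team-a/orders:"${GIT_SHA}"
```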
One note: what I thought worked very well for Java apps was having a common reference entrypoint.sh script that could control, set up, and validate common JVM flags such as heap, JAAS, and SSL file locations, especially when using various container orchestrators that are managed by the platform team, who ultimately has filesystem access into the running containers. This made debugging much easier, since all Java images from the template "acted" similarly at boot and standard metrics could be emitted at startup.
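A bare-bones sketch of that kind of shared entrypoint. The env var names, default heap setting, and jar path are invented for illustration, not a standard from the templates above:

```sh
#!/bin/sh
# Common reference entrypoint: normalize JVM flags so every Java image
# built from the template boots the same way.
set -eu

# Heap: default to container-aware sizing unless the app overrides it.
: "${JVM_HEAP_OPTS:=-XX:MaxRAMPercentage=75.0}"

# JAAS and SSL material: only add the flags if the files actually exist,
# and fail fast (loudly) if a path was configured but is missing.
EXTRA_OPTS=""
if [ -n "${JAAS_CONFIG:-}" ]; then
  [ -f "$JAAS_CONFIG" ] || { echo "JAAS config not found: $JAAS_CONFIG" >&2; exit 1; }
  EXTRA_OPTS="$EXTRA_OPTS -Djava.security.auth.login.config=$JAAS_CONFIG"
fi
if [ -n "${TRUSTSTORE_PATH:-}" ]; then
  [ -f "$TRUSTSTORE_PATH" ] || { echo "Truststore not found: $TRUSTSTORE_PATH" >&2; exit 1; }
  EXTRA_OPTS="$EXTRA_OPTS -Djavax.net.ssl.trustStore=$TRUSTSTORE_PATH"
fi

# Log the effective flags at boot so every container "acts" the same and is easy to debug.
echo "Starting with JVM opts: $JVM_HEAP_OPTS $EXTRA_OPTS"
exec java $JVM_HEAP_OPTS $EXTRA_OPTS -jar /opt/app/app.jar "$@"
```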