# general
m
I'm curious to hear if any of your companies are utilizing the modular monolith pattern. If so, how has it been working for you? Any insights you can share from the perspective of Platform Engineering? (Ex: any difference in tooling, capabilities, etc.)
t
In my mind it's not that much different from microservices in a monorepo. What I think is key is that, even if everything builds at once for every push, there is clear separation in test execution for each module. The more you can have "a test pipe per module" the better. Why do I say that? First of all, isolation. In a modular monolith we want the modules to stay isolated; that's what differentiates it from just a monolith, and that goes for their tests as well. Having the modules as first-class citizens in the pipelines not only makes the tests more resilient but also designs for change. It allows us to easily break out an individual module that becomes noisy and needs to scale independently into a microservice. If a modular monolith is well built, with full isolation in code and database, then typically the hardest part of breaking a module out into a separate deployment is the pipeline.
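As a rough illustration of "a test pipe per module" (the module names here are invented, and the real invocation would be whatever the build system uses, e.g. `bazel test //<module>/...`), the test stage fans out one run per module instead of one big run:

```shell
#!/bin/sh
# Sketch of a per-module test stage: one test invocation per module, so a
# red build points at exactly one module. Module names are hypothetical.
set -e
MODULES="orders fulfillment billing"
for m in $MODULES; do
  echo "testing module: $m"
  # in a real repo this line would be something like: bazel test //$m/...
done
echo "all module test pipes passed"
```

The payoff is exactly what the message describes: when a module later graduates to its own service, its test pipe already exists and just moves with it.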
t
Btw it was me who wrote the above. I'm just transitioning out of AWS and forgot I had not changed my account in here. Still, I agree with all I said.. 😉
b
> Still I agree with all I said..
That's quite the accomplishment 🤣 Agreed. It takes restraint to avoid the "big ball of mud" with monoliths. I like Tomas's approach to lay a golden path by making it harder to have tightly coupled modules in your monolith. I've seen it done wrong more often than right, but that also applies to microservices (distributed monoliths are painful).
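One loose sketch of "laying a golden path" that makes tight coupling harder: a pipeline check that fails when one module reaches into another. Everything here is hypothetical (directory layout, module names, and the grep-for-imports approach; a real setup would more likely use the build system's visibility rules, e.g. Bazel's `visibility` attribute):

```shell
#!/bin/sh
# Toy guardrail: fail the pipe if the billing module references the orders
# module directly. Paths and the import pattern are made-up placeholders.
violations=$(grep -rn "from orders" src/billing 2>/dev/null || true)
if [ -n "$violations" ]; then
  echo "module coupling violation:"
  echo "$violations"
  exit 1
fi
echo "module boundaries ok"
```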
m
This is some great insight, thanks guys!
> It takes restraint to avoid the "big ball of mud" with monoliths.
This is an interesting point, and one of my concerns. I'm not sure this is a problem Platform Engineering is suited to solve, other than helping to design tooling that simplifies "doing the right thing". @Tomas Riha it sounds like you've had to build support for this. Any suggestions on tooling we should consider (already assuming Bazel)?
t
@Michael Galloway yes, I've built my fair share of pipeline-related things over the years. I've also worked with a fair share of customers who have dug themselves into a hole with their tooling. I'm not going to recommend any specific tooling, but rather an approach and a mindset: "All tooling sucks, I don't want to debate which one sucks the least, and I don't want to spend ages evaluating tooling." I came to this conclusion after having done all of the above to the point of insanity, so it's my key mindset when working with tooling.

By clear separation of responsibility between pipeline orchestration, task orchestration, build tools, test tools, analytics tools, reporting tools and deployment tools, we can achieve a modular solution where each tool can be replaced without fully rebuilding the entire pipeline. A key to that is the task orchestration. I've seen so many pipelines where there are rows and rows of script code in the GitHub Actions steps. What I do instead is build pipelines that have clear interfaces between the tools. The GHA step should only call the task orchestration with a single line, `do build` or `do test` (`do` being whatever tool we chose).

The first reason, which I never compromise on, is portability. If I can run `do build` and `do test` locally with the exact same line of code as in the pipe, then good things will start happening. You might need an argument or two, but the code for that step is the same regardless of whether you build locally or in the pipe. The second reason is that it allows for different build, test and IaC tools, and swapping them out doesn't touch the pipe. This can be done with Bazel, Make or whatever. All suck. ;) But it's an execution tool, not a build tool, we are looking for here. Even though Bazel and Make are called build systems, they are more than just that, whereas Maven is just a build system in my mind. I want to be able to implement all steps in that orchestration tool, so everything from `make build` to `make deploy`. I can now switch between GitHub and GitLab by just rewriting my pipeline YAML. I can swap from Java to Python without affecting my pipeline. In your case, you can break a module out into its own microservice by pretty much just adjusting the pipeline.
```
bazel build
bazel test :order_test
bazel test :fulfillment_test
bazel run :deploy
```

Alternatively:

```
bazel build
bazel test :allmodule_test
bazel run :deploy
```
So if the pipe contains these four steps, then it's easy to just add `bazel run :deploy_fulfillment`, or duplicate it to a new pipe where you just remove the testing of the other modules. Whether you use a collection target for all tests or not kind of depends on how much you want to force the issue on isolation and how you do bootstrapping of new modules. If this were microservices I would go for the latter; in a modular monolith I might choose to be a bit more explicit.
b
This is good advice, Tomas.
m
Thanks Tomas, I like the framing as well. I agree, "all tooling sucks" 🙂 At least on a long enough horizon, all of them will need to be replaced when today's conditions are no longer the constraints we need to operate under. Thanks for the thorough and thoughtful answer @Tomas Riha (AWS)