# platform-toolbox
a
Anyone have good stories on making a monolith run efficiently and have a lot of developers contribute to it over the years?
I worked on two large monoliths, YouTube and Dropbox, both 4M+ lines of Python. Both seem to be fine ;)
Most of our time was spent on build, test, and release
The biggest wins were selective testing, ability to reason about things easily, and just running the monolith multiple times when we wanted services
a
I assume there was a dedicated team for this, building custom tooling that's not available on the market even today. I agree the difference between microservices and a monolith is an implementation detail, and having a managed platform makes sense. Was the most value in the end state of splitting the monolith, or in abstracting common code? Trying to decide if we should double down on our monolith or build a platform, and it sounds like the right answer is some amount of both.
c
Monolithic architecture and platforming are not mutually exclusive. At first glance it might seem that with a monolith a platform has diminished value, because every developer should be running more or less the same thing and there is not a big need for a common substrate to run different tech stacks on. In reality, the choice between a monolith and a service-oriented architecture (yep, could be micro) is mainly a choice about how you isolate and deploy the discrete parts of your architecture. Developers will still focus on one part of the problem and then want to deploy that. As Andrew said, there's immense value in being able to run multiple versions of the monolith at the same time, and a platform normally provides that capability. If you don't have it, every developer's change immediately becomes a full integration test, and you will see your build broken more often than green.
a
Bazel for selective testing, deployments were rsync over ssh ;) no need for complicated platforms
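As a concept, selective testing can be sketched without Bazel: given a map from test targets to the source files they depend on, only run the tests whose dependencies include a changed file. The file names and the `DEPS` map below are hypothetical; Bazel derives the real graph from BUILD files (e.g. via `bazel query`).

```python
# Sketch of selective testing: run only the tests whose dependencies
# include a changed file. DEPS is a hypothetical, hand-written stand-in
# for the dependency graph Bazel computes from BUILD files.
DEPS = {
    "tests/test_upload.py": {"app/upload.py", "app/storage.py"},
    "tests/test_watch.py": {"app/watch.py", "app/player.py"},
    "tests/test_auth.py": {"app/auth.py"},
}

def affected_tests(changed_files):
    """Return the tests that could be broken by the changed files."""
    changed = set(changed_files)
    return sorted(t for t, deps in DEPS.items() if deps & changed)

print(affected_tests(["app/storage.py"]))  # → ['tests/test_upload.py']
```

On a 4M-line codebase, the payoff is that a one-file change runs a handful of test targets instead of the whole suite.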
the platform referenced now is k8s, but it went a long time (10y+) before that. k8s was brought in for other reasons, as the adjacent pieces became Go services etc
a
👍 ty for the context. What do you mean by running multiple versions of the monolith at the same time? Like make a change for `/newEndpoint` and deploy a new version of the monolith, but only for `/newEndpoint`, while the other endpoints still go to the existing monolith? In a way like a rollout strategy?
a
we ran server-rpc, server-www, server-watch-page, etc
but traffic routed /watch to the server-watch-page instance, /rpc to the rpc instance, /* to everything else
physically isolated the clusters so now you have SOA but on a monolith ;)
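The routing above can be sketched as a prefix table in front of identical monolith instances: every cluster runs the same binary, and the router partitions traffic so each cluster's capacity can be sized and monitored on its own. The cluster names mirror the ones in the thread; the table itself is illustrative.

```python
# Hypothetical path-based routing in front of identical monolith
# instances. Each "cluster" runs the SAME monolith code; only the
# traffic it receives differs, which isolates capacity per endpoint.
ROUTES = [
    ("/watch", "server-watch-page"),
    ("/rpc", "server-rpc"),
]
DEFAULT = "server-www"  # /* goes to the general pool

def pick_cluster(path):
    """Map a request path to the cluster that should serve it."""
    for prefix, cluster in ROUTES:
        if path == prefix or path.startswith(prefix + "/"):
            return cluster
    return DEFAULT
```

In practice this layer would be a load balancer or reverse proxy config rather than application code, but the decision it makes is exactly this prefix lookup.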
a
makes sense, since one issue with monoliths is managing the capacity and usage of different endpoints
a
tbh this scaled to hundreds of developers + a scale no one really sees
I think FB would probably say the same thing
we ended up building and investing in tools like MyPy, Bazel, and even a prototype to rewrite and replace CPython
a
Would you do it all again, or have separate services once you have enough teams?
a
It’s hard to argue with the outcomes of DBX, YouTube, FB. I think a monolith with the smallest number of services is always better. It requires less cognitive overhead and things are way simpler. The answer is contextual, but in startup life I argue monolith every day
If you hit “scale” you can also break it up, bc capital isn’t an issue, the best talent wants to join you, etc
the problem is more for those in between. My experience is way more oriented towards outliers
c
> What do you mean by running multiple versions of the monolith at the same time?
Literally that. If you’re only able to run one “version” (or better, “instance”) at a time, you’ll always lack the capability to experiment. From there you could turn it into anything - even a rollout strategy, which would be very uncommon, but possible.
What you described regarding the routing is very common. If you start chipping services off the monolith, you need a layer that decides whether a request goes to the old code in the monolith or to the new service on the side - the strangler pattern. Routing / transparent proxy / etc. is the name of the game. Done right, this might even open up paths to scale via multiple instances of the monolith - obviously only if they play nicely with each other on the same backing data.
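A strangler-pattern router is the same prefix-routing idea with a migration set instead of a role table: the set starts empty and grows one endpoint at a time until the monolith serves nothing. The `/newEndpoint` path comes from the thread; the backend names are invented for illustration.

```python
# Strangler pattern sketch: endpoints that have been migrated are
# served by the new service; everything else still hits the monolith.
# Backend names are hypothetical.
MIGRATED = {"/newEndpoint"}

def backend_for(path):
    """Decide which backend serves a request during the migration."""
    return "new-service" if path in MIGRATED else "monolith"

# As the migration progresses, paths move into MIGRATED one by one,
# shrinking the monolith's share of traffic without a big-bang cutover.
```

Because the routing layer owns the decision, each endpoint can be moved (or rolled back) independently, which is what makes the pattern low-risk.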