# general
r
One school of thought holds that dependencies should always be pegged to specific versions. This provides repeatability and traceability — a build at a given commit is guaranteed to produce the same test results, and you can always see what versions were used in a build because it’s all in version control. But there’s a maintenance load involved in keeping versions up-to-date, especially in a world where dependency trees are often huge and tangled. Another approach is to just YOLO and basically take `latest` of everything: lower maintenance, less friction, always up-to-date on security patches, etc. No repeatability, but who really repeats builds? And is repeatability just an illusion anyway, since no developer can step into the same code stream twice? No rollbacks, only roll-forwards! Cards on table: I’m mostly for pegging versions. But I’m kinda old and I don’t like surprises in my release flow. Dev teams I’ve worked with have tended to push back on pegging, saying that the benefit isn’t worth the cost and you just don’t need that level of repeatability. What do you all think?
x
Agree with your arguments. Pinned versions (googling "pegged versions" turned out NSFW, lolsob), and then tools like Dependabot or https://docs.renovatebot.com/ to update them.
Also, it's not only about repeatability but also stability: who knows what might break or change in the newest release that you're not even aware is being introduced.
s
Pinning vs ranges depends heavily on what kind of dependencies we are talking about. Libraries or frameworks should define their dependencies as ranges rather than pinning, to reduce footprint and decouple patching from their own release cycle. If you are the end user, in the sense that your code is not directly imported but rather the artifact is consumed, then you should definitely pin. I recommend this read: https://docs.renovatebot.com/dependency-pinning/ Disclaimer: I'm a RenovateBot maintainer.
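To make the range-vs-pin distinction concrete, here's a minimal Python sketch using the `packaging` library (purely illustrative, nothing Renovate-specific; the versions are made up): a library-style range keeps matching new releases, while an app-style pin resolves to exactly one version.
```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Library-style constraint: any 2.x release satisfies it, so consumers can
# pick up patches without the library having to cut a new release.
library_range = SpecifierSet(">=2.0,<3.0")

# Application-style pin: exactly one version satisfies it, so every build
# resolves the same artifact.
app_pin = SpecifierSet("==2.3.1")

for candidate in ["2.3.1", "2.3.2", "2.4.0"]:
    v = Version(candidate)
    print(candidate, "in range:", v in library_range, "| pinned:", v in app_pin)

# 2.3.1 in range: True | pinned: True
# 2.3.2 in range: True | pinned: False
# 2.4.0 in range: True | pinned: False
```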
j
You can also have a mixed strategy where some parts of your build are pinned and some parts are latest. You can really take it on a case by case basis
r
My company is actually building a service for keeping app dependencies up to date. The idea is to remove toil from your business while keeping your apps secure and your teams focused on value-added activities. We're currently building a landing page. Meanwhile, I have a pretty nice doc, which I've shared with some folks from this community, where we explain the main challenges and things to consider in the app dependency maintenance space. In case you're interested in getting a copy of the doc or just having a chat, don't hesitate to send me a DM.
s
How does it compare to Dependabot/Renovatebot?
r
Dependabot, Renovatebot, X-bot... they just solve one part of your problem: they suggest changes to your app. But the visibility into your portfolio of apps, the impact of your changes, and the "uncertainty" of applying the change can all be significantly improved. That's without even speaking about testing: how can you validate that your app and its dependencies still work as expected?
Besides, every org/team is different; that's why we're not only developing tools but also building a service to support companies in dealing with app dependency maintenance.
j
Am I hearing a potential service catalog user story here?
s
Yeah, that is why we are using a Backstage plugin for Renovate
r
Backstage is going to help, but it's still not going to be very effective at addressing uncertainty when applying an update, visibility, and testing.
a
grit.io - there are others, but almost all the big tech companies do it this way
r
I've interviewed many engineers so far; many of them worked for large companies on platform engineering teams. Some were dealing with app dependency maintenance, but only when the apps were mature enough (CI/CD pipeline in place, good test coverage, etc). It was very hard for all of them to keep apps up to date and validate that their changes were not breaking anything. Besides, it was always a challenge to go and ask app teams (usually very busy) for some help with broken apps that couldn't be easily updated.
a
Bazel or other 2nd-gen build systems are also pretty much required to make anything reasonable work
j
rodolfo, what you are describing at scale is also very much a cultural problem, less so a purely technical one. product teams need to balance some level of hygienic activity as part of the maintenance of the apps they own
tooling can help reduce the effort, but it won’t make it go away
r
True @Jordan Chernev! That's why I think you need to combine tooling with a service that takes care of the hygiene toil
j
the way i’ve seen this done is automation of proposed changes, say around zero-day exploits, which get auto-suggested and pushed as PRs to the codebases under purview; the respective owning teams are then asked to review and accept or deny the proposed changes and subsequently coordinate a rollout to all of their environments
r
It's like going to the dentist: you have appointments at a certain frequency that you don't want to attend, but you need to
j
we are dancing around it a bit but this really is the subject of run time governance
the way i’ve done this in the past is to capture and expose that type of extra dimensionality at the app level to all sorts of different groups with different interests / stakes, e.g. app-level owners, SREs, platform teams, security personnel, senior leadership
r
Why runtime governance? I mean, those apps "should" be built at a certain frequency. Could you please elaborate more on this argument? @Jordan Chernev
Yes to this. But if the team is not operating there yet, someone would need to help them get there
j
in my experience, it’s not necessarily true that apps “should” be built at a certain frequency. some are legacy apps / monoliths / 3P vendor software with their own dependency chains; the reasons are many
plus, you do want to leave the decision around deployment frequency and tactics to each respective product team
some may choose to do hourly builds, others nightly, weekly, adhoc / on demand…
run time governance helps you capture the state of the union in terms of metadata across all of these flavors
that way, you can build tooling and automation that can help with specific “slices” of use cases in the environment
r
Good points there, especially around legacy and 3rd-party vendor apps
j
the idea being that the service catalog behind the run time governance is really the metadata for the entire organization
once you have that, it really changes the entire ball game. you can do so many cool things with it
a
For package management in general:
j
“We were enthusiastic pinners, until X happened and torpedoed our startup — never again!”
if pinning torpedoed your startup, you likely got very unlucky somewhere along the way. there are much bigger and more common risk factors that usually impact a startup’s success
a
@Ron Hough is generating SBOMs also something you are considering?
r
@Jordan Chernev: lol! I agree… but if it had happened to someone, it’d be a fun story. (To hear, at least!) @Andrew Dennis, I suppose SBOMs are closely related. If you pin thoroughly, that’s a lot of the job right there. I’ve not been in a context where an SBOM has been a super high priority — until pre-acquisition lawyers start due-ing diligence [lolsic].
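Tangent, but to illustrate the “if you pin thoroughly, that’s a lot of the job” point: a quick Python sketch that dumps the name and exact version of everything installed in the current environment, which is the core inventory an SBOM formalises (real formats like CycloneDX/SPDX add licences, hashes, relationships, etc.; this is just the raw idea).
```python
import json
from importlib.metadata import distributions

# Dump the name + exact version of every distribution installed in the
# current environment: the raw inventory an SBOM formalises.
# (Real SBOMs would also carry licences, hashes, and dependency relationships.)
inventory = sorted(
    ({"name": dist.metadata["Name"], "version": dist.version} for dist in distributions()),
    key=lambda d: (d["name"] or "").lower(),
)
print(json.dumps(inventory, indent=2))
```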
a
Bazel expert here, in case anyone wants 100% reproducibility 🙂
^ some highlights from Bazelcon Community day at the close of 2023 in Munich 😄
g
A colleague and I wrote up two contrasting positions on automatic dependency updates, let me know if this is useful:
• https://beny23.github.io/posts/automatic_dependency_updates/
• https://www.cosotateam.com/post/automating-dependency-updates-the-big-debate
r
@Gerald Benischke Nice. Thanks for posting those. I think “It depends” is exactly right. IF your project has really, really thorough automated testing, it may well be safe enough to auto-update deps. That can be a really tall order though… like: do you have automated benchmark regression testing that would detect the uptick in memory usage or the slowdown in request processing caused by that lib update? (You probably should… but let’s be real here.)

The argument that auto-update keeps you current with the latest security fixes is compelling… but it also keeps you current with the latest bugs and exploits. I never install an iOS x.0.0 update on my phone, for example. I’m going to let all the early adopters find the low-hanging fruit that Apple’s testing didn’t.

Auto-updating only for minor/patch numbers naively seems like a good idea, but not everyone interprets semantic versioning the same way. (`cargo` anyone?) Some packages can be reliably upgraded without issues, but others cannot.

Ultimately, the thought of updating a dependency without anyone taking a glance at the release notes makes me queasy. But the alternative seems soul-crushingly impractical. Anybody working on a project to apply LLMs to automatic dependency updates? A renovatebot that digests dependency release notes and puts the major highlights in the upversioning commit would come in very handy. Particularly if you could flag specific packages for extra scrutiny.
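For what it’s worth, here’s a rough Python sketch of that “only auto-merge minor/patch bumps” policy, which only holds up if the package actually follows semver (the whole problem); the function names and example versions are made up for illustration, not any real bot’s behaviour.
```python
from packaging.version import Version

def update_kind(current: str, candidate: str) -> str:
    """Naively classify an update by the first semver component that changed."""
    cur, new = Version(current), Version(candidate)
    if new.major != cur.major:
        return "major"
    if new.minor != cur.minor:
        return "minor"
    return "patch"

def auto_mergeable(current: str, candidate: str) -> bool:
    # Policy sketch: wave through patch and minor bumps, hold major bumps
    # for a human to read the release notes.
    return update_kind(current, candidate) in {"minor", "patch"}

print(auto_mergeable("1.4.2", "1.4.3"))  # True  (patch bump)
print(auto_mergeable("1.4.2", "1.5.0"))  # True  (minor bump)
print(auto_mergeable("1.4.2", "2.0.0"))  # False (major bump, needs review)
```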
j
Interesting thread, thanks for asking! This recently came up in the London CoffeeOps 😄

I'd also very much +1 using pinning, and Renovate (as the best option I've tried across Renovate, Dependabot, Snyk - but note that I am a contributor to Renovate) as a way to make sure dependencies are kept on top of.

I'm personally working on a tool called Dependency Management Data (DMD) which aims to make this easier. It can take data from Renovate, Dependabot or SBOMs and make it easier to see where things need upgrades, or more importantly, where you're using things that are unmaintained, deprecated, have security issues, or where maybe you're just quite a few libyears behind.

DMD also gives you an ability to get SQL/GraphQL access to your dependencies, so you can ask things like "what Terraform modules are my teams using?" as well as a whole host of other things.

I've found that as well as having insight into it, you also need a strategy around it, and processes to help empower teams to keep on top of it (and correctly prioritise it).
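To give a flavour of the SQL side, here's a rough Python/SQLite sketch; the table and column names are hypothetical placeholders for illustration, not DMD's actual schema.
```python
import sqlite3

# Hypothetical schema (NOT DMD's real one): one row per (repo, dependency),
# with a dependency_type column and the version currently in use.
conn = sqlite3.connect("dependencies.db")
rows = conn.execute(
    """
    SELECT repo, dependency, version
    FROM dependencies
    WHERE dependency_type = 'terraform-module'
    ORDER BY dependency, repo
    """
).fetchall()

for repo, dependency, version in rows:
    print(f"{repo}: {dependency} @ {version}")
```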