# platform-blueprints
f
Is anyone using a feature-flag platform in their DP, and if so, which one and how are you using it? What is the most important thing you are trying to achieve with it?
m
We use split.io at Doma, and at Netflix we built our own (called Fast Properties). There are two common use cases I’ve seen: 1) global config values, and 2) feature toggles. The feature toggle use case was notorious for causing outages at Netflix, due to a lack of documentation, a lack of cleanup (removing old toggles), and unintended dependencies. We tried a few ways to introduce safety into the change process, but it was non-trivial to do.
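To make the two use cases concrete, here is a minimal sketch. The in-memory store, key names, and values are hypothetical stand-ins for whatever platform backs the flags (split.io, Fast Properties, etc.), not their actual APIs:
```python
# Stand-in flag store; a real platform would serve these values remotely.
flag_store = {
    "payments.request_timeout_ms": "500",         # 1) global config value
    "checkout.new_recommendation_engine": "off",  # 2) feature toggle
}

def get_flag(key: str, default: str) -> str:
    return flag_store.get(key, default)

# Global config value: every instance reads the same operational setting.
request_timeout_ms = int(get_flag("payments.request_timeout_ms", "500"))

# Feature toggle: branch the code path, ideally per user or cohort.
def recommendations(user_id: str) -> str:
    if get_flag("checkout.new_recommendation_engine", "off") == "on":
        return f"new engine results for {user_id}"
    return f"legacy results for {user_id}"

print(request_timeout_ms, recommendations("u123"))
```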
j
Uber released some tooling around this a while back https://github.com/uber/piranha
Same here: we have deployment-related configs and then user-based feature toggles for A/B testing, gradual roll-outs, etc.
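For the gradual roll-out / A/B case, the usual pattern is to hash the user into a stable bucket and compare against a roll-out percentage. A rough sketch (the flag name and the 10% figure are made up for illustration):
```python
import hashlib

def bucket(user_id: str, flag_name: str) -> int:
    """Stable 0-99 bucket so a user always lands in the same cohort for a given flag."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, flag_name: str, rollout_percent: int) -> bool:
    return bucket(user_id, flag_name) < rollout_percent

# Roll the new checkout flow out to 10% of users.
print(is_enabled("user-42", "new_checkout", 10))
```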
f
Thanks for the feedback. Those are the same use cases I had on my list. I’m working on a product that enables this and putting together some example use cases, so I’m glad that lines up. Part of the service we are building runs as a global multi-cloud deployment, meaning low latency and strong consistency. From the looks of it, neither split.io nor piranha supports strong consistency. Do you think that would be valuable in your use case, or do you not care about it?
My hypothesis for strong consistency is that you want to be able to turn off feature flags everywhere at the same time. Say you turned something on by mistake; you don’t want the turn-off to take a while to reach eventual consistency globally. That could mean a broken or wrong feature stays on longer for some users.
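Here is a toy illustration of that kill-switch concern. The store class and timings are invented just to show the gap: with an eventually consistent store, each region can keep serving a stale "on" value after the flag is flipped off, whereas a strongly consistent read would reflect the flip immediately.
```python
import time

class EventuallyConsistentStore:
    """Toy store where writes take propagation_delay_s to become visible to reads."""
    def __init__(self, propagation_delay_s: float):
        self._value = "on"
        self._pending = None
        self._delay = propagation_delay_s

    def write(self, value: str) -> None:
        self._pending = (value, time.time() + self._delay)

    def read(self) -> str:
        if self._pending and time.time() >= self._pending[1]:
            self._value = self._pending[0]
            self._pending = None
        return self._value

store = EventuallyConsistentStore(propagation_delay_s=2.0)
store.write("off")    # operator flips the kill switch
print(store.read())   # still "on": stale reads until propagation completes
time.sleep(2.1)
print(store.read())   # "off" only after the delay
```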
m
I don’t think strong consistency was an issue for us, but it’s basically table stakes honestly
There is a more fundamental issue with feature flags I think
Which is that they present a way to circumvent safe delivery practices (i.e., the controlled introduction of new code)
Think of “turning on a feature” as the same as deploying new code
Delivery pipelines are typically designed to test code and integrations before introducing them to prod
Feature flags present a way to bypass that safety mechanism
That was the bigger problem at Netflix
f
Yep, that makes sense. You essentially bypass all the existing safety mechanisms.
m
Yes exactly! It's actually pretty crazy when you think about it. Here are the general problems I've observed with feature flags:
1. Bypassing of safe delivery practices
2. Lack of sufficient automated testing for the code behind the feature flag
3. Deprecated/legacy feature flags with no documentation and the original owner long gone
4. Dependencies creeping into the code base that expect a feature flag to be in a certain state
5. Too many feature flags, which become a nightmare to keep track of
6. Overly broad feature flags (a large percentage of the code behind a single flag)
Of course most of those can be mitigated if you have super fantastic code reviews, clearly defined policies, linters, etc. But the reality is, 99.9% of companies don't have that, and most wouldn't want it for fear of creating an overly complex developer experience.
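One cheap version of the linter/policy idea is a flag registry checked in CI: every flag gets an owner and an expiry date, and the build fails once a flag outlives its expiry. A rough sketch (the registry format, names, and dates are invented for illustration):
```python
from datetime import date

FLAG_REGISTRY = {
    "new_checkout":      {"owner": "payments-team", "expires": date(2023, 3, 1)},
    "legacy_search_off": {"owner": "search-team",   "expires": date(2022, 11, 15)},
}

def stale_flags(today: date) -> list[str]:
    """Return flags whose expiry date has passed and should be removed from the code base."""
    return [name for name, meta in FLAG_REGISTRY.items() if meta["expires"] < today]

if __name__ == "__main__":
    overdue = stale_flags(date.today())
    if overdue:
        raise SystemExit(f"Stale feature flags past expiry, please remove: {overdue}")
```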
(Sorry, for the long rant... I've just seen lots of horror stories)
f
It’s funny how it almost goes full circle. We build rigid release pipelines, then we introduce feature flags to bypass them, and before you know it you need to create another rigid process to restrict feature flag use.
This is helpful. I’m halfway through my guide on how to use the product for feature flags, so I’ll probably keep going with it, but I’m also going to make one for service discovery.
m
> We build rigid release pipelines, then we introduce feature flags to bypass them, and before you know it you need to create another rigid process to restrict feature flag use.
Well said... this is exactly the issue!
f
BTW looking for beta testers if anyone is interested in testing our edge key-value store 🙂 We’d love some feedback.
m
We used feature flags at YikYak for both A/B testing and rolling out new features. We made a rule to remove the flags once a feature was rolled out to 100% of the users. Having a small engineering team made this easy to coordinate. On my current team, we tried LaunchDarkly and Split but ended up creating our own, aptly named Dark Launchly ( 🙂 ), to do A/B testing.
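The "remove at 100%" rule can also be automated in a small way, for example by listing roll-outs that have hit 100% as cleanup candidates. A trivial sketch (flag names and percentages are made up):
```python
# Roll-out state as the flag platform would report it (values invented).
rollouts = {
    "new_feed_ranking": 100,
    "photo_uploads":     25,
}

# Anything at 100% no longer needs a toggle; delete the flag and its dead branch.
cleanup_candidates = [name for name, pct in rollouts.items() if pct >= 100]
print("Remove these flags from the code base:", cleanup_candidates)
```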
f
@Michael Smith thanks for sharing. What made you decide to build your own?
m
At YikYak, we did it because there weren't any options at the time. On my current team, we mainly did it due to cost, and we didn't really need all the bells and whistles Split and LaunchDarkly provide. They are both great products, though.
f
Gotcha, makes sense. What were your MVP requirements on your current team, if you don’t mind me asking?
Finished my guide here, if anyone is interested. https://developers.seaplane.io/guides/mdkvs-feature-flags Learned a lot writing it.