# platform-blueprints
r
Greetings folks. Does anyone here have experience integrating security into platform design? I’m currently in the process of doing so and would love to get insight (tooling, processes, team structure) from those who’ve gone ahead of me. Topics of interest would be:
• Pitfalls you’ve encountered
• Security and the SDLC (dependency vulnerability and container scanning)
• SIEM
• Multi-cloud security and/or compliance (think HIPAA, GDPR) posture scanning and reporting
• How have you “platformised” (somebody’s gotta add that word to a dictionary) security?
  ◦ Eliminate cognitive load from engineers by integrating security tooling into the platform
  ◦ How have you made vulnerability findings and recommended remediation more readily accessible to engineers?
• What metrics are you using to measure the success of security initiatives/adoption/value?
PS: I realise the scope of the question is pretty wide 😅 I’d be happy to split this into multiple threads OR maybe create a platform-security channel
s
Pitfalls:
• Legacy tooling that isn’t self-service or doesn’t provide rapid feedback in a pipeline
• Lack of platform engineers with security backgrounds
• Legacy security policies built around manual processes
There’s a lot of modern tooling now (Snyk, GitHub Advanced Security, Lacework), so there’s no limitation there. The issue in my experience is adapting an older company’s policies to be more platform-centric and developer-focused, and adapting the organization’s approach to risk identification and mitigation.
I think my answer about “how” is generally:
• Create a high-level mission for the platform security team to give them boundaries
• Create a high-level vision for what good looks like for them
• Have a measurable strategy for what they need to achieve, and then let them plan/organize to that.
m
Even with the new tooling (we are using Snyk for image scanning and Lacework for security posture), we still had to build a process to remediate security findings, with due dates based on CVSS scores: e.g. identify the owner of vulnerable libraries when those are shared between a few dev teams, find out which applications/customers are impacted by images with vulnerable libraries, and so on. Just to note that we didn’t find a silver bullet and still have a semi-automated process. So any ideas are welcome!
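(Editor’s aside: the “due dates based on CVSS scores” idea above can be sketched in a few lines. This is a hypothetical illustration, not the poster’s actual policy; the score bands and SLA windows are assumptions to adjust to your own risk appetite.)

```python
from datetime import date, timedelta

# (minimum CVSS score, days allowed to remediate) -- assumed bands, not a standard
SLA_BANDS = [
    (9.0, 7),    # critical: fix within a week
    (7.0, 30),   # high
    (4.0, 90),   # medium
    (0.0, 180),  # low
]

def due_date(cvss_score: float, found_on: date) -> date:
    """Return the remediation deadline for a finding based on its CVSS score."""
    for threshold, days in SLA_BANDS:
        if cvss_score >= threshold:
            return found_on + timedelta(days=days)
    raise ValueError(f"invalid CVSS score: {cvss_score}")

print(due_date(9.8, date(2024, 1, 1)))  # 2024-01-08
```

A table like this is easy to encode once in the platform so every scanner finding gets a deadline attached automatically.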
r
Appreciate both your responses on the topic 🙏
> Have a measurable strategy for what they need to achieve
@Schuyler Bishop Agreed! The challenge I’m experiencing at the moment is: where do I begin when defining the security vision and the success metrics that come along with it? Some thoughts that’ve come to mind so far:
• Perform some threat-modelling activities on our current setup to fuel the security vision, thereby giving us more measurable and targeted success metrics. In other words, let’s avoid boiling the ocean in our attempt to make things secure. The cons of this approach IMO being:
  ◦ Threat modelling seems to be a speculative approach that could leave security gaps if one doesn’t foresee every potential avenue for system infiltration
• Start with the compliance schemes (HIPAA/GDPR etc.) and use those to define the security initiatives. Cons:
  ◦ Being compliant with a particular scheme also isn’t a good measure of how secure one’s infrastructure/IT systems are
Of the two, I prefer the former, but I’d love to hear your thoughts on this.
> with the due dates based on CVSS scores
@Martynas Dabašinskas Ah, that’s an interesting approach! Wasn’t aware of CVSS scores.
> identify owner of vulnerable libraries if those are shared between few dev teams,
Have you been able to identify any automation/tooling geared towards automatically aggregating/mapping code ownership based on source control (git)? I imagine the process of having to hunt down teams and dependencies manually would be super daunting.
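(Editor’s aside: one common answer to the ownership question above is a GitHub-style CODEOWNERS file, which maps path patterns to teams. Below is a simplified, hypothetical sketch of routing a vulnerable file to its owners; the team names and paths are made up, and the matching is looser than GitHub’s real rules.)

```python
import fnmatch

# Sample CODEOWNERS-style rules (illustrative only)
CODEOWNERS = """
# pattern            owners
/services/payments/  @org/payments-team
/services/search/    @org/search-team
*.tf                 @org/platform-team
"""

def parse_codeowners(text):
    """Parse 'pattern owner...' lines, skipping blanks and comments."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *owners = line.split()
        rules.append((pattern, owners))
    return rules

def owners_for(path, rules):
    """Return the owners of the last matching rule (as on GitHub)."""
    matched = []
    for pattern, owners in rules:
        # directory-prefix rule or glob rule (simplified vs. real CODEOWNERS)
        if path.startswith(pattern.lstrip("/")) or fnmatch.fnmatch(path, pattern):
            matched = owners
    return matched

rules = parse_codeowners(CODEOWNERS)
print(owners_for("services/payments/api/requirements.txt", rules))
# ['@org/payments-team']
```

Joining a scanner’s list of vulnerable file paths against rules like these is one way to get from “CVE in library X” to “ping @org/payments-team” without the manual hunt.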
> we still had to build process to remediate security findings
Would these be automatically triggered playbooks, or sets of pre-aggregated documentation on how a team impacted by a certain class of vulnerability can remedy their systems?
m
In my experience, people, processes, and tooling are fairly straightforward and solvable. Advice above is spot on, IMHO. I think @Schuyler Bishop nailed it with this point though. This is the hardest aspect:
> The issue in my experience is adapting an older company’s policies to be more platform-centric, developer-focused and adapting the organization’s approach to risk identification and mitigation.
What I've experienced is that the hardest part of introducing security into a platform, especially at an existing business, is the cultural change required. The security industry has built up quite a long backlog of rigid interpretations of best practices, and many folks in that space are reluctant to focus on the spirit vs. the letter of those practices. This results in security theater that quickly diminishes trust with engineering teams and undermines engagements. I'd emphasize a concerted effort to connect the “why” of security for your Eng customers, hire security folks with an Eng background, and make empathy and communication a critical skill set for those you bring on.
❤️ 1
👍 2
r
I second @Michael Galloway and some of the above. Having previously worked for a SAST vendor, culture is a big one. I’ve seen plenty of cases where Sec and Dev sit in different silos and distrust each other, reinforced by misaligned KPIs. I’ve seen examples where the sec teams don’t get access to the source code, or just count vulnerabilities without a feedback-fix loop. Obviously, a common goal is the first step, but so is visibility across silos. Trust can be built (IMHO) when security scanning results are visible across the relevant parts of the organisation and there is a common incentive to address them. Unfortunately, some vendor models don’t help: you get charged if you add more “eyes” to view results. As a consequence, companies start to build their own results viewers and dashboards and sink money into that process.
👍 1
I could go on, but from experience a good start is to have executive buy-in for a “shift-left” approach. Not too different from platformization.
💯 3
r
Amazing tips folks. If anyone in this thread would be up for a Zoom call for a further exchange of ideas around the topic, just give me a shout and I'll be more than glad to set one up 🙏
m
Late to the thread here, but I would break this into at least three subproblems:
1. Protection of the platform itself. How do you ensure that users cannot overstep their entitlements and view/change code belonging to projects other than the ones they are intended to be working on? What happens if there is a vulnerability in the platform itself and you need to release a security patch? These are the usual ‘product security’ questions; many of them are process- rather than technology-centric, but they can be amenable to automation.
2. Vulnerability-free deliveries via the platform. How do you ensure that any vulnerabilities occurring in the code as it is being developed are found promptly, and that the right information (including sometimes even an automatic patch PR) is surfaced in such a way that developers can act as quickly as possible? This covers first-party code being written as well as vulnerabilities introduced via third-party dependencies. Secondarily, how do you enforce a gate so that no one can release code containing unfixed vulnerabilities? And what happens after release: what is the feedback loop when a new vulnerability is found in a dependency of code that is already deployed? The latter is common for open-source components, where new vulnerabilities are found all the time.
3. Compliance. How do you defend the integrity of 1 & 2 in the face of controls audits for e.g. PCI, HIPAA, etc.? This is probably the only one of the three domains where you need to be literate in heavyweight security standards such as ISO 27001, NIST SP 800-53, and most recently NIST SP 800-161r1, because these are the questions that the auditors will ask.
I do believe that, done properly, the partnership between dev and sec teams is crucial to building a platform that encompasses DevSecOps, and it actually makes life much easier for all involved while addressing the real-world problem of software supply chain security.
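(Editor’s aside: the release “gate” in point 2 is easy to prototype. Below is a hypothetical sketch; the report shape is an assumption loosely based on common scanner JSON output, and a real CI wrapper would exit non-zero when blockers remain.)

```python
# Rank severities so they can be compared against a cutoff.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, cutoff="high", waived_ids=()):
    """Return the findings that should block a release:
    anything at or above the cutoff that has not been explicitly waived."""
    return [
        f for f in findings
        if SEVERITY_RANK[f["severity"]] >= SEVERITY_RANK[cutoff]
        and f["id"] not in waived_ids
    ]

# Illustrative scan report (made-up shape, real CVE IDs)
report = [
    {"id": "CVE-2021-44228", "severity": "critical"},
    {"id": "CVE-2020-0001", "severity": "low"},
]
blockers = gate(report, cutoff="high")
print([f["id"] for f in blockers])  # ['CVE-2021-44228']
```

The waiver list matters as much as the gate itself: it is what turns a hard stop into an auditable risk-acceptance decision, which also feeds point 3 (compliance).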
Also - for reference - you don’t need to start from scratch here either, there are some great community efforts, e.g. https://github.com/cncf/tag-security/blob/main/supply-chain-security/secure-software-factory/secure-software-factory.md p.s. also happy to discuss and I think a platform security channel would be a great addition!
r
Very late to the thread, but I ran into this while searching for discussion regarding SOX-like compliance in CI/CD implementations. Anyone aware of other threads where this is being discussed? In particular, managing separation of duties (developers, reviews, approvals, deploys, proving software is correct, etc.)
m
Hey @Ryan Grimard, we have some experience on our end at Doma around compliance challenges. Happy to share what we're doing