Slackbot
03/06/2024, 9:54 AM
Clemens Jütte
03/07/2024, 7:34 AM
Oshrat Nir
03/07/2024, 8:46 AM
Ilia Chernov
03/07/2024, 7:39 PM
> The problem really is the robot in this case. You can't observe whether a robot is acting outside of normal behavior the way you can with a human, so interactive logins are key. Robots usually don't have prescribed ways for privilege escalation. The god-mode admin you describe will log into boxes as a regular user and then use prescribed ways to escalate his privileges to admin.
I've understood this as follows: if the robot needs to do X, Y and sometimes (in very rare cases) Z, we have to give it permissions for everything at once (X, Y and Z), while a person can acquire the permission for Z through privilege escalation. Is that correct, or did you mean something different?
> The CD master might create a job, but a runner that is sitting in-network of the deployment target will pick up that job and execute it - basically leading to a very similar outcome to the pull approach.
Interesting, I hadn't thought about that... But doesn't it mean that the CD master still has access to everything? The access is indirect, but it's still access. Or can the runner reject some malicious commands from the master based on some kind of policy? I would really appreciate it if you could share a term for something related to this, so I can google it and learn more :)
Ilia Chernov
03/07/2024, 7:41 PM
> I'm wondering if this would be a good use case for runtime security, in which behavior that is inconsistent with baseline is flagged or blocked.
Yeah, this was also on my mind, but I guess it could already be too late by the time the malicious behaviour is spotted :)
Clemens Jütte
03/08/2024, 8:38 AM
> I've understood this as follows: if the robot needs to do X, Y and sometimes (in very rare cases) Z, we have to give it permissions for everything at once (X, Y and Z), while a person can acquire the permission for Z through privilege escalation. Is that correct, or did you mean something different?
The thing with humans is that they can easily use different factors, so having different passwords everywhere is not an issue. Think of an authenticator app for OTP or a password safe running on your machine. Even if your main account is compromised, black hats don't have access to the second factor and thus cannot escalate their privileges. Machines cannot use another factor in that way and are therefore vulnerable. For that reason you normally make machine accounts unable to use an interactive login, and even that can be abused if the account is compromised.
The second thing is that you can analyze human behaviour on an interactive login in the same way a captcha does. The most common patterns to track are mouse movements and the time gaps between actions like key presses. The location someone logs on from, etc., is also interesting information. Over time, a profile is generated that fires alerts if something is out of the ordinary. Authentication providers like Okta, Ping, Auth0, your favorite hyperscaler, etc. run these kinds of login systems.
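(For illustration only: a minimal sketch in Python of the kind of behavioural login profiling described above, using a per-user baseline of known locations and typical login hours. The data model and checks are hypothetical assumptions, not taken from Okta, Ping, Auth0 or any other provider.)
```python
from dataclasses import dataclass, field


@dataclass
class LoginProfile:
    # Baseline built up from previously accepted logins.
    known_locations: set = field(default_factory=set)
    typical_hours: set = field(default_factory=set)  # hours of day, 0-23


def is_suspicious(profile, location, hour):
    """Flag a login that deviates from the user's historical pattern."""
    new_location = location not in profile.known_locations
    odd_hour = hour not in profile.typical_hours
    return new_location or odd_hour


def record_login(profile, location, hour):
    """Fold an accepted login back into the baseline profile."""
    profile.known_locations.add(location)
    profile.typical_hours.add(hour)


# Example: a user who normally logs in from Berlin during office hours.
profile = LoginProfile({"Berlin"}, {8, 9, 10, 17})
print(is_suspicious(profile, "Berlin", 9))   # False - matches the baseline
print(is_suspicious(profile, "Lagos", 3))    # True - candidate for an alert or step-up auth
```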
> But doesn't it mean that the CD master still has access to everything? The access is indirect, but it's still access. Or can the runner reject some malicious commands from the master based on some kind of policy? I would really appreciate it if you could share a term for something related to this, so I can google it and learn more 🙂
It usually doesn't have access to anything. The most vulnerable path is that someone injects arbitrary code into a pipeline and an order to execute it is then created. An agent fetches the order and executes the code in a protected region. If you want to defend against that kind of attack, you can employ simple scanning / compliance mechanisms that check for script executions in pipelines - you sincerely NEVER want to do this. If you need to execute a script, it should already be present on the target system and only be triggered by a CD mechanism. I am lacking the keywords to search for more specific things, but you could take a look at https://docs.gitlab.com/runner/ or https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners for concepts around private runners. You found ArgoCD on your own, which basically works on the same architectural pattern - the order is persisted in Git and the runner is Argo living on the target cluster, if you don't use a cascaded Argo approach.
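(For illustration only: a minimal sketch in Python of the "scan pipelines for script executions" compliance idea mentioned above. The flagged command patterns and the .gitlab-ci.yml-style job layout are assumptions for the example, not a real policy engine or any GitLab/GitHub runner API.)
```python
import re
import sys

import yaml  # third-party: pip install pyyaml

# Commands that download-and-execute or eval arbitrary code inside a job.
SUSPICIOUS = re.compile(r"(curl|wget)\s+\S+.*\|\s*(sh|bash)|\beval\b")


def scan_pipeline(path):
    """Return policy findings for a .gitlab-ci.yml-style pipeline file."""
    findings = []
    with open(path) as fh:
        doc = yaml.safe_load(fh) or {}
    for job, spec in doc.items():
        if not isinstance(spec, dict):
            continue  # skip top-level keys like 'stages' or 'variables'
        for line in spec.get("script") or []:
            if SUSPICIOUS.search(str(line)):
                findings.append(f"job '{job}' executes: {line}")
    return findings


if __name__ == "__main__":
    for finding in scan_pipeline(sys.argv[1]):
        print("POLICY VIOLATION:", finding)
```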
Clemens Jütte
03/08/2024, 8:43 AM
> I'm wondering if this would be a good use case for runtime security, in which behavior that is inconsistent with baseline is flagged or blocked.
This is a problematic approach. That type of control is usually enforced over robots, aka "your software", the reason being that robots operate within defined boundaries. Humans are the opposite - when they log into a system, it's a one-off activity that looks erratic or out of boundaries compared to a machine, because administrative tasks like "update and exchange executables and processes" don't fit business as usual. You will probably get a lot of false positives and frustrated admins.
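(For illustration only: a minimal sketch in Python of a runtime baseline for a machine workload, where activity outside a fixed allowlist is flagged and human interactive sessions are exempted, as argued above. The allowlist and event shape are hypothetical, not any specific runtime-security product.)
```python
# Processes this workload is expected to spawn; anything else is flagged.
BASELINE_PROCESSES = {"python3", "gunicorn", "postgres"}


def check_event(process_name, is_human_session):
    """Return an alert string if the event falls outside the machine baseline.

    Human interactive sessions are exempted here, mirroring the point above:
    one-off admin activity would otherwise drown the queue in false positives.
    """
    if is_human_session:
        return None
    if process_name not in BASELINE_PROCESSES:
        return f"ALERT: unexpected process '{process_name}' in machine context"
    return None


print(check_event("gunicorn", is_human_session=False))  # None - expected workload behaviour
print(check_event("nc", is_human_session=False))         # flagged - outside the baseline
print(check_event("apt-get", is_human_session=True))     # None - human admin session exempted
```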
Ilia Chernov
03/08/2024, 3:09 PM
Oshrat Nir
03/10/2024, 1:55 PM
> You will probably get a lot of false positives and frustrated admins.
Interesting take. I'll take it back to our researchers and product people. Though I would think that the rules in the KDR would be able to identify humans vs. machines using patterns or tags.