Even if it were technically unnecessary (in some hypothetical future where privilege escalation became impossible?), legal, compliance, and insurance requirements would still be there.
The problem is that EDR is basically a rootkit: by using it you open up a huge attack surface instead of being able to keep things immutable, for example. That tradeoff only makes sense when you don't trust and control the OS itself. This is mostly a problem with proprietary OSes like Windows. Otherwise you would rather integrate this functionality into the OS itself.
> That tradeoff only makes sense, when you don't trust and control the OS itself.
That's totally accurate, but you're missing the fact that we fundamentally don't (and can never) trust the OS or any other part of a general purpose computer.
In general purpose computing you have a version of Descartes' evil demon problem (the brain in a vat, in its modern form), or maybe Plato's allegory of the cave if you want to go even further back.
To summarize: we can't trust the inputs even if the OS is trusted; even if we trust the OS we can't trust the compiler; even if we trust the compiler we can't trust the firmware; even if we trust the firmware we can't trust the chips it runs on; and even if we trust those chips we can't trust the supply chain, etc. "Trust" is fundamentally unsolvable for any Turing machine, because all trust does is move the issue further down the supply chain.
I know this all sounds a bit hypothetical, but it's not. I can show you a real world example of every one of those things having been compromised in the past. When there is money or lives at stake people will find a way, and both things are definitely at stake here.
So what we have to do is trust, but verify, or at the very least log everything that happens, and that's largely what those EDR products exist to do. Maybe we can't stop every attack, even in theory, but we take a crack at it, and while we're at it we log every attack so that we can at least catch it later.
There just isn't any version of this world in which general purpose computers don't require monitoring, logging, and exploit prevention.
Sure, that is why you trust black-box software from some random company, running as a rootkit, whose exact version you do not even control because it is remotely updated by the vendor.
If you think the hardware works against you, then you are screwed.
> Sure, that is why you trust black-box software from some random company, running as a rootkit, whose exact version you do not even control because it is remotely updated by the vendor.
It doesn't have to be "a random company". Microsoft, for example, now ships EDR as part of the operating system.
Many companies prefer other vendors for their own reasons. Sometimes one concern is the exact issue you're describing. By using another vendor outside of MS they can layer the security rather than putting all their eggs in a Microsoft designed basket. We sometimes call that a "security onion" in cyber.
I have no idea what the Linux version of that would even look like, though. I imagine you'd just choose one of the many third-party EDRs from "random companies." It's another reason I asked the original question about how sysadmins cope with Linux these days. MS has an entire suite of products designed to meet these security, regulatory, and compliance problems. Linux has... file permissions, I guess?
If you're thinking of running some EDR software in kernel mode, then my point is indeed: don't do that. That just sounds like less security. Use the OS and run the reporting in userspace.
If you want integrity, first make everything executable immutable; the system is explicitly designed to work that way. That's what the FHS is for. Then use something like Tripwire to monitor it.
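The Tripwire idea, in miniature, is just: hash everything once while the system is known-good, then periodically re-hash and compare. This is only a hedged sketch of that concept, not how Tripwire itself is implemented, and the function names are made up:

```python
import hashlib
import os

def sha256_of(path):
    """Hash a file in chunks so large binaries don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root):
    """Walk a tree of (ideally immutable) files and record their hashes."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = sha256_of(path)
    return baseline

def verify(root, baseline):
    """Return every path that changed, disappeared, or newly appeared."""
    current = build_baseline(root)
    changed = {p for p in baseline if current.get(p) != baseline[p]}
    new = set(current) - set(baseline)
    return changed | new
```

The hard part, of course, is keeping the baseline itself out of the attacker's reach.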
How though? Presumably you mean we should trust the OS to do that?
Edit: to be clear, auditd has the same issue. We're trusting it to audit itself. However, we know that we can't trust it, because rootkits are a thing. So now what?...
I guess we need a tool that's designed to be tamper-proof to monitor it. We do that by introducing external validation. A second, external system can vouch that hashes are what we expect, etc.
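A sketch of that second-system check (the manifest format here is invented for illustration): the monitored host reports path-to-hash pairs, and an independent verifier compares them against a trusted manifest it obtained out of band.

```python
def audit(reported, trusted):
    """Compare hashes reported by the monitored host against a trusted
    manifest held by an independent machine. Each bucket hints at a
    different kind of tampering."""
    return {
        # hash differs: the file, or the tool reporting it, was tampered with
        "mismatched": sorted(p for p in trusted
                             if p in reported and reported[p] != trusted[p]),
        # file vanished from the report: possibly hidden by a rootkit
        "missing": sorted(set(trusted) - set(reported)),
        # file the verifier never blessed
        "unexpected": sorted(set(reported) - set(trusted)),
    }
```

The point of running this off-host is that a rootkit on the monitored machine can lie to local tools, but it can't rewrite the verifier's copy of the manifest.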
So you have an OS of which you have the source, which is binary-reproducible, and which you can compile yourself if you want to. You want to make that more trustworthy by injecting a random blob that you cannot inspect and which updates itself over the network, controlled by a third party. I do not understand your threat model.
If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side by side and compare. If you think your OS is touching data you haven't told it to, you need a layer running below it so you can check, i.e. virtualization, BIOS, or hardware. If you think your OS is making network calls you haven't told it to, then you need to connect it via an intermediate host that acts as a firewall.
I don't see what injecting a random blob into the OS gives you other than box ticking. Now you need to trust the OS and that other thing.
When your attacker gains control of your OS (so actually below root), then you are screwed anyway. Only having some independent layer will help you in that case. Having more code in your OS won't help you at all; it will just add more attack surface.
> If you think your OS doesn't give you the correct answer to a read, then you need to run a second OS side by side and compare.
I mean, that's mostly right. IF the OS is already rootkit infected then installing an EDR won't fix it, as it mostly won't be able to tell that the answers it gets from the OS are incorrect. That's why you'll sometimes see bootable EDR tools used on machines that are suspected of already being compromised. It's a second OS to verify the first, exactly as you describe.
In practice that's not typically required, because the EDR is usually loaded shortly after the OS is installed, and they're typically built with anti-tamper measures now. So we can mostly just assume that the EDR will be running when the malware is loaded. That allows us to do things like kernel-level monitoring for driver loads, module loads, and security-relevant events (e.g., LSM/eBPF hooks on Linux, kernel callbacks/ETW on Windows).
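To make the monitoring half concrete, here is a deliberately naive userspace sketch that diffs snapshots of loaded kernel modules. (Real EDRs hook this at the kernel level, via the callbacks/eBPF mentioned above, precisely because a rootkit can hide from a userspace poller like this one.)

```python
def parse_modules(text):
    """Parse /proc/modules-style output into a set of module names
    (the first whitespace-separated field on each line)."""
    return {line.split()[0] for line in text.splitlines() if line.strip()}

def diff_snapshots(before, after):
    """Report modules that appeared or vanished between two snapshots."""
    return {
        "loaded": sorted(after - before),
        "unloaded": sorted(before - after),
    }

def snapshot(path="/proc/modules"):
    """Take a live snapshot on Linux; on other systems this file won't exist."""
    with open(path) as f:
        return parse_modules(f.read())
```

A real detector would also have to verify that `/proc/modules` itself isn't being filtered, which is exactly the self-trust problem discussed above.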
By then layering on some behavioral analysis we can typically prevent the rootkit from installing at all, or at the very least get some logs and alerts sent before it can disable the EDR. It's also one reason these things don't just run in userland as you suggested above. They need kernel mode access to detect kernel mode malware, and they need low level IO access to independently verify that the OS is doing what it says it is when we call an API.
Your suggestion reminds me of the old 'chkrootkit' command on Linux. It's a great tool, if you don't already have a rootkit. In that case it just doesn't work. A modern EDR would have prevented the rootkit from installing an API hook in the first place (ideally).
> Only having some layer independently will help you in that case.
Sometimes it's more about detection, and sometimes it's more about prevention, but both are valuable. I would one day love to see a REAL solution, but for now I think EDRs are the least worst answer we have.
A better answer would be a modern OS built to avoid the weaknesses that make these bolt-on afterthought solutions necessary, but neither Windows nor Linux comes anywhere close to being that. They both have too much history and have to preserve compatibility.
> A better answer would be a modern OS built to avoid the weaknesses that make these bolt on afterthought solutions necessary
That's basically my point. Plugging EDR into an OS gets you a different OS, one that contains a part of which you have only a binary blob and which is changed by a third party over the network. This means you need to be able to change parts of the OS over the network, which opens you up to new attack surface, and you now also have the possibility of incompatibilities between the core OS and your blob, since they are developed by different vendors.
If you have software whose source you have, whose version you control, and whose vendor you trust, and you run it in the kernel and still want to call that EDR, that is fine. But that doesn't seem to be what EDR companies like CrowdStrike are doing.
If all you do is use kernel hooks, then you are still trusting the kernel. If your low-level IO still queries things in the kernel, then you still trust the kernel. If low-level IO means below the kernel, then you are not modifying the OS; your "EDR" is the OS, and you run another untrusted OS on top.
No, it's not and never will be.