At what point, though, do we shift from trying to detect and block/remove malware to trying to prevent it from exploiting its way onto machines in the first place?
I'm sure the security industry has its reasons. It just seems like a great deal more ingenuity goes into the antivirus arms race than into hardening attack surfaces.
Those are different jobs, and both jobs are being done.
The sheer complexity of hardening makes it naive to think it will ever be bulletproof, as I'm sure you'll agree, so there will always be a call for another layer behind it.
A world-facing firewall defends from the outside; strict routing and an internal firewall defend the network from itself; firewalls on each server/computer keep exploits/worms from spreading like wildfire once they manage to find a crack; and detection software does its damnedest to discover when something unwanted is happening.
Remove any of these, and the whole chain is less secure.
To make a computer completely secure, of course, you need a trash compactor and a boat to take it out to the Mariana Trench, so it'll always be about balancing risks against accessibility and usability.
Frankly, I think the detection software part of the security stack just has better PR.
I am obviously not a security expert. But my understanding is that most breaches happen because of vulnerabilities that we've known about (and known how to defend against) for a long time, like
- Not deploying email encryption or even SPF, such that an attacker can convincingly impersonate others in the company by email (spear-phishing).
- Not updating software (which is necessarily exposed to a large audience by the firewall, because a large audience consumes it) when it has known vulnerabilities in it.
- Writing and running code in memory-unsafe languages without even mitigating that risk through static analysis or tools like Valgrind.
- SQL injection and other failures to sanitize user input.
- Poorly thought out authentication/authorization schemes and bypass bugs, like URL enumeration.
- Services that make no attempt, or an inadequate attempt, to authenticate their consumers (i.e. a firewall can't protect the MongoDB server from the web server; the whole point of the MongoDB server is to be accessed by the application tier).
- Not using TLS where appropriate.
- Not using 2FA for privileged insiders.
- Weak password reset schemes, and password expiry schemes that result in users writing them down on post-its at their workstations.
- Shared accounts.
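The SQL injection item is the easiest one on that list to show concretely. A minimal sketch using Python's stdlib `sqlite3` (the table, rows, and attacker string are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

user_input = "1 OR 1=1"  # attacker-controlled value

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause becomes "id = 1 OR 1=1" and matches every row.
rows_bad = conn.execute(
    f"SELECT name FROM users WHERE id = {user_input}"
).fetchall()

# Safe: a parameterized query passes the input as a bound value,
# never as SQL, so it's compared against the id column as-is.
rows_good = conn.execute(
    "SELECT name FROM users WHERE id = ?", (user_input,)
).fetchall()
```

Here `rows_bad` leaks both users while `rows_good` matches nothing, which is the whole point: sanitizing-by-hand is what people get wrong, and parameterized queries make the failure mode impossible rather than merely unlikely.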
It just seems odd to me that the security community will basically skin you alive for gross negligence if you don't have a firewall or antivirus, but this kind of stuff is more or less accepted as a fact of life.
And a firewall or antivirus is not necessarily going to do anything about it (if the attacker goes through routes that have to be open for the system to function, and writes their own exploits for which virus-definition signatures don't exist).