To play devil's advocate, they may not be worried about vulnerabilities in their code but rather in their method of virus detection, the same way Google doesn't share details of its search algorithm, partly so it isn't gamed by spammers. Actually, this is common in software meant to protect against sophisticated attackers. Blizzard and Valve used to run periodic mass bans, but they would never say what exact action triggered a ban. In fact, you would get no information, and the ban itself might come months after some hack was used, so that crackers wouldn't know what specifically triggered it.


> To play devil's advocate, they may not be worried about vulnerabilities in their code but rather vulnerabilities in their method of virus detection

This is an argument for factoring out the means of virus detection into a closed-source plugin/module, while opening the source of the rest of the code. Particularly since detection is presumably pure (i.e. functional programming notions of purity and referential transparency), and thus much less likely to be a source of vulnerabilities, compared to the rest of the client which actually interacts with the OS, disks/files, etc. and is therefore much more likely to be exploited. Because the vulnerability scanner would still be a closed-source binary blob, the public would need to trust the company that the blob is actually pure, but seeing that blob within the context of an open-source client which is handling I/O makes that trust easier.

Yes, it makes it easier for malware creators to test their creations against the closed-source module before releasing their malware into the wild. But sophisticated malware writers are already doing that, by installing the anti-virus client into a VM, updating it, disconnecting it from networks, then loading the malware into the VM and seeing if the malware is detected or not. So malware writers don't gain that much from the opening of the rest of the codebase (unless they succeed in finding vulnerabilities that the rest of the world doesn't), and the white-hat public gains a much more trustworthy security tool.


Well, that’s an argument they should have made! I think it’s extremely charitable to assume this is why, though, when every indication points to code-audit fearmongering.

But you’re also forgetting that these virus scanners can be vulnerabilities and exploit vectors in themselves; I seem to remember one piece of malware that exploited a flaw in the decompression code of a virus scanner to gain a foothold. Keeping something a trade secret doesn’t exactly lessen the risk of such flaws existing.


What's the difference between vulnerabilities in code and vulnerabilities in virus detection? Isn't the virus detection done in code? Is security through obscurity valid for virus detection but not code?


I don't think the parent is talking about vulnerabilities, but the fact that if you know how the antivirus engine works it may be easier to write a virus able to avoid detection.


That makes sense, though I think there's still a large difference between the virus detection and ranking algorithm comparison. The entirety of the virus detection code is running on the client's PC; surely it can be reverse engineered and understood fairly successfully?

The same can't really be said of Google's algorithm, as it's essentially a hugely complex black box, and you can barely interact with it. That's kind of like reverse engineering a chip purely using its inputs / outputs.


Sounds like a vulnerability. Isn't that how the argument went about source code? "If you know how the program works it may be easier to write an exploit." But then experience taught people that exposing source code to the bright sunlight by opening its source could actually make software more secure through many eyes finding holes. Why is this not applicable to virus detection algorithms?


Now that I think about it, you have a point. In general, when I think about a (software) vulnerability I think about taking advantage of some bug or unforeseen behavior of the software. If the software is acting as intended but cannot protect you from a certain kind of issue, can we say it has a vulnerability? My answer was no before; now I am in doubt :-).


> "If you know how the program works it may be easier to write an exploit."

BTW, this is true. Seeing the source code versus having to go through assembly listings - I know which one I'd pick if I had to find logic bugs.

>Why is this not applicable to virus detection algorithms?

It's not an algorithm, but a heuristic. If you want to find a suspect, you don't announce "I'm looking for someone 5 feet 5 inches tall with a buzz cut who drives a Ford and wears size 12 Nike sneakers." In much the same way, security via heuristics doesn't mean creating a perfect detection system, because one doesn't exist. They want to make the game harder to play by hiding its rules, not because they're sure they're going to win. This is a real, tangible benefit for customers. There is nothing really special about it; we've been using such ideas for centuries.


How do you build a heuristic if not with an algorithm? Perhaps the entire AV model employed by Symantec is flawed.


>How do you build a heuristic if not with an algorithm?

I don't know what that means. Perhaps superficially there is some overlap, since both run on deterministic hardware, but a heuristic is completely different from an algorithm. It's a technique that can give you an imperfect answer to the question you're asking. An algorithm describes a method which, if followed, gives you the answer. Here is an AV heuristic that I made up just now:

-Is it encrypted? +1 point

-Does it contain self modifying/unpacking code? +1 point

-Does it call OS APIs to monitor running programs? +1 point

-Does it run at startup? +1 point

-Does it have no UI? +1 point

-Does it try to punch a hole through NAT? +1 point

-Does its process name contain random strings? +1 point

If you get > 5 points, hash the executable and send the hash/executable for analysis.
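The made-up checklist above can be sketched as a scoring function. All the trait names and the threshold here are hypothetical stand-ins for whatever signals a real engine would actually extract from an executable:

```python
# Sketch of the made-up scoring heuristic above. Each check is a
# hypothetical boolean trait, not real AV detection logic.

def score_sample(sample: dict) -> int:
    """Add one point per suspicious trait present in the sample's metadata."""
    traits = [
        "encrypted",
        "self_modifying",
        "monitors_processes",
        "runs_at_startup",
        "no_ui",
        "punches_nat_hole",
        "random_process_name",
    ]
    return sum(1 for t in traits if sample.get(t))

def should_escalate(sample: dict, threshold: int = 5) -> bool:
    """If the score exceeds the threshold, hash and send for analysis."""
    return score_sample(sample) > threshold

# Example: a sample exhibiting six of the seven traits crosses the bar.
suspicious = {t: True for t in [
    "encrypted", "self_modifying", "monitors_processes",
    "runs_at_startup", "no_ui", "punches_nat_hole",
]}
print(should_escalate(suspicious))  # True
```

Note that the heuristic never claims the sample *is* malware; it only decides whether the executable is worth escalating for real analysis.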

>Perhaps the entire AV model employed by Symantec is flawed.

Well, for one, the heuristic isn't the "entire AV model". But what makes you think the entire AV model is flawed? Every major OS uses parts of the AV model.


Because it's not that easy. If I wrote some code to detect whether you are a good human and whether you will go to hell or heaven, evaluating that would be hard. And if you had access to my source code, you could check what I am looking for and maybe cheat.

The vulnerability, I would say, is if I sent you to hell and you found a way to escape.


I can second that. A lot of virus detection basically boils down to detecting one particular substring, which is usually quite easily bypassed.
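A toy illustration of why substring signatures are brittle (the signature and payload bytes here are made up): any reversible transformation of the bytes destroys the match, and the malware can simply undo the transformation at runtime.

```python
# Toy substring-signature detector and a trivial bypass.
# Signature and payload are invented for illustration.

SIGNATURES = [b"EVIL_PAYLOAD_V1"]

def detected(binary: bytes) -> bool:
    """Flag the binary if any known signature appears as a substring."""
    return any(sig in binary for sig in SIGNATURES)

original = b"...EVIL_PAYLOAD_V1..."

# XOR every byte with a constant: the on-disk bytes no longer contain
# the signature, but a small runtime stub could XOR them back.
mutated = bytes(b ^ 0x42 for b in original)

print(detected(original))  # True
print(detected(mutated))   # False
```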


Somebody please reply to this. Both this comment and the above comment seem reasonable. I don't know what to believe!


To me, worrying about "vulnerabilities in their virus detection method" seems unlikely.

We're talking about downloadable software here, not a cloud service like Google's. Once a hostile nation state has access to your binaries (as they would with an installed product like A-V), they can just fuzz the A-V detection method to find bypasses.

Heck, that's what pentesters and red teamers do on a regular basis; A-V bypass is a common task in that world. If people at that level can do it, you can bet that nation-state actors can too.
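The black-box search described above can be sketched as a mutate-and-retest loop. The detector here is a stub standing in for the installed engine (in practice a red teamer would run the real AV in a VM); the sample bytes and flagged pattern are invented:

```python
# Hedged sketch of black-box bypass search: mutate a sample until a
# stand-in detector stops flagging it. Everything here is hypothetical.
import random
from typing import Optional

def detector(binary: bytes) -> bool:
    # Stand-in for the installed AV engine: flags one fixed byte pattern.
    return b"\xde\xad\xbe\xef" in binary

def find_bypass(sample: bytes, tries: int = 10_000) -> Optional[bytes]:
    """Flip random bits until the detector no longer fires, or give up."""
    rng = random.Random(0)  # fixed seed for reproducibility
    candidate = bytearray(sample)
    for _ in range(tries):
        if not detector(bytes(candidate)):
            return bytes(candidate)
        # Flip one random bit and re-test, keeping the change.
        i = rng.randrange(len(candidate))
        candidate[i] ^= 1 << rng.randrange(8)
    return None

sample = b"AAAA\xde\xad\xbe\xefBBBB"
bypassed = find_bypass(sample)
print(bypassed is not None and not detector(bypassed))  # True
```

A real bypass would of course have to keep the sample functional, which is why attackers favor structured transformations (packing, encryption) over random bit flips; the point is only that the feedback loop needs nothing but the shipped binary.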


Yeah, when I worked at Malwarebytes we did not really care about this issue. If people are going to download it, they are going to reverse engineer it.

We also did third-party security audits on a regular basis, but still wouldn't be comfortable letting other countries do them. Purely my own opinion here, but my concern wouldn't be a security one so much as an intellectual property one: it's pretty well known that other governments (China, Russia) have strong links to their commercial sectors and little regard for IP protection.


I believe the latter post (obfuscating the method of detection) over incompetence.

Don't forget that nation states also produce malware (recall Stuxnet? [0]), and evading detection is substantially easier when you know exactly what to avoid doing.

[0] https://en.m.wikipedia.org/wiki/Stuxnet


Evading detection is easy if you have the slightest clue of what you're doing. Antivirus evasion simply isn't difficult enough for this to be a reasonable explanation.



