How do we not have freelance bioweapons developers? Would it be any more difficult to prohibit the development and auctioning of cyberweapons?
I don't mean to be evasive by answering your question with a question, but freelance development of weapons that can cause widespread damage seems to be controllable. Why not quash the development of sold-to-the-highest-bidder cyberweapons by the same means?
> How do we not have freelance bioweapons developers? Would it be any more difficult to prohibit the development and auctioning of cyberweapons?
First of all, we do, in the Middle East. But bioweaponry development is orders of magnitude more difficult: when you accidentally release your bioweapon into your own shed, you die and it may be detected from abroad; when you accidentally release your cyberweapon into your own LAN, you reimage machines and no one outside notices unless you're incredibly stupid.
Likewise, bioweaponry distribution is orders of magnitude more difficult: if you don't maintain the bioweapon properly, it doesn't work; if you don't climate-control it properly, it doesn't work; if you don't weaponize and launch it properly, it doesn't work. Cyberweapons can be copied around on cards no bigger than my thumbnail, and they keep practically forever.
These problems go on and on. You're literally talking about controlling information and communications. The NSA has enough trouble just trying to read communications (and we're all working as hard as we can to close off that ability), let alone having the government control that flow of communications.
Your take on this is colored by the assumption that the US has a permanent lead in all these types of weapons. Aspects of handling bioweapons can be more capital-equipment intensive than making cyberweapons, but I bet it cost tens of millions in hardware to test Stuxnet, whereas testing a bioweapon could be very cheap. Virus particles and bacteria can be very durable in plain glass vials and can be "shelf stable." At the first-tier player level, I doubt the costs differ by orders of magnitude.
Yes, "cyberweapons" (I hate that term but know why people use it) are much harder to control than bioweapons.
How would you control them? What would that policy look like? Want to take a stab at one? I'm interested in what people think should be done about this problem.
The answer is obviously complex and difficult, but I think it's fair to say the current approach isn't working well.
So you need something new to address the problem. I think that the answer is government legislation.
To be clear, I think that it's a terrible answer; in fact, everything I'm about to write has glaring problems. I just can't come up with a better alternative.
So software suppliers have to be held responsible for the security of the products they supply. It would obviously be a long and arduous process, and would likely have serious repercussions for the industry as it stands today.
The flip side is that you try to regulate the market for vulns to reduce their use in malware etc. You could do this by requiring companies to buy discovered vulns and requiring researchers to sell to the developers.
One of the many major issues with this approach is: what do you do about open source software? There's no money, so liability has no meaning. Here, if it's constituted into a commercial product (think of all the lovely "appliances" out there), the liability passes to the guy making the money. Where an end-user company uses open source directly, it gets the liability to go with it.
Alongside this, you start slowly ramping up the security compliance requirements for companies and organisations processing transactions or personal data on the Internet.
I'd compare this to security becoming like "health & safety". I don't think most companies really want to spend money on it, and I think a huge amount of money is wasted on unneeded process, but compared to the alternative, it has been seen to be the best approach.
What does the policy that requires me to sell to vendors actually look like? I keep asking because nobody is really answering.
O.K., so if I find an RCE in Windows 10, I have to sell it to Microsoft. But if I don't like the price I'm going to get from Microsoft, why wouldn't I just not find RCEs in Windows 10 independently, and instead sell engineering services to non-Microsoft companies who want to find and exploit those vulnerabilities themselves?
Obviously, you'd want to ban that kind of service. But how would you do that? What does that ban look like? A ban on reverse engineering? How do you differentiate the kind of reversing that Tridge did to build Samba from the kind of reversing HD Moore's team does to build Metasploit? Also: you want to ban Metasploit?
Well, I'm not a legislator, and if you're looking to pick holes, trust me, you'd be able to :)
The question is: is that the best approach, and if not, what is? Once an approach is agreed, the inevitable x years of wrangling over details could occur.
You are crazy if you think that legislators are going to do a better job of designing a policy to regulate vulnerability researchers than technologists will. Whatever the policy ends up being, it will be at best as effective and reasonable as whatever we talk about here.
If a bunch of technologists (including vuln researchers) can't define a reasonable policy, I think it's a pretty safe bet that there's no reasonable policy to be had.
Well, I did say at the top of this thread that the proposal had glaring problems :) I was interested to hear if other people had alternative suggestions which would address the issue.