Punishment is the best solution. Incentives are what drive behavior, and learning that you can get away with lying just leads to more lying.
When it comes to training humans and animals, positive punishment is far less effective than most other training techniques like positive reinforcement. Don't Shoot the Dog[1]!
Unfortunately, the positive here is customer adoption, and customers have already adopted Zoom. This is like continuing to feed the dog treats because it's what you're used to, regardless of the outcome of its actions.
But more generally, it's not obvious that individual, "reptile-brain" incentives translate to large company leadership. I'd be hugely skeptical of applying positive psychology to international corporate leadership, but what do I know anyway.
Agree with your first paragraph, less so the second. People learn corporate leadership in steps, starting with a small group. The style of successful leadership doesn't change IMO, just the number of variables and possibility for greater success/failure.
So we should all remember that Zoom is probably depressed right now and could probably use some support from its friends. Maybe urge GCal to send it a nice note.
Not necessarily. Corporations are more than just the sum of their people - they are a process that runs on top of people. The people themselves are replaceable - if you change the behavior of one to something the corporation doesn't want, it'll replace that person with someone new. You want to change the behavior of the corporation itself - and that's best done by creating monetary incentives and disincentives (i.e. punishment). The corporation will adjust the behavior of its people on its own.
In other words: "appealing to the people" instead of addressing the corporation itself is like trying to heat up a climate-controlled room by lighting a small fire in it. You'll be fighting the AC unit all the way and causing lots of unnecessary damage, when the right way to do it is to adjust the thermostat on the AC unit.
To push an alternative analogy to the limit: it's like trying to change someone's mind by appealing to a single neuron. When a system is advanced enough to exhibit emergent behavior (as most big companies probably are), the subsystems matter less and less to the macro-system's outcomes. There are neural networks with fewer neurons than the headcount of Zoom's corporate leadership that we still don't really understand.
My gripe is on behalf of the companies that failed to compete because they couldn't do security in a way that was easy to use and resulted in a good user experience, but chose to be honest about it.
I hate the
(1) cheat to win and vanquish your competitors
(2) when you're caught, say you're sorry,
(3) win anyway because your competitors are gone
progression. It seems like the penalty for that should be existential or at least something painfully severe.
That's the point. It's the companies that are little known that get squashed. I don't know much about the space, but I tried Google and chose Zoom instead because it was easier, and I pay for Google. I tried Jitsi. But what about the ones we haven't heard of, still struggling to solve the problem that Zoom lied about solving, who never took that step forward because they were honest?
It's like RealPlayer. By the time the courts catch up, the game is over.
Several people are on Zoom instead of Google, for instance, even though I pay for Google. I don't know the other players in the space.
Zoom, unbelievably, built a better video conferencing solution than any product by any other company. Their top competitors were Google, Microsoft, and Cisco - several orders of magnitude larger than them.
In this case, I believe the underdog won.
InfoSec cuts both ways in the market. Sure, products with lower standards “poison the well.” But purchasers with burdensome, pointless, obsolete security audits do far more damage to the ecosystem. It certainly cost my startup a tremendous amount of potential growth. We far exceeded security standards like SOC 2 Type II, but still had to bend how we solved security/usability problems to fit Excel-sheet checklists.
Zoom was facing a similar issue - they delivered “secure enough” until it wasn’t. Then, within months, they made massive, productive, effective changes that addressed the new issues raised by their skyrocketing growth.
If our standard for good actors in the tech space is higher than that, I don’t know how humans can achieve it.
Actually, intermittent reinforcement is much more effective. If the reward is offered every time, then when it is not offered the desired behavior is less likely to appear. Operant conditioning with intermittent reinforcement trains the subject not to expect the reinforcer every time, so when it doesn't come, the desired behavior is still displayed.
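One intuition for why this works: under a continuous schedule, a single missed reward is unprecedented, while under an intermittent schedule long dry spells are normal, so the subject can't tell that extinction has begun. Here's a toy sketch of that idea (my own illustration, not a real behavioral model): the "animal" keeps responding until the current unrewarded streak exceeds anything it saw during training.

```python
import random

def longest_dry_run(reward_prob, trials=200, rng=None):
    """Longest streak of unrewarded trials seen during training."""
    rng = rng or random.Random(0)
    longest = current = 0
    for _ in range(trials):
        if rng.random() < reward_prob:
            current = 0            # rewarded trial resets the streak
        else:
            current += 1
            longest = max(longest, current)
    return longest

def extinction_persistence(reward_prob, seed=0):
    """Toy rule: keep responding until the dry spell is longer than any
    seen in training, then give up. With continuous reinforcement
    (reward_prob=1.0) a single miss is unprecedented, so the behavior
    extinguishes almost immediately; with intermittent reinforcement,
    long dry spells are familiar, so responding persists far longer."""
    rng = random.Random(seed)
    return longest_dry_run(reward_prob, rng=rng) + 1

print(extinction_persistence(1.0))   # continuous: gives up after 1 miss
print(extinction_persistence(0.3))   # intermittent: persists much longer
```

This is just a caricature of the partial-reinforcement extinction effect, but it captures the point above: the schedule that sometimes withholds the reward produces behavior that survives the reward disappearing entirely.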