The linked Wyden letter makes for interesting reading too:
> "While it is likely that Amazon has known that its AWS product was vulnerable to SSRF attacks since the first high-profile demonstration by a security researcher in 2014, the company has certainly known since mid-2018 at the latest. In August of 2018, Amazon's security team was contacted by email by a cybersecurity expert, who recommended that Amazon adopt the same cybersecurity defense against SSRF already used by Google and Microsoft. A copy of that email is attached. Amazon failed to act on this third-party report and has not provided an explanation for its inaction."
We aren't going to see real change until vendors start facing consequences for their negligence. Yes the criminals who exploit these vulnerabilities should go to prison but there also needs to be consequences for companies that don't bother patching vulnerabilities when they know better.
Here is something that doesn't feel right. The article says:
> The breach has so far cost US bank Capital One, one of the 30 institutions affected, more than $270m in compensation and regulatory fines.
The bank claims that it is the hacker's fault.
But isn't that wrong? The bank was fined and sued not for having been the victim of a hack, but for not storing data securely and not configuring its cloud accounts properly; for not following required procedures. Therefore, as I understand it, the bank should have been fined even if there had been no breach.
Or are they blaming the hacker for exposing the violations? Do they assume that it is OK to violate regulations as long as nobody knows about it? That's ridiculous.
To be clear up front: she very much did commit a crime; she is not a white-hat hacker.
> Here is something that doesn't feel right.
I totally agree, it's a strong indicator of an oligarchy. Same as Chevron basically putting a lawyer in jail for suing them on behalf of the people whose lands they poisoned. Threats to wealth are punished by our "justice" system more severely than similar crimes.
The hacking happening means the accounts weren't secure, hence the fines and settlement.
The insecure configuration was there independent of the hack, but it was unknown, and was only exposed because of the hack.
So the hacker only made regulators aware of the problem, and that's what Capital One blames the hacker for. Regulators would not have known if the hacker hadn't taken advantage of the insecure configuration, and now regulators are fining them.
To you and me, the fault is on Capital One for having the insecure configuration, and on the hacker for exploiting it.
To Capital One, they did nothing wrong, and were only punished because of the actions of someone else.
It will be interesting to see if she goes to a male or female prison. If she goes to a male prison she's going to need protection. If she goes to a female prison but still has a penis that might cause some controversy.
Is it really the hack that cost Capital One $270m, or the misconfigured server? While she's obviously in the wrong here, I doubt it makes sense to pin the whole sum on her.
Interestingly, nobody would have been fined by the regulator had the misconfiguration not been discovered by someone stupid enough to post about it under their real name; organised criminals would have been smarter about it.
The "misconfigured firewall" framing sounds like AWS is deep inside the Capital One org, tightly controlling the narrative. At the end of the day, it was an EC2 instance that had access to all the accounts' S3 buckets, was configured to hand out its role token to anyone who asked, and the buckets themselves had no protection against outside access with a compromised key. That, to me, sounds like absolute negligence for an FI, and it rightfully deserves the fine. Back when this attack was carried out, AWS made it very complex to mitigate this type of attack, and since then AWS has scrambled to release a bunch of "features" to fix it, like the aws:CalledVia and s3:ResourceAccount condition keys, and S3 Block Public Access, which came out just before the attack was made public. I am sure there were others, but this is what I can recall.
> Interestingly, nobody would have been fined by the regulator had the misconfiguration not been discovered by someone stupid enough to post about it under their real name; organised criminals would have been smarter about it.
Isn't there a third option, fully anonymous disclosure by a grey hat?
Seems like the best outcome would be from showing it to a scrupulous journalist who protects sources, and it looks like you're discounting that.
"Is it really the theft that cost the person the valuables stored at home, or the fact that they didn't have live armed guards or didn't store it in a vault? While the thief is obviously in the wrong here, you doubt it makes sense to pin the whole sum on him."
"Is it really the rapist that cost the person's life, or the fact that they weren't fully armed and were not at home after sunset? While the rapist/murderer is obviously in the wrong here, you doubt it makes sense to pin the whole sum on him."
These are not accidents like parking a car at the edge of a cliff and forgetting to put it in gear and set the parking brake.
These are deliberate premeditated actions by another party exploiting some weakness or error. Of course it helps to avoid weakness or errors, but the point of a civilized society is to not have to live like we're constantly under assault in an armed camp.
The criminal is a criminal, and the entire amount rests on his/her head.
That said, it is also appropriate for those who lost to analyze the losses and improve their situation. If there was already a spec or procedure to handle this, and it was not followed, then it would not be surprising to see some workers and managers retrained or sacked. But zero of this reduces the criminal's responsibility or liability.
I suppose that if there is anyone to blame for shortcomings incurring costs, it is the criminal herself. Aside from deciding to do the crime in the first place, she also had bad enough opsec to get caught, and that will come with a price.
I'm confused... do your tortured analogies identify Capital One as the victim?
>> These are not accidents like parking a car at the edge of a cliff and forgetting to put it in gear and set the parking brake.
They left S3 buckets parked on the edge of a cliff with (the personal information of) customers sitting in the passenger seat, and failed to stop them (from being publicly visible) with security.
All of your analogies forget one important thing. Capital One was storing other people's data and acting in the role of the bank or vault. They have a duty to store those things securely.
Or to put it another way, if someone breaks into the bank and steals your valuables from your deposit box, are you going to blame the thief, the bank, or both?
Assigning responsibility is always a moral judgement and different people will come up with different answers based on their own morality.
If someone walks into the bank and the doors are wide open, no one is there, and the vault is just open, so they decide to grab some visible valuables, how much responsibility is actually on the thief?
I mean obviously there is some responsibility, but how much is more of a moral judgement than anything else.
Agree, it's all judgement, and there's clearly a broad spectrum. Some good example points on it might include:
* Implemented all possible security measures, above and beyond reasonable, but were breached by a nation-state actor.
* Took reasonable professional-standards measures, but were breached by professional thieves.
* Took most standard measures, missed some, but were breached by a modestly skilled thief.
* Were somewhat negligent and some people found an unlocked door and stole the goods.
* Created an attractive nuisance, too tempting for some people to avoid, and some people looted the place.
* Left the goods out on the sidewalk and were surprised when people helped themselves.
In all but the last sidewalk example, I'd say the taker has full responsibility as a thief - an honest person would not get involved, and a skeptical person would wonder if it was a honeytrap.
All but the last two examples require not only dishonesty, but also require specific planning and actions to get the goods. In all but the last two examples, I'd say that the taker is responsible for their acts, and nothing about the owner's actions mitigates that.
That said, the protector of the goods also has full responsibility for taking appropriate measures for the reasonably foreseeable threats.
I guess I'd put it as responsibility is not divided but added or multiplied by the parties.
You can be both negligent _and_ preyed on by criminals. It doesn’t make the criminal any less criminal, but it also doesn’t make you any less negligent.
This is the 3rd person from my teenage irc days who has gotten v& for something stupid. Paige was a bit unhinged so it's not surprising. Some day I'll dig up my old hard drive with chat logs for entertainment.
I also met Paige (erratic) once, some time around 2009-2012 (possibly at Metrix? somewhere on Cap Hill), and got the impression there was a screw or two loose.
This is why, as a frontend developer, I'm scared to deploy to AWS. There are so many things in the dashboard to get wrong if you don't know what you're doing, and I don't know what I'm doing. I stick to DigitalOcean and Netlify.
I agree. The AWS interfaces are poorly designed, difficult enough to use well that "mistakes" are statistically guaranteed to happen.
It is laughable to hold those who make S3 buckets public accountable yet underplay the contribution of interface design. It's as if the NTSB had a report template with only a single checkbox for "pilot error".
S3 buckets are now very clearly labelled as being public, perhaps even private by default.
Not so long ago this wasn't the case and various grey hat services would let you explore public buckets via a nice interface / API. What you could find was pretty shocking.
The idea of "don't use it if you don't know how" still applies though. In this world so many things can go wrong if you don't understand what button does what but that's not an excuse imho - AWS isn't designed for non technical people.
Yeah, AWS lets you do things--especially related to billing--that can really burn you if you make a mistake or don't know what you're doing. But making S3 buckets public is something they've really put guardrails on over time.
I’ve always found the main issue is not how easily you can make a bucket public, it’s knowing how to utilize a private bucket to serve assets or media in the first place.
This used to be worse than it is now. Today there’s all kinds of warnings about buckets that can be publicly accessed. It takes quite a bit of work to make one public, not something you’d do without knowing anymore.
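As a concrete illustration of the guardrails mentioned above, here is a minimal sketch of turning on S3 Block Public Access for a bucket with boto3. The bucket name and function name are made up; the `put_public_access_block` call and its configuration keys are the real AWS API.

```python
# Sketch: enforcing S3 Block Public Access with boto3.
# The bucket name below is hypothetical; the API call is AWS's real one.

# All four flags together prevent both ACL-based and policy-based public access.
BLOCK_ALL = {
    "BlockPublicAcls": True,        # reject requests that set public ACLs
    "IgnorePublicAcls": True,       # ignore any public ACLs already present
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # limit access to the owner and AWS services
}

def lock_down(bucket_name: str) -> None:
    # boto3 imported here so the config above is usable without it installed
    import boto3
    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=BLOCK_ALL,
    )
```

Nowadays this can also be set at the whole-account level, which is the "quite a bit of work to make one public" experience described above.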
Banks also use Excel and VBA macros for a lot of things that deeply scare me. It feels like the planet’s financial systems are kept together by spit and prayer.
Never ever use the dashboard to configure your production deployments, on any cloud. From the moment you create the account, use automation (Terraform, provider-specific stuff like CF, whatever) to create resources. The learning curve is not that big, and you will have certainty about, and control over, what is running in your account.
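For instance, the lock-down described above can live in version control as a Terraform sketch (resource names here are made up; the resource types are the AWS provider's real ones):

```hcl
# Minimal sketch: a private bucket plus an explicit public-access block,
# declared in code instead of clicked together in the console.
resource "aws_s3_bucket" "customer_data" {
  bucket = "example-customer-data"
}

resource "aws_s3_bucket_public_access_block" "customer_data" {
  bucket                  = aws_s3_bucket.customer_data.id
  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}
```

The point is that `terraform plan` then shows you exactly what would change before anything changes, which the console never does.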
I disagree. If you don't fundamentally understand how the services work on their own, or how to repair them outside of your selected framework, you're setting yourself up for failure down the road.
You'll also have very little visibility in how to effectively test your configuration and ensure the security you _think_ you have is actually in place and functioning correctly.
I'm not suggesting any type of "framework" (which suggests inversion of control). Terraform, CF, etc make the same API calls that the console does, they just keep state. Whether you understand the services is orthogonal to whether you use automation to deploy them.
I've used terraformer. It generates a bunch of HCL that's useful but verbose, and you can target certain kinds of resources with it. It'll do in a pinch but can't say I loved it.
In my experience you need to find each resource one by one, and then import each one. Sometimes by name, some by ARN, some by something else. The import approach is time consuming.
Appears to me to be a bit of a misleading title; the hack cost Capital One $270m in fines and compensation. Title seems to imply $270m was stolen in the hack.
That was definitely the implication I took but I don’t think the title is misleading. If the hack cost a company 270m it was a 270m hack imo. Similar to how we measure property damage.
A tangential topic: this kind of incident makes it extremely unpleasant to work at AWS as an engineer. AWS treats security seriously, and incidents like this prompt it to introduce more red tape for good reasons, but the side effect is that engineers are subject to a rather hostile development environment. An SDE-2 security engineer can shut down a major project by citing vague reasons like "the design is not user friendly", whatever the f&*% that means, and the team has to escalate all the way to L8 or even L10 with pages and pages of "narratives" to get unblocked. Engineers can't SSH to production machines (again, for good reasons). Data platforms have rigorous rules for egress connections, making data processing very unpleasant, to say the least. If you are an engineer who just enjoys getting things done, working for a smaller company will be a much more pleasant experience.
Small companies usually implode if someone in a key position goes rogue. If that happens to 1 out of every 100 small companies annually, that's still arguably worse than 1000 engineers being slightly inconvenienced for a few minutes daily.
I’ve seen the vulnerability described as a misconfigured mod_security server acting as a WAF. But how is it even possible to misconfigure something like this? Forwarding to a backend server dynamically based on the Host header sounds like more work to set up than doing it statically. The IPs for your hosts would have to resolve differently for the WAF host than publicly, or else you would have a loop.
The metadata service is a big issue. When you pair EC2 with common, off-the-shelf software, you can end up inadvertently allowing requests to EC2's metadata service. That allows an attacker to gain the same privileges as the EC2 instance they're hitting, which often means they can access resources like private S3 buckets.
While the metadata service isn't technically a vulnerability, it's poorly designed. Not enough thought went into its security, but too much relies on it for them to disable the current version overnight. Any changes are going to take many years.
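The SSRF-to-metadata path described above can be sketched briefly. This is illustrative only, not Capital One's actual setup: a proxy or WAF that fetches user-supplied URLs can be tricked into reading the instance's IAM credentials from the metadata endpoint, and a coarse defense is to reject link-local and private targets. (AWS's later IMDSv2 mitigates the simple GET-based variant by requiring a session token obtained via a PUT request first.)

```python
# Illustrative sketch of the SSRF risk around EC2's metadata service,
# and a coarse guard a URL-fetching proxy could apply.
import ipaddress
from urllib.parse import urlsplit

# Under IMDSv1, this GET issued *from the instance* returns the role name,
# and one level deeper returns temporary credentials for that role:
METADATA_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def is_forbidden_target(url: str) -> bool:
    """Reject URLs whose host is a link-local, private, or loopback IP literal.

    A real guard must also resolve hostnames and pin the resolved address,
    since DNS can be pointed at 169.254.169.254 (DNS rebinding).
    """
    host = urlsplit(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not an IP literal; needs resolution first
    return addr.is_link_local or addr.is_private or addr.is_loopback

# A naive fetcher that just does requests.get(user_url) would happily
# retrieve METADATA_URL on the attacker's behalf.
```

The guard above is deliberately minimal; it only shows why "forward whatever URL the user gave us" is the dangerous primitive.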
It's grim. Thompson did just about everything they could have done to escalate their sentence short of finding a way to traffic explosive devices or desecrate a veterans cemetery. The sentence in reality will come down to how they account for losses to the victims, but any plausible number here rockets you to the bottom of the sentencing table (the difference between 50MM and 250MM in the sentencing guidelines is much smaller than the difference between $5k and $100k).
Roughly here, you get:
6 base sentence level for 2B1.1 crimes
+20-28 victim loss(!)
+4 multiple victims
+2 sophisticated means or multiple jurisdictions
+2 trafficking in access devices (incl. account numbers)
+4 (maybe) jeopardizing the safety of a financial institution
+2 PII
+4 malware (the indictment more or less demands this one)
+2 obstruction or destruction of evidence
Assume no criminal history for the defendant, then, without replicating the whole table, level 10 is 6-12 months, level 20 is 3 years, level 30 is 7-9 years, and level 40 is 25-30 years.
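The list above is just addition, which can be made explicit. This is a back-of-envelope sketch only: real guideline application involves grouping rules and judicial discretion, and the table tops out at level 43. Taking the enhancements as listed (including the two "maybe" items), even the low end of the loss bump already pins the total at the top of the table.

```python
# Back-of-envelope sum of the 2B1.1 enhancements listed above.
# Illustrative only; actual guideline application is far more involved.
BASE = 6  # 2B1.1 base offense level

FIXED = {
    "multiple victims": 4,
    "sophisticated means / multiple jurisdictions": 2,
    "trafficking in access devices": 2,
    "jeopardizing a financial institution": 4,  # the "(maybe)" item
    "PII": 2,
    "malware": 4,
    "obstruction or destruction of evidence": 2,
}

def offense_level(loss_bump: int) -> int:
    # The sentencing table caps the offense level at 43.
    return min(43, BASE + loss_bump + sum(FIXED.values()))
```

With a loss bump anywhere from +20 to +28, the raw sum is 46 to 54, so the capped level is 43 either way, which is why the loss calculation fight matters more than any single enhancement.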
That's a good catch; I just grabbed the indictment from PACER. Worth noting though that they don't have to find Thompson guilty on those counts to trigger those accelerators (and it's hard to believe Thompson could dodge the PII sentencing modification since there's zero doubt as to whether their conduct involved PII).
Ultimately though I think it'll come down to how much money Capital One lost dealing with this and the aftermath (again, I assume less the fines and lawsuit).
I break into computers for a living, and stories like this are in the news all the time. I'd probably do much worse at, like, an embezzlement case.
I'm also probably (I hope) wrong about the 2B1.1 loss calculation here; I read the USSC primer on it and it's not super clear, but it leans me towards the idea that a penalty assessed on Capital One for doing a poor job securing their data can't be included in a loss assessment against Thompson, and I'm not clear that the damages for a settled lawsuit over same could apply either.
So total losses could be in the single-digit millions (as a general rule of thumb, you can't get convicted in federal court of hacking a real company and incur less than ~100k in damages, simply because of the cost of insurance-mandated forensics investigations --- here I don't really see any chance that the "actual damages" could have been less than 7 figures given the magnitude of what was stolen).
There is also, per the USSC document, a formula for computing damages "per access device", where "access device" is a term of art that includes account numbers, so that could also generate a nosebleed sentence.
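To make the "nosebleed" arithmetic concrete: the guidelines reportedly set a minimum loss of $500 per unauthorized access device. Both that floor and the device count below should be treated as assumptions for illustration, not figures from this case.

```python
# Rough illustration of the per-access-device loss floor mentioned above.
# Both numbers are assumptions: the $500 floor is my reading of the 2B1.1
# application notes, and the device count is entirely made up.
PER_DEVICE_FLOOR = 500            # USD per unauthorized access device (assumed)
hypothetical_devices = 1_000_000  # e.g. a million stolen account numbers

floor_loss = PER_DEVICE_FLOOR * hypothetical_devices  # $500,000,000
```

At that rate, any breach involving millions of account numbers generates a loss figure in the hundreds of millions before any actual damages are counted.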
For no reason whatsoever, just based on doing this exercise for every 18 USC 1030 case that's been in the news for the last decade or so, my wild-ass underinformed guess is that the sentence will end up under 10 years, but more than 5.
He does security for a living. It seems pretty important to know how much jail time you would be facing if you cross the line from whitehat to blackhat (including who legally gets to decide that line).
Another story about the case says "up to 20 years" but I assume that comes with all the usual caveats, e.g. they may have just totaled up the maximum sentence from each individual count as if they had to be served consecutively.
> Capital One, which is one of 30 institutions hacked by Thompson
AKA: Capital One left customer data in a publicly available S3 bucket.
I'm not defending what this "hacker" did at all but this is 100% the company's fault. That $270M? That's from fines and settling a class action by their customers. Again, not defending the hacker but all the hacker really did was shine a light on this (doesn't appear they sold/used the data unless I'm missing something).
> AKA: Capital One left customer data in a publically available s3 bucket
That’s not what happened, that’s uninformed forums speculation that you’ve seen repeated often enough you assume it’s true.
Krebs and one of Cloudflare’s PMs have both gone into some depth about this - it involved an SSRF attack against a non-public S3 bucket among other things. Krebs’ article is particularly interesting as it has screenshots of tweets from her describing the process.
I spoke with a CapitalOne employee after this hack who was involved in "cleaning up" the security. He wasn't allowed to discuss specifics, but he did share some fun facts:
* The hacker was very smart.
* The hacker chained together "about 6 or 7" different exploits to get to the data. Note, this means it was much harder than "leaving an S3 bucket public".
* The hacker tried to sell the data, but couldn't find a buyer before being found out.
That is crazy to me. The hacker chained together "about 6 or 7" different exploits to get to the data, but somehow couldn't use a $5-per-month VPN?
The hacker did use a VPN, but they also posted snippets of the stolen data online, possibly to brag or find potential buyers, that eventually led to their real identity.
A VPN isn't cruise control to anonymity. Being based in a country without an extradition treaty with the US probably offers much better protection, e.g. Russia or China.
I find it suspicious too. As I understand it, the fines are not for being the victim of a hack, but for not storing the data properly and securely. Therefore the bank should have paid those fines even if it hadn't been hacked.
The title is a bit misleading, the hack was a theft of personal data, not money. The data also wasn't sold for that money.
The $270m is an estimate (on the low side) of the total cost to the bank, based on being fined $80m by the regulator and paying $190m to affected customers as settlement in a class action lawsuit.
Important to be clear that 'AWS engineer' is mostly a coincidence. This is not believed to have relied on any internal knowledge or access. Though the data was also on AWS and arguably they could have made it easier to correctly configure the compromised WAF.
Yeah that seems like a really click-baity part of the article, like she used some internal thing to pull it off. It appears she just worked at AWS at some point before she downloaded the data.
I worked for the federal government for 6 weeks one summer when I was in school 20+ years ago. I guess that would make anything I do "state sponsored hacking"?
“She then used those misconfigured accounts to hack in and download the data of more than 30 entities, including Capital One bank.”
I don't think they're intentionally using the original meaning of the word "hack", as in "focused on outcome, not methodology"; rather, they're trying to paint this as some sort of evil mastermind who somehow defeated the security of (what I assume were) wide-open public S3 buckets.
https://ejj.io/blog/fixing-capital-one