many contractors *do* have sensitive data on their
networks with a gateway out to the public Internet.
I always loved the early "HIPAA" systems at the doctor's office where neither the web browser nor the email client was restricted, and they ran XP. These didn't even need a hardware feature to exploit...
Even in a server, though, given Spectre or an equivalent (remember, this could be exploited from JavaScript in a browser, or PHP, or...), if apps were present on a machine with both kinds of info/connections, we don't even need custom chips; the path is there in cache-management/pipeline-management bugs.

I once ran into a cute bug in a PowerPC chip (the 405EP, used in some older switches as the management processor) where I had to mark all I/O buffers non-cacheable. (Yes, this is a good idea anyhow, but the chip documentation said that an invalidate/flush in the right places took care of it, and I really needed the speed later during packet parsing. And no, copying the packets was prohibitive...) Anyhow, with a 30 (or so) Mbit/s stream coming into RAM, about every 30 seconds the ethertype came in as 0 instead of 0x0800. The responsible bug was in cache management, and the errata item describing it required 5 separate steps involving both processor and I/O access to that address, or to one in that cache line. At least this system wasn't multiuser... A friend who read the errata item said (and I agree) that it looks like a Rube Goldberg sequence. (Yes, I'm dating myself.) As far as I know, 10 years later, the bug has never been fixed in the masks. (Of course, most PPC and embedded MIPS designs are now going to ARM chips. Don't know how much better that is; some of the speed-demon versions of those have a version of Spectre.)
-- Pete
I have found that the article below provides some interesting analysis on the matter, which is informative as opposed to the many articles that simply restate what others have already said.
Thanks ~ Bryce Wilson, AS202313
You just need to fire any contractor that allows a server with sensitive data out to an unknown address on the Internet. Security 101.
Steven Naslund
You just need to fire any contractor that allows a server with
sensitive data out to an unknown address on the Internet. Security
101.
'cept the goal is not unemployed contractors
Important distinction: you fire any contractor who does it *repeatedly* after you've communicated the requirements for securing your data.
Zero tolerance for genuine mistakes (we all make them) just leads to high contractor turnover and no conceivable security improvement; a revolving door of mediocre contractors is a much larger attack surface than a small set of contractors you actively work with to improve security.
~ a
That would be one way, but a lot of the problem is unplanned cross-access.
It's (relatively) easy to isolate network permissions and access at a single location, but once you have multi-site configurations it gets more complex.
Especially when you have companies out there that consider VPN a reasonable way to handle secure data transfer cross-connects with vendors or clients.
At some point, you get to balance any inherent security problems with the
concept of using a VPN against the fact that while most VPN software has a
reasonably robust point-n-drool interface to configure, most VPN alternatives
are very much "some assembly required".
Which is more likely? That some state-level actor finds a hole in your VPN
software, or that somebody mis-configures your VPN alternative so it leaks keys
and data all over the place?
The risks of VPN aren't in the VPN itself, they are in the continuous network connection architecture.
90%+ of VPN interconnects could be handled cleanly, safely, and reliably using HTTPS, without having to get internal network administration involved at all.
And the risks of key exposure with HTTPS are exactly the same as the risks of having one end or the other of your VPN compromised.
As it is, VPN means trusting the network admins at your peer company.
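One way to read the "HTTPS instead of a VPN" argument is that each side only has to trust a single pinned identity rather than a whole peer network. A minimal sketch in Python's stdlib `ssl` module, assuming a hypothetical CA file and peer name (this is an illustration of the trust model, not a vetted interconnect design):

```python
import ssl

def vendor_client_context(peer_ca="vendor-ca.pem"):
    # Trust only the peer company's CA (hypothetical file name),
    # not the whole system trust store, and require verification.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(peer_ca)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    return ctx

def allowed_peer(cert, expected_cn):
    # cert is the dict form returned by SSLSocket.getpeercert();
    # accept only the one identity we interconnect with.
    subject = dict(entry[0] for entry in cert.get("subject", ()))
    return subject.get("commonName") == expected_cn
```

The point of the pinning check is that a compromise of the peer's *network* does not automatically grant access; only the holder of that one certificate identity is accepted.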
Hey,
Important distinction: you fire any contractor who does it *repeatedly* after you've communicated the requirements for securing your data.
Zero tolerance for genuine mistakes (we all make them) just leads to high contractor turnover and no conceivable security improvement; a revolving door of mediocre contractors is a much larger attack surface than a small set of contractors you actively work with to improve security.
+1.
Changing people is a cop-out, and often blame shifting. Believing you
have better people than your competitor is dangerous. Creating an
environment where humans can succeed is far harder than creating an
environment where humans systematically fail.
Allowing an internal server with sensitive data out to "any" is a serious mistake, and so basic that I would fire that contractor immediately (or, better yet, impose huge monetary penalties). As long as your security policy defaults to "deny all" outbound, that should not be difficult to accomplish. Maybe if a couple of contractors feel the pain, they will straighten up. The requirements for securing government sensitive data are communicated very clearly in contractual documents. A genuine mistake can get you in very deep trouble within the military, and that should apply to contractors as well.

I can tell you that "oh well, it's just a mistake" gets used far too often, and it's why your personal data is getting compromised over and over again by all kinds of entities. For example, with tokenization there is no reason at all for any retailer to be storing your credit card data (card number, CVV, expiration date) at all, let alone unencrypted, but it keeps happening over and over. There need to be consequences, especially for contractors, in the age of cyber warfare.
Steven Naslund
Chicago IL
It's been a while since I've had to professionally worry about this,
but as I recall, the PCI [Payment Card Industry] Data Security
Standards prohibit EVER storing the CVV. Companies which do may find
themselves banned from processing card payments if they're found out
(which is unlikely).
- Brian
Yet this data gets compromised again and again, and I know for a fact that the CVV was compromised in at least four cases I personally am aware of. As long as the processors are getting the money, do you really think they are going to kick out someone like Macy's or Home Depot? After all, it is really only an inconvenience to you and neither of them care much about that.
Steve
They actually profit from fraud; and my theory is that that's why issuers have mostly ceased allowing consumers to generate one time use card numbers via portal or app, even though they claim it's simply because "you're not responsible for fraud." When a stolen credit card is used, the consumer disputes the resulting fraudulent charges. The dispute makes it to the merchant account issuer, who then takes back the money their merchant had collected, and generally adds insult to injury by charging the merchant a chargeback fee for having to deal with the issue (Amex is notable for not doing this). The fee is often as high as $20, so the merchant loses whatever merchandise or service they sold, loses the money, and pays the merchant account bank a fee on top of that.
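The merchant-side accounting described above can be sketched as a quick calculation (the $20 fee is the post's figure; the sale and cost-of-goods amounts are illustrative):

```python
def merchant_loss(sale_amount, cost_of_goods, chargeback_fee=20.00):
    # Mirrors the accounting in the post: the acquirer claws back the
    # sale amount, adds a chargeback fee (often as high as $20; Amex
    # reportedly skips it), and the merchandise, at its cost, is
    # already gone to the fraudster.
    return sale_amount + cost_of_goods + chargeback_fee
```

For a $100 fraudulent sale of goods that cost the merchant $60, the merchant is out $180 under this framing, while the issuer collects the fee either way.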
Regarding the CVV: PCI permits it being stored 'temporarily', but with conditions far more restrictive than those for the card number. Suffice it to say, it should not be possible for an intrusion to obtain it, and we know how that goes....
These days, JavaScript inserted on the payment page of a compromised site to steal the card in real time is becoming more common than actually breaching an application or database. Websites have so much third-party garbage loaded into them now (analytics, social media, PPC ads, etc.) that it's nearly impossible to know what should or shouldn't be present, or whether a given block of JS is sending the submitted card in parallel to some other entity. There are technologies like subresource integrity to ensure the correct code is served by a given page, but that doesn't stop someone from replacing the page itself, etc.
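For reference, a subresource integrity value is just a base64-encoded hash of the script bytes, placed in the tag's `integrity` attribute so the browser rejects a swapped file. A minimal sketch of generating one (the script content is made up):

```python
import base64
import hashlib

def sri_value(script_bytes, alg="sha384"):
    # Produce the value for integrity="sha384-..." on a <script> tag.
    # SHA-384 is the commonly recommended algorithm for SRI.
    digest = hashlib.new(alg, script_bytes).digest()
    return alg + "-" + base64.b64encode(digest).decode("ascii")
```

As the post notes, this only protects the fetched file; it does nothing if the attacker controls the page that carries the `integrity` attribute.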
Hi Steve,
I respectfully disagree.
Deny-all-permit-by-exception incurs a substantial manpower cost both
in terms of increasing the number of people needed to do the job and
in terms of reducing the quality of the people willing to do the job:
deny-all is a more painful environment to work in and most of us have
other options. As with all security choices, that cost has to be
balanced against the risk-cost of an incident which would otherwise
have been contained by the deny-all rule.
Indeed, the most commonplace security error is spending more resources
securing something than the risk-cost of an incident. By voluntarily
spending the money you've basically done the attacker's damage for
them!
Except with the most sensitive of data, an IDS which alerts security
when an internal server generates unexpected traffic can establish
risk-costs much lower than the direct and indirect costs of a deny-all
rule.
Thus rejecting the deny-all approach as part of a balanced and well
conceived security plan is not inherently an error and does not
necessarily recommend firing anyone.
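The alert-instead-of-deny idea above can be sketched as a trivial flow check; the host names, destinations, and flow format are hypothetical, and a real IDS would of course work from flow logs or taps:

```python
def unexpected_flows(flows, expected):
    # flows: iterable of (server, destination) pairs from flow logs.
    # expected: server -> set of destinations it normally talks to.
    # Anything outside the expected set becomes an alert for the
    # security team to investigate -- traffic is not blocked.
    return [(src, dst) for src, dst in flows
            if dst not in expected.get(src, set())]
```

The trade-off is exactly the one argued here: the unexpected connection succeeds once before anyone reacts, but day-to-day work is not gated on a firewall-change process.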
Regards,
Bill Herrin
You are free to disagree all you want with the default deny-all policy, but it is a DoD 5200.28-STD requirement and an NSA Orange Book (TCSEC) requirement. It is baked into all approved secure operating systems, including SELinux, so it is really not open for debate if you have to meet these requirements. Remember, we were talking about intelligence agency systems here, not the general public. It is SUPPOSED to be painful to open things to the Internet in those environments; it needs to take an affirmative act to do so. It is a simple matter of knowing what each and every connection outside the network is there for. It also reveals application vulnerabilities and compromises, as well as making it easy to identify apps that are compromised.
In several of the corporate networks I have worked on, they had differing policies for different network zones. For example, you might allow your users out to anywhere on the Internet (at least for common public protocols like HTTP/HTTPS) but not allow any servers out to the Internet, except where they are in a DMZ offering public services or need to reach destinations required for support (like patching and remote updates). That seemed like a good, workable policy.
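The zoned policy described could be expressed as a simple default-deny table; the zone names and port sets here are illustrative stand-ins for real firewall rules:

```python
# Hypothetical per-zone outbound policy: users may reach common public
# web ports, servers get deny-all, DMZ hosts may reach the web (e.g.
# for patching). Everything not listed is implicitly denied.
POLICY = {
    "user":   {80, 443},
    "server": set(),
    "dmz":    {80, 443},
}

def outbound_allowed(zone, dst_port):
    # Default deny: unknown zones and unlisted ports are refused.
    return dst_port in POLICY.get(zone, set())
```

The useful property is the one Steven describes: every outbound path from a sensitive zone is an explicit, reviewable exception rather than an accident.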
Steven Naslund
Chicago IL
Well,
Once you get the expiry date (which is the most prevalent piece of data not encoded with the CHD [cardholder data]), the CVV is only 3 digits; we saw people using parallelizing tactics to find the correct value using acquirers around the world.
With the delays in the reporting pipeline, they have the time to completely abuse that CHD/date/CVV before getting caught.
For chipless markets (you know who you are), I'm way more worried about PIN pads carrying Track 1 + Track 2 unencrypted through serial, USB, Bluetooth, or custom wireless connections...
(I snooped serial, USB, and Bluetooth for a PIN pad PA-DSS project.)
And with the PA-DSS spec being dropped by 2020, it will become worse.
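The arithmetic behind the parallelized CVV search is worth making explicit: the search space is tiny, and spreading it thins the per-acquirer signal. A quick sketch (the acquirer count is illustrative):

```python
def guesses_per_acquirer(cvv_digits=3, acquirers=50):
    # A 3-digit CVV has only 10**3 = 1000 possibilities. Spread the
    # guesses across many acquirers and each one sees so few attempts
    # that per-acquirer velocity checks may never trip.
    space = 10 ** cvv_digits
    return -(-space // acquirers)  # ceiling division
```

With 50 acquirers, each sees at most 20 attempts against the card, which is easy to lose in the noise during the reporting-pipeline delay described above.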
Well,
( I’m sorry but I cannot resist )
Seriously mate, trolling this list using “deny-all is bad m’kay” is not a good idea.
The entire point of the CVV has been lost. Recently my wife was talking to an airline ticket agent (American Airlines) on the phone, and one of the things they ask for is the CVV. If you are going to read that out over the phone along with all the other data, you are completely vulnerable to fraud. It would be trivial to implement a system where you make a charge over the phone like that and get a text asking you to authorize it, instead of being asked for a CVV.
After all this time it is stupid to have the same data being used over and over. We have had SecurID and other token/PIN systems in the IT world forever. I have a token on my iPhone right now that I use for certain logins to systems. The hardware tokens cost very little (especially compared to the credit card companies' revenue), and the soft tokens are virtually free.

A token should be useful for one and only one transaction. You would be vulnerable only from the time you read your token to someone (or something) until the charge hit your account. You would also not have to worry about a call center agent or waiter stealing that data, because it could only be used once (and if the thief is not their employer, it would become apparent really quickly). Recurring transactions should use unique tokens for a set amount range from a particular entity (i.e. 12 transactions, one per month, not more than $500 each, Comcast only). My reusable token given to my cable company should not be usable by anyone else.

Why hasn't this been done yet? Simple: there is no advantage to the retailers and processors. There have been some one-time-use numbers for this kind of thing, but they are inconvenient for the user, so they won't be popular. The entire system is archaic and dates back to the time of imprinting on paper.
Tokenized transactions exist today between some entities and the processors, but it is time to extend that all the way from the card holder to the processor.
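A toy illustration of the single-use, merchant-bound token idea described above, using an HMAC over the merchant, a counter, and an amount cap. This is a hypothetical scheme for illustration, not any deployed tokenization protocol; the secret would live in the issuer's systems and the cardholder's device:

```python
import hashlib
import hmac

def transaction_token(card_secret, merchant_id, counter, max_amount):
    # Token is bound to one (merchant, counter, cap) tuple, so a
    # stolen token is useless at any other merchant or for reuse.
    msg = f"{merchant_id}|{counter}|{max_amount}".encode()
    return hmac.new(card_secret, msg, hashlib.sha256).hexdigest()[:12]

def verify(card_secret, merchant_id, counter, max_amount, token, used):
    # 'used' tracks spent (merchant, counter) pairs: single use only.
    if (merchant_id, counter) in used:
        return False
    expected = transaction_token(card_secret, merchant_id, counter, max_amount)
    if hmac.compare_digest(expected, token):
        used.add((merchant_id, counter))
        return True
    return False
```

A waiter or call-center agent who copies the token gets nothing: replaying it fails, and presenting it under a different merchant ID computes a different HMAC.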
Steven Naslund
Chicago IL
Having gone through this, I know that it's all on you, which is why no one really cares. You have to notice the fraudulent charge (in most cases), you have to dispute it, you have to prove it was not you that made the charge, and if they agree, they change all of your numbers, at which point you have to contact everyone that might be auto-charging your accounts for you. It is a super pain in the neck. So many merchants have been compromised that it seems to have less and less impact on their reputation.