PCI compliance is on almost everyone’s mind, and attention to it will only grow as time goes on. Before I start: this is by no means a rant against or flaming of the PCI DSS 1.1 standard. In fact, to date the PCI DSS is the most technical security framework out there. There are a lot of positives to the standard; however, it has many shortcomings. I’m not going to list every one, but I will focus on some of the most pressing issues we’ve seen as penetration testers, and on where the standard is really failing.
First, let’s start with this: the standard should never be fully relied upon for an organization’s overall security posture. A mature security organization typically pulls from many standards, like ISO 27001/27002, NSA IAM, NIST, PCI, and so on. One thing this standard really did accomplish was giving organizations a starting point for security, and a reason to take security seriously. Historically, banks were at the forefront of protecting data due to the sensitive nature of their information, while security at most other organizations fell by the wayside; that has since begun to shift in a better direction.
As a penetration tester running a team of gifted hackers, we get to see every environment and configuration known to man. Due to PCI requirement 11.3, “Performing penetration testing at least once a year,” we perform a variety of penetration testing assessments against PCI-compliant organizations. One of the most alarming statistics is our 63 percent success rate for breaching systems in PCI-compliant organizations. By breaching systems, we’re talking about full access to the underlying operating system and the potential to penetrate further into the network. The 63 percent doesn’t even include access to back-end databases, login bypasses, and the various other issues we find during a pentest.
PCI DSS 1.1 states in requirement 6.5 that organizations should use the OWASP guidelines for securing their systems. PCI 1.1 references the OWASP 2004 list of vulnerabilities, which is missing the good ol’ malicious file execution category, among others. In addition, the ASV scans that must be performed only check for XSS and SQL injection. Vulnerability scanners in general are pretty rudimentary at vulnerability identification, but detecting only two of the overall top ten is a major issue. Requirement 6.6 mandates that a code review or WAF be in place, which should hopefully stop SOME of these attacks, but the alarming thing is that we’ve run successful penetration tests against sites that have undergone a code review, or that have a WAF in place which provides only minor protection against attacks.
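To make the SQL injection point concrete, here is a minimal sketch (my own illustration, not from the standard) of the classic pattern an ASV scanner probes for, alongside the parameterized-query fix. The table, column names, and card values are hypothetical:

```python
import sqlite3

# Hypothetical cardholder table for illustration only
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")
conn.execute("INSERT INTO users VALUES ('bob',   '5500000000000004')")

user_input = "x' OR '1'='1"  # attacker-supplied value

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every card leaks
query = "SELECT card FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(query).fetchall()  # returns every row

# Safe: a parameterized query treats the input as data, never as SQL
safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)
).fetchall()  # returns no rows, since no user is literally named that
```

A scanner can flag the first pattern when it happens to reach that parameter, but it says nothing about the dozens of other flaw classes a real assessment covers.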
WAFs are great (if properly configured), don’t get me wrong, but they should not be the only layer of defense. Building a secure systems development life cycle (SSDLC) from the beginning of development, carrying it through development, and establishing code freezes for security testing before going into production is vital and is the preferred method. Web applications have been getting a bum rap, and to date they are the major point of entry for external breaches. The standard simply states “Installing an application-layer firewall in front of web-facing applications”; it says nothing about the exceptions allowed for site functionality (which introduce exposures) or about hardening techniques. Again, the purpose of the standard was never to secure your systems for you, but at least to give organizations a framework for implementing security.
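As an illustration of defense in depth inside the application itself, here is a sketch (my own, with a hypothetical field and pattern) of allow-list input validation. Unlike most WAF rules, which blacklist known attack strings, an allow-list rejects everything that isn’t explicitly expected, so a misconfigured or bypassed WAF is not the last line of defense:

```python
import re

# Hypothetical allow-list for a cardholder-name field: letters plus a
# few punctuation marks, max 50 characters. Anything else is rejected.
CARDHOLDER_NAME = re.compile(r"^[A-Za-z][A-Za-z .'-]{0,49}$")

def validate_name(value: str) -> str:
    """Accept only values matching the allow-list; reject the rest,
    rather than trying to enumerate every known attack pattern."""
    if not CARDHOLDER_NAME.fullmatch(value):
        raise ValueError("invalid cardholder name")
    return value

validate_name("Mary O'Neil")           # passes the allow-list
try:
    validate_name("x' OR '1'='1 --")   # digits and '=' are not allowed
except ValueError:
    pass                               # rejected before reaching the DB
```

This kind of check lives in the code itself, which is exactly why an SSDLC with security testing before production beats bolting a WAF on afterward.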
My next question about the DSS is: who performs the code reviews looking for these common vulnerabilities? Most VARs have a penetration testing wing, generally consisting of two people who implement product and do penetration testing as a side job (oh, and hey, if you buy this product it’ll fix your vulnerabilities). These guys generally use tools that don’t even touch the web application layer in depth, giving organizations false satisfaction and assurance. There’s no certification process for web application assessors; the standard only says “an organization that specializes in application security.” If I have Nessus, does that make me a specialist in web application security? If I have AppScan, WebInspect, Ounce, Fortify, or any of the others out there, does running a tool make me a specialist?
My last main issue covers the ASV guidelines, quoted directly from the ASV scanning standard:
“Merchants and service providers have the ultimate responsibility for defining the scope of their PCI Security Scan, though they may seek expertise from ASVs for help. If an account data compromise occurs via an IP address or component not included in the scan, the merchant or service provider is responsible.”
We’ve run into many scenarios where organizations use this loophole to take systems out of scope for the PCI assessment. While the organization is “responsible” if a system is compromised, the companies we see are generally using this as a way to pass the test without ever putting any of the security restrictions in place on that system. So, per this statement, a main e-commerce site handling all credit card transactions for the organization can be taken out of scope by the merchant or service provider if they so choose. At this point, what is the point of becoming compliant at all? Why study hard for a “C” when you can get an “A” every time without ever studying? “Sure, if you get caught cheating it’s bad, but what’s the risk? We’re never going to get breached!”
To finalize the point I’ve been driving at here: organizations are using compliance as their definition of “secure,” and that really isn’t the intended impact. For organizations to truly incorporate security, it has to be adopted widely within the organization, pull from multiple frameworks and standards, and be built into a regularly tested and updated program. Relying solely on PCI compliance and going through the checklist is not going to protect you from a breach by any means.