PCI-DSS V4.0 Deep Dive 2: Keyed Cryptographic Hashing

As we delve into the intricacies of the PCI-DSS v4.0 standard, it’s crucial to understand the significance of each requirement and its impact on safeguarding sensitive cardholder data. For this article, we’ll be focusing on Requirement 3.5.1.1, which revolves around the use of keyed cryptographic hashing for protecting Primary Account Numbers (PANs).


Requirement 3.5.1.1 states that “Hashes used to render PAN unreadable (per the first bullet of Requirement 3.5.1) are keyed cryptographic hashes of the entire PAN, with associated key-management processes and procedures in accordance with Requirements 3.6 and 3.7.”

Firstly, like everything else, from interpreting history to interpreting your wife's nuanced tone when she says, "It's fine," this requirement needs a little contextualization. Luckily for us, PCI has provided some explanation to jumpstart the discussion.

A hashing function that incorporates a randomly generated secret key to provide brute force attack resistance and secret authentication integrity.
Appropriate keyed cryptographic hashing algorithms include but are not limited to: HMAC, CMAC, and GMAC, with an effective cryptographic strength of at least 128-bits (NIST SP 800-131Ar2).
Refer to the following for more information about HMAC, CMAC, and GMAC, respectively: NIST SP 800-107r1, NIST SP 800-38B, and NIST SP 800-38D.
See NIST SP 800-107 (Revision 1): Recommendation for Applications Using Approved Hash Algorithms §5.3.

Definition of Keyed Cryptographic Hash, PCI Glossary

So that’s a lot of MACs.

Let's break it down. The requirement has three areas of importance, which we have helpfully underlined.

Keyed Cryptographic Hashes

The requirement mandates the use of keyed cryptographic hashes to render PANs unreadable. A keyed hash basically takes the entire PAN and combines it with a secret key to produce a unique, fixed-size output that is practically impossible to reverse-engineer without knowing the key. This way, even if an attacker gains access to the hashed PAN, they won’t be able to derive the original PAN without the secret key.
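To make that concrete, here's a minimal sketch of what a keyed hash of a full PAN could look like, using Python's standard hmac module with SHA-256. The inline key generation is purely illustrative; under Requirements 3.6 and 3.7 the key would come from, and live in, your key-management system.

```python
import hmac
import hashlib
import secrets

# Illustrative only: in production the key comes from a key-management
# system (HSM/KMS) and is never generated or held inline like this.
secret_key = secrets.token_bytes(32)  # 256-bit random key

def keyed_hash_pan(pan: str, key: bytes) -> str:
    """Return an HMAC-SHA256 of the ENTIRE PAN as a hex string."""
    return hmac.new(key, pan.encode("utf-8"), hashlib.sha256).hexdigest()

hashed = keyed_hash_pan("4111111111111111", secret_key)
print(hashed)  # fixed-size output, useless to an attacker without the key
```

Note that the whole PAN goes into the function, which is exactly what 3.5.1.1 asks for, and matching two stored values is done by comparing hashes (ideally with hmac.compare_digest), never by storing the PAN in the clear.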

In v3.2.1, this was not stated, and therefore the assumption was that a simple hash was sufficient. Let's listen to what this old, obsolete standard says: "It is recommended, but not currently a requirement, that an additional, random input value be added to the cardholder data prior to hashing to reduce the feasibility of an attacker comparing the data against (and deriving the PAN from) tables of precomputed hash values."

That aged like milk. Basically they are talking about salt. The goal of salting is to protect against dictionary attacks or attacks using a rainbow table. If no secret salt is used, or if the salt has not been sufficiently protected, the corresponding data, for example the PAN, can be read from the attacker’s previously calculated dictionaries or rainbow tables. So in short, salts are good for the world. Except for Salt Bae. He’s no good.

Salting does not make the hash itself slower; its job is to make precomputed tables useless, so a brute-force attempt has to start from scratch for every record (and, paired with a deliberately slow hashing function, that is what stretches an attack into a few billion years). How different is salting from keyed hashes? For one, salts are generally known. Sometimes they are even stored together with the hash in the database. So if, let's say, that database is compromised, the salt is known. I suppose you can say, "Live by the salt, die by the salt." Ha!
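For contrast, here's a hedged sketch of a typical salted (but unkeyed) scheme, where the salt sits in the very same record as the hash; the column layout is invented for illustration.

```python
import hashlib
import secrets

def salted_hash_pan(pan: str) -> dict:
    """Salted SHA-256: the salt defeats precomputed rainbow tables,
    but it is NOT a secret and is stored right next to the hash."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + pan.encode("utf-8")).hexdigest()
    return {"salt": salt.hex(), "pan_hash": digest}  # both columns, same row

record = salted_hash_pan("4111111111111111")
print(record)  # whoever dumps this table gets the salt for free
```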

Keyed cryptographic hashes mean there is a secret key. And before you go and jump off the building, there are already existing algorithms out there (the MAC brothers) that have previously been used, primarily, to my knowledge, for message integrity checks. In fact, MAC here stands for Message Authentication Code, used to check integrity and authenticity. Unlike the salt, the key ISN'T known, or at least isn't left unprotected. So even if the database is compromised, attackers can't get the key, because it's protected (through encryption, which we'll explain later on).

Now, why the change from normal hash, with recommended Salt, to hash with secret key?

The problem is with card numbers. Those dang card numbers, which are so different from, let's say, passwords. Unlike passwords, which can be truly random, credit card numbers are NOT random. They are unique, but they are far from random. You see, a credit card number consists of:

  1. The bank identification number (BIN) or issuer identification number (IIN): the first six digits identify the issuer. You can look these up at https://www.bindb.com/bin-list
  2. The account number: The number between the BIN and the check digit (the last digit) is six to nine digits long and identifies the individual account.
  3. The check digit: The last digit, added to validate the authenticity of the credit card number using the Luhn algorithm.

The thing about Luhn is that it is used to validate primary account numbers. I am not going into details, as other people have done, and will do, a much better job of explaining this. But the short of it is this: if I have the BIN, I have the check digit, and I have, let's say, a few more digits of the account number, then you get the picture. The check digit is the result of applying the Luhn algorithm to all the other digits (working from the right), and with a standard truncated PAN (first six and last four digits visible) you already know ten of the sixteen digits, including that final check digit. It will still take some effort, but the predictable way credit cards are structured leaves far fewer digits to be guessed. As scary as it may sound, such hashes can feasibly be reversed.
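To see how small the search space really is, here's a rough sketch, assuming a 16-digit PAN truncated to its first six and last four digits and hashed with plain, unkeyed, unsalted SHA-256; the PAN and target hash below are made up for illustration. Only the six middle digits are unknown, and the Luhn check throws away roughly nine out of every ten candidates.

```python
import hashlib
from itertools import product

def luhn_ok(pan: str) -> bool:
    """Standard Luhn check: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# What an attacker learns from a truncated PAN: first six and last four digits.
bin_prefix, last_four = "411111", "1111"
target = hashlib.sha256(b"4111111111111111").hexdigest()  # unkeyed, unsalted hash

# Only 10^6 middle combinations to try, further filtered by Luhn.
for middle in product("0123456789", repeat=6):
    candidate = bin_prefix + "".join(middle) + last_four
    if luhn_ok(candidate) and hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("Recovered PAN:", candidate)
        break
```

Around a hundred thousand Luhn-valid candidates per BIN-and-last-four combination is trivial work for a modern CPU, which is exactly why an unkeyed, unsalted hash of a PAN is considered reversible in practice.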

While a salt adds complexity and slows the attacker down, salts aren't secret, remember, so eventually that can still be broken. A key, however, is secret. Remember data encryption keys and key encryption keys? Well, hashing now requires the same treatment as encryption, in the sense that these hashing keys need to be protected (encrypted) themselves.
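As a hedged illustration of that point, here's a minimal sketch of wrapping the HMAC key under a key-encrypting key (KEK), assuming the third-party cryptography package; in practice you would lean on an HSM or a cloud KMS rather than hand-rolling this.

```python
from cryptography.fernet import Fernet
import secrets

# Working key: the secret used to HMAC the PANs.
hmac_key = secrets.token_bytes(32)

# Key-encrypting key (KEK): held separately, e.g. in an HSM or KMS.
kek = Fernet.generate_key()
wrapper = Fernet(kek)

# Only the wrapped (encrypted) HMAC key ever touches disk or the database.
wrapped_hmac_key = wrapper.encrypt(hmac_key)

# At runtime, the application unwraps the key in memory just before use.
unwrapped = wrapper.decrypt(wrapped_hmac_key)
assert unwrapped == hmac_key
```

The design point is simply that the plaintext hashing key exists only in memory at the moment of use; what lands in storage is the wrapped version, with the KEK managed under Requirements 3.6 and 3.7.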

The other important bit is that the requirement emphasizes the entire PAN must be hashed. This matters because hashing only a portion of the PAN would still leave some sensitive information exposed. By hashing the entire PAN, we ensure that no part of it remains in the clear, adding an extra layer of protection.

Lastly, the requirement stresses the importance of proper key management processes and procedures, as outlined in Requirements 3.6 and 3.7. This means that the secret keys used for hashing must be securely generated, stored, and managed throughout their lifecycle. Weak key management can undermine the entire purpose of keyed hashing, so it’s crucial to get this right.

What does this mean?

It means, like a lot of new requirements in v4.0: more work.

It is, at its heart, a concept of defense-in-depth. Requirement 3.5.1.1 serves as a secondary line of defense against unauthorized access to stored PANs. Even if an attacker manages to exploit a vulnerability or misconfiguration in an entity's primary access control system and accesses the database, the keyed cryptographic hashing of PANs acts as an additional barrier, preventing the attacker from obtaining the actual PANs unless they also manage to compromise the key.

By implementing a secondary, independent control system for managing cryptographic keys and decryption processes, entities can ensure that a failure in the primary access control system doesn't automatically lead to a breach of PAN confidentiality. For instance, instead of storing PANs in plain text, a website employs a keyed hashing algorithm, such as HMAC-SHA256, to render the PANs unreadable. Each PAN is hashed with a randomly generated secret key that is managed separately from the database, and only the resulting hash values are stored in the website's database.

Final note: It’s important to note that Requirement 3.5.1.1 applies to all instances of stored PANs, whether in primary storage (databases, flat files) or non-primary storage (backups, audit logs, exception logs). This means that entities must ensure that keyed cryptographic hashing is implemented consistently across all storage locations, leaving no room for gaps in protection.

However, the requirement does make an exception for temporary files containing cleartext PANs during the encryption and decryption process. This is a practical consideration, as it allows entities to temporarily work with unencrypted PANs while performing necessary operations, as long as the temporary files are properly secured and promptly removed after use.

If you have any questions or need assistance in navigating the complexities of PCI-DSS v4.0, don’t hesitate to reach out to us at avantedge@pkfmalaysia.com. Our team of experienced professionals is here to help you every step of the way, ensuring that your organization stays secure, compliant, and ahead of the curve in the ever-evolving landscape of data security.

Why QSAs Matter in your PCI-DSS

The question we usually get asked, aside from why we prefer not to be a QSA (a question that has been answered before, although that answer is fairly dated and needs revising), is why, despite doing PCI-DSS in Malaysia since 2012, we hardly ever switch between different QSAs in our PCI-DSS projects. Aren't all QSAs the same? Aren't they all created equal?

Like everything in life, there is a basis for variation. We are not here to say which is better and which is worse. It's not in our culture to constantly provide a barrage of negative statements about other companies and organizations, even with good basis, because that's not how we are wired.

That being said, we do have an internal list of companies (and QSAs) that we are somewhat less inclined to work with. This comes from either working firsthand with them or, more often, seeing the results of their work. Some of the things we see are quite shocking. We have also had clients who suffered under their so-called advisory and asked us to step in to help.

So, to the query of which QSA you should spend the next six (or more) months with for your PCI project: let's put a few options forward in a more quantifiable manner.

a) Experience

A question we get asked is why we generally don't just work with local providers or assessors who are closer to home. It's not because they are worse or better. It's like comparing cars: they all have their pros and cons, and we do not slag organizations off even if we would rather avoid some of them. But one thing I tell customers is: let's look at experience first.

As of writing, there are 3,065 PCI-DSS projects listed on the Visa Service Provider List at https://www.visa.com/splisting/searchGrsp.do. The top 10 assessors on this list are as follows:

Assessor              Projects
VikingCloud           208
Foregenix             142
ControlCase           113
SECTEC                103
Compliance Control    96
Coalfire Systems      86
SISA                  83
A-LIGN                80
CIPHER                71
atsec                 71
Total                 1053

The top 10 assessors make up almost 35% of the projects listed. Those are heavy hitters. Suffice to say, a lot of projects remain unlisted: Level 2 service providers, SAQ projects, merchant projects, etc. So the actual project count (including non-listed work) for each assessor is probably a lot higher. To put this in context, here is how the listed projects are distributed across assessors:

Projects    Number of Assessors
1           54
2           35
3           25
4           22
5           18
6           13
7           13
8           7
9           10
10          3

There are 200 assessors out there with 10 or fewer projects listed. In their defence, some of these are actually the same company under another name, so this overview isn't 100% accurate. Still, out of the 262 assessors on that list that do PCI, 77% have 10 or fewer projects, which shows it's not that easy to get that number to 100 or more. Again, I will reiterate: quantity doesn't automatically mean better. Some may argue that the more projects you have, the more quality suffers. That is a fair point. And I have experience with some overseas QSAs in that smaller-project group whom I would gladly give a project to and have a beer with. They are really good, extremely passionate about PCI-DSS, and I've learnt truckloads from them. We are just saying this is one starting indicator you may want to springboard from, because most service providers lead with it when presenting their services: how many customers 'trust' them.

b) Location

This is slightly misleading, in the sense that the query we ask is: do we need a QSA who is local? Local here means having an office in the country where they are serving the customer. This argument, while it initially seems to hold some credence, is actually self-defeating, and a bit strange, when most organisations now prefer to be known as regional or global rather than touting themselves as just local players. If they use this as a plus point, then by going after their overseas customers they are technically disputing the very argument they are advocating. Most QSAs won't use this track, because they know a QSA company needs to operate at least regionally, or, if you want to focus on a single country, then fine, take the USA. The reason the service provider list does not have a breakdown of all 195 countries (or, if you are a Malaysian Minister, 500 countries) on Earth is that being a QSA is tough work. The breakdown is by region, and the only countries listed separately are the US and Canada, because the US alone makes up almost 35% of the listed projects.

Think about the last time you dealt with a QSA. Did you have access to that QSA through messages or calls? Did you call for a meeting, and did that QSA turn up as required? Did that QSA respond quickly? Was that QSA able to reply to your queries, technical or otherwise, in a clear and consistent manner? Did they insist on you paying more for advisory, or delay your project? Did they upsell services that were unplanned and unknown? Think about the positive and negative experiences you had.

Those are more pertinent queries than whether someone is 'local'. That point is actually moot, because in almost all projects the bulk of the work will be handled by a consultant. QSAs, by definition, should be regional or global anyway. In the economics of being a QSA (explored in another article), a QSA operating in a single country would probably not be cost-sustainable, especially in a country where the currency is worth slightly more than a turnip. So the assessor will still have to fly to other places anyway. Therefore, it doesn't really matter whether an assessor is local, regional or global; the question is how accessible and communicative they are.

In that sense, we strike a balance: we are local to Malaysia, or to any other country we operate in (we have presence in 150 countries as a global network), and we provide the independent, technical advisory you need from consultants. We are not QSAs, so we don't get pulled helter-skelter into other PCI projects all over the place. We hold various certifications, and more product certs than I can throw a stone at. We are operational people, all with more than a decade of experience, so you won't have a wide-eyed associate with a checklist coming to you. We also have non-IT services: we are tax advisors, corporate financiers, risk managers and compliance directors, so we aren't just an IT company aiming to push IT services or cybersecurity solutions at you. Our DNA is in advisory and consulting.

Enough of blowing our own horn, then. Which leads me to item (c):

c) Reference

It's important to not just look at a list of customers. I have a client who gets annoyed at seeing a presentation with a list of logos without any context of the work. Some may list large companies or merchants under their so-called 'Customers' but without any context. You know what? Fine. I can list all the telcos, up to twenty PLCs, more than half a dozen oil and gas companies and more banks than I can swing a bat at, just because I have given them 'training'. Come on.

Look past that veneer and look at actual references in the industry. Is there a positive experience? Is there someone out there willing to endorse them in good will? Are there any bad experiences? Another question I get asked is whether the assessor has been involved in a breach before. That almost needs a new article to explore. Look, we all know PCI doesn't guarantee you won't be breached. It's not a panacea for world hunger. It's more important to note the outcome of the investigation or forensics before we go witch-hunting. It's meaningless to expect, for instance, that the top QSAs would never experience any breaches in their existence. For sure, some of them will have had to deal with this one way or another, and the question is whether there was indeed an oversight. If there was none, then the breach could be down to a myriad of reasons outside of PCI-DSS control. Remember: assessors are not operational. They enter an audit in good faith. Witch-hunting a QSA just because of a breach involvement, without context or a final conclusion, is a narrow-minded, irresponsible approach to assessing capability (or culpability). If the QSA were truly to blame, wouldn't they be put in remediation by the Council? There you go.

One thing you will never catch us doing is giving an opinion about things we don't have the full context on. It's simply not something we are comfortable with. If we see an issue with a report from another QSA, even if it looks strange, the response is always: what is the context of this, and there must be a reason it was interpreted as such. That gives us a more balanced view rather than just mouthing off without understanding. As the proverb says: "The more talk, the less truth; the wise measure their words."

d) Cost and Resources

Most PCI projects have the conflicting pull of cost and resources. A QSA with a lot of resources and consultants will be very useful; the last thing you want is a QSA who goes quiet and then, after three months, rushes you for evidence. Cost still plays a huge role in PCI-DSS, and it's not as if things are getting cheaper. With version 4.0 there is more work for QSAs to do, and they will likely pass some of these costs down to customers. This remains a very subjective item in this filtering exercise: a QSA charging your liver and kidney for PCI isn't ideal, but if a QSA comes in with a price that resembles a popsicle in a flea market, I would likely stay away as well. We all know how much effort PCI is. We don't want a situation where, halfway through, the bulk of invisible costs comes pouring in like the army of Mordor, or else things will not get done. If you want to build your house, have most of the material costs sorted out up front. If there is a variation order (VO), don't let it cross a threshold percentage of your initial cost. Having a QSA who understands this and is willing to negotiate is important. Even if the cost is not lowered (because, to be fair, QSA work is not trivial), negotiate for future services or better payment terms – anything to meet in the middle.

e) Stamp of Trust

Are there any stamps of trust for QSAs?

No, there isn't. At least not officially. However, I would like to highlight that there is this thing called the Global Executive Assessor Roundtable (GEAR), found here: https://www.pcisecuritystandards.org/about_us/press_releases/pci-security-standards-council-announces-2022-2024-global-executive-assessor-roundtable/

There are 28 QSAs in the GEAR currently, with the purpose below:

The Roundtable is an Executive Committee level advisory board comprised of senior executives from PCI SSC assessor companies. The 2022-2024 GEAR consists of 28 organizations, with the Roundtable term running 1 September 2022 – 31 August 2024.

“The Council depends on the input of a wide range of stakeholders to provide PCI SSC with valuable insights,” said PCI SSC Executive Director Lance J. Johnson. “With the release of version 4.0 of our PCI Data Security Standard this year, it is even more important to have active representation from every corner of the globe from an assessor perspective. Assessors are critical in assisting the Council with our effort to improve and evolve payment data security.”

PCI COUNCIL

The QSA we often work with, ControlCase, is one of them and has been reappointed, which, in terms of reference, means the Council itself considers their input to be 'valuable insights'. This is one of the lists we look at, especially when asked about QSAs: are they involved in GEAR?

IN SUMMARY

Like choosing a car, there is really no guarantee that your experience will be immaculate when it comes to PCI-DSS. The above are just possible filters you can apply when choosing the next QSA partner to embark on your journey with. Or you can roll a die, or consult the gods. The disclaimer, of course, is that we have not worked with ALL QSAs yet, so this remains a rudimentary filter when you are thinking of a QSA. Find a QSA that can actually do the hard yards and has proven themselves with project references and quality, global reach and experience, positive customer feedback and respect from the industry, and, finally, is seen as an invaluable assistant to the almighty PCI Council themselves. In our personal opinion, it's a start to look at these metrics and springboard from there. Because anyone can give a nice presentation, dress in a suit or talk negatively about other companies, but what are their numbers, references and contributions to the PCI Council?

Drop us an email at pcidss@pkfmalaysia.com to learn more about PCI and other compliances like ISMS or ITSM or SOC!

The Question of QSA Conflict

An interesting conversation over coffee with a client today gave me something to mull over. The question brought to the table was how some assessors, while engaged in an audit, brought up other services they offer, like ASV scanning, penetration testing and vulnerability scanning, and how this may look like a conflict of interest.

I will start by proclaiming that we aren't QSAs. We do have a myriad of certifications, such as ISO and other personal certs in information security, but this article isn't about our resume. It's about the ever-important question of the role of the QSA and whether they should be providing advisory services.

Why we choose not to go the route of QSAs is a topic for another article, but suffice to say, in the same way we work with certification bodies (CBs) for ISO projects, we employ the same business model for PCI or any other certification project. We rabidly believe in a clear demarcation between those doing the audit and those doing the implementation and advisory. After all, statutory audit is in our DNA, and every single customer or potential customer we take on requires a specific conflict check to ensure independence, so that we never provide consulting work that may jeopardize our opinions when it comes to audit. Does anyone recall Enron? Worldcom? Waste Management? Goodbye, 90-year-old accounting firm.

We have worked with many QSAs in almost 14 years of doing PCI-DSS, and by QSAs here I mean individuals as well as QSA-Cs (QSA Companies). Our group is collectively made up of senior practitioners in information security and compliance, so we don't have fresh graduates or juniors going about advising C-level veterans with 20-plus years of experience on how to run their networks or businesses.

A QSA (Qualified Security Assessor) company, in a nutshell, is a company qualified by the PCI Security Standards Council (PCI SSC) to perform assessments of organizations against the PCI standards. Take note of the word: QUALIFIED. This becomes important because there is a very strict re-qualification program from the PCI SSC to ensure that the quality of QSAs is maintained. Essentially, QSAs are vouched for by the PCI SSC to carry out assessment tasks. Are all QSAs created equal? Probably not; based on our experience, some are better than others in specific aspects of PCI-DSS. Even the PCI SSC has a special group of QSAs under its Global Executive Assessor Roundtable (GEAR), which we will touch on later.

The primary function of a QSA company is to evaluate and verify an organisation’s adherence to the PCI DSS requirements. This involves a thorough examination of the organisation’s cardholder data environment (CDE) — including its security systems, network architecture, access controls, and policies — to ensure that they meet the PCI requirements.

Following the assessment, the QSA company will prepare a Report on Compliance (RoC) and an Attestation of Compliance (AoC), the formal documents that record the organization's compliance status. Please don't get me started on the dang certificate, because I will lose another year of my life to high blood pressure. These OFFICIAL documents are critical for the organization to demonstrate its commitment to security to partners, customers and regulatory bodies. The certificate, however, can be framed and hung on the wall of your toilet, where it rightfully belongs, right next to the toilet paper, which probably has a slightly higher value.

Anyway, QSAs have very specific roles defined by the SSC:

– Validating and confirming Cardholder Data Environment (CDE) scope as defined by the assessed entity.
– Selecting employees, facilities, systems, and system components accurately representing the assessed environment if sampling is employed.
– Being present onsite at the assessed entity for the duration of each PCI DSS Assessment or perform remote assessment activities in accordance with applicable PCI SSC assessment guidance.
– Evaluating compensating controls, as applicable.
– Identifying and documenting items noted for improvement, as applicable.
– Evaluating customized controls and deriving testing procedures to test those controls, as applicable.
– Providing an opinion about whether the assessed entity meets PCI DSS Requirements.
– Effectively using the PCI DSS ROC Template to produce Reports on Compliance.
– Validating and attesting as to an entity’s PCI DSS compliance status.
– Maintaining documents, workpapers, and interview notes that were collected during the PCI DSS Assessment and used to validate the findings.
– Applying and maintaining independent judgement in all PCI DSS Assessment decisions.
– Conducting follow-up assessments, as needed

QSA PROGRAM GUIDE 2023

You can see above that there is no advisory, recommendation, consultation or implementation work listed. It's purely assessment and audit. What we do see, more often than not, is QSAs offering other services under separate entities. This isn't specifically disallowed, but the SSC does recommend a healthy dose of independence:

The QSA Company must have separation of duties controls in place to ensure Assessor Employees conducting or assisting with PCI SSC Assessments are independent and not subject to any conflict of interest.

QSA Qualification requirements 2023

It's hard to adjudicate this point, but the party providing the audit shouldn't be the one providing the consultation and advisory services. Some companies get around this by having a separate arm provide the consultation. Which is where we step in: without doing any gymnastics in organizational structure, we make a clear demarcation of who does the audit and who does the consultation and advisory role.

The next time you receive a proposal, be sure to ask the pertinent question: are they also providing support and advisory? Because a good part of the project is in that, not so much the audit. We have actually seen cases where the engaged assessor flat out refused to provide any consultation, advisory, templates or anything else to assist the customer, citing conflict of interest, leaving the client hanging high and dry unless they engaged a separate consultative project with them. Is that the assessor's fault? In theory, the assessor is simply abiding by the requirements for independence. On the other hand, these things should have been mentioned before the engagement: that the bulk of a PCI project is in the remediation, and that guidance and consultation would definitely be needed! It might reek of being a little disingenuous. It's frustrating for us when we get pulled in halfway through a project and we ask, well, why haven't you queried your engaged QSA on this? Well, because they want another sum of money for their consultative work, or they keep upselling services the client isn't sure they need unless they buy the advisory in as well. And what do you think that advisory is going to say? You can see that, whereas on paper it might be easy to state that independence has been established, in reality it's often difficult to distinguish where the audit, recommendations, advisory and services all start or end, as sometimes it's all mashed together. Like potatoes.

Here's another official reference to this issue, in FAQ #1562 (shortened):

If a QSA Employee(s) recommends, designs, develops, provides, or implements controls for an entity, it is a conflict of interest for the same QSA Employee(s) to assess that control(s) or the requirement(s) impacted by the control(s).

Another QSA Employee of the same QSA Company (or subcontracted QSA) – not involved in designing, developing, or implementing the controls – may assess the effectiveness of the control(s) and/or the requirement(s) impacted by the control(s). The QSA Company must ensure adequate, documented, and defendable separation of duties is in place within its organization to prevent independence conflicts.

FAQ #1562

Again, this is fairly clear: QSAs providing both assessment and advisory/implementation services are not wrong to do so, but they need to ensure proper safeguards are in place, presumably checked thoroughly through their requalification requirements under section 2.2, "Independence", of the QSA qualification requirements. To save you some reading: there isn't much prescriptive guidance on how to ensure this independence, so we're left with how each company decides on its conflict-of-interest policies. Our service is to ensure, with confidence, that the advice you receive is indeed independent and, as far as we know, benefits the customer, not the assessor. We don't have skin in their services.

In summary, QSAs can theoretically provide such services, but they should come separately from the audit, so make sure you get the right understanding before starting your PCI journey. Furthermore, and more concerningly, we've seen QSAs refuse to validate the scope provided to them, citing that this constitutes 'consulting and advisory' and needs additional payment. Scope validation is literally the first task in a QSA's list of responsibilities, so call them out on it, or call us in and let us deal with them. These charlatans shouldn't even be QSAs in the first place if this is what they are saying.

And finally, speaking of QSAs that are worth their salt: the primary one we often work with, ControlCase, has been included in the PCI SSC Global Executive Assessor Roundtable 2024 (GEAR 2024).

https://www.pcisecuritystandards.org/about_us/global_executive_assessor_roundtable/

GEAR members are nominated to an Executive Committee level advisory board comprising senior executives from PCI SSC assessor companies, which serves as a direct channel of communication between the senior leadership of payment security assessors and PCI SSC senior leadership.

In other words, if you want to know who the SSC looks to for PCI input, these are the guys. Personally, especially for a complex Level 1 certification, this would be the first group of QSAs I would consider before approaching others, as these are nominated based on reputation, endeavour and commitment to the security standards, not on coughing up money to sponsor events or conferences, looking prominent in dazzling booths and giving out free gifts while ultimately being unable to deliver projects properly for their clients.

Let us know via email to pcidss@pkfmalaysia.com if you have any queries on PCI-DSS, especially the new version 4.0 or any other compliances such as ISO27001, NIST, RMIT etc!

Major Changes of PCI v4

So now, as we approach the final throes of PCI-DSS v3.2.1, the remaining three weeks are all that is left of this venerable standard before we say farewell once and for all.

PCI-DSS v4.0 is a relative youngster, and we are already spending hours of updates with our customers on the things they need to prepare for. Don't underestimate v4.0! While it's not a time to panic, it's also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides an insight into the major changes we are facing in v4.0.

Bearing in mind that most requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let's start off with the one in the top bin: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

OK, we have underlined and emphasized a few key points in this statement, because we feel they are important. Let's start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10, with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think v4.0 reads better, aside from relocating the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement, home to any controls that don't seem to fall under the earlier requirements). Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of protecting stored account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find the child of this 3.4 section, called 3.4.2, and we need to understand it first, before we go out and start shopping for the first DLP system on the market and yell out “WE ARE COMPLIANT!”

3.4 talks about displays of FULL PAN. So we aren't talking about truncated or encrypted PAN here. In theory, if you copy out a truncated PAN or an encrypted PAN, you shouldn't trigger 3.4.2. It's specific to full PAN. While we are at it, we aren't even talking about cardholder data in general. A PAN is part of cardholder data, but not all cardholder data is PAN. Like the Hulk is part of the Avengers, but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason, like data analysis, behavioural prediction, stalking, etc., this isn't the requirement you are looking for.

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are.

The previous v3.2.1 doesn't actually say 'technical controls', which is to say that a documentary control, a policy control, or something in the Acceptable Use Policy could also pass as compliant. V4.0 removes that ambiguity. Of course, the policy should still be there, but technical controls are specific. It has to be technical. It can't be: oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear: copying is Ctrl-C and Ctrl-V somewhere else; relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with the PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw an error, causing system integrity and performance issues. Suffice to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both "documented" and "defined", it means the requirement has been precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of the people with access (the who), why they legitimately need to access, copy or relocate PAN for the business, and explicit authorization from a proper authority (not themselves, obviously).

Finally, let's talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I've heard clients say: hey, no worries, we have logging and monitoring in place for internal users. Or: we have a web application firewall in place. Or: we have Cloudflare in place. Or: we have a thermonuclear rocket in place to launch in case we get attacked. This control already assumes 'remote access' into the environment. The users have passed the perimeter. It assumes they are already trusted personnel, contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying or relocating PAN. Most people will not have a business reason to copy or relocate PAN to their own systems except for very specific business flows. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details in an encrypted format, onto removable media (like a CD-ROM) and send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That's pretty specific.

So what passes as a 'technical control'? A technical control may be as simple as completely preventing copy/paste or cut/paste when accessing via remote access. This can be done in RDP, or by disabling the clipboard via SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it's fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is simply to prevent the copying and relocation of PAN.

Now, that being said, an umbrella ban on copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive to work on while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection, or to control access to the CDE/PAN via a hardened jump server with its local policy locked down. So general VPN access to company resources may be more lax, but the moment access to PAN is required, the 3.4.2 technical controls come into play.
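For a flavour of what such a control looks for under the hood, here's a rough sketch of the kind of PAN-detection check a DLP agent or jump-server policy might run against clipboard or file content before letting it leave; the regex and length limits are illustrative assumptions, not any vendor's actual rule.

```python
import re

def luhn_ok(number: str) -> bool:
    """Luhn check used to weed out random digit strings."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

# Candidate PANs: 13-19 digits, optionally separated by spaces or hyphens.
PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_pan(text: str) -> bool:
    """Flag text containing something shaped like a full, Luhn-valid PAN."""
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True
    return False

print(contains_pan("invoice ref 12345"))         # False -> allow
print(contains_pan("card 4111 1111 1111 1111"))  # True  -> block the copy
```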

In the end, you can justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense, and it must not require your users to jump through hoops like a circus monkey.

So there you have it: a breakdown of 3.4.2. We are hopping onto the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST, or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

Zero Trust for 2024

As we enter the new year, let's start off with a topic that most cybersecurity denizens would have heard of, and clarify it a little.

Zero Trust.

It seems as good a place as any to start 2024 off with the pessimism that accompanied the end of last year. The spate of cybersecurity attacks in 2023 gave us a taste of what is to come: an insurance company – check; social security – check; the app with our vaccination information – check. While breaking down those attacks is a topic for another article, what we face in the coming year is not just more of the same, but many more, and more advanced, attacks.

While Zero Trust is simply a concept, one of many, to increase resistance to attacks or breaches, it's by no means a silver bullet. There is NO silver bullet here. We are under constant siege in this information warfare, forever balancing the need for sharing against the need for protection. As they say, the safest place would be a cave. But that's not living, that's surviving. If you need to go somewhere, you need to fly, so you have information with the airlines. If you need to do banking, you have information with the banks. If you do your daily shopping online, you are entrusting the likes of Lazada et al with information you might not otherwise provide.

So Zero Trust doesn't mean you conduct zero transactions; it's basically a simple principle: trust no one, verify everything. Compare that with the more traditional "trust but verify" approach, which assumed that everything inside an organisation's network should be trusted, even when verification was available. Here's a breakdown of the concept, in hopefully simpler terms.

The Basic Premise: Imagine a company as a fortified castle. In the old days, once you were inside the castle walls, it was assumed you belonged there and could roam freely. At least, this is based on the limited studies we have done by binge-watching Game of Thrones. All historical facts of the Middle Ages can be verified through Game of Thrones, including the correct anatomy of a dragon.

Back to the analogy: what if an enemy disguised as a friend managed to get inside? They would potentially have access to everything. Zero Trust Architecture operates on the assumption that threats can exist both outside and inside the walls. Therefore, it verifies everyone's identity and privileges, no matter where they are, before granting access to the castle's resources. The three keys to remember are:

  1. Never Trust, Always Verify: Zero Trust means no implicit trust is granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned). Basically, we are saying: I don't care where you are or who you are, you are not getting access to this system until I can verify who you are.
  2. Least Privilege Access: Individuals or systems are given the minimum levels of access, or permissions, needed to perform their tasks. This limits the potential damage from incidents such as breaches or employee mistakes. We see this issue a lot, whereby a C-level person insists on having access to everything even if he doesn't necessarily know how to navigate a system without a mouse. When asked why, the answer is: well, because I am the boss. No. In Zero Trust, precisely because you are the boss, you shouldn't have access to a system that does not require your meddling. Go get more sales and let the tech guys do their job!
  3. Micro-Segmentation: The network is broken into smaller zones to maintain separate access for separate parts of the network. If a hacker breaches one segment, they won’t have access to the entire network.

The steps you can follow to implement the concept of Zero Trust:

Identify Sensitive Data: Know where your critical data is stored and who has access to it. You can't protect everything, or at least not with the budget you are given, which for most IT groups is usually only slightly more than what the company spends on the upkeep of the office cat. So data identification is a must-have. Find out which data you most want to protect, and spend your shoestring budget protecting it!

Verify Identity Rigorously: Use multi-factor authentication (MFA) and identity verification for anyone trying to access resources, especially important ones like logging systems, firewalls, external web servers, etc. This could mean something you know (a password), something you have (a smartphone or token), or something you are (biometrics). It used to cost a mortgage to implement things like this, but over the years cheaper solutions that are just as good have become available.
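As a small illustration of how cheap this has become, here's a sketch of time-based one-time password (TOTP) verification, assuming the open-source pyotp library; the secret and user name are invented for the example.

```python
import pyotp

# Enrolment: generate a per-user secret and share it with the user's
# authenticator app (Google Authenticator, etc.), usually via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="alice@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the user submits the 6-digit code from their app, and the server
# verifies it against the shared secret.
submitted_code = totp.now()  # simulating the user's current code
print("MFA passed:", totp.verify(submitted_code))
```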

Contextual Access: Access decisions should consider the context. For example, accessing sensitive data from a company laptop in the office might be okay, but trying to access the same data from a personal device in a coffee shop might not be. This may not be easy, because with mobile devices you are basically accessing top-secret information on the same device you use to watch a cat playing the piano. It's a nightmare for IT security, but again, this requires discipline. If you honestly need to access the server from Starbucks, then implement key controls like MFA, a VPN and layered security, and do it from a locked-down system.
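Purely as a toy sketch of the idea (the roles, resources and rules here are invented), a context-aware access decision boils down to a policy function that looks at more than just the credential:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_role: str        # e.g. "dba", "sales"
    device_managed: bool  # corporate, hardened device?
    network: str          # "office", "vpn" or "public"
    mfa_passed: bool
    resource: str         # e.g. "cardholder-db", "wiki"

def allow(ctx: AccessContext) -> bool:
    """Toy Zero Trust policy: every request is judged on its full context."""
    if not ctx.mfa_passed:
        return False
    if ctx.resource == "cardholder-db":
        # Sensitive data: least privilege plus strict context.
        return (ctx.user_role == "dba" and ctx.device_managed
                and ctx.network in ("office", "vpn"))
    # Low-sensitivity resources: still verified, but more relaxed.
    return ctx.device_managed or ctx.network != "public"

print(allow(AccessContext("dba", True, "vpn", True, "cardholder-db")))      # True
print(allow(AccessContext("dba", False, "public", True, "cardholder-db")))  # False
```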

Inspect and Log Traffic: Continuously monitor and log traffic for suspicious activity. If something unusual is detected, access can be automatically restricted. SOAR and SIEM products have advanced considerably over the years, and today there are many solutions that don't require you to sell a kidney to use them. This matters because smaller companies are frequently targeted, especially when they provide services to larger ones.

In the end, it comes down to the benefits of adopting this approach.

Enhanced Security: By verifying everything, Zero Trust minimizes the chances of unauthorised access, thereby enhancing overall security. Hopefully. Of course, we may still have users who are authorised but have malicious intent, which is much harder to protect against.

Data Protection: Sensitive data is better protected when access is tightly controlled and monitored. This means less quarter given to the threat actors out there.

Adaptability: Zero Trust is not tied to any one technology or platform and can adapt to the changing IT environment and emerging threats.

On the downside, there are still some challenges we need to surmount:

Complexity: Implementing Zero Trust can be complex, requiring changes in technology and culture. It's not a single product but a security strategy that might involve various tools and technologies. This is not just a technical challenge, but a process and cultural change that may take time to adapt to.

User Experience: If not implemented thoughtfully, Zero Trust can lead to a cumbersome user experience with repeated authentication requests and restricted access. This is a problem we see a lot, especially in finance and insurance – user experience is key – but efficiency and security are like oil and water. Eternal enemies. Vader and Skywalker. Lex and Supes. United and Liverpool. Pineapple and Pizza.

Continuous Monitoring: Zero Trust requires continuous monitoring and adjustment of security policies and systems, which can be resource-intensive. We've seen implementations of SIEM and SOAR products that produce so many alerts and alarms that they no longer make sense. It all becomes noise, and the effect of the monitoring is diluted.

In summary, in an era where cyber threats are increasingly sophisticated and insiders can pose as much of a threat as external attackers, Zero Trust Architecture offers a robust framework for protecting an organisation's critical assets. It's about making our security proactive rather than reactive, and ensuring that the right people have the right access at the right times, under the right conditions. It's culturally difficult, especially in Malaysia, where, I have to admit, our innate trust of people and our upbringing mean we almost always hold the door open for the guy behind us, especially if he is dressed like the boss. We hardly ever turn around and ask, "Who are you?", because we are such nice people in this country.

But adopt it we must. For any organisation looking to bolster its cybersecurity posture, Zero Trust isn't just an option; it's becoming a necessity. At PKF we have several services and products promoting Zero Trust – contact us at avantedge@pkfmalaysia.com to find out more. Happy New Year!
