
PCI-DSS Scope Understanding: Encryption

Scoping is one of the first and main things that we do the moment we get engaged, after the customary celebratory drinks. In all projects, scope is always key, more so in auditing and consulting and in standards compliance, be it PCI, ISMS, the NIST 800s, CSA or all the other compliance standards we end up doing. Without scope there is nothing. Or there is everything, if you are looking at it from the customer’s viewpoint. If boundaries are not set, then it’s open season on everything. Everything needs to be tested, prodded, penetrated, reviewed. While this is all well and good, projects are bounded by cost, time and quality. Scope determines all three.

In PCI, scoping became such a tremendously huge issue that the council deemed it necessary to publish an entire supplementary document called “Guidance for PCI DSS Scoping and Network Segmentation” back in December 2016. Now, here is some trivia for you, from someone who has been doing PCI for donkey’s years. Did you know that this isn’t even the first attempt at sorting out scope for PCI-DSS?

Back in 2012, a group called the Open Scoping Framework Group published the extremely useful Open PCI-DSS Scoping Toolkit, which we used for many years as guidance before the council amalgamated that information into formal documentation. This was our go-to bible, and a shout out to those brilliant folks at http://www.itrevolution.com for providing it; many of its original concepts were retained when the PCI council released their formal documentation on scope, and eventually within the standard itself. YES, scoping is finally covered in the first few pages of the v4.0 and v4.0.1 iterations of PCI-DSS, so that people will not get angry anymore.

Or will they?

We’re seeing a phenomenon more and more in the industry, one we term scope creep. OK fine, that’s not our word. It’s been in existence since the fall of Adam. Anyway, in the PCI context, some of our customers come back to us and state that their consultants, or even their QSAs, insist on items being included in scope for NO REASON except that it is supposedly MANDATORY for PCI-DSS. Now, I don’t want to say we have no skin in the game, but this is where I often end up arguing with even the QSAs we partner with. I tell them, “Look, our first job here is to help our customers. We minimize or optimize scope for them, reducing it to the most consumable portion possible, and if they want to do anything extra, let them decide on it. We’re not here to upsell penetration testing. Or segmentation testing. Or risk assessment. Or ASV. Or policies and procedures. Or SIEM. Or SOC. Or logging. Or a basket of guavas and durians.” Dang it, we are here to do one thing: get you PCI compliant and move on with our lives.

The trend we see now is that everything seems to be piled onto our clients: do this, do that. In the words of one extremely frustrated customer: “Every time we talk to this *** (name redacted), it seems they are trying to sell us something and getting something out of us, like we are some kind of golden goose.”

Now, obviously, if you are a QSA company and doing that: STOP IT. Stop it. It’s not only naughty and brings disrepute to your brethren in the industry, it’s frowned upon and considered against your QSA code! Look at the article here.

Now, PCI scoping itself deserves a whole new series of articles, but I just want to zoom in on a particular scoping scenario that we recently encountered in a merchant environment.

Many of our merchants have either or both of these scopes: card terminals processing card-present transactions at their stores, and an e-commerce site. One particular customer has card terminal POIs (points of interaction), traditionally known as EDCs (Electronic Data Capture terminals). Basically, this is where the customer comes to the store, takes out the physical card and dips/waves it at the device. So yes, PCI is required for the merchant for the very fact that the stores have these devices that interact with cards. Now what happens after this?

Most EDCs have SIM-based connectivity now, and the traffic goes straight to the acquirer using ISO 8583 messages. These are already encrypted on the terminal itself and route through the telco network to the bank/acquirer for further processing. Another way is through the store network, routing back to the headquarters and then out to the acquirer. There are reasons why this happens, of course; one is that aggregating store traffic at HQ allows more visibility into the transactions and analysis of the traffic. The thing here is that the terminal messages are encrypted by the terminals themselves, and the merchants do not have any access to the keys for decryption. This is important.

Now, what happened was that some QSAs have taken it into their heads that because the traffic is routed through the HQ environment, the HQ gets pulled into scope. And therefore, this particular traffic must be properly segmented and then a segmentation PT needs to be performed. This could potentially lead to a lot of issues, especially if your HQ environment is populated with a lot of different segments. It could mean multiple rounds of tiring, tedious testing by the merchant team… or it could mean a profitable service done by your ‘service providers’. (Again, if these service providers happen to be your QSA, you can see where the question of upselling and independence comes from.)

Now here’s the crux. We hear these merchants telling us that their consultant or QSA says it’s mandatory for segmentation PT to occur in this HQ environment. The reasoning is that there is card data flowing through it. Regardless of whether it is encrypted or not, as long as there is card data, IT IS IN SCOPE. Segmentation PT MUST BE DONE.

But. Is it though?

The whole point of segmentation PT is that it demarcates out-of-scope from in-scope. Insisting on having segmentation PT done is conceding that there is an IN-SCOPE segment or environment in the HQ. The smug QSA nods as he sagely says, “Well, as QSAs, we are the judge, jury and executioner. I say there is an in-scope environment, regardless of encryption.”

So, let’s look at the PCI SSC and the standard and see. QSAs will point to page 14 of the PCI-DSS v4.0 standard, under “Encrypted Cardholder Data and Impact on PCI DSS Scope”.

Encryption alone is generally insufficient to render the cardholder data out of scope for PCI DSS and does not remove the need for PCI DSS in that environment.

PCI-DSS v4.0.1 by a SMILING QSA RUBBING PALMS TOGETHER

Let’s read further into this wonderful excerpt:

The entity’s environment is still in scope for PCI DSS due to the presence of cardholder data. For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and
there may also be paper reports or receipts with cardholder data. Similarly, in merchant card-not-present environments, such as mailorder/telephone-order and e-commerce, payment card details are provided via channels that need to be evaluated and protected according to PCI DSS.

So far, correct. We agree with this. Exactly as mentioned, PCI is in scope. The question here is: will the HQ get pulled into scope just for transmitting encrypted card data from the POIs?

Let’s look at what causes an environment with card encryption to be in scope (reading further down the paragraph):

a) Systems performing encryption and/or decryption of cardholder data, and systems performing key management functions,

b) Encrypted cardholder data that is not isolated from the encryption and decryption and key management processes,

c) Encrypted cardholder data that is present on a system or media that also contains the decryption key,

d) Encrypted cardholder data that is present in the same environment as the decryption key,

e) Encrypted cardholder data that is accessible to an entity that also has access to the decryption key.

So let’s look at the HQ scope against these five criteria for encrypted card data being in scope. There is no encryption or decryption process performed there. The encrypted cardholder data is isolated from the key management processes. The merchant has no access to, or anything to do with, the decryption key.
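
Just to make the checklist concrete, here is a minimal, purely illustrative sketch (our own construction, not anything published by the council) of how those five criteria translate into an in-scope/out-of-scope decision for an environment that only sees encrypted CHD. The field and function names are ours, invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class EncryptedChdEnvironment:
        """Answers to the five criteria for an environment that only sees encrypted CHD."""
        performs_encryption_or_decryption: bool       # criterion (a)
        performs_key_management: bool                 # criterion (a)
        isolated_from_crypto_and_key_processes: bool  # criterion (b)
        decryption_key_on_same_system_or_media: bool  # criterion (c)
        decryption_key_in_same_environment: bool      # criterion (d)
        entity_can_access_decryption_key: bool        # criterion (e)

    def encrypted_chd_pulls_environment_into_scope(env: EncryptedChdEnvironment) -> bool:
        """True if any of the five criteria apply, i.e. the encrypted CHD
        cannot be treated as out of scope for this environment."""
        return (
            env.performs_encryption_or_decryption
            or env.performs_key_management
            or not env.isolated_from_crypto_and_key_processes
            or env.decryption_key_on_same_system_or_media
            or env.decryption_key_in_same_environment
            or env.entity_can_access_decryption_key
        )

    # The merchant HQ in our scenario: pure pass-through of terminal-encrypted
    # ISO 8583 traffic, no keys, no crypto operations.
    hq = EncryptedChdEnvironment(
        performs_encryption_or_decryption=False,
        performs_key_management=False,
        isolated_from_crypto_and_key_processes=True,
        decryption_key_on_same_system_or_media=False,
        decryption_key_in_same_environment=False,
        entity_can_access_decryption_key=False,
    )
    print(encrypted_chd_pulls_environment_into_scope(hq))  # False -> may be out of scope

None of the criteria fire for the HQ in this scenario, which is the whole argument of this article. The final call still rests with the assessor, of course; the sketch only shows that the standard’s own test does not force the HQ in.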

So now you see the drift. Moving down the paragraph, we find it noted that when an entity receives and/or stores only data encrypted by another entity, and does not have the ability to decrypt that data, it may be able to consider the encrypted data out of scope if certain conditions are met. This is because responsibility for the data generally remains with the entity, or entities, with the ability to decrypt the data or impact the security of the encrypted data.

In other words: Encrypted cardholder data (CHD) is out of scope if the entity being assessed for PCI cannot decrypt the encrypted card data.

So now back to the question: if this is so, then why does the merchant still need PCI? Well, because it’s already provided for above: “For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and there may also be paper reports or receipts with cardholder data.”

So the stores are always in scope. The question we have here is whether the HQ or any other area gets pulled into scope simply for transmitting encrypted CHD as a pass-through to the acquirer. In many ways, this is similar to why PCI considers telco lines out of scope. They simply provide the highway on which all these encrypted messages travel.

Now, of course, the QSA is right about one thing. They do have the final say, because they can still insist on customers doing the segmentation PT even if it’s not needed by the standard. They can impose their own risk-based requirements. They can insist the clients do a full application pentest or ASV scan over all IPs not related to PCI. They can insist on clients getting a pink elephant to dance in a tutu in order to pass PCI. It’s up to them. But guess what?

It’s also up to the customer to change or to get another opinion on this. There are plenty of QSAs about. And once more, not all QSAs are created equal, as explored in our article here. There we debunk common myths, like whether having a local QSA makes any difference (it doesn’t) and whether all QSAs interpret PCI the same way (they don’t), and how important a role independence and conflict of interest should play, especially in scoping and in working for the best interest of the customer rather than peddling services.

So, if you want to have a go with us, or at least just get an opinion on your PCI scope, drop a message to pcidss@pkfmalaysia.com and we will get back to you and sort out your scoping questions!


The Question of QSA Conflict

An interesting conversation over coffee with a client today gave me something to mull over a little. The question brought to the table was how some assessors, while engaged in an audit, brought up other services they offer, like ASV, penetration testing and vulnerability scanning, and how this may look like a conflict of interest.

I will start by proclaiming that we aren’t QSAs. We do have a myriad of certifications, such as ISO and other personal certs in information security, but this article isn’t about our resume. It’s about the ever-important question of the role of the QSA and whether they should be providing advisory services.

Why we chose not to go the QSA route is for another article, but suffice to say, in the same way we work with CBs for ISO projects, we employ the same business model for PCI or any other certification project. We rabidly believe in a clear demarcation between those doing the audit and those doing the implementation and advisory. After all, we have the DNA of statutory auditors, and every single customer or potential customer we have requires a specific conflict check, to ensure independence and that we do not provide consulting work that may jeopardize our opinions when it comes to the audit. Does anyone recall Enron? Worldcom? Waste Management? Goodbye, 90-year-old accounting firm.

We have worked with many QSAs in almost 14 years of doing PCI-DSS, and by QSAs here I mean individuals as well as QSA-Cs (QSA companies). Our group here is collectively made up of senior practitioners in information security and compliance, so we don’t have fresh graduates or juniors going about advising C-level veterans with 20-plus years of experience on how to run their networks or business.

A QSA (Qualified Security Assessor) company, in a nutshell, is a company qualified by the PCI Security Standards Council (PCI SSC) to perform assessments of organizations against the PCI standards. Take note of the word: QUALIFIED. This becomes important because there is a very strict re-qualification program from the PCI SSC to ensure that the quality of QSAs is maintained. Essentially, QSAs are vouched for by the PCI SSC to carry out assessment tasks. Are all QSAs created equal? Probably not; based on our experience, some are better than others in specific aspects of PCI-DSS. The PCI SSC even has a special group of QSAs under their Global Executive Assessor Roundtable (GEAR), which we will touch on later.

The primary function of a QSA company is to evaluate and verify an organization’s adherence to the PCI DSS requirements. This involves a thorough examination of the organization’s cardholder data environment (CDE), including its security systems, network architecture, access controls, and policies, to ensure that they meet the PCI requirements.

Following the assessment, the QSA company will then prepare a Report on Compliance (RoC) and an Attestation of Compliance (AoC), which are formal documents that certify the organization’s compliance status. Please don’t get me started on the dang certificate, because I will lose another year of my life to high blood pressure. These OFFICIAL documents are critical for the organization to demonstrate its commitment to security to partners, customers, and regulatory bodies. The certificate, however, can be framed and hung on the wall of your toilet, where it rightfully belongs. Right next to the toilet paper, which probably has slightly higher value.

Anyway, QSAs have very specific roles defined by the SSC:

– Validating and confirming Cardholder Data Environment (CDE) scope as defined by the assessed entity.
– Selecting employees, facilities, systems, and system components accurately representing the assessed environment if sampling is employed.
– Being present onsite at the assessed entity for the duration of each PCI DSS Assessment or perform remote assessment activities in accordance with applicable PCI SSC assessment guidance.
– Evaluating compensating controls, as applicable.
– Identifying and documenting items noted for improvement, as applicable.
– Evaluating customized controls and deriving testing procedures to test those controls, as applicable.
– Providing an opinion about whether the assessed entity meets PCI DSS Requirements.
– Effectively using the PCI DSS ROC Template to produce Reports on Compliance.
– Validating and attesting as to an entity’s PCI DSS compliance status.
– Maintaining documents, workpapers, and interview notes that were collected during the PCI DSS Assessment and used to validate the findings.
– Applying and maintaining independent judgement in all PCI DSS Assessment decisions.
– Conducting follow-up assessments, as needed

QSA PROGRAM GUIDE 2023

You can see above that there is no advisory, recommendation, consultation or implementation work listed. It’s purely assessment and audit. What we do see, more often than not, is QSAs offering other services under separate entities. This isn’t specifically disallowed, but the SSC does recommend a healthy dose of independence:

The QSA Company must have separation of duties controls in place to ensure Assessor Employees conducting or assisting with PCI SSC Assessments are independent and not subject to any conflict of interest.

QSA Qualification requirements 2023

It’s hard to adjudicate this point, but the one providing the audit shouldn’t be the one providing the consultation and advisory services. Some companies get around this by having a separate arm providing special consultation. Which is where we step in: without doing any organizational gymnastics, we make a clear demarcation of who does the audit and who plays the consultation and advisory role.

The next time you receive a proposal, be sure to ask the pertinent question: are they also providing support and advisory? Because a good part of the project is in that, not so much the audit. We have actually seen cases where the engaged assessor flat out refused to provide any consultation, advisory, templates or anything else to assist the customer, citing conflict of interest, leaving the client hanging high and dry unless they engaged another consultative project with them separately. Is that the assessor’s fault? In theory, the assessor is simply abiding by the requirements for independence. On the other hand, these things should have been mentioned before the engagement: that the bulk of a PCI project is in the remediation, and that guidance and consultation would definitely be needed! It might reek of being a little disingenuous.

It’s frustrating for us when we get pulled in halfway through a project and we ask, well, why haven’t you queried your engaged QSA on this question? Well, because they want another sum of money for their consultative work, or they keep upselling us services that we are not sure we need unless we get their advisory in. What do you think their advisory is going to say? You can see that, while on paper it might be easy to state that independence has been established, in reality it’s often difficult to distinguish where the audit, recommendations, advisory and services start or end, as sometimes it’s all mashed together. Like potatoes.

Here’s another official reference to this issue, FAQ #1562 (shortened):

If a QSA Employee(s) recommends, designs, develops, provides, or implements controls for an entity, it is a conflict of interest for the same QSA Employee(s) to assess that control(s) or the requirement(s) impacted by the control(s).

Another QSA Employee of the same QSA Company (or subcontracted QSA) – not involved in designing, developing, or implementing the controls – may assess the effectiveness of the control(s) and/or the requirement(s) impacted by the control(s). The QSA Company must ensure adequate, documented, and defendable separation of duties is in place within its organization to prevent independence conflicts.

FAQ #1562

Again, it is fairly clear that QSAs providing both assessment and advisory/implementation services are not incorrect in doing so, but they need to ensure that proper safeguards are in place, presumably to be checked thoroughly as part of their requalification, under section 2.2 “Independence” of the QSA requalification document. To save you the reading: there isn’t much in the way of prescriptive measures to ensure this independence, so we’re left with however the company decides on its conflict of interest policies. Our service is to ensure, with confidence, that the advice you receive is indeed independent and, as far as we know, benefits the customer, not the assessor. We have no skin in their services.

In summary, QSAs can theoretically provide services, but these should come separately from the audit, so make sure you get the right understanding before starting off your PCI journey. Furthermore, and more concerningly, we’ve seen QSAs refuse to validate the scope provided to them, citing that this constitutes ‘consulting and advisory’ and needs additional payment. This is literally the first task in a QSA’s list of responsibilities, so call them out on it, or call us in and let us deal with them. These charlatans shouldn’t even be QSAs in the first place if this is what they are saying.

And finally, speaking of QSAs that are worth their salt: the primary one we often work with, ControlCase, has been included in the PCI SSC Global Executive Assessor Roundtable 2024 (GEAR 2024).

https://www.pcisecuritystandards.org/about_us/global_executive_assessor_roundtable/

GEAR is an Executive Committee level advisory board comprising senior executives from PCI SSC assessor companies, and it serves as a direct channel for communication between the senior leadership of payment security assessors and PCI SSC senior leadership.

In other words, if you want to know who the SSC looks to for PCI input, these are the guys. Personally, especially for a complex Level 1 certification, this would be the first group of QSAs I would consider before approaching others, as these are nominated based on reputation, endeavor and commitment to the security standards, not on coughing up money to sponsor events or conferences, looking prominent in dazzling booths and giving out free gifts while ultimately being unable to deliver their projects properly to their clients.

Let us know via email to pcidss@pkfmalaysia.com if you have any queries on PCI-DSS, especially the new version 4.0 or any other compliances such as ISO27001, NIST, RMIT etc!

Major Changes of PCI v4

So now, as we approach the final throes of PCI-DSS v3.2.1, the remaining three weeks are all that is left of this venerable standard before we say farewell once and for all.

PCI-DSS v4.0 is a relative youngster, and we are already spending hours on updates with our customers on the things they need to prepare for. Don’t underestimate v4.0! While it’s not a time to panic, it’s also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides an insight into the major changes we are facing in v4.0.

Bearing in mind that most of the requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little bit of focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let’s start off with the one in the top bin: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

OK, we have underlined and emphasized a few key points in this statement, because we feel they are important. Let’s start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10, with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think 4.0, aside from relocating the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement, for any controls that don’t seem to fall into any of the earlier requirements), reads better. Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of protecting stored account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find 3.4.2, the child of section 3.4, and we need to understand it first before we go out, start shopping for the first DLP system on the market and yell “WE ARE COMPLIANT!”

3.4 talks about displays of FULL PAN. So we aren’t talking about truncated or encrypted PAN here. In theory, if you copy out a truncated PAN or an encrypted PAN, you shouldn’t trigger 3.4.2. It’s specific to full PAN. While we are at it, we aren’t even talking about cardholder data in general. A PAN is part of cardholder data, while not all cardholder data is PAN. Like the Hulk is part of the Avengers, but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason (data analysis, behavioural prediction, stalking, etc.), this isn’t the requirement you are looking for.

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are.

The previous v3.2.1 doesn’t actually state ‘technical controls’, which goes to say that a documentary control, a policy control, or something in the Acceptable Use Policy could also pass as compliant. V4.0 removes that ambiguity. Of course, the policy should be there, but technical controls are specific. It has to be technical. It can’t be: oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear, copying is Ctrl-C and Ctrl-V somewhere else. Relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with the PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw an error, causing system integrity and performance issues. Suffice to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both “documented” and “defined,” it means that the requirement has been both precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of the people with access (the who), why they legitimately need to access/copy/relocate PAN in terms of their business (the why), explicitly authorized by a proper authority (not by themselves, obviously).

Finally, let’s talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I’ve heard of clients who say, hey, no worries, we have logging and monitoring in place for internal users. Or we have a web application firewall in place. Or we have Cloudflare in place. Or we have a thermonuclear rocket in place to release in case we get attacked. None of that is the point: the requirement already assumes ‘remote access’ into the environment. The users have passed the perimeter. It assumes they are already trusted personnel, contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying/relocating PAN. In this case, most people would probably not have a business reason for copying/relocating PAN to their own systems unless for very specific business flow requirements. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details, in an encrypted format onto removable media (like a CD-ROM) and send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That’s pretty specific.

So what passes as a ‘technical control’? A technical control may be as simple as completely preventing the copy/paste or cut/paste ability when accessing via remote access. This can be done in RDP, or by disabling the clipboard in your SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it’s fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is to prevent the copying and relocation of PAN.

Now, that being said, a blanket ban on copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive so they can work while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection. Or control access to the CDE/PAN via a hardened jump server that has its local policy locked down. So the general VPN into company resources may be more lax, but the moment access to PAN is required, the 3.4.2 technical controls come into play.
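
For those wondering what “detecting PAN in a copy” actually looks like under the hood of a DLP-style control, here is a minimal, hypothetical sketch of the detection half: finding candidate PANs in whatever text a user is trying to copy out, using a digit pattern plus a Luhn check. The function names are ours, and a real product does far more (context, egress channels, blocking actions, truncation awareness), so treat this purely as an illustration of the concept:

    import re

    # Candidate card numbers: 13-19 digits, optionally separated by spaces or dashes.
    PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

    def luhn_valid(digits: str) -> bool:
        """Standard Luhn check-digit validation used by payment card numbers."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def contains_probable_pan(text: str) -> bool:
        """True if the text contains something that looks like a full PAN."""
        for match in PAN_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 19 and luhn_valid(digits):
                return True
        return False

    # Example: a clipboard/egress hook would call this before allowing the copy.
    clipboard_content = "Customer ref 12345, card 4111 1111 1111 1111, exp 12/26"
    if contains_probable_pan(clipboard_content):
        print("Block the copy: probable full PAN detected")  # the 3.4.2 control kicks in

The point of the sketch is simply that the control keys off full PAN; truncated or tokenized values would not pass the pattern-plus-Luhn test, which lines up with 3.4.2 being specific to full PAN.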

In the end, you can justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense, and it must not require your users to jump through hoops like circus monkeys.

So there you have it, a breakdown of 3.4.2. We will hop into the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

What the FIM is going on

If you have been doing PCI-DSS for some years, you have probably come across this term called FIM (File Integrity Monitoring), which sometimes absolutely befuddles our customers. They generally think it is part of a wider SIEM or SOAR solution, but that’s not necessarily so. We’ll explore a little of why FIM is important, how it impacts PCI-DSS, some examples of configuration, and what alternatives there are (if any). Here we go!

File Integrity Monitoring is the process of validating the integrity of operating system and application software files. It ensures that files have not been altered or compromised, whether maliciously or accidentally.

  1. Detecting Unauthorized Changes: FIM helps in detecting unauthorized changes to critical system files, configurations, and content files. These changes could be indicative of a breach, malware infection, or insider threat.
  2. Compliance Requirements: Many regulatory standards, such as PCI-DSS, HIPAA, and SOX, require FIM as part of their compliance criteria. It ensures that sensitive data is protected and that the integrity of the system is maintained.
  3. Preventing Data Breaches: By monitoring file changes, FIM can provide early warning signs of a potential data breach. It allows organizations to take proactive measures to prevent unauthorized access to sensitive information.
  4. Enhancing Forensic Analysis: FIM provides detailed logs of file changes, aiding in forensic analysis. It helps in understanding the nature of an attack, the affected files, and the potential impact.
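
To make the “integrity” part concrete, here is a minimal, purely illustrative sketch of the core mechanism behind any FIM tool: take a cryptographic hash baseline of the files you care about, then periodically re-hash and compare. Real products (OSSEC, Tripwire and friends) add real-time hooks, who/what/when attribution, alerting and tamper protection on top of this; the file paths and function names below are our own placeholders:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Hash a file in chunks so large binaries don't blow up memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_baseline(paths: list[str], baseline_file: str = "fim_baseline.json") -> None:
        """Record the known-good hashes of the monitored files."""
        baseline = {p: sha256_of(Path(p)) for p in paths if Path(p).is_file()}
        Path(baseline_file).write_text(json.dumps(baseline, indent=2))

    def check_against_baseline(baseline_file: str = "fim_baseline.json") -> list[str]:
        """Return a list of alerts for changed or missing files."""
        baseline = json.loads(Path(baseline_file).read_text())
        alerts = []
        for path, known_hash in baseline.items():
            p = Path(path)
            if not p.is_file():
                alerts.append(f"MISSING: {path}")
            elif sha256_of(p) != known_hash:
                alerts.append(f"CHANGED: {path}")
        return alerts

    # Example usage: watch a couple of hypothetical critical files.
    # build_baseline(["/etc/passwd", "/etc/ssh/sshd_config"])
    # print(check_against_baseline())

Everything else in a FIM product is about doing this continuously, securely and with enough event detail to satisfy the logging requirements discussed below.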

Let’s pause for now and see if a common antivirus/antimalware product can take over this compliance requirement without deploying a specific FIM. Why? Because all companies generally have some sort of antivirus running on their systems, and all companies are stingy with their compliance spending, so part of our job is to see if current technology is sufficient to address compliance requirements. The difference between antivirus and FIM boils down to the reason for their existence, their meaning to life, the universe and everything. It’s 42!

While FIM focuses on monitoring the integrity of files, antivirus and antimalware solutions are designed to detect and remove malicious software.

  • Antivirus: Primarily targets known viruses and relies on signature-based detection. It may not detect unauthorized changes to files unless they are associated with a known virus signature.
  • Antimalware: Broader in scope, antimalware solutions target various malicious software, including viruses, spyware, and ransomware. Like antivirus, it may not detect subtle unauthorized file changes.

FIM complements these solutions by providing an additional layer of security, focusing on the integrity of files rather than just malicious content.

FIM also differs from Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions. That being said, it’s common for these systems to be bundled along with FIM solutions, so while a SIEM may have FIM, it does not follow that a FIM has SIEM. They are like, maybe, a dysfunctional family who sometimes get together over Chinese New Year reunions.

  • SIEM: SIEM solutions collect and analyze log data from various sources to provide real-time analysis of security alerts. While SIEM can include FIM as a component, it encompasses a broader range of security monitoring functions.
  • SOAR: SOAR solutions focus on automating and orchestrating security operations. They help in coordinating various security tools and processes. Unlike FIM, which is more focused on file integrity, SOAR aims to streamline security operations and response.

FIM makes its appearance in PCI-DSS v4.0 in Requirement 10, specifically 10.2, 10.3, 10.4, 10.5 and 10.7, and further on in 11.5, 12.10 and A3.5.1.

In 10.2, PCI basically wants FIM to be part of the logging requirements in terms of what to capture, retention, response and so on. Make sure your FIM is configured to monitor the critical files, and that the FIM logs capture user and process details and who is responsible for the change event, in real time. Ensure alerts are generated for change events by privileged accounts, which can be further correlated to create an automated incident. Also, make sure changes to log file security settings or the removal of log files trigger real-time alerts with exhaustive event details. All creation and deletion activities should be captured as well, and all event details for the FIM log files must be as per 10.2.2.

10.3.4 makes specific mention of FIM, but there is some confusion around this requirement: “File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data cannot be changed without generating alerts.” Obviously, if you try to monitor a log file for changes and alert every time that file changes, your SIEM or SOAR will light up like Christmas. By their nature, log files are supposed to change! So to avoid the noise, ensure log files are monitored for changes in security settings, like permissions or ownership. If a log file is deleted, that is also an anomaly. And for those logs that are archived or digitally signed, if any changes are made to these, your FIM should be able to detect it.
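
In other words, for audit logs the trigger is not “the file changed” (it will, constantly) but “something about the file changed that shouldn’t”. A hypothetical sketch of that distinction, using our own invented names, watching permissions, ownership and unexpected shrinkage rather than normal growth:

    import os
    from dataclasses import dataclass

    @dataclass
    class LogState:
        mode: int    # permission bits
        uid: int     # owner
        gid: int     # group
        size: int    # bytes

    def snapshot(path: str) -> LogState:
        st = os.stat(path)
        return LogState(mode=st.st_mode, uid=st.st_uid, gid=st.st_gid, size=st.st_size)

    def audit_log_alerts(path: str, previous: LogState) -> list[str]:
        """Alert on the anomalies 10.3.4 cares about, not on normal log growth."""
        if not os.path.exists(path):
            return [f"DELETED: {path}"]
        alerts = []
        current = snapshot(path)
        if current.mode != previous.mode:
            alerts.append(f"PERMISSIONS CHANGED on {path}")
        if (current.uid, current.gid) != (previous.uid, previous.gid):
            alerts.append(f"OWNERSHIP CHANGED on {path}")
        if current.size < previous.size:  # logs should only grow between rotations
            alerts.append(f"LOG TRUNCATED on {path}")
        return alerts

A real deployment would also need to understand log rotation (which legitimately shrinks or recreates files) and cover archived or signed logs with a content hash, as in the earlier sketch.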

Requirement 11 doesn’t change much for v4.0; the main portion for FIM is in 11.5.2 and it remains pretty much the same. Requirement 12.10.5 does provide an explicit requirement to include FIM alerts in incident management and response. But you knew that already, right?

There are plenty of FIM solutions out there. The common one we see is OSSEC, which was previously deployed together with AlienVault. Tripwire is also a well-known name in the FIM arena. If you want to explore a built-in Linux option, auditd might be worth your time. For those unfamiliar with auditd, it’s a component that provides auditing functionality for the Linux kernel. It’s widely used for security monitoring, system troubleshooting, and compliance reporting. Configuring auditd might be intimidating to some at first, but here are some rules to get you started, found at this link:

https://github.com/linux-audit/audit-userspace/blob/master/rules/30-pci-dss-v31.rules

In summary, it covers the following areas (the config has been omitted in this article; you can go to the site to get the details):

  1. User Access Linking (10.1): Implicitly met by the audit system.
  2. User Access to Cardholder Data (10.2.1): Requires a watch on the database, excluding daemon access. (Path to the database must be specified.)
  3. Logging Administrative Actions (10.2.2): Enable tty logging for su and sudo. Special cases for systemd-run and pkexec are included.
  4. Monitoring Privilege Escalation Configuration (10.2.2): Watches changes to /etc/sudoers and /etc/sudoers.d/.
  5. Access to Audit Trails (10.2.3): Monitors access to /var/log/audit/ and specific audit commands.
  6. Invalid Logical Access Attempts (10.2.4): Naturally met by PAM.
  7. Logging of Identification & Authentication (I&A) Mechanisms (10.2.5.a): Handled by PAM.
  8. Logging of Privilege Elevation (10.2.5.b): Monitors specific syscalls related to privilege elevation.
  9. Logging Account Changes (10.2.5.c): Watches changes to account-related files like /etc/group, /etc/passwd, etc.
  10. Time Data Protection (10.4.2b): Places rules to check time synchronization.
  11. Securing Audit Trails (10.5): Includes various measures to protect audit logs, limit viewing, prevent unauthorized modifications, back up files, and monitor log modifications.

So, there you go. Lastly, since PCI v4.0 came out, the council seems to have made a distinction between change-detection mechanisms (CDM) and file integrity monitoring, stating that FIM is part of CDM, sort of like a subset. I suppose this gives a little more leeway for companies to implement other types of CDM besides FIM, although FIM is probably the only one that can address all the above requirements comprehensively and without any need for compensating controls. But just for some ideas, below is a list of other CDMs that can possibly address the FIM functionality in part, automated or manual:

  1. Version Control Systems: These systems track changes to files and code within a development environment. They allow developers to see what was changed, who changed it, and why. Tools like Git, Subversion, and Mercurial are examples of version control systems that provide change detection.
  2. Database Monitoring Tools: These tools monitor changes to database schemas, configurations, and content. They can alert administrators to unauthorized alterations, additions, or deletions within the database. Tools like Redgate SQL Monitor or Oracle Audit Vault are examples.
  3. Configuration Management Tools: Configuration management tools like Ansible, Puppet, and Chef can detect changes in system configurations. They ensure that systems are consistently configured according to predefined policies and can alert administrators to unauthorized changes.
  4. Network Anomaly Detection Systems: These systems monitor network behavior and alert to changes that may indicate a security threat. They can detect changes in traffic patterns, unusual login attempts, or alterations to network configurations.
  5. Endpoint Detection and Response (EDR) Solutions: EDR solutions monitor endpoints for signs of malicious activities and changes. They can detect changes in system behavior, file activities, and registry settings, providing a broader view of potential security incidents.
  6. Log Monitoring and Analysis Tools: Tools like Splunk or LogRhythm analyze log files from various sources to detect changes in system behavior, user activities, or security settings. They can provide real-time alerts for suspicious changes.
  7. Digital Signature Verification: Some systems use digital signatures to verify the integrity of files and data. Any alteration to the digitally signed content would cause a verification failure, alerting to a potential unauthorized change.
  8. Cloud Security Tools: With the rise of cloud computing, tools like AWS Config or Azure Security Center provide change detection for cloud resources. They monitor configurations, permissions, and activities within the cloud environment.

Again, we would highly recommend that a FIM be used, but where it is not possible in a given environment, for instance a cloud environment, then other CDMs are possible. If you need to know more about FIM and PCI, or any compliance in general, drop us a note at pcidss@pkfmalaysia.com and we will get back to you immediately!

Recap on PCI v4.0: Changes in The 12 Requirements

So here we are in 2023, and PCI v4.0 is on everyone’s mind. Most of our customers have finished their 2022 cycle, and some are going through their 2023 cycle. Anyone certifying this year will, in general, be certified against v4.0 in their next cycle in 2024. V3.2.1 will be sunset in March 2024, so as a general rule of thumb, anyone going for certification/recertification in 2024 should hop onto v4.0.

Take also special note of the requirements where statements are “Best practices until 31 March 2025, after which these requirements will be required and must be fully considered during a PCI DSS assessment “.

It doesn’t mean that you can actively ignore these requirements until 2025; rather, use this period of around two years as a transition period for your business to move onto these newer requirements. So, in short: start now. One of the requirements that gets a lot of flak is 3.5.1.2, which covers disk-level encryption; in other words, technology like TDE being used to address the encryption requirements. This is no longer a get-out-of-jail-free card, because after March 2025, if you are not using disk-level encryption on removable media, you will need to implement (on top of TDE, if you still insist on using it) one of the four horsemen of the apocalypse: truncation, tokenization, encryption or hashing. And before you get too smart and say yes, you are already using encryption, i.e. transparent or disk-level encryption, PCI is one step ahead of you, you Maestro of Maleficent Excuses, as they spell out “through truncation or a data-level encryption mechanism”.
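
To illustrate the difference: disk-level encryption protects data only while it sits on the disk; anyone who can query the database still sees the PAN in the clear. The data-level mechanisms operate on the PAN value itself. Here is a minimal, purely illustrative sketch of two of the four horsemen, truncation and a keyed hash (v4.0 expects hashes of PAN to be keyed cryptographic hashes). The key handling below is a placeholder and nothing like a real key management setup, and the first-six/last-four split should be checked against the current card-brand truncation rules:

    import hmac
    import hashlib

    def truncate_pan(pan: str) -> str:
        """Truncation: keep at most the first six and last four digits."""
        return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

    def keyed_hash_pan(pan: str, secret_key: bytes) -> str:
        """Keyed cryptographic hash of the full PAN (HMAC-SHA256 here).
        The key must live in a proper key management system, not in code."""
        return hmac.new(secret_key, pan.encode(), hashlib.sha256).hexdigest()

    pan = "4111111111111111"                  # test number, not a real card
    key = b"demo-only-key-from-your-KMS"      # placeholder; never hard-code keys
    print(truncate_pan(pan))                  # 411111******1111
    print(keyed_hash_pan(pan, key))           # stable value usable for matching

Either of these, applied to the data itself, satisfies the “another mechanism on top of disk-level encryption” idea in a way that TDE alone does not.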

So, for v4.0 it’s probably easier to just break it up into

a) SAQs v4.0 – Self assessment

This is straightforward: a lot of changes have occurred to some of the venerable SAQs out there, such as SAQ A. I’ll cover that in another article.

b) ROC v4.0 – from QSA/ISA

Most QSAs should be able to certify against v4.0. You can check the PCI-DSS QSA lists; they have “** PCI DSS v4 Assessors” under their names. There may also be some shakeout where underqualified QSAs do not go through the training to upgrade to v4 assessors. On another note, ISAs generally don’t have this requirement to upgrade to v4.0, although it’s recommended.

Now, perhaps, is a good time to go through a very broad overview of v4.0 and explain why some of these changes have been made.

Changes to Requirements

For this overview, we will first look at the 12 requirement statements and see where the changes are. In a big move, the council has updated the main requirements (not so subtly), getting rid of many of the tropes of previous incarnations of the standard. Let’s start here.

Requirement 1 is now changed to “Install and Maintain Network Security Controls” as opposed to “Install and maintain firewall configuration to protect CHD.”

This is a good change, even if the wording is still a little clumsy. After all, network security controls are defined broadly and may not just be a service or product like a firewall, a NAC or TACACS. They could be access controls, AAA policies, IAM practices, password policies, remote access controls, etc. So how do you ‘install’ such policies or practices? A better word would be ‘implement’, but I think that’s nitpicking. Install is an OK word here, but every time I hear it, I think of someone installing a subwoofer in my car or an air-cond in my rental unit. Overall, though, it’s a lot better than just relying on the word firewall, since in today’s environment a firewall may no longer function as just a firewall, and integrated security systems, where multiple security functions are rolled into one, are fairly common.

Requirement 2 now reads as “Apply Secure Configurations to All System Components.” Which is a heck of a lot better than “Do not use vendor supplied defaults for system passwords and other security parameters.” The latter always sounded so off, like a foster child that never belonged to the family, because it reads more like a control objective, or part of a smaller subset of a control area, as opposed to an overarching requirement. It just made PCI sound juvenile compared to much better written standards like ISO, NIST or CIS.

The Requirement 3 change is subtle: from “Protect stored cardholder data” to “Protect stored account data”. They removed cardholder data and replaced it with account data. It generally means the same thing, but with account data they possibly want to broaden the applicability of the standard. After all, cards themselves may soon be obsolete; all the information might be contained in a mobile device, or authenticated through virtual cloud services, so the traditional notion of a person ‘holding a card’ may no longer apply.

Requirement 4 reverts back to cardholder data, with the new 4.0 stating “Protect Cardholder Data with Strong Cryptography During Transmission Over Open, Public Networks”. Which is sometimes frustrating: if you have decided to call it account data moving forward, just call it account data and don’t revert back to cardholder data. This requirement also changed from the older “Encrypt transmission of cardholder data across open, public networks”. It may sound the same, but it’s different. It removes the age-old confusion of: what if I encrypt my data first and only then transmit it? Under the previous wording, it doesn’t matter; the transmission itself still needs to be encrypted, as written. Under the new wording, you are now able to encrypt the data and send it across an unencrypted channel (though not recommended) and still be in compliance. Ah, English.

“Requirement 5: Protect All Systems and Networks from Malicious Software” is a definite upgrade from the old “Requirement 5: Protect all systems against malware and regularly update anti-virus software or programs”. This gives better context than the anti-virus trope, where QSAs insist on every system having an antivirus even if it’s running on a VAX, or even if it brings down the database with its constant updates. Now, with a broader understanding that anti-virus is NOT the only solution to malicious software threats, we are able to move to a myriad of endpoint security options that serve the requirement better. So long, ClamAV for Linux and Unix!

Requirement 6 reads about the same, except they changed the word ‘applications’ to ‘software’, i.e. from “Develop and maintain secure systems and applications” to “Develop and Maintain Secure Systems and Software”. I am not sure why, but I suppose much of the software that may serve as an attack vector may not be classified as an application. It could be middleware, or an API, etc.

By the way, just to meander a little: I noted that in the v4.0 requirements, every word’s first letter is capitalised, except for minor words like conjunctions, prepositions and articles. This seems to be in line with some published standards such as CIS (but not NIST), and it’s basically just an interesting way to write it. This style is called “Title Case”, and It Can Be Overused and Abused Quite a Lot if We Are Not Careful.

Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know vs the previous version, Requirement 7: Restrict access to cardholder data by business need to know. Again, this is more expansive, as system components (we assume those in scope) may not just contain cardholder data; they may also influence the security posture of the environment overall. Where previously you might say, well, it’s only access to the account data that requires ‘business need to know’ or least privilege, now access to authentication devices, the SIEM, or any security-based service that influences the security posture of the environment must also be restricted to business need to know. Again, this is a good thing.

Requirement 8: Identify Users and Authenticate Access to System Components vs the previous version, “Identify and authenticate access to system components”. This seems like just an aesthetic fix, since, yes, you probably want to identify USERS as opposed to identifying ACCESS. It could mean the same thing, or it may not. A smart alec somewhere probably told the QSA: hey, we identified the access properly. It came from login 24601 from the bakery department at 6 am yesterday. Do we know the user? No, but PCI just needs us to identify the ‘access’ and not the user, right? OK, smart alec.

Requirement 9: Restrict Physical Access to Cardholder Data is the only one that does not have any changes, except for the aforementioned Title Case changes.

Requirement 10: Log and Monitor All Access to System Components and Cardholder Data vs Track and monitor all access to network resources and cardholder data. So two things changed here: “Log” vs “Track”, and System Components vs Network Resources. I personally find the first change a bit limiting, saying just log instead of ‘track’, but I know why they did it: tracking is redundant if you are already monitoring. So in another dimension somewhere, the same smart alec might state that nowhere did it tell us to ‘log’ or keep logs in this statement; they just want us to track/monitor users. So it’s just for clarity that from here on, you log and monitor, not just track/monitor. The second change is very good, because now there is no ambiguity for non-network resources. We did actually come across a client stating that this did not apply to them because they do not put their critical systems on the network and only use terminal access to them, therefore there is no need to log or monitor. The creativity of these geniuses knows no bounds when it comes to avoiding requirements.

Requirement 11: Test Security of Systems and Networks Regularly vs Regularly test security systems and processes. Moving the word ‘regularly’ is just for aesthetic reading, but the new wording strings together better and, again, removes ambiguity. For one thing, the older requirement tells us to test ‘security systems’. Most workstations et al. would not be defined as ‘security systems’; I would define a security system as one that contributes to the security posture of a company: an authentication system, a logging system, the NAC, the firewall, etc. Of course, this isn’t what PCI meant, and they realised, snap, English is really a cruel language. “Security systems” does not equal “security of systems”. Those two letters changed everything. Now, systems means any system in scope, not just one that influences security; we need to test the security of all systems in scope. The second change, removing ‘processes’ and inserting ‘networks’, is also better, I agree. I did have a client asking me, how do we ‘test processes’ for PCI? Do we need to audit and check the human process of doing something? While that is true of an audit, that’s not the spirit of this requirement. This is about technical testing, i.e. scans, penetration testing, etc. So rightly, they removed ‘processes’ and inserted ‘networks’, which also clears up the ambiguity around performing network penetration testing as well as application penetration testing.

Again, I just want to add that all of this is actually clarified in the sub-controls in both v3.2.1 and v4.0, but if someone were just to skate through PCI reading the main requirement titles, I can see where misunderstandings may occur with the old titles.

Finally, Requirement 12: Support Information Security with Organizational Policies and Programs is an upgrade from the previous Maintain a policy that addresses information security for all personnel. The previous title was just clumsy. Many clients understood it to mean a single policy, one information security policy that needed to be drawn up, because it says Maintain A Policy. One Policy to rule them all. And this policy governs information security for all humans, which doesn’t make sense, unless the ‘for’ here means that the policy needs to be adhered to by all personnel, not that the personnel are the subjects of the information security. Yikes. The newer wording makes more sense: have your policies and programs support information security overall. Not the information security of your people, but information security, period.

So just by reading the titles (and not yet doing a deep dive), we can see the improvement in clarity. There is more function in each sentence, there is more of an overarching purpose to it and, most of all, it looks and reads more professionally, putting PCI closer to the stately tomes of ISO, CIS or NIST.

While waiting for the next deep dive article, drop us a note at pcidss@pkfmalaysia.com if you have any queries at all about PCI, ISO27001, NIST, SOC or any standard at all. Happy New Year, all!
