
Major Changes of PCI v4

So as we approach the final throes of PCI-DSS v3.2.1, only 3 weeks remain of this venerable standard before we say farewell once and for all.

PCI-DSS v4.0 is a relative youngster and we are already spending hours with our customers on updates about the things they need to prepare for. Don’t underestimate v4.0! While it’s not a time to panic, it’s also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides an insight into the major changes we are facing in v4.0.

Bearing in mind that most of the requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little bit of focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let’s start off with the one at the top of the list: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

Ok, we have underlined and emphasized a few key points in this statement, because we feel they are important. Let’s start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10 with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think 4.0, aside from relocating the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement for any controls that don’t seem to fall into any other earlier requirements), reads better. Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of protecting stored account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find 3.4.2, the child of this 3.4 section, and we need to understand it first before we go out, buy the first DLP system on the market and yell out “WE ARE COMPLIANT!”

3.4 talks about displays of FULL PAN. So we aren’t talking about truncated or encrypted PAN here. So in theory, if you copy out a truncated PAN or encrypted PAN, you shouldn’t trigger 3.4.2. It’s specific to full PAN. While we are at it, we aren’t even talking about cardholder data. A PAN is part of cardholder data, while not all cardholder data is PAN. Like the Hulk is part of the Avengers but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason – data analysis, behavioural prediction, stalking etc. – this isn’t the requirement you are looking for.

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are.

The previous v3.2.1 doesn’t actually state ‘technical controls’, which goes to say that if it’s a documentary control, or a policy control, or something in the Acceptable Use Policy, it can also pass off as compliant. V4.0 removes that ambiguity. Of course, the policy should be there, but technical controls are specific. It has to be technical. It can’t be: oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear, copying is Ctrl-C and Ctrl-V somewhere else. Relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw up an error – causing system integrity and performance issues. Suffice to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both “documented” and “defined,” it means that the requirement has been both precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of people with access (the who), why they legitimately need to access/copy/relocate PAN in terms of their business (the why), explicitly authorized by a proper authority (not themselves, obviously).

Finally, let’s talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I’ve heard of clients who say, hey, no worries, we have logging and monitoring in place for internal users. Or we have a web application firewall in place. Or we have Cloudflare in place. Or we have a thermonuclear rocket in place to release in case we get attacked. This control already implies ‘remote access’ into the environment. The users have passed the perimeter. It implies they are already trusted personnel, or contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying/relocating PAN. In this case, most people would probably not have a business reason for copying/relocating PAN to their own systems unless for very specific business flow requirements. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details, in an encrypted format onto removable media (like a CD-ROM) and send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That’s pretty specific.

So what passes off as a ‘technical control’? A technical control may be as simple as completely preventing copy/paste or cut/paste when accessing via remote access. This can be done in RDP, or by disabling the clipboard via SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it’s fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is to prevent the copying and relocation of PAN.
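
To make that concrete, here’s a minimal, hypothetical sketch (assuming a Windows RDP session host, and assuming the “Do not allow Clipboard redirection” policy maps to the fDisableClip registry value, which is our understanding). In practice you would push this through Group Policy rather than a script; this only illustrates how small such a technical control can be.

```python
# Hypothetical sketch: disable clipboard redirection for RDP sessions on a
# Windows session host. Assumption: the "Do not allow Clipboard redirection"
# policy corresponds to the fDisableClip DWORD under the Terminal Services
# policy key. Must be run with administrative privileges.
import winreg

TS_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

def disable_rdp_clipboard() -> None:
    # Create/open the policy key with write access and set fDisableClip = 1.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, TS_POLICY_KEY, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "fDisableClip", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_rdp_clipboard()
    print("RDP clipboard redirection disabled (applies to new sessions).")
```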

Now, that being said, a blanket ban on copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive to work on while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection. Or control access to the CDE/PAN via a hardened jump server that has its local policy locked down. So the general VPN into company resources may be more lax, but the moment access to PAN is required, the 3.4.2 technical controls come into play.

In the end, you could justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense and it must not require your users to jump through hoops like a circus monkey.

So there you have it, a breakdown of 3.4.2. We will hop into the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

Zero Trust for 2024

As we enter the new year, let’s start off with a topic that most cybersecurity denizens would have heard of, and clarify it a little.

Zero Trust.

It seems as good a place as any to start 2024 off with the pessimism that accompanied the end of last year – the spate of cybersecurity attacks in 2023 gave us a taste of what is to come – insurance company, check; social security, check; the app with our vaccination information, check. While breaking down those attacks is meant for another article, what we are facing in the coming year is not just more of the same: many more, and more advanced, attacks are bound to happen.

While Zero Trust is simply a concept – one of many – to increase resistance to attacks or breaches, it’s by no means a silver bullet. There is NO silver bullet for this. We are under constant siege in this information warfare, constantly balancing the need for sharing against the need for protection. As they say, the safest place would be in a cave. But that’s not living, that’s just surviving. If you need to go somewhere, you need to fly, so you have information with the airlines. If you need to do banking, you have information with the banks. If you need to conduct your daily shopping online, you are entrusting the likes of Lazada et al. with information that you otherwise might not provide.

So Zero Trust isn’t about conducting zero transactions; it’s basically a simple principle: trust no one, verify everything. Compare it to the more traditional “trust but verify” approach, which assumed that everything inside an organisation’s network should be trusted, even if we did verify it. Here’s a breakdown of the concept, in hopefully simpler terms.

The Basic Premise: Imagine a company as a fortified castle. In the old days, once you were inside the castle walls, it was assumed you belonged there and could roam freely. At least this is based on the limited studies we have done by binge watching Game of Thrones. All historical facts of the middle ages can be verified through Game of Thrones, including the correct anatomy of a dragon.

Back to the analogy: what if an enemy disguised as a friend managed to get inside? They would potentially have access to everything. Zero Trust Architecture operates on the assumption that threats can exist both outside and inside the walls. Therefore, it verifies everyone’s identity and privileges, no matter where they are, before granting access to the castle’s resources. The 3 keys to remember are:

  1. Never Trust, Always Verify: Zero Trust means no implicit trust is granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned). Basically, we are saying: I don’t care where you are or who you are, you are not getting access to this system until I can verify who you are.
  2. Least Privilege Access: Individuals or systems are given the minimum levels of access — or permissions — needed to perform their tasks. This limits the potential damage from incidents such as breaches or employee mistakes. We see this issue a lot, whereby a C-level person insists on having access to everything even if he doesn’t necessarily know how to navigate a system without a mouse. When asked why, they say, well, because I am the boss. No. In Zero Trust, in fact, because you are the boss, you shouldn’t have access to a system that does not require your meddling. Get more sales and let the tech guys do their job!
  3. Micro-Segmentation: The network is broken into smaller zones to maintain separate access for separate parts of the network. If a hacker breaches one segment, they won’t have access to the entire network.

The steps you can follow to implement the concept of Zero Trust:

Identify Sensitive Data: Know where your critical data is stored and who has access to it. You can’t protect everything. Or at least not with the budget you are given, which for most IT groups is usually slightly more than the company allocates to the upkeep of the office cat. So data identification is a must-have. Find out what data you most want to protect and spend your shoestring budget protecting it!

Verify Identity Rigorously: Use multi-factor authentication (MFA) and identity verification for anyone trying to access resources, especially important resources like logging systems, firewalls, external web servers, etc. This could mean something you know (a password), something you have (a smartphone or token), or something you are (biometrics). It used to cost a mortgage to implement things like this, but over the years cheaper solutions which are just as good have become available.
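
For illustration, the “something you have” factor is very often a time-based one-time password (TOTP, RFC 6238) generated on a phone app. Here is a minimal sketch of how the server-side check works; the base32 secret is a made-up placeholder, and a real deployment would use a vetted library and secure secret storage.

```python
# Minimal TOTP (RFC 6238) sketch: the "something you have" factor in MFA.
# The shared secret below is a placeholder for illustration only.
import base64
import hashlib
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, step: int = 30, digits: int = 6,
         at: Optional[float] = None) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, submitted: str, drift: int = 1) -> bool:
    # Accept the previous/next time step as well, to tolerate clock drift.
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, at=now + i * 30), submitted)
               for i in range(-drift, drift + 1))

if __name__ == "__main__":
    SECRET = "JBSWY3DPEHPK3PXP"   # placeholder secret, not a real one
    print("Current code:", totp(SECRET))
    print("Verified:", verify(SECRET, totp(SECRET)))
```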

Contextual Access: Access decisions should consider the context. For example, accessing sensitive data from a company laptop in the office might be okay, but trying to access the same data from a personal device in a coffee shop might not be. This may not be easy, because with mobile devices you are basically accessing top secret information on the same device you use to watch cats playing the piano. It’s a nightmare for IT security – but again, this requires discipline. If you honestly need to access the server from Starbucks, then implement key controls like MFA, VPN and layered security, and do it from a locked-down system.
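
As a rough illustration of what “context” means in an access decision, here is a hypothetical policy check. The attribute names and rules are made up for the example; in a real deployment these signals would come from your identity provider, MDM and VPN/network layer.

```python
# Hypothetical contextual-access check: same user, different context,
# different answer. All attributes and rules are illustrative only.
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    mfa_passed: bool        # identity verified with a second factor
    device_managed: bool    # corporate, hardened endpoint vs personal device
    network: str            # "office", "vpn" or "public"
    data_sensitivity: str   # "normal" or "sensitive"

def allow(ctx: AccessContext) -> bool:
    # Never trust, always verify: no MFA, no access, regardless of location.
    if not ctx.mfa_passed:
        return False
    # Sensitive data only from managed devices on trusted networks.
    if ctx.data_sensitivity == "sensitive":
        return ctx.device_managed and ctx.network in ("office", "vpn")
    return True

if __name__ == "__main__":
    office = AccessContext("alice", True, True, "office", "sensitive")
    cafe = AccessContext("alice", True, False, "public", "sensitive")
    print(allow(office))   # True  - managed laptop in the office
    print(allow(cafe))     # False - personal device in a coffee shop
```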

Inspect and Log Traffic: Continuously monitor and log traffic for suspicious activity. If something unusual is detected, access can be automatically restricted. SOAR and SIEM products have advanced considerably over the years and today there are many solutions that do not require you to sell a kidney to use. This is beneficial, as small companies are often targeted for attacks, especially when these smaller companies service larger companies.

In the end, it all comes down to the benefits of adopting this approach.

Enhanced Security: By verifying everything, Zero Trust minimizes the chances of unauthorised access, thereby enhancing overall security. Hopefully. Of course, we may still have those who are authorised but have malicious intent, which is much harder to protect against.

Data Protection: Sensitive data is better protected when access is tightly controlled and monitored. This equates to less quarter given to the threat actors out there.

Adaptability: Zero Trust is not tied to any one technology or platform and can adapt to the changing IT environment and emerging threats.

On the downside, there are still some challenges we need to surmount:

Complexity: Implementing Zero Trust can be complex, requiring changes in technology and culture. It’s not a single product but a security strategy that might involve various tools and technologies. This is not just a technical challenge, but a process and cultural change that may take time to adapt to.

User Experience: If not implemented thoughtfully, Zero Trust can lead to a cumbersome user experience with repeated authentication requests and restricted access. This is a problem we see a lot, especially in finance and insurance – user experience is key – but efficiency and security are like oil and water. Eternal enemies. Vader and Skywalker. Lex and Supes. United and Liverpool. Pineapple and Pizza.

Continuous Monitoring: Zero Trust requires continuous monitoring and adjustment of security policies and systems, which can be resource-intensive. We’ve seen implementations of SIEM and SOAR products that produce so many alerts and alarms that they make no sense anymore. These all become noise and the effect of monitoring is diluted.

In summary, in an era where cyber threats are increasingly sophisticated and insiders can pose as much of a threat as external attackers, Zero Trust Architecture offers a robust framework for protecting an organisation’s critical assets. It’s about making our security proactive rather than reactive and ensuring that the right people have the right access at the right times, and under the right conditions. It’s culturally difficult, especially in Malaysia, where, I have to admit, our innate trust of people and our upbringing mean we almost always hold the door open for the guy behind us to walk in, especially if he is dressed like the boss. We hardly ever turn around and ask, “Who are you?” because we are such nice people in this country.

But adopt we must. For any organisation looking to bolster its cybersecurity posture, Zero Trust isn’t just an option; it’s becoming a necessity. At PKF we have several services and products promoting Zero Trust – contact us at avantedge@pkfmalaysia.com to find out more. Happy New Year!

Gearing Up: How New Cybersecurity Guidelines Accelerate Automotive Industry Security

So here you are, in your spanking new SUV that is fully EV and fully automated, with the most state-of-the-art systems built in. You get into the car, switch everything on, put on your favourite tune and head off to work. Suddenly, out of nowhere, your speakers go bonkers and an ominous voice says, “Now I’ve got you…”. Your steering decides to turn despite your best efforts to right it, the accelerator depresses despite you taking your foot off the pedal, and your brakes don’t work anymore. You watch helplessly as your car flies over the embankment at 120 km an hour.

Homicide by car. Open the pod bay doors, HAL.

This seems far removed from current reality, but it might not be as far as we think.

Cyberattacks have been on the rise in the traditional automotive industry in recent years, as cars become more dependent on circuits and electronics as opposed to mechanics and gaskets.

Connectivity defines the modern vehicle. With some cars containing over 100 million lines of code and processing nearly 25GB of data per hour, computerization radically reimagines mobility – enabling telematics, infotainment and autonomous drive capabilities that were unthinkable barely a decade ago. With this software-driven transformation, securing IT components against cyber risks grows ever more vital. As showcased by researchers commandeering functions like braking and steering via consumer Wi-Fi or compromised infotainment apps, hackers now have pathways into safety-critical vehicle controls. Highly automated models promise even larger attack surfaces.

In the future, mechanics will be phased out in favour of electronics engineers to fix cars. You would go to an electronics shop instead of a mechanic’s shop. Say goodbye to the toothy uncle with the towel around his shoulder, shaking his leg in his greasy shirt.

Bearing this in mind, the Japanese automotive industry is making serious efforts to improve cybersecurity. The Japan Automobile Manufacturers Association (JAMA) and the Japan Auto Parts Industries Association (JAPIA) both formed cybersecurity working groups. These two collaborated in 2019 to develop the JAMA/JAPIA Cybersecurity Guidelines, and on March 31, 2022, a second version was released to help steer the industry toward a more cyber-resilient course. Spanning 156 requirements aligned to internationally recognized standards, the guidelines furnish a sector-specific blueprint for fortifying defenses.

Who Do the Guidelines Target?

Given deepening connectivity between various players, the guidelines take broad aim across the mobility ecosystem:

  • Automobile manufacturers
  • Major Tier 1 parts suppliers
  • Software and semiconductor vendors tightly integrated into products
  • Telecommunications carriers facilitating connectivity
  • Fleet operations centers managing vehicle data
  • Components manufacturers farther down supply tiers
  • Aftermarket service providers accessing internal buses
  • Dealership networks bridging manufacturers and consumers
  • Academic partners feeding talent pipelines

Essentially, any entity handling sensitive intellectual property or providing critical products/services supporting vehicle R&D, manufacturing, sales, maintenance or communications should adhere to the prescribed cyber controls. This is fairly normal; as with other standards out there, sub-contractors usually take the hit, as these standards are pushed down from the top.

While the guidelines focus on securing corporate IT environments, they spotlight risks from the increasing convergence of enterprise and industrial assets. As connected platforms, analytics and cloud infrastructures provide gateways for adversaries into production systems, shoring up corporate IT protection grows imperative.

Three-Year Roadmap for Enhancing Cybersecurity Posture

Given the significant dedication required to properly implement comprehensive cybersecurity management programs, requirements are divided into three priority tiers reflecting basic, intermediate and advanced measures. The purpose of this is to demonstrate the minimum necessary countermeasures that must be used regardless of company size. This division allows organizations to methodically elevate their security stature over a three-year adoption roadmap:

Level 1 – Basic Security Hygiene (Mandatory):

The 35+ non-negotiable Level 1 controls target universals like access management, malware defenses, monitoring fundamentals, compliance auditing, encryption, and security training. These form the basic cyber hygiene mandatory across all auto sector entities. These requirements are intended to build a chain of security and trust between companies and their business partners and are also applicable to small and medium-sized enterprises. Non-automotive industries might do well to use some of these as baseline cybersecurity practices too. It’s basically cybersecurity hygiene. And we all know Japan has the best hygiene in the world, right?

Level 2 – Best Practices (2 Years):

An additional 60+ intermediate requirements call out data protection expansions, enhanced monitoring/logging, vulnerability management, security testing and supply chain risk management practices. Deeper employee training and executive awareness campaigns also feature.

Firms handling sensitive IP or high transaction volumes are expected to adopt Level 1 and 2 guidelines covering both foundational and sector-specific heightened risk areas within two years.

Companies should implement these controls, especially if they meet one of the following conditions:

1. Companies handling external confidential information (technical, customer information, etc.) within the supply chain.

2. Companies with significant internal technology/information relevant to the automotive industry.

3. Companies with a reasonable size/share that could have a significant impact on the industry supply chain due to unexpected disruptions.

Level 3 – Advanced Protections (3 Years):

Finally, over 50 sophisticated measures comprise the advanced tier targeting state-of-the-art safeguards. Encryption ubiquity, advanced behavioral monitoring, automated validation testing, penetration assessments and further elevation of risk management programs defined here help drive the industry’s cybermaturity.

These practices showcase leadership, with Level 3 representing an ultimate target for manufacturers expected to benchmark sector-wide security.

Built-in Flexibility Accounts for Organization Size

The tiered model acknowledges the varying cybersecurity investment capabilities across the industry landscape. This allows smaller players an achievable Level 1 entry point before working toward the expanded Level 2 and 3 guidelines on a timeline proportional to organizational size and risk.

Again, in comparison to standards like PCI-DSS that also adopt a similar tiered approach to compliance, this makes sense, given the number of different entities affected by this standard.

Checklist Format Provides Clear Milestones for Growth

To ease adoption, requirements trace to numbered checkpoints within a detailed appendix. This enumerated format lets companies definitively benchmark postures against guidelines and methodically strengthen defenses while tracking progress.

Shared criteria similarly help suppliers demonstrate security improvements to automaker customers through consistent maturity evaluations, facilitating trust in the supply chain.

Guidance Tuned to Automotive Sector Risk Landscape

Along with staging requirements by attainability, guidelines tailor controls and concepts to risks distinct from other industries. While mapping extensively to internationally recognized standards like NIST and ISO27K, authors customized content to the sector’s specialized threats and priorities.

For example, Level 1 mandates continuous monitoring for unauthorized access or malware activity. This acknowledges the havoc potential of a breach within an interconnected web of automakers, parts suppliers and assembly lines. Different secure zones and security focuses blur the lines: if (or when) a breach occurs, whose problem is it, and how do we track it?

The repeated emphasis on supply chain oversight, information exchange standards and third-party security likewise reflects the complex hand-offs and trust relationships fundamental to mobility ecosystem operations.

Build Cyber Resilience Across Fragmented Environments

As vehicles evolve into software-defined platforms, cyber principles growing from these Japanese guidelines can shape sector-wide baseline resilience. Automotive IT interconnectivity will only intensify, making a comprehensive, unified cybersecurity strategy essential. The scenario of the killer SUV may still be well into the future, but everything starts somewhere, and as the world moves more into the electronic and artificial, so too grows our dependence on everyday technology that we take for granted.

Whether global manufacturer or tiny niche parts maker, each player shares responsibility for hardening the greater environment. Just as drivetrains integrate thousands of precision components into harmonized mechanical systems, robust digital defenses emerge from many entities working in sync.

Implementing defined building blocks now allows the industry to preemptively navigate obstacles that could imperil revolutionary mobility pursuits ahead. For those seeking secure footing in the auto sector’s cyber journey, this three-year roadmap paves a straight path forward. This isn’t just for Japanese companies, but for any company whether in Malaysia or other regions that does business with Japanese automakers. This is a clarion call to the industry that cybersecurity should be foremost in the board’s agenda. Contact us at avantedge@pkfmalaysia.com and we will immediately get back to you. With our Japanese auditor and implementation partners, we can assist you in any way you want in navigating this standard.

Unless, of course, you are in your killer SUV. In that case, we can’t navigate that. Good luck!

What the FIM is going on

If you have been doing PCI-DSS for some years, you have probably come across this term called FIM (File Integrity Monitoring), which sometimes absolutely befuddles our customers. They generally think this is part of a wider SIEM or SOAR solution, but that is not necessarily so. We’ll explore a little of why FIM is important, how it impacts PCI-DSS, some examples of configuration and what alternatives are there (if any). Here we go!

File Integrity Monitoring is the process of validating the integrity of operating system and application software files. It ensures that files have not been altered or compromised, whether maliciously or accidentally.

  1. Detecting Unauthorized Changes: FIM helps in detecting unauthorized changes to critical system files, configurations, and content files. These changes could be indicative of a breach, malware infection, or insider threat.
  2. Compliance Requirements: Many regulatory standards, such as PCI-DSS, HIPAA, and SOX, require FIM as part of their compliance criteria. It ensures that sensitive data is protected and that the integrity of the system is maintained.
  3. Preventing Data Breaches: By monitoring file changes, FIM can provide early warning signs of a potential data breach. It allows organizations to take proactive measures to prevent unauthorized access to sensitive information.
  4. Enhancing Forensic Analysis: FIM provides detailed logs of file changes, aiding in forensic analysis. It helps in understanding the nature of an attack, the affected files, and the potential impact.
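
To make the “validating integrity” part above concrete: at its simplest, FIM boils down to hashing watched files and comparing against a known-good baseline. Below is a minimal, hypothetical sketch; the paths are placeholders, and real products add real-time hooks, tamper-protected baselines and alerting on top of this basic idea.

```python
# Minimal FIM sketch: hash watched files and compare against a stored baseline.
# File paths and baseline location are placeholders for illustration only.
import hashlib
import json
import os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # example critical files
BASELINE_FILE = "fim_baseline.json"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline() -> None:
    baseline = {p: sha256_of(p) for p in WATCHED if os.path.exists(p)}
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def check() -> list:
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    alerts = []
    for path, known_hash in baseline.items():
        if not os.path.exists(path):
            alerts.append("MISSING: " + path)
        elif sha256_of(path) != known_hash:
            alerts.append("CHANGED: " + path)
    return alerts

if __name__ == "__main__":
    if not os.path.exists(BASELINE_FILE):
        build_baseline()
    for alert in check():
        print(alert)
```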

Let’s pause for now and see if a common antivirus/antimalware can take over this compliance requirement without deploying a specific FIM. Why? Because all companies generally have some sort of antivirus running on their systems and all companies are stingy in their compliance spending, so part of our job is to see if current technology can be sufficient to address compliance requirements. The difference between antivirus and FIM boils down to the reason for their existence, their meaning to life, the universe and everything. It’s 42!

While FIM focuses on monitoring the integrity of files, antivirus and antimalware solutions are designed to detect and remove malicious software.

  • Antivirus: Primarily targets known viruses and relies on signature-based detection. It may not detect unauthorized changes to files unless they are associated with a known virus signature.
  • Antimalware: Broader in scope, antimalware solutions target various malicious software, including viruses, spyware, and ransomware. Like antivirus, it may not detect subtle unauthorized file changes.

FIM complements these solutions by providing an additional layer of security, focusing on the integrity of files rather than just malicious content.

FIM also differs from Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) solutions. That being said, it’s common for these systems to be bundled along with FIM solutions, so while a SIEM may have FIM, it’s not necessarily true that a FIM has SIEM. They are like, maybe, a dysfunctional family who sometimes get together over Chinese New Year reunions.

  • SIEM: SIEM solutions collect and analyze log data from various sources to provide real-time analysis of security alerts. While SIEM can include FIM as a component, it encompasses a broader range of security monitoring functions.
  • SOAR: SOAR solutions focus on automating and orchestrating security operations. They help in coordinating various security tools and processes. Unlike FIM, which is more focused on file integrity, SOAR aims to streamline security operations and response.

FIM makes its appearance in PCI-DSS v4.0 in Requirement 10, specifically 10.2, 10.3, 10.4, 10.5, 10.7, and further on in 11.5, 12.10 and A3.5.1.

In 10.2, PCI basically wants FIM to be part of the logging requirements in terms of what to capture, retention, response and so on. Make sure your FIM is configured to monitor the critical files, and that the FIM logs carry user and process details – who is responsible for the change event – captured in real time. Ensure alerts are generated for change events by privileged accounts, which can be further correlated to create an automated incident. Also, make sure changes to log file security settings or removal of log files trigger real-time alerts, with exhaustive event details. All creation and deletion activities are captured as well, and all event details must be as per 10.2.2 for the FIM log files.

10.3.4 makes specific mention of FIM, but there is some confusion around this requirement: “File integrity monitoring or change-detection mechanisms is used on audit logs to ensure that existing log data cannot be changed without generating alerts.” Obviously, if you try to monitor for changes in a log file and alert every time that file is changed, your SIEM or SOAR will light up like Christmas. Because of the nature of log files, they are supposed to change! So to avoid the noise, ensure log files are monitored for changes in security settings, like permissions or ownership. If a log file is deleted, that is also an anomaly. And for those logs that are archived or digitally signed, if any changes are made to these, then your FIM should be able to detect it.
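
In other words, for live audit logs you care less about the content changing (it is supposed to) and more about the security settings changing or the file disappearing. A rough, hypothetical sketch of that idea (paths are placeholders):

```python
# Sketch: watch audit log *metadata* (permissions, owner, group, existence)
# rather than content, since live logs are expected to change.
# Paths are placeholders for illustration only.
import os
import stat

LOG_FILES = ["/var/log/audit/audit.log", "/var/log/secure"]

def snapshot(path: str) -> dict:
    st = os.stat(path)
    return {"mode": stat.filemode(st.st_mode), "uid": st.st_uid, "gid": st.st_gid}

def compare(baseline: dict) -> list:
    alerts = []
    for path, known in baseline.items():
        if not os.path.exists(path):
            alerts.append("DELETED: " + path)   # removal is always an anomaly
        elif snapshot(path) != known:
            alerts.append("SECURITY SETTINGS CHANGED: " + path)
    return alerts

if __name__ == "__main__":
    baseline = {p: snapshot(p) for p in LOG_FILES if os.path.exists(p)}
    # ... some time later, re-check against the baseline ...
    for alert in compare(baseline):
        print(alert)
```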

Requirement 11 doesn’t change much for v4.0 — the main portion for FIM is in 11.5.2 and it remains pretty much the same. Requirement 12.10.5 does provide an explicit requirement to include FIM alerts in incident management and response. But you know that already, right?

There are plenty of FIM solutions out there. A common one we see is OSSEC, which used to be deployed together with AlienVault. Tripwire is also a well-known name in the FIM arena. If you want to explore the inbuilt Linux flavour of FIM, auditd might be worth your time. For those unfamiliar with auditd, it’s a component that provides auditing functionality for the Linux kernel. It’s widely used for security monitoring, system troubleshooting, and compliance reporting. Configuring auditd might be intimidating to some at first, but here are some rules to get you started, found at this link:

https://github.com/linux-audit/audit-userspace/blob/master/rules/30-pci-dss-v31.rules

In summary, it covers the following areas (config has been omitted in this article, you can go to the site to get the details)

  1. User Access Linking (10.1): Implicitly met by the audit system.
  2. User Access to Cardholder Data (10.2.1): Requires a watch on the database, excluding daemon access. (Path to the database must be specified.)
  3. Logging Administrative Actions (10.2.2): Enable tty logging for su and sudo. Special cases for systemd-run and pkexec are included.
  4. Monitoring Privilege Escalation Configuration (10.2.2): Watches changes to /etc/sudoers and /etc/sudoers.d/.
  5. Access to Audit Trails (10.2.3): Monitors access to /var/log/audit/ and specific audit commands.
  6. Invalid Logical Access Attempts (10.2.4): Naturally met by PAM.
  7. Logging of Identification & Authentication (I&A) Mechanisms (10.2.5.a): Handled by PAM.
  8. Logging of Privilege Elevation (10.2.5.b): Monitors specific syscalls related to privilege elevation.
  9. Logging Account Changes (10.2.5.c): Watches changes to account-related files like /etc/group, /etc/passwd, etc.
  10. Time Data Protection (10.4.2b): Places rules to check time synchronization.
  11. Securing Audit Trails (10.5): Includes various measures to protect audit logs, limit viewing, prevent unauthorized modifications, back up files, and monitor log modifications.

So, there you go. Lastly, since PCI v4.0 came out, the council seems to have made a distinction between change-detection mechanisms (CDM) and file integrity monitoring, stating that FIM is part of CDM, sort of like a subset. I suppose this gives a little more leeway for companies to implement other types of CDM besides FIM, although FIM is probably the only one that can address all the above requirements comprehensively and without any need for compensating controls. But just for some ideas, below is a list of other CDMs that can possibly address the FIM functionalities in part, automated or manual:

  1. Version Control Systems: These systems track changes to files and code within a development environment. They allow developers to see what was changed, who changed it, and why. Tools like Git, Subversion, and Mercurial are examples of version control systems that provide change detection.
  2. Database Monitoring Tools: These tools monitor changes to database schemas, configurations, and content. They can alert administrators to unauthorized alterations, additions, or deletions within the database. Tools like Redgate SQL Monitor or Oracle Audit Vault are examples.
  3. Configuration Management Tools: Configuration management tools like Ansible, Puppet, and Chef can detect changes in system configurations. They ensure that systems are consistently configured according to predefined policies and can alert administrators to unauthorized changes.
  4. Network Anomaly Detection Systems: These systems monitor network behavior and alert to changes that may indicate a security threat. They can detect changes in traffic patterns, unusual login attempts, or alterations to network configurations.
  5. Endpoint Detection and Response (EDR) Solutions: EDR solutions monitor endpoints for signs of malicious activities and changes. They can detect changes in system behavior, file activities, and registry settings, providing a broader view of potential security incidents.
  6. Log Monitoring and Analysis Tools: Tools like Splunk or LogRhythm analyze log files from various sources to detect changes in system behavior, user activities, or security settings. They can provide real-time alerts for suspicious changes.
  7. Digital Signature Verification: Some systems use digital signatures to verify the integrity of files and data. Any alteration to the digitally signed content would cause a verification failure, alerting to a potential unauthorized change.
  8. Cloud Security Tools: With the rise of cloud computing, tools like AWS Config or Azure Security Center provide change detection for cloud resources. They monitor configurations, permissions, and activities within the cloud environment.

Again, we would highly recommend that a FIM be used, but where it is not possible in a given environment, for instance a cloud environment, then other CDMs are possible. If you need to know more about FIM and PCI or any compliance in general, drop us a note at pcidss@pkfmalaysia.com and we will get back to you immediately!

An Ode to the Invalid Certificate

Once upon a time, in a not-so-faraway land of PeaCeEye, merchants, credit card transactions, online payments, payment gateways and POS terminals all lived in harmony. In this land, all citizens carry a trust symbol, held together by validation documents, called the Citizen Badge. However, PeaCeEye is now facing an existential threat. A threat shrouded in the cloak of validation, a false symbol of security and trust – called the Certificate. But, dear reader, beware! For this is a tale of caution and deception, and the Certificate, much like the elusive unicorn, while tangible, carries a false value – nothing more than a fabrication. A figment of imagination, conjured up by the minds of its idle creators, the Qessays.

You see, in the kingdom of PeaCeEye, there exists a council – a council of wise men and women who determine the rules and regulations that govern this realm. This council, known as the Secret Sorcerer Council (SSC), has decreed that only three sacred documents hold the key to validation for the Citizen Badge – the Attestation of Compliance (AoC), the Report on Compliance (RoC), and the Self-Assessment Questionnaires (SAQs). Yet, despite the council’s resolute stance on this matter, a mysterious fourth document continues to emerge from the shadows – the Certificate.

Ah, the Certificate, a work of art crafted by the Qessays. You see, these Qessays were charged by the council to uphold what is truthful and right, and to ensure that all Citizens of PeaCeEye are identifiable by their Citizen Badges – the AoC, RoC and/or the SAQs. However, over the years, some of these noble Qessays have turned to the dark side and the sinister art of producing corrupted documentation, called the 4th deception, or the Certificate as it is now known. These dark Qessays have mastered the art of illusion, conjuring certificates out of thin air to dazzle their customers. They’ve become modern-day alchemists, turning mere paper and ink into a symbol of validation, which, in reality, is as weightless as a feather and as useful as a chocolate teapot. Or a fork and spoon when eating chapati. It’s a thing of beauty, destined to hang on the walls of businesses, gracing them with its shimmering falsehoods.

But why do these Qessays continue to spin their webs of deception, offering their customers a document that has no merit in the eyes of the SSC? Something that even invalid citizens of PeaCeEye can procure? To unravel this mystery, we must dive into the murky depths of human nature. For, you see, people are drawn to shiny, pretty things, much like moths to a flame. A certificate, with its elegant calligraphy and embossed seal, is a testament to the allure of appearance over substance. It is a tangible representation of validation, regardless of its actual worth.

Moreover, the Certificate serves as a placebo, a sugar pill of sorts, which instills in businesses a false sense of security. It is a talisman that they cling to, convincing themselves that they are protected from the malicious forces of the World beyond PeaCeEye – the World called Cyberattacks. And, in the process, they become blind to the fact that the true power of validation lies in the sacred trio of documents – the AoC, RoC, and SAQs.

Now, one might argue that those who peddle these invalid certificates are merely fulfilling a demand. After all, the customer is always right, and if they desire a shiny piece of paper to adorn their walls, who are we to deny them? But, as the saying goes, “With great power comes great responsibility.” And these Qessays, as the gatekeepers of the citizenship of PeaCeEye, must hold themselves to a higher standard.

By offering these overvalued and useless certificates – which even the SSC has admonished, announcing to the citizens to place no value in them – the Qessays not only betray the trust of customers but also undermine the very foundation of the Citizen Badge. They turn the realm of PeaCeEye into a farce, a stage where pretenders masquerade as protectors, and businesses are lulled into a false sense of security. There are even Qessays who are not involved at all in validating the SAQ being answered; they lure their customers to portals with questionnaires answered by the citizens themselves and then conjure certificates that look as if they have been validated by the Qessays, but which are instead just self-aggrandizing papers, self-validated by the person answering their own questions! In other words, the person becomes their own judge and jury and is able to produce a Certificate that looks as if it has been properly validated by a third-party Qessay. Amazing art! An ostentatious object of grandeur and magnificence, yet with all the actual value of a discarded banana peel withering in the Sahara sun.

But, dear reader, do not despair, for there is hope. You see, the truth has a funny way of revealing itself, much like the sun breaking through the clouds after a storm. And, as the truth about the invalidity of these Certificates spreads, businesses will begin to see through the veil of deception, and the demand for these counterfeit documents will wane. Qessays who persist in peddling these worthless certificates will find themselves exposed, their credibility crumbling like a house of cards.

In the meantime, we must not sit idly by, complacent in the face of falsehoods. Instead, we must raise our voices and spread the word, educating businesses on the true path to Citizen validation. We must sing the praises of the AoC, RoC, and SAQs, enlightening those who have been led astray by the allure of the invalid certificate. For it is only through knowledge that we can pierce the veil of deception and lay the mythical beast of the Certificate to rest.

So, let us embark on this crusade together, wielding the sword of truth and the shield of knowledge. As we march forward on this noble journey, let us remember the wise words of the SSC: “Trust, but verify.” Let us tear down the great wall of this Certificate, brick by brick, and replace it with a fortress built on the solid foundation of the council’s sacred trio of documents. And as we watch the last remnants of the Certificate crumble to dust, we will know that we have triumphed over the forces of deception.

We bid farewell to this Certificate, and welcome a new era of transparency, security, and trust. An era where the mythical beast of the Certificate is relegated to the annals of history, and where the true power of validation is embraced, in all its glorious, council-approved forms. May the sacred trio of documents – the AoC, RoC, and SAQs – guide us on our path to a brighter, more secure future, and may the Certificate forever remain a cautionary tale of the perils of deception and the triumph of truth.*

* The above is obviously written in satire and tongue-in-cheek, with absolutely no journalistic value, and is based solely on our absolute frustration at the continuous dependence on, and insistence from, acquirers or banks that our customers produce these ‘certificates’. In addition, some clients even go through self-service portals provided by QSAs and answer SAQ questions on their own; at the end of this process of self-answering, a certificate is produced. Granted, the certificates do come with disclaimers in small print stating that the certificate is actually based on self-assessment and even admitting that it isn’t recognised by the council.

But in reality, who actually reads the fine print?

In the end, anyone who has gone through these ‘compliance’ portals, answering affirmatively to everything, would be able to procure these certificates and, remarkably, some acquirers even accept them as proof of third-party audit (which they clearly are NOT). Again, we are not stating that QSAs providing this service are doing anything wrong. There is nothing essentially wrong with certificates on their own, or with QSAs providing these certificates as a simple means to show a company has undergone PCI-DSS compliance. But it becomes a gray area when too much dependence is placed on these certificates, to the point where even the AoC is rejected and acquirers insist on every company showing them these certificates. In this case, unless the QSA is willing to go through each question with each customer and validate it through evidence submission and review (the process called an audit), QSAs should stop issuing these so-called certificates to companies that have not undergone any assessment and have only answered the SAQ questions based on their own knowledge or whim. It’s akin to a banking website issuing a self-signed SSL cert for its own website and telling everyone to trust it. Does this happen in the world of e-commerce? No, it’s absurd. Then why is it different in the world of compliance? Why is this practice still allowed to prosper? How do we stop this practice?

We have been advocating removing certificates for years now from the PCI-DSS landscape and to have a more consistent and acceptable way to show PCI validation. Unfortunately, unlike the satirical tale above, this still eludes us. Drop us an email at pcidss@pkfmalaysia.com if you have any ideas and comments to this!
