
An Ode to the Invalid Certificate

Once upon a time, in a not-so-faraway land of PeaCeEye, merchants, credit card transactions, online payments, payment gateways and POS terminals all lived in harmony. In this land, all citizens carried a trust symbol, held together by validation documents, called the Citizen Badge. However, PeaCeEye is now facing an existential threat. A threat shrouded in the cloak of validation, a false symbol of security and trust – called the Certificate. But, dear reader, beware! For this is a tale of caution and deception, and the Certificate, much like the elusive unicorn, while tangible, carries a false value – nothing more than a fabrication. A figment of imagination, conjured up by the minds of its idle creators, the Qessays.

You see, in the kingdom of PeaCeEye, there exists a council – a council of wise men and women who determine the rules and regulations that govern this realm. This council, known as the Secret Sorcerer Council (SSC), has decreed that only three sacred documents hold the key to validation for the Citizen Badge – the Attestation of Compliance (AoC), the Report on Compliance (RoC), and the Self-Assessment Questionnaires (SAQs). Yet, despite the council's resolute stance on this matter, a mysterious fourth document continues to emerge from the shadows – the Certificate.

Ah, the Certificate, a work of art crafted by the Qessays. You see, these Qessays were charged by the council to uphold what is truthful and right, and to ensure that all Citizens of PeaCeEye are identifiable by their Citizen Badges – the AoC, RoC and/or the SAQs. However, over the years, some of these noble Qessays have turned to the dark side and the sinister art of producing corrupted documentation, called the 4th deception, or the Certificate as it is now known. These dark Qessays have mastered the art of illusion, conjuring certificates out of thin air to dazzle their customers. They've become modern-day alchemists, turning mere paper and ink into a symbol of validation, which, in reality, is as weightless as a feather and as useful as a chocolate teapot. Or a fork and spoon when eating Chapati. It's a thing of beauty, destined to hang on the walls of businesses, gracing them with its shimmering falsehoods.

But why do these Qessays continue to spin their webs of deception, offering their customers a document that has no merit in the eyes of the SSC? Something that even invalid citizens of PeaCeEye can procure? To unravel this mystery, we must dive into the murky depths of human nature. For, you see, people are drawn to shiny, pretty things, much like moths to a flame. A certificate, with its elegant calligraphy and embossed seal, is a testament to the allure of appearance over substance. It is a tangible representation of validation, regardless of its actual worth.

Moreover, the Certificate serves as a placebo, a sugar pill of sorts, which instills in businesses a false sense of security. It is a talisman that they cling to, convincing themselves that they are protected from the malicious forces of the World beyond PeaCeEye – the World called Cyberattacks. And, in the process, they become blind to the fact that the true power of validation lies in the sacred trio of documents – the AoC, RoC, and SAQs.

Now, one might argue that those who peddle these invalid certificates are merely fulfilling a demand. After all, the customer is always right, and if they desire a shiny piece of paper to adorn their walls, who are we to deny them? But, as the saying goes, “With great power comes great responsibility.” And these Qessays, as the gatekeepers of the citizenship of PeaCeEye, must hold themselves to a higher standard.

By offering these overvalued and useless certificates – which even the SSC has admonished, announcing to the citizens that no value should be placed on them – the Qessays not only betray the trust of customers but also undermine the very foundation of the Citizen Badge. They turn the realm of PeaCeEye into a farce, a stage where pretenders masquerade as protectors, and businesses are lulled into a false sense of security. There are even Qessays who are not involved in validating the SAQ at all: they lure their customers to portals with questionnaires answered by the citizens themselves, and then conjure certificates that look as if they have been validated by the Qessays, but are instead just self-aggrandising papers validated only by the person answering their own questions! In other words, the person becomes their own judge and jury, and is able to produce a Certificate that looks as if it has been properly validated by a third-party Qessay. Amazing art! An ostentatious object of grandeur and magnificence, yet with all the actual value of a discarded banana peel withering in the Sahara sun.

But, dear reader, do not despair, for there is hope. You see, the truth has a funny way of revealing itself, much like the sun breaking through the clouds after a storm. And, as the truth about the invalidity of these Certificates spreads, businesses will begin to see through the veil of deception, and the demand for these counterfeit documents will wane. Qessays who persist in peddling these worthless certificates will find themselves exposed, their credibility crumbling like a house of cards.

In the meantime, we must not sit idly by, complacent in the face of falsehoods. Instead, we must raise our voices and spread the word, educating businesses on the true path to Citizen validation. We must sing the praises of the AoC, RoC, and SAQs, enlightening those who have been led astray by the allure of the invalid certificate. For it is only through knowledge that we can pierce the veil of deception and lay the mythical beast of the Certificate to rest.

So, let us embark on this crusade together, wielding the sword of truth and the shield of knowledge. As we march forward on this noble journey, let us remember the wise words of the SSC: “Trust, but verify.” Let us tear down the great wall of this Certificate, brick by brick, and replace it with a fortress built on the solid foundation of the council’s sacred trio of documents. And as we watch the last remnants of the Certificate crumble to dust, we will know that we have triumphed over the forces of deception.

We bid farewell to this Certificate, and welcome a new era of transparency, security, and trust. An era where the mythical beast of the Certificate is relegated to the annals of history, and where the true power of validation is embraced, in all its glorious, council-approved forms. May the sacred trio of documents – the AoC, RoC, and SAQs – guide us on our path to a brighter, more secure future, and may the Certificate forever remain a cautionary tale of the perils of deception and the triumph of truth.*

* The above is written, obviously, as satire and tongue-in-cheek, with absolutely no journalistic value, not based on any real-world events, and born solely of our absolute frustration at the continuous dependence and insistence from acquirers or banks that our customers produce them 'certificates'. In addition, some clients even go through self-service portals provided by QSAs and answer SAQ questions on their own; at the end of this process of self-answering, a certificate is produced. Granted, the certificates do come with disclaimers in small print stating that the certificate is actually based on self-assessment, and even admitting that it isn't recognised by the council.

But in reality, who actually reads the fine print?

In the end, anyone having gone through these 'compliance' portals and answered affirmative to everything would be able to procure these certificates, and remarkably, some acquirers even accept them as proof of third-party audit (which they clearly are NOT). Again, we are not stating that QSAs providing this service are doing anything wrong. There is nothing essentially wrong with certificates on their own, or with QSAs providing these certificates as a simple means to show a company has undergone PCI-DSS compliance.

But it becomes a gray area when too much dependence is placed on these certificates, to the point where even the AoC is rejected and acquirers insist on every company showing them these certificates. In this case, where QSAs are willing to provide so-called certificates to companies that have not undergone any assessment and have only answered the SAQ questions based on their own knowledge or whim – unless the QSA is willing to go through each question for each customer and validate the answers through evidence submission and review (the process called an audit) – then this creation of self-signed certificates should be stopped. It's akin to a banking website issuing a self-signed SSL cert for its own website and telling everyone to trust it. Does this happen in the world of e-commerce? No, it's absurd. Then why is it different in the world of compliance? Why is this practice still allowed to prosper? How do we stop this practice?
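To labour the self-signed analogy a little: minting a certificate that nobody independent has verified takes about one command. A minimal sketch, assuming OpenSSL is installed (the domain name is a made-up example):

  openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=totally-legit-bank.example" \
    -keyout key.pem -out cert.pem

Any browser will rightly refuse to trust cert.pem, because no third party has vouched for it – which is precisely the problem with a compliance 'certificate' generated from a self-answered questionnaire.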

We have been advocating for years now the removal of certificates from the PCI-DSS landscape, in favour of a more consistent and acceptable way to show PCI validation. Unfortunately, unlike the satirical tale above, this still eludes us. Drop us an email at pcidss@pkfmalaysia.com if you have any ideas or comments on this!

PCI-DSS v4.0 vs v3.2.1 Deepdive Part 1

OK, now that we are well into 2023, the main question here is: why aren't this year's assessments going into v4.0? Most of our customers are still doing their v3.2.1 for 2023, before doing v4.0 in the next cycle. The answer is: well, you can go for v4.0 if you want to. There's really not much difference for now. The difference is probably more on the auditor side, as reporting requirements are different in v4.0. But from the client end, some of the scary changes like authenticated scans for internal vulnerability scanning, or updating of password complexity to 12 characters, etc. – these don't actually come into force until 31 March 2025. So there's a grace period to move from v3.2.1 to v4.0, and another grace period for the new v4.0 controls to be implemented, up to March 2025. Basically, anything past March 2025, the controls in v4.0 become standard. No more compromise. It's like the biblical Ten Commandments, except you have around 300+ commandments here. That's a lot of chiseling on the rock by Moses.

Before we deep dive into v4.0, let's set out the landscape a bit again, like unfurling a carpet or a mat before we tuck into our metaphorical compliance picnic.

1. Scope and Applicability

One of the key changes in PCI DSS v4.0 is the clarification of the scope of the standard. The new version provides more explicit guidance on how to apply the standard to different types of organizations, and it emphasizes the need for organizations to understand the scope of their cardholder data environment (CDE). This comes as a fairly significant change, as the initial pages of v4.0 are strewn with explanations of scoping and methodologies on how to define scope. It reads almost like they are trying to make up for lost time and cover all their bases, whereas in the previous version only a cursory glance was given to scoping. PCI DSS v4.0 also provides guidance on how to identify and manage different types of risks. Risk has always been a difficult item to quantify in PCI, because at the end of the day, PCI is itself the result of a risk assessment done by the card schemes – the PCI program was born specifically to mitigate the risks they identified. So what's the point of running a risk assessment in PCI-DSS if it's already a standard? Well, PCI DSS v4.0 states that organizations should have a risk management program in place to identify and prioritize risks, and to take appropriate measures to mitigate those risks. It's a way of saying that while the controls are required, how you address the controls depends on your risk assessment. Additionally, you can even opt to go above and beyond the PCI standard to address a particularly high-risk area (although finding a company doing this is like finding the Lost Ark). Beyond the brownie points you would get from the QSA by showing you are a company keyed into its risk assessment practices, a risk assessment will likely help you identify other areas of concern as well. The standard also requires organizations to have a process in place for identifying changes to their CDE, and for reviewing and updating their risk management program as needed. So, to the point on whether the risk assessment is useful – yes. Whether it is critical to passing your PCI-DSS – well, I would say that depends a lot on your QSA. We've seen QSAs pass off a bunch of colour-coded Excel sheets as a PCI risk assessment easily.
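To make the prioritisation point a bit more concrete, here is a minimal sketch of the likelihood-times-impact arithmetic most risk registers boil down to. The threats, scales and numbers are purely illustrative assumptions of ours, not anything prescribed by the standard:

  # Minimal illustrative risk register: score = likelihood x impact, both on a 1-5 scale.
  # Threats and scores are made-up examples, not PCI DSS requirements.
  risks = [
      {"threat": "Stolen admin credentials", "likelihood": 4, "impact": 5},
      {"threat": "Unpatched internet-facing server", "likelihood": 3, "impact": 4},
      {"threat": "Lost office access badge", "likelihood": 2, "impact": 1},
  ]

  for risk in risks:
      risk["score"] = risk["likelihood"] * risk["impact"]

  # Highest scores first, so treatment (extra or stronger controls) is prioritised accordingly.
  for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
      print(f'{risk["score"]:>2}  {risk["threat"]}')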

2. New Control Objectives

PCI DSS v4.0 introduces several new control objectives to address emerging security risks. One of the key new objectives is to address the risks associated with cloud computing. The new version of the standard includes new requirements for securing cloud environments, including the need to assess the security of cloud service providers and to implement additional controls to secure cloud-based data. In v4.0, the word ‘Cloud’ appears 42 times in the entire standard. In v3.2.1, the word ‘Cloud’ appears as often as ‘NasiLemak’. Which is zero.

3. Password Requirements

PCI DSS v4.0 introduces new requirements for password management. We are in 2023 and we are still trying to remember all our passwords. PCI is now making our lives easier by introducing longer passwords! Great, now everyone just adds incremental numbers behind their password, from seven characters to twelve. The standard requires the use of multi-factor authentication for all non-console administrative access; this was already evident in the previous version. This basically means that organizations must implement additional security measures, such as biometric authentication or smart card authentication, in addition to a password, to access sensitive systems and data.
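For a flavour of what the longer minimum looks like in practice, here is a minimal sketch of a password policy on a Linux host, assuming pam_pwquality is in use; the values are illustrative only, and your platform, directory service or policy tooling will differ:

  # /etc/security/pwquality.conf (illustrative values)
  minlen = 12     # minimum password length, in line with the v4.0 minimum
  minclass = 3    # characters from at least 3 classes (upper, lower, digit, symbol)
  retry = 3       # number of prompts before the password change attempt fails

The equivalent on Windows would be a domain password policy GPO; the point is simply that the 12-character floor becomes something you enforce centrally, not a memo you circulate.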

4. Encryption

The new standard maintains that organizations use robust encryption algorithms and key lengths, as per 3.2.1. Key management more or less remains as it is, but the biggest issue in v4.0 is the doing away with full-disk or transparent encryption as a standalone means of protecting stored data. We will do a deep dive into this later.

5. Penetration Testing and Vulnerability Management

PCI DSS v4.0 includes new requirements for penetration testing and vulnerability management. Among them is the requirement for internal vulnerability scans to be authenticated, whereas previously this was a bit of a gray area (actually, not required). This could have a real impact, especially for entities chasing a quarterly deadline with a lot of systems in their scanning scope. So this makes the scoping exercise a lot more critical, because you can be sure the effort for internal scans is going to go way up.

6. Remote Access

PCI DSS v4.0 includes new requirements for securing remote access to cardholder data environments. PCI requires organizations to implement multi-factor authentication for all remote access, and to use secure protocols, such as SSH or VPN, to access sensitive systems and data. While this remains, the other issue with v4.0 is the need to implement controls to prevent the copying or relocation of PAN by all personnel unless there is a business need. We have a bad feeling about this. This could generally mean getting a DLP or a NAC in place to limit what users logging in remotely can or cannot do. There are solutions for this, but they need to be planned and invested in. The key word here is 'prevent', not just 'detect', so this basically means a proactive control in place to block these actions; a rough idea of what such a control has to hunt for is sketched below.
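As a rough illustration of what a DLP rule has to look for before it can block anything, here is a minimal sketch – ours, not any vendor's engine – that flags candidate PANs in text using a digit pattern plus a Luhn check:

  import re

  # 13-19 digit runs, optionally separated by single spaces or dashes (illustrative pattern).
  CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

  def luhn_ok(number: str) -> bool:
      """Return True if the digit string passes the Luhn checksum."""
      digits = [int(d) for d in number][::-1]
      total = 0
      for i, d in enumerate(digits):
          if i % 2 == 1:
              d *= 2
              if d > 9:
                  d -= 9
          total += d
      return total % 10 == 0

  def find_pans(text: str):
      """Yield digit strings that look like PANs and pass the Luhn check."""
      for match in CANDIDATE.finditer(text):
          digits = re.sub(r"[ -]", "", match.group())
          if luhn_ok(digits):
              yield digits

  sample = "invoice ref 4111 1111 1111 1111 paid, order 123456 pending"
  print(list(find_pans(sample)))  # finds the well-known test PAN, ignores the order number

A real DLP or NAC product does far more (file parsing, OCR, endpoint hooks, actual blocking), but the pattern-plus-checksum idea above is the core of how PAN gets spotted in the first place.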

So in the next couple of articles, we will dive right into the changes for v4.0 in detail, including those requirements where it is stated “This requirement is a best practice until 31 March 2025, after which it will be required and must be fully considered during a PCI DSS assessment.”

We will also look into the SAQs and what has changed in them for those preparing to do a self-assessment in accordance with v4.0.

In the meantime, for any PCI related queries or any standards like CSA, ISO27001 etc, drop us a note at pcidss@pkfmalaysia.com and we will get back to you!

Technical Session: Clearing NTFS Dirty Bit

Every once in a while, we take a break from boring compliance articles and write about what's more interesting – fixing broken stuff or troubleshooting problems that have nothing to do with human beings. It's far easier dealing with machines.

So, what happened was, we had a USB drive plugged into one of our servers and were doing some file transfers. The server wasn't hooked up to our UPS, as this was a test system – OK, it was actually sitting under my desk, and every time I turned it on, everyone in the office thought a helicopter was outside the window. It's old and loud and totally unsuitable to be located outside of a server room. Ah well.

In any case, halfway through the transfer, the power tripped. The server was ok upon restart but not the USB external drive.

It demonstrated a few symptoms:

a) When plugged in, the drive does whir up and Explorer recognises it. The problem was it was listed as 'Local Drive' and nothing else, no other information. When clicked, it just freezes everything up. Right-click does eventually bring up the context menu, but when 'Properties' is selected, it hangs and never proceeds. So trying to scan the drive for errors from the GUI is a no-go.

b) Command line wise – when accessing G:, again it just hangs. Chkdsk /f also hangs from the command line, so trying to scan from the command line = no-go.

c) Going into the Disk Management GUI, it takes a long time before it eventually pops up, and the good news was that Disk Management actually saw the drive. However, when right-clicking on it and trying to reassign the drive letter (as suggested by some other articles as a way to recover), we get this annoying message:

The operation failed to complete because the Disk Management console view is not up-to-date.  Refresh the view by using the refresh task.  If the problem persists close the Disk Management console, then restart Disk Management or restart the computer

Microsoft being cryptic and mysterious

So, like lemmings, we proceed to refresh the console with F5, and it just hangs indefinitely; nothing happens until we unplug the drive. Then a string of errors comes out, like 'Location of drive cannot be found', etc. It seems the auto-opening of the USB drive was triggered, but Windows just couldn't read the drive. So Disk Management is a no-go.

d) We tried installing other software like Acronis or EaseUS, but none of these managed to read the hard drive; they simply hang until we unplug it.

e) Changing laptops/desktops/cables (all running Windows) – all with the same result. The drive was acknowledged, but Explorer and other programs couldn't open anything on it. This is good news, actually; it doesn't seem there was a hardware issue, and there was no dreaded clicking noise indicating the drive was a dead duck.

f) So it does point to a software-layer issue, which should be handled with a scan disk or check disk by Windows. The problem, however, is that the disk couldn't be read, so it couldn't be scanned. Booting into safe mode doesn't help. Reinstalling the USB drivers doesn't help. The drive simply refuses to go to work, like all of us on a Monday morning after being smashed with a hangover from a Sunday night out.

g) Finally, in Event Viewer under Windows Logs -> System, this particular classic comes up: "An error was detected on device \Device\Harddisk2\DR21 during a paging operation." Some suggest going to System Properties -> Advanced -> Performance -> Settings -> Advanced and, under Virtual Memory, unchecking the box to automatically manage the paging file size. But no – Windows doesn't read the drive, so clicking on G: once more hangs the whole system.

At this point we had wasted an hour trying to sort this nonsense out. Nothing in Windows was able to indicate the issue. One suggestion was running fsutil from the command line. This can check for the dirty bit on NTFS, an annoying feature that basically renders the drive useless until the bit is 'cleared'.

The problem with this was – yes, you got it – you couldn't run any command on that drive as it just hangs. Nothing, no program in Windows, was able to do anything for this drive.
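For reference, on a volume Windows can actually talk to, the dirty bit check is a one-liner (assuming G: is the drive in question):

  fsutil dirty query G:

It simply reports whether the volume is dirty or not. There is also fsutil dirty set, but notably no option to clear the bit – more on that gripe below.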

The Dirty Bit

So, some definitions first – in general computing, a dirty (or modified) bit is a flag that switches on when a page in memory has been changed by the hardware. On an NTFS volume, it is a single flag tucked away in the volume's metadata on the drive itself, indicating that the file system may not be in a consistent state.

From the Microsoft definition:

A volume’s dirty bit indicates that the file system may be in an inconsistent state. The dirty bit can be set because:

  • The volume is online and it has outstanding changes.
  • Changes were made to the volume and the computer was shut down before the changes were committed to the disk.
  • Corruption was detected on the volume.

If the dirty bit is set when the computer restarts, chkdsk runs to verify the file system integrity and to attempt to fix any issues with the volume. (In our case, this didn’t happen, obviously).

So we assumed that this was a dirty bit problem (at this point, we were just shooting in the dark due to the lack of diagnostics, logs or events, and working off some black magic of guessing).

From some articles on the net, the options to remove the dirty bit are as follows:

  • You have 3 options to remove dirty bits from your computer. The first option is to trust the Microsoft disk checking utility by finishing a disk check operation. [This didn’t work as Windows wasn’t able to read ANYTHING and we could not run any windows based operations or commands or programs on it.]
  • The second method is that you move the data from the volume and format the drive. After that, move the data back. [This is way too much work. Plus, Windows can’t even access it. So the only option is to do a clone such as through Clonezilla? That’s a lot of work. And a last resort.]
  • The third method to remove the dirty bit is by using a hex editor with disk editing supported. [We didn’t explore this as this seemed a bit extreme, and probably the last time we handled a hex editor was when we had to hack in some computer games like Football Manager to give unlimited funds or a 99 in dribbling skills]

There’s an easier way.

So this is where you just need to give up on Windows and figure out another way to check this disk. If you have a standby Linux box or Mac, that would help. But if not, you could use this great little tool called SystemRescue, which, among other tools, has the delectable ddrescue and ntfs-3g, which will be important.

Boot up into SystemRescue (making a boot disk is very much recommended – just use Rufus or another program to make a USB stick bootable, and download the distribution from https://www.system-rescue.org/). You basically now have a nice little Linux distro running from your USB stick, and you should be able to see the problem drive detected with the command lsusb or lsblk.

Using lsblk with the -o option gives you a view of the device name, size, type, filesystem and a few more details.
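For instance (an illustrative listing, not ours – device names and sizes are made up):

  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
  NAME          SIZE TYPE FSTYPE MOUNTPOINT
  sda         465.8G disk
  └─sda1      465.8G part ntfs
  nvme0n1     238.5G disk
  ├─nvme0n1p1   512M part vfat   /boot/efi
  └─nvme0n1p2   238G part ext4   /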

Just identify which is your USB drive.

Use the nifty ntfsfix (assuming /dev/sda1 is the USB drive partition you want to fix):

ntfsfix -d /dev/sda1

This basically clears the dirty bit, which Windows, for whatever reason, finds so difficult to do and makes us jump through hoops for. In fact, fsutil on Windows only tells you that you have a dirty bit but doesn't clear it. That's like paying a doctor to tell you that you have cancer and then not providing you any healthcare for it. Come on, Microsoft.

So right after clearing the dirty bit, the external drive is once again accessible. There were still some errors on the drive, but we just ran the check for errors option via GUI (since now we are able to access the properties of the drive again by right clicking for the context menu), and fixed up the inaccessible files.

So now you know. The next time you have an outage during a file transfer, it could just be the dirty bit. The problem is the diagnosis (again, Windows could just put an event in the log saying the dirty bit is set, instead of leading us on this paging-file nonsense treasure hunt). And of course, if Windows cannot access the drive, SystemRescue is a great tool to solve the issue.

And finally, according to some, an even easier way is to just plug the drive into a Mac, which apparently resets the dirty bit for some reason. We never tried this, so perhaps others can give it a try first before going the SystemRescue way.

Contact us at avantedge@pkfmalaysia.com for more information on what services we can offer you.

Have a good week dealing with human beings!

PCI-DSS 2022 and Version 4


So we are now in 2022. PCI-DSS v4.0 is due to be out, and one of the things we have been doing for the first two weeks of the year is getting over our holiday hangovers. That's right. In our country (Malaysia), the slowest months are December, January and February. It's like starting a car in the dead of winter. These three months are like the Amen Corner in Augusta for businesses. December hits like a ton of bricks due to the Christmas season; and then, just when things start moving in January, it grinds to a halt for Chinese New Year, where the entire nation just flat out refuses to work. When we are back in the second week of Chinese New Year, we are once more in first gear, climbing up the hill of 2022 again.

So we did things a bit differently. We started the first two weeks with a series of training for clients and potential clients, to go through PCI-DSS v4.0 and create an awareness of what is there to expect.

The transition timeline published on the PCI website immediately shows some interesting things. Number one: PCI-DSS v3.2.1 only retires in 2024. This is interesting, because usually the transition period isn't so long. It's long now because – I don't know, there may be an ongoing pandemic and such. So here we are in Q1 2022, and our customers are asking: when do we transition to v4.0?

Well, the answer would be: as soon as you can. But in theory, you can probably stick to v3.2.1 validation for 2022 and realistically move to v4.0 in 2023. In fact, for some of our clients whose PCI maintenance period follows the calendar year, they can even force 3.2.1 into their 2023 validation year.

As for the actual content of PCI v4, it's still a well-kept secret, like the plot of Spiderman: No Way Home; but we have been reading a bit, joined last year's PCI-DSS community meeting, and learnt some interesting tidbits about it.

No 1: Compensating Controls

The get-out-of-jail-free card. Customers have been dangling this Compensating Controls card in front of our faces since Mesopotamian times. When they can't address a control – use compensating controls! When they cannot implement something due to budget – compensating controls! When they can't make changes to an application because it was designed by a group of kindergarten kids and it would break the moment you touch it – compensating controls! When you don't know what to say to your wife after a long night out at the pub with the mates and you come back smelling like a keg of kerosene – compensating controls!

The problem with compensating controls is that they are a pain in the neck to implement, to document and to justify. The compensating control worksheet, the justification documentation, the implementation of the control itself to be 'above and beyond' the scope of PCI-DSS, etc. Everyone thinks this is a silver bullet, only to find it's the deepest rabbit hole you can ever fall into.

So, PCI v4 does away with compensating controls. Great.

And they introduce Customized Implementation.

A lot of people are saying this is a game changer.

Honestly? Until more information comes out, we only have this to go with:


Customized implementation considers the intent of the objective and allows entities to design their own security controls to meet it. Once an organization determines the security control for a given objective, it must provide full documentation to enable their Qualified Security Auditor (QSA) to make a final decision on the effectiveness of a control.

Cryptic PCI v4 DOCUMENTATION

Design their own security controls? Well, OK, isn't this the same as compensating controls? I am thinking this just expands the interpretation to something a bit broader, in which case the control may not even be a technical one. Previously we might say: we can't meet certain password controls due to a legacy application issue, and the compensating controls are excessive logging and monitoring, isolation of the network, whitelisting of IPs and access, using WAF, DLP, virtual patching, etc. Are we now saying that, instead of all these technical 'compensating' controls, we have a customized security approach – which includes isolation of the network, whitelisting of IPs and access, using WAF, DLP, virtual patching, etc.?

Until we see some examples of this, it may well be that most companies will go along with the 'normal' approach, or adopt a wait-and-see stance and eke out the last remaining drop of v3.2.1 into 2024.

No 2: UP in the Clouds

Another item that has been long overdue? Cloud. It's about time this gets addressed – and not just cloud, but how services and containers work as well. We have had auditors coming to our clients insisting they do testing and VA/PT on services from AWS, not recognizing there isn't even an IP address to start with. To be fair to the SSC, they do have a few cloud guidelines in supplementary documentation, which we actually find very useful, especially in our projects certifying cloud technologies. We can see this being incorporated more formally into v4.0, where the requirements are designed around cloud environments more organically than what we see right now (which is sort of force-fitting many of the traditional concepts like network IDS, patching etc. into the cloud environment).

No 3: Not another MF-A!

I have a bad feeling about this.

MFA has been a constant pain for us. Firstly, where MFA is being implemented – not just on the perimeter but now on every access to the CDE. At least, for now, it's still on admin accounts. We hear they plan to introduce it for ALL users. We also hear the collective screams of the tormented from the nine hells of Dante. Secondly, a lot of customers are still depending on MFA via SMS. If PCI goes along the NIST route, we could see this being deprecated soon. Clarification as well on whether client-side certificates are acceptable as a 'something you have' factor would be most welcome. We see different QSAs interpreting this so differently you'd think we'd asked them to interpret some ancient Thuggee text. Multi-factor challenges have already been with us over the past years, with Bank Negara's RMiT focus on 'strong MFA' for large financial institutions. Clear guidance should also be given on how to evaluate multi-factor that is dependent on a cloud provider, and on whether common implementations like Google Authenticator etc. can still be considered good enough for v4.0.

No 4: Encrypting everything

We also hear now that the “Pocket Protector Trope” security may be implemented. Remember those movies we watch, where the hero gets shot in the chest and you think he dies but he reveals that the bullet is stopped by his pocket watch; his badge; a bible; or some other dang sentimental thing that was given to him like 40 scenes ago?

So in PCI, usually when data traverses the internet or an open network, the standard states the transmission needs to be encrypted. It doesn't technically state anything about encrypting the data package itself while in transmission; data encryption almost exclusively occurs for data at rest. So in this case, they are doubling the protection: they are adding that pocket watch to catch the bullet, so if the transmission gets compromised, the data is still secure. The bullet doesn't hit the hero!
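As a rough sketch of that doubling-up idea – encrypting the payload itself before it ever travels over an (already encrypted) channel – here is a minimal Python example using the cryptography library's Fernet. The key handling is deliberately naive and purely illustrative; in reality the key would sit in a key management system, per your key management policy:

  from cryptography.fernet import Fernet

  # Illustrative only: a real implementation would fetch the key from a KMS/HSM,
  # never generate and hold it next to the data like this.
  key = Fernet.generate_key()
  f = Fernet(key)

  payload = b'{"pan": "4111111111111111", "amount": "10.00"}'

  token = f.encrypt(payload)   # this ciphertext is what travels over the TLS channel
  print(token)

  print(f.decrypt(token))      # only the holder of the key recovers the payload

If the TLS session is somehow compromised, the eavesdropper still only sees the Fernet ciphertext – the pocket watch takes the bullet.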

No 5: Recovery and Continuity

Not so much something that is coming, but more of what we'd like to see. One of the biggest criticisms we hear customers levelling at PCI (other than the cost, and the budget, and the complexity and... OK, everything else) is that PCI has little focus on business continuity and disaster recovery. It's almost as if PCI is standing there saying, "OK, you have an outage for a few days? Great, make sure your credit card information is safe." It's not really business-focused; it's more credit-card-confidentiality focused. What we would like to see is a little more attention in this area. Over the past two years, we have seen customers getting all sorts of attacks from cyberspace. Malware, ransomware, hacking, fraud, defacement – it's like the world goes into a pandemic, everyone's bored to bits at home, and everyone takes up hacking as a part-time gig. Malware, for instance – how prepared is a PCI-compliant company against a ransomware attack? Have they done their backups? Have they tested that their systems can recover?

So, if you have any queries on PCIv4 for us, drop us an email at pcidss@pkfmalaysia.com and we will definitely get back to you. Have a great and safe year ahead for 2022!

PCI-DSS and Card Storage


We had an interesting discussion a few weeks back about storage in PCI-DSS. We disagreed with an acquirer's position on how PCI-DSS views storage, and therefore opened a whole can of… interesting debate.

The problem the acquirer had with our position was simple. We have a client who is currently doing a data migration, importing from another service provider into their document management system. Amongst the terabytes of data were possible scanned copies of credit card information, either in forms or as actual card photocopies. Now, we are talking about terabytes.

Our position was fairly straightforward. Do you need this card data? we asked. No, said our client. We don't need the card data, as we do recon and back-office operations on other forms of identification. Can this information be removed or redacted? Bemused, they said: possibly, but the problem is that there are going to be millions of records to deal with.

Well, is there a way we can sanitize the data before it enters into your environment?

Yes, possibly, we need to ask the acquirer to ask their current provider to do it for us.

The provider you are taking business away from?

Yes.

Good luck…

And sure enough, the acquirer responded and asked us, “Shouldn’t PCI-DSS allow the storage of these card information, and how your client is able to deal with it? Why do you insist on us redacting and removing the card information? What then is the purpose of PCI-DSS??”

Now, on the surface, that argument does make sense. After all, PCI-DSS applies to entities who store, process and transmit credit card information, right? Why then wouldn't we want our client to store credit card information if they are going through PCI-DSS?

Unfortunately, this is a case of getting the solution (PCI-DSS) mixed up with the problem (storing card data). In other words, in a more current analogy, just because I got vaccinated doesn't mean I would purposely go out and try to get infected so that the vaccine has something to do. The purpose of PCI isn't for you to store credit card data. It's for you to manage the storage of credit card data IF you store it. Storing credit card data isn't a PCI-DSS objective; it's an issue that PCI-DSS tries to solve.

So back to this little kerfuffle: if they pass us terabytes of information containing card data, our client will need to figure out a way to protect that data – likely encryption of any files where card data is present, which drags in key management and so on. If the card data can be redacted and removed before it enters our client's environment, then we avoid all of that. We are basically following the concept of PCI-DSS:

Requirement 3 addresses protection of stored cardholder data. Merchants who do not store any cardholder data automatically provide stronger protection by having eliminated a key target for data thieves. Remember if you don’t need it, don’t store it!

PCI-DSS Prioritized approach

If we don't need it, don't store it. In this case, we don't need it, so we are trying to escape storing it. However, if this cannot be done (which it likely won't be), then we just need to put controls in place. We're trying to get our client to do less, and we are also trying to remove card footprints in other areas, thus reducing the risks to the card brands, and likely saving the world from impending disaster and destruction.

However, we do have another issue.

Because there is potentially CVV storage (photocopies of the front and back of cards, scanned into softcopies), we have a bit of a problem. CVV cannot be stored in any format or in any media post-authorisation. So, if this is dumped into our client's environment, it's imperative that someone removes this information. To us, it's a lot easier to remove it at source; but unfortunately that means effort has to be spent on it, which no one is willing to do.

How the CVV got stored in the first place is a question we don't have an answer to. However, we do know that if CVV is present, we cannot just encrypt it and be done with it. We will need to remove this information record by record. There are a few solutions out there that can do automatic redaction across a massive number of files, provided the files follow some sort of standard layout. That could be a solution here (a rough sketch of the idea is below), but again, it's beyond the scope of this article.
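For text-based content, at least, the core of such a redaction tool is not complicated. Here is a minimal, purely illustrative sketch that masks everything but the last four digits of anything that looks like a PAN; scanned images are another matter entirely, since they would need OCR first:

  import re

  # 13-19 digit runs, optionally separated by spaces or dashes (illustrative pattern).
  PAN_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

  def redact(text: str) -> str:
      """Replace candidate PANs with a masked form, keeping only the last 4 digits."""
      def mask(match: re.Match) -> str:
          digits = re.sub(r"[ -]", "", match.group())
          return "*" * (len(digits) - 4) + digits[-4:]
      return PAN_PATTERN.sub(mask, text)

  print(redact("Card 4111 1111 1111 1111 charged, ref 555123"))
  # -> Card ************1111 charged, ref 555123

A production tool would also add a Luhn checksum test to cut down on false positives, and a separate rule for CVV, which must be removed outright rather than masked.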

The point is, having PCI-DSS doesn't automatically mean we MUST store card data. It simply means that IF we store card data, we apply PCI-DSS controls to that storage.

Let us know if you need more information about PCI-DSS or any IT standard compliance like ISO27001 or CSA/SOC, we are ready to assist, just contact us here. Stay safe everyone!
