
Technical Session: Clearing NTFS Dirty Bit

Every once in a while, we take a break from boring compliance articles and write about what's more interesting – fixing broken stuff and troubleshooting problems that have nothing to do with human beings. It's far easier dealing with machines.

So, what happened was, we had a USB drive plugged into one of our servers and were doing some file transfers. The server wasn't hooked up to our UPS, as this was a test system – ok, it was actually sitting under my desk, and every time I turned it on, everyone in the office thought a helicopter was outside the window. It's old and loud and totally unsuitable to be located outside a server room. Ah well.

In any case, halfway through the transfer, the power tripped. The server was ok upon restart but not the USB external drive.

It demonstrated a few symptoms:

a) When plugged in, the drive does whir up and Explorer recognises it. The problem was that it was listed as 'Local Drive' and nothing else – no other information. When clicked, it just freezes everything up. Right-clicking does eventually bring up the context menu, but when 'Properties' is selected, it hangs and never proceeds. So scanning the drive for errors from the GUI is a no go.

b) Command line wise – when accessing G:, again it just hangs. Chkdsk /f also hangs from the command line, so scanning from the command line = no go (the exact commands we tried are shown after this list).

c) Going into the Disk Management GUI, it takes a long time before the console eventually pops up, and the good news was that Disk Management actually saw the drive. However, on right-clicking it and trying to reassign the drive letter (as suggested by some other articles as a recovery step), we get this annoying message:

The operation failed to complete because the Disk Management console view is not up-to-date.  Refresh the view by using the refresh task.  If the problem persists close the Disk Management console, then restart Disk Management or restart the computer

Microsoft being cryptic and mysterious

So like lemmings, we proceeded to refresh the console with F5, and it just hung indefinitely; nothing happened until we unplugged the drive. Then a string of errors came out, like 'Location of drive cannot be found' etc. It seems the auto-opening of the USB drive was triggered, but Windows just couldn't read the drive. So Disk Management is a no-go.

d) We tried installing other software like Acronis or EaseUS, but none of these managed to read the hard drive either – they simply hung until we unplugged it.

e) Changing laptops/desktops/cables (all running Windows) – all gave the same result. The drive was acknowledged, but Explorer and other programs couldn't open anything on it. This is good news actually: it didn't look like a hardware issue, and there was no dreaded clicking noise indicating the drive was a dead duck.

f) So it does point to a software layer issue, which should be handled by a scan disk or check disk from Windows. The problem, however, is that the disk couldn't be read, so it couldn't be scanned. Booting into safe mode didn't help. Reinstalling the USB drivers didn't help. The drive simply refused to go to work, like all of us on a Monday morning after being smashed with a hangover from a Sunday night out.

g) Finally, in Event Viewer under Windows Logs -> System, this particular classic comes up: "An error was detected on device \Device\Harddisk2\DR21 during a paging operation." This sends you down the virtual memory rabbit hole: under System Properties -> Advanced -> Performance -> Settings -> Advanced -> Virtual memory, you could uncheck the box to automatically manage the paging file size. But no – Windows doesn't read the drive, so clicking on G: once more hangs the whole system.
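For the record, these were the sort of commands that just sat there forever (assuming the drive is mounted as G:, as ours was):

G:
chkdsk G: /f

And if you prefer hunting through logs from the command line, the standard wevtutil query dumps the most recent System events as text – the paging error above shows up there too:

wevtutil qe System /c:20 /rd:true /f:text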

At this point we had wasted an hour trying to sort this nonsense out. Nothing in Windows was able to indicate the issue. One article suggested running fsutil from the command line. This can check for the dirty bit on NTFS, an annoying feature that basically renders the drive useless until the bit is 'cleared'.
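For reference, the check itself is normally a one-liner, answering with something like 'Volume - G: is Dirty':

fsutil dirty query G: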

The problem with this was – yes, you got it – you couldn't run any command on that drive, as it just hangs. Nothing; no program in Windows was able to do anything with this drive.

The Dirty Bit

So some definitions first – the name comes from memory management, where a dirty (or modified) bit marks a page that has been changed by the hardware. On an NTFS volume, it's essentially a single flag hidden away in the volume's metadata on the drive itself, marking the file system as potentially inconsistent.

From Microsoft's definition:

A volume’s dirty bit indicates that the file system may be in an inconsistent state. The dirty bit can be set because:

  • The volume is online and it has outstanding changes.
  • Changes were made to the volume and the computer was shut down before the changes were committed to the disk.
  • Corruption was detected on the volume.

If the dirty bit is set when the computer restarts, chkdsk runs to verify the file system integrity and to attempt to fix any issues with the volume. (In our case, this didn’t happen, obviously).

So we assumed this was a dirty bit problem (at this point, we were just shooting in the dark due to the lack of diagnostics, logs or events, and working with the black magic of guessing).

From some articles on the net, the options to remove the dirty bit are as follows:

  • You have 3 options to remove the dirty bit from your drive. The first option is to trust the Microsoft disk checking utility by completing a disk check operation. [This didn't work, as Windows wasn't able to read ANYTHING and we could not run any Windows-based operations, commands or programs on it.]
  • The second method is to move the data off the volume and format the drive. After that, move the data back. [This is way too much work. Plus, Windows can't even access it, so the only option would be a clone, such as through Clonezilla? That's a lot of work. And a last resort.]
  • The third method is to remove the dirty bit by using a hex editor with disk editing support. [We didn't explore this, as it seemed a bit extreme – and probably the last time we handled a hex editor was when we had to hack computer games like Football Manager to give ourselves unlimited funds or a 99 in dribbling.]

There’s an easier way.

So this is where you just need to give up on Windows and figure out another way to check this disk. If you have a standby Linux box or Mac, that would help. But if not, you could use this great little tool called SystemRescue which, among other tools, has the delectable ddrescue and ntfs-3g – and the latter will be important.

Boot up to SystemRescue (making a boot disk is very much recommended – just download the distribution from https://www.system-rescue.org/ and use Rufus or another program to make a bootable USB). You now have a nice little Linux distro running from your USB, and you should be able to see your USB drive with the command lsusb or lsblk.
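As an aside, if you're building the boot disk from an existing Linux machine instead of Rufus, the usual dd route works too. The ISO filename and target device below are placeholders – triple-check the device, as dd will cheerfully overwrite the wrong disk:

# write the SystemRescue ISO to a USB stick (replace the filename and /dev/sdX)
dd if=systemrescue-x.yy-amd64.iso of=/dev/sdX bs=4M status=progress && sync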

Using lsblk -o NAME,TYPE,SIZE,MODEL gives you a view of the device, type, size and a few more details.
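The output looks something like this (an invented example, not ours):

NAME        TYPE   SIZE MODEL
sda         disk 238.5G Samsung SSD 860
└─sda1      part 238.5G
sdb         disk 931.5G WD Elements 25A2
└─sdb1      part 931.5G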

Just identify which is your USB drive.

Then use the nifty ntfsfix (assuming /dev/sda1 is the USB partition you want to fix):

ntfsfix -d /dev/sda1
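A successful run ends with output roughly along these lines (yours may differ slightly depending on the ntfs-3g version):

Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sda1 was processed successfully.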

This basically clears the dirty bit, which Windows, for whatever reason, finds so difficult to do and makes us jump through hoops for. In fact, fsutil in Windows only tells you that you have a dirty bit but doesn't clear it. That's like paying a doctor to tell you that you have cancer and then not giving you any treatment for it. Come on, Microsoft.

Right after clearing the dirty bit, the external drive was once again accessible. There were still some errors on the drive, but we just ran the check-for-errors option via the GUI (since we could now access the drive's properties again from the right-click context menu) and fixed up the inaccessible files.

So now you know. The next time you have an outage during a file transfer, it could just be the dirty bit. The problem is the diagnosis (again, Windows could simply log an event saying the dirty bit is set, instead of leading us on this paging-file-nonsense treasure hunt). And of course, if Windows cannot access the drive, SystemRescue is a great tool to solve the issue.

And finally, according to some, an even easier way is to just plug the drive into a Mac; apparently that resets the dirty bit for some reason. We never tried this, so perhaps others can give it a try first before going the SystemRescue way.

Contact us at avantedge@pkfmalaysia.com for more information on what services we can offer you.

Have a good week dealing with human beings!

Let’s Talk v4: Overview

So, on March 31st 2022, PCI-DSS v4.0 dropped on us.

The original timeline for v4.0 had already passed a long time back. Back in 2019, there were talks that v4 would drop in late 2020. Then, due to the global pandemic of unknown origins, it was moved to 2021, and now, finally, they have released it in 2022. We all know PCI SSC loves deadlines. They love the whooshing noise deadlines make as they go by.

First of all, let’s start with another quote from the wisest sage of all generations:

Don’t Panic.

Douglas Adams

Because if we look at the transition timeline, there's a pretty long runway to adopt v4.0: v3.2.1 remains valid until it is retired on 31 March 2024, and the new future-dated requirements only become mandatory on 31 March 2025.

The above basically means this:

a) Entities undergoing PCI right now, whether it’s first time or renewals, if you are going to be certified in 2022, your current cycle and next renewal in 2023 can stay with v3.2.1.

b) Entities thinking of going through PCI-DSS who will likely be certified in 2023: you can stay with v3.2.1 for this cycle, and then for the next renewal in 2024, you will need to move to v4.0.

Long story short, entities have around 1.5 years during which they can stay on PCI v3.2.1, before going v4.0 on their 2024 cycle. That doesn't mean you do nothing from now till then, of course. Depending on your processes, there may be some changes. However, it's not too crazy, and it's more incremental than anything else, including areas we are already practicing that were simply not noted in v3.2.1 (an example being anti-phishing controls, which have been a staple for most of our FSI clients).

So we’re going to have a few breakdown of areas we think is fairly relevant to note in v4.0; a deeper dive into requirements that are added or changed, and more importantly how we think a company can move forward in preparation.

Of course, that being said, v4.0 is only 3 weeks old. A toddler in terms of its predecessors. Let's put it into perspective: PCIv1 (and its sub-versions 1.1 and 1.2) lasted almost 6 years, from 2004 – 2010.

PCIv2 lasted half that time from 2010 – 2013.

PCIv3 and its sub-versions (3.1, 3.2, 3.2.1) lasted from 2013 to 2022. That's 9 years. So in retrospect, we are barely 0.6% into the v4 timeline if it follows v3's lifespan. Meaning, there could be a lot of changes, clarifications and explanations yet to come.

Over the life of v3, we saw many supplementary documents (for scoping, logging, penetration testing, risk management etc.) churned out to clarify v3 items. While not part of the standard itself, these supplementary documents and hundreds of FAQs are generally quoted or referenced by us to support our arguments for and against some of the decisions that QSAs put to our clients. They are extremely useful, especially when QSAs come up with some pretty daft interpretations of the requirements (see our previous post on CDD).

There are some extremely subtle changes aside from the major ones, and we want to note these items from page 4 of v4:

PCI DSS is intended for all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) or could impact the security of the cardholder data environment (CDE).

Some PCI DSS requirements may also apply to entities with environments that do not store, process, or transmit account data – for example, entities that outsource payment operations or management of their CDE.

Whether any entity is required to comply with or validate their compliance to PCI DSS is at the discretion of those organizations that manage compliance programs (such as payment brands and acquirers); entities should contact the organizations of interest for more details.

pci v4.0 warning to those entities that scream i am out of scope because i don’t store, transmit or process stuff!

There’s a lot of things we dislike about v4.0. But there’s a lot of things we LIKE about it as well. So it’s like that family trip that you are taking with your entire extended family. There’s that cousin that you completely dislike that you wish you don’t need to make small conversations with – you know, the one that constantly name drops and questions whether you have achieve as much as he has in life. And tries to coach you to be a better person and live a better life, and have more than your currently unfulfilling, loveless marriage and a deadend, purposeless job as a PCI-DSS consultant. Yeah, you know it. But at the same time, you like these trips because it’s time with your family as well, and time to goof off with your kids, walk on the beach with your spouse and basically fantasize throwing your cousin into a pit full of vipers. v4.0 is like that trip.

The main takeaways from the above quote would be:

a) No more free passes to those entities who claim they are out of scope simply because they don’t store, process or transmit card data. If you have impact on the security of the CDE, then you are in.

b) This is the first time we are seeing the term "Organizations of Interest". While this is nothing much, it's like watching a movie based on a comic book and spotting an obscure easter egg referencing that comic, and getting goosebumps because, you know, you're a nerd. And you like these kinds of subtle references that no one else gets. Basically, OIs are the upstream customers, banks, FSIs – the organisations requesting your PCI-DSS compliance. It's easier to make this reference now that it's an official term in v4.0. Yay.

c) Organizations that 'impact security' are in. Previously the problem was that we had outsourced SOC/NOC providers, or outsourced providers that do not handle card data (e.g. managed providers for firewalls etc.), and even cloud services handling MFA or authentication generation, claiming that since there is no card data, they don't need PCI. That's fair enough, but we still need to assess that service as part of an on-demand assessment, to ensure it is properly secured or at least has basic security functionality over it. While the majority of providers are fine with this, we have had antagonistic providers shouting to high heaven that we are idiots because, by the very fact that they do not store, process or transmit card data, they should be completely disregarded from the PCI assessment. Um. No. You're not, and v4.0 is smacking you in the face for this.

Another item in v4.0 is the sheer amount of information provided right at the beginning of the standard. It covers scoping methods, segmentation, encryption and applicability to third-party providers, the use of third-party providers and how to be compliant with them, BAU best practices, sampling methods, definitions of timeframes, definitions of words like 'significant changes', approaches to implementing PCI-DSS, testing methods, the assessment process, RoC writing and, if you look carefully, a recipe for Jamie Oliver's Yorkshire pudding.

In the previous v3.2.1, the requirements started on page 20. In v4.0, the requirements start on page 43. The total page count in v4.0 is 360, up about 159% from the previous 139 pages. Simply put, you are going from reading Enid Blyton's Famous Five Goes to Finniston Farm to Leo Tolstoy's War and Peace.

The requirements themselves remain at 12, so in essence, despite all the fluff at the beginning, the actual requirement structure is still intact. There's still quite a fair bit to look at, and here we provide a brief overview:

a) Customized implementation

So, we now have an outcomes-based implementation option in PCI v4. This is based on the purpose or 'spirit' of each requirement, and may not necessarily use the standard's defined controls to achieve it. Take, for instance, the requirement to do quarterly internal scans – the objective is to identify vulnerabilities at a regular interval and to ensure the organisation addresses them. Instead of on-demand quarterly scanning, the organisation may opt for the continuous analysis and automated scanning available in clouds such as Google Cloud or AliCloud. So while the controls are different, they address the same objective.

It is noted that customized implementation should only be attempted by organisations with a mature risk management practice in place, as it requires more work from both the organisation and the QSA to define and test these controls.

On how this is implemented or samples of it, I am sure we will be seeing more examples as the standard starts maturing. Remember, v4.0 is still a baby, not even out of the maternity ward yet.

b) Multi-factor and Passwords

Multi-factor authentication is now needed for any access into the CDE. So we call it Multi-Multi Factor – MFA is required for remote users to get into the network, and then, to get from the non-CDE network into the CDE, additional MFA is required. It seems fairly straightforward, but companies now have to consider implementing a jump server in the CDE to act as an access aggregator for the multiple systems in the CDE – or they could just deploy another MFA solution on the network.

Minimum password length goes up from 7 characters to 12 alphanumeric characters. There's still a runway on this, as it only becomes a requirement on 31 March 2025. A lot can happen between now and then, and a lot of technology can change. We could be facing a global climate crisis and the end of the world, or World War 3 nuclear warfare, or an asteroid could hit earth, or the Rapture happens – you know, future stuff. But in case none of those things come to pass, make sure you move your passwords to 12 alphanumeric characters.

c) Group Accounts

8.2.2 gives a needed reprieve in this kerfuffle over group accounts. In v3.2.1 they were disallowed; in v4.0 they are allowed, based on the rule of common sense. Some systems have group accounts for a purpose, or are unable to provide certain functionality to individual accounts. So while more justification etc. is now needed, it's no longer a hard no for group accounts.

d) Targeted risk analysis

A targeted risk analysis can now be done to determine the frequency of certain actions – such as password changes, POI device inspections, non-CDE log reviews, remediation of low-ranked vulnerabilities, FIM reviews, frequency of training etc. Now, while we want to believe the PCI SSC's intent is for organizations to change control frequencies to be MORE stringent (for example, passwords changed every 30 days instead of 90), the reality is that most of us would stretch this requirement to make life a lot easier for ourselves. I mean, what's the point of having flexibility if you can't make it as flexible (i.e. as little work to be done) as possible, right?

e) Card data discovery (CDD)

Card Data Discovery scans – CDD. There is finally some clarification that card data scans are to be done every 12 months, confirming what we already covered in our previous post on educating the QSA on how to interpret the particular CDD requirement. So yeah, kudos PCI SSC for supporting us!

f) Misc – Anti-Phishing and Full Disk Encryption

As mentioned previously, we now have references to Anti-Phishing requirements, which should have been there long before, to be honest.

We also have clarifications that will have a significant impact on some of our clients – on the use (or abuse) of the full disk encryption requirement. v4.0 has basically blocked that way out for customers utilising BitLocker with TPM to get past Requirement 3. This is, to us, a fairly significant item in v4.0, and we will be dedicating a post to it later.

Well, so that’s it for the overview for now. We hope to get more articles out to do deeper dives into v4.0 but like I said, it’s still early days and there would be more clarifications ahead. Hopefully it will be more positive, and the experience of v4.0 will be less like that family outing with the cousin that should be thrown into a pit of vipers.

Contact us at pcidss@pkfmalaysia.com for any queries you have on PCI and we will get back to you immediately.

Official Announcement AT&T Cybersecurity on sales hold for Alienvault

As previously announced, USM Appliance will be placed on a new sales hold effective January 1st, 2022.

What does that mean for me?

All net new sales of USM Appliance to new customers will be discontinued. Any new USM Appliance orders placed by December 31st, 2021 will be accepted. Renewals and expansions of existing deployments will continue to be accepted until December 31st, 2023. A sales hold is NOT a declaration of end of support; AT&T will continue to provide support through December 31st, 2024.

How can I continue to support my customers?

AT&T Cybersecurity is committed to providing our customers with innovative security solutions. USM Anywhere, our SaaS-based solution, will continue to be the focus and flagship product for our Threat Detection and Response offerings.

Please take a look at https://cybersecurity.att.com/products/USM-Anywhere for more information on USM Anywhere. If you have any questions, you can reach out to us at alienvault@pkfmalaysia.com.

PCI Delta Assessments


Let’s start off by saying this isn’t a way for us to make light of the current situation by using the word ‘Delta’ here. We all know how dangerous and virulent the current strain of COVID is and this isn’t a matter of writing an article simply to get a search hit on that word.

That being said, this is a topic that seemed a bit obscure, even to us who have been doing PCI-DSS for more than a decade now.

So the question that sometimes pops up is: great, we got our PCI-DSS certification, everyone is celebrating and patting each other on the back. Then, 2 weeks after our AoC/RoC has been produced, our product management rolls out a new Application XYZ which deals with credit card information, along with a new environment, database, systems etc. Is Application XYZ included in our current PCI-DSS certification or not?

It’s a good question. Because the fact is that many view PCI-DSS as a point in time audit, whereby the audit is done at a certain time and not over a period of time. One might argue that during the audit itself, sampling will be done over a 12 month period, therefore it cannot be categorised as a strictly point in time assessment. Regardless how you categorise it, at the end of the audit, there is the big result: a compliant AoC/RoC pair. Don’t get us started on the dreaded Certificate of Compliance or CoC, or CoC-n-Bull in our terms. Enough of that certificate nonsense. As for the AoC/RoC pair, the scope is stated clearly in it, defining the audit scope, the boundaries, the applications scoped in, locations etc. So this is great. When we get a new application onboard, we just add in that application into the AoC, right?

Right?

Unfortunately, at this point, the QSA will say, not really. Once the AoC is out, it’s out. Unless you want to re-do the audit or to recertify, then yes, that new application can be added in.

Now, we’ve faced such a situation before. And in fact PCI-DSS addresses it nicely at this wonderful piece of work: https://www.pcisecuritystandards.org/documents/PCI_DSS_V2.0_Best_Practices_for_Maintaining_PCI_DSS_Compliance.pdf

In item 3.10.3 it states:

Any change to the network architecture or infrastructures directly related to or supporting the CDE should be reviewed prior to implementation. Examples of such changes include, but are not limited to, the deployment of new systems or applications, changes in system or network configurations, and changes in overall system topologies.

PCI reminding us to stay focused!

So in this case, Application XYZ falls under 'new application'. The point of PCI-DSS is that deploying a new system, firewall or application doesn't mean you are no longer compliant. After all, PCI encompasses practice and process as well, so the council understands, and advises, that these changes be folded into the PCI program so that the PCI processes keep everything compliant. In short, if Application XYZ is coming in, make sure the PCI controls apply to it; it will then be reviewed in the next audit and included in the coming year's AoC. So let's just update the current AoC and we can all go home now, right?

Right?

But wait, you aren’t listening, says the auditor, you still can’t update the current AoC. The AoC is already fixed for that year, unless you want to do an audit. Again. Like a month after you have done and dusted your recertification audit for that year.

In most cases, these changes for our clients go through the maintenance cycle without an issue, and the following AoC simply gets updated to include them. But what if the customer insists on having the CURRENT AoC updated? This could be due to requirements from their client, a regulator or whatnot. How do we put that application into the current AoC without spinning up the whole audit all over again?

In short, you can’t. You either wait it out for the next year audit OR you re-do your certification audit and nullify the previous one. However, this is where that little obscurity comes in. Delta assessment.

Now I’ve heard of Delta assessment for PCI, but it’s almost invariably related to PA-DSS (SSF now), PCI PTS, P2PE where basically, vendors who had completed, let’s say their SSF, can validate low risk changes to their application and do a delta assessment. In PTS, the delta is done by the PTS Lab, but for SSF, the SLC vendor can basically do a self attestation. However, we don’t see any such item or recourse for PCI-DSS.

Discussing with the auditors, we found that there are indeed possibilities for a delta assessment, although it is rare and not exactly cost effective, since whatever the delta covers would only have a short lifespan before the changes get swallowed up by the main PCI program once the yearly audit cycle rolls in. That's why we rarely see it done. Then again, we rarely see a tapir doing a jig in a tutu, but that doesn't mean it doesn't exist.

So what happens is that the auditor formally audits this application and its environment, going through the certification process as would normally be done – except limited to the application and its systems. Once assessed, a formal delta AoC/RoC pair is released to supplement the existing AoC/RoC pair. And that's it: these supplementary documents can then be shown together with the current AoC/RoC for verification purposes, and in the next cycle, everything is consolidated back into the main RoC.

Now, this was fairly new to us. The logic of it still escapes us somewhat, because the whole point of PCI is for an environment to be able to handle changes without being audited every time a significant change occurs. Every audit is costly, and we're sure every organisation already has its hands full trying to sort out budgets during these times, without worrying about delta assessments.

The above is basically what we gathered from discussions with auditors rather than from experience, because in the end, once the proposal was put out, our client thought better of it and decided not to pursue it. So really, it's still in the realm of theory and we may not be accurate in our assumptions. However, it's still something interesting to keep in mind; though rare – like the tapir in a tutu – it helps to know that this option possibly exists.

Drop us a note at pcidss@pkfmalaysia.com and we will try to address all your concerns on PCI or other compliance matters like ISO27001, ISO20000 etc!

PCI Pentesting and ASV Scans

Back in the day (as in when we started PCI more than 10 years ago), when it came to testing and scans, the lines were probably very gray. We saw a lot of reports come out under the guise of 'penetration testing' that were lifted straight out of an automated Nessus scan or one of the free Acunetix scans available. The problem was exacerbated when these penetration testing reports were accepted by regulatory bodies like the regulating bank and passed by other internal/external auditors. They basically just looked at a report, and if it sounded and looked technical enough, then it was technical enough.

Eventually, PCI got the hint and released a few versions of the Penetration Testing Guidance document, the latest iteration in 2017. A big part of it talks about scoping, clarifying qualifications, and Requirement 11. But one of the key features of the document is highlighted in section 2.1, which draws a clear line between vulnerability scans and penetration tests.

This came about to stem the misconception that as long as you have completed a vulnerability scan, you can pass it off as a penetration test. We still see customers going down this route, in whatever creative ways they can conjure to avoid the penetration testing exercise.

An example was this response in one external PT report, stating:

“We have conducted the PT exercise based on the recently passed ASV scan report by the QSA. Since the ASV scan has passed, the penetration testing report is also considered to be passed as there are no vulnerabilities to test.”

This is basically the philosophy that as long as the scans do not yield any high or medium vulnerabilities – i.e. a passing scan – there is no longer a need to conduct any penetration testing. Their concept was simple and fairly understandable: since there are no "vulnerabilities" in the scan, there is nothing to 'test'.

Of course, this was rejected by the QSA.

While there are many arguments on this matter, the simple case against it is this: a scan produces potential vulnerabilities, and may even miss some that are never reported. False negatives exist even in commercial scanners such as Qualys or Nessus (two common automated scanners). Additionally, a passing scan does not mean no vulnerabilities; it just means there are no medium/high vulnerabilities based on a scan that has no context of the environment. By non-contextual, we mean most scanners categorise vulnerabilities using the internal libraries in their scanning database, without any definition of the actual environmental risk of what they are scanning. So equating a CVSS score to the actual risk of the organisation may be too broad an assumption, as some low vulnerabilities may still be exploitable manually. The classic example here is when we check a simple form-entry password and find it is well protected and designed, technically. However, a pentester may then go onto the organisation's forum and discover that the admin regularly keeps a password file in Google Drive and inadvertently shares it with the entire world. The scanner won't discover things like that.

Therefore, simply stating that a passing ASV scan equates to a passing penetration test is not going to get a free pass in PCI.

Another question many organisations come back to us with, when they have their own team of penetration testers doing internal testing, is: well then, how do we do a penetration test, if you say we cannot use the ASV report to also pass our external penetration testing?

And it may seem weird when we look at them and answer: wouldn't your penetration testers be able to answer that, instead of us? From the auditor's perspective, we look at 3 things: Tools, Technique, Team.

The tools being used are important, but they are not all there is to a pentest. Just stating you have Kali or Metasploit doesn't necessarily mean you know how to operate them. Technique (or method) is important to document. This is key for PCI, and a key difference between hackers and pentesters: a pentester knows how to document each step, inform the client, and restore – not destroy – the environment. A hacker (or, to use the more correct term, cracker) would simply go in and cause as much damage as possible, depending on his or her objective. You would rarely come across crackers writing detailed reports for their victims, with executive summaries to the Audit Committee justifying their methods, the scope of coverage and the time and date of engagement. And finally, PCI looks at the personnel (or team) conducting the exercise. They may be certified (or not), but they should at least be qualified. In this case, if the pentester has no idea how to start a pentest, the normal assumption would be: he's not a pentester. A chef doesn't ask people how to start cooking. He may require an input or two to understand what he needs to cook, or how spicy the broth should be for the customer; but if he's asking how we start the cooking process or what a wok is, that should be a red flag.

So, while full coverage of penetration testing and vulnerability scanning is not the purpose of this article, it is keenly important to know the difference between the two (penetration test vs vulnerability scan), and not to use one to justify the inaction of the other. Your QSA may bounce back that vulnerability scan attempting to disguise itself as a penetration test, wasting precious compliance timeline in the process.

Drop us a note at pcidss@pkfmalaysia.com for any queries you have for PCI-DSS or ISMS and we will get back to you straight away! Stay Safe!
