Technical Session: Clearing NTFS Dirty Bit

Every once in a while, we take a break from boring compliance articles and write about what’s more interesting – fixing broken stuff or troubleshooting problems that have nothing to do with human beings. It’s far easier dealing with machines.

So, what happened was, we had a USB drive plugged into one of our servers and were doing some file transfers. The server wasn’t hooked up to our UPS, as this was a test system – ok, it was actually sitting under my desk, and every time I turned it on, everyone in the office thought a helicopter was outside the window. It’s old and loud and totally unsuitable to be located outside of a server room. Ah well.

In any case, halfway through the transfer, the power tripped. The server was fine upon restart, but the USB external drive was not.

It demonstrated a few symptoms:

a) When plugged in, the drive does whir up and Explorer recognises it. The problem was that it was listed as ‘Local Drive’ and nothing else – no other information. When clicked, it freezes everything up. A right click does eventually bring up the context menu, but when ‘Properties’ is selected, it hangs and never proceeds. So trying to scan the drive for errors from the GUI is a no-go.

b) Command-line wise – accessing G: again just hangs. chkdsk /f also hangs, so trying to scan from the command line is equally a no-go.

c) Going into the Disk Management GUI, it takes a long time before it eventually pops up, and the good news was that Disk Management actually saw the drive. However, on right-clicking it and trying to reassign the drive letter (as suggested by some other articles for recovery), we get this annoying message:

The operation failed to complete because the Disk Management console view is not up-to-date.  Refresh the view by using the refresh task.  If the problem persists close the Disk Management console, then restart Disk Management or restart the computer

Microsoft being cryptic and mysterious

So, like lemmings, we proceed to refresh the console with F5, and it just hangs indefinitely; nothing happens until we unplug the drive. Then a string of errors comes out, like ‘Location of drive cannot be found’, etc. It seems the auto-opening of the USB drive was activated but Windows just couldn’t read the drive. So Disk Management is a no-go.

d) We tried installing other software like Acronis or EaseUS, but none of these managed to read the hard drive; they simply hung until we unplugged it.

e) Changing laptops/desktops/cables (all running Windows) – all with the same result. The drive was acknowledged, but Explorer and other programs couldn’t open anything on it. This is good news actually; it doesn’t seem there was a hardware issue, nor any dreaded clicking noise indicating the drive was a dead duck.

f) So it does point to a software-layer issue, which should be handled with a scan disk or check disk by Windows. The problem is, the disk couldn’t be read, so it couldn’t be scanned. Booting into safe mode doesn’t help. Reinstalling the USB drivers doesn’t help. The drive simply refuses to go to work, like all of us on a Monday morning after being smashed with a hangover from a Sunday night out.

g) Finally, in Event Viewer under Windows Logs -> System, this particular classic comes up: “An error was detected on device \Device\Harddisk2\DR21 during a paging operation.” Following that lead, under System Properties -> Advanced -> Performance -> Settings -> Advanced -> Virtual Memory, you could uncheck the box to automatically manage the paging file size. But no – Windows doesn’t read the drive, so clicking on G: once more hangs the whole system.

At this point we had wasted an hour trying to sort this nonsense out. Nothing in Windows was able to indicate the issue. One article suggested running fsutil from the command line. This can check for the dirty bit on NTFS, an annoying feature that basically renders the drive useless until the bit is ‘cleared’.

The problem with this was – yes, you got it – you couldn’t run any command on that drive as it just hangs. Nothing, not a single program in Windows, was able to do anything for this drive.
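For reference, this is the sort of check we were attempting – a minimal sketch, assuming G: is the stricken drive. Note that fsutil can query the dirty bit (and even set it with ‘fsutil dirty set’), but has no option to clear it:

fsutil dirty query G:

On a healthy volume this returns something like “Volume - G: is NOT Dirty”; on ours, it hung like everything else.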

The Dirty Bit

So some definitions first – the dirty bit is a modified bit. In memory, it refers to a bit that switches on when a page is updated by the hardware. On an NTFS volume, it is just a flag value sitting in a hidden corner of the portable hard drive’s metadata.

From Microsoft’s definition:

A volume’s dirty bit indicates that the file system may be in an inconsistent state. The dirty bit can be set because:

  • The volume is online and it has outstanding changes.
  • Changes were made to the volume and the computer was shut down before the changes were committed to the disk.
  • Corruption was detected on the volume.

If the dirty bit is set when the computer restarts, chkdsk runs to verify the file system integrity and to attempt to fix any issues with the volume. (In our case, this didn’t happen, obviously).
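Incidentally, Windows also ships a small utility that reports whether a volume is flagged dirty and queued for a boot-time check – again assuming a drive letter of G: and a drive Windows can actually read:

chkntfs G:

On a clean volume it reports that the drive is not dirty. In our case, of course, anything touching G: simply hung.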

So we assumed that this was a dirty bit problem (at this point, we were just shooting in the dark due to the lack of diagnostics, logs or events, and working off the black magic of guesswork).

From some articles on the net, the options to remove the dirty bit are as follows:

  • You have 3 options to remove the dirty bit. The first is to trust the Microsoft disk checking utility by letting a disk check operation run to completion. [This didn’t work, as Windows wasn’t able to read ANYTHING and we could not run any Windows-based operations, commands or programs on it.]
  • The second is to move the data off the volume, format the drive, then move the data back. [This is way too much work. Plus, Windows can’t even access it. So the only option would be a clone, such as through Clonezilla? That’s a lot of work. And a last resort.]
  • The third is to remove the dirty bit using a hex editor with disk editing support. [We didn’t explore this, as it seemed a bit extreme – the last time we handled a hex editor was probably to hack computer games like Football Manager for unlimited funds or a 99 in dribbling.]

There’s an easier way.

So this is where you just need to give up on Windows and figure out another way to check this disk. If you have a standby Linux box or a Mac, that would help. If not, you could use this great little tool called SystemRescue, which, among other tools, has the delectable ddrescue and ntfs-3g – and these will be important.

Boot up to SystemRescue (making a boot disk is very much recommended – download the distribution and use Rufus or another program to make a bootable USB). You now basically have a nice little Linux distro running from your USB stick, and you should be able to see the external drive with the command lsusb or lsblk.
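If you happen to have a Linux machine handy, a rough alternative to Rufus for writing the boot disk (assuming the image is named systemrescue.iso and your USB stick is /dev/sdX – double-check with lsblk first, as dd will happily overwrite the wrong disk):

dd if=systemrescue.iso of=/dev/sdX bs=4M status=progress && sync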

Using lsblk with -o and a list of columns gives you a view of the device, size, type and a few more details.
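Something along these lines – an illustrative example (not ours), where the system lives on an internal NVMe disk and the ailing NTFS external shows up as sda:

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME        SIZE   TYPE FSTYPE MOUNTPOINT
sda         465.8G disk
└─sda1      465.8G part ntfs
nvme0n1     238.5G disk
├─nvme0n1p1 512M   part vfat
└─nvme0n1p2 238G   part ext4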

Just identify which is your USB drive.

Then use the nifty ntfsfix with the -d (--clear-dirty) flag (assuming /dev/sda1 is the USB partition you want to fix):

ntfsfix -d /dev/sda1
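On a good run, the output should look something along these lines (a sketch – exact wording varies with the ntfs-3g version):

Mounting volume... OK
Processing of $MFT and $MFTMirr completed successfully.
Checking the alternate boot sector... OK
NTFS volume version is 3.1.
NTFS partition /dev/sda1 was processed successfully.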

This basically clears the dirty bit, which Windows, for whatever reason, finds so difficult to do, making us jump through hoops instead. In fact, fsutil on Windows only tells you that you have a dirty bit but doesn’t clear it. That’s like paying a doctor to tell you that you have cancer without offering you any treatment for it. Come on, Microsoft.

Right after clearing the dirty bit, the external drive was once again accessible. There were still some errors on the drive, but we just ran the check-for-errors option via the GUI (since we could now access the drive’s Properties again from the right-click context menu) and fixed up the inaccessible files.

So now you know. The next time you have an outage during a file transfer, it could just be the dirty bit. The problem is the diagnosis (again, Windows could simply log an event stating that the dirty bit is set, instead of leading us on this paging-file treasure hunt). And if Windows cannot access the drive at all, SystemRescue is a great tool to solve the issue.

And finally, according to some, an even easier way is to just plug the drive into a Mac – apparently, it resets the dirty bit for some reason. I never tried this, so perhaps others can give it a try first before going the SystemRescue way.

Contact us for more information on what services we can offer you.

Have a good week dealing with human beings!

PCI-DSS 2022 and Version 4


So we are now in 2022. PCI-DSS v4.0 is due to be out, and one of the things we have been doing for the first two weeks of the year is getting over our holiday hangovers. That’s right. In our country (Malaysia), the slowest months are December, January and February. It’s like starting a car in the dead of winter. These three months are like the Amen Corner in Augusta for businesses. December hits like a ton of bricks due to the Christmas season; then, just when things start moving in January, everything grinds to a halt for Chinese New Year, when the entire nation just flat out refuses to work. By the time we are back in the second week after Chinese New Year, we are once more in first gear, climbing up the hill of 2022.

So we did things a bit differently. We started the first two weeks with a series of training for clients and potential clients, to go through PCI-DSS v4.0 and create an awareness of what is there to expect.

The transition timeline published on the PCI website immediately shows some interesting things. Number one: PCI-DSS v3.2.1 only retires in 2024. This is interesting because the transition period usually isn’t so long. It’s longer now because – I don’t know, there may be an ongoing pandemic and such. So here we are in Q1 2022, and our customers are asking: when do we transition to v4.0?

Well, the answer would be: as soon as you can. But in theory, you can probably stick to v3.2.1 validation for 2022 and realistically move to v4.0 in 2023. In fact, for some of our clients whose PCI maintenance period follows the calendar year, they can even force 3.2.1 into their 2023 validation year.

As for the actual content of PCI v4, it’s still a well-kept secret, like the plot of Spider-Man: No Way Home; but we have been reading a bit, joined last year’s PCI-DSS community meeting, and learnt some interesting tidbits.

No 1: Compensating Controls

The get-out-of-jail-free card. Customers have been dangling this Compensating Controls card in front of our faces since Mesopotamian times. When they can’t address a control – compensating controls! When they cannot implement something due to budget – compensating controls! When they can’t make changes to an application because it was designed by a group of kindergarten kids and would break the moment you touch it – compensating controls! When you don’t know what to say to your wife after a long night out at the pub with the mates and come back smelling like a keg of kerosene – compensating controls!

The problem with compensating controls is that they are a pain in the neck to implement, to document, and to justify. The compensating control worksheet, the justification documentation, the implementation of the control itself to be ‘above and beyond’ the scope of PCI-DSS, etc. Everyone thinks this is a silver bullet, only to find it is the deepest rabbit hole you can ever fall into.

So, PCI v4 does away with compensating controls. Great.

And they introduce Customized Implementation.

A lot of people are saying this is a game changer.

Honestly? Until more information comes out, we only have this to go with:

Customized implementation considers the intent of the objective and allows entities to design their own security controls to meet it. Once an organization determines the security control for a given objective, it must provide full documentation to enable their Qualified Security Assessor (QSA) to make a final decision on the effectiveness of the control.


Design their own security controls? Well, ok, isn’t this the same as compensating controls? I am thinking this just expands the interpretation to something broader, in which case the control may not even be a technical one. Previously, we would state: we can’t meet certain password controls due to a legacy application issue, and the compensating controls are excessive logging and monitoring, isolation of the network, whitelisting of IPs and access, WAF, DLP, virtual patching, etc. Are we now stating that, instead of all these ‘compensating’ technical controls, we have a customized security approach – which includes isolation of the network, whitelisting of IPs and access, WAF, DLP, virtual patching, etc.?

Until we see some examples of this, it may well be that most companies will go along with the ‘normal’ approach, or adopt a wait-and-see attitude and eke out the last remaining drop of v3.2.1 into 2024.

No 2: Up in the Clouds

Another item that is long overdue? Cloud. It’s about time this gets addressed – and not just cloud, but how services and containers work as well. We have had auditors coming to our clients insisting they run VA/PT testing on services from AWS, not recognising there isn’t even an IP address to start with. To be fair to the SSC, they do have a few supplementary Cloud Guidelines documents, which we actually find very useful, especially in our projects certifying cloud technologies. We can see this being incorporated more formally into v4.0, where the requirements will be designed around cloud environments more organically than what we see right now (which sort of force-fits many traditional concepts like network IDS, patching, etc. into the cloud environment).

No 3: Not another MF-A!

I have a bad feeling about this.

MFA has been a constant pain for us. Firstly, where MFA is implemented – no longer just at the perimeter but on every access into the CDE. At least for now it’s still only on admin accounts; we hear they plan to introduce it for ALL users. We also hear the collective screams of the tormented from the nine hells of Dante. Secondly, a lot of customers still depend on MFA via SMS. If PCI goes along the NIST route, we could see this being deprecated soon. Clarification on whether client-side certificates are acceptable as a ‘something you have’ factor would also be most welcome – we see QSAs interpreting this so differently you’d think we’d asked them to interpret some ancient Thuggee text. Multi-factor has already been a challenge for us over the past years, with Bank Negara’s RMiT focus on ‘strong MFA’ for large financial institutions. Clear guidance is also needed on how to evaluate multi-factor that depends on a cloud provider, and whether common implementations like Google Authenticator can still be considered good enough for v4.0.

No 4: Encrypting everything

We also hear that “Pocket Protector Trope” security may be implemented. Remember those movies where the hero gets shot in the chest and you think he dies, but he reveals that the bullet was stopped by his pocket watch, his badge, a bible, or some other dang sentimental thing that was given to him like 40 scenes ago?

So in PCI, when data traverses the internet or an open network, the standard states the transmission needs to be encrypted. It doesn’t technically state anything about encrypting the data package itself while in transmission; data encryption almost exclusively occurs for data at rest. So in this case, they are doubling the protection: adding that pocket watch to catch the bullet, so that if the transmission gets compromised, the data is still secure. The bullet doesn’t hit the hero!

No 5: Recovery and Continuity

Not so much something coming, but more what we’d like to see. One of the biggest criticisms we see customers bemoaning about PCI (other than the cost, the budget, the complexity and... ok, everything else) is that PCI has little focus on business continuity and disaster recovery. It’s almost as if PCI is standing there saying, “OK, you have an outage for a few days? Great, make sure your credit card information is safe.” It’s not really business focused; it’s more credit-card-confidentiality focused. What we would like to see is a little more attention in this area. Over the past two years, we have seen customers getting all sorts of attacks from cyberspace. Malware, ransomware, hacking, fraud, defacement – it’s as if the world went into a pandemic, everyone got bored at home, and everyone took up hacking as a part-time gig. Malware, for instance – how prepared is a PCI-compliant company against a ransomware attack? Have they done their backups? Have they tested their systems to recover?

So, if you have any queries on PCI v4 for us, drop us an email and we will definitely get back to you. Have a great and safe year ahead in 2022!

PCI-DSS and Card Storage


We had an interesting discussion a few weeks back about storage in PCI-DSS. We disagreed with an acquirer’s position on how PCI-DSS views storage, and that opened a whole can of … interesting debate.

The problem the acquirer had with our position was simple. We have a client currently doing a data migration import from another service provider into their document management system. Among the terabytes of data were possible scanned copies of credit card information, either in forms or actual photocopies of the cards themselves. And we are talking about terabytes.

Our position was fairly straightforward. Do you need this card data? we asked. No, said our client. We don’t need the card data, as we do recon and back-office operations on other forms of identification. Can this information be removed or redacted? Bemused, they said, possibly, but the problem is that there are going to be millions of records to deal with.

Well, is there a way we can sanitize the data before it enters into your environment?

Yes, possibly, we need to ask the acquirer to ask their current provider to do it for us.

The provider you are taking business away from?


Good luck…

And sure enough, the acquirer responded and asked us, “Shouldn’t PCI-DSS allow the storage of this card information, and shouldn’t your client be able to deal with it? Why do you insist on us redacting and removing the card information? What then is the purpose of PCI-DSS??”

Now, on the surface, that argument does make sense. After all, PCI-DSS applies to entities that store, transmit or process credit card information, right? Why then wouldn’t we want our client to store credit card information if they are going through PCI-DSS?

Unfortunately, this is a case of getting the solution (PCI-DSS) mixed up with the problem (storing card data). In a more current analogy: just because I got vaccinated doesn’t mean I would purposely go out and try to get infected so that the vaccine has something to do. The purpose of PCI isn’t for you to store credit card data; it’s for you to manage the storage of credit card data IF you store it. Storing credit card data isn’t a PCI-DSS objective, it’s an issue that PCI-DSS tries to solve.

So back to this little kerfuffle: if they pass us terabytes of information with card data in it, our client will need to figure out a way to protect that data – likely encryption of any files in which card data is present, which brings along key management, etc. If it can be redacted and removed before it enters our client’s environment, then we avoid all of that. We are basically following the concept of PCI-DSS:

Requirement 3 addresses protection of stored cardholder data. Merchants who do not store any cardholder data automatically provide stronger protection by having eliminated a key target for data thieves. Remember if you don’t need it, don’t store it!

PCI-DSS Prioritized Approach

If we don’t need it, don’t store it. In this case, we don’t need it, so we are trying to escape storing it. However, if this cannot be done (which it likely won’t be), then we just need to put controls in place. We’re trying to get our client to do less, and we are also trying to remove card footprints in other areas, thus reducing the risks to the card brands – and likely saving the world from impending disaster and destruction.

However, we do have another issue.

Because there is potentially CVV storage (photocopies of cards, front and back, scanned into softcopies), we have a bit of a problem. CVV cannot be stored in any format or on any media post-authorisation. Therefore, if this is dumped into our client’s environment, it’s imperative that someone removes this information. To us, it’s a lot easier to remove it at the source; but unfortunately that means effort has to be spent on it, which no one is willing to do.

How the CVV got stored in the first place is a question we don’t have an answer to. We do know, however, that if CVV is present, we cannot just encrypt it and be done with it. We will need to remove this information record by record. There are a few solutions out there that can do auto-redaction across a massive number of files, provided the files follow some sort of standard format. That could be a solution here, but it’s beyond what we are discussing in this article.

The point is, undergoing PCI-DSS doesn’t automatically mean we MUST store card data. It simply means that IF we store card data, we apply PCI-DSS controls to that storage.

Let us know if you need more information about PCI-DSS or any IT compliance standard like ISO27001 or CSA/SOC – we are ready to assist; just contact us. Stay safe everyone!

Hardening Checklist

Requirement 2.2 has been often deliberated by customers undergoing PCI-DSS. To recap, the requirement states:

Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.
Sources of industry-accepted system hardening standards may include, but are not limited to:
• Center for Internet Security (CIS)
• International Organization for Standardization (ISO)
• SysAdmin Audit Network Security (SANS) Institute
• National Institute of Standards Technology (NIST).

Requirement 2.2

So often, customers go ahead and download the CIS hardening documents, copy them lock, stock and barrel into their policies, and send them in. All this may be well and good, but now you have an approximately 1,200-page tome with guidelines like 14-character alphanumeric passwords, as opposed to what PCI requires (7-character alphanumeric). This is where our customers get stuck; some even send in a 1,000-page hardening document for us to review, only for us to find that they have not implemented even 1% of what is noted in it.

After that, the hardening documents get re-jigged until they meet a reasonable, practical, implementable standard, usually in the form of a checklist. For a very quick hardening checklist, this is the initial one we often end up using, just to get our clients up to baseline speed, whether for PCI or not:

| Hardening Item | Servers | Network Devices | Databases |
| --- | --- | --- | --- |
| Assign an individual server to each critical role (App, Web, DB, AD, AV, Patching etc.) | Y | NA | Y |
| Disable/rename/remove default user accounts | Y | Y | Y |
| Assign role-based access to users | Y | Y | Y |
| Disable insecure or unnecessary services | Y | Y | NA |
| Use secure versions of remote access services (SSH, RDP over SSL) | Y | Y | Y |
| Install well-known anti-virus with the latest signatures | Y | NA | NA |
| Install the latest OS / firmware / software security patches | Y | Y | Y |
| Disable inactive users automatically after 90 days | Y | Y | Y |
| Ensure the following password policies: complex passwords of 7 characters or more; remember minimum of last 4 passwords; password change required within 90 days; password change required upon password reset and first logon | Y | Y | Y |
| Ensure the following account policies: account lockout threshold of max 6 attempts; account lockout duration of 30 mins or until admin unlocks; idle session timeout of 15 mins or less | Y | Y | Y |
| Ensure passwords are stored securely with encryption | Y | Y | Y |
| Enable audit logging to capture, at minimum: successful logins; failed logins; administrative actions; user creation, deletion and updates; escalation of privileges; access to audit trails; initialization or stopping of auditing | Y | Y | Y |
| Configure NTP and time synchronization | Y | Y | Y |
| Implement File Integrity Monitoring | Y | Y | Y |

Now obviously this doesn’t cover all the requirements of PCI (testing, scans, retention etc) but this should give us a fair idea of how ready our systems are for an audit or assessment.
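As a quick illustration of how some of these line items translate into practice, a few can be spot-checked straight from a shell on a Linux server – a rough sketch using common default paths (which may differ per distro), not a full audit script:

grep -E '^PASS_MAX_DAYS' /etc/login.defs    # password change required within 90 days?
systemctl list-unit-files --state=enabled   # review enabled services for anything insecure or unnecessary
grep -Ei '^(PermitRootLogin|PasswordAuthentication)' /etc/ssh/sshd_config   # secure remote access settings
timedatectl                                 # look for "System clock synchronized: yes" (NTP)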

If you have any queries on PCI or ISMS or any other security-related standard, drop us a message.

Do or Do Not – ASV for SAQ A


I would have thought this debate died out with the extinction of dinosaurs, but apparently, we are still at this subject in 2021. Still. Going. On.

So in the past weeks, there was some debate between us and some consultants as to whether SAQ A requires an ASV scan or not. Our position was no. Their position was yes. So let’s look at it.

Now, keep in mind, we aren’t talking about best practice. We are talking about PCI-DSS v3.2.1 and what it says about ASV scans being mandatory for SAQ A. That’s it. That’s the statement. Now, debate.

There is actually no debate. This isn’t some sort of grey area, or some hard-to-explain, obscure rule written in Sanskrit on the Sankara stones. This is just: look at SAQ A, search for ASV, don’t find it. Thank you.

The ASV requirement is present in item 11.2.2 of PCI-DSS.

SAQ A does not have it.

So why do consultants still insist people do ASV scans for SAQ A?

There could be a lot of reasons, ranging from ‘guideline’ to ‘best practice’ and so on. No doubt, a scan (which isn’t expensive in any case) is the least a merchant can do for security if they are hosting an e-commerce website that redirects customers to their payment processor once “Click here to pay” is clicked. Even if it had nothing to do with PCI, it seems like common sense to at least scan your site to ensure it passes a very minimal bar of security. So do we advocate an ASV scan on any e-commerce site that deals with payment options (not necessarily payment data)? Yes, we do. There are many ways a site may get compromised. A coding error may allow data to be siphoned off, or passwords may be compromised. A redirect may be vulnerable to man-in-the-middle attacks, or even be replaced by a total redirect to another page altogether, where payment data is inadvertently entered. While the e-commerce site may outsource the payment part to a processor, it still has the job of redirecting traffic to it.

Think of it as an usher (not the singer Usher, but the job): you enter a dark auditorium – let’s say the Royal Albert Hall, to watch Ed Sheeran – and the usher takes you through rows of lights to what is supposedly the seat you paid RM10,000 for.

When the lights come on, you find yourself in a nice cosy room, and in front of you is someone who seems to resemble Ed Sheeran, but slightly off. His hair isn’t ginger, he isn’t as chubby as that guy you see on TV, and he speaks with a slight Indian accent. And isn’t the Royal Albert Hall a HALL? Why are you in this room that resembles a glorified grandmother’s living room? You find out later that the usher had led you through the wrong hall into a neighbouring pub attached to the side of the building, and you are listening to the wonky music of Eddy Shiran.

The point is, the usher is pretty important in leading people to their seats. So as a redirect, even though you aren’t the main draw, you could end up leading your customers to Eddy Shiran instead.

But back to the main debate, whether it is required for SAQ A customers to go through ASV? No, it’s not.

However, there is always a but in everything. There are exceptions.

Some acquirers make it a point to state that they still require an ASV report even if merchants are going through SAQ A. That’s completely fine, because the guidelines from Visa/Mastercard are just that – guidelines. In the end, the acquirer or payment brands may make individual decisions for specific merchants, so it’s not written in stone. However, if there is no such requirement, we’re left to interpret the SAQ as it is, and it doesn’t state anything of the sort.

Some may point out that within SAQ A, under Part 3a, there is this statement:

ASV scans are being completed by the PCI SSC Approved Scanning Vendor (ASV Name)

This gets triumphantly pointed out as proof of an ASV requirement.

Take note, however, that under the same Part 3a, the instructions do state:

Signatory(s) confirms:
(Check all that apply)

In other words, you only check the items that apply – the realisation being that ASV is still not needed for SAQ A (or B).

Even under the title “PCI DSS Self-Assessment Completion Steps” of the SAQ:

Submit the SAQ and Attestation of Compliance (AOC), along with any other requested documentation—such as ASV scan reports—to your acquirer, payment brand, or other requester.

It does seem to be grasping at straws if this sentence were used to justify an ASV requirement for PCI-DSS. “Such as” generally denotes an example, which may or may not exist or be required.

In previous requirements for merchants from Visa, there used to be statements describing merchant levels, such as:

 * Merchant levels are based on Visa USA definitions
** The PCI DSS requires that all merchants perform external network scanning to achieve compliance. Acquirers may require submission of scan reports and/or questionnaires by level 4 merchants

And perhaps that is where the myth was perpetuated from. In recent times, Visa has updated its site to reflect a better understanding, stating:

“Conduct a quarterly network scan by an Approved Scan Vendor (“ASV”) (if applicable)”

In conclusion, SAQ A and SAQ B do not require ASV scans. If it’s required by the acquirer, then so be it. If it’s done out of best practice, so be it. But you don’t want an ASV/QSA telling you that you need to do something above and beyond your PCI requirements without them pointing to something in the standards that actually states so.

Finally – for SAQ B, which usually applies to POS terminals dialling up to the bank for authorisation, we’ve even seen some consultants requiring the merchant’s website to undergo ASV scans, which has nothing to do with their POS terminals. Why ASV the website? We don’t know. So the merchants go about scanning a website that hasn’t been updated since 2012 and wonder what sort of nonsensical PCI-DSS requirement makes them pay just to scan something built by an 18-year-old intern who left the company 10 years ago. You don’t need to. So don’t do it.

Anyway, that’s it for now. Let us know your thoughts or questions and we will get back to you ASAP. Now, back to listening to our Spotify for Eddy Shiran!
