
Technical Session: Clearing NTFS Dirty Bit

Every once in a while, we take a break from boring compliance articles and write about what’s more interesting – fixing broken stuff, or troubleshooting problems that have nothing to do with human beings. It’s far easier dealing with machines.

So, what happened was, we had a USB drive plugged into one of our servers and were doing some file transfers. The server wasn’t hooked up to our UPS, as this was a test system – ok, it was actually sitting under my desk, and every time I turned it on, everyone in the office thought a helicopter was outside the window. It’s old and loud and totally unsuitable to be located outside of a server room. Ah well.

In any case, halfway through the transfer, the power tripped. The server was ok upon restart but not the USB external drive.

It demonstrated a few symptoms:

a) When plugged in, the drive does whir up and Explorer recognises it. The problem was that it was listed as ‘Local Drive’ and nothing else – no other information. When clicked, it just freezes everything up. Right-clicking does eventually bring up the context menu, but when ‘Properties’ is selected, it hangs and never proceeds. So trying to scan the drive for errors from the GUI is a no-go.

b) Command-line wise – when accessing G:, again it just hangs. Chkdsk /f also hangs from the command line, so trying to scan from the command line = no-go.

c) Going into the Disk Management GUI, it takes a long time before it eventually pops up, and the good news was that Disk Management actually saw the drive. However, on right-clicking it and trying to reassign the drive letter (as suggested by some other articles as a way to recover), we get this annoying message:

The operation failed to complete because the Disk Management console view is not up-to-date.  Refresh the view by using the refresh task.  If the problem persists close the Disk Management console, then restart Disk Management or restart the computer

Microsoft being cryptic and mysterious

So like lemmings, we proceeded to refresh the console with F5, and it just hangs indefinitely; nothing happens until we unplug the drive. Then a string of errors comes out, like ‘Location of drive cannot be found’ etc. It seems the auto-opening of the USB drive was activated, but Windows just couldn’t read the drive. So Disk Management is a no-go.

d) We tried installing other software like Acronis or EaseUS, but none of these managed to read the hard drive; they simply hang until we unplug it.

e) Changing laptops/desktops/cables (all running Windows) – all with the same result. The drive was acknowledged, but Explorer and other programs couldn’t open anything on it. This is good news actually; it doesn’t seem there was a hardware issue, nor any dreaded clicking noise indicating the drive was a dead duck.

f) So it does point to a software-layer issue, which should be handled with a scan disk or check disk by Windows. The problem, however, is that the disk couldn’t be read, so it couldn’t be scanned. Booting into safe mode doesn’t help. Reinstalling the USB drivers doesn’t help. The drive simply refuses to go to work, like all of us on a Monday morning after being smashed with a hangover from a Sunday night out.

g) Finally, in Event Viewer under Windows Logs -> System, this particular classic comes up: “An error was detected on device \Device\Harddisk2\DR21 during a paging operation.” Chasing that lead, you can go to System Properties -> Advanced -> Performance -> Settings -> Advanced, and under Virtual Memory, uncheck the box to automatically manage the paging file size – if you can. But no, Windows doesn’t read the drive, so clicking on G: once more hangs the whole system.

At this point we had wasted an hour trying to sort this nonsense out. Nothing in Windows was able to indicate the issue. One suggestion was running fsutil from the command line. This can check for the dirty bit on NTFS, an annoying feature that basically renders the drive useless until the bit is ‘cleared’.
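For the record, on a volume Windows can actually read, the dirty bit check looks something like this (G: being our drive letter; the exact output wording varies between Windows versions):

fsutil dirty query G:

Volume - G: is Dirty

fsutil can also set the bit (fsutil dirty set G:), but notably it has no option to clear it – remember this for later.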

The problem with this was – yes, you got it – you couldn’t run any command on that drive, as it just hangs. Nothing, no program in Windows, was able to do anything for this drive.

The Dirty Bit

So some definitions first – the dirty bit is a modified bit. It refers to a flag that switches on when an update is made to a page by computer hardware. On an NTFS volume, it is just a single flag value sitting in some hidden place in the volume’s metadata on the portable hard drive.

From Microsoft’s definition:

A volume’s dirty bit indicates that the file system may be in an inconsistent state. The dirty bit can be set because:

  • The volume is online and it has outstanding changes.
  • Changes were made to the volume and the computer was shut down before the changes were committed to the disk.
  • Corruption was detected on the volume.

If the dirty bit is set when the computer restarts, chkdsk runs to verify the file system integrity and to attempt to fix any issues with the volume. (In our case, this didn’t happen, obviously).

So we assumed that this was a dirty bit problem (at this point, we were just shooting in the dark due to the lack of diagnostics, logs or events, and working with some black magic of guessing).

From some articles on the net, the options to remove the dirty bit are as follows:

  • You have 3 options to remove the dirty bit. The first option is to trust the Microsoft disk checking utility by completing a disk check operation. [This didn’t work, as Windows wasn’t able to read ANYTHING, and we could not run any Windows-based operations, commands or programs on the drive.]
  • The second method is to move the data off the volume and format the drive. After that, move the data back. [This is way too much work. Plus, Windows can’t even access it, so the only option would be a clone, such as through Clonezilla. That’s a lot of work. A last resort.]
  • The third method to remove the dirty bit is by using a hex editor with disk editing support. [We didn’t explore this, as it seemed a bit extreme, and probably the last time we handled a hex editor was when we had to hack computer games like Football Manager to give ourselves unlimited funds or a 99 in dribbling.]

There’s an easier way.

So this is where you just need to give up on Windows and figure out another way to check this disk. If you have a standby Linux box or Mac, that would help. If not, you can use this great little tool called SystemRescue, which, among other tools, has the delectable ddrescue and ntfs-3g – and the latter will be important.

Boot up to SystemRescue (making a boot disk is very much recommended – download the distribution from https://www.system-rescue.org/ and use Rufus or another program to make it bootable), and you basically now have a nice little Linux distro running from your USB. You should be able to see your external drive detected with the command lsusb or lsblk.

Using lsblk -o NAME,SIZE,TYPE,FSTYPE gives you a view of the device name, size, type, filesystem and a few more details. The below is an example (not ours):
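NAME        SIZE TYPE FSTYPE
sda       238.5G disk
├─sda1      512M part vfat
└─sda2      238G part ext4
sdb         1.8T disk
└─sdb1      1.8T part ntfs

(In this illustrative listing, the 1.8T NTFS partition sdb1 would be the external drive.)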

Just identify which is your USB drive.

Use the nifty ntfsfix with the -d flag, which clears the dirty bit (assuming /dev/sda1 is the USB drive you want to fix):

ntfsfix -d /dev/sda1

This basically clears the dirty bit, which Windows, for whatever reason, finds so difficult to do and makes us jump through hoops for. In fact, fsutil from Windows only tells you that you have a dirty bit but doesn’t clear it. That’s like paying a doctor to tell you that you have cancer without providing you any treatment for it. Come on, Microsoft.

And right after clearing the dirty bit, the external drive was once again accessible. There were still some errors on the drive, but we just ran the check-for-errors option via the GUI (since we were now able to access the properties of the drive again by right-clicking for the context menu) and fixed up the inaccessible files.
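The command line behaves again at this point too; assuming G: is still the drive letter, a standard fix-up scan is simply:

chkdsk G: /f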

So now you know. The next time you have an outage during a file transfer, it could just be the dirty bit. The problem is the diagnosis (again, Windows could just state in the event log that the dirty bit is set, instead of leading us on this paging-file treasure hunt). And of course, if Windows cannot access the drive, SystemRescue is a great tool to solve the issue.

And finally, according to some, an even easier way is to just plug the drive into a Mac; apparently that resets the dirty bit for some reason. I never tried this, so perhaps others can give it a try first before going the SystemRescue way.

Contact us at avantedge@pkfmalaysia.com for more information on what services we can offer you.

Have a good week dealing with human beings!

PCI-DSS Card Data Discovery Scans


For PCI-DSS, there are some fairly obvious requirements that are set in stone in order for you to pass. ASV scans – quarterly. Internal vulnerability scans – quarterly. Penetration testing – annually. Reviews of firewall configs and policies – half-yearly. Security awareness training – annually. These are biblical principles of the gospel of PCI.

And then again, there are other areas where interpretation is a little more touch and go; up in the air; subject to the wind; the sort of thing where there are as many disagreements and controversies as whether Han shot first or Greedo was just an absolute tool who missed from two feet.

And while most arguments stem from our clients and us as we try to explain some concepts to them, once in a while a subject comes along where we find ourselves up against the explanation of QSAs. Now, not all QSAs are created equal. When I say QSAs here, I refer to the individual QSA, not the QSA organisation – as in the human beings who are QSAs for the QSA-C (QSA Company). We’ve worked with some who are technically well versed; some who are strong in documentation and theory; some who can communicate well but are not so technical, and some who are the opposite. But every once in a while, we come across QSAs who think they know everything (they don’t), and they stubbornly stick to a point of argument even when we have exhausted all avenues to show them their point is flawed. The more we argue, the more adamantly they hold their stance, even if their justifications seem to be plucked directly out of their …. posterior appendages.

One of the items you will often see coming up in PCI-DSS is this thing called the Card Data Discovery (CDD) scan. What is this? From the PCI-DSS standard, pg 10:

To confirm the accuracy of the defined CDE, perform the following:
The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.

PCI DSS v3.2.1

The CDD process is basically a process, usually using a tool, to identify whether card information is stored in the clear within the organisation. These are usually regular-expression-based applications, which can categorise the type of card based on the BIN, or initial numbers. These tools are often quite useful for finding other forms of information as well, like personal information etc., as long as you can define filters and regular expressions for them. Some commercial tools out there are from Ground Labs, ManageEngine, ControlCase etc. We also have free CDD tools like Pan Buster, Credit Card Scanner etc. The free tools are a little more difficult to use in our opinion, and there seems to be less support for database scans and more false positives overall, so you may spend a longer time cleaning up the results.
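To give a flavour of how these tools work under the hood, here is a deliberately crude sketch of the idea using grep on a Linux box (the path is hypothetical; the patterns are the well-known leading-digit formats for Visa and Mastercard, and real tools layer Luhn checks and proper BIN tables on top to cut down false positives):

grep -rEno '\b(4[0-9]{12}([0-9]{3})?|5[1-5][0-9]{14})\b' /path/to/fileshare

Expect a sweep like this to flag plenty of things that are not PANs – invoice numbers, timestamps, serial numbers – which is exactly the clean-up problem described above.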

Whether for commercial or free tools, what PCI has been fairly silent about is whether these scans are mandated in the standard. Unlike ASV scans or penetration testing, the standard doesn’t specifically state the need to run these tools for a ‘normal’ PCI-DSS assessment. When I say ‘normal’, I mean one without the additional requirements under Appendix A3: Designated Entities Supplemental Validation (DESV). DESV applies to specially designated entities that have large volumes of card data or have suffered significant breaches. The designation comes from payment brands or acquirers; it’s not something a QSA or even the audited entity decides on.

So looking into the card data scan requirements, we only have the pg 10 scoping requirement and, in the DESV portion, A3.2.5 – “Implement a data-discovery methodology to confirm PCI DSS scope and to locate all sources and locations of clear-text PAN at least quarterly and upon significant changes to the cardholder environment or processes”.

In most cases, CDD scans are done on an annual basis for normal PCI-DSS (non-DESV), or at times half-yearly as required by the QSA.

So along came another QSA who stoutly declared that all companies are required to do a quarterly CDD scan, regardless of size, for all systems in scope. When politely reminded that he seemed to be mixing up the DESV quarterly scan requirement, he said no – he was highlighting requirement 3.1: “A quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention.”

When pressed to explain why this is a CDD scan, he stated it’s obvious: everyone needs to run the CDD scanner every quarter to address this requirement.

OK. We disagree. Completely. This is one of those instances where a QSA superimposes requirements on each other just because they sound the same.

Let’s break it down by looking at the PURPOSE of the CDD scan. And the best way is to go back to the standard and pick up the part where it prescribes a ‘data-discovery’ methodology, in DESV A3.2.5:

Implement a data-discovery methodology to confirm PCI DSS scope and to locate all sources and locations of clear-text PAN

A3.2.5 PCI-DSS V3.2.1

It’s clear that the CDD’s purpose is to locate where CLEAR-TEXT PAN is found in the CDE (and non-CDE) environment. Why is this important? Because in the CDE, there should never be any clear-text PAN found in storage. All PANs must be protected by one of the Four Horsemen of the Apocalypse: Encryption, Truncation, Hashing or Tokenization. A failed CDD means PANs were found in clear text within the CDE.

So with that in mind, let’s go back to requirement 3.1. This has nothing to do with identifying clear-text PAN. It talks about identifying AND deleting EXPIRED card data (based on retention policies). That’s it. If the PAN is encrypted or tokenized but it’s stored beyond its retention period, requirement 3.1 tells you to delete it. It talks about the retention period and storage beyond it. Which part of that talks about doing a card data scan to identify clear-text card information?

In the description, it further states: A quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention requirements.

So QSA, please RTFM. Requirement 3.1 isn’t talking about the need to run a CDD quarterly to identify clear-text PAN storage; it is about running something – scripted or manual – to identify PAN storage that has passed its retention period. It is to discover duration of storage, not security of storage. Running a shell script to get the timestamps of files, or checking the timestamps on database entries, may be good enough to ensure that all card data is removed or anonymized after a period of, say, 7 years – see the sketch below.
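As a minimal sketch of what we mean – assuming a hypothetical file store and a 7-year retention period (roughly 2555 days) – something like this lists files old enough to fall outside retention, ready for review and secure deletion:

find /data/card-archive -type f -mtime +2555

(File modification time is a crude proxy; a real process would key off the documented retention policy and the actual record dates, especially for database entries.)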

If you need assistance in PCI-DSS or any other compliance standards like the ISMS or ITSM, drop us a note at pcidss@pkfmalaysia.com. We can help clarify some of these annoying requirements that even QSAs (as experienced as they are) are plucking out of their rear appendages.

PCI-DSS and Card Storage


We had an interesting discussion a few weeks back about storage in PCI-DSS. We disagreed with an acquirer’s position on how PCI-DSS views storage, and that opened a whole can of … interesting debate.

The problem the acquirer had with our position was simple. We have a client currently doing a data migration import from another service provider into their document management system. Amongst the terabytes of data were possible scanned copies of credit card information – either in forms, or actual card photocopies themselves. Now, we are talking about terabytes.

Our position was fairly straightforward. Do you need this card data? we asked. No, said our client. We don’t need the card data, as we do recon and back-office operations on other forms of identification. Can this information be removed or redacted? Bemused, they said: possibly, but the problem is that there are going to be millions of records to deal with.

Well, is there a way we can sanitize the data before it enters into your environment?

Yes, possibly, we need to ask the acquirer to ask their current provider to do it for us.

The provider you are taking business away from?

Yes.

Good luck…

And sure enough, the acquirer responded and asked us, “Shouldn’t PCI-DSS allow the storage of these card information, and how your client is able to deal with it? Why do you insist on us redacting and removing the card information? What then is the purpose of PCI-DSS??”

Now, on the surface, that argument does make sense. After all, PCI-DSS applies to entities who store, transmit or process credit card information, right? Why then wouldn’t we want our client to store credit card information if they are going through PCI-DSS?

Unfortunately, this is a case of getting the solution (PCI-DSS) mixed up with the problem (storing card data). In a more current analogy: just because I got vaccinated doesn’t mean I would purposely go out and try to get infected so that the vaccine has something to do. The purpose of PCI isn’t for you to store credit card data. It’s for you to manage the storage of credit card data IF you store it. Storing credit card data isn’t a PCI-DSS objective; it’s an issue that PCI-DSS tries to solve.

So back to this little kerfuffle: if they pass us terabytes of information with card data in it, our client will need to figure out a way to protect that data – likely encryption of any files in which card data is present, which drags in key management etc. If it can be redacted and removed before it enters our client’s environment, then we avoid all of that. We are basically following the concept of PCI-DSS:

Requirement 3 addresses protection of stored cardholder data. Merchants who do not store any cardholder data automatically provide stronger protection by having eliminated a key target for data thieves. Remember if you don’t need it, don’t store it!

PCI-DSS Prioritized approach

If we don’t need it, don’t store it. In this case, we don’t need it, so we are trying to escape storing it. However, if this cannot be done (and it likely won’t be), then we just need to put controls in place. We’re trying to get our client to do less, and we are also trying to remove card footprints in other areas, thus reducing the risks to the card brands – and likely saving the world from impending disaster and destruction.

However, we do have another issue.

Because there is potentially CVV storage (photocopies of cards, front and back, scanned into softcopies), we have a bit of a problem. CVV cannot be stored in any format or on any media post-authorisation. So if this is dumped into our client’s environment, it’s imperative someone removes this information. To us, it’s a lot easier to remove it at source; but unfortunately that means effort has to be spent on it, which no one is willing to do.

How the CVV got stored in the first place is a question we don’t have an answer to. However, we do know that if CVV is present, we cannot just encrypt it and be done with it. We will need to remove this information record by record. There are a few solutions out there that can do auto-redaction across a massive number of files, provided the files are in a reasonably standard format. That could be a solution here, but it’s beyond what we are discussing in this article.

The point is, going through PCI-DSS doesn’t automatically mean we MUST store card data. It simply means that IF we store card data, we apply PCI-DSS controls to that storage.

Let us know if you need more information about PCI-DSS or any IT standard compliance like ISO27001 or CSA/SOC, we are ready to assist, just contact us here. Stay safe everyone!

Hardening Checklist

Picture from https://guardiansafeandvault.com/

Requirement 2.2 has often been deliberated by customers undergoing PCI-DSS. To recap, the requirement states:

Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.
Sources of industry-accepted system hardening standards may include, but are not limited to:
• Center for Internet Security (CIS)
• International Organization for Standardization (ISO)
• SysAdmin Audit Network Security (SANS) Institute
• National Institute of Standards Technology (NIST).

Requirement 2.2

So often, customers go ahead, download the CIS hardening documents at https://www.cisecurity.org/cis-benchmarks/, copy them lock, stock and barrel into their policies, and send them in. Now, this may all be well and good, but you now have a roughly 1,200-page tome with guidelines like 14-character alphanumeric passwords, as opposed to what PCI requires (7, alphanumeric). This is where our customers get stuck; some even send in a 1,000-page hardening document for us to review, only for us to find that they have not implemented even 1% of what is noted in it.

After that, the hardening document gets re-jigged until it meets a reasonable, practical standard that is implementable – usually in the form of a checklist. For a very quick hardening exercise, this is the initial checklist we often end up using, just to get our clients up to baseline speed, whether it’s PCI or not:

Hardening Item | Servers | Network Devices | Databases
Assign an individual server for each critical role (App, Web, DB, AD, AV, Patching etc.) | Y | NA | Y
Disable/rename/remove default user accounts | Y | Y | Y
Assign role-based access to users | Y | Y | Y
Disable insecure or unnecessary services | Y | Y | NA
Use secure versions of remote access services (SSH, RDP over SSL) | Y | Y | Y
Install well-known anti-virus with the latest signatures | Y | NA | NA
Install the latest OS / firmware / software security patches | Y | Y | Y
Disable inactive users automatically after 90 days | Y | Y | Y
Ensure the following password policies: (1) use complex passwords of 7 characters or more; (2) remember minimum last 4 passwords; (3) require password change within 90 days; (4) require password change upon password reset and first logon | Y | Y | Y
Ensure the following account policies: (1) account lockout threshold – max 6 attempts; (2) account lockout duration – 30 mins or until admin unlocks; (3) idle session timeout – 15 mins or less | Y | Y | Y
Ensure passwords are stored securely with encryption | Y | Y | Y
Enable audit logging to capture at minimum the following events: (1) successful login; (2) failed login; (3) administrative actions; (4) user creation; (5) user deletion; (6) user updates; (7) escalation of privileges; (8) access to audit trails; (9) initialization or stopping of auditing | Y | Y | Y
Configure NTP and time synchronization | Y | Y | Y
Implement file integrity monitoring | Y | Y | Y
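A checklist is only as good as its verification, so here are a few illustrative spot checks for some of the items above on a Linux server – a sketch only, as commands and file locations vary by distro:

# Password maximum age (should be 90 days or less)
grep -E '^PASS_MAX_DAYS' /etc/login.defs

# Running services - review the list for anything insecure or unnecessary
systemctl list-units --type=service --state=running

# Confirm the system clock is being synchronized via NTP
timedatectl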

Now, obviously this doesn’t cover all the requirements of PCI (testing, scans, retention etc.), but it should give us a fair idea of how ready our systems are for an audit or assessment.

If you have any queries on PCI or ISMS or any other security related standard, drop us a message at avantedge@pkfmalaysia.com.

Do or Do Not – ASV for SAQ A


I would have thought this debate died out with the extinction of the dinosaurs, but apparently, we are still at this subject in 2021. Still. Going. On.

So in the past weeks, there was some debate between us and some consultants as to whether SAQ A requires an ASV scan or not. Our position was no. Their position was yes. So let’s look at it.

Now, keep in mind, we aren’t talking about best practice. We are talking about PCI-DSS v3.2.1 and what it says about ASV scans being mandatory for SAQ A. That’s it. That’s the statement. Now, debate.

There is actually no debate. This isn’t some sort of grey area, or a hard-to-explain, obscure rule written in Sanskrit on the Sankara stones. This is just: look at SAQ A, search for ASV, don’t find it. Thank you.

The ASV requirement is present in item 11.2.2 of PCI-DSS.

SAQ A does not have it.

So why do consultants still insist people do ASV scans for SAQ A?

There could be a lot of reasons, ranging from ‘guideline’ to ‘best practice’ and so on. No doubt, having a scan (which isn’t expensive in any case) would be the least effort of security done by the merchant if they are hosting an e-commerce website that redirects customers to their payment processor once “Click here to pay” is clicked. I mean, even if it had nothing to do with PCI, it would seem like common sense to have at least a scan done on your site to ensure it passes the most minimal requirement of security. So do we advocate an ASV scan on any e-commerce site that deals with payment options (not necessarily payment data)? Yes, we do. There are many ways a site may get compromised. A coding error may allow data to be siphoned off, or passwords may be compromised. A redirect may be vulnerable to man-in-the-middle attacks, or even replaced with a total redirect to another page altogether, where payment data is inadvertently entered. While the e-commerce site may be outsourcing the payment part to a processor, it still has the job of redirecting traffic to it.

Think of it as an usher (not the singer, but the job), where you enter a dark auditorium – let’s say the Royal Albert Hall, to watch Ed Sheeran – and the usher takes you along this row of lights to what is supposedly the seat you paid RM10,000 for.

When the lights come on, you find yourself in a nice cosy room, and in front of you is someone who seems to resemble Ed Sheeran, but slightly off. His hair isn’t ginger, he isn’t as chubby as that guy you see on TV, and he speaks with a slight Indian accent. And isn’t the Royal Albert Hall a HALL? Why are you in this room that resembles a glorified grandmother’s living room? You find out later that the usher led you through the wrong hall into a neighbouring pub attached to the side of the building, and you are listening to the wonky music of Eddy Shiran.

The point is, the usher is pretty important in leading people to their seats. As a redirect, even though you aren’t the main draw, you could end up leading your customers to Eddy Shiran instead.

But back to the main debate, whether it is required for SAQ A customers to go through ASV? No, it’s not.

However, there is always a but in everything. There are exceptions.

Some acquirers make it a point to state that they still require an ASV report even if merchants are going through SAQ A. That’s completely fine, because the guidelines from Visa/Mastercard are just that – guidelines. At the end, the acquirer or payment brands may make individual decisions per merchant, so it’s not written in stone. However, if there is no such requirement, we’re left to interpret the SAQ as it is, and it doesn’t state anything of the sort.

Some may point out that within SAQ A, under Part 3a, there is this statement:

ASV scans are being completed by the PCI SSC Approved Scanning Vendor (ASV Name)

This gets triumphantly pointed out as proof of the ASV requirement.

Take note however, that above, under Part 3a, the instructions do state:

Signatory(s) confirms:
(Check all that apply)

The realisation that ASV is still not needed for SAQ A (or B).

Even under the title “PCI DSS Self-Assessment Completion Steps” of the SAQ:

Submit the SAQ and Attestation of Compliance (AOC), along with any other requested documentation—such as ASV scan reports—to your acquirer, payment brand, or other requester.

It does seem like grasping at straws if this sentence is used to justify the requirement. “Such as” generally denotes an example, which may or may not exist or be required.

In previous requirements for merchants from Visa, there used to be statements describing merchant levels, such as:

 * Merchant levels are based on Visa USA definitions
** The PCI DSS requires that all merchants perform external network scanning to achieve compliance. Acquirers may require submission of scan reports and/or questionnaires by level 4 merchants

And perhaps that is where the myth was perpetuated from. In recent times, Visa has updated its site (https://www.visa.com.my/support/small-business/security-compliance.html) to reflect a better understanding, stating:

“Conduct a quarterly network scan by an Approved Scan Vendor (“ASV”) (if applicable)”

In conclusion, SAQ A and B do not require ASV scans. If it’s required by the acquirer, then so be it. If it’s to be done out of best practice, so be it. But you don’t want an ASV/QSA telling you that you need to do something above and beyond your PCI requirement without them pointing to something in the standard that says so.

Finally – for SAQ B, which usually applies to POS terminals dialing up to the bank for authorisation: we’ve even seen some consultants requiring the merchant’s website to undergo ASV scans, when it has nothing to do with their POS terminals. Why ASV the website? We don’t know. So the merchants go about scanning a website that hasn’t been updated since 2012 and wonder what sort of nonsensical requirement this is from PCI-DSS, paying just to scan something built by an 18-year-old intern who left the company 10 years ago. You don’t need to. So don’t do it.

Anyway, that’s it for now. Let us know your thoughts or questions – send them to us at pcidss@pkfmalaysia.com and we will get back to you ASAP. Now, back to listening to our Spotify for Eddy Shiran!
