
Technical Session: Clearing NTFS Dirty Bit

Every once in a while, we take a break from boring compliance articles and write about what’s more interesting – fixing broken stuff or troubleshooting problems that have nothing to do with human beings. It’s far easier dealing with machines.

So, what happened was, we had a USB drive plugged into one of our servers and were doing some file transfers. The server wasn’t hooked up to our UPS, as this was a test system – ok, it was actually sitting under my desk, and every time I turned it on, everyone in the office thought a helicopter was outside the window. It’s old and loud and totally unsuitable to be located outside of a server room. Ah well.

In any case, halfway through the transfer, the power tripped. The server was ok upon restart but not the USB external drive.

It demonstrated a few symptoms:

a) When plugged in, the drive does whir up and Explorer recognises it. The problem was that it was listed as ‘Local Drive’ and nothing else, no other information. Clicking on it just freezes everything up. Right-clicking does eventually bring up the context menu, but when ‘Properties’ is selected, it hangs and never proceeds. So trying to scan the drive for errors from the GUI is a no-go.

b) Command-line-wise – when accessing G:, again it just hangs. Chkdsk /f also hangs, so trying to scan from the command line is a no-go.

c) Going into the Disk Management GUI, it takes a long time before it eventually pops up, and the good news was that Disk Management actually saw the drive. However, on right-clicking it and trying to reassign the drive letter (as suggested by some other articles as a way to recover), we got this annoying message:

The operation failed to complete because the Disk Management console view is not up-to-date.  Refresh the view by using the refresh task.  If the problem persists close the Disk Management console, then restart Disk Management or restart the computer

Microsoft being cryptic and mysterious

So like lemmings, we proceeded to refresh the console with F5, and it just hung indefinitely; nothing happened until we unplugged the drive. Then a string of errors came out, like ‘Location of drive cannot be found’ etc. It seems the auto-opening of the USB drive was activated but Windows just couldn’t read the drive. So Disk Management is a no-go.

d) We tried installing other software like Acronis or EaseUS, but none of these managed to read the hard drive; they simply hung until we unplugged it.

e) Changing laptops/desktops/cables (all running Windows) – all with the same result. The drive was acknowledged, but Explorer and other programs couldn’t open anything on it. This is good news, actually; it didn’t seem to be a hardware issue, and there was no dreaded clicking noise indicating the drive was a dead duck.

f) So it does point to a software-layer issue, which should be handled with a scan disk or check disk by Windows. The problem, however, is that the disk couldn’t be read, so it couldn’t be scanned. Booting into safe mode didn’t help. Reinstalling the USB drivers didn’t help. The drive simply refused to go to work, like all of us on a Monday morning after being smashed with a hangover from a Sunday night out.

g) Finally, in Event Viewer under Windows Logs -> System, this particular classic came up: “An error was detected on device \Device\Harddisk2\DR21 during a paging operation.” One suggested fix is to go to System Properties -> Advanced -> Performance -> Settings -> Advanced and, under Virtual Memory, uncheck the box to automatically manage the paging file size. But no, Windows doesn’t read the drive, so clicking on G: once more hangs the whole system.
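
As an aside, you can pull those paging-operation events from the command line instead of clicking through Event Viewer. A sketch (we’re assuming Event ID 51 here, the usual ID for that “paging operation” message – verify against your own log first):

wevtutil qe System /q:"*[System[(EventID=51)]]" /c:5 /rd:true /f:text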

At this point we had wasted an hour trying to sort this nonsense out. Nothing in Windows was able to indicate the issue. One suggestion was to run fsutil from the command line. This can check for the dirty bit on NTFS, an annoying flag that basically renders the drive useless until the bit is ‘cleared’.
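
For reference, the check itself is a one-liner (assuming G: is the drive letter of the problem disk, and that Windows can actually read the volume):

fsutil dirty query G:

It simply reports whether the volume’s dirty bit is set.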

The problem with this was – yes, you got it – you couldn’t run any command on that drive as it just hangs. Nothing, no program in Windows, was able to do anything for this drive.

The Dirty Bit

So some definitions first – ‘dirty bit’ is a term borrowed from memory management, where a dirty (modified) bit marks a page that has been written to by the hardware. On an NTFS volume, the dirty bit is just a flag value sitting in the volume’s metadata, hidden somewhere on the portable hard drive.

From Microsoft’s definition:

A volume’s dirty bit indicates that the file system may be in an inconsistent state. The dirty bit can be set because:

  • The volume is online and it has outstanding changes.
  • Changes were made to the volume and the computer was shut down before the changes were committed to the disk.
  • Corruption was detected on the volume.

If the dirty bit is set when the computer restarts, chkdsk runs to verify the file system integrity and to attempt to fix any issues with the volume. (In our case, this didn’t happen, obviously).

So we assumed this was a dirty bit problem (at this point, we were just shooting in the dark due to the lack of diagnostics, logs or events, and working with the black magic of guessing).

From some articles on the net, the options to remove the dirty bit are as follows:

  • There are 3 options to remove the dirty bit. The first is to trust the Microsoft disk-checking utility by completing a disk check operation. [This didn’t work, as Windows wasn’t able to read ANYTHING, and we could not run any Windows-based operations, commands or programs on the drive.]
  • The second method is to move the data off the volume and format the drive. After that, move the data back. [This is way too much work. Plus, Windows can’t even access it, so the only option would be a clone, such as through Clonezilla. That’s a lot of work. A last resort.]
  • The third method to remove the dirty bit is to use a hex editor with disk editing support. [We didn’t explore this, as it seemed a bit extreme, and the last time we handled a hex editor was probably when we had to hack computer games like Football Manager to give ourselves unlimited funds or a 99 in dribbling.]

There’s an easier way.

So this is where you just need to give up on Windows and figure out another way to check this disk. If you have a standby Linux box or Mac, that would help. But if not, you could use this great little tool called SystemRescue which, among other tools, has the delectable ddrescue and ntfs-3g, which will be important.

Boot up to SystemRescue (making a bootable USB stick is very much recommended – just download the ISO from https://www.system-rescue.org/ and use Rufus or another program to make it bootable) and you basically now have a nice little Linux distro running from your USB stick. You should also be able to see your external drive with the command lsusb or lsblk.
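
If the machine you’re preparing the stick on is already running Linux, you don’t even need Rufus – dd does the job (a sketch: the ISO filename is whatever you downloaded, and /dev/sdX must be replaced with your actual USB stick; check with lsblk first, because dd will happily overwrite the wrong disk):

dd if=systemrescue-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync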

Using lsblk -o with a column list gives you a view of the device, its size, type and a few more details.
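
Something like this (illustrative output only – your device names and sizes will differ):

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
NAME     SIZE TYPE FSTYPE MOUNTPOINT
sda    931.5G disk
└─sda1 931.5G part ntfs
sdb     14.9G disk
└─sdb1  14.9G part vfat   /run/archiso/bootmnt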

Just identify which is your USB drive.

Then use the nifty ntfsfix (assuming /dev/sda1 is the partition on the USB drive you want to fix):

ntfsfix -d /dev/sda1
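
Once ntfsfix reports the volume has been fixed, it’s worth plugging the drive back into Windows and verifying that the flag really is gone (the exact output wording varies between Windows versions):

fsutil dirty query G:

It should now report that the volume is not dirty.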

This basically clears the dirty bit, which Windows, for whatever reason, finds so difficult to do and makes us jump through hoops for. In fact, fsutil on Windows only tells you that you have a dirty bit but doesn’t clear it (fsutil dirty supports query and set, but there is no clear). That’s like paying a doctor to tell you that you have cancer without providing any treatment for it. Come on, Microsoft.

So right after clearing the dirty bit, the external drive was once again accessible. There were still some errors on the drive, but we just ran the check-for-errors option via the GUI (since we were now able to access the drive’s properties again through the right-click context menu) and fixed up the inaccessible files.

So now you know. The next time you have an outage during a file transfer, it could just be the dirty bit. The problem is the diagnosis (again, Windows could just put into the event log that the dirty bit is set, instead of leading us on this paging-file-nonsense treasure hunt). And of course, if Windows cannot access the drive, SystemRescue is a great tool to solve the issue.

And finally, according to some, an even easier way is to just plug this drive into a Mac; apparently that resets the dirty bit for some reason. I never tried this, so perhaps others can give it a try first before going the SystemRescue way.

Contact us at avantedge@pkfmalaysia.com for more information on what services we can offer you.

Have a good week dealing with human beings!

Let’s Talk v4: Overview

So, on March 31st 2022, PCI-DSS v4.0 dropped on us.

The original timeline for v4.0 passed a long time back. Back in 2019, there had been talk that v4 would drop in late 2020. Then, due to the global pandemic of unknown origins, it was moved to 2021, and now, finally, they decided to release it in 2022. We all know the PCI SSC loves deadlines. They love the whooshing noise deadlines make as they go by.

First of all, let’s start with another quote from the wisest sage of all generations:

Don’t Panic.

Douglas Adams

Because if we take a look at the transition timeline the SSC has published, there’s a pretty long runway to adopt v4.0.

That timeline basically means this:

a) Entities undergoing PCI right now, whether first-timers or renewals: if you are going to be certified in 2022, your current cycle and your next renewal in 2023 can stay with v3.2.1.

b) Entities thinking of going through PCI-DSS that will likely be certified in 2023: you can stay with v3.2.1 for this cycle, and then for the next renewal in 2024, you will need to move to v4.0.

Long story short, entities have about 1.5 years on PCI v3.2.1 before going v4.0 in their 2024 cycle. That doesn’t mean you do nothing from now till then, of course. Depending on your processes, there may be some changes. However, it’s not too crazy, and it’s more incremental than anything else, including areas we are already practising but which were not noted in v3.2.1 (an example being anti-phishing controls, which have been a staple for most of our FSI clients).

So we’re going to do a few breakdowns of areas we think are fairly relevant to note in v4.0: a deeper dive into requirements that were added or changed and, more importantly, how we think a company can move forward in preparation.

Of course, that being said, v4.0 is only 3 weeks old. A toddler compared to its predecessors. Let’s put it into perspective. PCI v1 (and its sub-versions 1.1 and 1.2) lasted almost 6 years, from 2004 to 2010.

PCI v2 lasted half that time, from 2010 to 2013.

PCI v3 and its sub-versions (3.1, 3.2, 3.2.1) lasted from 2013 to 2022. That’s 9 years. So in retrospect, we are literally at the 0.6% mark of v4’s timeline if it were to follow v3’s lifespan. Meaning there could be a lot of changes yet to come, or clarifications, or explanations, etc.

Over the life of v3, we’ve seen many supplementary documents (for scoping, logging, penetration testing, risk management etc.) churned out to clarify v3 items. While not part of the standard itself, these supplementary documents and hundreds of FAQs are regularly quoted or referenced by us to support our arguments for and against some of the decisions that QSAs put to our clients. They are extremely useful, especially when QSAs put forward some pretty daft interpretations of the requirements (see our previous post on CDD).

There have been some extremely subtle changes aside from the major ones, and we want to note these items from page 4 of v4:

PCI DSS is intended for all entities that store, process, or transmit cardholder data (CHD) and/or sensitive authentication data (SAD) or could impact the security of the cardholder data environment (CDE).

Some PCI DSS requirements may also apply to entities with environments that do not store, process, or transmit account data – for example, entities that outsource payment operations or management of their CDE.

In accordance with those organizations that manage compliance programs (such as payment brands and acquirers); entities should contact the organizations of interest for more details.

pci v4.0 warning to those entities that scream i am out of scope because i don’t store, transmit or process stuff!

There’s a lot of things we dislike about v4.0. But there’s a lot of things we LIKE about it as well. So it’s like that family trip you take with your entire extended family. There’s that cousin you completely dislike, the one you wish you didn’t need to make small talk with – you know, the one who constantly name-drops and questions whether you have achieved as much as he has in life. And tries to coach you to be a better person and live a better life, and have more than your currently unfulfilling, loveless marriage and a dead-end, purposeless job as a PCI-DSS consultant. Yeah, you know it. But at the same time, you like these trips because it’s time with your family as well, time to goof off with your kids, walk on the beach with your spouse and basically fantasize about throwing your cousin into a pit full of vipers. v4.0 is like that trip.

The main takeaways from the above quote would be:

a) No more free passes to those entities who claim they are out of scope simply because they don’t store, process or transmit card data. If you have impact on the security of the CDE, then you are in.

b) This is the first time we are seeing the term “organizations of interest”. While this is nothing much, it’s like watching a movie based on a comic book and spotting an obscure easter egg referencing that comic, and getting goosebumps because, you know, you’re a nerd. And you like the kind of subtle reference that no one else picks up on. Basically, OIs are the upstream customers, banks, FSIs – organisations that are requesting your PCI-DSS compliance. It’s easier now to make this reference as it is an official term in v4.0. Yay.

c) Organizations that ‘impact security’ are in. Previously, the problem was that we had outsourced SOC/NOC providers, outsourced providers that do not handle card data (e.g. managed providers for firewalls), and even cloud services handling MFA or authentication generation, all claiming that since there is no card data, they don’t need PCI. That’s fair enough, but we still need to assess those services as part of an on-demand assessment, to ensure each one is properly secured or at least has basic security functionality over it. While the majority of providers are fine with this, we have had antagonistic providers shouting to high heaven that we are idiots because, by the very fact that they do not store, process or transmit card data, they should be completely disregarded from the PCI assessment. Um. No. You’re not, and v4.0 is smacking you in the face for this.

Another item in v4.0 is the sheer amount of information provided right at the beginning of the standard. It covers scoping methods, segmentation, encryption and applicability to third-party providers, the use of third-party providers and how to be compliant with them, BAU best practices, sampling methods, definitions of timeframes, definitions of words like ‘significant changes’, approaches to implementing PCI-DSS, testing methods, the assessment process, RoC writing and, if you look carefully, a recipe for Jamie Oliver’s Yorkshire pudding.

In the previous v3.2.1, the requirements started on page 20. In v4.0 the requirements start on page 43. The total number of pages in v4.0 is 360, up 159% from the previous 139 pages. So, simply put, you are going from reading Enid Blyton’s Five Go to Finniston Farm to Leo Tolstoy’s War and Peace.

The requirements themselves remain at 12, so in essence, despite all the fluff at the beginning, the actual requirements are still intact. There’s quite a fair bit to look at, and here we provide a brief overview:

a) Customized implementation

So, we have this outcomes-based implementation in PCI v4. It is based on the purpose, or ‘spirit’, of the requirements and may not necessarily use the standard-defined controls to achieve it. Take, for instance, the requirement to do quarterly internal scans – the objective is to identify vulnerabilities at regular intervals and to ensure the organisation addresses them. Instead of running on-demand scans, the organisation may opt to sign up for the continuous analysis and automated scanning available in clouds such as Google Cloud or AliCloud. So while the controls are different, they address the same objective.

It is noted that custom implementation should only be done by organisations with a mature risk management practice in place, as this requires more work for the organisation and the QSA to define tests of these controls.

On how this is implemented or samples of it, I am sure we will be seeing more examples as the standard starts maturing. Remember, v4.0 is still a baby, not even out of the maternity ward yet.

b) Multi-factor and Passwords

Multi-factor is now needed for any access into the CDE. So we call it Multi-Multi-Factor: MFA is required for remote users to get into the network, and getting from the non-CDE network into the CDE requires additional MFA. It would seem fairly straightforward, but companies now have to consider implementing a jump server in the CDE to act as an access aggregator for the multiple systems in the CDE – or they could just deploy another MFA solution on the network.

Passwords are to be lengthened to 12 alphanumeric characters, up from 7. There’s still a runway on this, as it only becomes mandatory on 31 March 2025. A lot of things can happen between now and then, and a lot of technology can change. We could be facing a global climate crisis and the end of the world, or World War 3 nuclear warfare, or an asteroid could hit Earth, or the Rapture happens – you know, future stuff. But in case none of those things come to pass, then yeah, make sure you move your passwords to 12 alphanumeric characters.

c) Group Accounts

8.2.2 gives a needed reprieve in the kerfuffle over group accounts. In v3.2.1 these were disallowed, but in v4.0 they are allowed, based on the rule of common sense. Some systems do have group accounts for a purpose, or are unable to provide certain functionality to individual accounts. So while more justification etc. is now needed, it’s no longer a hard no for group accounts.

d) Targeted risk analysis

Targeted risk analysis can now be done to determine the frequency of certain actions – such as password changes, POI device inspections, non-CDE log reviews, low-severity vulnerability remediation, FIM reviews, training frequency, etc. Now, while we want to believe the PCI SSC’s idea here is for organizations to change control frequencies to be MORE stringent (for example, having passwords changed every 30 days instead of 90), the reality is that most of us would stretch this requirement to make life a lot easier for ourselves. I mean, what’s the point of having flexibility if you can’t make it as flexible (i.e. as little work as possible), right?

e) Card data discovery (CDD)

Card data discovery scans – CDD. There is finally some clarification that card data scans are to be done every 12 months, confirming what we already covered in our previous post on educating the QSA on how to interpret that particular CDD requirement. So yeah, kudos, PCI SSC, for supporting us!

f) Misc – Anti-Phishing and Full Disk Encryption

As mentioned previously, we now have references to Anti-Phishing requirements, which should have been there long before, to be honest.

We also have clarifications that will have a significant impact on some of our clients – on the use (or abuse) of the full disk encryption requirement. v4.0 has basically blocked that way out for some of our customers utilising BitLocker with TPM to get past Requirement 3. This is, to us, a fairly significant item in v4.0, and we will be dedicating a later post to it.

Well, so that’s it for the overview for now. We hope to get more articles out to do deeper dives into v4.0, but like I said, it’s still early days and there will be more clarifications ahead. Hopefully they will be more positive, and the experience of v4.0 will be less like that family outing with the cousin who should be thrown into a pit of vipers.

Contact us at pcidss@pkfmalaysia.com for any queries you have on PCI and we will get back to you immediately.

PCI-DSS Card Data Discovery Scans


For PCI-DSS, there are some fairly obvious requirements that are set in stone in order for you to pass. ASV scans – quarterly. Internal vulnerability scans – quarterly. Penetration testing – annually. Reviews of firewall configs and policies – half-yearly. Security awareness training – annually. These are biblical principles of the gospel of PCI.

And then there are other areas where interpretation is a little more touch-and-go; up in the air; subject to the wind; the sort of thing where there are as many disagreements and controversies as whether Han shot first or Greedo was just an absolute tool who misses from two feet.

And while most arguments stem from our clients and us as we try to explain some concepts to them, once in a while there comes a subject where we find ourselves against the explanation of QSAs. Now, not all QSAs are created equal. When I say QSAs here, I refer to the individual QSAs, not the QSA organisation – as in the human beings who are QSAs for the QSA-C (QSA Company). We’ve worked with some who are technically well versed; some who are strong in documentation and theory; some who communicate well but are not so technical, and some who are the opposite. But every once in a while, we come across QSAs who think they know everything (they don’t) and who stubbornly stick to a point of argument even when we have exhausted all avenues to show them their point is flawed. The more we argue, the more adamantly they hold their stance, even when their justifications seem to be plucked directly out of their …. posterior appendages.

One of the items you will often see coming up in PCI-DSS is this thing called the card data discovery (CDD) scan. What is this? From page 10 of the PCI-DSS standard:

To confirm the accuracy of the defined CDE, perform the following:
The assessed entity identifies and documents the existence of all cardholder data in their environment, to verify that no cardholder data exists outside of the currently defined CDE.

PCI DSS v3.2.1

The CDD process is basically a process, usually using a tool, to identify whether card information is stored in the clear within the organisation. These tools are usually regular-expression-based applications that can categorise the type of card based on the BIN, or initial numbers. They are often quite useful for finding other forms of information as well, such as personal information, as long as you can define filters and regular expressions for them. Some commercial tools out there are from Ground Labs, ManageEngine, ControlCase etc. We also have free CDD tools like PAN Buster, Credit Card Scanner etc. The free tools are a little more difficult to use, in our opinion, and there seems to be less support for database scans and more false positives overall, so you may spend a longer time cleaning up the results.
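
To give a flavour of what these tools do under the hood, here’s a deliberately crude sketch using grep (the scan path is a made-up example; real CDD tools add BIN lookups, Luhn checksum validation and context checks to weed out false positives):

# list text files containing a 16-digit run starting with 4 or 5
# (a naive Visa/Mastercard-style pattern)
grep -rElI '(^|[^0-9])[45][0-9]{15}([^0-9]|$)' /path/to/scan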

Whether for commercial or free tools, PCI has been fairly silent about whether these scans are mandated. Unlike ASV scans or penetration testing, the standard doesn’t specifically state the need to run these tools for a ‘normal’ PCI-DSS assessment. By ‘normal’, I mean one outside the set of additional requirements under Appendix A3: Designated Entities Supplemental Validation (DESV). These apply to specially designated entities that have large volumes of card data or have suffered significant breaches. Designation comes from the payment brands or acquirers; it’s not something a QSA or even the audited entity decides on.

So, looking into the card data scan requirements, we only have the page 10 scoping requirement and, in the DESV portion, A3.2.5 – “Implement a data-discovery methodology to confirm PCI DSS scope and to locate all sources and locations of clear-text PAN at least quarterly and upon significant changes to the cardholder environment or processes”.

In most cases, CDD scans are done on an annual basis for normal PCI-DSS (non-DESV), or at times half-yearly as required by the QSA.

So along came a QSA who stoutly declared that all companies are required to do a quarterly CDD scan, regardless of size, on all systems in scope. When politely reminded that he seemed to be mixing in the DESV quarterly scan requirement, he said no – he was highlighting requirement 3.1: “A quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention.”

When pressed to explain why this is a CDD scan, he stated it’s obvious: everyone needs to run the CDD scanner every quarter to address this requirement.

OK. We disagree. Completely. This is one of those instances where a QSA superimposes one requirement on another just because they sound the same.

Let’s break it down by looking at the PURPOSE of the CDD scan. The best way is to go back to the standard, to the part where it describes the ‘data-discovery’ methodology in DESV A3.2.5.

Implement a data-discovery methodology to confirm PCI DSS scope and to locate all sources and locations of clear-text PAN

A3.2.5 PCI-DSS V3.2.1

It’s clear that the CDD’s purpose is to locate where CLEAR-TEXT PAN is found in the CDE (and non-CDE) environment. Why is this important? Because in the CDE, there should never be any clear-text PAN found in storage. All PANs must be protected by one of the Four Horsemen of the Apocalypse: Encryption, Truncation, Hashing or Tokenization. A failed CDD means PANs were found in clear text within the CDE.

So with that in mind, let’s go back to requirement 3.1. This has nothing to do with identifying clear-text PAN. It talks about identifying AND deleting EXPIRED card data (based on retention policies). That’s it. If the PAN is encrypted or tokenized but stored beyond its retention period, requirement 3.1 tells you to delete it. It talks about the retention period and storage beyond it. Which part of that talks about doing a card data scan to identify clear-text card information?

In the description, it further states: A quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention requirements.

So QSA, please RTFM. Requirement 3.1 isn’t talking about the need to run a CDD quarterly to identify clear-text PAN storage; it is about running something (a script) or a manual process to identify PAN storage that has already expired. It is to discover duration of storage, not security of storage. Running a shell script may be good enough to get the timestamps of files, or you can check the timestamps on database entries, to ensure that all card data is removed or anonymised after a period of, say, 7 years.
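
A minimal sketch of what that quarterly check could look like on a file share (the path and the 7-year window are made-up examples):

# flag files in the card-data store last modified more than ~7 years (2557 days) ago
find /data/card-archive -type f -mtime +2557 -print

The database equivalent is a query against the record’s stored date – no PAN pattern matching needed at all.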

If you need assistance in PCI-DSS or any other compliance standards like the ISMS or ITSM, drop us a note at pcidss@pkfmalaysia.com. We can help clarify some of these annoying requirements that even QSAs (as experienced as they are) are plucking out of their rear appendages.

PCI-DSS 2022 and Version 4


So we are now in 2022. PCI-DSS v4.0 is due to be out, and one of the things we have been doing for the first two weeks of the year is getting over our holiday hangovers. That’s right. In our country (Malaysia), the slowest months are December, January and February. It’s like starting a car in the dead of winter. These 3 months are the Amen Corner of Augusta for businesses. December hits like a ton of bricks due to the Christmas season; then, just when things start moving in January, everything grinds to a halt for Chinese New Year, when the entire nation just flat out refuses to work. When we are back in the second week after Chinese New Year, we are once more in first gear, climbing up the hill of 2022 again.

So we did things a bit differently. We started the first two weeks with a series of training sessions for clients and potential clients, going through PCI-DSS v4.0 and creating awareness of what to expect.

The transition timeline published on the PCI website immediately shows some interesting things. Number one: PCI-DSS v3.2.1 only retires in 2024. This is interesting, because usually the transition period isn’t so long. It’s long now because – I don’t know, there may be an ongoing pandemic and such. So here we are in Q1 2022, and our customers are asking: when do we transition to v4.0?

Well, the answer would be: as soon as you can. But in theory, you can probably stick to v3.2.1 validation for 2022 and realistically move to v4.0 in 2023. In fact, for some of our clients whose PCI maintenance period follows the calendar year, they can even force 3.2.1 into their 2023 validation year.

As for the actual content of PCI v4, it’s still a well-kept secret, like the plot of Spider-Man: No Way Home; but we have been reading a bit, and we joined last year’s PCI-DSS community meeting and learnt some interesting tidbits.

No 1: Compensating Controls

The get-out-of-jail-free card. Customers have been dangling this compensating-controls card in front of our faces since Mesopotamian times. When they can’t address a control – use compensating controls! When they cannot implement something due to budget – compensating controls! When they can’t make changes to an application because it was designed by a group of kindergarten kids and would break the moment you touch it – compensating controls! When you don’t know what to say to your wife after a long night out at the pub with the mates and you come back smelling like a keg of kerosene – compensating controls!

The problem with compensating controls is that they are a pain in the neck to implement, to document and to justify. The compensating control worksheet, the justification documentation, the implementation of the control itself to be ‘above and beyond’ the scope of PCI-DSS, etc. Everyone thinks this is a silver bullet, only to find it’s the deepest rabbit hole you can ever fall into.

So, PCI v4 does away with compensating controls. Great.

And they introduce Customized Implementation.

A lot of people are saying this is a game changer.

Honestly? Until more information comes out, we only have this to go with:


Customized implementation considers the intent of the objective and allows entities to design their own security controls to meet it. Once an organization determines the security control for a given objective, it must provide full documentation to enable their Qualified Security Auditor (QSA) to make a final decision on the effectiveness of a control.

Cryptic PCI v4 DOCUMENTATION

Design their own security controls? Well, ok, isn’t this the same as compensating controls? I am thinking this just expands the interpretation to something a bit broader, in which case the control may not even be a technical control. So instead of stating: OK, we can’t meet certain password controls due to a legacy application issue, and the compensating controls are excessive logging and monitoring, network isolation, whitelisting of IPs and access, WAF, DLP, virtual patching etc. etc. – are we now stating: a possible customized approach would be, instead of all these technical controls, a customized security approach. Which includes network isolation, whitelisting of IPs and access, WAF, DLP, virtual patching etc. etc.

Until we see some examples of this, it may well be that most companies will go along with the ‘normal’ approach, or adopt a wait-and-see stance and eke out the last remaining drops of v3.2.1 into 2024.

No 2: Up in the Clouds

Another item that has been long overdue? Cloud. It’s about time this gets addressed – and not just cloud, but how services and containers work as well. We have had auditors come to our clients insisting they do testing and VA/PT on services from AWS, not recognising there isn’t even an IP address to start with. To be fair to the SSC, they do have a few cloud guidelines supplementary documents, which we actually find very useful, especially in our projects certifying cloud technologies. We can see this being incorporated more formally into v4.0, with requirements designed around cloud environments more organically than what we see right now (which sort of force-fits many traditional concepts like network IDS, patching etc. into the cloud environment).

No 3: Not another MF-A!

I have a bad feeling about this.

MFA has been a constant pain for us. Firstly, where MFA is implemented – not just on the perimeter but now on every access to the CDE. At least it’s still only on admin accounts for now. We hear they plan to introduce it for ALL users. We also hear the collective screams of the tormented from the nine hells of Dante. Secondly, a lot of customers still depend on MFA via SMS. If PCI goes down the NIST route, we could see this being deprecated soon. Also, clarification on whether client-side certificates are acceptable as a ‘something you have’ factor would be most welcome. We see different QSAs interpreting this so differently you’d think we had asked them to interpret some ancient Thuggee text. Multi-factor challenges have already been with us over the past years, with Bank Negara’s RMiT focus on ‘strong MFA’ for large financial institutions. There should also be clear guidance on how to evaluate multi-factor that depends on a cloud provider, and whether common implementations like Google Authenticator etc. can still be considered good enough for v4.0.

No 4: Encrypting everything

We also hear that “Pocket Protector Trope” security may be implemented. Remember those movies where the hero gets shot in the chest, and you think he’s dead, but he reveals that the bullet was stopped by his pocket watch, his badge, a Bible, or some other dang sentimental thing given to him like 40 scenes ago?

So in PCI, when data traverses the internet or a network, the standard states that the transmission needs to be encrypted. It doesn’t technically state anything about encrypting the data package itself while in transmission; data encryption almost exclusively applies to data at rest. So in this case, they are doubling the protection: they are adding that pocket watch to catch the bullet, so if the transmission gets compromised, the data is still secure. The bullet doesn’t hit the hero!
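
A minimal illustration of the idea (the filename is made up, and gpg here just stands in for whatever message-level encryption your setup uses):

# encrypt the payload itself before sending it over an (already encrypted) TLS channel
gpg --symmetric --cipher-algo AES256 cardholder-export.csv
# produces cardholder-export.csv.gpg – even if the TLS session is compromised, the file stays encrypted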

No 5: Recovery and Continuity

Not so much something that’s coming, but more of what we’d like to see. One of the biggest criticisms we see customers bemoaning about PCI (other than the cost, the budget, the complexity and… ok, everything else) is that PCI has little focus on business continuity and disaster recovery. It’s almost as if PCI is standing there saying, “OK, you have an outage for a few days? Great, make sure your credit card information is safe.” It’s not really business-focused; it’s more credit-card-confidentiality focused. What we would like to see is a little more attention to this area. Over the past 2 years, we have seen customers getting all sorts of attacks from cyberspace. Malware, ransomware, hacking, fraud, defacement – it’s like the world went into a pandemic, everyone got bored to bits at home, and everyone took up hacking as a part-time gig. Malware, for instance – how prepared is a PCI-compliant company against a ransomware attack? Have they done their backups? Have they tested their systems to recover?

So, if you have any queries on PCIv4 for us, drop us an email at pcidss@pkfmalaysia.com and we will definitely get back to you. Have a great and safe year ahead for 2022!

Official Announcement AT&T Cybersecurity on sales hold for Alienvault

As previously announced, USM Appliance will be placed on a new sales hold effective January 1st, 2022.

What does that mean for me?

All net new sales of USM Appliance to new customers will be discontinued. Any new USM Appliance orders placed until December 31st, 2021 will be accepted. Renewals and expansions to existing deployments will continue to be accepted until December 31st, 2023. A sales hold is NOT a declaration of any end of support; AT&T will continue to provide support through December 31st, 2024.

How can I continue to support my customers?

AT&T Cybersecurity is committed to providing our customers with innovative security solutions. USM Anywhere, our SaaS-based solution, will continue to be the focus and flagship product for our Threat Detection and Response offerings.

Please take a look at https://cybersecurity.att.com/products/USM-Anywhere for more information on USM Anywhere. If you have any questions, you can reach out to us at alienvault@pkfmalaysia.com.

