
Major Changes of PCI v4

As we approach the final throes of PCI-DSS v3.2.1, only three weeks remain before we say farewell to this venerable standard once and for all.

PCI-DSS v4.0 is a relative youngster, and we are already spending hours with our customers updating them on the things they need to prepare for. Don't underestimate v4.0! While it's not a time to panic, it's also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides an insight into the major changes we are facing in v4.0.

Bearing in mind that most of the requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little bit of focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let's start off with the one at the top of the list: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

OK, we have underlined and emphasized a few key points in this statement because we feel they are important. Let's start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10, with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think v4.0 reads better, aside from the relocation of the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement for any controls that don't seem to fall into any other earlier requirement). Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of protecting the storage of account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find 3.4.2, the child of this 3.4 section, and we need to understand it first, before we go out, buy the first DLP system on the market and yell out "WE ARE COMPLIANT!"

3.4 talks about displays of FULL PAN. So we aren't talking about truncated or encrypted PAN here. In theory, if you copy out a truncated PAN or an encrypted PAN, you shouldn't trigger 3.4.2. It's specific to full PAN. While we are at it, we aren't even talking about cardholder data in general. A PAN is part of cardholder data, but not all cardholder data is PAN. Like the Hulk is part of the Avengers, but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason, like data analysis, behavioural prediction, stalking etc., this isn't the requirement you are looking for.

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are.

The previous v3.2.1 doesn't actually state 'technical controls', which goes to say that a documented control, a policy control, or something in the Acceptable Use Policy could also pass as compliant. V4.0 removes that ambiguity. Of course, the policy should still be there, but technical controls are specific. It has to be technical. It can't be: oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear: copying is Ctrl-C and Ctrl-V somewhere else; relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw up an error, causing system integrity and performance issues. Suffice to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both "documented" and "defined," it means that the requirement has been both precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of people with access (the who), why they legitimately need to access/copy/relocate PAN for their business (the why), explicitly authorized by a proper authority (not themselves, obviously).

Finally, let's talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I've heard of clients who say: hey, no worries, we have logging and monitoring in place for internal users. Or we have a web application firewall in place. Or we have Cloudflare in place. Or we have a thermonuclear rocket in place to release in case we get attacked. This control already implies 'remote access' into the environment. The users have passed the perimeter. It implies they are already trusted personnel, contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying/relocating PAN. In this case, most people would probably not have a business reason for copying/relocating PAN to their own systems unless for very specific business flow requirements. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details, in an encrypted format onto removable media (like a CD-ROM) and send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That's pretty specific.

So what passes as a 'technical control'? A technical control may be as simple as completely preventing copy/paste or cut/paste when accessing via remote access. This can be done by disabling clipboard redirection in RDP or disabling the clipboard via SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it's fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is to prevent the copying and relocation of PAN.

Now, that being said, an umbrella disallowing of copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive to work on while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection, as sketched below. Or control access to the CDE/PAN via a hardened jump server with local policy locked down. So the general VPN into company resources may be more lax, but the moment access to PAN is required, 3.4.2 technical controls come into play.
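To make the 'technical control' idea a bit more concrete, here is a minimal Python sketch of the kind of detection logic a DLP or clipboard-filtering rule might apply to tell a full PAN from a truncated one. This is purely illustrative; the regex, function names and test values are our own assumptions, not any particular product's API.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Candidate pattern: 13-19 digits, optionally separated by spaces or dashes
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_full_pan(text: str) -> bool:
    """Flag text (e.g. clipboard content) that appears to contain a full PAN.
    Truncated or masked PANs such as '411111******1111' will not match,
    because the masked digits break both the pattern and the Luhn check."""
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False

print(contains_full_pan("Ref 4111 1111 1111 1111 attached"))   # True - full (test) PAN
print(contains_full_pan("Ref 411111******1111 attached"))      # False - truncated PAN
```

The point is simply that full, Luhn-valid PANs are catchable while truncated or masked PANs sail through, which mirrors how 3.4.2 itself is scoped.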

In the end, you could justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense, and it must not require your users to jump through hoops like a circus monkey.

So there you have it, a breakdown of 3.4.2. We will hop into the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

Zero Trust for 2024

As we enter the new year, let's start off with a topic that most cybersecurity denizens would have heard of, and let's clarify it a little.

Zero Trust.

It seems as good a place as any to start 2024 off with the pessimism that accompanied the end of last year. The spate of cybersecurity attacks in 2023 gave us a taste of what is to come: insurance company, check; social security, check; the app with our vaccination information, check. While breaking down the attacks is meant for another article, what we are approaching in the coming year is not just more of the same; far more numerous and more advanced attacks are bound to happen.

While Zero Trust is simply a concept, one of many, to increase resistance to attacks or breaches, it's by no means a silver bullet. There is NO silver bullet to this. We are under constant siege in information warfare, constantly balancing the need for sharing and the need for protection. It is as they say: the safest place would be in a cave. But that's not living, that's surviving. If you need to go somewhere, you need to fly, and you have information with the airlines. If you need to do banking, you have information with the banks. If you do your daily shopping online, you are entrusting the likes of Lazada with information that you otherwise might not provide.

So Zero Trust doesn't mean you conduct zero transactions; it's basically a simple principle: trust no one, verify everything. Compare it to the more traditional "trust but verify" approach, which assumed that everything inside an organisation's network should be trusted, even with some verification in place. Here's a breakdown of the concept, in hopefully simpler terms.

The Basic Premise: Imagine a company as a fortified castle. In the old days, once you were inside the castle walls, it was assumed you belonged there and could roam freely. At least this is based on the limited studies we have done by binge watching Game of Thrones. All historical facts of the middle ages can be verified through Game of Thrones, including the correct anatomy of a dragon.

Back to the analogy: what if an enemy disguised as a friend managed to get inside? They would potentially have access to everything. Zero Trust Architecture operates on the assumption that threats can exist both outside and inside the walls. Therefore, it verifies everyone's identity and privileges, no matter where they are, before granting access to the castle's resources. The three keys to remember are:

  1. Never Trust, Always Verify: Zero Trust means no implicit trust is granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or based on asset ownership (enterprise or personally owned). Basically, we are saying, I don’t care where you are or who you are, you are not having access to this system until I can verify who you are.
  2. Least Privilege Access: Individuals or systems are given the minimum levels of access, or permissions, needed to perform their tasks. This limits the potential damage from incidents such as breaches or employee mistakes. We see this issue a lot, whereby a C-level person insists on having access to everything even if he doesn't necessarily know how to navigate a system without a mouse. When asked why, they say: well, because I am the boss. No. In Zero Trust, precisely because you are the boss, you shouldn't have access to a system that does not require your meddling. Get more sales and let the tech guys do their job!
  3. Micro-Segmentation: The network is broken into smaller zones to maintain separate access for separate parts of the network. If a hacker breaches one segment, they won’t have access to the entire network.

The steps you can follow to implement the concept of Zero Trust:

Identify Sensitive Data: Know where your critical data is stored and who has access to it. You can't protect everything. Or at least not with the budget you are given, which for most IT groups is usually only slightly more than what the company allocates to the upkeep of the office cat. So data identification is a must-have. Find out which data you most want to protect and spend your shoestring budget protecting it!

Verify Identity Rigorously: Use multi-factor authentication (MFA) and identity verification for anyone trying to access resources, especially important resources like logging systems, firewalls, external webservers etc. This could mean something you know (password), something you have (a smartphone or token), or something you are (biometrics). It used to cost a mortgage to implement things like this but over the years, cheaper solutions which are just as good are now available.
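For the curious, here is a minimal sketch of how the 'something you have' factor is typically computed: a time-based one-time password per RFC 6238, using only the Python standard library. The secret shown is a placeholder for illustration, not a real credential, and a real deployment would rely on a vetted authenticator app or MFA service rather than hand-rolled code.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"   # placeholder secret; real ones are provisioned per user
print("Current one-time code:", totp(SECRET))
```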

Contextual Access: Access decisions should consider the context. For example, accessing sensitive data from a company laptop in the office might be okay, but trying to access the same data from a personal device in a coffee shop might not be. This may not be easy, because with mobile devices you are basically accessing top secret information via the same device on which you watch the cat playing the piano. It's a nightmare for IT security, but again, this needs discipline. If you honestly need to access the server from Starbucks, then implement key controls like MFA, VPN, layered security and a locked-down system.
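As a rough sketch of what a contextual access decision can look like under the hood, here is a toy policy check in Python. The attributes and the rule itself are assumptions made up for this example; real products express this as configurable policy rather than code, but the logic is the same: the request is evaluated against device, network and authentication context before access is granted.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    device_managed: bool    # corporate, hardened device?
    network: str            # "office", "vpn" or "public"
    mfa_passed: bool

def allow_sensitive_access(ctx: AccessContext) -> bool:
    """Toy rule: sensitive data only from a managed device, over the office
    network or VPN, and only after MFA. Everything else is denied."""
    if not ctx.device_managed:
        return False
    if ctx.network not in ("office", "vpn"):
        return False
    return ctx.mfa_passed

print(allow_sensitive_access(AccessContext("alice", True, "office", True)))   # True
print(allow_sensitive_access(AccessContext("bob", False, "public", True)))    # False - coffee-shop device
```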

Inspect and Log Traffic: Continuously monitor and log traffic for suspicious activity. If something unusual is detected, access can be automatically restricted. SOAR and SIEM products have advanced considerably over the years, and today we have many solutions that do not require you to sell a kidney to use. This is beneficial, as small companies are often targeted for attacks, especially if these smaller companies service larger companies.

In the end, it all comes down to the benefits of adopting this approach.

Enhanced Security: By verifying everything, Zero Trust minimizes the chances of unauthorised access, thereby enhancing overall security. Hopefully. Of course, we may still have those who are authorised but have malicious intent, which is much harder to protect against.

Data Protection: Sensitive data is better protected when access is tightly controlled and monitored. This equates to less quarter given to the threat actors out there.

Adaptability: Zero Trust is not tied to any one technology or platform and can adapt to the changing IT environment and emerging threats.

On the downside, there are still some challenges we need to surmount:

Complexity: Implementing Zero Trust can be complex, requiring changes in technology and culture. It's not a single product but a security strategy that might involve various tools and technologies. This is not just a technical challenge, but a process and cultural change that may take time to adapt to.

User Experience: If not implemented thoughtfully, Zero Trust can lead to a cumbersome user experience with repeated authentication requests and restricted access. This is a problem we see a lot, especially in finance and insurance – user experience is key – but efficiency and security are like oil and water. Eternal enemies. Vader and Skywalker. Lex and Supes. United and Liverpool. Pineapple and Pizza.

Continuous Monitoring: Zero Trust requires continuous monitoring and adjustment of security policies and systems, which can be resource-intensive. We've seen implementations of SIEM and SOAR products that produce so many alerts and alarms that they no longer make sense. These all become noise, and the effect of monitoring is diluted.

In summary, in an era where cyber threats are increasingly sophisticated and insiders can pose as much of a threat as external attackers, Zero Trust Architecture offers a robust framework for protecting an organisation's critical assets. It's about making our security proactive rather than reactive, and ensuring that the right people have the right access at the right times, and under the right conditions. It's culturally difficult, especially in Malaysia, where, I have to admit, our innate trust of people and our upbringing mean we would almost always hold the door open for the person behind us to walk in, especially if he is dressed like the boss. We hardly ever turn around and ask, "Who are you?" because we are such nice people in this country.

But adopt we must. For any organisation looking to bolster its cybersecurity posture, Zero Trust isn't just an option; it's becoming a necessity. At PKF we have several services and products promoting Zero Trust. Contact us at avantedge@pkfmalaysia.com to find out more. Happy New Year!

Trends for InfoSec moving into 2023

When I was a kid, I used to watch a show called Beyond 2000 and imagined that, if I lived to the year 2000, I would be seeing flying cars and teleportation and space travel. Later on, I had to temper my expectations, but I was still filled with optimism when October 21, 2015 rolled around: at least we would have a hoverboard to fool around with. At least.

We are now in 2023. No flying cars. No hoverboards or hovertrains, and no flux capacitors to go back in time and make gambling bets. We do have a lot of information security issues, though, and while they are not really sexy enough to make a Hollywood movie around, they still give us plenty to do as we ride into this new year. Here are the trends we think may impact us moving forward.

To understand why information security has become increasingly important in recent years, look at the sheer amount of sensitive information being stored, transmitted electronically and shared in our everyday interactions. We share and give away information without even knowing it. Every time we browse the net, every time we hover our mouse over a product, every time we use our credit card to get our coffee or pay for a karaoke session, every time we check our location on Waze: a vast array of information and data is being transmitted and curated carefully by organisations intent on peering into our lives to make them "better".

As information continues to grow, an increasing number of incidents follows. Some of the more high-profile ones include:

a) SingHealth – In July 2018, one of Singapore's largest healthcare groups, SingHealth, suffered a data breach in which the personal information of 1.5 million patients, including Prime Minister Lee Hsien Loong, was stolen. How was this achieved? The attackers gained unauthorized access to the network and exfiltrated the data through a sophisticated method, described as a "well-planned and carefully orchestrated cyber attack" involving a "spear-phishing" campaign in which the attackers sent targeted emails to specific individuals within the organization to gain access to the network. No matter how much we invest in technology, the weakest link still remains the humans around it, especially those interested in clicking on links depicting a cat playing the piano furiously.

b) India’s National Payment Corporation of India (NPCI) – In January 2021, the NPCI, the company that manages India’s Unified Payments Interface (UPI) system, which enables inter-bank transactions, experienced a data breach. The breach was caused by a vulnerability in the UPI system that was exploited by hackers, who then used the stolen data to make fraudulent transactions. The incident resulted in a temporary suspension of the UPI system, causing inconvenience to millions of users.

c) Garmin – Back in 2020, Garmin, a leading provider of GPS navigation and fitness tracking devices, was targeted by a ransomware attack. The attackers used a variant of ransomware called WastedLocker, which encrypted the company's data and demanded a ransom payment. The attack caused the company to shut down its operations, leading to widespread service disruptions.

d) SolarWinds – Ah, this was probably one of the highest-profile data breaches in recent memory. It was discovered that a sophisticated cyber attack had breached multiple government agencies and private companies via SolarWinds, which makes IT management software. The attackers compromised SolarWinds' software updates to gain access to the networks of the companies and organizations that used it, and used that access to steal sensitive information. The incident was attributed to a Russian cyber espionage group known as APT29 or "Cozy Bear".

Many more information security issues will continue to occur well into this year and the next, and the next. One of the burning questions is how companies can keep up with this movement, and how we can remain vigilant.

One trend that is likely to continue into this year is the entrenchment of cloud computing. While previously we mostly had AWS and Azure, we now see a far larger array of options for cloud providers. Within the cloud itself, the services being offered are replacing the traditional need for separate security functions like logging systems, authentication systems and so on. As more and more organizations move their data and applications to the cloud, it will become increasingly important to ensure that this data is protected against unauthorized access and breaches. This will require more stringent security measures such as stronger encryption, multi-factor authentication, and continuous monitoring of cloud environments.

One of the more interesting ideas that has floated around is the use of blockchain technology for security. Blockchain is a decentralized, distributed ledger that can be used to securely store and transmit sensitive information. This can help across the C-I-A triad of security: encryption for confidentiality; immutability of blockchain records to ensure integrity; and decentralization of data to remove single points of failure and ensure availability. There could be many more uses, but it remains an abstraction for many organisations looking at it for their information technology. For a basic implementation, it may be useful for applications such as supply chain management, where multiple parties need to share information in a secure and transparent way.
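To show why the immutability claim holds, here is a minimal Python sketch of a hash-chained ledger. It is not a real blockchain (no consensus, no distribution), just the hash-linking idea that makes tampering evident; the block fields and event names are assumptions for the example.

```python
import hashlib, json, time

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a ledger entry whose hash covers its own content plus the previous hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    """Re-hash every block and check the links; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"event": "shipment created"}, prev_hash="0" * 64)
second = make_block({"event": "shipment received"}, prev_hash=genesis["hash"])
chain = [genesis, second]
print(verify_chain(chain))                 # True
genesis["data"]["event"] = "tampered"
print(verify_chain(chain))                 # False - integrity check fails
```

Change any field in an earlier block and the whole chain fails verification, which is exactly the integrity property referred to above.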

Another growing trend, as always, is the need for a strong cybersecurity workforce. As the number of cyber threats continues to grow, it will be increasingly important to have a workforce that is trained and equipped to deal with these threats. This will require organizations to invest in employee training and development, as well as to recruit and retain highly skilled cybersecurity professionals. Professional training, a big industry in Malaysia, will continue to play a key role in enabling people to carry out their vital tasks within the information security landscape.

Another trend we often hear about, in somewhat abstract terms, deals with Internet of Things (IoT) devices. In short, IoT refers to the growing network of physical devices, vehicles, buildings, and other items that are embedded with sensors, software, and connectivity, allowing them to collect and exchange data with each other. The example we always see is the fridge telling us we are running short on milk and placing an order to get milk for us. But IoT is happening whether we like it or not. Healthcare will be heavily dependent on it as information is exchanged with digital systems across nationwide healthcare systems; manufacturing is putting more traditional systems onto the network to integrate with automated processing tools; transportation is getting more digitized than ever, with car manufacturers now looking not just to hardware but to cloud enablement of the software running in cars. Even wearables, fitness apps, smart homes and the like are impacting end users in more ways than we can imagine. It's coming, or it's here; either way, some forecasts expect around 75 billion devices to be connected over IoT by 2025.

Another trend we would like to see more of in 2023 is the use of artificial intelligence and machine learning for security. These technologies can be used to detect and respond to cyber threats in real time, as well as to analyze large amounts of security data to identify patterns and anomalies that may indicate a potential attack. We have traditionally had threat intelligence, but the time to respond to threats still lagged behind, given the dependence on human intervention and decisions. With automated systems, more advanced rules and the correlation of multiple information points, actions can be orchestrated in a more meaningful, machine-learnt manner, as opposed to depending on manual rules and signatures.
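As a very small illustration of the 'identify patterns and anomalies' point, here is a plain z-score check in Python. The failed-login counts are invented for the example, and real machine-learning-driven detection is far more sophisticated than this.

```python
import statistics

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag the latest value if it sits more than `threshold` standard
    deviations away from the historical mean (a simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

failed_logins = [2, 1, 3, 2, 0, 2, 1, 3, 2, 1]   # hourly failed logins for one account
print(is_anomalous(failed_logins, 4))    # False - within normal variation
print(is_anomalous(failed_logins, 40))   # True  - worth an alert
```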

While not the most sexy or interesting area, where we want to see improvement is in making incident response plans more effective. With the increasing number of cyber threats and attacks, it is critical that organizations have the ability to quickly and effectively respond to security incidents. This will require organizations to have detailed incident response plans in place, as well as to regularly test and update these plans to ensure that they remain current and effective.

One trend we want to see more of, especially in our accounting and auditing industry, is the adoption of security automation. This involves the use of software tools and technologies that can automate various security tasks, such as vulnerability management, incident response, and threat intelligence. We have implemented tools such as Ansible in our own organisation, providing at least a first layer of visibility into the configuration and management of systems. More automation will help us protect against and respond to cyber threats more efficiently and effectively.

Finally, one of the things we hardly talk about in information security is how much more integrated infosec needs to be with the humanities. A lot of us approach infosec from a technical viewpoint, which is great, but perhaps a more effective viewpoint comes from the humanities. The humanities can play several roles in information security, including providing a broader understanding of the social and cultural contexts in which security threats occur, assisting with the development of effective communication strategies for raising awareness and educating the public about security risks, and helping to design user-centred security systems that take into account the needs and behaviours of different groups of users. Additionally, the study of ethics in the humanities can be used to inform decision making and policy development in information security. An example would be how implementing more stringent security monitoring may impact employees' innate need for privacy: even though the technology is sound and the intent well considered, organisations may still end up pushing out policies and technology that people revolt against rather than embrace. This is not a field we often think of, but moving forward it is worth dwelling on, and it indeed provides a more holistic way for infosec to be part of our lives.

This isn't so much our traditional compliance article, but it's always interesting to try to peer into a crystal ball and see what's ahead, and then, at the end of the year, see which of our trend predictions proved more right or wrong. Drop us a note at avantedge@pkfmalaysia.com and tell us what you think, or if you require any of our services. Have a great year ahead!

Recap on PCI v4.0: Changes in The 12 Requirements

So here we are in 2023 and PCI v4.0 is on everyone's mind. Most of our customers have finished their 2022 cycle, and some are going through their 2023 cycle. Anyone certifying this year will, in general, be certified against v4.0 in their next cycle in 2024. V3.2.1 will be sunset in March 2024, so as a general rule of thumb, anyone going for certification/recertification in 2024 should hop onto v4.0.

Also take special note of the requirements marked "Best practices until 31 March 2025, after which these requirements will be required and must be fully considered during a PCI DSS assessment".

It doesn't mean that you can actively ignore these requirements until 2025; rather, use this period of around two years as a transition for your business to move to these newer requirements. So, to put it short: start even now. One of the requirements that gets a lot of flak is 3.5.1.2, which concerns disk-level encryption; in other words, technology like TDE being used to address encryption requirements. This is no longer a get-out-of-jail-free card, because after March 2025, unless the PAN is on removable media, you will need to implement (on top of TDE, if you still insist on using it) one of the four horsemen of the apocalypse: truncation, tokenization, encryption or hashing. And before you get too smart and say yes, you are using encryption already, i.e. transparent or disk-level encryption: PCI is one step ahead of you, you Maestro of Maleficent Excuses, as they spell out "through truncation or a data-level encryption mechanism".

So, for v4.0 it’s probably easier to just break it up into

a) SAQs v4.0 – Self assessment

This is straightforward – a lot of changes have occurred to some of the venerable SAQs out there, such as SAQ A. I'll cover that in another article.

b) ROC v4.0 – from QSA/ISA

Most QSAs should be able to certify against v4.0. You can check the PCI-DSS QSA list; qualified companies have "PCI DSS v4 Assessors" noted under their names. There may also be some shakeout, in that some QSAs may not go through the training to upgrade to v4 assessors. On another note, ISAs generally aren't required to upgrade to v4.0, although it's recommended.

Now is perhaps a good time to go through a very big overview of v4.0 and explain why some of these changes were made.

Changes to Requirements

For this overview, we will first look at the 12 requirement statements and see where the changes are. In a big move, the council has updated the main requirements (not so subtly), getting rid of many of the tropes of the previous incarnation of the standard. Let's start here.

Requirement 1 is now changed to “Install and Maintain Network Security Controls” as opposed to “Install and maintain firewall configuration to protect CHD.”

This is a good change, even if the wording is still a little clumsy. After all, network security controls are defined so broadly that they may not just be a service or product like a firewall, a NAC or TACACS. They could be access controls, AAA policies, IAM practices, password policies, remote access controls and so on. So how do you 'install' such policies or practices? A better word would be 'implement', but I think that's nitpicking. Install is an OK word here, but every time I hear it, I think of someone installing a subwoofer in my car or installing an air-cond in my rental unit. Overall, though, it's a lot better than just relying on the word firewall, since in today's environment a firewall may no longer function as just a firewall, and integrated security systems where multiple security functions are rolled into one are fairly common.

Requirement 2 now reads "Apply Secure Configurations to All System Components", which is a heck of a lot better than "Do not use vendor-supplied defaults for system passwords and other security parameters". The latter always sounded off, like a foster child that never belonged to the family, because it reads more like a control objective or part of a smaller subset of a control area as opposed to an overarching requirement. It just made PCI sound juvenile compared to better-written standards like ISO, NIST or CIS.

The Requirement 3 change is subtle: from "Protect stored cardholder data" to "Protect stored account data". They removed cardholder data and replaced it with "account" data. It generally means the same thing, but with account data they possibly want to broaden the applicability of the standard. After all, cards may soon be obsolete; all the information might be contained in a mobile device, or authenticated through virtual cloud services. Hence the traditional notion of a person 'holding a card' may no longer apply.

Requirement 4 reverts back to cardholder data, with the new v4.0 stating "Protect Cardholder Data with Strong Cryptography During Transmission Over Open, Public Networks", which is sometimes frustrating. If you have decided to call it account data moving forward, just call it account data and don't revert back to cardholder data. This requirement also changed from the older "Encrypt transmission of cardholder data across open, public networks". It may sound the same, but it's different. It removes the age-old confusion about: what if I encrypt my data first and only then transmit it? Under the previous wording, it doesn't matter; the transmission itself still needs to be encrypted, by the way it is written. With the new wording, you can encrypt the data and send it across an unencrypted channel (though not recommended) and still be in compliance. Ah, English.

"Requirement 5: Protect All Systems and Networks from Malicious Software" is a definite upgrade from the old "Requirement 5: Protect all systems against malware and regularly update anti-virus software or programs". This gives better context than the anti-virus trope, where QSAs insisted on every system having an antivirus even if it was running on a VAX, or even if the constant updates brought down the database. With a broader understanding that anti-virus is NOT the only solution to malicious software threats, we can move to the myriad of endpoint security solutions that serve the requirement's purpose better. So long, ClamAV for Linux and Unix!

Requirement 6 reads about the same, except they changed the word 'applications' to software, i.e. "Develop and maintain secure systems and applications" became "Develop and Maintain Secure Systems and Software". I am not sure why, but I suppose much software that may serve as an attack vector may not be classified as an application; it could be middleware, an API and so on.

By the way, just to meander away here: I noted that in the v4.0 requirements, every word's first letter is capitalised, except for minor words like conjunctions, prepositions and articles. This is in line with some published standards such as CIS (but not NIST), and it's basically just an interesting way to write it. This style is called "Title Case", and It Can Be Overused and Abused Quite a Lot if We Are Not Careful.

Requirement 7: Restrict Access to System Components and Cardholder Data by Business Need to Know vs the previous version, Requirement 7: Restrict access to cardholder data by business need to know. Again, this is more expansionary, as system components (we assume those in scope) may not just contain cardholder data; they may also influence the security posture of the environment overall. Where previously you might say, well, it's only access to the account data that requires 'business need to know' or least privilege, now access to authentication devices, or the SIEM, or any security-based service that influences the security posture of the environment must also be restricted to business need to know. Again, this is a good thing.

Requirement 8: Identify Users and Authenticate Access to System Components vs previous version “Identify and authenticate access to system components”. This seems like just an aesthetic fix. Since, yes, you probably want to identify USERS as opposed to identify ACCESS. It could mean the same thing, or it may not. A smart alec somewhere probably told the QSA, hey, we identified the access properly. It came from login 24601 from the bakery department at 6 am yesterday. Do we know the user? No, but PCI just needs us to identify the ‘access’ and not the user, right? OK, smart alec.

Requirement 9: Restrict Physical Access to Cardholder Data is the only one that does not have any changes, except for the aforementioned Title Case changes.

Requirement 10: Log and Monitor All Access to System Components and Cardholder Data vs Track and monitor all access to network resources and cardholder data. Two things changed here: "Log" vs "Track", and System Components vs Network Resources. I personally find the first change a bit limiting, telling you to just log instead of 'track'. But I know why they did it: tracking is redundant if you are already monitoring. In another dimension somewhere, the same smart alec might state that nowhere does it tell us to 'log' or keep logs in this statement; they just want us to track/monitor users. So it's for clarity that from here on, you log and monitor, not just track/monitor. The second change is very good, because now there is no ambiguity for non-network resources. It's true: one day we actually came across a client stating this did not apply to them because they do not put their critical systems on the network and only use terminal access, therefore there was no need to log or monitor. The creativity of these geniuses knows no bounds when it comes to avoiding requirements.

Requirement 11: Test Security of Systems and Networks Regularly vs Regularly test security systems and processes. Moving the word regularly is just for aesthetics, but the new wording strings together better and, again, removes ambiguity. First, the older requirement tells us to test 'security systems'. Most workstations and the like would not be defined as 'security systems'; I would define a security system as one that contributes to the security posture of a company: an authentication system, a logging system, the NAC, the firewall and so on. Of course, this isn't what PCI meant, and they realised, snap, English is really a cruel language. "Security systems" does not equal "security of systems". Those two letters changed everything. Now, systems means any system in scope, not just one that influences security; we need to test the security of all systems in scope. The second change, removing 'processes' and inserting 'networks', is better, I agree. I did have a client asking me how we 'test processes' for PCI. Do we need to audit and check the human process of doing something? While that is true in an audit, that's not the spirit of this requirement. This is about technical testing, i.e. scans, penetration testing and so on. So they rightly removed 'processes' and inserted networks, which also clears up the ambiguity between performing network penetration testing and application penetration testing.

Again, I just want to add that all of this is actually clarified in the sub-controls in both v3.2.1 and v4.0, but if someone were just to skate through PCI reading the main requirement titles, I can see where the misunderstanding might occur with the old titles.

Finally, Requirement 12: Support Information Security with Organizational Policies and Programs is an upgrade from the previous Maintain a policy that addresses information security for all personnel. The previous title was just clumsy. Many clients understood it to mean a single policy, or one information security policy that needs to be drawn up, because it states Maintain A Policy. One Policy to rule them all. And this policy governs information security for all humans, which doesn't make sense, unless the 'for' here means that this policy needs to be adhered to by all personnel, not that the personnel are the subjects of the information security. Yikes. The newer wording makes more sense: have your policies and programs support information security overall. Not the information security of your people, but information security, period.

So just by reading the titles (and without deep diving yet), we can see the improvement in clarifying certain things. There is more function in each sentence; there is more of an overarching purpose to it; and most of all, it looks and reads more professionally, putting PCI closer to the stately tomes of ISO, CIS or NIST.

While waiting for the next deep dive article, drop us a note at pcidss@pkfmalaysia.com if you have any queries at all about PCI, ISO27001, NIST, SOC or any standard at all. Happy New Year, all!

Breakdown of BNM RMIT 2023 Table of Contents Part 1

TABLE OF CONTENTS
1 Introduction (p. 3)
2 Applicability (p. 3)
3 Legal provision (p. 3)
4 Effective date (p. 4)
5 Interpretation (p. 4)
6 Related legal instruments and policy documents (p. 6)
7 Policy documents and circulars superseded (p. 6)
PART B POLICY REQUIREMENTS (p. 8)
8 Governance (p. 8)
9 Technology Risk Management (p. 10)
10 Technology Operations Management (p. 11)
11 Cybersecurity Management (p. 25)
12 Technology Audit (p. 31)
13 Internal Awareness and Training (p. 31)
PART C REGULATORY PROCESS (p. 32)
14 Notification for Technology-Related Applications (p. 32)
15 Consultation and Notification related to Cloud Services (p. 34)
16 Assessment and Gap Analysis (p. 35)
APPENDICES (p. 36)
Appendix 1 Storage and Transportation of Sensitive Data in Removable Media (p. 36)
Appendix 2 Control Measures on Self-service Terminals (SST) (p. 37)
Appendix 3 Control Measures on Internet Banking (p. 40)
Appendix 4 Control Measures on Mobile Application and Devices (p. 41)
Appendix 5 Control Measures on Cybersecurity (p. 42)
Appendix 6 Positive List for Enhancements to Electronic Banking, Internet Insurance and Internet Takaful Services (p. 43)
Appendix 7 Risk Assessment Report (p. 47)
Appendix 8 Format of Confirmation (p. 49)
Appendix 9 Supervisory Expectations on External Party Assurance (p. 50)
Appendix 10 Key Risks and Control Measures for Cloud Services (p. 52)
