Month: April 2016

PCI-DSS v3.2 is officially published


After some back and forth on the draft versions, PCI-DSS v3.2 is now officially published. You can download it from the PCI SSC document library – click on the nice little link saying 3.2 and agree to all sorts of terms and conditions nobody ever reads.

Anyway, a little bit of background on this release. PCI versions are usually released late in the year, in November. In fact, I even told a few clients that version updates were done in November – until the PCI Council announced that v3.2 would be released in the March/April timeframe instead, due to a few factors described in their announcement. So yeah, now I need to admit I was bamboozled. PA-DSS v3.2 is likewise due sometime in May or June.

So here's how it works: 3.2 is now officially effective. PCI v3.1 will be retired at the end of October 2016 (basically to allow everyone already in the final stages of a v3.1 assessment to complete it). All assessments/audits that occur AFTER October will be against version 3.2. This is important to note: if a gap assessment begins now and has a timeline that completes AFTER October, you want to use 3.2. For ongoing projects, it is best we scurry and get it all done before October! Chop chop!

There is a bunch of ‘best practices’ that will become requirements by February 2018. Other dates you need to be aware of:

a) June 30, 2016 – companies not yet migrated off SSL/early TLS will need to have a secure service offering (meaning an alternative service using TLS 1.1 and above – though I will go out on a limb here and suggest TLS 1.2, knowing how volatile the PCI guys are in changing stuff). If you are not sure what your services still accept, see the quick check after this list.

b) June 30, 2018 – SSL/early TLS becomes extinct as far as PCI is concerned. No more mitigation plans! The exception is POS terminals that have no known exploits.

c) January 31, 2018 – This is the deadline where new requirements graduate from being ‘best practices’ to ‘mandatory requirements’.
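On the SSL/early TLS point in (a): if you want a quick first-pass look at which protocol versions one of your services still accepts, here is a minimal sketch using Python's standard ssl module. The hostname is a placeholder – point it at your own service. Two caveats: this is no substitute for an ASV scan, and on newer machines a "rejected" result can simply mean your local OpenSSL refuses to speak the old protocol at all.

```python
import socket
import ssl

HOST = "payments.example.com"  # placeholder - replace with your own service
PORT = 443

# Pin the handshake to one protocol version at a time. If the handshake
# succeeds, the server still accepts that version.
for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False          # we only care about the protocol version here
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                print(f"{version.name}: ACCEPTED ({tls.version()})")
    except (ssl.SSLError, OSError) as exc:
        print(f"{version.name}: rejected ({exc.__class__.__name__})")
```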

OK, now that's out of the way, here's a snapshot of the main stuff in v3.2 and what we are facing:

a) New Appendix A3 covers the Designated Entities Supplemental Validation. This basically means that if any acquirer or Visa/MasterCard deems that a service provider needs to go through ADDITIONAL requirements on top of the torture they have already endured for PCI, they can. These victims could include companies processing ridiculous amounts of transactions, aggregators, or companies that are constantly breached. So PCI has a whole bunch of extra stuff for you to do, mainly dealing with BAU activities, incident response, documentation and logical access controls.

b) Additional cryptographic documentation – service providers are not going to enjoy this. We now need to formally document the protocols, key strengths, cryptoperiods and key usage for each key, plus an HSM inventory. This should technically be done anyway in your key management procedure document, but now it's a requirement. Take a look at NIST SP 800-57 for the key concepts to get you started – and see the sketch after this list for what an inventory record might capture.

c) 8.3 is significant: multi-factor login. Where previous versions required two-factor authentication for remote access from non-secure networks, 3.2 shifts the requirement to "all personnel with non-console administrative access, and all personnel with remote access to the CDE". Wait, what? This means that even if you are accessing an administrative UI or page (non-console) from a secure environment, multi-factor authentication (two factors are good enough) is required! I think there will be some pushback on this, as it requires a fair bit of effort. We have until February 2018 to implement it.

d) Another big one is 11.3.4.1 – segmentation penetration testing (a service provider requirement) now needs to be done every SIX months as opposed to every year. This is not good news for some clients who have segments popping up like acne on a pubescent face. That's quite a lot of work, and it might give them more cause to consider a completely isolated network just for PCI-DSS, with its own link and architecture, instead of sharing with multiple out-of-scope segments. Again, there is a grace period till Feb 2018.

e) New requirement 12.11 is interesting. I have always been an advocate of doing constant checks with clients to make sure they are at least practicing PCI. We have a free healthcheck service every quarter for clients who take up our other services, and we check exactly this: daily log reviews are done, firewall rules are clean, new systems are documented and hardened, incidents are responded to, changes have proper approval, etc. It's nice to see our efforts now have something formal tied to them. Feb 2018 is the deadline.

f) Here's a downer. Appendix A2. We all know there was an escape hatch for those caught with SSL and early TLS in their applications. They created mitigation documents, which may or may not be true. Just saying. Now, this route is no longer a free pass for your ASV scans or vulnerability scans. If you still have these protocols in place, your mitigation plan must fully address the A2.2 requirements. And if you are a service provider, take note of A2.3: you MUST have a secure service option in place by June 30, 2016! Not 2018. 2018 is when you stop using SSL/early TLS entirely. So this timeline is slightly confusing. Like X-Men: Days of Future Past confusing.
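Back to the cryptographic documentation in (b): purely as an illustration, here is a minimal sketch of what a single key inventory record might capture. The field names are our own invention – adapt them to your key management procedure document and the NIST SP 800-57 concepts.

```python
from dataclasses import dataclass

@dataclass
class CryptoKeyRecord:
    """One row of a key inventory for the new documentation requirement.
    Field names are hypothetical - adapt to your own procedures."""
    key_name: str        # e.g. "DEK-ECOM-01" (made-up identifier)
    usage: str           # what the key protects: "PAN at rest", "TLS", ...
    algorithm: str       # e.g. "AES-256"
    strength_bits: int   # effective key strength
    cryptoperiod: str    # e.g. "1 year", per NIST SP 800-57 guidance
    custodian: str       # named key custodian
    protected_by: str    # the KEK or HSM that holds or wraps this key
    hsm_serial: str = "" # populate for HSM-resident keys (the inventory part)

inventory = [
    CryptoKeyRecord("DEK-ECOM-01", "PAN at rest", "AES-256", 256,
                    "1 year", "Jane Doe", "KEK-MASTER-01"),
    CryptoKeyRecord("KEK-MASTER-01", "Encrypts DEKs", "AES-256", 256,
                    "2 years", "Jane Doe", "HSM partition 1", "HSM-SN-12345"),
]
for rec in inventory:
    print(f"{rec.key_name}: {rec.algorithm} ({rec.strength_bits}-bit), "
          f"cryptoperiod {rec.cryptoperiod}, protected by {rec.protected_by}")
```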

Some main clarifications include:

a) Secure code training now officially needs to be done annually – you don't want to know how much pushback I get when I tell clients it's annual, and not something done whenever they have the budget for it (which is never).

b) Removal of the need to interview developers to 'demonstrate' their knowledge – I do a bit of programming, but I'd be foolish to think I can go up against a senior developer who eats, breathes and … lives for coding. I've seen some younger QSAs struggle awkwardly with this (determining whether the senior dev guru is good enough), when it's obviously not something they know much about. Let auditors audit and let developers code.

c) Finally, a note was added to Requirement 8 to clarify that its authentication requirements do not apply to cardholder (consumer) accounts, only to administrative and operational/support/third-party accounts. We have always practiced this anyway, but now it's clear.

d) More clarifications on addressing vulnerabilities considered 'high' or 'critical'. I am not a big fan of this distinction. I think every vulnerability should eventually be addressed, just prioritised in terms of timing. Even if it's low or medium, it's still important to have a mitigating factor for it. There is a reason it's called a vulnerability and not something you can sweep under the carpet.

e) A good note on pentesting in 11.3.4c – testing now needs to be done by a qualified internal or external resource with organisational independence. Again, we already practice this, but it's good that it's now official.

So, that's about it. Of course, there's a fair bit more. I suggest you poke through the summary of changes first and then go through the document itself.

Be aware of those dates! They are all over the place (June 2016, June 2018, Jan 2018), and who knows, they might change in the future. Have a happy compliance.


Deployment of Alienvault in Practice Part 1


In this article, we are going to explore deploying AlienVault in practice. While there are many documents out there giving pretty clear steps on what to do, they are pretty scattered, and we don't want to get 85% into the deployment only to find we were supposed to do something at the 25% mark and didn't.

Before anything else, you should have a deployment checklist to make sure everything is in order. The checklist is pretty long – much too detailed to put into a post like this. Email us at alienvault@pkfmalaysia.com and we can get you started.

In this example, we will be using a three-piece band: the server, the sensor and the logger. You can generally trade the server for an AIO – which we did – but it will still serve as the server. Remember though, with an AIO you also get an additional sensor if you want to enable it, and a logger as well, with around 4 TB of compressed space (vs 9 TB of compressed space for a standalone logger).

With that out of the way – and assuming everything is physically racked and connected and the VMs are up and running – you are ready to go. Remember, if you have separate systems, always start with the server (or the AIO) first, and only then move on to the sensor. Otherwise, your sensor might be orphaned.

Now, of course, if you are using the virtual appliance, your VMware needs to be set up. One question we encountered: how many interfaces should we have? You should have the management interface (which also handles log collection), and the other interfaces would be for monitoring. One of the trick questions here is: hey, I want a separate management interface and log collection interface, so that nobody knows my management interface.

Possible. But we have seen deployments where both the management interface and the log collection interface sit on the same subnet. This is probably going to cause issues – for one, routing will likely get screwed up. For another, HIDS deployment might keep referring back to the management interface. So, rule of thumb:

If you only have one subnet, just use the one interface for management and log collection.

Another question: by default, the AIO comes with six interfaces (because, remember, it's also a sensor!). Some clients have it in their minds to use all six. Generally, aside from management and logging, the other interfaces won't be assigned IPs and will be monitoring interfaces (i.e. put them on SPAN ports and monitor away). Unless you have a very specific reason to, you are unlikely to use all the monitoring interfaces (depending on how you set things up), so don't feel like you are losing out. A lot of the setups we see simply have the sensor or AIO located at a central switch with a SPAN or TAP, and they monitor just fine.

Another question: thin or thick provisioning for the disk format? We are used to setting it as thin, meaning the disk just grows as the logs increase, but if you have the space, thick is fine too. I am not a VMware guru, and I am sure the VMware gurus out there will go into battle over this one, but we've deployed on both disk formats and it doesn't seem to have an extreme impact either way. Of course, I stand to be corrected.

Yet another question (even before we get into deployment!): if I buy hardware with 200 TB of disk, can AlienVault use all 200 TB instead of the measly 1 TB for the AIO and 1.8 TB for the logger? The short answer is no – the size of the virtual machine is fixed in the OVF itself, so if you purchase a ridiculous amount of disk space, the AlienVault image is still only going to occupy what it is going to occupy. But hey, you could host other virtual systems there and use the space up!

Setting up the server

1) OK, finally, let's get down to it. Once you boot up – assuming you have deployed the OVF correctly if you are running the virtual appliance – you will be dropped into the setup menu. Select manual network interface configuration and define an IP. I would suggest this as opposed to depending on a DHCP server. Aside from that, the other setup parameters are what you should expect and should be easy enough to fill in.

Now, one of the annoying things we sometimes face is that after the initial setup reboots, we get stuck at that AlienVault splash screen that keeps loading while nothing happens. To be safe, when you reboot, just keep pressing ESC till you see the boot details. If you are still stuck, Alt+F2 might let you escape. Otherwise, you might need to give it the good old Vulcan Nerve Pinch (Ctrl-Alt-Del).

Other times, you might be stuck at the VMware console with an annoying "Waiting for connection" message that seems to hang. Your system is fine; the VMware console is just moody. Restarting your vSphere client might do the trick.

Once you can SSH into your box, you are confronted with a login screen, and once logged in, you need to change the root password. Don't forget it!

After that, register your appliance. If you are running an AIO/server/logger, I would suggest doing the online Web UI registration – obviously you will need internet connectivity. You can copy and paste your product license key once you access the Web UI, as there is an option for this in the Free Trial screen. After that, set up the admin user and password. There is an offline technique as well, and if you are in the mood to type out the entire license, you can also do it from the alienvault menu itself.

After this is done, set up the hostname. You do this from the alienvault setup menu: select System Preferences -> Configure Hostname.

Make sure you apply all changes, then go ahead and reboot the appliance from the menu itself.

Another important thing is to change the time zone. After the reboot, head over to System Preferences -> Change Location -> Date and Time -> Configure Time Zone. Select where you are and apply all changes.

Likewise, you might want to use an NTP (Network Time Protocol) server. In the same Date and Time menu, select Configure NTP Server, enable it, and put in the NTP hostname (if you have DNS defined) or IP. Apply everything.

Now, this might be a good time to check on the Linux box that your time is correct. Jailbreak your system (i.e. drop to a shell from the alienvault menu) and type 'date' – you should see the change.
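If eyeballing 'date' feels unscientific, here is a minimal sketch that queries an NTP server directly and reports your clock drift, using a raw SNTP request from Python's standard library. pool.ntp.org is just an example – substitute the NTP server you configured above.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"    # example - use your own NTP server here
NTP_EPOCH_OFFSET = 2208988800  # seconds between the 1900 (NTP) and 1970 (Unix) epochs

def ntp_time(server: str) -> float:
    """Send a minimal SNTP request and return the server time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # Transmit Timestamp (seconds since 1900) sits at byte offset 40
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

offset = ntp_time(NTP_SERVER) - time.time()
print(f"Local clock is off by {offset:+.1f} seconds")  # seconds: fine; minutes: not fine
```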

Likewise, go to the Web UI, log in and click on Settings at the top right. Make sure the time zone for that user is properly defined. Now check the SIEM view (Analysis -> SIEM) in the Web UI – you should see dates in whatever timezone you defined.

Timestamping is obviously a big deal in any SIEM, and beyond the areas above to be wary of, know that individual plugins also have timezone options. This is helpful if a data source changes timezone and we have to accommodate it.
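To make that concrete, here is a toy sketch of the normalisation a collector has to perform: a classic syslog timestamp carries neither a timezone nor a year, so the plugin's timezone option tells the SIEM what to assume before it converts everything to UTC. The timestamp and zone below are made up; zoneinfo needs Python 3.9 or later.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Classic syslog timestamps carry no timezone or year - the collector must be
# told what to assume, which is exactly what the plugin option configures.
raw = "Apr 20 09:15:32"                    # hypothetical log line timestamp
source_tz = ZoneInfo("Asia/Kuala_Lumpur")  # what the plugin is configured to assume

local = datetime.strptime(raw, "%b %d %H:%M:%S").replace(year=2016, tzinfo=source_tz)
utc = local.astimezone(timezone.utc)
print(f"source says {local.isoformat()} -> stored as {utc.isoformat()}")
# If the data source moves timezone, only this one assumption needs updating.
```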

It looks like the server is all set. If you have an AIO, you should now also see your IP address under Configuration -> Deployment -> Sensors / Servers – because you are both a sensor and a server.

Next, we will look at setting up the sensor and logger.


Advisory on Badlock Vulnerability


This is a security advisory on the Badlock Bug.

What is Badlock?

Samba is an important component for seamlessly integrating Linux/Unix servers and desktops into Active Directory environments. It can function both as a domain controller and as a regular domain member. On April 12th, 2016, Badlock, a crucial security bug in Windows and Samba, was disclosed. The vulnerabilities can mostly be categorised as man-in-the-middle or denial-of-service attacks.

Man-in-the-middle (MITM) attacks:

There are several MITM attacks that can be performed against a variety of protocols used by Samba. These would permit execution of arbitrary Samba network calls in the context of the intercepted user. Examples of the impact of intercepting administrator network traffic:

  • Samba AD server – view or modify secrets within an AD database, including user password hashes, or shutdown critical services.
  • Standard Samba server – modify user permissions on files or directories.

Denial-of-Service (DoS) attacks:

Samba services are vulnerable to denial of service from an attacker with remote network connectivity to the Samba service. On the Windows side, Microsoft has addressed Badlock in MS16-047: the SAM and LSAD downgrade vulnerability can allow a man-in-the-middle to impersonate another user against applications that use the SAMR or LSAD protocols. All supported versions of Windows are affected.

Who is Vulnerable?

Samba running on Linux/Unix systems, versions:

  • 3.6.x,
  • 4.0.x,
  • 4.1.x,
  • 4.2.0-4.2.9,
  • 4.3.0-4.3.6,
  • 4.4.0

Windows

All supported editions of Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8.1, Windows Server 2012, Windows Server 2012 R2, Windows RT 8.1, and Windows 10.

Associated CVEs

Badlock for Samba is referenced by CVE-2016-2118 (SAMR and LSA man in the middle attacks possible) and for Windows by CVE-2016-0128 / MS16-047 (Windows SAM and LSAD Downgrade Vulnerability).

There are additional CVEs related to Badlock. Those are:

  • CVE-2015-5370 (Multiple errors in DCE-RPC code)
  • CVE-2016-2110 (Man in the middle attacks possible with NTLMSSP)
  • CVE-2016-2111 (NETLOGON Spoofing Vulnerability)
  • CVE-2016-2112 (LDAP client and server don’t enforce integrity)
  • CVE-2016-2113 (Missing TLS certificate validation)
  • CVE-2016-2114 (“server signing = mandatory” not enforced)
  • CVE-2016-2115 (SMB IPC traffic is not integrity protected)

How to check if server is vulnerable?

A server is vulnerable to Badlock if:

  • It is running any of the above-mentioned versions of Samba.
  • For vulnerable Windows versions, refer to the following link:

https://technet.microsoft.com/library/security/MS16-047
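On the Samba side, a quick local first pass is to compare the installed version against the vulnerable ranges listed above. A minimal sketch, assuming smbd is installed and on the PATH (smbd -V prints something like "Version 4.2.3"):

```python
import re
import subprocess

# Vulnerable ranges from the list above (inclusive); x.y.99 stands in for "any 3.6.x" etc.
VULNERABLE = [
    ((3, 6, 0), (3, 6, 99)),
    ((4, 0, 0), (4, 0, 99)),
    ((4, 1, 0), (4, 1, 99)),
    ((4, 2, 0), (4, 2, 9)),
    ((4, 3, 0), (4, 3, 6)),
    ((4, 4, 0), (4, 4, 0)),
]

try:
    out = subprocess.run(["smbd", "-V"], capture_output=True, text=True).stdout
except FileNotFoundError:
    out = ""  # no smbd on this box

match = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
if match:
    ver = tuple(int(x) for x in match.groups())
    hit = any(lo <= ver <= hi for lo, hi in VULNERABLE)
    label = "VULNERABLE - patch now" if hit else "outside the vulnerable ranges"
    print(f"Samba {'.'.join(map(str, ver))}: {label}")
else:
    print("Could not determine the Samba version (is smbd installed?)")
```

One caveat: distributions often backport security fixes without bumping the upstream version number, so confirm against your distro's own advisory before panicking.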

How to fix

For the Samba service running on Linux/Unix systems, apply the patches provided by the Samba team and SerNet (for EnterpriseSAMBA / SAMBA+) immediately.

Patched versions are (both the interim and final security releases contain the patches):

  • 4.2.10 / 4.2.11,
  • 4.3.7 / 4.3.8,
  • 4.4.1 / 4.4.2.

For Windows installations, refer to the following link for patch details:

https://technet.microsoft.com/library/security/MS16-047


References and Useful Links

http://badlock.org/

https://www.samba.org/samba/latest_news.html#4.4.2

https://www.samba.org/samba/security/CVE-2016-2118.html

https://technet.microsoft.com/library/security/MS16-047

For more information or a vulnerability scan, please contact us at avantedge@pkfmalaysia.com.

The Myths of the Top 10 Myths of PCI-DSS Part Two


Continuing where we left off yesterday, let's jump right into the next myth.

Myth 6 – PCI requires us to hire a Qualified Security Assessor

Technically, this one really is a myth – you are not always required to hire a QSA. Once again, for merchants level 3 and below, SAQs are good enough for compliance. Here's how it works: the merchant completes an SAQ, management signs it off, and they pass the Attestation of Compliance (AoC) to whoever is asking – generally either the acquiring bank or the payment gateway. Some of these SAQs are easy; working out which SAQ applies is a little more work. While we are not going into SAQs in this article, a quick comparison of SAQ A (mainly for e-commerce merchants that outsource all processing functions) and SAQ D-MER (generally for merchants who store, process and transmit card data): 14 questions for SAQ A vs 326 questions for SAQ D-MER. That's right. It's 23X more work.

So while you are not strictly required to hire a QSA, most merchants facing SAQ D-MER do not have the capacity to do it themselves, and hence need expertise from QSAs or consultants outside the company. What about the Internal Security Auditor (ISA) option?

Here's where it gets a little strange. In 2012, MasterCard released a statement:

“Effective 30 June 2012, Level 1 merchants that choose to conduct an annual onsite assessment using an internal auditor must ensure that primary internal auditor staff engaged in validating PCI DSS compliance attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue to use internal auditors.”

And

“Effective 30 June 2012, Level 2 merchants that choose to complete an annual self-assessment questionnaire must ensure that staff engaged in the self-assessment attend PCI SSC ISA Training and pass the associated accreditation program annually in order to continue the option of self-assessment for compliance validation. Alternatively, Level 2 merchants may, at their own discretion, complete an annual onsite assessment conducted by a PCI SSC approved Qualified Security Assessor (QSA) rather than complete an annual self-assessment questionnaire.”

What they are effectively saying is that level 1 to 4 merchants CAN opt not to engage a QSA, but the caveat is that for levels 1 and 2, the assessment needs to be 'validated' by internal auditors. Not just any internal auditors, but auditors certified as "ISA" by the PCI Council. Yes, it's a certification created to sign off SAQs.

If you do not have an ISA, you are stuck, and you will need a QSA to validate your SAQ. In most cases, having a QSA validate is as much work as having them certify the environment, so you do end up ‘hiring’ a QSA to validate it.

Why don't we all jump on the ISA bandwagon then?

Well, you need to cough up around USD 500 for the PCI Fundamentals course, then around USD 3,000 – 4,000 for the ISA course, and then USD 1,000 every year as a requalification training fee. Only companies going through PCI-DSS can have ISAs, so if you are a consultant like us, you are out of luck.

Large merchants might want to invest in an ISA. But a note of caution: the ISA status is NON-transferable. If you are an ISA for Company A and you move to Company B, your ISA status does not move with you. If Company B wants you as their ISA, you go through the entire course again. Yes, even the Fundamentals course.

It is certainly less expensive to get an ISA to validate your SAQ compared to having an external QSA, so large merchants might opt to have one or two ISAs in their stable and invest in them yearly.

Myth 7 – We don’t take enough credit cards to be compliant

PCI likes to state that even if you take ONE credit card, you are supposed to be PCI compliant. But honestly, unless that one credit card transaction is to buy a Bugatti Veyron, the acquirer is not likely to come knocking on your door asking you to become PCI compliant. The theory is that everyone who deals with credit cards will happily invest the time to go through the SAQ and the 12 requirements. The reality is starkly different. Businesses have 600 different things to look into daily, and most turn a blind eye to PCI as long as there is no burning platform or pressure from above. The card brands push the acquirers; the acquirers push the payment processors, gateways and large merchants; and the payment processors push their service providers. Somewhere down the line, the little travel agency around the corner that collects credit card information – jotting down the PAN and CVV in a log book so they can book flights online on behalf of the customer – is overlooked. As long as there is no massive exercise to push everyone to be PCI compliant, there will be organisations operating outside the PCI requirements. Yes, your CVV will still be kept in a log book by that little travel agency – still oblivious to why storing CVV is such a big deal.

Myth 8 – We completed a SAQ so we’re compliant

Well – technically, you are. Then again, "being compliant" is not really an end state in itself. How can anyone sustain compliance 100% of the time? When Target was breached, they had just been re-certified as compliant. Hence, "compliant" is generally just used as a punchline for businesses. For instance: an e-commerce outfit starts an online payment system. They register with an acquirer; the acquirer tells them to be 'PCI compliant'. They finish their SAQ and submit it. The acquirer is happy with the sign-off and allows them to connect. The merchant proudly displays a "PCI Compliant" logo (which is not allowed, by the way) prominently on their website. They have successfully completed an SAQ, and they are 'compliant' because the acquirer tells them they are. If they were not compliant, they wouldn't be able to connect. The fact that this is how it works shows that Myth 8 is actually true!

Myth 9 – PCI makes us store cardholder data

It's true that PCI would rather you NOT store cardholder data. But this myth doesn't make any sense. Businesses don't shape their processes around PCI; it is because of existing business processes that PCI applies in the first place. It's up to the business whether to store, transmit or process cardholder data. Nobody goes into PCI-DSS saying: oh, because of PCI-DSS we now need to store card data and invest in HSMs, key management, encryption and so on. Because of PCI, we now need to have a payment business? I have never seen such a client. It's always the other way around. Based on your business, PCI might or might not apply.

Myth 10 – PCI is too hard

This is the same argument as Myth 5. The PCI SSC makes a good point in saying these are good practices to have in place regardless of PCI-DSS compliance. The myth is here because they are effectively claiming PCI is not hard, since you should be practicing good security in the first place. But to many, good security IS hard! Staff turnover, zero-day attacks, business-as-usual priorities, advancing technologies, software and hardware going obsolete, pressure from management, costing issues, new vulnerabilities and exploits discovered (and not yet discovered) – and the fact that in the cybercrime world the bad guys are miles ahead of the good guys – security is hard, make no mistake about it.

So there you have it. You would think, with a post like this, that PCI-DSS is a fruitless endeavour. Far from it. It's an excellent repository of security practices that all organisations should consider. While some of the standards in there show their age (antivirus, anyone? Please.), overall it's one of the more direct, implementable standards we have experienced (compared to the labyrinth we know as ISO27001). The point of this post is to clarify that standards in practice can turn out quite different from standards in documentation.

Now – should you check if your CVV is stored by your travel agency?

The Myths of the Top 10 Myths of PCI-DSS


A while back, the PCI Council published a good article called the Ten Common Myths of PCI-DSS, basically to debunk a few conclusions people might have (or so the Council thinks) about PCI-DSS.

After wading deep in this standard for the past six years, I am taking another look at these myths and thinking: wait a minute, this isn't exactly correct.

Myth 1 – One vendor and product will make us compliant

This myth is hard to beat. Obviously no single vendor and product makes anybody compliant, since PCI-DSS is so much more than a product or an implementation – it's the practice of security within the organisation itself. But wait. Not all PCI projects are created equal, and here's where what we call 'scope reduction' comes into play. A product can reduce scope so significantly that compliance becomes almost easy. Take tokenization: a solution created to remove the need to handle and store actual card data in the merchant environment. Instead, a token is used, mapped to the card data in a token vault run by a service provider; the merchant holds neither the vault nor any means to recover the PAN. Of course, the merchant still handles card data the first time it is transmitted through, but tokenization removes the need to store it afterwards – and with it, the dread of the full SAQ D-MER. Or take P2PE: when it started, the idea was point-to-point encryption so that merchants need not manage keys or be able to decrypt the data. Of course, P2PE bombed, and they had to revise the standard to make it more realistic.
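To make the tokenization idea concrete, here is a toy sketch – ours, not any vendor's – of the division of labour: the vault on the service provider side maps random tokens to PANs, and the merchant only ever stores the token.

```python
import secrets

class TokenVault:
    """Toy token vault - the service provider side. A real vault encrypts its
    storage, lives in the provider's CDE, and is audited to high heaven."""
    def __init__(self):
        self._vault = {}  # token -> PAN

    def tokenize(self, pan: str) -> str:
        token = secrets.token_urlsafe(16)  # random, so the PAN cannot be derived from it
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]  # only the provider can do this lookup

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # the well-known Visa test PAN
print("merchant stores only:", token)       # useless to a thief without the vault
```

Because the token is random, there is no key the merchant could hold to reverse it; only a lookup in the provider's vault gets the PAN back. That is the scope reduction in a nutshell.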

Myth 2 – Outsourcing card processing makes us compliant

Again – outsourcing card processing might not immediately make you compliant, but it sure as heck makes it a lot easier to be compliant! With the new revisions of the SAQs, e-commerce merchants have the nicely flavoured SAQ A and SAQ A-EP to deal with, avoiding the death knell of SAQ D-MER. There are nine flavours of SAQ (self-assessment questionnaire, if you are wondering), and merchants differ in their PCI journey depending on their business. Outsourcing to a PCI-compliant card processor or payment gateway is a great way to reduce your scope. So while the myth stands – outsourcing alone doesn't make you compliant – it is still a great strategy if payment processing is not your core business.

Myth 3 – PCI compliance is an IT project

Everyone in the board room and steering committee will nod their head and agree with this one sagely. But from experience I will tell you: whether you call it a business project or an IT project, the IT guys will be significantly involved. If you think you can breeze through this sucker the way you championed through ISO27001, you are in for a little surprise. A large part of the 12 requirements deals with technical matters, from firewall configuration to antivirus to logical access controls. Logging and key management are significant challenges we find, and an entire Requirement 6 deals with patch management and secure coding practices. So if you are not familiar with terms like OWASP, KEK, DEK, WSUS, TACACS/ACS/LDAP, XSS, CSRF and CVE, it's time to get cracking. Having done both ISO27001 and PCI, we can say the ISO is more of a best practice/guideline on security, while PCI is a standard. Either you do or you do not. There is no try.

Myth 4 – PCI will make us secure

I know what this myth is trying to say, but technically, if you are practicing PCI, you are a heck of a lot more secure than someone who isn't. Besides, being 'secure' isn't a final state – it's not possible; rather, it is the constant practice of a hundred different activities that contribute to 'being secure'. It's like 'enlightenment' or 'world peace' – not actually achievable, and even if it were, not sustainable. So yes, PCI will make you secure, relative to the company that keeps its server under the marketing director's desk with the password "PASSWORD".

Myth 5 – PCI is unreasonable; it requires too much

Obviously, the PCI Council wants you to think this is a myth (remember, whatever you read there, the Council is asking you to think the opposite). They are saying: PCI is reasonable; it doesn't require much.

I disagree.

It requires a lot.

I mean, OK, if you are talking about SAQ A or A-EP – fine, agreed, it's a breeze. But SAQ D, or a full certification?

Unless you have unlimited resources, money and time, or are already practicing some level-5 maturity of security, you need to take managing PCI seriously. Most of our clients aren't in that state, so when you talk to them about the amount of work needed? Oh boy.

Let's say they have 40 devices in scope. Multiple applications, running on multiple servers. Several layers of firewalls. Firewall rules need to be clean – which sounds easier than it is; the amount of legacy rules we see at some clients would make you think their firewall has been around since the internet was invented. Server upgrades due to EOL. Network changes because the database is accessing the internet directly. Applications not patched. Devices not updated. Logging not centralised. No correlation of logs or event management. No incident management. No central management of passwords and users. Applications developed eons ago, whose developers have since left the company, the only documentation being a note saying, "Goodbye and thanks for all the fish!". You get the drift.

Myth 5 and Myth 10 (PCI is too hard) are the same. When you put the word "TOO" in there, it brings in relativity. What is "TOO" for some companies? When you have a single administrator running the whole thing, dividing his time between this and a thousand other things, "TOO" might be the key word. So, I won't say a full certification isn't possible – it obviously is, since we have certified a number – but what we don't want is clients walking into PCI expecting to breeze through, and then getting slammed like a deer in headlights. It will take effort. It will take resources. It will take money, and it will take time. Yeah, time: around 3 – 4 months for a full certification, if you are lucky. I often get a stunned look of disbelief and a general retort that goes: "<insert invectives here>, I thought it would only take 2 – 4 weeks, man."

Look – PCI is really useful and it’s not the intention to discourage people from going for it – but it would be great to be better informed so that expectations can be synced to reality. In the next article, we will cover the remaining myths of PCI.
