
PCI Delta Assessments


Let’s start off by saying this isn’t a way for us to make light of the current situation by using the word ‘Delta’ here. We all know how dangerous and virulent the current strain of COVID is and this isn’t a matter of writing an article simply to get a search hit on that word.

That being said, this is a topic that seemed a bit obscure, even to us who have been doing PCI-DSS for more than a decade now.

So the question that sometimes pops up would be: Great, we got our PCI-DSS certification, everyone is celebrating and patting each other on the back. Two weeks after our AoC/RoC has been produced, our product management rolls out a new Application XYZ which deals with credit card information, along with a new environment, database, systems etc. Is this Application XYZ included in our current PCI-DSS certification or not?

It’s a good question. Because the fact is that many view PCI-DSS as a point-in-time audit, whereby the audit is done at a certain time and not over a period of time. One might argue that during the audit itself, sampling is done over a 12-month period, therefore it cannot be categorised as a strictly point-in-time assessment. Regardless of how you categorise it, at the end of the audit there is the big result: a compliant AoC/RoC pair. Don’t get us started on the dreaded Certificate of Compliance or CoC, or CoC-n-Bull in our terms. Enough of that certificate nonsense. As for the AoC/RoC pair, the scope is stated clearly in it, defining the audit scope, the boundaries, the applications scoped in, locations etc. So this is great. When we get a new application on board, we just add that application into the AoC, right?

Right?

Unfortunately, at this point, the QSA will say, not really. Once the AoC is out, it’s out. Unless you want to re-do the audit or to recertify, then yes, that new application can be added in.

Now, we’ve faced such a situation before. And in fact PCI-DSS addresses it nicely in this wonderful piece of work: https://www.pcisecuritystandards.org/documents/PCI_DSS_V2.0_Best_Practices_for_Maintaining_PCI_DSS_Compliance.pdf

In item 3.10.3 it states:

Any change to the network architecture or infrastructures directly related to or supporting the CDE should be reviewed prior to implementation. Examples of such changes include, but are not limited to, the deployment of new systems or applications, changes in system or network configurations, and changes in overall system topologies.

PCI reminding us to stay focused!

So in this case, Application XYZ falls under a new application. The point of PCI-DSS is that just because you deploy a new system, a new firewall or a new application doesn’t mean you are no longer compliant with PCI-DSS. After all, PCI encompasses practice and process as well, so the council understands, and advises, that these changes be run through your PCI program and PCI processes to ensure the environment stays compliant. So in short, if you have Application XYZ coming in, make sure the PCI controls apply to it; it will then be reviewed under the next audit and included in the PCI AoC of the coming year. Let’s just update the current AoC and we all go home now, right?

Right?

But wait, you aren’t listening, says the auditor, you still can’t update the current AoC. The AoC is already fixed for that year, unless you want to do an audit. Again. Like a month after you have done and dusted your recertification audit for that year.

In most cases, these changes for our clients go through the maintenance cycle without an issue and the following AoC simply gets updated to include it. But what if the customer insists on having the CURRENT AoC updated? This could be due to requirements from their clients, regulators or what not. How do we put that application into the current AoC without spinning off the whole audit all over again?

In short, you can’t. You either wait it out for next year’s audit OR you re-do your certification audit and nullify the previous one. However, this is where that little obscurity comes in: the delta assessment.

Now I’ve heard of delta assessments for PCI, but they are almost invariably related to PA-DSS (SSF now), PCI PTS or P2PE, where basically vendors who had completed, let’s say, their SSF can validate low-risk changes to their application and do a delta assessment. In PTS, the delta is done by the PTS lab, but for SSF, the SLC vendor can basically do a self-attestation. However, we don’t see any such item or recourse for PCI-DSS.

Discussing with the auditors, we find that indeed a delta assessment can be done, although it is rare and not exactly cost effective, since whatever the delta covers would only have a short lifespan before the changes get swallowed up by the main PCI program once the yearly audit cycle rolls in. That’s why we rarely see this done. Then again, I rarely see a tapir doing a jig in a tutu, but that doesn’t mean it doesn’t exist.

So what happens is that the auditor will formally audit this application and its environment and go through the certification process as would normally be done – except that it is limited to this application and its systems. Once assessed, a formal delta AoC/RoC pair is released to supplement the existing AoC/RoC pair. And that’s it: these supplementary documents can then be shown together with the current AoC/RoC for verification purposes and, in the next cycle, consolidated back into the main RoC.

Now, this is fairly new to us. The logic of it is still somewhat beyond us, because the whole point of PCI is for an environment to be able to handle changes and not have to be audited every time a significant change occurs. Every audit is costly, and I’m sure every organisation already has its hands full trying to sort out budgets during these times without worrying about delta assessments.

The above is basically what we gather from discussions with auditors and not really from experience, because in the end, once the proposal was put out, our client thought better of it and decided not to pursue it. So really, it’s still in the realm of theory and we may not be accurate in our assumptions. However, it’s still something interesting to keep in mind; though rare – like the tapir in a tutu – it helps to know that this option does possibly exist.

Drop us a note at pcidss@pkfmalaysia.com and we will try to address all your concerns on PCI or other compliance matters like ISO27001, ISO20000 etc!

PCI Pentesting and ASV Scans

Back in the day (as in when we started PCI more than 10 years ago), when it came to testing and scans, there were probably very gray lines around it. We saw a lot of reports come out under the guise of ‘penetration testing’ that were lifted straight out of an automated Nessus scan or one of the free Acunetix scans available. The problem was exacerbated when these penetration testing reports were then accepted by regulatory bodies like our banking regulator and passed by other internal/external auditors. They basically just looked at a report, and if it sounded and looked technical enough, then it was technical enough.

Now, PCI got the hint and released a few versions of the Penetration Testing Guidance document, the latest iteration in 2017. A big part of it talks about scoping, clarifying qualifications and Requirement 11. But one of the key features of the document is highlighted in section 2.1.

This came about to stem the misconception that as long as you have completed a vulnerability scan, you can pass it off as a penetration test. We still see customers going down this route, in whatever creative ways they can conjure to avoid the penetration testing exercise.

An example was this response in their external PT report, stating:

“We have conducted the PT exercise based on the recently passed ASV scan report by the QSA. Since the ASV scan has passed, the penetration testing report is also considered to be passed as there are no vulnerabilities to test.”

Which is basically the philosophy that as long as the scans do not yield any high or medium vulnerabilities, i.e. a passing scan, there is no longer a need to conduct any penetration testing. Their concept was simple and fairly understandable: since there are no “vulnerabilities” in the scan, there is nothing to ‘test’.

Of course, this was rejected by the QSA.

While there are many arguments on this matter, the simple case against it is this: the scan produces potential vulnerabilities, and may even miss some that go unreported. False negatives exist even in commercial scanners such as Qualys or Nessus (two common automated scanners). Additionally, a passing scan does not mean there are no vulnerabilities; it just means there are no medium/high vulnerabilities based on a scan with no context of the environment. By non-contextual we mean that most scanners use internal libraries in their scanning database to categorise vulnerabilities, without any notion of the actual risk in the environment being scanned. So equating a CVSS score to the actual risk of the organisation may be too broad an assumption, as some low vulnerabilities may still be exploitable manually. The classic example: we check a simple form entry password and find it is well protected and well designed, technically. However, a pentester may then go out to the organisation’s forum and discover that the admin regularly keeps a password file in Google Drive and inadvertently shares it with the entire world. The scanner won’t discover things like that.

Therefore, simply stating that a passing ASV scan equates to a passing penetration test is not going to get a free pass in PCI.

Another question that many organisations come back to us with, when they have their own team of penetration testers doing internal testing, is: well then, how do we do a penetration test, if you say we cannot use the ASV report to also pass our external penetration testing?

And it may seem odd when I look at them and answer: wouldn’t your penetration testers be able to answer that, instead of us? From the auditor’s perspective, we look at 3 things: Tools, Technique, Team.

The tools being used are important, but they are not all there is to a pentest. Just stating you have Kali or Metasploit doesn’t necessarily mean you know how to operate them. Technique (or method) is important to document. This is key for PCI and a key difference between hackers and pentesters. A pentester knows how to document each step, inform the client, and restore the environment to normal rather than destroy it. A hacker (or, to use the more correct term, a cracker) would simply go in and cause as much damage as possible, depending on his or her objective. You would rarely come across crackers writing detailed reports and documents for their victims, complete with executive summaries to the Audit Committee justifying their methods, the scope of coverage and the time and date of engagement. And finally, PCI looks at the personnel (or team) conducting the exercise. They may be certified (or not), but they should at least be qualified. In this case, if the pentester has no idea how to start a pentest, then the normal assumption would be: he’s not a pentester. A chef doesn’t ask people how to start cooking. He may require an input or two to understand what he needs to cook, or how spicy the broth should be for the customer; but if he’s asking how to start the cooking process or what a wok is, that should be a red flag.

So, while covering everything about penetration testing and vulnerability scanning in that document is not the purpose of this article, it is keenly important to know the difference between the two (penetration test vs vulnerability scan), and not use one to justify the inaction of the other. Your QSA may bounce back that vulnerability scan attempting to disguise itself as a penetration test, and you will waste precious compliance timeline in the process.

Drop us a note at pcidss@pkfmalaysia.com for any queries you have for PCI-DSS or ISMS and we will get back to you straight away! Stay Safe!

The Biggest (Real) Myths of PCI-DSS: Part 2


So, continuing the Real Myths of PCI-DSS, let’s move down the list.

Real Myth 5: All PCI-DSS services must be outsourced

Now, this is a very important myth to clear up, because it directly relates to what is usually the biggest concern of all: cost. A while ago, we provided an idea of how to cost PCI-DSS, breaking it up into certification/advisory costing and implementation cost. While the certification-advisory cost is easier to gauge based on locations, processes, card storage and activities covered, implementation cost is harder to gauge. Because, number one – you don’t know your scope yet. This means you may have 10, or you may have 200, systems in scope; you don’t know. Some go, “Ah, but we know, because we have already decided our scope!” and we go, “Ah, but that’s Real Myth 7, that you can decide your own scope…read on, intrepid adventurer of PCI!”

In any case, one way to cap or save cost is to in-source your work, i.e. have your own people provide the implementation services. There is no such thing as a “PCI-certified” company required to do the implementation services. All services – except for ASV scans – can be performed on your own, if you are qualified enough to do them (more on that later). I’ll throw in some services that, for a typical PCI project, are a must: penetration testing, internal vulnerability assessment, secure code review and code training, patching, logging and monitoring and daily review of logs, card data scans, application testing, systems hardening, segmentation penetration testing, encryption, key management etc. These are fairly typical activities you will find in PCI – and you can do them all on your own if you have the resources and knowledge to do so. So don’t feel cornered by any firms or consultants stating that these services must be done by them in order to pass PCI-DSS!

Real Myth 6: All service providers MUST be certified to do implementation services

This is an extension of Real Myth 5. Once a company decides to outsource its PCI services – in the case where it does not have the resources to do them internally – it goes about requiring “PCI qualified” service providers to do these services. We’ve seen this before, where the requirement was to be a “QIR – Qualified Integrator and Reseller” in order to do services like penetration testing and code reviews. QIR wasn’t created for that. QIR was created for implementing merchant payment systems and has nothing to do with the services mentioned. Aside from that, there is a growing call for PCI services to be performed only by “Certified Penetration Testing Companies” with CREST, or by individuals with certifications like Certified Ethical Hacker etc. Now, while these are all well and good, and certainly mentioned even by PCI-DSS as guidance in selecting your vendors, they are by no means a requirement of the standard. Meaning, the QSA cannot insist that all your testing be done by the above-said certified entities if you have ready, qualified and experienced personnel on your end to do it. Again – this doesn’t mean any Tom, Dick and Harry, Joe and Sally can perform testing or activities in your environment. The above certs and qualifications obviously carry weight, and if you compare an organisation that takes the trouble to go through CREST against a company that was set up two days ago with 2 testers working in Elbonia – which one you should prefer, and which one the QSA will have less of an issue with, is pretty obvious. What I am stating here is that we’ve seen many veterans who are far more efficient and experienced in systems and security testing than we can ever hope to be, and who, for whatever reason, don’t bother much with the paper chase or certifications.

At the end, the QSA may raise a query on who carried out the test and may choose to check the credentials of the testers, but in most cases, if the testing seems to be in order, most QSAs are OK with it.

Real Myth 7: PCI scope and application of controls can be determined by the customer

This one is my favourite, because it played out like an episode of a slapstick comedy. I was called one day by one of our clients who had a new group handling their PCI-DSS program. You see, we’ve been doing their program for four-plus years and servicing them fine – but the new group handling PCI isn’t well versed in it. It’s frustrating because no matter how many “knowledge transfer” sessions we gave, we still ended up with the same questions. We realised we were stuck in a Groundhog Day scenario, where things never change no matter what we do. The group wasn’t technical, which was an obstacle, but overall I think maybe they just had too many things on their plate.

So on this call, they said they were going to compare our quote to other providers this time around and I said, yeah, that’s fine. They then proceeded to give me a scope to quote and I commented, “Hold on, this is the wrong scope. This is the list of assets from two years back. You have since changed your scope, and there is a new list of assets under scope for PCI.”

From there, the proverbial excretion hit the fan. They demanded to know how I knew their scope. I said, well, we helped you guys work it out. Your operations team is aware of it; every year we help you validate your scope (as per PCI-DSS guidance). And they went: “Why must the scope come from you? We are the owners of the environment and the project, so we decide the scope!”

Aha. This is where our points diverge. You see, while the organisation does have the overall responsibility of setting the scope for PCI, PCI-DSS also has a guidance document, “Guidance-PCI-DSS-Scoping-and-Segmentation”, that defines how that scope should include assets and networks, and therefore affects how and where services should be implemented. So, for illustration:

Company A says, “Well, we have a payment gateway and a payment switch business. We also have a call center and a merchant business that accepts credit cards through kiosks or direct POS acceptance in our outlets. Now, getting our merchant environment to be certified is going to be a pain. We have decided to just certify our payment switch environment which is isolated in a cloud, and not related to our payment gateway at all which we are just about to launch a few months from now, so there are no transactions yet.”

So there you go, Company A has set their scope, and from the outset it kinda looks fine. Yeah, if these are all isolated environments, it’s OK. In any case, in the Report on Compliance, the QSA would detail any services offered by the company that are NOT assessed, making clear which of the company’s services are NOT PCI compliant.

However, what Company A cannot decide are the services and assets involved in their scope. There is a method to scoping defined by PCI-DSS, and we have written about it at length in this article here. There are a few ways to minimise the scope through segmentation and so on, but, for instance, if you run a flat network and insist on keeping it flat, then everything within that network comes into scope – be it your payment gateway, your merchant business servers, your call center laptops etc. So you can ‘define’ your scope, but what gets sucked into that scope for hardening, pentesting, patching and all the other PCI controls – that is already defined by PCI. We just have to identify the assets, systems and networks that get pulled in. PCI is like a giant vortex or black hole. Everything that sits on the same network or touches the systems in the CDE gets pulled into scope.

So there you have it. We will be exploring the final 3 Real Myths of PCI soon, but for now, if you have any queries on PCI-DSS, or ISMS or Theory of Relativity and Blackholes, drop us a note at pcidss@pkfmalaysia.com. Till then, be safe!

Alienvault: Working with Decoders and Rules

When we started out with Alienvault years ago, they were just a smallish start-up and we worked almost directly with the engineers and sales team in Cork. Of course, a lot has changed since AT&T took over, but during the early days there was a lot of knowledge sharing done directly between us and them. So much so that if you were to check their partner site, they still list us as the only Malaysian company among their resellers, a holdover from that early listing. What attracted us to the product was that we could lift the hood and see what was underneath. Alienvault (or OSSIM) was previously a hodgepodge of many working parts that were glued together and somehow made to work. The agent was a product called OSSEC, an open-source HIDS. The IDS is Suricata/Snort, and if you look closely at the availability tool, you would see the backend is a running Nagios. NFSen is used for the netflow data display, and PRADS for asset discovery. OpenVAS is the vulnerability scanner and, best of all, they allow you to jailbreak the system, go into the OS itself and do what you need to do. In fact, most of the time we are more comfortable on the command line than in the actual UI itself.

History aside, the downside of bolting together these different applications and getting them all to play nice is that you have to understand the inner workings of each piece.

For instance, if you were to send logs via syslog to Alienvault, you would have to know that the daemon rsyslog (not an Alienvault product) is the one receiving those logs. If you were to use the agent, then the application receiving the logs is different – it’s the OSSEC server that receives them. So it depends on how the logs come in, and from there you can decide what you wish to do with them.

The challenge, oftentimes, is to filter and ‘massage’ the logs when they hit Alienvault. There are a few places this can happen, which we think of as three layers: the client sending the logs, the collection engine (OSSEC or rsyslog), and the Alienvault main engine itself.

The basics are at Layer 1, where the client (server, workstation etc.) sends logs (or has logs collected) to Alienvault. The initial filtering should theoretically happen here if possible. Many applications have the capability to control their logging – Windows Server being one of them. Turning on debug logs on Linux, for instance, would cause a fair bit of log traffic across the network. Applications, too, have options for what to log and what not to log. We see firewalls logging traffic logs and proxies logging every single connection that goes through – this causes loads of logs hitting Alienvault.

AV (especially the All-In-Ones) isn’t designed to take on heavy loads the way Splunk or enterprise SIEMs like ArcSight are – the kind that chew through 100,000 EPS like Galactus chews through planets. The AV approach has always been: we aren’t only a SIEM, we are a unified security management system, so security logs are what we are after. Correlation is what we are after. APTs are what we are after. Their philosophy isn’t to overload the box doing generic business intelligence on millions of log lines, but to focus on security and what is happening to your network. That being said, it’s no pushover either, being able to work with 90 – 120 million events and go through 15,000 EPS on their enterprise offering.

The reality, however, is that most clients just turn on logs at Layer 1 and plow them all over to Alienvault. So it’s really up to Alienvault to start filtering these logs and stopping them from coming in. Layer 2 is what we call the outer layer. This is the front-line defence against this onslaught of logs. It is where the engines running the log collection (OSSEC, rsyslog etc.) can filter and then trickle only what is needed to the Alienvault main engine at Layer 3. The AV main engine also has its own form of defence in policies, where we can create ‘junk’ policies to simply ignore logs coming in and not process them through the resource-intensive risk assessment calculations.

So, we are going to assume that Layer 1 filtering wasn’t done. What we are going to look at is sorting out Layer 2 and we will assume that logs are coming in via OSSEC. We will have another article on Rsyslog filtering because that is a whole different novel to write.

When it hits OSSEC, it comes in via the default port 1514/udp. Now remember, when logs first enter Alienvault, they don’t immediately go into the SIEM event display. They first need to be logged, before they can be turned into events, before they can trigger alarms. So the basic rule is to get them logged:

Make sure you are receiving logs first.

This may seem juvenile in terms of understanding, but we have been through enough to know that no matter WHAT the client says, oftentimes their systems are not even sending the logs to us! A simple tcpdump -Xni eth0 "udp port 1514" will show whether the logs are getting in, so go ahead with that first to ensure you are receiving. Just add "and host <ip address>" to the filter if you need to narrow it down by IP address.
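To make that concrete, here are both forms of the capture side by side – the interface name (eth0) and the host IP are just examples, so swap in your own:

# confirm agent traffic is arriving on the default HIDS port
tcpdump -Xni eth0 "udp port 1514"

# narrow the capture down to a single host you are troubleshooting
tcpdump -Xni eth0 "udp port 1514 and host 192.168.1.62"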

Another option Alienvault allows, when you are getting logs via HIDS/OSSEC, is enabling “logall” in the USM HIDS configuration, which we covered in previous articles here. But be aware that turning on logall will potentially bring a lot of logs and information into the box, so we generally avoid this unless it’s really needed.
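For reference, on a stock OSSEC install the logall switch is a single line in the <global> section of ossec.conf – we are assuming the USM HIDS configuration screen edits the same block, so treat this as a sketch and verify it against your own appliance:

<ossec_config>
        <global>
                <!-- write every event received to /var/ossec/logs/archives/archives.log -->
                <logall>yes</logall>
        </global>
</ossec_config>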

Once you are seeing logs coming into Alienvault – for OSSEC at least – the next thing to do is to move these logs to “alerts.log”, and from there Alienvault can start putting them into the SIEM display.

For this to happen, you need to understand 3 things, aside from the fact that we are currently working on Layer 2 from the diagram above – OSSEC:

a) Decoders

b) Rules

c) /var/ossec/bin/ossec-logtest

The above are actually OSSEC terminologies – not strictly Alienvault ones. What this means is that if you wanted to decouple OSSEC from Alienvault, you could. You can just download OSSEC. Or you could download other products like Wazuh, which is another product we carry. Wazuh runs OSSEC (its own flavour of it) but has a different presentation layer (Layer 3 in our diagram above) and integrates with ELK to provide a more enterprise-ready product, but the foundation comes from the same OSSEC principles. So when we talk about rules and decoders and using the ossec-logtest script to test your stuff, it’s not Alienvault-specific talk. The Alienvault-specific talk comes later, with plugins and such. In the actual ACSE course from Alienvault (at least the one I passed 5 years ago), there is really no mention of decoders and rules – it basically just focuses on the core Alienvault items.

At this point, we need to decide whether to have the filtering done at the OSSEC level (2) or at the Alienvault level (3). As a rule, the closer the filtering is done to the source, the better…however, in our opinion, filtering with Alienvault plugins is a lot more flexible and intuitive in design compared to OSSEC (and because we are biasedly trained in Alienvault, not so much in OSSEC). So for this article (which is taking VERY long to get to its point), we are tasked simply to funnel the logs into /var/ossec/logs/alerts/alerts.log, because that is where OSSEC writes its alerts and where we can get our AV plugins to read from.

The logs in /var/ossec/logs/archives/archives.log (remember, we turned on the logall option in the OSSEC configuration for this illustration) aren’t monitored by plugins, because in a production environment you won’t have that turned on. So, once you have logs in the alerts.log file, you are good to go, because then you can sit down and write plugins for Alienvault to use in the SIEM display.

OK – firstly, decoders. OSSEC has a bunch of default decoders (like plugins in Alienvault) that are able to interpret a whole range of incoming logs. Basically, a decoder is set up with regular expressions to go through a particular log line, grab the information from it and drop it into fields like dates, source IPs etc. – similar to an AV plugin. For this illustration, though, we are not going to use much of OSSEC’s field extraction; we simply want to select the right logs and send them over to the alerts.log file.

So OK, let’s take the previous article’s example of getting MySQL logs into Alienvault. Let’s say we have this example query log coming into our Alienvault (archives.log, since we turned logall on):

2021 Feb 21 00:46:05 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z        28 Query     SHOW CREATE TABLE db.persons

The above doesn’t really offer much, but you can see there is a date and time, the query text and so on, and a decoder will need to be created to parse the incoming log.

Picking up from where we left off at the Alienvault link, Task 4 covers the steps to create the decoder:

a) Edit /var/ossec/alienvault/decoders/local_decoder.xml and add in the following:

<decoder name="mysql-query">
        <prematch> Query</prematch>
</decoder>
<decoder name="mysql-connect">
        <prematch> Connect\s*</prematch>
</decoder>
<decoder name="mysql-quit">
        <prematch> Quit</prematch>
</decoder>

The above is a simplistic set of decoders to catch the 3 important events from the logs coming in from MySQL – the Query log, i.e.

2021-02-22T09:41:42.271529Z        28 Query     SHOW CREATE TABLE db.persons

Connect Log

2021-02-20T16:35:28.019734Z        8 Connect   root@localhost on  using SSL/TLS

Quit

2021-02-20T18:29:35.626687Z       13 Quit  

Now of course, for those aware, the Query logs contain many different types of query – Query Use, Query Show, Query Select, Query Set, Query Insert, Query Update and so on. The idea of the decoder is simply to catch all the queries, and we will theoretically log all Queries into Alienvault.
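As an aside, if you wanted a decoder to actually pull fields out rather than just match, OSSEC lets a child decoder add a regex with capture groups and an <order> list. The snippet below is purely our own illustration of that syntax (the decoder name, regex and field choice are assumptions, not something this walkthrough needs), showing how the connecting user could be lifted out of the Connect line:

<decoder name="mysql-connect-user">
        <parent>mysql-connect</parent>
        <prematch> Connect\s+</prematch>
        <regex offset="after_prematch">^(\S+) on</regex>
        <order>user</order>
</decoder>

We won’t use this for the rest of the walkthrough – the prematch-only decoders above are enough to shovel the right logs into alerts.log.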

Now, remember to tell Alienvault you have a new decoder file

In the USM Appliance web UI, go to Environment > Detection > HIDS > Config > Configuration.

Add <decoder>alienvault/decoders/local_decoder.xml</decoder> after the existing <decoder> entry:

Adding the "local_decoder.xmll" setting to ossec_config

Adding this setting enables the usage of a custom decoder. Save it and restart HIDS.

So that’s it for the decoder.

Now, on the CLI, go to /var/ossec/bin and run ./ossec-logtest

Paste the following “2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4”

And you should get the result as below:

linux:/var/ossec/bin# ./ossec-logtest
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Started (pid: 25070).
ossec-testrule: Type one log per line.
2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4
**Phase 1: Completed pre-decoding.
full event: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'
hostname: 'linux'
program_name: '(null)'
log: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'
**Phase 2: Completed decoding.
decoder: 'mysql-query'

So basically, any log that comes into archives.log with that “Query” keyword will be lumped in as mysql-query decoded. Of course, you can further refine it with regular expressions to get the exact term you wish, but for this illustration we want to catch all the queries here, and that’s fine for now.

The next item is the rules. Again, referring to the Alienvault writeup above, go ahead and edit
/var/ossec/alienvault/rules/local_rules.xml.

What we will do is to add the following in

<group name="mysql-connect">
<rule id="192000" level="0">
<decoded_as>mysql-connect</decoded_as>
<description>Connect log is enabled</description>
</rule>

<rule id="192001" level="1">
<if_sid>192000</if_sid>
<regex>Connect\s*</regex>
<description>Connection is found</description>
</rule>
</group>


<group name="mysql-query">
<rule id="195000" level="0">
<decoded_as>mysql-query</decoded_as>
<description>Mysql Query log is enabled!</description>
</rule>


<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>


<rule id="195002" level="1">
<if_sid>195000</if_sid>
<regex>Query\s*</regex>
<description>Query is found</description>
</rule>
</group>


<group name="mysql-quit">
<rule id="194000" level="0">
<decoded_as>mysql-quit</decoded_as>
<description> Quit log is enabled</description>
</rule>

<rule id="194001" level="1">
<if_sid>194000</if_sid>
<regex>Quit\s*</regex>
<description>Quit command is found</description>
</rule>
</group>

So what the above does is decide what to do with the 3 types of MySQL logs you are getting: Connect, Query and Quit. We want to dump these logs into alerts.log so that we can work on them with Alienvault’s plugins. We don’t want to do any fancy stuff here, so it’s pretty straightforward.

Each of these 3 has a foundation rule:

a) Connect – 192000

b) Quit – 194000

c) Query – 195000

Each rule has a nested rule that decides what to do with it. Notice you can do either Regex or Match on the rules, which really provides a lot of flexibility in filtering. In fact, if it wasn’t for Alienvault’s plugins, OSSEC’s filtering would probably be sufficient for most of your custom log requirements.

For this illustration, our job is simple – for each of these rules, find the keyword in the log and then escalate it to an alert. An alert is created when you create a rule with level = 1, i.e. <rule id="195002" level="1">

If you run ossec-logtest again, and paste the log there, you would be able to see

**Phase 1: Completed pre-decoding.
full event: '2021 Feb 21 00:46:46 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'
hostname: '(Host-192-168-1-62)'
program_name: '(null)'
log: '192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'
**Phase 2: Completed decoding.
decoder: 'mysql-quit'
**Phase 3: Completed filtering (rules).
Rule id: '194001'
Level: '1'
Description: 'Quit command is found'
**Alert to be generated.

Once you see “alert to be generated” you will find that same alert in the /var/ossec/logs/alerts/alerts.log

AV - Alert - "1613881201" --> RID: "197011"; RL: "1"; RG: "connect"; RC: "Quit Command found"; USER: "None"; SRCIP: "None"; HOSTNAME: "(Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log"; LOCATION: "(Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log"; EVENT: "[INIT] 2021-02-22T09:42:21.711131Z        28 Quit       [END]";

From there, you can go about doing the plugins and getting it into the SIEM.

Whew. That’s it.

You would notice, however, that there is another sub-rule in there for Query:

<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>

This is placed above the “alert” rule, and you’ll notice it is level=0. This means whatever Query is decoded runs through this rule first and, basically, if I see there is a Query “SET”, I am going to ignore it – i.e. it’s not a log I want and I am not going to put it into alerts.log. Level 0 means: do not alert.

I am ignoring Query Set because, in this case, we were finding millions of them – it is invoked a lot of times and is mostly false positives. I am interested in Query Selects, Inserts, Updates etc.

Once you have this rule in place, it will filter out all Query Sets. This is basically the only filtering we are doing, so we don’t have those millions of Query Sets jamming up the alerts.log file in Alienvault.
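If other query types turn out to be just as noisy, the same pattern can be repeated. As a hypothetical example (the rule ID and match string are ours, not part of the setup above), a rule like this placed above the level-1 catch-all – exactly like the SET rule – would silently drop SHOW queries as well:

<rule id="195003" level="0">
<if_sid>195000</if_sid>
<match>SHOW</match>
<description>Query show is found and ignored!</description>
</rule>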

alienvault:/var/ossec/logs/archives# ossec-logtest
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Started (pid: 12550).
ossec-testrule: Type one log per line.
192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4'
**Phase 1: Completed pre-decoding.
full event: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''
hostname: 'alienvault'
program_name: '(null)'
log: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''
**Phase 2: Completed decoding.
decoder: 'mysql-query'
**Phase 3: Completed filtering (rules).
Rule id: '195001'
Level: '0'
Description: 'Query set is found and ignored!'

So you see, from the above, all Query Sets are ignored. You can basically do whatever you wish using either Regex or Match to ignore certain log messages from OSSEC itself. It’s very powerful and flexible, and with enough time and effort you can filter only the logs you need into Alienvault, which is really part of the fine-tuning process for a SIEM.

So there you have it. What you have done now is to take those logs from archives.log and make sure you only put the logs you want in alerts.log (Quit, Connect, All Query except for Query Set).

The next thing you need to do is to go down to Alienvault (layer 3) and do the heavy lifting in writing plugins and get these events into the SIEM display.

For more information for Alienvault and how it can help your compliance, send us an email at alienvault@pkfmalaysia.com and we will get back to you ASAP!

PCI-DSS: Estimating the Cost

Ah money.

This is how most conversations start when we receive calls about PCI: how much will it cost?

I think this is one of the toughest subjects in PCI, because it really depends on what is being done by the service provider/consultant for you, and how much of the PCI-DSS implementation you can actually do on your own. It obviously also depends on your scope and, on top of that, on compensating controls, if any, or any current controls you have in place. And then it also depends on the validation type – SAQ vs RoC and so on.

So, in the classic riposte to this classic question, it would be “It depends”.

Where we really need to clear the air, though, is the myth that once you have done PCI-DSS the first time, everything gets easier on renewal and everything gets cheaper year on year going forward. That is for another article. There is a lot going on in PCI-DSS, and if you approach it from a product perspective (as most procurement teams do), you end up either sabotaging your entire compliance, or getting an auditor willing to sign off on God knows what, and later on realising that you’ve been out of compliance scope all the while.

To start with the pricing, you should understand a bit about the underlying costs of PCI-DSS. And we should start with the QSA, because after all they are the focal point of the PCI program – the Qualified Security Assessor. Of course, you can opt to do your PCI (if allowed) without QSA involvement (Merchant Level 3 or 4) and just fill up an SAQ, with or without assistance from consultants; but for the most part, a QSA will be involved in the sign-off for larger projects, and this is where the cost questions take life.

Let’s look firstly at the base cost of becoming a QSA. It’s very helpfully listed for us here: https://www.pcisecuritystandards.org/program_training_and_qualification/fees

So here are the maths. Imagine you are a QSA with projects in Malaysia: to start off, you will need to set aside over RM100K just to get the company qualified to do audits in the Asian region. We’re not talking about Europe or Latin America or the USA here – just APAC. That’s qualifying the company. A company, to service any region properly, will probably need a bunch of QSAs trained and ready – let’s say around 3 to start off with. Each QSA will need to go for training costing around RM12 – 13K, so if you have 3 (which is very few), you are setting aside around MYR 50K for that. On top of that, there are obligations such as the insurance coverage specified in the QSA Qualification Requirements document. It depends on which insurance you are taking, but it could be in the region of MYR6K or more in premiums (spitballing). There is a requalification each year as well.

QSAs can then make their own calculations on how fast (or how long) they need to recover their cost, but let’s say they set aside 200K just to get things set up with 3 or 4 QSAs – they then need to recover that cost. A man-day of a QSA/consultant may range quite widely in this region, but let’s say you decide to price it at a “meagre” MYR2K, depending on how senior your people are. Overall, you would need almost 1.5 months of engagement of their QSAs just to recover the cost of setting up shop. That’s why it’s not unreasonable to see higher rates, because of the cost it takes.
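Laid out roughly, using the spitballed figures above, the sums look something like this:

RM100K (regional qualification) + ~RM50K (training for 3 QSAs) + insurance and yearly requalification ≈ RM200K to set up shop
RM200K / RM2K per man-day = 100 man-days of billable QSA work to break even
100 man-days across 3 QSAs ≈ 33 working days each, or roughly 1.5 months of everyone fully engaged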

You have salaries to consider as well. You also have to consider what happens if something goes wrong at one of your clients – where you happily audited them remotely and believed everything they said, then found out they had done jack-shoot in their actual environment, and now you have to handle the fallout of liabilities.

Some procurement teams compare QSA engagements to firewall engineers. No knock on other technical engineers, but the cost of getting a Checkpoint firewall engineer and the cost of maintaining a QSA are different propositions. I am not saying one is better than the other technically (I’ve seen a lot of firewall engineers who could put any auditor in their place, due to their extremely proficient technical skills); I am stating the underlying cost behind the position, which is why PCI-DSS is priced at a rate comparable to, say, CMMI, as opposed to, say, ISO9001.

On top of the auditing cost itself, QSAs take into account the actual support they are giving year on year. Some of them pass this on to partners and consultants who have been trained (such as PKF – and there are also other matters, such as independence of audit vs implementation advisory, which we will discuss later), and some of them take it upon themselves. But you must know the QSA’s job is not easy. Aside from auditing and supporting, there is evidence validation and report writing. Then there is the matter of undergoing the Quality Assurance process, which takes more resources/cost from the QSA company. All this while travelling to and from audit sites, reviewing etc. – the life of a QSA (ask any QSA) is itinerant and often travel-heavy. Burnout is also a concern, so if the QSAs are involved in the day-to-day or week-to-week assistance of their clients’ PCI programs, it isn’t sustainable.

Understanding all these underlying costs will allow procurement, or whoever is evaluating, to understand how to look at projects. If a QSA is pricing extremely low, the question you need to ask is: what’s being offered? Because all QSAs have more or less the same baseline cost, and if a QSA prices themselves at RM800 per man-day, and they are a small shop with fewer than 5 QSAs, what would their recovery rate be? 200 man-days of engagement just to recover their initial cost? Most procurement teams wouldn’t think of things like this; they would just go to their “BAFO” – Best and Final Offering – but when you break down what is expected, you understand that not all PCI offerings are the same. I could simply quote a client 3 man-days of QSA work for the final audit and be done. That would be the best and final offering that wins. But what about the healthchecks, the management of evidence and how it is submitted, the quality checking, the scope optimisation process, the controls checking etc. etc.?

And in line with our effort estimation, one should also split the pricing in two: audit and consultation vs implementation services and products.

Because if, let’s say, we find your Requirement 10 is completely empty and you are thinking of purchasing a QRadar SIEM to address it, you could be looking at upwards of RM60,000 just to get the product in. Couple that with training for engineers, usage, hiring etc., and you are well over the six-figure mark just for Requirement 10! How about testing and application reviews? If you don’t have the personnel for this, then you have to consider setting aside another RM50K or so, depending on how many applications/mobile applications/systems you have in place. So it’s highly essential to have the QSA/consultant assist you in scope reduction. Most may not view it that way, but it’s essential to find an auditor who is experienced and who looks after your interests.

Finally, understand that the cost of audit/consulting will differ depending on how you go through PCI-DSS. Level 1 certification requires the effort of validating evidence, doing gap assessments, auditing and writing the RoC. A Level 2 SAQ with QSA sign-off is slightly easier, as there is no RoC to write, while the last option – a self-signed SAQ without a QSA – is obviously a lot less costly, as you are basically doing a self-sign-off. Those are just broad guidelines and not necessarily how QSAs will price it because, as I said, there are many variables.

You could opt to use a rule of 1/3 when it comes to estimating these costs, although your mileage may vary. For instance, if the QSA quotes RM100K in audit fees (comparable to CMMI fees) for a Level 1 certification, then RM60 – 65K (2/3 of the Level 1) for an SAQ sign-off could be reasonable; and if you just need them in for consultancy on a non-QSA-signed SAQ, it could be 30K (1/3 of the Level 1) or so. But note, the SAQ self-sign-off can be carried out entirely on your own, so the cost could be close to zero as well.

I know it’s a tough one to pin down, as pricing varies so much. We aren’t selling a product with specific hardware/software. We are selling a service that will take you through 6 months of work covering the scoping exercise, project meetings, changes, consultancy and advisory, pre-audit and post-audit checks, evidence and artefact sample validations, the audit, report writing, training, and all the variables in between.

Let us know if you need us to look at your PCI today, drop us a note at pcidss@pkfmalaysia.com and we will attend to you immediately!
