
Alienvault: Working with Decoders and Rules

When we started out with Alienvault years ago, they were a smallish startup and we worked almost directly with the engineers and sales team in Cork. A lot has changed since AT&T took over, of course, but in the early days there was a lot of knowledge sharing directly between us and them. So much so that if you check their partner site, they still list us as the only Malaysian company among their resellers, a holdover from those early listings. What attracted us to the product was that we could lift the hood and see what was underneath. Alienvault (or OSSIM) was previously a hodgepodge of many working parts glued together and somehow made to work. The agent was a product called OSSEC, an open-source HIDS. The IDS is Suricata/Snort, and if you look closely at the availability tool, you will see Nagios running in the backend. NFSen handles their netflow data display, PRADS their asset discovery, and OpenVAS is their vulnerability scanner. Best of all, they allow you to jailbreak the system, go into the OS itself and do what you need to do. In fact, most of the time we are more comfortable on the command line than in the actual UI.

The history aside, the downside of adding in these different applications and getting them all to play nice together is that you have to understand the inner workings of each of these pieces.

For instance, if you were to send logs via Syslog to Alienvault, you would have to know that the daemon receiving those logs is rsyslog (not an Alienvault product). If you were to use the agent instead, the receiving application is different – it's the OSSEC server that receives them. So it depends on how the logs come in, and from there you can decide what you wish to do with them.

The challenge is oftentimes to filter and ‘massage’ the logs when it hits Alienvault. There are a few approaches to this:

The basics are at Layer 1, where the client (server, workstation etc.) sends logs (or has logs collected) to Alienvault. The initial filtering should theoretically happen here if possible. Many applications have the capability to control their logging – Windows Server being one of them. Turning on debug logs on Linux, for instance, causes a fair bit of log traffic across the network. Applications, too, have options for what to log and what not to log. We see firewalls logging traffic logs and proxies logging every single connection that goes through – all of which means loads of logs hitting Alienvault.

AV (especially the All-In-Ones) isn't designed to take on heavy loads the way Splunk or enterprise SIEMs like ArcSight are, chewing through 100,000 EPS like Galactus chews through planets. The AV approach has always been: we aren't only a SIEM, we are a unified security management system, so security logs are what we are after. Correlation is what we are after. APTs are what we are after. Their philosophy isn't to overload the box doing generic Business Intelligence over millions of log lines, but to focus on security and what is happening to your network. That said, it's no pushover either, handling 90 – 120 million events and churning through 15,000 EPS on their enterprise tier.

The reality, however, is that most clients just turn on logs at Layer 1 and plow them over to Alienvault. So it's really up to Alienvault to start filtering these logs and stopping them from coming in. Layer 2 is what we call the outer layer – the front-line defence against this flood of logs. This is where the engines running these log systems (OSSEC, rsyslog etc.) can filter out and trickle only what is needed through to the Alienvault main engine in Layer 3. The AV main engine also has its own form of defence, in policies, where we can create 'junk' policies that simply ignore incoming logs rather than process them through the resource-intensive risk assessment calculations.

So, we are going to assume that Layer 1 filtering wasn't done. What we are going to look at is sorting out Layer 2, and we will assume that logs are coming in via OSSEC. We will cover rsyslog filtering in another article, because that is a whole different novel to write.

When it hits OSSEC, it comes in via the default port 1514/udp. Now remember, when logs first enter Alienvault, they don't immediately go into the SIEM event display. They first need to be logged, before they can be turned into events, before those events can trigger alarms. So the basic rule is to get them logged:

Make sure you are receiving logs first.

This may seem juvenile in terms of understanding, but we have been through enough to know that no matter WHAT the client says, oftentimes their systems are not even sending the logs to us! A simple tcpdump -Xni eth0 "udp port 1514" will show whether the logs are getting in, so go ahead with that first to ensure you are receiving. Just append "and host <ip address>" if you need to filter by a specific IP address.

Another option Alienvault gives you, when you are getting logs via HIDS/OSSEC, is enabling "logall" in the USM HIDS configuration, which we covered in previous articles here. Be aware, though, that turning on logall can pull a lot of logs and information into the box, so we generally avoid it unless it's really needed.

Once you are seeing logs coming into Alienvault, the next thing to do (for OSSEC at least) is to move these logs into "alerts.log"; from there, Alienvault can start putting them into the SIEM display.

For this to happen, you need to understand three things here, aside from the fact that we are currently working on Layer 2 from the diagram above – OSSEC:

a) Decoders

b) Rules

c) /var/ossec/bin/ossec-logtest

The above are actually OSSEC terminologies – not strictly Alienvault. What this means is that you can decouple OSSEC from Alienvault if you want. You can just download OSSEC on its own. Or you could download other products like Wazuh, which is another product we carry. Wazuh runs its own flavour of OSSEC but has a different presentation layer (Layer 3 in our diagram above) and integrates with ELK to provide a more enterprise-ready product; the foundation, though, comes from the same OSSEC principles. So when we talk about rules and decoders and using the ossec-logtest script to test your work, it's not an Alienvault-specific discussion. The Alienvault-specific part comes later with plugins and such. In the actual ACSE course from Alienvault (at least the one I passed 5 years ago), there is really no mention of decoders and rules – it basically just focuses on the core Alienvault items.

At this point, we need to decide whether to have the filtering done at the OSSEC level (2) or at the Alienvault level (3). As a rule, the closer the filtering is to the source, the better. However, in our opinion, filtering via Alienvault plugins is a lot more flexible and intuitive in design than OSSEC's (and, admittedly, we are trained more in Alienvault than in OSSEC). So for this article (which is taking VERY long to get to its point), the task is simply to funnel the logs into /var/ossec/logs/alerts/alerts.log, because that is where OSSEC sends its alerts and where we can get our AV plugins to read from.

The logs in /var/ossec/logs/archives/archives.log (remember, we turned on the logall option in the OSSEC configuration for this illustration) aren't monitored by plugins, because in a production environment you won't have that turned on. So, once you have logs in the alerts.log file, you are good to go: you can then sit down and write plugins for Alienvault to use in the SIEM display.

OK – firstly, decoders. OSSEC ships with a bunch of default decoders (like plugins in Alienvault) that can interpret a wide range of incoming logs. Basically, a decoder is set up with regular expressions to go through a particular log and pull information out into fields like IP address, date, source IP etc. It's similar to an AV plugin, but for this illustration we are not going to use much of OSSEC's parsing – we simply want to select the right logs and send them over to the alerts.log file.

So OK, let's take the previous article's example of getting MySQL logs into Alienvault. Let's say we have this example query log coming into our Alienvault (archives.log, if we turned it on):

2021 Feb 21 00:46:05 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z        28 Query     SHOW CREATE TABLE db.persons

The above doesn't really offer much, but you can see there is a date and time, the command and so on, and a decoder needs to be created to parse the incoming log.

Picking up from where we left off at the Alienvault link, Task 4 covers the steps to create the decoder:

a) Edit /var/ossec/alienvault/decoders/local_decoder.xml and add the following:

<decoder name="mysql-query">
        <prematch> Query</prematch>
</decoder>
<decoder name="mysql-connect">
        <prematch> Connect\s*</prematch>
</decoder>
<decoder name="mysql-quit">
        <prematch> Quit</prematch>
</decoder>

The above is a simplistic set of decoders to catch the 3 important events coming in from the MySQL logs – the Query log, i.e.

2021-02-22T09:41:42.271529Z        28 Query     SHOW CREATE TABLE db.persons

Connect Log

2021-02-20T16:35:28.019734Z        8 Connect   root@localhost on  using SSL/TLS

Quit

2021-02-20T18:29:35.626687Z       13 Quit  

Now of course, for those aware, the Query logs contain many different types of query – Query Use, Query Show, Query Select, Query Set, Query Insert, Query Update and so on. The idea of the decoder is simply to catch all the queries; we will theoretically log all Queries into Alienvault.
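To make the prematch idea concrete, here is a rough sketch in plain Python of what these three decoders are doing. This is an illustration only – OSSEC uses its own regex dialect internally – and the names and patterns simply mirror the XML above:

```python
import re

# Illustrative sketch: each "decoder" fires when its prematch pattern
# appears anywhere in the incoming log line (mirroring the XML above).
DECODERS = {
    "mysql-query":   re.compile(r" Query"),
    "mysql-connect": re.compile(r" Connect\s*"),
    "mysql-quit":    re.compile(r" Quit"),
}

def decode(line):
    """Return the name of the first decoder whose prematch hits, else None."""
    for name, pattern in DECODERS.items():
        if pattern.search(line):
            return name
    return None

print(decode("2021-02-22T09:41:42.271529Z 28 Query SHOW CREATE TABLE db.persons"))
# prints: mysql-query
```

A line that matches none of the patterns simply goes undecoded, which is exactly what happens in OSSEC when no decoder claims a log.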

Now, remember to tell Alienvault that you have a new decoder file.

In the USM Appliance web UI, go to Environment > Detection > HIDS > Config > Configuration.

Add <decoder>alienvault/decoders/local_decoder.xml</decoder> after the existing <decoder> entry:

Adding the "local_decoder.xml" setting to ossec_config

Adding this setting enables the usage of a custom decoder. Save it and restart HIDS.

So that’s it for the decoder.

Now, on the CLI, go to /var/ossec/bin and run ./ossec-logtest

Paste the following “2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4”

And you should get the result below:

linux:/var/ossec/bin# ./ossec-logtest
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/29 09:50:10 ossec-testrule: INFO: Started (pid: 25070).
ossec-testrule: Type one log per line.
2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4
**Phase 1: Completed pre-decoding.
full event: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'
hostname: 'linux'
program_name: '(null)'
log: '2021-02-20T18:29:43.189931Z 15 Query SET NAMES utf8mb4'
**Phase 2: Completed decoding.
decoder: 'mysql-query'

So basically, any log that comes into archives.log containing that keyword "Query" will be lumped in as mysql-query decoded. Of course you can refine this further with regular expressions to match the exact term you want, but for this illustration we want to catch all the queries, so this is fine for now.

The next item is the rules. Again, referring to the Alienvault writeup above, go ahead and edit
/var/ossec/alienvault/rules/local_rules.xml.

What we will do is add the following:

<group name="mysql-connect">
<rule id="192000" level="0">
<decoded_as>mysql-connect</decoded_as>
<description>Connect log is enabled</description>
</rule>

<rule id="192001" level="1">
<if_sid>192000</if_sid>
<regex>Connect\s*</regex>
<description>Connection is found</description>
</rule>
</group>


<group name="mysql-query">
<rule id="195000" level="0">
<decoded_as>mysql-query</decoded_as>
<description>Mysql Query log is enabled!</description>
</rule>


<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>


<rule id="195002" level="1">
<if_sid>195000</if_sid>
<regex>Query\s*</regex>
<description>Query is found</description>
</rule>
</group>


<group name="mysql-quit">
<rule id="194000" level="0">
<decoded_as>mysql-quit</decoded_as>
<description>Quit log is enabled</description>
</rule>

<rule id="194001" level="1">
<if_sid>194000</if_sid>
<regex>Quit\s*</regex>
<description>Quit command is found</description>
</rule>
</group>

So what the above does is decide what to do with the 3 types of MySQL logs you are getting: Connect, Query and Quit. We want to dump these logs into alerts.log so that we can work on them with Alienvault's plugins. We don't want to do any fancy stuff here, so it's pretty straightforward.

Each of these 3 has a foundation rule:

a) Connect – 192000

b) Quit – 194000

c) Query – 195000

Each foundation rule has a nested rule that decides what to do with the match. Notice you can use either Regex or Match in the rules, which provides a lot of flexibility in filtering. In fact, if it weren't for Alienvault's plugins, OSSEC's filtering would probably be sufficient for most of your custom log requirements.

For this illustration, our job is simple – for each of these rules, find the keyword in the log, then escalate it to an alert. An alert is created when you create a rule ID with level = 1, i.e. <rule id="195002" level="1">

If you run ossec-logtest again and paste the log there, you should see:

**Phase 1: Completed pre-decoding.
full event: '2021 Feb 21 00:46:46 (Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'
hostname: '(Host-192-168-1-62)'
program_name: '(null)'
log: '192.168.1.62->\MySQLLOG/db.log 2021-02-22T09:42:21.711131Z 28 Quit'
**Phase 2: Completed decoding.
decoder: 'mysql-quit'
**Phase 3: Completed filtering (rules).
Rule id: '194001'
Level: '1'
Description: 'Quit command is found'
**Alert to be generated.

Once you see "alert to be generated", you will find that same alert in /var/ossec/logs/alerts/alerts.log:

AV - Alert - "1613881201" --> RID: "197011"; RL: "1"; RG: "connect"; RC: "Quit Command found"; USER: "None"; SRCIP: "None"; HOSTNAME: "(Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log"; LOCATION: "(Host-192-168-1-62) 192.168.1.62->\MySQLLOG/db.log"; EVENT: "[INIT] 2021-02-22T09:42:21.711131Z        28 Quit       [END]";

From there, you can go about doing the plugins and getting it into the SIEM.

Whew. That’s it.

You would notice, however, that there is another sub-rule in there for Query:

<rule id="195001" level="0">
<if_sid>195000</if_sid>
<match>SET</match>
<description>Query set is found and ignored!</description>
</rule>

This sits above the "alert" rule, and notice that it is Level=0. This means whatever Query is decoded runs through this rule first, and basically if it sees a Query "SET", it gets ignored – i.e. it's not a log we want, and it won't be put into alerts.log. Level 0 means: do not alert.

I am ignoring Query Set because in this case we were seeing millions of them – it is invoked very frequently and is mostly false positives. I am interested in Query Select, Insert, Update and so on.

Once you have this rule in place, it will filter out all Query Sets. This is basically the only filtering we are doing here, so those millions of Query Sets don't jam up the alerts.log file in Alienvault.

alienvault:/var/ossec/logs/archives# ossec-logtest
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Reading decoder file alienvault/decoders/local_decoder.xml.
2021/03/14 12:36:33 ossec-testrule: INFO: Started (pid: 12550).
ossec-testrule: Type one log per line.
192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4'
**Phase 1: Completed pre-decoding.
full event: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''
hostname: 'alienvault'
program_name: '(null)'
log: '192.168.1.62->\MySQLLOG/db.log 2021-03-14T16:22:58.573134Z 19 Query SET NAMES utf8mb4''
**Phase 2: Completed decoding.
decoder: 'mysql-query'
**Phase 3: Completed filtering (rules).
Rule id: '195001'
Level: '0'
Description: 'Query set is found and ignored!'

So you see, from the above, all Query Sets are ignored. You can do whatever you wish using either Regex or Match to ignore certain log messages from OSSEC itself. It's very powerful and flexible, and with enough time and effort you can really filter down to only the logs you need going into Alienvault, which is a key part of the fine-tuning process for SIEM.
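The first-match, level-0-means-silent behaviour can be sketched like this. It is illustrative Python only; we are assuming the child rules of 195000 are evaluated in file order, which is why the SET rule sits above the generic Query rule:

```python
# Illustrative sketch (assumption: child rules are checked in file order;
# level 0 means "match and stay silent", level >= 1 generates an alert).
RULES = [
    ("195001", 0, "SET",   "Query set is found and ignored!"),
    ("195002", 1, "Query", "Query is found"),
]

def evaluate(log_line):
    """Return (rule_id, level, description) for the first matching rule."""
    for rule_id, level, needle, desc in RULES:
        if needle in log_line:
            return rule_id, level, desc
    return None

# A SET query hits 195001 first and is silently dropped (level 0)...
assert evaluate("2021-03-14T16:22:58Z 19 Query SET NAMES utf8mb4")[1] == 0
# ...while any other query falls through to 195002 and alerts (level 1).
assert evaluate("2021-02-22T09:41:42Z 28 Query SHOW CREATE TABLE db.persons")[1] == 1
```

Swap the order of the two rules and every query, SET included, would alert – the ordering is what makes the ignore work.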

So there you have it. What you have done is take those logs from archives.log and make sure you only put the logs you want into alerts.log (Quit, Connect, and all Query except Query Set).

The next thing you need to do is go down to Alienvault (Layer 3) and do the heavy lifting of writing plugins and getting these events into the SIEM display.

For more information on Alienvault and how it can help your compliance, send us an email at alienvault@pkfmalaysia.com and we will get back to you ASAP!

Getting MySQL logs into Alienvault

So from our previous article, we have Alienvault (or OSSIM) running in your own VirtualBox, able to communicate with the host (your laptop). Again, the reason we have this is to provide a small mini lab that you can spin up without worrying about VPN connectivity back to the office or wherever, and do some very basic troubleshooting or learning. It's highly useful for us, especially as we deal a lot with custom application plugins where we need to either filter or interpret logs using Alienvault.

So the objective here is to first get MySQL installed and running, and then have MySQL start logging. Now, for the sake of standardisation, we are going to install MySQL Community Edition. Instead of detailing all the complexity of a Windows installation, we will keep it brief: download, click, wait, done.

The more detailed link is below, but in all honesty there is nothing overly difficult about clicking on the Windows installation file. Locating where you downloaded it is probably more difficult than the actual installation itself.

https://dev.mysql.com/doc/refman/8.0/en/windows-installation.html

Once it's installed (we just installed it as a service), verify that it's up and running by going to Windows services and checking for MySQL80. Or you could just run netstat -an and look for the following:

TCP 0.0.0.0:3306 0.0.0.0:0 LISTENING

Next, for the sake of laziness, you probably want to add the MySQL bin path (e.g. C:\Program Files\MySQL\MySQL Server 8.0\bin\, which is where the mysql client lives) to your Windows environment variable Path. Just go to System Properties -> Advanced -> Environment Variables.

Once that is done, you can run the command

mysql -u username -p

You will be prompted for a password; enter it and you should be in.

Now, we are not going to go through MySQL CLI commands, as that isn't the point of the article. The point is to create some sort of logs from MySQL and fetch those logs over to Alienvault. There are many ways to do it, and for Windows the easiest would be to just dump it into the Event Viewer and let HIDS go and fetch it. But we don't like doing things the easy way, because we are technology sadists. So the idea here is to log MySQL to a flat file, get HIDS to grab it, and get Alienvault to interpret it.

Logging and MySQL have a long history with us. We wrote an article a few years ago on getting MySQL Community Edition to log queries using the MariaDB plugin: https://www.pkfavantedge.com/it-compliance/pci-dss-logging-in-mysql-community-version-with-mariadb-plugin/.

We are a big fan of that plugin, as most of our clients tend to end up with MySQL Community Edition, which means plugins like the official MySQL Enterprise Audit plugin are not available for cheapskates like us. There is the Percona Audit plugin as well, which we have not tried, but it seems very much focused on Percona. There is also the McAfee plugin, which we tried, but after a bit of tinkering decided we were probably too busy to make it work. So we were left with the MariaDB plugin, which we got working for our client.

It's still a good read, though it is a few years old now, and we will definitely revisit it in the near future.

This time around, we are going to get the Windows version of MySQL to write its general query log to a flat file and have HIDS pick it up. This illustrates how HIDS/Alienvault can be configured to pick up any flat file, which gives you pretty much god-like flexibility in getting logs into your Alienvault. If we can take any flat file and create events from it, the possibilities for integrating with custom applications are endless.

To start, you need to be aware of two things:

a) There is already a native logging capability in MySQL CE to log to a flat file, which we will be using for illustrative purposes: the all-powerful general query log. We say illustrative because this probably isn't a long-term solution – it's akin to turning on debug on your app. There are a LOT of logs, because every query is logged. Useful for troubleshooting; not so cool if you have a busy server, because it grows pretty quickly.

b) Windows doesn't have a native way to send logs, except via WEF (Windows Event Forwarding), which basically just sends logs to a collector (another Windows system). It seems like an awfully clunky way to do log centralisation, so it's probably better (still, in 2021!) to use either a forwarder like NXLOG or to install OSSEC (HIDS) as an agent to talk to Alienvault.

So for this article, we will start by enabling the general query log on your Windows MySQL instance.

mysql> set global general_log_file='C:\MySQLLOG\db.log';
Query OK, 0 rows affected (2.39 sec)

mysql> set global log_output = 'file';
Query OK, 0 rows affected (0.00 sec)

mysql> set global general_log = on;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%general%';
+------------------+----------------------------+
| Variable_name | Value |
+------------------+----------------------------+
| general_log | ON |
| general_log_file | C:/MySQLLOG/db.log |
+------------------+----------------------------+
2 rows in set (0.01 sec)

The above series of commands tells MySQL to turn on the general log, sets where the file is located and instructs it to output to a file. (Note that MySQL treats backslashes inside quoted strings as escape characters, so it is safer to write the path with forward slashes – which is how the server itself reports it: C:/MySQLLOG/db.log.) You can verify the settings with the last command.

For persistence, you can go ahead and edit C:\ProgramData\MySQL\MySQL Server 8.0\my.ini

and include the following under the General and Slow Logging header:

#General and Slow logging.
log-output=FILE
general-log=1
general_log_file="C:/MySQLLOG/db.log"

Restart the service and you should see a db.log file (you can name it anything).

Try logging in with your mysql console and you should see a "Connect" log and a few query logs in the flat file.

Now, the next thing is to find a way to forward these logs over to Alienvault. You could use NXLOG (we covered that in detail in a series here), but that topic has been beaten to death over that series, so we won't look at that option for now.

Or we could re-explore using HIDS (OSSEC) to capture the flat file, which we started doing in this article https://www.pkfavantedge.com/pkf-avant-edge/alienvault-usm-flat-file-log-capture-part-1/ but, for some reason, never finished.

For the sake of brevity: the Part 1 link should be very straightforward to follow. Install the OSSEC HIDS agent on your laptop (which is also your MySQL server), but of course change the OSSEC config to point to the proper path and filename of the MySQL flat-file log you just created.

So the conclusion of it is this:

a) You have a running MySQL server and you are able to query it and log from it. At this point I would suggest installing a MySQL GUI like HeidiSQL or phpMyAdmin to interact with your database. If you are fine with Workbench or the CLI, then go ahead. For me, I like HeidiSQL because it's clean and very useful for simple testing and querying.

b) Your MySQL is logging every single statement into a flat file of your choosing. Again, since this is a test, that's fine. For a live server, be sure to have a cleanup script in place so your general query log doesn't grow out of control.

c) You have HIDS (OSSEC) – we use the two terms interchangeably, so you know – installed on your laptop server, configured to pick up the logs from the MySQL flat file you configured.

d) On the OSSIM (Alienvault) – again, we use the terms interchangeably – in your VirtualBox, you have enabled the HIDS logall option, so the raw log is now dumped into archives.log (we recommend removing the logall directive once configuration is done, since this file will also grow quickly in a live environment).
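Point (b) mentions having a cleanup script in place; we haven't shown one in this article, so here is a minimal hypothetical sketch – the path and size cap are made up for illustration. On a live server you would more likely rotate via FLUSH LOGS or a scheduled task, since the server keeps the file handle open, but the size-check idea is the same:

```python
import os

# Hypothetical cleanup sketch: empty the general query log once it
# grows past a size cap. Path and cap are illustrative, not from the article.
LOG_FILE = "C:/MySQLLOG/db.log"
MAX_BYTES = 500 * 1024 * 1024  # 500 MB cap; adjust to taste

def trim_log(path=LOG_FILE, max_bytes=MAX_BYTES):
    """Truncate the log in place if it has grown past max_bytes."""
    if os.path.exists(path) and os.path.getsize(path) > max_bytes:
        with open(path, "w"):   # opening in "w" mode empties the file
            pass
        return True             # trimmed
    return False                # nothing to do
```

Run something like this on a schedule (Task Scheduler on Windows, cron elsewhere) and the log stays bounded.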

At this point, if you were to open up /var/ossec/logs/archives/archives.log on your Alienvault, you would observe that the incoming log has this format:

2021 Feb 22 09:41:42 (Host-192-168-1-111) 192.168.1.111->\MySQLLOG/db.log 2021-02-22T09:41:42.271529Z        28 Query     SHOW CREATE TABLE dbtest.persons

Compare this with the log from the database file itself:

2021-02-22T09:41:42.271529Z        28 Query SHOW CREATE TABLE dbtest.persons

So it is, word for word, the actual log itself. That's great, but aside from centralisation it's not actually doing anything (sort of like Batman in Justice League). It's just there, with no purpose.

In our next article (which we will call Flat File Log Capture Part 2), we will break down the structure in which logs flow into Alienvault, and how we can ultimately get these logs to show up in the SIEM events.

In the meantime, feel free to drop us an email about any SIEM/Alienvault matters at alienvault@pkfmalaysia.com.

Alienvault via VirtualBox

OK, it's been a while since we last wrote anything close to technical, so let's quickly get to it.

Many times we would like to get some quick testing done on an Alienvault box, but we don't have one at hand. Such tests include, for example, validating a plugin you just wrote, or checking a config that you want to implement for your client but would rather not test in their environment. There are many scenarios where you'd just want to fire up a simple box and do some testing. One way is to set up an Alienvault in your office along with a couple of servers to run as test systems: a simple VPN in and voila, you are done. But what if you want to simulate logs but don't have the necessary systems to do so, or would rather not change any production systems you have at hand?

One way is to install Alienvault on VirtualBox on your laptop, and either simulate logs to it from other VMs or just have your host laptop send the guest Alienvault logs. VirtualBox is probably the easiest way to get this done. You don't need to set it up as an extremely powerful system, since you will mostly be doing testing on it. For me, simply using it for plugin verification, decoder and rule setup, and simple log testing was enough. I set the VM up with 2 processors, 8GB RAM and 30GB storage (fixed), and downloaded the OSSIM image to set it up.

To simplify further, we followed this excellent tutorial here. Kudos to the writer for the details. The idea was twofold: to get our host talking to OSSIM, and for OSSIM to be able to reach the internet. The trickiest part of this relatively simple setup is getting the networking sorted.

OSSIM allows two interfaces to be set up in theory – one as the management interface and the other for log collection. The slight difference here is that I set up eth0 as a 'bridged adapter' and selected my laptop's wireless adapter; in theory this allows internet access as required. The second adapter, in theory, isn't really needed – it's generally used if you are accessing the box from an internal network (say you set up VirtualBox on a separate machine and access it internally). But because I am building it all in one (everything on a single laptop), I don't need that second interface, as I can just reach my management interface through my logging interface. So go ahead and set up eth0 as bridged, and later on assign it an IP on the same network as your laptop.

One strange thing you may experience from time to time is not being able to SSH into your OSSIM for some reason. It could be an IP conflict, or your ARP cache may need updating, especially if other systems have used the same IP. Try pinging from the server to the host, and from the host to the server – that may resolve the issue.

Now that you are done, go ahead and access your OSSIM. In our next article, we will start a very simple tutorial: setting up a MySQL database (also on the same laptop), writing its logs to a file, and getting Alienvault to pick up the log file via HIDS.

PCI-DSS: Estimating the Cost

Ah money.

This is how most conversations start when we receive calls about PCI: how much will it cost?

I think this is one of the toughest subjects in PCI, because it really depends on what the service provider/consultant is doing for you, and how much of the PCI-DSS implementation you can do on your own. It obviously also depends on your scope and, on top of that, on compensating controls (if any) or any current controls you already have in place. And then it also depends on the validation type – SAQ vs RoC and so on.

So, in the classic riposte to this classic question, it would be “It depends”.

Where we really need to clear the air, though, is the myth that once you have done PCI-DSS the first time, everything gets easier on renewal and cheaper year on year. That is for another article. There is a lot going on in PCI-DSS, and if you approach it from a product perspective (like most procurement departments do), you end up either sabotaging your entire compliance, or getting an auditor willing to sign off on God knows what, only to realise later that you've been out of compliance scope all along.

To start with the pricing, you should understand a bit about the costs of PCI-DSS. And we should start with the QSA – the Qualified Security Assessor – because after all they are the focal point of the PCI program. Of course, you can opt to do your PCI (if allowed) without QSA involvement (Merchant Level 3 or 4) and just fill in an SAQ with or without assistance from consultants; but for the most part, a QSA will be involved in the sign-off for larger projects, and this is where the cost questions take life.

Let's look firstly at the base cost of becoming a QSA. It's very helpfully listed for us here: https://www.pcisecuritystandards.org/program_training_and_qualification/fees

So here is the math. Imagine you are a QSA with projects in Malaysia: to start off, you need to set aside over RM100K just to qualify the company to do audits in the Asian region. We're not talking about Europe or Latin America or the USA here – just APAC. That's qualifying the company. To service any region properly, a company will probably need a bunch of QSAs trained and ready, say around 3 to start with. Each QSA needs to go for training costing around RM12 – 13K, so with 3 of them (which is very few), you are setting aside around MYR 50K for that. On top of that, there are obligations such as the insurance coverage specified in the QSA Qualification Requirements document. It depends on which insurance you take, but it could be in the region of MYR 6K or more in premiums (spitballing). There is a requalification fee each year as well.

QSAs can then make their own calculations on how fast they need to recover that cost, but let's say they set aside 200K just to get set up with 3 or 4 QSAs; they then need to recover that outlay. A man day of a QSA/consultant may range quite widely in this region, but let's say you price it at a "meagre" RM2K, depending on seniority. Overall, you would need almost 1.5 months of fully engaged QSA time just to recover the cost of setting up shop. That's why it's not unreasonable to see higher rates, because of the cost it takes.
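That back-of-envelope recovery calculation can be sketched out. All figures here (200K setup, RM2K per man day, 3 QSAs, roughly 22 billable days a month) are the illustrative assumptions from the discussion above, not real quotations:

```python
# Break-even sketch for a newly qualified QSA shop (all figures MYR,
# all illustrative assumptions from the discussion above).
setup_cost = 200_000        # qualification, training, insurance, etc.
rate_per_man_day = 2_000    # "meagre" blended man-day rate

break_even_days = setup_cost / rate_per_man_day
print(break_even_days)      # 100.0 man days just to recover the setup cost

# Spread across 3 QSAs billing ~22 days a month:
qsas, billable_days_per_month = 3, 22
months_to_recover = break_even_days / (qsas * billable_days_per_month)
print(round(months_to_recover, 1))  # roughly 1.5 months of full engagement
```

Which is where the "almost 1.5 months of engagement" figure comes from, assuming every man day is billed, which in practice it never is.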

You have salaries to consider as well. You also have to consider what happens if something goes wrong at one of your clients, where you happily audited them remotely and believed everything they said, only to find out they have done jack-shoot in their actual environment, and you have to handle the fallout of liabilities.

Some procurement teams compare QSA engagements to firewall engineers. No knock on technical engineers, but the cost of getting a Checkpoint firewall engineer and the cost of maintaining one QSA are different propositions. I am not saying one is technically better than the other (I've seen plenty of firewall engineers who could put any auditor in their place with their extremely proficient technical skills); I am stating the underlying cost behind the position, which is why PCI-DSS is priced at a rate comparable to, say, CMMI, as opposed to ISO 9001.

On top of the auditing cost itself, QSAs take into account the actual support they give year on year. Some of them offload this cost to partners and consultants who have been trained (such as PKF; there are also other matters such as independence of audit vs implementation advisory, which we will discuss later), while some take it upon themselves. But you must know the QSA's job is not easy. Aside from auditing and supporting, there is evidence validation and report writing. Then there is the matter of undergoing the Quality Assurance process, which demands more resources and cost from the QSA company. All this while travelling to and from audit sites, reviewing and so on; the life of a QSA (ask any QSA) is itinerant and often travel heavy. Burnout is also a concern, so if the QSAs are involved in the day-to-day or week-to-week assistance to their clients' PCI programs, it isn't sustainable.

Understanding all these underlying costs will allow procurement, or whoever is evaluating, to understand how to look at projects. If a QSA is pricing extremely low, the question you need to ask is: what's being offered? All QSAs have more or less the same baseline cost, so if a QSA prices themselves at RM800 per man day and they are a small shop with fewer than 5 QSAs, what would their recovery rate be? 250 man days of engagement just to recover their initial cost? Most procurement wouldn't think of things like this; they would just go to their BAFO (Best and Final Offer). But when you break down what is expected, you understand that not all PCI offerings are the same. I could simply quote a client 3 man days of QSA work for the final audit and be done. That would be the best and final offer that wins. But what about the healthchecks, the management of evidence and how it is submitted, the quality checking, the scope optimisation process, the controls checking, and so on?

And in line with our effort estimation, one should also split the pricing into two: audit and consultation vs implementation services and products.

Because if, let's say, we find your Requirement 10 is completely empty and you are thinking of purchasing a QRadar SIEM to address it, you could be looking at upwards of RM60,000 just to get the product in. Couple that with training for engineers, usage, hiring and so on, and you are well over six figures just for Requirement 10! How about testing and application reviews? If you don't have the personnel for this, then you have to consider setting aside another RM50K or so, depending on how many applications, mobile applications and systems you have in place. So it's essential to have the QSA/consultant assist you in scope reduction. Most may not view it that way, so it's important to find an auditor who is experienced and who looks after your interests.

Finally, understand that the cost of audit/consulting differs depending on how you go through PCI-DSS. Level 1 certification requires the effort of validating evidence, doing gap assessments, auditing and writing the RoC. A Level 2 SAQ with QSA signoff is slightly easier, as there is no RoC to write, while the last option, a self-signed SAQ without a QSA, is obviously a lot less costly as you are basically doing a self-signoff. Those are just broad guidelines and not necessarily how QSAs will price it, because, as I said, there are many variables.

You could opt to use a rule of 1/3 when it comes to estimating these costs, although your mileage may vary. For instance, if the QSA quotes RM100K in audit fees (comparable to CMMI fees) for a Level 1 certification, then RM60-65K (roughly 2/3 of the Level 1) for an SAQ signoff could be reasonable; and if you just need them in for consultancy on a non-QSA-signoff SAQ, it could be 30K (1/3 of the Level 1) or so. But note that the self-signed SAQ can be carried out entirely on your own, so the cost could be close to zero as well.
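The rule of 1/3 can be expressed as a tiny estimator. The function name and the RM100K anchor are ours for illustration only; actual QSA quotes vary widely:

```python
# Rule-of-1/3 sketch: scale other engagement types off the Level 1 audit fee.
def estimate_pci_fees(level1_fee):
    """Return rough fee guidelines (same currency as level1_fee)."""
    return {
        "level1_certification": level1_fee,        # full RoC audit
        "saq_with_qsa_signoff": level1_fee * 2/3,  # no RoC to write
        "saq_consultancy_only": level1_fee * 1/3,  # advisory, no signoff
        "saq_self_signed": 0,                      # can be done entirely in-house
    }

fees = estimate_pci_fees(100_000)
print(round(fees["saq_with_qsa_signoff"]))  # 66667 (we round down to RM60-65K)
print(round(fees["saq_consultancy_only"]))  # 33333 (roughly the RM30K above)
```

Again, this is a budgeting heuristic, not a price list; the self-signed SAQ line being zero assumes you do everything yourself.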

I know it's a tough one to price, as it varies so much. We aren't selling a product with specific hardware/software. We are selling a service that takes you through roughly 6 months of work covering the scoping exercise, project meetings, changes, consultancy and advisory, pre-audit and post-audit checks, evidence and artefact sample validations, the audit itself, report writing, training and all the variables in between.

Let us know if you need us to look at your PCI today, drop us a note at pcidss@pkfmalaysia.com and we will attend to you immediately!

PCI-DSS: Estimating the Effort


One of the most often asked questions whenever we have a first call on PCI-DSS is: how much effort will it take? The other is: how much will it cost? Let's take the first question first.

Misconceptions aside, which we have written about (whether PCI is a training program, a certificate, a license, a subscription, etc.), the effort dedicated to PCI-DSS has always been a question for our clients and potential clients (and ex-clients). And I admit: it's not an easy thing to pin down. Like costing and pricing, there are so many variables. But there are some common guidelines one can follow, especially for initial budgeting and estimation of effort. This estimate is commonly needed, and although it isn't always accurate, it at least gives you some idea of how the PCI-DSS standard is approached.

Before we start, we assume that you will be undertaking PCI-DSS the proper way. We say 'proper way' because we've seen a few consultants or advisors out there who tell our clients that PCI-DSS is just sorting out documentation, sell a bunch of policy and procedure templates for RM2K and say bye. That isn't the proper way; that amounts to modern-day charlatanism. Other companies knowingly do a self-signoff when their controls are not in place, or, without having any clue what a firewall is, still mark firewall reviews as "Compliant". If you are planning to go down this path, good luck and Godspeed.

So now, the best way to look at effort is (like pricing) to split it into two. This makes it less overwhelming when you are evaluating vendors, because like pricing (effort generally has a direct correlation to pricing anyway) it can vary A LOT. We see companies that price extremely high and companies that price extremely low. For anyone in procurement without an iota of understanding of technical projects, it's going to be very overwhelming. Looking at pricing or effort by itself on paper, without understanding what it covers, puts you in the position of comparing apples to oranges, or potatoes to durians. Going for the lowest price (common practice in Malaysia) works if you are evaluating a product, because the specifications are standard. Not so for PCI projects, which vary significantly. So here's a breakdown for procurement teams who may have some challenges understanding the project. Again, this is a guideline we use to help our clients; there is no 'standard' approach to measuring effort for PCI-DSS.

The two main portions of PCI-DSS for effort estimation are:

a) Advisory/Consulting/Audit Services

b) Implementation/Technical Services

a) Advisory/Consulting/Audit Services

This constitutes a few parts:

i) Scoping and Optimisation of Scope

This is a critical part of advisory. Scope is generally determined by the customer, but most customers have no idea what their scope is. The tricky thing about scoping is that it's easy either to miss things out for PCI-DSS, or to overdo it. An example of missing things out: "Oh crap, I didn't realise we had 15 other servers in scope for PCI-DSS penetration testing, and now there are only 2 weeks left to the deadline." An example of overdoing it: "We just purchased this wonderful DLP system for PCI-DSS for RM5 million and busted our entire technology budget for the next year and a half. Cool eh? What? PCI doesn't mandate it? We could have other processes in place to address that control? Oops."

ii) Gap Assessment

Nobody starts from zero. Well, at least that's our experience. In some form or another, most companies already have controls in place. The purpose of this assessment is to find out how close these controls are to the baseline requirements of the standard; hence the word "gap".

iii) Remediation Support and Pre-Audit

Not to be confused with implementation services. Remediation support is the advisory work that comes during remediation. A lot of services are done during the remediation period and it's often quite overwhelming, even for someone with a project management background. Evidence needs to be collated and submitted in specific formats. Evidence also needs to be validated before submission, as the evaluation of evidence is a key part of the whole PCI program. Oftentimes this is missed, and clients submit lock, stock and barrel whatever they have, then cry foul when the whole batch is rejected and they run out of time. It's critical for the evaluation of evidence to be done with a proper methodology, whether by milestone a la the Prioritized Approach, or based on the QSA's approach. A pre-audit is usually done as well to ensure clients are well prepared for the final certification audit; it acts like the internal audit review in an ISO equivalent. A good consultant should also provide monthly healthchecks to ensure the implementation isn't going wayward. In fact, we spend almost 2-3 days a week onsite with our clients in the month before the actual audit starts, to make sure they have everything in place for a successful audit.

iv) Certification audit and post audit

The certification audit is what the QSA does. After that, there may be a period of 2-3 weeks to clean up whatever non-compliances were found during the audit. During this time, the Report on Compliance is also prepared for Level 1 clients. The RoC process can take 4-6 weeks from the QSA's side, so be aware of this timeline.

b) Implementation/Technical Services

The reason this should be separated from the advisory/consultation portion is that it is actually done during stage a.iii above. It can be done by your vendor, but it can also be done internally if you have the resources. PCI-DSS doesn't specifically require stringent qualifications for these services. We have customers insisting that PCI-DSS requires CREST-certified penetration testers to pass. That's simply not true. If you have qualified individuals (and this may not even mean they need certifications) who can demonstrate aptitude in testing through their use of tools, their experience and their methodology, it's considered acceptable, as long as they are independent and free from conflicts of interest; for instance, the application developers shouldn't be the ones doing the penetration test on their own app. While it's all fine and well to have an experienced company with a dozen certifications do the testing, the baseline interpretation of PCI has always been agnostic to these specific certifications. So now you know. On the other hand, you also can't do the remediation services however you please. Firing up an OpenSSL scanner and calling it a web application pentest won't cut it.

There are a whole lot of other services here to be done – firewall reviews, patching, logging and monitoring, physical security, encryption, policies and procedures, web app testing, secure code review, SDLC, card data scans and the list goes on. There is a lot of work here, and how you estimate the effort should depend on what sort of gaps you get.

This is the hardest portion to estimate.

For Cert and Advisory, the effort is usually based on two factors:

a) Processes – is authorisation/settlement in scope? Are backend processes in scope? Is a call center in scope? Is POS/ecommerce scoped? Is managed service scoped? Etc.

b) Locations – is your DC/DR in scope? Are branch offices scoped in? What about outlets (for merchants like retailers, fast food, oil and gas)?

The more processes in scope for PCI, the more needs to be audited. The more locations, the more time. One may think: what about systems in scope? Wouldn't auditing 10 servers vs 200 servers be vastly different? The answer is: it depends. If you have a large number of assets, we revert to a sampling basis, so we can still control how many systems are audited. Some QSAs will sample 10-15%, but it really ranges depending on the distribution, the variance, the types of systems, the standardisation of processes and so on. Because of sampling, auditors and consultants have a measure of control over the effort required for large or small projects. Locations are similar, but locations often need a physical audit; it's not just remotely looking at screenshots or evidence, but actually going onsite, which requires time and effort.
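The effect of sampling on audit effort can be illustrated with the rough 10-15% band mentioned above. The helper and its rates are hypothetical; real sample sizes depend on distribution, variance and how standardised the systems are:

```python
# Hypothetical sketch: audit sample sizes under a 10-15% sampling band.
def audit_sample_range(total_systems, low=0.10, high=0.15):
    """Return (min, max) number of systems an auditor might sample."""
    return (max(1, round(total_systems * low)),
            max(1, round(total_systems * high)))

for total in (10, 50, 200):
    lo, hi = audit_sample_range(total)
    print(f"{total} systems in scope -> audit roughly {lo}-{hi} of them")
```

This is why a 200-server estate doesn't mean 20x the audit effort of a 10-server one; the sample grows far more slowly than the population.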

For implementation/technical services, there is no sampling. A lot of confusion stems from clients thinking that the implementation of controls is also done on a sample basis. No. If there are 250 servers in scope, all need to have PCI controls (patched, pentested, secured, hardened). The auditor may select 20-50 systems from that set to review, but that doesn't mean you implement controls on only that subset. So for implementation, the effort is directly related to how many assets/systems are in scope. Furthermore, these should be broken down into:

a) Services that can be done in-house – whether services or products like a logging and monitoring system, etc.

b) Services that require external vendors (like an ASV scan, or any service you may not be able to do in-house)

c) Services that require product purchases or implementation – this is important as there would be effort for implementation, migration and testing. Somewhat similar to b), but there may be products you can actually implement yourself.

Putting it all together

Whew. That’s a lot of ground.

As you can see, the budgeting process can actually be:

Advisory/Cert Budget –> after gap assessment –> Implementation Services Budget.

Because only after the gap, would we know what we need to fix, right?

Unfortunately, procurement is often faced with the prospect of budgeting for ALL phases from the get-go. This produces a lot of problems, and a LOT of variation. Procurement runs the risk (without knowing it) of getting consultants/QSAs on board for advisory and certification while under- or over-budgeting for the implementation services. Any QSA/consultant worth their salt should be able to do ALL the services listed above under the Advisory/Cert portion. Many QSAs do certification only, with a sprinkling of support. This is a problem because their involvement comes too late, and because their price point is so low, they generally don't do any internal advisory support, healthchecks and so on. This is basically the "you get what you pay for" concept.

As for the implementation: budgeting BEFORE knowing what is wrong is akin to giving medicine before having a diagnosis. You could give the right medication or the wrong one. The wrong one could be prescribing panadol for someone with terminal brain cancer, or a liver transplant for someone with a stomach ache from eating too much nasi lemak. Both bad.

Instead, procurement should provide standard guidelines as much as possible: the estimated number of assets, locations, processes, firewalls, applications, etc. The more information provided, the more accurate the effort estimate. Also, request a breakdown of each, so at least you know, if a quantity changes later, how much more or less it would cost. Armed with this, it may be worthwhile to guesstimate that the implementation cost, if the majority of services are outsourced, would be around 1-1.5x the cost of the advisory. That is an extremely liberal estimate, but it's roughly what we see.
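As a sketch of that budgeting guideline, using the 1-1.5x multiplier from above (all numbers are placeholders, not quotations):

```python
# Rough total-budget sketch: implementation lands around 1-1.5x the
# advisory cost when the majority of services are outsourced.
advisory_budget = 100_000  # advisory + certification (placeholder figure)

implementation_low = advisory_budget * 1.0
implementation_high = advisory_budget * 1.5

print(advisory_budget + implementation_low)   # 200000.0 total, low end
print(advisory_budget + implementation_high)  # 250000.0 total, high end
```

The point is less the numbers and more the shape: the implementation side can double or more your overall PCI spend, which is exactly why it should be budgeted after the gap assessment, not before.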

We do have clients that insist on us giving them a 'generic' PCI cost without providing us any information. I don't know why; mainly because they don't know what's happening. In this case, we interpret the scope for them from an external perspective, make assumptions and send them an estimate. But because of this, the effort and price range varies INCREDIBLY.

Remember: least effort doesn't mean that PCI-DSS is being achieved. Because this isn't a product, there is a huge amount of variation in effort estimation between companies. Procurement needs to get on board and understand the process, not just look and say: oh, why does this guy quote me only 10% of your effort? Because, Mr/Mrs Procurement, they are giving you 10% of what you need. Or, on the flip side, someone is giving you 200% more than what you need.

In the next article, we will look at price points and see what's really there to budget for in a PCI-DSS program. Before that, drop us a note at pcidss@pkfmalaysia.com for any enquiry and we will get back to you immediately! Be safe!


© 2024 PKF AvantEdge
