
Zero Trust for 2024

As we enter the new year, let's start off with a topic most cybersecurity denizens would have heard of, and clarify it a little.

Zero Trust.

It seems as good a place as any to start 2024 off with the pessimism that accompanied the end of last year – the spate of cybersecurity attacks in 2023 gave us a taste of what is to come – insurance company – check, social security – check, the app with our vaccination information – check. While breaking down those attacks is a matter for another article, what we are facing in the coming year is not just more of the same: more attacks, and more advanced ones, are bound to happen.

While Zero Trust is simply a concept – one of many – to increase resistance to attacks or breaches, it's by no means a silver bullet. There is NO silver bullet for this. We are under constant siege in information warfare, forever balancing the need for sharing against the need for protection. It is as they say: the safest place would be in a cave. But that's not living, that's surviving. If you need to go somewhere, you need to fly, and you have information with the airlines. If you need to do banking, you have information with the banks. If you conduct your daily shopping online, you are entrusting companies like Lazada et al. with information you might not otherwise provide.

So Zero Trust doesn't mean you conduct zero transactions. It's basically a simple principle: Trust no one, Verify everything. Compare it to the more traditional "trust but verify" approach, which assumed that everything inside an organisation's network could be trusted once it was let in. Here's a breakdown of the concept, in hopefully simpler terms.

The Basic Premise: Imagine a company as a fortified castle. In the old days, once you were inside the castle walls, it was assumed you belonged there and could roam freely. At least, this is based on the limited studies we have done by binge-watching Game of Thrones. All historical facts of the Middle Ages can be verified through Game of Thrones, including the correct anatomy of a dragon.

Back to the analogy: what if an enemy disguised as a friend managed to get inside? They would potentially have access to everything. Zero Trust Architecture operates on the assumption that threats can exist both outside and inside the walls. Therefore, it verifies everyone's identity and privileges, no matter where they are, before granting access to the castle's resources. The three keys to remember are:

  1. Never Trust, Always Verify: Zero Trust means no implicit trust is granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet) or on asset ownership (enterprise or personally owned). Basically, we are saying: I don't care where you are or who you are, you are not getting access to this system until I can verify who you are.
  2. Least Privilege Access: Individuals or systems are given the minimum levels of access – or permissions – needed to perform their tasks. This limits the potential damage from incidents such as breaches or employee mistakes. We see this issue a lot, whereby a C-level person insists on having access to everything even if he doesn't necessarily know how to navigate a system without a mouse. When asked why, they say: well, because I am the boss. No. In Zero Trust, precisely because you are the boss, you shouldn't have access to a system that does not require your meddling. Get more sales and let the tech guys do their job!
  3. Micro-Segmentation: The network is broken into smaller zones to maintain separate access for separate parts of the network. If a hacker breaches one segment, they won’t have access to the entire network.

The steps you can follow to implement the concept of Zero Trust:

Identify Sensitive Data: Know where your critical data is stored and who has access to it. You can't protect everything – or at least not with the budget you are given, which for most IT groups is usually slightly more than what's allocated to the upkeep of the company's cat. So data identification is a must-have. Find out which data you most want to protect and spend your shoestring budget protecting it!

Verify Identity Rigorously: Use multi-factor authentication (MFA) and identity verification for anyone trying to access resources, especially important resources like logging systems, firewalls, external web servers and so on. This could mean something you know (a password), something you have (a smartphone or token), or something you are (biometrics). It used to cost a mortgage to implement things like this, but over the years, cheaper solutions that are just as good have become available.
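As a concrete illustration of the "something you have" factor, here is a minimal sketch of how an authenticator app computes a TOTP code (per RFC 6238), using only the Python standard library. The secret below is the RFC test value; a real deployment should use a vetted MFA product or library rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s -> 94287082
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59, digits=8))     # -> 94287082
```

The server and the app share the secret once at enrolment; after that, both sides derive the same short-lived code from the current time, so intercepting one code is of little use seconds later.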

Contextual Access: Access decisions should consider the context. For example, accessing sensitive data from a company laptop in the office might be okay, but trying to access the same data from a personal device in a coffee shop might not be. This may not be easy, because with mobile devices you are basically accessing top secret information on the same device you use to watch cats playing the piano. It's a nightmare for IT security – but again, this requires discipline. If you honestly need to access the server from Starbucks, then implement key controls like MFA, VPN and layered security, and do it from a locked-down system.

Inspect and Log Traffic: Continuously monitor and log traffic for suspicious activity. If something unusual is detected, access can be automatically restricted. SOAR and SIEM products have advanced considerably over the years, and today there are many solutions that do not require you to sell a kidney to use. This matters because small companies are often targeted for attacks, especially if these smaller companies service larger companies.
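To make the contextual-access step concrete, here is a toy Python sketch of the kind of policy decision described above. The attributes and rules are illustrative inventions, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool
    device_managed: bool       # company-issued, locked-down device?
    network: str               # e.g. "office", "vpn", "public"
    resource_sensitivity: str  # "low" or "high"

def decide(req: AccessRequest) -> str:
    """Toy Zero Trust policy: never trust by location alone, always verify."""
    if not req.mfa_passed:
        return "deny"                  # never trust, always verify
    if req.resource_sensitivity == "high":
        # Sensitive data: require a managed device on a trusted network.
        if req.device_managed and req.network in ("office", "vpn"):
            return "allow"
        return "deny"                  # e.g. personal laptop in a coffee shop
    return "allow"

print(decide(AccessRequest("alice", True, True, "vpn", "high")))    # allow
print(decide(AccessRequest("bob", True, False, "public", "high")))  # deny
```

Real implementations evaluate many more signals (device posture, time of day, behavioural risk scores), but the shape is the same: every request is re-evaluated against context, never waved through because of where it came from.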

At the end, it all comes down to the benefits of adopting this approach.

Enhanced Security: By verifying everything, Zero Trust minimizes the chances of unauthorised access, thereby enhancing overall security. Hopefully. Of course, we may still have those who are authorised but have malicious intent, which is much harder to protect against.

Data Protection: Sensitive data is better protected when access is tightly controlled and monitored. This equates to less quarter given to the threat actors out there.

Adaptability: Zero Trust is not tied to any one technology or platform and can adapt to the changing IT environment and emerging threats.

On the downside, there are still some challenges we need to surmount:

Complexity: Implementing Zero Trust can be complex, requiring changes in technology and culture. It's not a single product but a security strategy that might involve various tools and technologies. This is not just a technical challenge, but a process and cultural change that may take time to adapt to.

User Experience: If not implemented thoughtfully, Zero Trust can lead to a cumbersome user experience with repeated authentication requests and restricted access. This is a problem we see a lot, especially in finance and insurance – user experience is key – but efficiency and security are like oil and water. Eternal enemies. Vader and Skywalker. Lex and Supes. United and Liverpool. Pineapple and Pizza.

Continuous Monitoring: Zero Trust requires continuous monitoring and adjustment of security policies and systems, which can be resource-intensive. We've seen implementations of SIEM and SOAR products that produce so many alerts and alarms that they make no sense anymore. It all becomes noise, and the effect of monitoring is diluted.

In summary, in an era where cyber threats are increasingly sophisticated and insiders can pose as much of a threat as external attackers, Zero Trust Architecture offers a robust framework for protecting an organisation's critical assets. It's about making our security proactive rather than reactive, and ensuring that the right people have the right access at the right times, under the right conditions. It's culturally difficult, especially in Malaysia, where, I will have to admit, our innate trust of people and our upbringing mean we would almost always open the door for the guy behind us to walk in, especially if he is dressed like the boss. We hardly ever turn around and ask, "Who are you?" because we are such nice people in this country.

But, adopt we must. For any organisation looking to bolster its cybersecurity posture, Zero Trust isn’t just an option; it’s becoming a necessity. In PKF we have several services and products promoting Zero Trust – contact us at avantedge@pkfmalaysia.com and find out more. Happy New Year!

PCI-DSS Full Disk Encryption Part 1

In PCI-DSS, one of the most difficult requirements to get through is Requirement 3, which deals with stored credit card information and how to protect it. Aside from Requirement 10 (Logging) and Requirement 6 (Software), Requirement 3 (Storage) makes up the bulk of the remediation effort and cost of PCI-DSS.

The excerpt ominously states at the beginning: Protection methods such as encryption, truncation, masking, and hashing are critical components of cardholder data protection. If an intruder circumvents other security controls and gains access to encrypted data, without the proper cryptographic keys, the data is unreadable and unusable to that person. Other effective methods of protecting stored data should also be considered as potential risk mitigation opportunities. For example, methods for minimizing risk include not storing cardholder data unless absolutely necessary, truncating cardholder data if full PAN is not needed, and not sending unprotected PANs using end-user messaging technologies, such as e-mail and instant messaging.

It goes without saying that if you have credit card information on file for whatever reason, it would be a good time to relook at the necessity of it. If you don’t need it, get rid of it, because the cost of maintenance and remediation may not be worth whatever value you think you are obtaining from storage of card data.

If you do need it, well, PCI provides a few options for you to protect it: Encryption, Truncation, Masking and Hashing. In this series of articles we will be looking into encryption and more specifically Full Disk Encryption.

Encryption itself deserves a long, drawn-out discussion of its own, starting with the types of encryption – you have applications encrypting through application libraries, database encryption like TDE, file or folder encryption, and full disk encryption. One part is the encryption methodology. The other part is the encryption key management. The latter is the one that usually causes the headaches.

We will be exploring Full Disk Encryption, or FDE, and where it can be implemented to comply with PCI-DSS.

There is a specific part in 3.4.1 stating:

If disk encryption is used (rather than file- or column-level database encryption), logical access must be managed separately and independently of native operating system authentication and access control mechanisms (for example, by not using local user account databases or general network login credentials). Decryption keys must not be associated with user accounts.

So aside from the encryption being strong encryption and key management being done properly, PCI says, there are a few more things to be aware of for full disk encryption:

a) Logical access must be separate and independent of the native OS authentication

b) Decryption key must not be associated with the user account.

What does this mean?

Let’s look at Bitlocker for now, since that’s everyone’s favourite example.

Bitlocker has taken a lot of stick, probably because it's a native Microsoft offering. Maybe. I don't know. The fact is that Bitlocker can use 128- or 256-bit AES, so in terms of strong cryptography, it qualifies. It's the key management that's the issue.

For key management, the recommended usage with Bitlocker is the Trusted Platform Module (TPM) version 1.2 or later. The TPM is a hardware module in your server that, to simplify, acts somewhat like a key vault or key management module. It offers system verification to ensure there is no tampering with the system at startup. Beginning with Windows 10, version 1803, you can check TPM status in Windows Defender Security Center > Device Security > Security processor details. In previous versions of Windows, open the TPM MMC console (tpm.msc) and look under the Status heading.

Bitlocker can also be used without a TPM, although that means the system integrity checks are bypassed. It can operate alongside Active Directory, although newer versions of Bitlocker no longer store the password hash in AD by default; instead, a recovery password can be stored in AD if required.

With the TPM, it's still not the end of it, because we need to make sure there is a separation of authentication for Bitlocker to operate. In this case, we will look to configure it with a PIN (which is essentially a password that you know).

First of all, let's see what we should end up with.

So at the end, you should see both file systems encrypted. I've been asked before whether all volumes need to be encrypted, and the answer is no – Bitlocker can't do that anyway, since the small system (boot) partition can't be encrypted. So for PCI, it makes sense NOT to store card data on drives that are not encrypted.

The next thing we need to check is to ensure your set up has fulfilled the strong encryption requirement of PCI-DSS:

So there are a few things to verify: that strong crypto is enabled and that key protectors are in place. With that, Bitlocker is now enabled. You also need to properly document your key management policy – include whether you are using AES-256 or AES-128, which drives are protected, and key expiry dates.

Keep in mind also the following:

The FVEK (Full Volume Encryption Key) acts as the data encryption key (DEK), while the VMK (Volume Master Key) acts as the key encryption key (KEK).

The FVEK is stored, encrypted under the VMK, in the volume metadata on disk, while the VMK is sealed by the TPM (a hardware chip on the motherboard) against its PCR register measurements.
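The FVEK/VMK pairing is an instance of envelope encryption: a data encryption key protects the bulk data, and a key encryption key protects only the DEK. The sketch below illustrates that structure in Python. Note the cipher here is a toy HMAC-based keystream purely for illustration – real FDE uses AES (e.g. AES-XTS) – and all the names are ours, not BitLocker's.

```python
import hashlib
import hmac
import os

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with an HMAC-SHA-256 keystream.
    Illustration only -- real full disk encryption uses AES."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# DEK (think FVEK) encrypts the data; KEK (think VMK) wraps only the DEK.
dek, kek = os.urandom(32), os.urandom(32)
nonce = os.urandom(16)

ciphertext = keystream_xor(dek, nonce, b"PAN data to protect")
wrapped_dek = keystream_xor(kek, nonce, dek)   # stored in volume metadata

# Rotating the KEK means re-wrapping 32 bytes, not re-encrypting the volume.
new_kek = os.urandom(32)
rewrapped = keystream_xor(new_kek, nonce, keystream_xor(kek, nonce, wrapped_dek))

recovered_dek = keystream_xor(new_kek, nonce, rewrapped)
assert keystream_xor(recovered_dek, nonce, ciphertext) == b"PAN data to protect"
```

This is exactly why the two-key design exists: key rotation (or changing the TPM/PIN protector) only touches the wrapped key, leaving the terabytes of encrypted data untouched.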

In general, the above will fulfil PCI requirements. In our next article, we will cover how logical access to the encrypted file system can be separated from the native OS authentication mechanism.

Meantime, please drop us any enquiries at pcidss@pkfmalaysia.com if you need to know more about PCI-DSS or any compliance matters in IT. We are here to help!

Alienvault: File Integrity Monitoring on Linux Part 2

So, based on our previous article, you have set up OSSEC (or HIDS in Alien-speak) on the Linux host you want to monitor. The next thing to do is to configure FIM.

To recap, we have a CentOS 7 system running in our lab, and we finally got OSSEC communicating with the AlienVault server. You can verify connectivity either through the CLI logs or using the USM interface. Now, the HIDS can be used for a lot of things – it's obviously a host IDS (hence the name), but it's also a log forwarder, so for Linux systems it doubles as a security logger; you don't need to configure separate plugins to log, for instance, SSH denied attempts. Without the HIDS, you would have to forward logs from rsyslog and then set up the AlienVault SSH plugin to normalise SSH logs and create those events. HIDS does this for you. Try it: attempt multiple logins with a wrong password and you should see an event called "SSHD Authentication Failed."

But this article will focus on File Integrity Monitoring, or FIM for short. FIM in AlienVault USM uses OSSEC's built-in integrity checking process, called syscheck. Syscheck runs periodically and, depending on how many files and directories it is checking, can take from 10 minutes to much longer. By default, syscheck in AlienVault executes every 20 hours – if that's too long for you, you can shorten it in the configuration.
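Conceptually, syscheck's periodic scan is just "hash everything, compare to the last baseline". Here is a minimal Python sketch of that idea; the file names are made up for the demo, and OSSEC's real implementation also tracks permissions, owners and sizes:

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(directory):
    """Hash every regular file under a directory -- a simplified
    version of what syscheck's periodic scan does."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(directory).rglob("*")) if p.is_file()
    }

def diff(old, new):
    """Report added, removed and modified files between two snapshots."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "modified": sorted(f for f in set(old) & set(new) if old[f] != new[f]),
    }

# Demo in a throwaway directory (paths are illustrative only)
root = Path(tempfile.mkdtemp())
(root / "pkf.conf").write_text("port=443\n")
baseline = snapshot(root)

(root / "pkf.conf").write_text("port=8443\n")   # tamper with a file
(root / "rogue.sh").write_text("#!/bin/sh\n")   # drop a new file

changes = diff(baseline, snapshot(root))
print(changes["modified"])   # the tampered pkf.conf
print(changes["added"])      # the new rogue.sh
```

The cost of a scan grows with the number and size of files hashed, which is why adding something like /var/log to the watch list mostly generates noise.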

Let’s jump straight in.

In AlienVault (Server, if you are using Standard), go to Environment -> Detection and, on the HIDS tab, click on Agents. In the lower tabs, click on SYSCHECKS.

Over here is where you configure the Syschecks on the Agents and you can modify the frequency.

Because we are using Linux, we are going to ignore the portion where the Windows Registry is configured and go straight to:

FILES/DIRECTORIES MONITORED

Under Files/Directories, put in a sample directory you need to monitor, for instance:

/etc/pkf

Don't worry – out of the box, the standard directories monitored are:

/etc

/usr/bin

/usr/sbin

/bin

/sbin

In some cases, we have had clients insisting we put /var/log in there to inform them of changes occurring in that directory. According to them, log files are key and they need to know if these log files are being changed.

Um, yes. Agreed on the first part. But /var/log changes almost every nanosecond, so syscheck is not going to be of much use here. They are probably thinking of log archives as opposed to the live log folder. Anyway, we digress.

So go ahead and put your own directory in there under the agent, then restart HIDS from AlienVault and, for good measure, restart the agent as well (go to Agent Control -> click on the clock symbol under the agent name to restart). To check, click on the Agent.Conf tab and you will find something similar to:

<agent_config>
  <syscheck>
    <frequency>1200</frequency>
    <auto_ignore>no</auto_ignore>
    <alert_new_files>yes</alert_new_files>
    <scan_on_start>yes</scan_on_start>
    <directories realtime="yes" report_changes="yes" check_all="yes">/etc/pkf</directories>
  </syscheck>
</agent_config>

So it looks all set up. If you have restarted HIDS and also the agent, you should be able to verify on the agent itself if the configuration has been uploaded. On the client, go to

/var/ossec/etc/shared

Look at the agent.conf file and you should see the same configuration as above. Also, you can go to

/var/ossec/logs

and look at the ossec.log file, where you should see something like:

ossec-syscheckd: INFO: Monitoring directory: '/etc/pkf'.
ossec-syscheckd: INFO: Directory set for real time monitoring: '/etc/pkf'.

So there you have it. You can do some testing now.

So we will go into a local directory on our CentOS box and create a few random files. The first thing you notice is that even though our config contains:

<alert_new_files>yes</alert_new_files>

We still do not get any alerts when we create new files in the directory. This is because OSSEC doesn't check for new files in realtime (only changes to existing files), so we need to wait for syscheck to run, or we can restart the agent from the AlienVault GUI. For good measure, change a few things about the files as well.

You might notice a strange thing happening here.

Going into the SIEM, you might not find any events relating to integrity issues on your host. This doesn't seem to be an isolated incident; if you head over to the AlienVault forum, you will see many people reporting the same issue: we have enabled FIM and we can't find anything in the SIEM, nor any events!

If you check on the agent itself and click on "modified files",

you will see a raw list of all the files modified, with /etc/pkf/filename listed as well – so OSSEC and syscheck are working. Another way to verify is to head over to your AlienVault server and go to

/var/ossec/logs/alerts 

grep pkf alerts.log

Basically, I am grepping for anything that identifies the files or directories I am looking at, to see if alerts are generated. You should change the grep term to something related to your own filename or directory name. You should see that alerts are indeed generated.

So what gives?

Plugins.

Apparently, for some strange reason, some AlienVault setups do not have the proper plugin enabled by default to read OSSEC's integrity alerts log. This is very strange, as FIM is touted as a feature of AlienVault, yet we still need to do further work to get it up and running. So go ahead to your AlienVault GUI:

Configuration -> Deployment

Click on System Detail of your Alienvault setup

Click on Sensor Configuration in the menu on the right side

Go to “Collection”

You will notice you have Alienvault_HIDS and Alienvault_NIDS enabled. However, in some cases the Alienvault_HIDS-IDM plugin might be missing and can't be found under the "Plugins Available" column. IDM is for identity management, and it needs to be enabled for FIM to work properly.

The plugin that makes this happen is

ossec-idm-single-line.cfg

In our case, the plugin file was present in /etc/ossim/agent/plugins, but it wasn't in the ossim database as a "Plugins Available" option. This generally means that it wasn't (for some reason) written into the ossim-db. So head over to this directory on the AlienVault box:

/usr/share/doc/ossim-mysql/contrib/plugins

You will see an ossec.sql.gz in there, so go ahead and unzip it and run:

cat ossec.sql | ossim-db

alienvault-reconfig

Wait for the reconfig to occur then head back to the GUI of Alienvault, all the way back to the sensor configuration->collection and you will be able to see Alienvault_HIDS-IDM available for selection.

Go ahead and select it there, and then reconfig and now you can try to run the FIM test again.

a) Create a new file

b) Restart the agent (to simulate the syscheck being run)

c) Check the SIEM, filter Data Sources to AlienVault HIDS, and you should find:

AlienVault HIDS: File added to the system.

d) Go to the host and edit the new file and change it

e) Go back and check SIEM and you will find

AlienVault HIDS: Integrity checksum changed.

The last event should be immediate, with no restart of the agent needed – unless, of course, the change occurs while syscheck is running, in which case the event will appear once syscheck finishes. It's not perfect, but it will have to do.

Congratulations, you have FIM events up and running for Alienvault! If you need further assistance in any Alienvault matters, drop us an email at alienvault@pkfmalaysia.com and we will look into it.


The Single Point of Failure

As technology becomes more and more advanced, we're seeing amazing progress in the security field. Companies spend millions to keep the bad guys out. We have IPS/IDS, NACs, AVs, FWs, AAA, TACACS, ADS, IAM, SIEM and more acronyms than a typical teenager's vocabulary. Security budgets consistently span 10–15% of organisations' IT budgets, and according to the greatest oracle of all, Gartner:

“While the global economic slowdown has been putting pressure on IT budgets, security is expected to remain a priority through 2016, according to Gartner, Inc. Worldwide spending on security is expected to rise to $60 billion in 2012, up 8.4 percent from $55 billion in 2011. Gartner expects this trajectory to continue, reaching $86 billion in 2016.”

So this year, we're seeing IT security spending the size of the GDP of Cuba. Yup, Cuba – where Havana cigars come from and Che Guevara became famous. It sounds like a lot of money, and it will get higher: as long as more automation is done, as long as more technology is needed, as long as more day-to-day banking is needed, as long as human beings get lazier and want more things faster. Information technology will continue to grow, and along with it all the wonderfully naughty activities that invariably accompany such growth.

While millions are spent on equipment, many of us neglect one of the most basic problems of all.

Passwords don’t work.

That's because humans are invariably lazy. We would rather remember the phone number of that girl we met at the bar, or the pizza takeout, than bother remembering our 12-character, alphanumeric, lower-case, upper-case, special-character password that must not resemble an English word or name, must not be the same as our last 12 passwords, and must be recycled every month. And yeah, it also can't be your name, your family name, your dog's name or the nickname you gave your car. Or your bike. Or your computer, for us geeks.

It's a broken feature. This article is both hilarious and scary. Like a Korean horror movie.

Since biometric tech like fingerprint and face scanning is too expensive at the moment, passwords remain the de facto security problem many of us face. You can't impose overly complicated passwords on your users, or your IT service desk will be flooded with "I forgot my password" tickets – or you will have to run a "reset your password" exercise every day. But having no password policy is also asking for it. Users will tend to use password as their password, which, if you think about it, is absolutely genius if no one knows about it. It's like doing the most stupidly obvious thing, so that your enemy would not believe you'd be stupid enough to do it. Except now it's a known and accepted stupidity, like lemmings falling off a cliff.

Password123, p@ssw0rd (or any other variant), password1, password2012 and so on all have the same funky, useless theme: we are lazy creatures. The list has some interesting ones, like abc123 (who has never used that before?) and, interestingly, Jesus, which is new. I mean, is that because lots of IT users are Christians, or is it the first word that comes out of people's lips when they think, "Now what on earth is my password already???!"

Since passwords will not leave us in the near future, the best password is simple to remember, specific, and known only to you. For instance, if you met your wife at Cicero's in June 1986, your password could be c1cer0s1986_J. Or something. Craft something that, when you see that word, you immediately associate it with a memory you have. Or if you paraglided down Mount Mutombo in Venezuela with a guy called Hokey who then proceeded to almost kill you because you are a secret agent: Mut0mb0V3n_Hok3y_Di3! I don't know. You get the idea.
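As a rough way to compare such passwords, a naive brute-force estimate multiplies length by the log of the character pool size. The sketch below is illustrative only – real attackers use dictionaries and mangling rules, so this overestimates the strength of common passwords:

```python
import math
import string

def charset_size(password: str) -> int:
    """Estimate the character pool an attacker must brute-force over."""
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation]
    return sum(len(p) for p in pools if any(c in p for c in password))

def entropy_bits(password: str) -> float:
    """Naive upper-bound entropy: length * log2(pool size).
    Not a real strength meter -- dictionary words score deceptively high."""
    return len(password) * math.log2(charset_size(password))

for pw in ["password1", "c1cer0s1986_J", "Mut0mb0V3n_Hok3y_Di3!"]:
    print(f"{pw:24s} ~{entropy_bits(pw):5.1f} bits")
```

The mixed-case, digit and symbol variants score far higher than password1 – but the memory-anchored construction is what makes them usable without a sticky note.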

So put away the obvious passwords, and more importantly, don't ever, ever use yellow sticky notes on your cubicle, monitor, desk, pedestal, under your keyboard or under your chair. Please.

© 2024 PKF AvantEdge
