
PCI-DSS Scope Understanding: Encryption

Scoping is one of the first and main things we do the moment we get engaged, after the customary celebratory drinks. In all projects, scope is key, more so in auditing and consulting and in standards compliance, be it PCI, ISMS, the NIST 800 series, CSA or any of the other compliance standards we end up doing. Without scope there is nothing. Or there is everything, if you are looking at it from the customer's viewpoint. If boundaries are not set, then it's open season on everything. Everything needs to be tested, prodded, penetrated, reviewed. While this is all well and good, projects are bounded by cost, time and quality. Scope determines all three.

In PCI, scoping became such a tremendously huge issue that the council deemed it necessary to publish an entire supplementary document called "Guidance for PCI DSS Scoping and Network Segmentation" back in December 2016. Now, here is some trivia for you, from someone who has been doing PCI for donkey's years. Did you know that this wasn't even the first attempt at sorting out scope for PCI-DSS?

Back in 2012, a group called the Open Scoping Framework Group published the extremely useful Open PCI-DSS Scoping Toolkit, which we used for many years as guidance before the council amalgamated its information into formal documentation. This was our go-to bible, so a shout-out to the brilliant folks at http://www.itrevolution.com for providing it; many of its original concepts were retained when the PCI council released its formal documentation on scope and, eventually, within the standard itself. YES, scoping finally appears in the first few pages of the v4.0 and v4.0.1 iterations of PCI-DSS, so people will not get angry anymore.

Or will they?

We're seeing more and more of a phenomenon in the industry that we term scope creep. Ok fine, that's not our word. It's been in existence since the fall of Adam. Anyway, in a PCI context, some of our customers come back to us and state that their consultants, or even QSAs, insist on things being included in scope for NO REASON except that it is MANDATORY for PCI-DSS. Now, I don't want to say we have no skin in the game, but this is where I often end up arguing even with the QSAs we partner with. I tell them, "Look, our first job here is to help our customers. We minimize or optimize scope for them, reducing it to the most consumable portion possible, and if they want to do anything extra, let them decide on it. We're not here to upsell penetration testing. Or segmentation testing. Or risk assessment. Or ASV. Or policies and procedures. Or SIEM. Or SOC. Or logging. Or a basket of guavas and durians." Dang it, we are here to do one thing: get you PCI compliant and move on with our lives.

The trend we see now is that everything seems to be piled onto our clients: do this, do that. In the words of one extremely frustrated customer: "Every time we talk to this *** (name redacted), it seems they are trying to sell us something and get something out of us, like we are some kind of golden goose."

Now, obviously, if you are a QSA company doing that: STOP IT. Stop it. It's not only naughty and brings disrepute to your brethren in the industry, it's frowned upon and considered against your QSA code! Look at the article here.

Now, PCI scoping itself deserves a whole new series of articles, but I just want to zoom in on a particular scoping scenario that we recently encountered, in a merchant environment.

Now, many of our merchants have either or both of these scopes: card terminals to process card-present transactions at the stores, and an e-commerce site. There is one particular customer with card terminal POIs (points of interaction), traditionally known as EDCs (Electronic Data Capture terminals). Basically, this is where the customer comes in, takes out the physical card and dips/waves it at this device at the store. So yes, PCI is required for the merchant for the very fact that the stores have these devices that interact with cards. Now what happens after this?

Most EDCs have SIM-based connectivity now, going straight to the acquirer using ISO 8583 messages. These are already encrypted on the terminal itself and route through the telco network to the bank/acquirer for further processing. The other way is through the store network, routing back to the headquarters and then out to the acquirer. There are reasons why this happens, of course; one is that aggregating store traffic at HQ allows more visibility of the transactions and analysis of the traffic. The thing here is, the terminal messages are encrypted by the terminals, and the merchants do not have any access to the keys for decryption. This is important.

Now, what happened was that some QSAs have taken it into their heads that because the traffic is routed through the HQ environment, the HQ gets pulled into scope. And therefore, this particular traffic must be properly segmented, and then a segmentation PT needs to be performed. This could potentially lead to a lot of issues, especially if your HQ environment is populated with a lot of different segments: it could mean multiple rounds of tiring, tedious testing by the merchant team, or it could mean a profitable service done by your 'service providers'. (Again, if these service providers happen to be your QSA, you can see where the question of upselling and independence comes from.)

Now here's the crux. We hear these merchants telling us that their consultant or QSA says it's mandatory for a segmentation PT to occur in this HQ environment. The reasoning is that there is card data flowing through it. Regardless of whether it is encrypted or not, as long as there is card data, IT IS IN SCOPE. Segmentation PT MUST BE DONE.

But. Is it though?

The whole point of a segmentation PT is to demarcate out-of-scope from in-scope. Insisting on a segmentation PT is conceding that there is an in-scope segment or environment in the HQ. The smug QSA nods, as he sagely says, "Well, as QSAs, we are the judge, jury and executioner. I say there is an in-scope environment, regardless of encryption."

So, let's look at the PCI SSC and the standards. QSAs will point to page 14 of the PCI-DSS v4.0 standard, under "Encrypted Cardholder Data and Impact on PCI DSS Scope":

Encryption alone is generally insufficient to render the cardholder data out of scope for PCI DSS and does not remove the need for PCI DSS in that environment.

PCI-DSS v4.0.1 by a SMILING QSA RUBBING PALMS TOGETHER

Let's read further into this wonderful excerpt:

The entity's environment is still in scope for PCI DSS due to the presence of cardholder data. For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and there may also be paper reports or receipts with cardholder data. Similarly, in merchant card-not-present environments, such as mail-order/telephone-order and e-commerce, payment card details are provided via channels that need to be evaluated and protected according to PCI DSS.

So far, correct. We agree with this. Exactly as stated, PCI is in scope. The question here is: will the HQ get pulled into scope just for transmitting encrypted card data from the POIs?

Let's look at what causes an environment with card encryption to be in scope (reading further down the same section):

a) Systems performing encryption and/or decryption of cardholder data, and systems performing key management functions,

b) Encrypted cardholder data that is not isolated from the encryption and decryption and key management processes,

c) Encrypted cardholder data that is present on a system or media that also contains the decryption key,

d) Encrypted cardholder data that is present in the same environment as the decryption key,

e) Encrypted cardholder data that is accessible to an entity that also has access to the decryption key.

So let's look at the HQ scope. Does it meet any of these five criteria for encrypted card data being in scope for PCI-DSS? There is no encryption or decryption process performed there. The encrypted cardholder data is isolated from the key management processes. The merchant has no access to, or anything to do with, the decryption key.
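
Just to make that walk-through mechanical, here is a minimal sketch (our own Python illustration, not an official PCI SSC tool) of the five criteria as a checklist; if every answer is false, the encrypted CHD in that environment is a candidate for descoping:

```python
# A minimal sketch (our illustration, not an official PCI SSC tool) of the
# five criteria above, expressed as a mechanical checklist. Any True pulls
# the environment into scope; all False makes the encrypted CHD a candidate
# for descoping.
from dataclasses import dataclass

@dataclass
class EnvironmentFacts:
    encrypts_or_decrypts_chd: bool     # criterion (a): encryption/decryption systems
    performs_key_management: bool      # criterion (a): key management functions
    chd_not_isolated_from_keys: bool   # criterion (b)
    key_on_same_system_or_media: bool  # criterion (c)
    key_in_same_environment: bool      # criterion (d)
    entity_can_access_key: bool        # criterion (e)

def encrypted_chd_in_scope(env: EnvironmentFacts) -> bool:
    """True if any of the five criteria drags the environment into scope."""
    return any(vars(env).values())

# The HQ pass-through in our merchant scenario: every answer is False, so
# the encrypted traffic alone should not pull HQ into scope.
hq = EnvironmentFacts(False, False, False, False, False, False)
print(encrypted_chd_in_scope(hq))  # False
```

None of this replaces a QSA's judgment, of course; it just shows that the standard's test is a checklist, not a vibe.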

So now you see the drift. Moving down the page, we find it noted that where an entity receives and/or stores only data encrypted by another entity, and where it does not have the ability to decrypt the data, it may be able to consider the encrypted data out of scope if certain conditions are met. This is because responsibility for the data generally remains with the entity, or entities, with the ability to decrypt the data or impact the security of the encrypted data.

In other words: Encrypted cardholder data (CHD) is out of scope if the entity being assessed for PCI cannot decrypt the encrypted card data.

So now, back to the question: if this is so, then why does the merchant still need PCI? Well, because it's already provided for above: "For example, for a merchant card-present environment, there is physical access to the payment cards to complete a transaction and there may also be paper reports or receipts with cardholder data."

So therefore, the stores are always in scope. The question we have here is whether the HQ or any other area gets pulled into scope simply for transmitting encrypted CHD as a pass-through to the acquirer. In many ways, this is similar to why PCI considers telco lines out of scope. They simply provide the highway on which all these encrypted messages travel.

Now, of course, the QSA is right about one thing. They do have the final say, because they can still insist on customers doing the segmentation PT even if it's not needed by the standard. They can impose their own risk-based requirements. They can insist the clients do a full application pentest, or an ASV scan over all IPs not related to PCI. They can insist on clients getting a pink elephant to dance in a tutu in order to pass PCI. It's up to them. But guess what?

It's also up to the customer to change QSAs or get another opinion on this. There are plenty of QSAs about. And once more, not all QSAs are created equal, as explored in our article here, where we debunk common myths like whether having a local QSA makes any difference (it doesn't), whether all QSAs interpret PCI the same way (they don't), and how big a role independence and conflicts of interest should play, especially in scoping and working in the best interest of the customer rather than peddling services.

So, if you want to have a go with us, or just want an opinion on your PCI scope, drop a message to pcidss@pkfmalaysia.com and we will get back to you and sort out your scoping questions!


Major Changes of PCI v4

So now, as we approach the final throes of PCI-DSS v3.2.1, the remaining three weeks are all that is left of this venerable standard before we say farewell once and for all.

PCI-DSS v4.0 is a relative youngster, and we are already putting in hours of updates with our customers on the things they need to prepare for. Don't underestimate v4.0! While it's not a time to panic, it's also not a time to just lie back and think that v4.0 is not significant. It is.

Below is a table that provides an insight into the major changes we are facing in v4.0.

Bearing in mind that most of the requirements now start off with keeping policies updated and documenting roles and responsibilities, the major changes are worth a little bit of focus. In the next series of articles, we will go through each one as thoroughly as we can and try to understand the context in which it exists.

Let's start off with the one in the top bin: Requirement 3.4.2.

Req. 3.4.2: When using remote-access technologies, technical controls prevent copy and/or relocation of PAN for all personnel, except for those with documented, explicit authorization and a legitimate, defined business need

PCI v4.0

Ok, we have underlined and emphasized a few key points in this statement, because we feel they are important. Let's start with what 3.4.2 applies to.

It applies to: Remote Access

It requires: Technical Controls

It must: PREVENT THE COPYING/RELOCATION

Of the subject matter: Full Primary Account Number

In v3.2.1 this was found in section 12.3.10, with slightly different wording.

Req 12.3.10 For personnel accessing cardholder data via remote-access technologies, prohibit the copying, moving, and storage of cardholder data onto local hard drives and removable electronic media, unless explicitly authorized for a defined business need. Where there is an authorized business need, the usage policies must require the data be protected in accordance with all applicable PCI DSS Requirements.

PCI v3.2.1

I think 4.0, aside from relocating the requirement to the more relevant Requirement 3 (as opposed to Requirement 12, which we call the homeless requirement, home to any controls that don't seem to fall into any of the earlier requirements), reads better. Firstly, putting it in Requirement 3 puts the onus on the reader to consider this as part of the protection of stored account data, which is the point of Requirement 3. Furthermore, digging into the sub-requirement, the 3.4 section header states: Access to displays of full PAN and ability to copy PAN is restricted.

This is the context in which we find the child of this 3.4 section, called 3.4.2, and we need to understand it first, before we go out and start shopping for the first DLP system on the market and yell out "WE ARE COMPLIANT!"

3.4 talks about displays of FULL PAN. So we aren't talking about truncated or encrypted PAN here. In theory, if you copy out a truncated PAN or an encrypted PAN, you shouldn't trigger 3.4.2. It's specific to full PAN. While we are at it, we aren't even talking about cardholder data in general. A PAN is part of cardholder data, but not all cardholder data is PAN. Like the Hulk is part of the Avengers, but not all Avengers are the Hulk. So if you want to copy the cardholder name or expiration date for whatever reason (data analysis, behavioural prediction, stalking, etc.), this isn't the requirement you are looking for.
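
As an aside, since 'truncated' does a lot of work in that paragraph, here is a hedged sketch of what truncation means in practice. The exact digits you may keep vary by card brand and PAN length, but first six/last four is the commonly quoted display format:

```python
# A hedged illustration of truncation: keep at most the first six (BIN) and
# last four digits of a PAN for display. Exact permitted formats vary by
# card brand and PAN length, so treat this as a sketch, not a rule.
def truncate_pan(pan: str, keep_first: int = 6, keep_last: int = 4) -> str:
    digits = pan.replace(" ", "")
    masked = len(digits) - keep_first - keep_last
    if masked <= 0:
        raise ValueError("PAN too short to truncate safely")
    return digits[:keep_first] + "*" * masked + digits[-keep_last:]

print(truncate_pan("4111 1111 1111 1111"))  # 411111******1111
```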

Perhaps this is a good time to remind ourselves what Account Data, Cardholder Data and Sensitive Authentication Data (SAD) are. Account data covers both of the latter: cardholder data is the PAN plus the cardholder name, expiration date and service code, while SAD covers full track data, card verification codes/values and PINs/PIN blocks.

The previous v3.2.1 doesn't actually state 'technical controls', which goes to say that a documentary control, a policy control, or something in the Acceptable Use Policy could also pass off as compliant. V4.0 removes that ambiguity. Of course, the policy should still be there, but technical controls are specific. It has to be technical. It can't be, oh wait, I have a nice paragraph in section 145.54(d)(i)(iii)(ab)(2.4601) of my information security acceptance document that states this!

So these technical control(s) must PREVENT copying and relocation. Firstly, just to be clear: copying is Ctrl-C and Ctrl-V somewhere else. Relocation is Ctrl-X and Ctrl-V somewhere else. Both have their problems. With copying, we end up with PAN existing in multiple locations. With relocation, the PAN is moved, and systems accessing the previous location will throw up an error, causing system integrity and performance issues. Suffice to say, v4.0 demands the prevention of both happening to PAN. Unless you have a need that is:

a) DOCUMENTED

b) EXPLICITLY AUTHORIZED (not Implied)

c) LEGITIMATE

d) DEFINED

When a business need is both "documented" and "defined," it means that the requirement has been both precisely articulated (defined) and recorded in an official capacity (documented). So you need a list of people with access (the who), why they legitimately need to copy/relocate PAN in terms of their business, and explicit authorization by a proper authority (not themselves, obviously).
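
For illustration only, such a register might capture something like the sketch below; the field names are our own, not mandated by the standard:

```python
# A sketch of a 3.4.2 authorization register entry. Field names are our own
# illustration, not mandated by the standard; the point is that the who,
# the defined business need and the explicit authorizer are all recorded,
# with an expiry date to force periodic re-review.
from dataclasses import dataclass
from datetime import date

@dataclass
class PanCopyAuthorization:
    person: str          # who may copy/relocate PAN
    business_need: str   # the legitimate, defined business need
    authorized_by: str   # explicit authorizer (not the person themselves)
    valid_until: date    # forces the list to be re-reviewed

register = [
    PanCopyAuthorization(
        person="jane.ops",
        business_need="Quarterly encrypted extract for the financial ombudsman",
        authorized_by="Head of Compliance",
        valid_until=date(2025, 12, 31),
    ),
]
```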

Finally, let's talk about technical controls. Now, remember, this applies to REMOTE ACCESS. I've heard clients say: hey, no worries, we have logging and monitoring in place for internal users. Or we have a web application firewall in place. Or we have Cloudflare in place. Or we have a thermonuclear rocket in place to launch in case we get attacked. None of that is the point. This requirement already assumes 'remote access' into the environment: the users have passed the perimeter. It implies they are already trusted personnel, contractors or service providers with properly authorized REMOTE ACCESS. Also, note that the authorization here is NOT for remote access; it is for the explicit action of copying/relocating PAN. In this case, most people would probably not have a business reason to copy/relocate PAN to their own systems except for very specific business flows. This means only very few people in your organization should have this applied to them, under very specific circumstances. An actual real-life example: an insurance client of ours had to copy all transaction information, including card details, in an encrypted format onto removable media (like a CD-ROM) and send it over to the Ombudsman for Financial Services as part of a regulatory requirement. That's pretty specific.

So what passes off as a 'technical control'? A technical control may be as simple as completely preventing copy/paste or cut/paste ability when accessing via remote access. This can be done in RDP, or by disabling the clipboard via SSL VPN. While I am not the most expert product specialist in remote access technologies, I can venture to say it's fairly common to have these controls built into the remote access product. So there may not be a need for DLP in that sense, as the goal here is simply to prevent the copying and relocation of PAN.
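
To make that concrete, here is one hedged illustration, assuming a Windows RDP session host: the fDisableClip registry value under the RDP-Tcp WinStation corresponds to the "Do not allow Clipboard redirection" Group Policy setting. In any managed environment you would push this via GPO; the script form below just shows the knob:

```python
# A minimal sketch (Windows only, run as administrator): disable RDP
# clipboard redirection on a session host so remote users cannot copy data,
# PAN included, out to their local clipboard. fDisableClip corresponds to
# the "Do not allow Clipboard redirection" Group Policy setting and takes
# effect for new RDP sessions.
import winreg

RDP_TCP_KEY = r"SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RDP_TCP_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "fDisableClip", 0, winreg.REG_DWORD, 1)

print("RDP clipboard redirection disabled for new sessions.")
```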

Now, that being said, a blanket ban on copy and paste may not go down well with some suits or C-levels who want to copy stuff to their drive to work on while they are in the Bahamas. Of course. You could provide certain granular controls, depending on your VPN product or which part of the network they access. If a granular control cannot be agreed on, then a possible way is to enforce proper control via DLP (Data Loss Prevention) in endpoint protection, or to control access to the CDE/PAN via a hardened jump server with its local policy locked down. So the general VPN into company resources may be more lax, but the moment access to PAN is required, the 3.4.2 technical controls come into play.

In the end, you can justify your technical controls in a myriad of ways. What matters, of course, is cost and efficiency. It has to make cost sense, and it must not require your users to jump through hoops like circus monkeys.

So there you have it, a breakdown of 3.4.2. We are hopping into the next one in the next article, so stay tuned. If you have any queries on PCI-DSS v4.0 or other related cybersecurity needs, be it SOC 1 or 2, ISO27001, ISO20000, NIST or whether Apollo 11 really landed on the moon in 1969, drop us a note at avantedge@pkfmalaysia.com and we will get back to you!

Gearing Up: How New Cybersecurity Guidelines Accelerate the Automotive Industry Security

So here you are, in your spanking new SUV, fully EV and fully automated, with the most state-of-the-art systems built in. You get into the car, switch everything on, put on your favourite tune and head off to work. Suddenly, out of nowhere, your speakers go bonkers and an ominous voice says, "Now I've got you…". Your steering decides to turn despite your best efforts to right it, the accelerator depresses despite you taking your foot off the pedal, and your brakes don't work anymore. You watch helplessly as your car flies over the embankment at 120 km an hour.

Homicide by car. Open the pod bay doors, HAL.

This seems far removed from current reality, but it might not be as far as we think.

Cyberattacks have been on the rise in the traditional automotive industry in recent years, as cars become more dependent on circuits and electronics as opposed to mechanics and gaskets.

Connectivity defines the modern vehicle. With some cars containing over 100 million lines of code and processing nearly 25GB of data per hour, computerization radically reimagines mobility, enabling telematics, infotainment and autonomous drive capabilities that were unthinkable barely a decade ago. With this software-driven transformation, securing IT components against cyber risks grows ever more vital. As showcased by researchers commandeering functions like braking and steering via consumer Wi-Fi or compromised infotainment apps, hackers now have pathways into safety-critical vehicle controls. Highly automated models promise even larger attack surfaces.

In the future, mechanics will be phased out in favour of electronics engineers to fix cars. You would go to an electronics shop instead of a mechanic's shop. Say goodbye to the toothy uncle with the towel around his shoulder, shaking his leg in his greasy shirt.

Bearing this in mind, the Japanese automotive industry is making serious efforts to improve cybersecurity. The Japan Automobile Manufacturers Association (JAMA) and the Japan Auto Parts Industries Association (JAPIA) have both formed cybersecurity working groups. The two collaborated in 2019 to develop the JAMA/JAPIA Cybersecurity Guidelines, and on March 31, 2022, a second version was released to help steer the industry toward a more cyber-resilient course. Spanning 156 requirements aligned to internationally recognized standards, the guidelines furnish a sector-specific blueprint for fortifying defenses.

Who Do the Guidelines Target?

Given deepening connectivity between various players, the guidelines take broad aim across the mobility ecosystem:

  • Automobile manufacturers
  • Major Tier 1 parts suppliers
  • Software and semiconductor vendors tightly integrated into products
  • Telecommunications carriers facilitating connectivity
  • Fleet operations centers managing vehicle data
  • Components manufacturers farther down supply tiers
  • Aftermarket service providers accessing internal buses
  • Dealership networks bridging manufacturers and consumers
  • Academic partners feeding talent pipelines

Essentially, any entity handling sensitive intellectual property or providing critical products/services supporting vehicle R&D, manufacturing, sales, maintenance or communications should adhere to the prescribed cyber controls. This is fairly normal; as with other standards out there, subcontractors usually take the hit, as these standards are pushed down from the top.

While the guidelines focus on securing corporate IT environments, they spotlight risks from the increasing convergence of enterprise and industrial assets. As connected platforms, analytics and cloud infrastructures provide gateways for adversaries into production systems, shoring up corporate IT protection grows imperative.

Three-Year Roadmap for Enhancing Cybersecurity Posture

Given the significant dedication required to properly implement a comprehensive cybersecurity management program, the requirements are divided into three priority tiers reflecting basic, intermediate and advanced measures. The purpose of this is to demonstrate the minimum necessary countermeasures that must be adopted regardless of company size. This division allows organizations to methodically elevate their security stature over a three-year adoption roadmap:

Level 1 – Basic Security Hygiene (Mandatory):

The 35+ non-negotiable Level 1 controls target universals like access management, malware defenses, monitoring fundamentals, compliance auditing, encryption, and security training. These form the basic cyber hygiene mandatory across all auto sector entities. These requirements are intended to build a chain of security and trust between companies and their business partners, and are also applicable to small and medium-sized enterprises. Non-automotive industries might do well to adopt some of these as baseline cybersecurity practices. It's basically cybersecurity hygiene. And we all know Japan has the best hygiene in the world, right?

Level 2 – Best Practices (2 Years):

An additional 60+ intermediate requirements call out data protection expansions, enhanced monitoring/logging, vulnerability management, security testing and supply chain risk management practices. Deeper employee training and executive awareness campaigns also feature.

Firms handling sensitive IP or high transaction volumes are expected to adopt Level 1 and 2 guidelines covering both foundational and sector-specific heightened risk areas within two years.

Companies should implement these controls, especially if they meet one of the following conditions:

1. Companies handling external confidential information (technical, customer information, etc.) within the supply chain.

2. Companies with significant internal technology/information relevant to the automotive industry.

3. Companies with a reasonable size/share that could have a significant impact on the industry supply chain due to unexpected disruptions.

Level 3 – Advanced Protections (3 Years):

Finally, over 50 sophisticated measures comprise the advanced tier targeting state-of-the-art safeguards. Encryption ubiquity, advanced behavioral monitoring, automated validation testing, penetration assessments and further elevation of risk management programs defined here help drive the industry’s cybermaturity.

These practices showcase leadership, with Level 3 representing an ultimate target for manufacturers expected to benchmark sector-wide security.

Built-in Flexibility Accounts for Organization Size

The tiered model acknowledges the varying cybersecurity investment capabilities across the industry landscape. It allows smaller players an achievable Level 1 entry point before working toward the expanded Level 2 and 3 guidelines on a timeline proportional to organizational size and risk.

Again, compared to standards like PCI-DSS that also adopt a similar tiered approach to compliance, this makes sense, given the number of different entities affected by this standard.

Checklist Format Provides Clear Milestones for Growth

To ease adoption, requirements trace to numbered checkpoints within a detailed appendix. This enumerated format lets companies definitively benchmark postures against guidelines and methodically strengthen defenses while tracking progress.

Shared criteria similarly help suppliers demonstrate security improvements to automaker customers through consistent maturity evaluations, facilitating trust in the supply chain.

Guidance Tuned to Automotive Sector Risk Landscape

Along with staging requirements by attainability, the guidelines tailor controls and concepts to risks distinct from other industries. While mapping extensively to internationally recognized standards like NIST and ISO 27K, the authors customized the content to the sector's specialized threats and priorities.

For example, Level 1 mandates continuous monitoring for unauthorized access or malware activity. This acknowledges the havoc a breach could wreak within an interconnected web of automakers, parts suppliers and assembly lines. Different secure zones and security focuses blur the lines: if (or when) a breach occurs, whose problem is it, and how do we track it?

The repeated emphasis on supply chain oversight, information exchange standards and third-party security likewise reflects the complex hand-offs and trust relationships fundamental to mobility ecosystem operations.

Build Cyber Resilience Across Fragmented Environments

As vehicles evolve into software-defined platforms, cyber principles growing from these Japanese guidelines can shape sector-wide baseline resilience. Automotive IT interconnectivity will only intensify, making a comprehensive, unified cybersecurity strategy essential. The scenario of the killer SUV may still be well into the future, but everything starts somewhere, and as the world moves further into the electronic and the artificial, so too grows our dependence on the everyday technology we take for granted.

Whether a global manufacturer or a tiny niche parts maker, each player shares responsibility for hardening the greater environment. Just as drivetrains integrate thousands of precision components into harmonized mechanical systems, robust digital defenses emerge from many entities working in sync.

Implementing defined building blocks now allows the industry to preemptively navigate obstacles that could imperil the revolutionary mobility pursuits ahead. For those seeking secure footing in the auto sector's cyber journey, this three-year roadmap paves a straight path forward. It isn't just for Japanese companies, but for any company, whether in Malaysia or other regions, that does business with Japanese automakers. This is a clarion call to the industry that cybersecurity should be foremost on the board's agenda. Contact us at avantedge@pkfmalaysia.com and we will immediately get back to you. With our Japanese auditor and implementation partners, we can assist you in any way you need in navigating this standard.

Unless, of course, you are in your Killer SUV. In that case, we can't navigate that. Good luck!

Alienvault USM Anywhere Updates

We just received very good updates from the Alienvault channel team (or the AT&T Cybersecurity team, as they call themselves now). I think we can quickly summarise our excitement into two short phrases:

a) Google Cloud Support – Heck Yeah.

b) Custom Plugin Development – Heck Yeah!

Of course, there were tons of other updates as well, such as scheduled reports, a unified UI, more AlienApps support, Cloudflare integration (which is very interesting, as we can tie actions to it, effectively making Alienvault function more like an active prevention system, as opposed to its traditional detective role), new search capability incorporating wildcard searches, and advanced asset importing through CSVs as opposed to rudely scanning our clients' networks.

But the two main courses were the Google native support and the custom plugins.

Google native support has been a pain point for years. We have customers moving into GCP, or already in GCP, where we have been constantly battling to match their expectation that Alienvault perform as seamlessly as it does on AWS – but it couldn't. We had to rely on EDR (endpoint detection and response), for instance, where the agent grabs logs a la HIDS and sends them over to the server directly. Of course, the areas where a native sensor would function – creating an internal VPC filter mechanism, say, or doing vulnerability scanning without too much inter-VPC traffic – could not be covered by the EDR, so it was very much a band-aid. We knew that our patched-up GCP solution wasn't functioning as well as its handsomer and more dashing brother, AWS. In other words, it kinda sucked.

GCP custom applications also presented their own set of issues – custom apps were difficult to integrate. Even with Stackdriver, or with us logging to BigQuery, sending these logs on to Alienvault presented a lot of problems. When we could configure sending to BigQuery, we couldn't filter properly, causing our customer's 1TB-per-month quota to be annihilated within days. And getting Pub/Sub to work with Alienvault requires APIs to be written, and on top of that Alienvault has to write the custom plugins – all of which adds to professional services costs and, more importantly, the resource and time cost of the project.
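
For what it's worth, filtering at the export itself is what keeps that quota sane. A minimal sketch, assuming the google-cloud-logging Python client library; the project, topic and filter values are placeholders:

```python
# A minimal sketch, assuming the google-cloud-logging client library
# (pip install google-cloud-logging). It creates a log sink that exports
# only entries matching a filter to a Pub/Sub topic, instead of shipping
# everything and annihilating the monthly quota. Project and topic names
# are placeholders.
from google.cloud import logging

client = logging.Client(project="my-gcp-project")

# Export only ERROR-and-above entries from Compute instances.
log_filter = 'resource.type="gce_instance" AND severity>=ERROR'
destination = "pubsub.googleapis.com/projects/my-gcp-project/topics/usm-events"

sink = client.sink("usm-filtered-sink", filter_=log_filter, destination=destination)
if not sink.exists():
    sink.create()  # the sink's writer identity still needs publish rights on the topic
```

Even then, something still has to consume the subscription and get the events into Alienvault, which is exactly the custom API and plugin work referred to above.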

So what happens now? In the next General Availability release of USM-A, GCP will be supported. The information is sparse, so more updates will be forthcoming. But the GCP sensor will be able to:

a) Perform threat detection (like all other sensors) and asset discovery, and provide alarms, events, widgets, correlation, etc. Basically, it will be native to GCP, doing what it already does for AWS, Azure and on-prem Hyper-V and VMware.

b) Detect VPC flow logs

c) Monitor cloud services through Stackdriver

The last bit is very important. Stackdriver, in essence, is GCP's answer to AWS's CloudWatch and CloudTrail. It monitors and manages services, containers, applications and infrastructure for the cloud. If you run cloud services or develop cloud applications, you should be able to support Stackdriver logging. In GCP Compute, the logging agent is used to stream logs from VM instances. It can even provide traditional network flow logs (VPC flow logs), which MSPs can use to monitor network health, etc. In other words, this ugly GCP little-brother solution is going to get buffed. We're going to look a lot better now.

The roadmap is bright: automatic response actions against a cloud service when a security event occurs, putting Alienvault into more of a proactive stance than the detective one it traditionally takes. This is similar to what the Cloudflare integration is achieving. More and more GCP services will be added and supported. There is also a topic on "User Entity Behaviour Analytics", which is basically matching behaviour to normal baselines and telling us that Bob is having coffee at 10 am instead of his usual 8 am, which means he was running late to work, which means he got stuck in traffic, which means he left the house late, which means he woke up late, which means he slept late last night, which means he went out for a drink with someone and got smashed, which could possibly mean he is having an affair with a stripper named Daisy. Maybe.

So, pretty exciting times, Aliens!

The other item on the plate wasn't on the normal discussion agenda but was brought up by us on the international call – we bombarded the screen with around 10–15 queries and at least 4 made it to the table. One of them was: when the hell are we going to get to write our own plugins?

No offence to Alienvault, who currently write our clients' custom plugins for USM-A – but 3–4 weeks isn't really going to cut it. Furthermore, sometimes we don't even get what we want from the custom plugins. We don't blame Alienvault. The application is ours (as in, our client's). We are the ones who know the events and the priorities. We know what we want to see. We just can't develop the plugins the way we do now for our USM Appliance clients.

Imagine the win-win situation here. We write plugins for clients (assuming it's similar to the Appliance); within 2–3 days we are done. Testing, another 1–2 days. Instead of setting the project timeline back 3–4 weeks, we are 1 week in. That's a HUGE impact for compliance clients who are often chasing a deadline. 3 weeks squashed to 1? Hell, yeah! The win is also for Alienvault. They don't have to deal with nagging customers or smart-ass channel partners like us banging on them for not updating us on our new application plugin. Imagine the parties engineers can now attend instead of writing regex for a company operating in Elbonia. Imagine the time they can now save and spend socialising with the rest of the world, or having the chance to meet people like Daisy.
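
To make 'writing regex' concrete: a plugin is, at heart, a set of regular expressions that chop a raw log line into normalized event fields. A minimal Python sketch with a made-up application log format (the field names and the log line are hypothetical; a real USM plugin expresses this in its own .cfg format rather than Python):

```python
# A sketch of the kind of parsing a custom plugin does: a regular expression
# that chops a raw application log line into normalized event fields. The
# log format and field names here are hypothetical.
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\S+ \S+) (?P<host>\S+) myapp: "
    r"user=(?P<user>\S+) src=(?P<src_ip>\d{1,3}(?:\.\d{1,3}){3}) "
    r"action=(?P<action>\w+) result=(?P<result>\w+)"
)

def parse_event(line: str):
    """Return normalized event fields, or None if the line doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = "2019-08-01 10:02:33 app01 myapp: user=bob src=10.1.2.3 action=login result=failure"
print(parse_event(sample))
```

The point isn't the Python; it's that whoever owns the application can write and test this mapping in days, not weeks.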

It’s a whole new world, really.

So, Alienvault, please, get those updates to us as soon as you can and the world will be a better place for it.

If you need any information on Alienvault, or general help with your SIEM or PCI-DSS compliance, drop us an email at alienvault@pkfmalaysia.com and we will attend to it immediately!

PPWG (Protection Profile Working Group) Workshop at the Lexis

On the 10th–11th of October 2013, we had a meeting of all the Protection Profile Working Groups (PPWGs) at the Lexis Hotel, Port Dickson.

The PPWG is an initiative under Thrust 3 (the Cyber Security Technology Framework) of the National Cyber Security Policy (NCSP), which in turn addresses cyber risks pertaining to Malaysia's Critical National Information Infrastructure (CNII). Four PPWGs were established:

1. Data Protection

2. Network Devices

3. Application

4. Smart Card and related devices

The idea behind this was to set up standards and frameworks for developers to adhere to, to ensure information security is embedded in the system instead of tacked on. We are, in aspiration, something like the National Institute of Standards and Technology (NIST) in the US.

PKF Avant Edge was formally invited at the beginning of this year to be part of the PPWG3 group, comprising representatives from MIMOS, CyberSecurity Malaysia, IRIS, Bank Negara and a few other private companies. In our first meeting, there were several representatives from industry aside from the ones named above; but by the time this workshop rolled in, and after several iterations of all-day meetings to discuss the standards and the protection profile for banking applications, we were the only ones left.

The idea behind PKFAE's participation and our continuous support for the PPWG is not so much profit as philosophy. We don't get anything out of it. The meetings run all day, 9–5, at MIMOS' HQ in Technology Park, and PKFAE's representative is the managing director himself, not any other member of the company. So from a time-cost perspective, it doesn't make much sense for us to be part of it. But our philosophy has always been to balance profitability with responsibility. This is why we give free workshops on the Personal Data Protection Act and project management; why we give free talks and contribute to universities; why we spend time engaging the government and educational societies in raising information security awareness: we don't get paid at all, and yet we do it. The underlying idea is to contribute back to the industry you are part of. If not in charity or donations, then in time and value. It does sound utopian, but we started the company with these basic tenets, so why not just continue?

As such, aside from the government agencies, we are one of the few, if not the only, consulting firm participating in our PPWG. It takes a lot of hard work and sacrifice, all without any fees. We are not looking for any reward; it is simply something we feel we need to be part of, as a basic part of who we are.

Once in a while, it’s still nice to get away from it all to Port Dickson, of course.

Good view from my room

Session ongoing at one of the PPWGs
