
The Trouble with UASP

Taking a break from our compliance articles, here is a simple hack for a problem that had been plaguing us for some time. Our backup program is fairly comprehensive: we use a lot of different backup options, and we are of the philosophy that you can never back up enough. Never. And anyway, PKF Malaysia is pretty big. We have offices in KL, Penang, Ipoh, KK, Sandakan, KB and Cambodia under us. That's a lot of data. One of the backup options we use is automated backups to large external USB drives for archival.

While not our primary means of backup, it was still pretty frustrating when two of our external drives (the same brand) started acting up all of a sudden. We don't know why. They were working well enough, then suddenly just stopped functioning properly. Like a teenage kid. We found that backups to these external drives would regularly hang. At first, we thought it was just a connection issue (USB failing etc.), so we took them out and tried them on various Windows servers (not *nix, as those were critical and we didn't want to mess around with them). The idea was to have these drives function on Windows only anyway. The same issue was observed on all the Windows servers, and even on workstations and laptops.

The issue was simply excruciatingly slow transfer speeds and a graph looking similar to this:

Now that doesn’t look like a healthy drive.

What was happening was that the drive would whirr up once we tried to transfer files, and the disk's active time would start at 1%. Within a second it would hit 50%, and within 2-3 seconds, 100%. By that point, the transfer speed would have crawled up to around 15-20 MB/s, and then it would drop back to zero as the disk recovered. This happened again and again, making data transfers impossible, especially the big backups we ran.

After trying a firmware update with no success, we troubleshot as follows:

a) Switched cables for the drives: Same issue

b) Switched servers / workstations: Same issue. We didn't try the drives on *nix because those systems were critical to us, but in hindsight it was probably something we could have done to isolate whether the problem was the drive or the OS.

c) Tried other external drives: These worked fine. We kept one of these older drives as a control group, so we knew what a working drive looked like.

d) Switched from USB 3.0 to USB 2.0: This is where we began to see something. The switch to USB 2.0 made the drives work again. Transferring at USB 2.0 speeds wasn't why we purchased these drives, of course, but together with the control group this was very important in troubleshooting, as it gave us a reference point and helped pinpoint where the issue was.

We ran all the disk checks on it and everything turned out great. It's a healthy drive. So what was the issue?

We opened a support ticket with the vendor of this drive and, surprisingly, got a response within 24 hours asking us for the usual things: serial number, OS, error screens, disk management readings etc. So we provided and explained everything, and then followed up with:

We have already run chkdsk, scannow, defrag etc and everything seems fine. Crystaldisk also gave us normal readings (good) so we don’t think there is anything wrong with the actual drive itself. We suspect it’s the UASP controller – as we have an older drive running USB (Serial ATA) which was supposed to be replaced by your disk – and it runs fine with fast transfer speed. We suspect maybe there is to do with the UASP connection. Also we have no issues with the active time when running on the slower 2.0 USB. Is there a way to throttle the speed?

We did suspect it was UASP (USB Attached SCSI Protocol) that this drive was using. Our older drives were using traditional BOT (Bulk Only Transport), and their USB 3.0 was working fine. Apparently UASP was developed to take advantage of the speed increases of USB 3.0. I am not a storage guy, but as I understand it, UASP can handle transfers in parallel while BOT handles them sequentially. I would think UASP is like my wife in conversation: taking care of the kids, cooking, looking at the news, answering an email to her colleague, sending a WhatsApp text to her mum and solving world hunger. All at once. While BOT is me, watching TV, unable to do anything else until the football game is over. That's about right. So UASP is supposed to be a lot faster.

UASP requires both the OS and the drive to support it. The Windows driver supporting it is uaspstor.sys. You can check this by looking at the connected drive's properties in Device Manager, or through the detailed properties in the Disk Management screen. Interestingly, our older drives loaded usbstor.sys, which is the traditional BOT driver.
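As a quick alternative to clicking through Device Manager, here is a minimal sketch using Windows' built-in driverquery tool to see which of the two storage drivers is loaded (the script is our own illustration, not a vendor utility):

```python
# Quick check of which USB storage driver Windows has loaded,
# using the built-in "driverquery" command-line tool.
import subprocess

out = subprocess.run(["driverquery", "/v"], capture_output=True, text=True).stdout
for line in out.splitlines():
    lowered = line.lower()
    if "uaspstor" in lowered or "usbstor" in lowered:
        print(line)  # uaspstor = UASP, usbstor = traditional BOT
```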

UASP is backward compatible with USB 2.0, so we don't really know why USB 2.0 worked while USB 3.0 didn't. The symptoms were curiously similar: even at USB 2.0, the active time would ramp up, but we think that because USB 2.0 didn't have enough pipe to saturate the drive, transfer rates sat around 30 MB/s and the drive's active time peaked around 95%. Because it never hit the 100% threshold that made the drive dial down, we didn't see the spikes up and down that we saw on USB 3.0. Still, an active time of 90+% isn't that healthy, from a storage non-expert's perspective.

The frustrating thing was, the support came back with:

Thank you very much for the feedback. Regarding to the HDD transferring issue, we have to let you know that the performance of USB 3.0 will be worse when there is a large amount of data and fragmented transmission. The main reason is the situation caused by the transmission technology not our drive. For our consumer product, if the testing result shown normal by crystaldiskinfo, we would judge that there is no functional issue. Furthermore, we also need to inform you that there is no UASP function design for our consumer product. If there are any further question, please feel free to contact us.

One of the key things I always tell my team is that tech support, as the first line of defence for your company, MUST always know how to handle a support request. The above is an example of how NOT to do tech support.

Firstly, deflecting the issue from your product. Yes, it may well be that your product is not the issue. But when a customer comes to you asking for help, the last thing they want to hear is, "Not my problem, fix it yourself." That's predominantly how I see tech support, having worked in it for many years myself. There's a secret script where we segue the complaint to where we are no longer accountable. For instance: did you patch to the latest level? Did you change something that breaks the warranty? Did you do something we told you not to do in one of the trillion lines of our user license agreement? HA!

So no, don't blame the transmission technology. Deflecting the issue and declaring the product blameless is what we in tech support call the "I'll-do-you-a-favor" manoeuvre. By establishing that they are no longer obligated to assist, any further discussion on the topic becomes a 'favor' they are doing for you, and they can exit at any point. It makes tech support look good when the issue is resolved, and if the going gets too tough, it's not too hard to say goodbye.

Secondly, don't assume all your clients are idiots. With the advent of the internet, the effectiveness of bullshitting has decreased dramatically since the days of charlatans and hustlers peddling urine as a teeth whitening product. They really did. Look it up. Maybe a full drive would have some small impact on performance. But a quick look at the graph shows a drive that is absolutely useless, not due to a minuscule performance issue but due to an obviously bigger problem. What kind of logic is it that a drive, once it reaches a certain percentage of storage, is rendered useless?

Lastly, know your product. Saying there is no UASP function is like us telling our clients PCI-DSS isn't about credit cards but about, um, the mating rituals of tapirs. Tech support unfortunately did not bother to get the facts correct, and the whole response came across as condescending, defensive and uninformed.

So we responded:

I am not sure if this is correct, as we have another brand external drive working perfectly fine with USB3.0 transmission rates, whether its running a file or transferring a file to and from the external drive. Your drive performance is problematic when a file was run directly from the drive, and also transmission to copy file TO the drive. As we say, the disk itself seems ok but regardless, the disk is not usable when connecting to our USB 3.0 port, which most of our systems have, that means the your external drive can only work ok for USB 2.0. We suggest you to focus the troubleshooting on the UASP controller.

Further on, after a few rounds of back and forth in which they told us they would recheck, we responded:

Additional observation: Why we don't think its a transmission problem, aside from the fact other drives have no issues, is that when we run a short orientation video file from your drive, your drive active time ramps up to 100% quickly - it goes from 1% - 50% - 100%, then the light stops blinking, and it drops back to 1% again, and it ramps up again.  This happens over and over. We switched to other laptops and observed the same issue. On desktops as well, different Windows systems.  What we don't want to do is to reformat the whole thing and observe, because, really, the whole reason for this drive is for us to store large files in it as a backup, and not have a backup of backup. We have also switched settings to enable caching (and also to disable it) - same results. The drives are in NTFS and the USB drivers have been updated accordingly. We have check the disk, ran crystaldisk, WMIC, defrag (not really needed as these are fairly new), but all with same results.  We dont find any similar issues online, of consistent active time spikes like we have shown you, so hopefully support can assist us as well.  

After that, they still came back saying they needed more time with their engineers, and kept asking whether other activities were going on with the server, and observing that the disk was almost full. We did a full 6-page report for them, comparing CrystalDiskInfo results from all our other drives (WD, Seagate), pointed out specifically that their disks were the ones with the spike issues, and requested again that they check the controllers.

After days of delay, with their engineers supposedly looking into it and our backups stalled, they came back with:

Regarding to the HDD speed performance, we kindly inform you that the speed (read/write) of products will be limited by different testing devices, software, components and testing platforms. The speed (read/write) of products is only for reference. From the print screen that you provide,  we have to let you know that the write speed performance is slow because there are data stored in this drive. Kindly to be noted that the speed may vary when transferring huge data as storage drive or processing heavy working load as storage drive. Besides, please refer to the photo that we circled in red, both different drive have different data percentage.  For our drive, the capacity is near full, then it is normal to see the write speed performance slower than other drive.

This response was baffling. The drive wasn't just slow; it was unusable. Because there is data stored in the drive? Normal to see slower write speeds?

After almost a week and a half talking to this tech support, they surmised (with their engineers) that their drives could not perform because there was data stored on them. It makes one wonder why disk drives are created, if not to store data.

Our final response was:

We respectfully disagree with you. Your drive is unusable with those crystaldisk numbers and I am sure everyone will agree to that. You are stating your drive is useless once it starts storing data, which is strange since your product is created to store data. Whether the drive is half full or completely full is not the point, we have run other drives which are 95% full and which are almost 100% full with no issues. Its your drive active time spiking up to 100% for unknown reasons, and we have not just one but two of your drives doing this. We have insisted you to look at the controller but it seems you have not been able to troubleshoot that.  I believe you have gone the limit in your technical ability and you are simply unable to give anymore meaningful and useful support, and I don't think there's anything else you can do at that will be of use for us .We will have to mark it out as a product that we cannot purchase and revert back to other drives for future hard drive purchase and take note of your defective product to our partners. 

And that was the end. Their tech support simply refused to assist and kept blaming a non-issue (data being stored on the disk), which made absolutely zero logical sense. It was a BIZARRE technical conclusion to arrive at, and an absolute lesson in what not to do in tech support.

That being said, we temporarily resolved the issue with a workaround: disabling UASP on one of our servers and forcing it to use BOT. This hack was taken from this link, so we don't claim credit for the workaround.

Basically, you go to C:\Windows\System32\drivers and rename uaspstor.sys to uaspstor.sys.bk (or whatever) to back it up. Then copy usbstor.sys to uaspstor.sys.
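For reference, here is a minimal sketch of that swap in Python (our illustration of the forum hack, not an official procedure; run it elevated, and note that Windows file protection may block the copy, in which case take ownership of the files or do it in safe mode, as noted below):

```python
# Swap the UASP driver for the BOT driver (run from an elevated prompt).
import os
import shutil

drivers = r"C:\Windows\System32\drivers"
uasp = os.path.join(drivers, "uaspstor.sys")

shutil.copy2(uasp, uasp + ".bk")                           # back up uaspstor.sys
shutil.copy2(os.path.join(drivers, "usbstor.sys"), uasp)   # overwrite it with the BOT driver
print("Done. Reboot and re-plug the drive.")
```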

Depending on your system, you might have to do this in safe mode; we managed to do it without. Reboot, plug in the troublesome drive, and it works. Some forums say to go to the registry and redirect the UASPStor ImagePath to usbstor.sys instead of uaspstor.sys. However, this is problematic: when you then plug in a traditional external drive using BOT, it doesn't get recognised, because (we think) usbstor.sys gets locked for usage somehow. Copying usbstor.sys over uaspstor.sys seems to trick Windows into using the BOT driver while thinking it's using the troublesome UASP driver.

Now, obviously this isn't a solution, but it's a workaround. We just plug these drives into one of the older servers, which we are using as a temporary backup server until we figure out a long-term fix.

But yeah, the real lesson was in our interaction with tech support, and hopefully we all get better because of it! For technical solutions and support, drop us an email or a comment below and we will get to you quickly, and in parallel (not sequentially)!

PCI-DSS and Vendors

One of the things we advisors and consultants do, as part of the journey to get our clients to comply with PCI-DSS, is the inevitable (and unenviable) task of dealing with vendors. A vendor can be classified as anyone or any company selling a service or product to our client that directly or indirectly relates to or affects their PCI-DSS compliance. Examples include firewall vendors, encryption technology vendors, HSM vendors, server vendors, virtualisation solution vendors, SIEM vendors, SOC providers, call center solution vendors, telemarketing services, hosting providers, cloud providers, and the list goes on. Having dealt with hundreds of vendors over the course of a decade, we have come across all kinds: some understanding, some hostile, some dismissive, some helpful, and the list goes on.

But there is always a common denominator: vendors all start by justifying that their product or service is either:

a) not relevant to PCI-DSS compliance (usually because they don't store card data), or

b) already PCI acceptable (when it really isn't, or when we have questions about certain aspects of it).

It always begins with these two start points and it can then branch off into a myriad of different plots, twists, turns and endings, very much like a prolonged Korean drama.

With this in mind, we recently had an interesting call with one such vendor, who runs a fairly important PCI subsystem for one of our clients. The problem was that their logs and console had a combination we sometimes find: truncated and hashed values of the same credit card, grouped together.

Now, just a very quick recap:

a) Truncated card data – This is where you see parts of the card number replaced by XXXX (or any other character) and the full card number is not STORED. It must be noted that TRUNCATED and MASKED are treated differently, although the terms are often confused and used interchangeably. When we say something is MASKED, it generally means the PAN (Primary Account Number) is stored in full but not displayed in full on the console/application etc. This sometimes applies to call centers or outsourced services where the full PAN is not required for back-office operations but is kept for reconciliation or reference. TRUNCATED means that even in storage, the full PAN is not present.

b) Hashed card data – Hashing is a one-way transformation of card data into a hash value, with no way to reverse it (unlike encryption). If we run a SHA-256 hash over a PAN, we get a fixed result. Fraud management systems may store this hashed PAN in order to identify transactions by that particular card, without worrying about storing the actual card data. It's like the hashing of passwords, where the actual password isn't known. (See the sketch after this list for both concepts.)
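To make the two concrete, here is a minimal illustrative sketch (our own example using a well-known test PAN, not any vendor's implementation; real systems should also salt the hash, per PCI guidance):

```python
import hashlib

pan = "4111111111111111"  # a well-known test PAN, not a real card

# Truncation: the middle digits are discarded, not just hidden on screen.
truncated = pan[:6] + "X" * (len(pan) - 10) + pan[-4:]
print(truncated)  # 411111XXXXXX1111

# Hashing: a one-way, fixed-length transformation of the full PAN.
hashed = hashlib.sha256(pan.encode()).hexdigest()
print(hashed)  # the same PAN always produces the same digest
```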

It should be noted that, when done properly, either of these on its own may even be considered entirely out of scope for PCI-DSS. The problem arises when both are stored together and can be correlated: this renders the data protection weaker than having just one of the controls. This is probably where the concept usually gets lost on clients implementing these controls, as we have seen many times before – for example, tokenized information being stored together with truncated values.

Even PCI-DSS itself states clearly, in the note to requirement 3.4, that "Where hashed and truncated versions of the same PAN are present in an entity's environment, additional controls must be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN."

To clarify, it doesn’t mean that it CANNOT be done, but additional controls must be in place. A further look at this is found in the Tokenization Product Security Guidelines Supplementary document:

IT 1A-3.b: Verify that the coexistence of a truncated PAN and a token does not provide a statistical advantage greater than the probability of correctly guessing the PAN based on the truncated value alone.

Further on:

…then the vendor provides documentation to validate the security strength (see Annex C – Minimum Key Sizes and Equivalent Key Strengths for Cryptographic Primitives) for each respective mechanism. The vendor should provide a truncated PAN and irreversible token sample for each.

And furthermore in Tokenization_Guidelines_Info_Supplement.pdf:

Note: If a token is generated as a result of using a hash function, then it is relatively trivial effort for a malicious individual to reconstruct original PAN data if they have access to both the truncated and hashed version of the PAN. Where hashed and truncated versions of the same PAN are present in the environment, additional controls should be in place to ensure that the hashed and truncated versions cannot be correlated to reconstruct the original PAN.
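To see why the note calls this "relatively trivial", consider a minimal sketch (our own illustration, assuming first-6/last-4 truncation and an unsalted SHA-256 hash): with the first six and last four digits known from the truncated value, only the middle six digits are unknown, so at most a million candidates need to be hashed:

```python
import hashlib

# What the attacker sees: a truncated PAN and an unsalted hash of the full PAN.
first6, last4 = "411111", "1111"
target_hash = hashlib.sha256(b"4111111111111111").hexdigest()

# Only 10^6 middle-digit combinations to try - seconds of work.
for middle in range(10**6):
    candidate = f"{first6}{middle:06d}{last4}"
    if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
        print("PAN reconstructed:", candidate)
        break
```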

So, in short, any time we see hashed and truncated values together, we need to validate the controls further. A good writeup summarising the issues surrounding this can be found here at another blog.

However, as our call with this particular vendor continued, he demonstrated just how vendors should (and should NOT) approach PCI-DSS compliance, which sort of inspired this post:

A) DON'T position yourself as the topical expert in PCI-DSS: Don't. Not because you aren't knowledgeable, but because you are representing a product or a service, so you will always view certain things through the lens you have been trained on. I know, because I was with vendors for many years, and most of our consultants are from vendor backgrounds. He immediately started by stating that he is extremely well versed with section 3.4 of PCI-DSS (which covers the four options for protecting stored cardholder data), and that he has gone through this conversation many times with consultants. This immediately raises red flags with the QSA: once a vendor starts moving away from what they know (their product) to what they may think they know but generally don't (PCI-DSS), and once they sound defensive, the auditor is put on the defensive too. It should be collaborative. DO state clearly that you are a subject matter expert in your own field and are open to discussion.

B) DON'T recover by going 'technical': In his eagerness to demonstrate his opinion on PCI, he insisted that we all should know what 3.4 is about. Concerning the four controls stated in PCI-DSS (tokenization, truncation, hashing, encryption), he claimed that his product is superior to what we are used to because it implements 3 out of 4 of these controls (hashing, truncation and encryption), and that this makes it even more compliant with PCI-DSS. At this point, someone is going to call you out, which is what we did, reluctantly, as we were all staring at each other quizzically. We had to emphasize that we really couldn't bring this to the auditor or justify it to our client, who was also on the call, as this is an absolute misinterpretation of PCI-DSS, no matter what angle you spin it from. PCI never told us to implement as many of these options as possible. In fact, it clearly states that if more than one of them is present, extra care must be taken to ensure the values cannot be correlated back to the PAN. We told him this was a clear misinterpretation, to which he responded with a long discourse on how we consultants are always 'harping' on impractical security suggestions and always think it's easy to crack hashes just because we know a little bit about 'rainbow tables'. We call this "going technical". As Herman Melville, the dude that wrote Moby Dick, puts it:

“A man of true Science uses but few hard words and those only when none others will serve his purpose; whereas the smatterer in Science… thinks that by mouthing hard words he understands hard things”. – Dude that wrote Moby Dick.

Our job is really to uncomplicate things and not to make it sound MORE complicated, because there may always be someone in the room (or video conference) who knows a little more than what they let on.

DO avoid jargonizing the entire conversation, as it is very awkward for everyone, especially those who really know the subject. DO allow input from others and look at things from the point of view of the standard, whether you agree or disagree, keeping in mind that the goal is common: to make the client compliant.


C) DO find a solution together. As a vendor, remember: the team is with the client. The consultant is (usually) with the client. So it's the same team. A good consultant will always want vendors to work together. If a vendor cannot implement certain things, we always try to work out what we can do instead, then talk to the QSA and reason things out: compensating controls etc. So the solution needs to be found together. Finally, after all those awkward moments of mansplaining, we just went: "OK, let's move on, these are the limitations, let's see where the solution is." And after around 5 minutes, we had a workaround sorted out. Done. No fuss. The next step is to get this workaround passed by the auditor for this round; if not, we are back to discussing, and if yes, everyone moves on to other issues. Time is of the essence, and the last thing we need is each of us trying to show off the size of our brains to each other.

D) DON'T namedrop or look for shortcuts to resolve issues. One of the weirdest things said in the conversation, after all our solution discussion, was when the vendor claimed he knew who the QSA was, dropped a few names, and said: just tell the QSA it's so-and-so, we've worked together, and he will understand. Firstly, it doesn't work like that. Namedropping doesn't let you pass PCI. Secondly, no matter how long you have worked with someone, remember that another person in the room may have known them longer. We've been working with this QSA since before they were even in the country, for a decade, so we know everyone there. If namedropping could pass PCI, we would be passing PCI for every Tom, Dick, Harry and Sally around the world. No, it doesn't work that way; we need to resolve the issues.

So there you have it. This may sound like a rant, but the end of the conversation was actually somewhat amicable. Firstly, I was genuinely appreciative of the time he gave us. Some vendors don't even come to the table to talk, and the fact that he did is a good step forward that made our jobs easier. Secondly, we did find the workaround together, and that he was willing to agree to a workaround at all is a hard battle won. Countless vendors have stood their ground and stubbornly refused to budge even with PCI non-compliance screaming in their faces. Thirdly, I think, after all the "wayang", he genuinely believed in helping our client and really thought his product was compliant in all aspects. His delivery was awkward, but the intention was never to make life difficult for everyone; it was to be of assistance.

In the end, the experience was a positive one, given how many discussions with vendors go south. We learned more about their solution, we worked out a solution together and, more importantly, we think this will pass PCI for our client. So everyone wins. In this case, the Korean drama ended well!

For more information on PCI-DSS, drop us a line at pcidss@pkfmalaysia.com and we will get back to you immediately! Stay safe!

PCI-DSS Cheatsheet

As we approach the end of the decade, we are coming up on 16 years since PCI-DSS was first introduced back in 2004. 16 years. That's probably a full dog lifetime. I would imagine the guys back in 2004 thought: "Let's just get version 1 out this year. I'm sure the next generation of brilliant minds will figure everything out by 2020."

So now we are a few ticking days away from 2020 and yet, at the end of the line, I am still answering calls that are increasing as the days go by: What is PCI-DSS and how do we get it?

Most of these callers reach us because our name is listed pretty high when someone searches for PCI-DSS Malaysia; a majority of the rest were referred by one of our clients. We have faced many variations: some request us to provide them with a PCI-DSS 'license' in order to operate for their clients; some require a 'certificate'; some are literally clueless as to what it is, but their banks have mercilessly dumped the whole requirement on them.

Step 1: Who’s Asking?

First of all, take a deep breath. Here is a simple cheatsheet. Whoever is asking you to be PCI-DSS compliant, take note of who they are. Here are the usual suspects:

Bank – Very likely you are connecting to them for some sort of payment processing, as a payment facilitator, a TPA etc. Or you could be a service provider whose client just happens to be a bank, which brings us to:

Customer – Your customer deals with credit/debit cards, either directly or indirectly, and they require you to do PCI-DSS because you are servicing them or they have outsourced to you: BPO, data center, hosting, call center, or even network transit.

Internal – One of your internal managers has read up about PCI-DSS and decided your company would sound very cool if it were PCI-DSS certified. In this case, you may or may not need PCI. PCI is a contractual obligation for those dealing with credit/debit cards badged Visa, Amex, Mastercard, JCB or Diners/Discover. If you don't deal with these, and have no clients dealing with them, but your company just wants some standard to hold up, my suggestion would be to go for something like ISMS (ISO27001), as that is a better guideline than a contractual standard like PCI-DSS. If you still insist, well, you could still go through the SAQ, but a lot of it will be not applicable to you, since everything is non-CDE.

Those three are the main motivations behind PCI-DSS. Why it is important to determine who is asking becomes clear in the next step.

Step 2: Determine your Level

There are guidelines out there for which level you should be at. For a service provider, anything over 300,000 card transactions bumps you into level 1. For a merchant, it's anything over 6 million for level 1 and anything over 1 million for level 2. I can't count the number of times people mix up service provider levels and merchant levels. Even banks. I have had banks telling our payment gateway that they are level 4. There is no such thing: service providers are either level 1 or 2. Merchants have levels 1, 2, 3 and 4, with different volume thresholds.
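As a rough sketch of that logic (our illustration only; the card brands publish the actual thresholds, which vary by brand and channel, and your bank or customer can override them, as explained next):

```python
def pci_level(entity_type: str, annual_transactions: int) -> int:
    """Rough level guide. The card brands set the real thresholds
    and the acquirer/customer has the final say."""
    if entity_type == "service_provider":
        return 1 if annual_transactions > 300_000 else 2
    # merchant thresholds
    if annual_transactions > 6_000_000:
        return 1
    if annual_transactions > 1_000_000:
        return 2
    return 3  # or 4, depending on channel (e-commerce vs card-present)

print(pci_level("service_provider", 50_000))  # 2
print(pci_level("merchant", 7_500_000))       # 1
```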

Now, while the guidance is cool and all, in the end it's your bank or customer who determines your level. If your bank decides to deal with you only if you do a full certification and RoC with a QSA, then even if you are processing ZERO transactions, they have deemed you level 1. You can then decide to say OK, fine, or tell them you are taking your business elsewhere, in which case they may decide not to play hardball. I don't know. Same with your customer: they may decide you need to be assessed by a QSA. So it's best to determine this with whoever is asking.

The secret sauce is this: most of the time, your bank/customer won't have a clue what they want. They will just say, oh, be PCI compliant. In this case, approach them with some tact. Your mission, should you choose to accept it, is to avoid level 1 certification as much as you can if your volume is low. It's not justifiable. Look, if you want to be assessed by a QSA, by all means, but at least know that you have a choice if your volume is low and your bank/customer isn't fussy about it. Just tell them: "OK, I'll be PCI-DSS compliant, I will fill out the Self-Assessment Questionnaire (SAQ), and our management will sign it off and send it over to you. Is this OK?" If yes, then great, do your own self-assessment. You can save some money.

Step 3: Determine your Controls

This is probably the trickiest part of PCI-DSS. You see, being level 1 or level 2, self-assessed or third-party assessed, SAQ or RoC, makes NO difference to the controls you need to have in place. An example: level 1 compliance may require you to do ASV scans on 3 external IPs and penetration testing on 20 internal IPs. Guess what? Even if you are doing an internally self-signed SAQ, you are supposed to do the SAME THING. No difference. No "oh, since I am level 2, I will do ASV scans on 1 IP and maybe pentest 5 internal IPs instead of 20." In theory, all the controls are the same; the only difference is WHO assesses and attests to them.

Now, of course, realistically, this is not what happens. As I always illustrate, some companies consider a firewall to be a wall on fire, and they sign themselves off as PCI-DSS compliant. Hence the whole passing-the-buck, passing-the-risk thing about PCI that I won't go into here. But in theory at least, the same controls apply. So how do you determine what applies to your business? Based on your business flows, of course.

Above all, determine whether you are storing credit card information. If you are not, roughly 35% of PCI-DSS is not applicable (I am plucking that percentage out of nowhere, so don't quote me), but a big chunk isn't applicable. Second, determine whether you even interact with credit cards at all. Look into all your channels. It could be complex, like a call center, or simple, like network transit. In most cases, if you can determine that you have no access to the card PAN and don't store or process it, the controls applicable to you are minimal. You should STILL be PCI compliant, but only minimal controls apply.

Step 4: Determine your vendors and outsourcers

We had a client who cancelled an ongoing PCI-DSS engagement with us because they deemed themselves PCI-DSS compliant on the grounds that they were using 'PCI-DSS software'. I cannot count the number of times I have had to correct this: NO. Just using software that is PA-DSS compliant, or a service that is PCI compliant (like a cloud), DOES NOT make you PCI-DSS compliant. Will it help? Sure it will. But can you piggyback on someone else's compliance? No, you can't. Either you go through PCI yourself, or you stay non-compliant; don't say you are compliant when you are only using software that is. That's like saying you are certified to fly a plane because you are a passenger on a plane flown by a certified pilot. Or something similar.

Get your vendors on board for PCI if possible. If they refuse, you can still use them, but you now have to include their processes under YOUR PCI-DSS program. Why would you want to spend extra days getting a vendor compliant when there are OTHER vendors who already are?

So there you have it. When someone requests PCI compliance, first review your options. There is no ONE way to do PCI. Go with the path of least resistance: a self-signed SAQ, if your volume allows it. That saves a lot of time and money compared to bringing in a QSA.

If you have any queries on PCI-DSS, drop us a note at pcidss@pkfmalaysia.com and we will attend to it right away! Merry Christmas!

PCI-DSS For Software Developers

Of late, we have been receiving numerous calls from software developers asking how on earth they become PCI-DSS certified.

It's never easy to explain over the phone, especially with the misconceptions that PCI-DSS is a license, a software, a solution, some sort of exam, or some other thing. And how do we explain that, technically, as a software vendor they can't be PCI certified at all, but they can opt for PA-DSS or the new Secure Software Standard from PCI?

So the first thing to ask is (assuming this application/solution is handling credit card information):

a) Are you developing software only and selling that software to your customers?

b) Are you developing a solution that you host and manage, and to which you give clients access?

If it's a), PCI-DSS applies to the customer buying your software, not to you as a company. After all, you generally don't handle credit cards – your customer does. However, your software is likely in scope for their PCI-DSS assessment, so there could be instances where you need to participate in your client's assessment, or develop your software in a manner that is "PCI compliant". Developing PCI-compliant software doesn't make it certified, but it does help your clients get certified. One example is developing your solution with logging capability, able to log to a central location. Others: being able to integrate with AD; having PCI-compliant password policies (session timeouts, password expiry etc.); ensuring there is role-based authentication and authorisation; and ensuring encryption is properly done for data at rest and in transit. Doing these doesn't make the software immediately PCI certifiable, but it does give your client less of a headache.
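As a small illustration of the first example (central logging), here is a minimal sketch using Python's standard syslog handler; the host name is hypothetical, and a real deployment would point at the client's SIEM or log collector:

```python
import logging
from logging.handlers import SysLogHandler

# Hypothetical central collector - replace with the client's SIEM endpoint.
handler = SysLogHandler(address=("logs.example.internal", 514))
handler.setFormatter(logging.Formatter("payment-app: %(levelname)s %(message)s"))

logger = logging.getLogger("payment-app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Log the event, never the full PAN (mask/truncate before logging).
logger.info("card authorised ref=%s pan=%s", "TX-1042", "411111******1111")
```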

If it's b), then you are not just a software developer but a service provider. You are providing SaaS, which generally makes you responsible for the day-to-day security of card data on behalf of your clients. In that case, PCI-DSS can be applied to you, your solution and your processes.

As with PA-DSS, the new Secure Software Program applies to the following software:

Software products involved in or directly supporting or facilitating payment transactions that store, process, or transmit clear-text account data.

Software products developed by the vendor that are commercially available for sale to multiple organizations.

So CRM systems, call systems, in-house systems and customised systems are all not eligible for PA-DSS or the new program. This is in line with how it has always been, anyway.

So that leaves us back to square one. What happens if you are not eligible for PA-DSS or Secure Software program and you are just a software developer and NOT a service provider, but your client is insisting on you being PCI-DSS certified?

Well, hopefully you can explain it to them, or point them to this article. Another option is to state that you have developed your software to be compliant with PCI requirements. The following list shows what it would take to address PCI compliance (not comprehensive):

1. Requirement 2 – Ensure no clear text for administrative access.

2. Requirement 3 – Ensure strong encryption where the application stores card data.

3. Requirement 4 – Ensure the application encrypts card data transmitted over public networks.

4. Requirement 6 – Maintain a secure software development process: secure code reviews, removal of test data before rolling to production, keeping the application patched, and prompt fixes when bugs are discovered.

5. Requirement 8 – Ensure the application can support PCI-DSS password requirements, with passwords encrypted at rest and in transmission.

6. Requirement 10 – Ensure the application is capable of sending logs to a SIEM, and that application penetration testing is conducted with the testing methodology documented.

Requirements affecting software – sample evidences:

– For all system components in scope (servers, network devices, applications, databases etc.) and POS devices, provide evidence of strong cryptography being implemented (SSH, TLS 1.2 or later, RDP over TLS etc.).
– For all filesystems, databases and any backup media, provide details of the method (encryption, hashing, truncation, tokenization) used to protect covered information in storage, and evidence (screenshots or settings) showing covered information is protected.
– Provide evidence of encryption being used for transmission of in-scope data over any open or public communication channel (i.e. Internet, wireless network, GSM, GPRS, VSAT technology etc.). Encryption must conform to strong industry standards.
– For the selected sample, provide evidence of current patch levels and of patches being deployed in a timely manner.
– Provide the secure software development process document, in accordance with industry best practices.
– Provide a recent secure code review report for an application that stores, processes or transmits covered information.
– Provide a document outlining the process for generating test data for lower (test/development) environments, and the process for removing test data and test accounts before moving a system to the higher (production) environment.
– Provide 4 sample change requests (2 for software modification, 2 for security patch implementation) from the last 6 months.
– For secure code training, provide the training material and an attendee list showing that all developers are covered.
– Provide evidence of logical access account and password features, including account lockout policy, account lockout duration, session timeout policy, password length, password complexity, password history and password expiry.
– Provide evidence that passwords (for platform and/or consumer applications) are encrypted during transmission and storage.
– Provide the audit log policy settings.
– Provide actual event logs for each of the platforms identified in the sample.
– Provide the documented methodology used for penetration testing.
– Provide the internal penetration test report.

You will get stuck if your clients want to see a PCI-DSS certificate, which obviously you won't have. In this case, the only way forward is to explain that it isn't possible for you to be PCI certified in that sense. If you want, you could engage a third-party auditor or even a QSA to assess the application against PCI requirements. You won't get a PCI certificate, but at least you will have a third-party attestation or report, which hopefully should be enough.

Another option is to just get hold of us at pcidss@pkfmalaysia.com, and we can perhaps provide a bit more persuasion to get your client to accept your application for PCI-DSS!

Alienvault USM Anywhere Updates

We just received some very good updates from the Alienvault channel team (or the AT&T Cybersecurity team, as they call themselves now). Our excitement can be quickly summarised in two short phrases:

a) Google Cloud Support – Heck Yeah.

b) Custom Plugin Development – Heck Yeah!

Of course, there were tons of other updates as well: scheduled reports, a unified UI, more AlienApps support, Cloudflare integration (which is very interesting, as we can tie actions to it, effectively making Alienvault function more like an active prevention system than its traditional detective role), new search capability incorporating wildcard searches, and advanced asset importing through CSVs as opposed to rudely scanning our clients' networks.

But the two main courses were native Google support and custom plugin development.

Native Google support has been a pain point for years. We have customers moving into (or already in) GCP, where we have constantly battled to make Alienvault perform as seamlessly as it does on AWS – but it couldn't. We had to rely on EDR (endpoint detection and response), for instance, where the agent grabs logs à la HIDS and sends them directly to the server. Areas where a native sensor would function – creating an internal VPC filter mechanism, or doing vulnerability scanning without generating too much inter-VPC traffic – couldn't be covered by the EDR, so it was very much a band-aid. We knew our patched-up GCP solution wasn't functioning as well as its handsomer, more dashing brother on AWS. In other words, it kinda sucked.

Custom applications on GCP presented their own set of issues: they were difficult to integrate. Even with Stackdriver, or with us logging to BigQuery, sending these logs to Alienvault presented a lot of problems. When we could configure sending to BigQuery, we couldn't filter properly, causing our customer's 1TB-per-month quota to be annihilated within days. Getting Pub/Sub to work with Alienvault requires APIs to be written, and on top of that Alienvault has to write the custom plugins – all of which adds to professional services costs and, more importantly, resource and time costs on the project.

So what happens now? In the next general availability release of USM-A, GCP will be supported. The information is sparse, so more updates will be forthcoming, but the GCP sensor will be able to:

a) Perform threat detection (like all other sensors) and asset discovery, and provide alarms, events, widgets, correlation etc. Basically, it will be native to GCP, doing what it already does for AWS, Azure and on-prem Hyper-V and VMware.

b) Detect VPC flow logs

c) Monitor cloud services through Stackdriver

The last bit is very important. Stackdriver is, in essence, GCP's answer to AWS's CloudWatch and CloudTrail. It monitors and manages services, containers, applications and infrastructure for the cloud. If you run a cloud service or are developing cloud applications, you should be able to support Stackdriver logging. On GCP Compute, the logging agent is used to stream logs from VM instances. Stackdriver can even provide traditional network flow logs (VPC flow logs), which MSPs can use to monitor network health etc. In other words, GCP's ugly-little-brother solution is going to get buffed. We're going to look a lot better now.
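For a sense of what application-side Stackdriver logging looks like, here is a minimal sketch using the google-cloud-logging Python client (assuming default GCP credentials are configured; the log name and payload are our own hypothetical example):

```python
# Minimal sketch: write a structured application event to Stackdriver.
from google.cloud import logging

client = logging.Client()
logger = client.logger("payment-app-events")  # hypothetical log name

logger.log_struct({
    "event": "login_failure",
    "user": "bob",
    "source_ip": "10.0.0.5",
})
```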

The roadmap is bright: automatic response actions against a cloud service when a security event occurs, putting Alienvault into a more proactive stance than the detective one it traditionally takes. This is similar to what the Cloudflare integration achieves. More and more GCP services will be supported. There is also a topic on "User Entity Behaviour Analytics", which is basically matching behaviour to normal baselines and telling us that Bob is having coffee at 10 am instead of his usual 8 am, which means he was running late to work, which means he got stuck in traffic, which means he left the house late, which means he woke up late, which means he slept late last night, which means he went out for a drink and got smashed, which could possibly mean he is having an affair with a stripper named Daisy. Maybe.

So, pretty exciting times, Aliens!

The other item on the plate wasn't on the normal discussion agenda but was brought up by us on the international call. We bombarded the screen with around 10-15 queries and at least 4 made it to the table. One of them was: when the hell are we going to get to write our own plugins?

No offence to Alienvault, who currently write our clients' custom plugins for USM-A, but 3-4 weeks isn't really going to cut it. Furthermore, sometimes we don't even get what we want from the custom plugins. We don't blame Alienvault. The application is ours (as in our client's). We are the ones who know the events and the priorities. We know what we want to see. We just can't develop the plugins the way we do now for our USM Appliance clients.

Imagine the win-win situation here. We write plugins for clients (assuming it's similar to the Appliance): within 2-3 days we are done. Testing, another 1-2 days. Instead of setting the project timeline back 3-4 weeks, we are 1 week in. That's a HUGE impact for compliance clients who are often chasing a deadline. 3 weeks squashed to 1? Hell, yeah! The win is also for Alienvault: they don't have to deal with nagging customers or smart-ass channel partners like us banging on them for not updating us on our new application plugin. Imagine the parties engineers could attend instead of writing regex for a company operating in Elbonia. Imagine the time they could save and spend socialising with the rest of the world, or having the chance to meet people like Daisy.

It’s a whole new world, really.

So, Alienvault, please, get those updates to us as soon as you can and the world will be a better place for it.

If you need any information on Alienvault, or general help with your SIEM or PCI-DSS compliance, drop us an email at alienvault@pkfmalaysia.com and we will attend to it immediately!

