Category Archives: Security

Independence versus conflict of interest in security reviews

I was giving a lecture to some soon-to-graduate students today, and at the end of the class, one of them came up and said he wasn’t allowed to work with auditors because “it was a conflict of interest”.

No, it’s not. And here’s why.

Conflict of interest

It’s only a conflict of interest if a developer who wrote the code then reviews that code and declares it free of bugs (or indeed, is honest and declares it full of bugs). Either way, it’s self-review, and self-review is a conflict of interest.

The only way an auditor ends up in a conflict of interest is by reviewing code they wrote themselves.

An interesting corner case that requires further thought is a rare one indeed: I wrote or participated in a bunch of OWASP standards, some of which became parts of other standards and certifications, such as PCI DSS section 6.5. Am I self-reviewing if I review code or applications against those standards?

I think not, because I am not changing the standard to suit the review (which would be an independence issue), and I’m not reviewing the standard itself (which would be self-review, and thus a conflict of interest).

Despite this, wherever I think an independence-in-appearance issue might arise, I always disclose that I created or had a hand in these various standards, especially if I recommend that a client use a standard I helped write.

Independence

Independence is not just an empty word, it’s a promise to our clients that we are not tied to any particular vendor or methodology. We can’t be a trusted advisor if we’re a shill for a third party or are willing to compromise the integrity of review at a customer’s request.

The usual definition of independence is two pronged:

  • Independence in appearance. Others may perceive you as compromised because of affiliations or financial interests, such as a previous job. For example, if you’ve always worked for incident response tool vendors and then move to a security consultancy, you might feel you are independent, but others might perceive you as having a pro-IR-toolset point of view.
  • Independence in actuality. If you own shares or get a financial reward for selling a product, any recommendation you make about that product is suspect. This is the reason I want OWASP to remain vendor neutral.

If either of these prongs is violated, you are not independent. But just as humans are complex, there are many aspects to independence, and I’ve not learnt them all. I know most of the issues, though; been there, got the t-shirt.

If you are an independent reviewer, there are a few areas of independence that I refuse to relinquish (and I hope you do too!):

  • Scoping questions. I don’t mind customers setting a scope, but I will often argue for the correct scope before we start. Too narrow a scope can railroad a review into giving an answer the client wants, rather than a proper independent review.
  • Review performance independence. If the client tries to make the review so short that I can’t complete my normal review program effectively, stops me from getting the information I need, or tries to frame negative observations in a lesser or “meh” context, I will resist. I want the review to be accurate, but not at the expense of my methodology or my usual standards.
  • Risk ratings and findings. In the last few years, I’ve had to resist folks trying to force real findings to become unrated opportunities for improvement (by definition, all of my findings are such), trying to argue risk ratings down, or getting words changed to suit a desired outcome. Again, I want the context to be accurate, and I will listen to your input and arguments, but only I will write the report and set the risk ratings. Otherwise, why bother hiring an external reviewer? You could write your own report and set the ratings to suit. It doesn’t work that way.

Does independence always need to be achieved?

My personal view on this has changed over the years. I used to toe the strict party line on independence.

However, sometimes as a reviewer, you can be part of the solution. I personally believe that a good working relationship between the reviewer and the folks who produced the application or code is a good thing. Both parties can learn from working closely together, work out the best approach to resolving issues, and test it rapidly.

As long as the self-review aspect is properly managed, I believe this to be a good path forward. I don’t see it as being any different from a traditional review that recommends fix X and then verifies that X has been put in place. However, if the auditor is editing the code, a line has been crossed.

Work with the business to document agreed rules of engagement early, prior to work commencing. Both parties will get a lot more mileage from a closely cooperative review than from the “pay someone to sit in the corner for two weeks” engagement that is the standard fare of our industry.

Conclusion

Working together has obvious independence issues, particularly from an appearance point of view. So, to the excellent question from a student today: working closely with the auditors is not a conflict of interest, but it can be an independence issue if not properly managed by both the business and the reviewer.

Some people don’t get the hint

85.25.242.250 - - [28/Sep/2014:09:20:12 -0400] "GET / HTTP/1.1" 301 281 "-" "() { foo;};echo;/bin/cat /etc/passwd"
85.25.242.250 - - [28/Sep/2014:22:30:48 -0400] "GET / HTTP/1.1" 500 178 "-" "() { foo;};echo;/bin/cat /etc/passwd"
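Those User-Agent strings are the classic Shellshock (CVE-2014-6271) probe: pre-patch bash imported the crafted environment variable as a function definition and kept executing whatever followed the closing brace. The widely published, harmless check for your own systems looks like this:

```shell
# On a vulnerable bash, the trailing command runs while the function
# definition is imported from the environment; a patched bash ignores
# the bogus definition and prints only the echoed test string.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

If “vulnerable” appears before “this is a test”, patch bash immediately.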

Dear very stupid attacker, you have the opsec of a small kitten who is surprised by his own tail. Reported.

So it’s finally happened

Passwords. Pah.

I’ve run my blog on various virtual hosts and VPSs since 1998, and the measures I put in place to protect this site and the others hosted here proved insufficient to protect against weak passwords.

Let’s just say that if you are a script kiddy and know all about press.php, tmpfiles.php and others, you have terrible operational security. There will be consequences. That is not a threat.

AppSec EU – DevGuide all day working party! Be a part of it!

Be a part of the upcoming AppSec EU in Cambridge!

Developer Guide Hackathon

* UPDATE! Eoin can’t be in two places at once, so our hack-a-thon has moved to Tuesday 24 June. Same room, same bat channel. *

Eoin Keary and I will be running an all-day working party on the Developer Guide on June 24 from 9 AM to 6 PM GMT. The day starts with Eoin giving a DevGuide status update talk, and then we get down to editing and writing.

I will be working remotely from the end of Eoin’s talk until 1 pm UK time, so I encourage everyone who has an interest in the DevGuide to either attend the workshop in person or consider helping out remotely. Sign up here!

https://www.owasp.org/index.php/Projects_Summit_2014/Working_Sessions/004

My goal is to get the entire text of the OWASP Developer Guide 2.0 ported to the new format at GitHub, and hopefully finish 4 chapters. To participate, you will need a browser and a login to GitHub. You will also probably want a login to Google+ so you can be a part of an “on air” all day hangout so you can ask me anything about the DevGuide, or just chill with other remote participants.

Stop. Just stop.

In the last few weeks, a prominent researcher, Dragos Ruiu (@dragosr), has stuck his neck out describing some interesting issues with a bunch of his computers. If his indicators of compromise are to be believed (and there is the first problem), we have a significant issue. The problem is that the chorus of “It’s not real”, “It’s impossible”, “It’s fake” is becoming overwhelming without sufficient evidence one way or the other. Why are so many folks in our community ready to jump on the negative bandwagon, even if they can’t prove it or simply don’t have enough evidence to say either way?

My issue is not “is it true” or “I think it’s true” or “I think it’s false”, it’s that so many info sec “professionals” are basically claiming:

  1. Because I personally can’t verify this issue is true, the issue must be false. QED.

This fails both Logic 101 (class 1) and the scientific method.

This is not a technical issue, it’s a people issue.

We must support all of our researchers, particularly the wrong ones. This is entirely obvious. If we eat our young and most venerable in front of the world’s media, we will be a laughing stock. Certain “researchers” are used by their journalist “friends” to say very publicly, “I think X is a fool for thinking that his computers are suspect”. This is utterly wrong and foolhardy, and it serves the journalists’ click-bait articles and their news cycle, not us.

Not everybody is a viable candidate for having the sample. In my view, the only folks who should have a sample of this thing are those with sufficient operational security and the budget to brick and then utterly destroy anywhere from two to twenty computers in a safe environment. That doesn’t describe many labs. And even then, you should have a good reason for having it. I consider the sample described to need the electronic equivalent of a PC4 bio lab. Most labs are not PC4, and I bet most infosec computing labs are nowhere near capable of hosting this sample.

Not one of us has all of the skills required to look at this thing. The only way this can be made to work is by working together, pulling together E.Eng folks with the sort of expensive equipment only a well funded organisation or a university lab might muster, microcontroller freaks, firmware folks, CPU microcode folks, USB folks, file system folks, assembly language folks, audio folks, forensic folks, malware folks, folks who are good at certain types of Windows font malware, and so on. There is not a single human being alive who can do it all. It’s no surprise to me that Dragos has struggled to get a reproducible but sterile sample out. I bet most of us would have failed, too.

We must respect and use the scientific method. The scientific method is well tested and true. We must rule out confirmation bias; we must rule out “well, a $0.10 audio chip will do that, as most of them are paired with $0.05 speakers and most of the time it doesn’t matter”. I actually don’t care if this thing is real or not. If it’s real, there will be patches. If it’s not real, it doesn’t matter. I do care about the scientific method, and its lack of application in our research community. We aren’t researchers for the most part, and I find it frustrating that most of us don’t seem to understand the very basic steps of careful lab work and repeating important experiments.

We must allow sufficient time for the researchers to collaborate, reach a positive or negative result, analyse their findings, and report back to us. Again, I come back to our journalist “friends”, who can’t live without conflict. The 24-hour news cycle is their problem, not our problem. We have Twitter, Google Plus, and conferences. Have some respect and wait a little before running to the nearest journalist “friend” and bleating “It’s an obvious fake”.

We owe a debt to folks like Dragos who have odd results, and who are brave enough to report them publicly. Odd results are what pushes us forward as an industry. Cryptoanalysis wouldn’t exist without them. If we make it hard or impossible for respected folks like Dragos to report odd results, imagine what will happen the next time? What happens if it’s someone without much of a reputation? We need a framework to collaborate, not to tear each other down.

Our industry’s story is not the story of the little boy who cried wolf. We are (or should be) more mature than a child’s fable. Have some respect for our profession, and work with researchers rather than sullying their names (and yours and mine) by announcing, before you have proof, that something’s not quite right. If anything, we must celebrate negative results every bit as much as positive results, because I don’t know about you, but I work a lot harder when I know an app is hardened. I try every trick in the book, including the bleeding-edge stuff, as a virtual masterclass in our field. I bet Dragos has given this the sort of inspection that only the most ardent forensic researcher could have done. If he hasn’t gotten to the bottom of it, either it’s sufficiently advanced to be indistinguishable from magic, or he needs help so we can understand what is actually there. I bet few of us could have gotten as far as Dragos has.

To me, we must step back and work together as an industry. Ask Dragos: “What do you need?” “How can we help?” If the answer is “Give me time”, then let’s step back and give him time. If it’s a USB circuit analyser, or a microcontroller dev system plus some mad soldering skills, then help him; don’t tear him down. Dragos has shown he has sufficient operational security to research this for another 12-24 months. We don’t need to know now, now, or now. We gain nothing by trashing his name.

Just stop. Stop trashing our industry, and let’s work together.

So your Twitter has been hacked. Now what?

So I’m getting a lot of Twitter spam with links to install bad crap on my computer.

More than just occasionally, these DMs are sent by folks in the infosec field, who should know better than to click unknown links without taking precautions.

So what do you need to do?

Simple. Follow these basic NIST approved rules:

Contain – find out how many of your computers are infected. If you don’t know how to do this, assume they’re all suspect, and ask your family’s tech support. I know you all know the geek in the family, as it’s often me.

Eradicate – Clean up the mess. Sometimes you can just use anti-virus to clean it up; other times you need to take drastic action, such as a complete re-install. As I run a Mac household with a single Windows box (the wife’s), and I have very good operational security habits, I’m moderately safe. If you’re running Windows, it’s time for Windows 8, or if you don’t like Windows 8, Windows 7 with IE 10.

Recover – If you need to re-install, you had backups, right? Restore them. Get everything back the way you like it.

  • Use the latest operating system. Windows XP has six months left on the clock. Upgrade to Windows 7 or 8. MacOS X 10.8 is a good upgrade if you’re still stuck on an older version. There is no reason not to upgrade. On Linux or your favorite alternative OS, there is zero reason not to use the latest LTS or latest released version. I make sure I live within my home directory, and have a list of packages I like to install on every new Linux install, so I’m productive in Linux about 20-30 minutes after installation.
  • Patch all your systems with all of the latest patches. If you’re not good with this, enable automatic updates so it just happens for you automatically. You may need to reboot occasionally, so do so if your computer is asking you to do that. On Windows 8, it only takes 20 or so seconds. On MacOS X, it even remembers which apps and documents were open.
  • Use a safer browser. Use IE 10. Use the latest Firefox. Use the latest Chrome. Don’t use older browsers or you will get owned.
  • On a trusted device, preferably one that has been completely re-installed, it’s time to change ALL of your passwords as they are ALL compromised unless proven otherwise. I use a password manager. I like KeePass X, 1Password, and a few others. None of my accounts shares a password with any other account, and they’re all ridiculously strong. 
  • Protect your password manager. Make sure you have practiced backing up and restoring your password file. I’ve got it sprinkled around in a few trusted places so that I can recover my life if something bad was to happen to any single or even a few devices.
  • Backups. I know, right? It’s always fun until all your data and life is gone. Backup, backup, backup! There are great tools out there – Time Capsule for Mac, Rebit for Windows, rsync for Unix types.

Learn and improve. It’s important to make sure that your Twitter feed remains your Twitter feed, and the same goes for all of your other accounts.

I never use real data for security questions and answers. My mother’s maiden name is a public record, and my birth date, which like everyone else’s comes around once a year, could be worked out if you met me at the right time of year. These are shared-knowledge questions, and an attacker can use them to bypass Twitter’s, Google’s, and Facebook’s security settings. So I either make the answer up or just insert a random value. For something low-security, like a newspaper login, I don’t track these random values, as my password manager keeps track of the actual password. For high-value sites, I will record the random value for “What’s your favorite sports team?”. It’s always fun reading out 25 characters of gibberish to a call centre in a developing country.
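If you want to generate that sort of gibberish yourself, here is a minimal sketch using standard Unix tools; the 25-character length matches the anecdote above, and the alphabet is an arbitrary choice of mine, not a recommendation from any standard:

```shell
# Draw random bytes, keep only alphanumerics, stop at 25 characters.
# LC_ALL=C keeps tr operating on raw bytes regardless of locale.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 25; echo
```

Store the result in your password manager next to the site’s entry, exactly as you would a password.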

Last word

I might make a detailed assessment of the DM spam I’m getting, but honestly, it’s so amateur hour I can’t really be bothered. There is no “advanced” persistent threat here; these guys don’t need to try harder when folks don’t undertake even the most basic self-protection.

Lastly – “don’t click shit”. If you don’t know the person or the URL seems hinky, don’t click it.

That goes double for infosec pros. You know better, or you will just after you click the link in Incognito / private mode. Instead, why not fire up that vulnerable but isolated XP throwaway VM with a MITM proxy and do it properly, if you really insist on getting pwned. If you don’t have time for that, don’t click shit.

Infosec apostasy

I’ve been mulling this one over for a while. And honestly, after a post to an internal global mail list at work putting forward my ideas, I’ve come to realise there are at least two camps in information security:

  • Those who aim via various usual suspects to protect things
  • Those who aim via various often controversial and novel means to protect people 

Think about this for one second. If your compliance program is entirely around protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.

I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry and pets/cattle (I rather like this one), would land with the heavy thud akin to 95 bullet points nailed to the church door.

So I started thinking: why do people still promulgate stupid policies that have no bearing on evidence? Why do people still believe in policies, standards, and spending squillions on edge and endpoint protection, when it is trivial to break?

Faith.

Faith that our dads’ and granddads’ received wisdom is appropriate for today’s conditions.

“Si Dieu n’existait pas, il faudrait l’inventer” (Voltaire)

(Often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes)

I think we’re seeing the beginning of an infosec religion, where it is not acceptable to speak up against the unthinking enforcement of hand-me-down policies like 30-day password resets or absurd password complexity, and where it is impossible to propose reasonable alternatives when you attempt to rule out imbecilic practices like basic authentication headers.

We cannot expect everyone using IT to do it right, or to have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and my use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fear that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn, so I could take my time resetting it, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally got a payoff for being so paranoid.

But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil maid MITM or keyloggers. Let’s be honest – no one but the ultra paranoid do this, and they don’t get anything done. But infosec purists expect everyone to have a bleached white pristine machine to do things – or else the user is at fault for not maintaining their systems.

We have to stop protecting things and start protecting humans, by creating human-friendly, resilient processes with appropriate checks and balances that do not break as soon as a key logger, a network sniffer, or, more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (in plain sight as well as easy to follow), and equitable, and the user has to be in charge of their identity, their linked personas, and ultimately their preferred level of privacy.

I am nailing my colours to the mast: we need to make information technology work for humans. It is our creature, to do with as we want. This human says “no”.

Marketing – first against the wall when the revolution comes

A colleague of mine just received one of those awful marketing calls where the vendor rings *you* and demands your personal information “for privacy reasons” before continuing with the phone call.

*Click*

As a consumer, you must hang up to avoid being scammed. End of story. No exceptions.

Even if the business has a relationship with the consumer, asking them to prove who they are is wildly inappropriate. Under no circumstances should a customer be required to provide personal information to an unknown caller. It must be the other way around – the firm must provide positive proof of who they are! And by calling the client, the firm already knows who the client is, so there’s no reason for the client to prove who they are.

As a business, you are directly hurting your bottom line and snatching defeat from the jaws of victory by asking your customers to prove their identity to you.

This is about the dumbest marketing mistake ever: many customers will (correctly, in my view) assume the campaign is a scam and repeatedly hang up, lowering goal completion rates and driving up the cost of sales. This dumb move can cost a company millions in opportunity costs in the form of:

  • wasted marketing (hundreds of dropped customer contacts for every “successful” completed sale),
  • increased fraud against consumers, and ultimately the business, when customers reject fraudulent transactions, and
  • the loss of thousands, if not hundreds of thousands, of customers and their ongoing and future revenue, whether because they lose trust in the firm, or because the firm’s lack of fraud prevention lets scammers easily harvest PII from the customer base and use it to defraud them.

Customers hate moving businesses once they have settled on a supplier of choice, but if you keep on hassling them the wrong way, they do up and leave.

So if any of you are in marketing, or are facing pressure from the business to start your call script by asking your customers for personally identifying information, understand this: you are training your customers to become victims of phishing attacks. That will cost you millions of dollars and far more customers than you could ever lose by doing the right thing.

It’s well past time to change this very, very, very bad habit.

Responsible disclosure failed – Apple ID password reset flaw

Responsible disclosure is a double-edged sword. The Faustian bargain is that I keep my mouth shut to give you time to fix the flaws, not for you to ignore me. I would humbly suggest that it is very relevant to your interests when a top security researcher submits a business logic flaw that is trivially exploitable with just iTunes or a browser, requiring no actual hacking skills.

If anyone knows anyone at Apple, please re-share or forward this post, and ask them to review my rather detailed description of a rather simple method of exploiting the Apple ID password reset system, which I submitted over six months ago and which has so far received zero response beyond an automated reply. The report tracking number is #221529179, submitted August 12, 2012.

My issue should be fixed along with the other issues before they bring password reset back online; it must not return with my flaw intact.

Running Fortify SCA 3.80 on Ubuntu 12.04 64 bit Linux

I have a bit of a code review job at the moment. It’s a large code base, and you all know what that means: LOTS OF RAM! So I got myself a 16 GB upgrade, then found that the VMware Fusion GUI would only let me allocate 8 GB to a VM. So here’s how to scan a big chunk of code with minimal pain.

The default VM disk size for an Easy Install Ubuntu is 20 GB, with 8 GB of swap. WTF. Don’t use Easy Install, as you’ll run out of disk space scanning a moderately sized application. I expanded mine to 80 GB after everything was installed, but if you are smart (unlike me), do it when you first build the system.

To give a VM more than 8 GB in VMware Fusion: allocate 8192 MB (the GUI maximum) while the VM is shut down, then open the VM’s package contents by right-clicking it (I’m on a Mac; if you rename a folder foobar.vmwarevm, it becomes a package automagically). Find the VMX file and open it carefully in a decent editor (vi, TextWrangler, or TextMate); there is magic here, and if you edit it wrong, your VM will not boot. Change memsize = "8192" to, say, memsize = "12384" and save it out. I wouldn’t go too close to your total memory size, as you’ll start paging on the Mac, and that’s just pain. Boot the VM. Confirm you have enough memory!
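The memsize edit itself can be done with sed; here’s a sketch against a throwaway file (the real .vmx lives inside the .vmwarevm package, and note that the stock BSD sed on macOS wants -i '' rather than GNU sed’s -i):

```shell
# Demonstrate the memsize edit on a scratch file; apply the same edit
# to your real .vmx only while the VM is shut down, and keep a backup.
printf 'memsize = "8192"\n' > demo.vmx
cp demo.vmx demo.vmx.bak                        # never skip the backup
sed -i 's/^memsize = "8192"$/memsize = "12384"/' demo.vmx
cat demo.vmx
```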

First off, do not even try to do it within Audit Workbench. It will just fail.

Secondly, it seems that HP do not test the latest version of SCA on OpenSuse 12.2, which is a shame, as I really liked OpenSuse. There’s no way to fix up the dependencies without using an unsafe (older) version of Java, so I gave up on it.

Despite not being listed as a qualified platform (CentOS, Red Hat, and OpenSuse all are), Ubuntu had a graphical installer compared to OpenSuse’s text-only install. Alrighty, then.

Install the latest Oracle Java 1.7, using the 64-bit JDK for Linux. I installed it to /usr/local/java/. Weep, for you now have a massive security hole installed.

Force Ubuntu to use that JVM with update-alternatives:

sudo update-alternatives --install "/usr/bin/java" "java" "/usr/local/java/jdk1.7.0_15/bin/java" 1 
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/local/java/jdk1.7.0_15/bin/javac" 1 
sudo update-alternatives --set java /usr/local/java/jdk1.7.0_15/bin/java 
sudo update-alternatives --set javac /usr/local/java/jdk1.7.0_15/bin/javac

I created the following in /etc/profile.d/java.sh

#!/bin/sh
JAVA_HOME=/usr/local/java/jdk1.7.0_15
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH

Note that I did not tell Ubuntu about Java Web Start. If you want to keep your Ubuntu box yours, you will not let JWS anywhere near a browser. If you did register it, it’s best to delete javaws completely from your system to avoid any potential drive-by download trojans.

Install SCA as per HP’s instructions. 

Now you need to go hacking, as HP for some reason still insist that 32-bit JVMs are somehow adequate. Not surprisingly, Audit Workbench pops up an exception as soon as you start it if you take no further action. So let’s fix that up.

I went and hacked JAVA_CMD in /opt/HP_Fortify/HP_Fortify_SCA_and_Apps_3.80/Core/private-bin/awb/productlaunch to be the following instead of the JRE provided by HP:

JAVA_CMD="/usr/local/java/jdk1.7.0_15/bin/java"

After that, Audit Workbench will run.

Now, let’s work on ScanWizard. ScanWizard is really the only way to produce repeatable scans that don’t run out of memory. So run ScanWizard. It’ll create a shell script for you to edit. You need to make the following changes:

MEMORY="-Xmx6000M -Xms1200M -Xss96M "

LAUNCHERSWITCHES="-64 "

Note the trailing space after -64; without it, the scan fails.

Then there are bugs in the generated scan script that mean it will never work for a 64-bit scan. It’s almost as if HP never tested 64-bit scans on large code bases (where more than 4 GB is needed to complete a scan). I struggle to believe that, especially as their on-demand service is almost certainly using something very akin to this setup.

Change this bit of the scan shell script:

FILENUMBER=`$SOURCEANALYZER -b $BUILDID -show-files | wc -l`

if [ ! -f $OLDFILENUMBER ]; then
        echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
        echo $FILENUMBER > $OLDFILENUMBER
else
        OLDFILENO=`cat $OLDFILENUMBER`
        DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
        DIFF=`expr $DIFF /  100`

        MAX=`expr $OLDFILENO + $DIFF`
        MIN=`expr $OLDFILENO - $DIFF`

        if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
        if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi

        if [ $SHOWWARNING == true ] ; then

To this:

FILENUMBER=`$SOURCEANALYZER $MEMORY $LAUNCHERSWITCHES -b $BUILDID -show-files | wc -l`

if [ ! -f $OLDFILENUMBER ]; then
        echo It appears to be the first time running this script, setting $OLDFILENUMBER to $FILENUMBER
        echo $FILENUMBER > $OLDFILENUMBER
else
        OLDFILENO=`cat $OLDFILENUMBER`
        DIFF=`expr $OLDFILENO "*" $FILENOMAXDIFF`
        DIFF=`expr $DIFF /  100`

        MAX=`expr $OLDFILENO + $DIFF`
        MIN=`expr $OLDFILENO - $DIFF`

        SHOWWARNING=false

        if [ $FILENUMBER -lt $MIN ] ; then SHOWWARNING=true; fi
        if [ $FILENUMBER -gt $MAX ] ; then SHOWWARNING=true; fi

        if [ $SHOWWARNING = true ] ; then

Yes, there’s an uninitialized variable AND a syntax error in a few lines of code. Quality. Two equals signs (==) inside [ ] are not valid POSIX sh/dash syntax, so obviously that was well tested before release! Change it to a single = and you should be golden.
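You can see the portability issue directly: bash’s [ builtin tolerates ==, but dash (which Ubuntu symlinks to /bin/sh) rejects it, so a #!/bin/sh script that ran fine elsewhere dies on Ubuntu. A minimal sketch of the corrected comparison, with variable names mirroring the generated script:

```shell
# Portable sh: initialise the flag first, then compare with a single '='.
SHOWWARNING=false
FILENUMBER=10
MIN=20
if [ "$FILENUMBER" -lt "$MIN" ]; then SHOWWARNING=true; fi
if [ "$SHOWWARNING" = true ]; then echo "warning shown"; fi
```

This prints “warning shown” under dash, bash, and any POSIX shell, since 10 is below the minimum.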

After that, just keep an eye out for out-of-memory errors and any “Java command not found” messages. Opening a large FPR file may require bumping up Audit Workbench’s memory; I had to with a 141 MB FPR file. YMMV.

You’re welcome.