The recent WSJ article on banks releasing mobile banking software that stores user names, passwords and bank accounts unencrypted on phones has opened up a sore topic for me.
Apparently we have very, very large corporations chock-full of highly paid analysts, architects, developers, and QA staff who believe that it is perfectly OK to store banking credentials in plain text on a mobile device a decade into the 21st century.
Something is broke.
- the bank's analysts, architects, developers and QA staff are unaware of the state of application security in the 21st century. They have no idea that a fraction of the world's population enjoys compromising other people's systems and using what they find to steal people's money. In other words - they are unconscious of the environment into which they are deploying their application. They are sufficiently unconscious of their environment that they didn't know there might be some sort of best practice for storing banking credentials. That lack of awareness has made them incompetent to build mobile banking applications. Because they don't know what they don't know, they go ahead and build banking applications anyway.
- the bank's analysts, architects, developers and QA staff are conscious of their environment, but are not capable of designing an application that can safely be deployed in that environment. In other words - they are conscious, but incompetent. They know that they need to deploy secure applications, but it didn't occur to them that it has been a couple of decades since storing credentials in plain text was an acceptable practice. They missed the fact that operating systems and databases stopped doing that toward the end of the last century, and a quick Google of 'how to secure banking applications' didn't turn up anything interesting on the first page.
- the bank's analysts, architects, developers and QA staff are conscious and competent, but their highly paid managers and directors told them that building an application to withstand today's online application environment was out of scope. In that case their management is either unconscious of the environment or aware of it and choosing to ignore it - due to incompetence, perhaps.
- the bank's analysts, architects, developers and QA staff and their managers are conscious of their environment and competent enough to build software for that environment, but some external force caused them to ignore basic decade-old security practices. Perhaps deadlines, market or financial pressures forced them to release the product with known defects. If so, I hope they kept the e-mail trail.
There might be other possibilities, but writing about them wouldn't be as much fun.
I'll qualify all this by saying that I've never managed a staff of hundreds of developers, nor have I ever written a banking application. The largest application I've written was in the tens of thousands of lines of code. That puts me somewhere near the unconscious-incompetent end of the spectrum.
The good news is that:
"The flaw has prompted the company to consider changes in its development process..." says Wells Fargo CIO Mr Tumas. (according to the WSJ.)
I wonder what they'll consider changing: the unconsciousness? The incompetence? The external factors?
I tend to have some sympathy for vendors who get hit by complex stack smashing attacks that exploit their products in obscure or convoluted ways, and I might even have sympathy for vendors saddled with millions of lines of old, ugly code that predates the current threats. Those are hard problems to solve. Online banking applications for iOS and Android were created from the ground up long after password encryption became the norm. No excuses this time.
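The norm I'm talking about isn't exotic. Here's a minimal sketch, in Python for brevity (the function names and parameters are mine, not anything from any bank's code), of how credential storage has worked for decades: keep a random salt and a slow one-way hash, never the password itself.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Derive a salted, iterated hash. Store (salt, iterations, digest) -
    the plain-text password is never written anywhere."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, iters, stored)
assert not verify_password("wrong guess", salt, iters, stored)
```

That's the whole idea, in standard-library Python. A mobile app has the additional option of not storing credentials at all, or handing them to the platform's encrypted keystore - either of which beats a plain-text file.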
When I think about this, I can't help but believe that we are a long, long way from having an application development culture that values and understands security sufficiently that we can assume software is relatively secure.