We at skankworks.net were recently hired as contractors to deploy a web application for a global bank. The app allowed customers to trade in real time, with some fairly hefty credit limits for the corporate clients, and was replacing an older third-party app whose license was expiring.
The new app was written in Java, for Tomcat on Red Hat Linux. We were handed two dozen globally-distributed hosts on which to roll out Tomcats, Apaches, and JVMs. Then we were to tune for performance and write maintenance and operations scripts and business rules. Finally, we’d provide a set of defined firewall and load-balancing rules, deploy, and hand over to the security team to test.
Passing the security test was a key deliverable on the contract. An external third party had been hired to run a suite of tests for security weaknesses. The old app had about a dozen security issues, most of which were false positives, the rest considered safe. The new app was to have none. Or at least none detected by the security sweep.
Performing an external security sweep on a bank’s IT systems requires a lot of planning and tons of sign-offs. The kind of thing that has to be planned long in advance. Banking security systems are designed not just to be secure, but also to detect threats and react accordingly. Such systems have long since acquired the ability, for example, to call the cops out all by themselves.
They don’t have automated sentry machine guns around banks just yet, so the cops will be bringing their own. Accordingly, these kinds of things need to be switched off, forewarning given, and/or approval granted.
So we weren’t best pleased when the security experts delivered a list of 187 security issues they’d found. In fact, it sucked to be us. Naturally an explanation was demanded, and it was unlikely we’d be paid for the deployment without one. We set about analyzing their report, but we couldn’t redo any of the tests since that would set all the alarms off. We noticed one or two SQL injections that we could certainly try on the test system, but we couldn’t recreate their results there.
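For readers unfamiliar with the class of flaw, here is a minimal sketch of the kind of SQL injection such a scan probes for, and the parameterized-query fix. The table, columns, and payload are invented for illustration; the bank’s actual schema and the scanners’ actual payloads are not shown here.

```python
# Hypothetical illustration of a SQL injection probe; the "accounts" table
# and the payload are made up, not taken from the system described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0), ('bob', 50.0)")

def lookup_vulnerable(user):
    # String interpolation lets attacker-controlled input rewrite the query.
    return conn.execute(
        "SELECT balance FROM accounts WHERE user = '%s'" % user
    ).fetchall()

def lookup_safe(user):
    # A parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT balance FROM accounts WHERE user = ?", (user,)
    ).fetchall()

payload = "' OR '1'='1"
print(len(lookup_vulnerable(payload)))  # injection matches every row: 2
print(len(lookup_safe(payload)))        # treated as a literal username: 0
```

A scanner fires payloads like this at every input field; if the response suddenly contains extra rows or a database error, it flags an injection.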
Baffled, we went to the production server to examine its logs and see if we could find any clues as to why the app was failing. The security company had neglected to provide the list of source IPs we needed, and they were 12 time zones away. Time being money, we called them up anyway, got through to their on-call guy, and an hour or two later we had an email containing all the IP addresses they had used in the test. We were damned if we could find a single one of them in our logs.
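The check itself is trivial: take the first field of each Apache-style access-log line (the client address) and intersect it with the testers’ list. A minimal sketch of the idea, with made-up log lines and addresses standing in for the real ones:

```python
# Hypothetical sketch of the log check; both the log lines and the
# tester IPs below are invented examples, not the real data.
log_lines = [
    '10.0.0.5 - - [12/Mar/2010:06:25:24 +0000] "GET /login HTTP/1.1" 200 1043',
    '10.0.0.7 - - [12/Mar/2010:06:25:31 +0000] "POST /trade HTTP/1.1" 200 512',
]
tester_ips = {"203.0.113.10", "203.0.113.11"}  # addresses from their email

# In Apache's common/combined log format the client IP is the first field.
hits = [line for line in log_lines
        if line.split(" ", 1)[0] in tester_ips]
print(hits)  # an empty list: none of the scan IPs ever reached us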
We decided to start over, and went back to their report. Then we noticed something important that we’d all missed before and issued our recommendation. Yes, the security experts had found 187 defects. In somebody else’s system. We recommended a retest using the bank’s URL and not the incorrect address used in the initial test.
The “experts” were given a slot two weeks later to re-test. They reported no defects. They were not paid for the failed first test; nevertheless, for the few hours of bungled effort they put in running a bunch of scripts they’d downloaded from the Internet, they were paid ten times more than the guys who installed the system, simply because they played the “tech start-up” card.
To paraphrase the old beer commercial, I bet they’re using Agile.