Stop using risk as an excuse for taking the safe route

Nick Funnell, VP Engineering

There is risk in everything. There is risk in getting out of bed in the morning. There is risk in not getting out of bed in the morning. There’s risk in leaving the house these days, particularly with C19 (which, if you’re a Londoner, might be confused with a particularly treacherous cycle route).

But this is not a(nother) take on Coronavirus.

If you’re releasing software in a bank, you’re not supposed to take risks. Notwithstanding the fact that the entire financial industry is based on risk (lending, trading, investing - all risk-return), software is supposed to be safe and secure, and not put the business 'at risk'. There are two problems with this zero-tolerance approach:

  1. Total risk mitigation is impractical, if not impossible, in most situations.
  2. We treat 'risk' as binary (black or white, risky or not risky), whereas it's really a scale.

Many years ago, I studied Avionics. When designing aircraft, you assume they will, at some point, fall out of the sky. You then do everything you can to ensure the frequency of that happening is very small indeed. But there’s a sliding scale: Catastrophic failure = less than 1 in 10⁹ flight-hours (this is burned into my brain). So really, risk is the product of probability and impact: We should be tolerant of minor problems occurring relatively frequently, but highly averse to catastrophic failure. An aircraft can cope with a busted landing light often, but can only fall out of the sky once.
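The probability-times-impact framing can be sketched as a tiny expected-loss calculation. This is purely illustrative - the failure rates, costs and fleet hours below are made-up numbers, not real aviation data:

```python
# Risk as probability x impact: compare a frequent minor failure
# with a rare catastrophic one. All figures are illustrative.

def expected_loss(prob_per_hour: float, impact: float, hours: float) -> float:
    """Expected loss over a given number of flight-hours."""
    return prob_per_hour * hours * impact

hours = 1e6  # a fleet's flight-hours per year (hypothetical)

# Busted landing light: common, cheap to fix.
minor = expected_loss(prob_per_hour=1e-3, impact=500, hours=hours)

# Catastrophic failure: certified to under 1 in 10^9 flight-hours,
# but with enormous impact.
catastrophic = expected_loss(prob_per_hour=1e-9, impact=1e9, hours=hours)

print(f"minor: {minor:,.0f}  catastrophic: {catastrophic:,.0f}")
```

Even with these toy numbers, the point falls out: the catastrophic case still dominates the expected loss despite its vanishingly small probability, which is exactly why the tolerated frequency is set so low.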


In fact, if we step back a bit, the problem may be with the term itself. Perhaps the problem is that we see the word 'risk' as inherently bad. If we instead think of risk as 'the probability of something negative happening', it's much easier to weigh it up against the probability of something good happening (or something good not happening). It's a trade-off, not an absolute.

Going back to the bank, there is risk in your new service going into Production: It could be hacked; there could be critical bugs and vulnerabilities. The bank has an entire security organisation dedicated to keeping it safe, to ensuring nothing gets into a production environment that isn't secure and 'risk-free'. But see point 1 above: Total risk mitigation is impractical. Yet this is, in effect, what security has been tasked with: Make it secure (not ‘reasonably secure’). This is not their fault. Security is what they’re paid to do.

Sidebar: When challenging (in my view) excessive security requirements, the answer has often been, ‘You can never be too secure.’ Of course you can. My house would be considerably more secure if I bricked up all the windows and doors: It would also be considerably less useful.

But if you’re building a new service, what’s the real risk in testing an app with ‘friends and family’ if you can limit the maximum exposure to 10 dollars? This - arguably - doesn’t have to be as secure as a full bank with thousands of customers with millions of pounds in deposits, where a breach could be catastrophic. In the aircraft analogy, it’s less damaging than a busted landing light (maybe a seat with a broken recliner).


The problem is, though, that everything about traditional banks is monolithic: There is a change control process that is audited and overseen by regulators, and there’s usually very little discretion in how it's applied: It’s applied to everything. Add to that the fact that the processes are long, complicated and multi-layered (banks usually only add process, rather than revising what’s already there), and that entire internal organisations have evolved whose very existence is based around ensuring those processes are followed, and you have a recipe for stasis, for waterfall plans and ‘big bang’ releases. This is not an environment conducive to speed, optionality and experimentation. Agility is all about testing and iterating, and this ain’t it.

Banks are acutely aware of this. They try to get around it with ‘innovation departments’ where empowered teams get to experiment and shape the future. The trouble is that in most cases, they can only innovate within an ‘innovation bubble’, and they can’t put changes into a live environment. This means that they can innovate as much as they like, but when they want to test something for real, they hit the wall, and get stuck in the big waterfall process (as well as facing considerable resistance from their ‘non-innovation’ colleagues, who resent the team that is allowed to ‘play with technology’ without accountability).


So how do we tackle this? Well, the road will be long and hard, but one approach is to start small: Build small, low-risk services, and put them in the hands of small groups of real users. Get dispensation to launch something ‘real’ with a pragmatic subset of the full ‘belt and braces’ production-readiness process. Get buy-in from the top, and involve empowered expert decision-makers (and when I say ‘involve’, I mean charge them with delivering: They need to be on the hook for this, not just providing opinions. ‘Opinions’, in my experience, have caused the slow death of many initiatives).

And start small in lots of places. Build lots of things, make lots of (relatively!) cheap bets. Not all will be successful, but some will. And build them in such a way that, when something works, you can scale it, and have other systems reuse it. Gradually replace your estate.

Not taking risks is risky: Even inaction is an ‘action’. You’re better off taking bets that range in their level of risk. Take a lesson from portfolio theory: Hold a balance of near-certainties, right through to the rather risky bets that come with a very good reward. To be secure isn’t to eliminate risk; it’s to balance the inevitable risk. It’s okay for a risk not to pay off - it happens. Learn from it and keep going. Changing your mindset about risk can open up new opportunities you’d never have had otherwise. Give it a go.
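The portfolio idea can be sketched with a few hypothetical bets - the names, probabilities and payoffs below are invented for illustration, not a real investment model:

```python
# A portfolio of bets with varying risk: the expected net return of
# each bet, and of the whole portfolio. All numbers are illustrative.

bets = [
    # (name, probability of success, payoff on success, cost)
    ("incremental feature", 0.90,  120, 100),
    ("new small service",   0.50,  400, 100),
    ("moonshot",            0.05, 5000, 100),
]

total = 0.0
for name, p, payoff, cost in bets:
    ev = p * payoff - cost  # expected net return of this bet
    total += ev
    print(f"{name:20s} expected net return: {ev:+.0f}")

print(f"portfolio total: {total:+.0f}")
```

The near-certain bet barely breaks even, and the moonshot will usually fail outright - but across the portfolio, the expected return is comfortably positive. That is the sense in which balancing risk, rather than eliminating it, pays.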

About the author

Nick Funnell

Nick is responsible for engineering and technology within 11:FS Consulting, supporting our engagements with client-focused solutions, delivered iteratively by talented engineers using best-of-breed technology.
