The Paving of Good Intentions: A FALMO Case Study

Life is funny sometimes.

If it is not coming at us fast and unexpectedly, it comes at us with radical changes even when we think we are prepared for them.

Case in point: I had planned a two-part discussion on proactive crisis management as integral to 21st Century leadership. Along the way, I would share some of my personal and professional experiences, as well as a few time-tested and trusted tips. As luck and life would have it, we would only get through the first part before the unexpected happened.

A crisis happened.

Fortunately for us, on the scale of crises, it was not a major one where property, life, and limb were in peril. However, reputational risk was on the line (and still is, when you think about it), as was, from the user’s perspective, their financial peace of mind. Any decisions and moves would most likely have serious and far-reaching consequences, so bad ones were to be assiduously avoided.

So what was this crisis? What did we learn? How did we put that learning into practice? How did we work with our people to work through it? Let’s begin, as they say, at the beginning.

The Paving of Good Intentions

It all began with our commitment to serve our customers better. Many years ago, beyond simply responding to their needs, we took a proactive step in seeking to do even more: to consistently provide our stakeholders with the very best technology and services we had to offer.

Little surprise then that our internet banking solution, RepublicOnline, was positioned as the main vehicle for customer financial convenience. Since its launch more than a decade ago, the platform had satisfied our customers’ needs for online financial services, but RepublicOnline was increasingly being perceived as ancient and difficult to navigate. The access card used to provide a second level of authentication was deprecatingly referred to as the Bingo Card.

While we were proud of the platform, where it started and how it had grown, it was increasingly clear that it would not take us and our customers into the future. Change was necessary, but changing IT systems is the bane of many organizations’ existence. It is a prospect that fills most IT departments with dread, especially when the system has direct customer interfaces.

Notwithstanding this dread, the decision was made to change the solution. This was not an upgrade of the existing solution; that had occurred a few times along the way. This was a total replacement of both the online and mobile solutions. So, against that backdrop, on July 5, after years of planning and testing, we launched our new, more intuitive and user-friendly online and mobile banking service.

Within minutes of going live, an overwhelming response from customers visiting the website and downloading the new mobile app meant that, from the user’s perspective, the site crashed, leaving thousands confused and even angry at being unable to access their financial information online.

While the system did not actually crash, it slowed to a walk. For various reasons, customers who had been ported from the old system to the new system found themselves unable to access their information. Irony abounded: the very system we created to make their banking easier was now not delivering in the way we envisioned. While many were able to conduct their financial transactions, a substantial minority were not. Customers were rightly indignant, and social media did what it does, amplifying the disappointment and displeasure over the situation.

We knew that we had to fix it and we had to act fast.


FIX – Having the right people on your team is critical to successfully navigating and solving a crisis.

And fixing it is exactly what we are doing.

Led by our IT Management and our Electronic Channels and Payments Departments, the issues which caused the slowing down of the system were resolved within hours of surfacing. That proved to be the easier part. The issues which caused many customers to be unable to log in to their accounts were much more nuanced and, in many cases, customer-specific. The combination of these issues resulted in a bottleneck at the support centres, aggravating the crisis further. Working with all of our partners, and through the dedication and loyalty of the frontline team members throughout the organization, the process of restoring customers’ access began and continues. There were still areas being looked at, but for the most part, the system was back on track, with over 40,000 users accessing their information within three days.

Problem fixed, crisis stemmed, right?


In fact, if we stopped at just fixing the issues, I believe we would only be skimming the surface. We are addressing the problem, yes. But we also needed to address the people.

APOLOGISE – If you can’t solve a problem right away, address it right away.

From the minute we knew what was happening, my team and I set about addressing the problem. We knew that for the solution to be meaningful and effective, it would take time to create. Time we obviously did not have, since every second the situation persisted was another second a customer could not access their financial information. It was another second a customer was inconvenienced when trying to pay bills or transfer money.

During a crisis, time moves differently. What seems like a minute to us may be an eternity to someone else, and vice versa. Regardless of how time flows, always make time to accept accountability. Take the time to admit that something went wrong and apologise for it, even if it is (and more often than not it is) beyond your control.

Apologising sets the tone for any future communications. Too many organisations waste time trying to lay blame when they could spend that time actually fixing the problem. Worse still, some organisations throw their own teammates under the bus in an attempt to deflect the heat.

When something goes wrong in an organisation, our stakeholders are not saying “Well, it’s John in Accounting’s fault” or “It’s Jane from Legal who dropped the ball.”

No, they are saying it’s the organisation’s fault.

On the one hand, looking at this case study, you may say that this was an “IT issue” or a “tech issue.” But it is not. It is a Republic Bank issue.

It is an “our people are in need” issue. It is an issue that we needed to take responsibility for and give an earnest commitment toward resolving. More often than not, though, words come fairly cheaply. I was recently reminded of the classical Latin, res, non verba – deeds, not words. So as important as it is to apologise, it is even more important that actions back those words up.

We need the will to do what’s right and take accountability with a commitment to addressing what went wrong – quickly and definitively.

LEARN – “There are some things you learn best in calm, and some in storm.” – Willa Cather

And that’s what we did. We faced the storm. We steeled our resolve. Owned up to our mistakes. Identified and began fixing the problem areas. We learned.

Apart from learning what went wrong through our forensic audit, we learned even more by talking to, and (more importantly) listening to, what our people had to say. When I say “our people”, I don’t mean only customers and communities, who were extremely vocal online. I am also talking about our teams, who were either affected directly by the downtime in service, or indirectly, having read many of the negative comments about our team or heard them from distraught family and friends.

As you can well imagine, there were many comments. Several were observational, pointing out that the platform was down and that customers were being greatly inconvenienced. Others were outright belligerent and rude. Regardless of the intent or basis of the comments, we treated them all as invaluable because they fuelled our learning.

We knew that many of our people were hurting, angry, or disappointed. So we expected them to lash out. When this happens, realistically speaking, there is not much of anything that can be done (short of instantaneously fixing the problem) to placate the proverbial angry mob baying for blood.

So what did we do? What can we do?

We continued to learn. We continued to listen to what our people had to tell us. We then used the feedback to shape our responses to the problem and to inform the future. We reached out to the public with the facts. We made sure to keep all communications channels open, even after the initial problem had been fixed that very day, to ensure that our platform works as intended. When learning from a mistake, it is essential to avoid a defensive posture. You will never learn the full lesson if you put up a defensive wall.

More importantly, we continue to listen and learn because we believe in hearing from our people – when things go wrong, but also when things are fixed and they go right.

(Granted, fewer people have reached out to tell us the problem is fixed than reached out while it was happening… but that’s OK, because it leads to the next point.)

MOVE ON – Rally the troops.

We may have the best plans in place before, during, and after a crisis, but throughout the ordeal, it is very often our response to these experiences that ultimately defines us. The negative feedback from that first day is what fuels our passion to rectify any issues, thereby converting a negative into a strong positive.

So, we cannot afford to dwell on the mistakes. Doing so leads to decision paralysis and risk aversion, which can stymie future progress and compromise service quality as well as structural and financial wellbeing. It can cost us the businesses we have worked so hard to protect.

Leading up to that fateful Monday, our teams worked hard on this upgrade. Many hours, resources, and much effort were invested in making sure July 5 would be a watershed day in our organisation’s history.

We just did not know beforehand how much of a watershed day it was going to be.

Even as we continue to iron out the kinks (and there are still more), the entire experience has been a reminder of what constitutes strategic crisis management and proactive response. There were many lessons learned along that journey. But the journey goes on. It always does. So don’t waste time getting hung up on what went wrong. Fix it, learn from it, and focus on what you want to go right.

I know that, looking at this, many of you will have your own lessons, perhaps even tips, that you are willing to share or incorporate into your respective organisations. Some may touch back on Murphy’s Law. Some may say the Devil is truly in the details. And for the most part, you would be right to call those to mind.

However, effective crisis management is not only about focussing on what went wrong. It is about Fixing the problem, Apologising for the fallout, Learning from both the mistakes and our people, and above all, making good on the promise to never let it happen again as you Move On.


Honestly, we were in a good place up until July 5. Then disaster struck and we responded.

Now, strangely, we are in a slightly stronger place; a little more on our toes, and confident in our teams’ ability to surmount challenges.
