About Me

I am the former editor of The Banker, a Financial Times publication. I joined the publication in August 2015 as transaction banking and technology editor, was promoted to deputy editor in September 2016 and then to managing editor in April 2019. The crowning glory was my appointment as editor in March 2021, the first female editor in the publication's history. Previously I was features editor at Profit&Loss, editorial director of Treasury Today and editor of gtnews.com. I also worked on Banking Technology, Computer Weekly and IBM Computer Today. I have a BSc from the University of Victoria, Canada.

Friday 24 July 2009

Getting things done

Features

How do institutions mitigate the risk involved in implementing an IT project, and how does it impact their compliance obligations? Joy Macknight looks at the issues.

All IT project implementations carry a degree of risk, and the financial industry knows this better than most, having had its fingers burnt in the past. Historically, financial firms have been rather poor at delivering projects on time or on budget, with a number of high-profile IT projects having gone wrong.

Financial institutions also carry the extra burden of compliance. A growing body of regulation, such as anti-fraud and anti-money laundering rules and Basel II, has increased the pressure on financial firms. Basel II, with its deadline fast approaching, is causing some headaches in the industry.

With bigger projects, driven by the replacement of old legacy systems and the drive for flexibility in delivering new products, mitigating risk is even more challenging. But it is clear that many institutions see the clean-up of their technology environment as an opportunity to boost shareholder value by demonstrating greater operational efficiency and reduced risk.

A situation of massive IT infrastructure complexity, caused by mergers and acquisitions as well as different lines of business, confronts chief information and technology officers in the financial sector. Rob Raponi, director of professional services at DataMirror, identifies some of the problems: "Banks, typically large institutions but even smaller ones, are usually not a monolithic structure but a collection of businesses, from retail to wealth management to insurance in some cases, depending on the jurisdiction you are in. As a result of acquisitions, one of the risk elements is how they integrate this hodgepodge, because they may have disparate platforms: some came in a package, some were built in-house, and the people who built them are no longer there. That kind of thing is very common in financial institutions."

Every industry vertical faces a similar problem of systems and applications complexity, but for the financial vertical the problems do not stop there. There is the extra pressure of having to be online 24/7: even the smallest disruption in service can deal a severe blow to a firm's reputation.

Reliability was one of the main concerns of Currenex, a US-based electronic foreign exchange trading platform, when it decided to switch from Sun Solaris to HP ProLiant servers running Linux in 2004. In a market where time really is money, Sean Gilman, chief technology officer, explains: "We can never really turn the system off, so it is impossible for us to have any kind of bug that would risk the system going down."

Currenex's methodology when implementing a typical IT project is to limit its scope and try to "nibble away" at the problem instead of implementing one big fix. "That allows us to do less work to get it out into the field quicker, get feedback from customers to see if we are really going down the right path, and because each little step is smaller, it is easier to test, so we have a higher degree of certainty in what we've done," says Gilman.
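Gilman's approach can be sketched in code. The function below is purely illustrative (not Currenex's actual process or tooling): it applies a change as a series of small, independently tested steps, stopping at the first step that fails and gathering field feedback after each successful release.

```python
# Illustrative sketch, assuming a release can be decomposed into small steps.
# `test` and `gather_feedback` are hypothetical callables standing in for a
# firm's own test suite and customer-feedback channel.

def release_incrementally(steps, test, gather_feedback):
    """Ship each small step only if it passes its tests, collecting
    feedback after each release to confirm the overall direction."""
    shipped = []
    for step in steps:
        if not test(step):            # smaller steps are easier to test...
            return shipped, step      # ...and a failure halts only this step
        shipped.append(step)
        gather_feedback(step)         # early field feedback on each "nibble"
    return shipped, None              # every step shipped cleanly
```

The design choice mirrors the quote: because each step is small, a failed test stops one nibble rather than derailing one big fix, and customer feedback arrives throughout the project instead of at the end.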

At the beginning of the nine-month project with HP, Currenex identified the potential risks: process mistakes, such as human error or anything else that could cause outages; bugs or errors along the way, whether in the hardware or in the operating system; and the ultimate risk that the end result would not accomplish the goal of the project. "We probably had a little of all these types of problems, but we were able to mitigate them throughout the process. The key is having time to address them," says Gilman. The project came in on budget, with a delay of only one month, and delivered better-than-expected results.

Michel van Leeuwen, chief executive of risk management at Misys Banking Systems, looks at the effect Basel II has had on smaller banks: "Many banks, specifically from Tier 2 down, have not failed to think about it, but have failed to act on the fact that they need to be compliant by the beginning of 2007. Many lower-tier banks have woken up to the fact that they are now going to have to dive head first into projects that they haven't properly researched, for which they haven't made the right assessments with regard to the number of people, project risk, etc. But they want to deliver at some time in the future, and they only have one year to get this system live. We are now seeing that they are all racing for the gate and trying to make this happen, which creates a feeding frenzy on the banks' side and a delivery problem on the consultant and vendor side, and the words 'project risk' will be posted in lights for the next 18 months."

Abbey Financial Markets (AFM), the investment banking division of Abbey, now part of the Santander Group, had more foresight: in 2002, in light of increasing regulatory pressure and following an operational review, it decided on a business-led transformation programme involving seven streams of work. The programme involved implementing a new trading system with some major infrastructure changes, such as migrating to Windows XP, as well as providing a new financial reporting system and changing the general ledger. It was a major programme that touched all parts of the organisation.

AFM used Business Control Solutions' Blueprint methodology, which is similar to an architect's view of a building: essentially an engineering survey of AFM's starting point, detailing every aspect of the firm's IT infrastructure. Jon Mathias, a member of BCS's strategic technology consulting team, explains: "The Blueprint approach is to take an accurate and precise survey of the existing environment. The idea is that by making a very precise assessment of the starting point, you de-risk your project implementation significantly by knowing: if I take this piece of the organisation and I change it, what's going to happen?"
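The question at the heart of that quote, "if I change this piece, what happens?", is a dependency-graph traversal once the survey exists. The sketch below is a hypothetical illustration of the idea, not BCS's Blueprint product; the system names and the `depends_on` map are invented for the example.

```python
# Minimal sketch of change-impact analysis over a surveyed infrastructure.
# `depends_on` maps each system to the set of systems it relies on, as a
# Blueprint-style survey might record them.
from collections import deque

def impact_of_change(depends_on, changed):
    """Return every system directly or transitively affected when
    `changed` is modified."""
    # Invert the survey: which systems rely on each component?
    dependants = {}
    for system, deps in depends_on.items():
        for dep in deps:
            dependants.setdefault(dep, set()).add(system)

    # Breadth-first walk outward from the changed component.
    affected, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for dependant in dependants.get(current, ()):
            if dependant not in affected:
                affected.add(dependant)
                queue.append(dependant)
    return affected
```

For instance, with an invented survey `{"trading": {"ledger", "market_data"}, "reporting": {"ledger"}, "ledger": set(), "market_data": set()}`, changing the ledger would flag both trading and reporting as affected. The same inverted map also exposes decommissioning candidates: a component that appears in no one's dependency set has no known consumers.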

The project had the added benefit of decommissioning around 70 legacy applications, mostly satellites sitting on the edges of the system. "What we found with the decommissioning programme at Abbey was that these little applications tended to have very poor levels of documentation, and really nobody knows how they work or even what they are for, what the data is for, or where it comes from. It's a big headache when you are trying to figure out what the knock-on effect of switching one of those things off is; you really need to know what they do to calculate the effects," says Mathias.

Fundamentally, the issue that chief operating and chief technology officers are grappling with when implementing IT projects is developing a methodology that maps business objectives onto a time, cost and quality matrix. John Brashear, managing director at BearingPoint, an IT consultancy, services and solutions firm, says: "Once a project has been identified, there is timing and there is budget. There is also a big challenge in making sure that the size and business criticality of the project that has been taken on are aligned with business need, and that it is actually accomplishing the function it was set out to accomplish, in the way it needs to be accomplished."
