Whether deployed for power applications in the trading world or for cost reduction in processing, grids are now common. Joy Macknight explores their potential in financial services
When many people hear the words grid computing, they think either of research institutions computing quantum physics problems on supercomputers or the search for alien life across the universe with the use of home computers. But grid computing is expanding well beyond those origins and making deeper inroads into the financial services industry, starting on the trading room floors and spreading across the financial spectrum.
“We see the financial services sector as the industry that will be the tipping point for grid transitioning from its original source, which was the high performance and technical computing scientific research area, into mainstream commercial computing,” says Peter Ffoulkes, director of marketing for high performance and technical computing, Sun Microsystems Network Systems Group. “Firstly they have been using the more established batch processing grid techniques for quite a long period of time for portfolio analysis. Financial institutions have developed a lot of in-house expertise and also the confidence of understanding what grid is good at and where it helps, so they can then start looking at other parts of the business and see where the grid technology can actually be deployed.”
The design goal of grid computing is to solve problems too big for any supercomputer by using resources spread over a heterogeneous environment. In the capital markets, a lot of the computations run by traders or risk managers in the front office, running brute force methodologies such as Monte Carlo simulations, require intensive CPU power. But there’s some finesse involved — it is not just about adding raw power.
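To make the "brute force" point concrete, here is a minimal sketch of the kind of Monte Carlo computation the article refers to: pricing a European call option by averaging simulated payoffs. All names and parameter values are illustrative, not taken from any bank's system; the key property is that accuracy improves only with the square root of the number of paths, which is why these jobs soak up so much CPU.

```python
import math
import random

def mc_option_price(s0, strike, rate, vol, t, n_paths, seed=0):
    """Monte Carlo price of a European call: average the discounted
    payoff over n_paths terminal prices simulated under geometric
    Brownian motion. Error shrinks only as 1/sqrt(n_paths), so front
    offices throw large amounts of CPU at runs like this."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * t
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp(drift + vol * math.sqrt(t) * z)
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-rate * t) * payoff_sum / n_paths
```

Because every path is independent of every other, a grid can split the `n_paths` loop across as many cheap machines as are available, which is exactly the workload pattern the rest of the article describes.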
The ability to support applications across different platforms sets grid computing apart from traditional homogenous clusters and solves the problems associated with applications being hard-wired to specific CPU resources. Banks are exploiting this ability to scale their computational capacity: computationally intensive jobs are split into many small tasks that execute simultaneously, allowing banks to run them on commodity hardware.
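The split-and-run-simultaneously pattern above is often called scatter/gather. The sketch below illustrates it under stated assumptions: the slice valuation is a hypothetical stand-in for real pricing code, and a local thread pool stands in for grid nodes (on a real grid each slice would be shipped to a separate commodity machine).

```python
from concurrent.futures import ThreadPoolExecutor

def price_slice(trades):
    # Hypothetical stand-in for a CPU-heavy valuation of one slice.
    return sum(t ** 0.5 for t in trades)

def run_on_grid(portfolio, n_workers=4):
    """Scatter/gather: carve one big job into independent slices,
    farm each slice out to a worker, then aggregate the partial
    results into the final answer."""
    size = max(1, len(portfolio) // n_workers)
    slices = [portfolio[i:i + size]
              for i in range(0, len(portfolio), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(price_slice, slices))
```

The design choice that matters is that slices share no state, so adding nodes adds throughput almost linearly, and a failed slice can simply be resubmitted elsewhere.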
“From a financial perspective, it is a key tool for us to maximise utilisation and increase efficiency, which means that we will have a reduction in the total cost of ownership. Those are the benefits that we gain. In capital markets, it is critical to perform faster and it is critical to be able to do so in a reliable and flexible manner. That’s a powerful combination and you can find that in a product that can act as a platform for the services you have,” says Robert Ortega, vice president of architecture and engineering at Wachovia.
In the highly competitive financial industry, organisations must pursue the strategic objectives of growing sales, reducing costs, redeploying capital and improving the yield to their investors, and do so in a way that is highly cost effective. DataSynapse, a grid computing firm, identifies several reasons to deploy an enterprise service oriented architecture strategy that includes a grid component: application performance improvement, resilience and reliability, flexibility and API independence, service oriented control, dynamic provisioning, rapid development and deployment, usage-based accounting, and total cost of ownership reduction.
Arno Radermacher, head of technical applications and systems, Sal Oppenheim, says: “We have two reasons for adopting grid. One is to reduce the cost compared to traditional SMP (symmetric multiprocessing) solutions. The other is our interest in entering into a new infrastructure technology. We believe that grid is more than just a solution for one single problem: it is a way to separate the hardware layer of IT systems from the applications layer and enable the hardware to become an overall resource.” Sal Oppenheim is a private European bank whose core business is asset management and investment banking.
“The front office, the traders, have their own set of applications and their own equipment; the back office have their infrastructure. This is the architecture that you find in almost every bank. The disadvantages are that when the users are not using these resources, the resources cannot be used in other areas. In banks, the average utilisation of servers’ resources is 15-20 per cent,” says Robert Boettcher, vice president, financial services, Platform Computing, the firm chosen by Sal Oppenheim to provide its grid computing solution.
The Canadian company has developed an enterprise grid solution that inserts a layer of technology between the applications and CPUs in order to take all the resources and turn them into virtual pools. Then it can dynamically match the workflow coming from an application with the resources available in the virtual pool.
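A toy version of that matching layer can be sketched in a few lines. This is an illustration of the general idea, not Platform Computing's product: every CPU joins one shared virtual pool, and each incoming task is dispatched to whichever node is currently least loaded, regardless of which desk nominally owns it.

```python
import heapq

def schedule(task_costs, node_names):
    """Dispatch each task to the least-loaded node in the virtual
    pool, using a min-heap keyed on current load. Returns the total
    work assigned to each node."""
    heap = [(0.0, name) for name in node_names]  # (load, node)
    heapq.heapify(heap)
    load = {name: 0.0 for name in node_names}
    for cost in task_costs:
        busy, name = heapq.heappop(heap)   # idlest node right now
        load[name] = busy + cost
        heapq.heappush(heap, (load[name], name))
    return load
```

Because work flows to whatever capacity is free, utilisation rises from the 15-20 per cent of siloed servers towards the 60-80 per cent figure quoted below.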
“With the enterprise grid approach, you get decreased capital costs for a few reasons: you are able to move to the lowest cost technologies because you no longer need the supercomputers of the past; you are able to flow across different business priorities and increase the utilisation to 60 or 80 per cent; and you are able to increase resilience. One of our bigger clients, JPMorgan Chase, has moved to a model they call disposable computing — if a node on the enterprise grid fails, they won’t bother to try and fix the problem but just rip it out and replace it because that works out cheaper,” says Boettcher.
There are also some unforeseen side effects of deploying grid. Fred Gedling, director of services at DataSynapse, says: “A mature company in the US found that their data entry clerks would come into the organisation every morning at eight o’clock and the systems weren’t available for two hours. They couldn’t book new business on the day; people would have to stay late and the workflow of the organisation was broken. Grid had two effects: it accelerated the overnight processing, so it gave them headroom to re-run parts that failed, plus the interesting side effect they saw was staff retention. Staff turnover dropped to virtually zero almost immediately. Before deploying the grid solution, the staff were frustrated, they were having to stay late and they couldn’t do their jobs — all that stopped with improved performance and reliability.”
Most companies look at grid when they encounter a pain point in their business where they need to do an upgrade, and then they look at integrating grid into an existing secure system or an existing transaction system. “What we are seeing now is that people are convincing themselves that the technologies are stable enough and as we get into different production environments, for example a fraud detection program being developed by a major credit card company, the results are becoming so compelling that people are deploying them faster and faster,” says Ffoulkes.
“The reason why it has been limited to the risk end is because of the narrow applicability of the particular processing pattern that can be used. That’s the only thing you could do with a classic grid. Now, we have been approached by people to work out distributed back-up resettlement systems, provide global caching capabilities and enterprise messaging infrastructure because they realise that those patterns are different patterns in the distributed computing arena. Rather than putting in a grid solution, a this-and-that solution, you put in a generic enterprise service fabric which can host any of those patterns in the way it orchestrates the components,” says Richard Nicholson, chief executive of the UK firm Paremus.
“We build from the premise of having no central point of control. The control functions are embedded across the computing infrastructure that is being used for processing. So you could go along to a non-functioning node and rip it out. Whether it hosted a control or a processing function doesn’t matter, because that function will be re-established elsewhere on some other node within the fabric. You can incrementally and randomly kill off as many of the resources as you like and the system will continue to function. The concept is based on biological system design. There is a lot you can learn from it in building self-healing systems.”
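The self-healing behaviour Nicholson describes can be sketched in miniature. This is a simplified illustration, not Paremus's actual mechanism: because no single node owns the control function, the survivors of any failure simply agree on a deterministic rule (here, lowest node id wins) to re-establish it elsewhere.

```python
def elect_controller(live_nodes):
    """Decentralised re-election: any node can take over the control
    role, so survivors deterministically pick the lowest id. Every
    node applies the same rule, so no coordinator is needed."""
    return min(live_nodes) if live_nodes else None

def kill_nodes(nodes, casualties):
    """Rip out failed nodes and re-establish the control function on
    whatever is left; the fabric keeps working until nothing survives."""
    survivors = sorted(set(nodes) - set(casualties))
    return survivors, elect_controller(survivors)
```

Killing nodes in any order always leaves a working controller among the survivors, which is the property the quote claims for the fabric as a whole.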
So what is holding back the immediate uptake of grid?
“If you look at most banking applications, even most capital market applications, they require quite a large stream of data and getting the data to the right processor for the calculation is still a problem. Now there are people working on the problem and it is quite likely that solutions will be found over the next 18 months to two years. We expect to see a broadening out of the type of applications that are running on grid,” says Robert Gifford, director of EMEA research and consulting at Financial Insights.
“So far it has been the big players — Deutsche Bank, Barclays, RBS, ABN Amro and the big US banks, JPMorgan and Citigroup. They tend to use grid and they have rolled it out to a number of applications. Now it is moving down into the Tier 2s and smaller Tier 1s because it’s fairly well proven, it works, the costs are reasonably containable, and you get a lot more power for fewer bucks. I think you can quite safely say that the next architecture will be in some way grid based with virtual resources and automatic scheduling,” concludes Gifford.