Yves here. One of the sources of risk in big and even moderately big banks that does not get the attention it deserves is information systems. Having mission critical systems function smoothly, or at least adequately, is crucial to a major trading operation. Huge volumes of transactions flow through these firms, and the various levels of reporting (customer exposures, funds flows, risk levels, transaction and reconciliation failures) need to be highly reliable or things get ugly fast. Witness MF Global, where the firm was unable to cope with the transaction volume of its final days and literally did not know where money was at various points in time during the day.
Now one would think that in the wake of a super duper financial crisis, big banks would up their game on the risk management/IT front. My guess is the reverse. First, regulators haven’t thought much about operational risks; that’s only recently been considered something worth thinking about. Second, even though I suspect that over time trading managers have gotten better at managing IT, that likely means they have gone from terrible (as in too preoccupied with the press of business to do an adequate job of spec’ing projects or being willing to try approaches like Extreme Programming) to merely garden-variety not very good (as in pretty much no one in corporate-land is willing to spend the extra 20% or so to have developers document their work in sufficient detail that a completely new person could understand what was done). And banks have a monster legacy system problem. Multihundred-million-dollar programs to tidy up and integrate systems into the One System to Rule (Big Parts of Them) All have this funny way of being cashiered after running up monstrous bills and not getting very far.
One window into the severity of this problem: the OCC (yes, our overly bank friendly OCC) graded the 19 biggest banks as failing on a whole slew of operational measures, which included IT. And remember, the list consists mainly of traditional banks (admittedly some really big traditional banks like Wells Fargo), not firms that derive a major portion of their profits from more operationally-demanding trading activities. From American Banker in December (hat tip Richard Smith):
The Office of the Comptroller of the Currency recently graded the 19 largest national banks on five factors designed to gauge how well they are being run.
The results are startling.
Not a single bank met the OCC’s requirements for internal auditing, risk management or succession planning. Only two of the 19 banks met the regulator’s requirements for defining the company’s appetite for risk-taking and communicating it across the company. Only two banks were judged to have boards of directors willing to stand up to their CEOs…
Among the five governance areas being targeted, risk management and audit are getting the harshest eye. “We determined that for these 19 banks, their audit and risk management functions had to be elevated from wherever they were to meet our definition of ‘strong,’ ” Brosnan says.
None of the banks have met that standard for audit; 10 banks are within a year of meeting it while the nine remaining banks will need up to two more years, according to the materials the OCC disseminated at the conference. (The OCC did not identify any of the banks by name.)
None of the banks have met the risk management standards either. Four are within a year and 15 of the banks are going to need up to two years to pull their systems up to snuff, the OCC says.
My admittedly dated experience in IT (with two firms that were considered to be extremely good at it) is that the OCC will need to double its estimates of how long it will take. Independent of the fact that it always takes longer than anyone estimates, with major elements of Dodd-Frank being hashed out and Basel III both in play and being delayed, a lot of IT projects will be pushed off until those are finalized.
But this UBS vignette is one of those peeks behind the curtain to see how bad things often are. Remember that UBS was one of the banks recognized to be most at risk in 2008. The Swiss National Bank was caught flat-footed when UBS needed a monster bailout and was alone among central banks in making UBS hire an outside party to ascertain exactly what led to the meltdown (an undersupervised CDO team was a major culprit) and publish the findings. The Swiss have also imposed capital requirements of 19% on their banks, a level regarded elsewhere as draconian, and which is forcing UBS and Credit Suisse to downsize considerably.
Yet this story published originally in German by Lukas Hässig reveals that even under a tough central bank, a lot of IT messes and deficiencies remain at the big financial firms (although it is also possible that the Swiss set capital levels at 19% precisely because they knew what a mess their charges were). And, quelle surprise, no one responsible has been fired or even demoted. Once you reach a certain level in banking, you only fail upward.
By Lukas Hässig, an independent financial journalist in Zurich, who has written two books about the crash of UBS and the end of Swiss banking secrecy. He has operated the internet financial newspaper Inside Paradeplatz with daily news about Swiss banks since 2011. You can contact him at mail_at_lukashaessig.ch. Translation by the author
UBS loses hundreds of millions in a failed risk management project
After the “A-Risk” project failed, UBS risk control aggregates risks using an Excel patchwork. Recently, the investment bank has been inadvertently running open risks stemming from unhedged trading positions with CHF 500m loss potential.
UBS’s top management has been grilled by the British parliamentary commission for constantly failing to get risks under control, as demonstrated by several catastrophic and reputation-wrecking scandals: the gigantic $40 billion subprime loss, the tax-evasion scheme perpetrated in the USA, the Adoboli fraud and the Libor manipulation.
The line of defense of the UBS managers is always based on the same answer: “we did not know”. In reality, UBS’s top management has always been aware of the deficiencies in risk control. For instance, Walter Stuerzinger and Philip Lofts, the former and the current Chief Risk Officer, were warned as early as 2002 by two risk specialists at the Zurich head office with extensive experience at major trading centers (“The crisis at the heart of UBS”, published in The Sunday Telegraph on 6 July 2008) that the bank was building up an unacceptably large risk concentration in the US structured credit sector (including subprime) and that the risk management approach was flawed and incapable of capturing, and hence adequately measuring, the true loss potential of these exposures.
After the subprime losses UBS declared that it had changed its approach and had become particularly risk-averse. However, the reality was different: as history shows, UBS never turned the corner and has remained one of the most aggressive investment banks.
The recent failure of the “A Risk” project is further evidence of this, and it also shows that the integrity and solidity of the risk control infrastructure is still not a top priority of top management. The “A Risk” initiative was supposed to deliver a state-of-the-art and innovative risk monitoring infrastructure, and it should have allowed top management to have a global view of all the risks of the bank.
After five years of development and the spending of several hundred million Swiss francs, “A Risk” does not run as expected and, according to an insider source, is in a “catastrophic status” and has failed to deliver: the various trading desks of the bank still run on different IT infrastructures, and the various risks have to be collected from “different databases” and aggregated using Excel spreadsheets with several manual interventions. These are clearly very prone to operational errors.
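To see why the manual spreadsheet workflow is so fragile, consider what it is actually doing: summing exposures per risk factor across per-desk database extracts. A minimal sketch of the automated equivalent (the file names, column names, and schema here are hypothetical illustrations, not UBS’s actual systems) shows how little it takes to make the aggregation deterministic rather than dependent on someone pasting the right extract into the right tab:

```python
import csv
from collections import defaultdict

def aggregate_exposures(extract_files):
    """Sum exposures per risk factor across per-desk database extracts.

    Each extract is assumed to be a CSV with columns:
    risk_factor, exposure_chf.

    This is the step the manual Excel process does by copy-paste --
    and where a missed file or a stale extract silently corrupts the
    firm-wide total.
    """
    totals = defaultdict(float)
    for path in extract_files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["risk_factor"]] += float(row["exposure_chf"])
    return dict(totals)
```

The point is not that a real risk aggregation layer is a 15-line script — it is that once the pipeline is scripted, every run pulls from the same sources the same way, which is precisely what “several manual interventions” cannot guarantee.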
Not very surprisingly, given UBS’s track record in dealing with risk control failures, the people directly responsible for this failure are still employed by UBS and hold highly paid positions. Among these are the above-mentioned Walter Stuerzinger and Philip Lofts, the former and the current Chief Risk Officer. But even at the next hierarchical level, no major consequences seem to have followed:
• Galo Guerra, who graduated from the Sloan School of Management at MIT and led the “A Risk” project, is still on the payroll of UBS;
• Pieter Klaassen, a graduate of the same school, has according to the insider source been removed from his position as “Head of firm-wide risk aggregation”, apparently because of a lack of leadership skills. However, on LinkedIn he appears to still hold the same position;
• Darryll Hendricks, who holds a PhD from Harvard, was in charge of the “Risk Methodology” function of the investment bank and was therefore at least partly responsible for the correct representation of the risks. He is still with the bank and holds the position of “Head of Strategy” of the investment bank;
• Tom Daula, who until the crash of 2008 was the Chief Risk Officer of the investment bank and had good prospects of becoming Chief Risk Officer of the whole bank, moved internally and is now apparently the head of global research and analytics.
While highly paid top shots have succeeded in staying on board, many employees in back-office functions are being axed because the bank has decided to scale back its investment banking activities.
However, good specialists in this area are urgently needed. According to the insider source, in late summer 2011 the Swiss investment banking unit in Opfikon executed a very large transaction in Korean won. However, the Treasury department, which is in charge of managing the balance sheet and hedging the positions, forgot to execute the hedge. As a result, the position, bearing a 500 million loss potential, remained open for several months. When this failure was discovered, UBS started an investigation in which Finma (the Swiss regulator) was also involved. However, no consequences at the personnel level have followed. A UBS press spokesman refused to comment on the event.
I’m tempted to say that the solution is to make banking IT systems open-source and allow the community to improve them :P
But seriously isn’t all that stuff even more highly classified than CIA missions in banana republics?
I’ve tried to independently research which banking and payment systems in the US use which kind of software, where they are physically located (in Virginia), and how they manage to securely prevent hackers from, say, breaking into the financial system to print off money and send it off to some account.
I’ve always been interested in how exactly these payment systems, digital currency storage mechanisms, and contract/asset safekeeping by banking IT systems actually work.
But, I have yet to find a decent book that goes into a discussion of how on earth we trust, say, Russian banks to hold US dollars in good faith digitally. If anyone knows of any books that go into this kind of discussion or declassifies or demystifies IT systems of banks, I’d love a recommendation.
LOL – i suppose anyone who wants to hack those systems would like the same thing …
You clearly believe in the long discredited “security through obscurity” model, which relies on the idea that without access to source code, hackers won’t be able to break in. How has that worked for Microsoft, Adobe, and various other proprietary software vendors?
And as far as encryption is concerned, real security experts refuse to accept encryption algorithms unless they _have_ been published and vetted by crypto experts.
And that’s on top of the mistakes Microsoft is known to have made all on its very own. Heartening, isn’t it.
No, I don’t much believe in “models” at all – but why make it easy ….
It sounds weird, but in fact security experts such as Bruce Schneier will tell you that the key to security is to publish *practically everything*. A good system is one where absolutely anyone can stare at its guts, and where hundreds of people have done so, but where people *still* can’t figure out how to get in.
The only “secrets” in a banking system should be passwords giving access to move money, really.
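That is Kerckhoffs’s principle: the design is public, and all the secrecy lives in the key. A minimal sketch using Python’s standard library makes the point — HMAC-SHA-256 is a published, exhaustively vetted construction, yet an attacker who can read every line of this code still cannot forge a valid tag without the key (the payment-message framing is just an illustrative example, not any real bank protocol):

```python
import hmac
import hashlib

def sign(message: bytes, key: bytes) -> str:
    """Authenticate a message with HMAC-SHA-256.

    Every detail of the algorithm is public and has been picked over
    by cryptographers for decades; the only secret input is the key.
    """
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, key: bytes, tag: str) -> bool:
    # compare_digest is a constant-time comparison, which avoids
    # leaking information through timing side channels.
    return hmac.compare_digest(sign(message, key), tag)
```

A tampered message or a wrong key makes verification fail, which is the whole game: publish everything, keep only the key secret, and let the world try to break it.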
The absolute minimum amount of material should be secret. This is a basic operational security principle. This is also a principle which our idiot CIA and idiot DoD and idiot NSA do NOT follow. The more “secret” information you have, the less secret any of it is.
FWIW, I think the real reason for the “secrecy” of the banking systems is that it allows the banks to conceal frauds more easily.
Every bank I’ve worked IT in relies on proprietary systems that are internally unique. Otherwise they would be sharing systems and data with each other. And no one wants that.
And every department within the bank, no doubt.
Given the fundamental nature of the credit default swap, especially when used as a naked swap, and the unlimited number of swaps which can be sold and/or purchased, and given the unlimited number of commodity future contracts which also can be sold and/or purchased, and given the DTCC’s Stock Borrow Program, the ideal of the naked short sellers, there can logically be no real risk control — ever!
Purely from the IT end of things, since I believe both Morgan Stanley and UBS were supposed to use the K language and the kdb database, one would think from that vantage point they would be covered?
here’s info on secrecy world of offshore financials-banking:
http://www.newleftproject.org/index.php/site/article_comments/the_men_who_stole_the_world
http://www.amazon.com/dp/0230341721
Money & Speed: Inside the Black Box (Marije Meerman, VPRO)
VPROinternational Dutch
http://www.youtube.com/watch?feature=player_embedded&v=aq1Ln1UCoEU#!
Skippy… E N T R O P Y
I’ve seen that movie before, it’s more about HFT than about the IT systems used to secure and manage Central and Commercial Banking storage of assets and digital currency.
In the age of Iranian and North Korean hackers screwing with various systems, who knows what they can do. I’d like to learn more about the infrastructure and how well protected it is and what software is being used and such.
http://www.inntron.com/core_banking.html
LALALALA Oracle
Part of Citicorp
Oracle Financial Software Limited was a part of Citicorp’s (now Citigroup) wholly owned subsidiary called Citicorp Overseas Software Ltd (COSL). In 1991, Mr. Ravi Apte carved out a separate company called Citicorp Information Technologies Industries Ltd. (CITIL) out of COSL and named Mr. Rajesh Hukku to head CITIL. While COSL’s mandate was to serve Citicorp’s internal needs globally and be a cost center, CITIL’s mandate was to be profitable by serving not only Citicorp but the whole global financial software market. COSL was the brainchild of Mr. Ravi Apte, who, while working for Citibank, convinced Citicorp to start COSL as the offshore captive.
Many of the executive management of Oracle Financial Services, including Rajesh Hukku, R.Ravisankar and NRK Raman were at COSL and moved to CITIL when it was formed.
i-flex
CITIL started off with the universal banking product MicroBanker (which became successful in some English-speaking parts of Africa and other developing regions over the next 3–4 years) and the retail banking product Finware. In the mid-90s, CITIL developed FLEXCUBE at its Bangalore development center after a significant development effort spanning more than 18 months. After the launch of FLEXCUBE, all of CITIL’s transactional banking products were brought under a common brand umbrella.
CITIL changed its name to i-flex solutions to reflect its growing independence from Citicorp and to strengthen its FLEXCUBE brand. The name CITIL also made prospective client banks hesitant about trusting the company with their data, since the name alluded to a close link with Citibank, which could be one of their competitors.
The first version of MicroBanker was created at COSL by Ravi Sankaran, who migrated to Australia before CITIL was formed. COSL started selling MicroBanker to non-Citi banks in Africa. Ravi Apte, the founder CEO of COSL, decided to carve out CITIL to focus on non-Citi business. Because non-Citi was the primary target for MicroBanker, MicroBanker was moved to CITIL. Rajesh Hukku was in the United States managing COSL’s business development in North America during the time CITIL was formed. It was Mr. Apte who decided to get Hukku back to India to head the newly formed CITIL.
Products and services
Oracle Financial Services Software Limited has two main streams of business. The products division (formerly called BPD – Banking products Division) and PrimeSourcing. The company’s offerings cover retail, corporate and investment banking, funds, cash management, trade, treasury, payments, lending, private wealth management, asset management and business analytics.
The company undertook a rebranding exercise in the latter half of 2008. As part of this, the corporate website was integrated with Oracle’s website and various divisions, services and products renamed to reflect the new identity post alignment with Oracle.
Recently, Oracle Financial Services launched products for Internal Capital Adequacy Assessment Process, exposure management, enterprise performance management, and energy and commodity trading compliance.
http://en.wikipedia.org/wiki/Oracle_Financial_Services_Software
More stuff
With a goal of enabling banks to reduce operational risks, lower costs, and enhance customer service, a new release of Oracle FLEXCUBE Universal Banking was announced by Oracle Financial Services Software. According to the Redwood Shores, Calif.-based vendor, Oracle FLEXCUBE Universal Banking release 11 provides complete support for the lending, leasing and mortgage lifecycle across origination, servicing and collections, and helps banks improve their reach and generate fee-based income through better intermediary and broker-aided distribution of products to target segments. The solution takes advantage of centralized, multi-product origination functionality to provide a uniform customer experience, standardize processes across the enterprise, and create centers of excellence.
Additionally, improved deployment and integration accelerators and extensibility features in the new release are designed to help banks launch products faster as a result of a quicker, more efficient and less expensive implementation of Oracle FLEXCUBE, the vendor says. Oracle FLEXCUBE release 11 features enhanced deployment accelerators with pre-configured products and processes for specific regions and countries, helping banks generate a quicker return on their investments.
As a result of integration with Oracle Enterprise Manager, customers can now manage the Oracle Database, Oracle Fusion Middleware components, as well as the Oracle FLEXCUBE application from a single console. Oracle FLEXCUBE release 11 also helps to improve operational and analytical insight for bankers through enhanced business intelligence support and integration with Oracle Reveleus as well as Oracle Mantas’ Know Your Customer, Anti Money Laundering and Fraud, according to Oracle. Along with this release, Oracle also is launching the Oracle FLEXCUBE Integration Lab to provide customers with secure access to an instance of Oracle FLEXCUBE in order to get a first-hand feel for how to integrate with the application using existing Web services.
“Improving their reach and generating fee-based income through enhanced origination and distribution is at the top of the 2010 agenda for banks,” said Joseph John, Executive Vice President, Banking Products, Oracle Financial Services Software, in a press release. Added Karen Massey, Senior Analyst, Consumer Banking, IDC Financial Insights, in the press release, “The new decade will bring a renewed spotlight on core banking renovation after a prolonged period of delaying core projects during the global recession. The focus will be on refreshing the core to support new business models as financial institutions look to improve profits and increase efficiency, questioning the tremendous maintenance and integration costs of legacy core systems. The flexibility of a component-based approach will be key to core renewal, supporting unique strategies as FI executives invoke innovation to take a leadership position in the marketplace.”
http://www.banktech.com/core-systems/oracle-financial-services-software-intro/222200673
Skippy… I thought *Oracles* inhaled gases from over cracks in the ground and were mumbled via young virgins[?] to pedophile priests in antiquity… digital virgins? Craazymans going to be pissed…
More good times!
Higher Return On Investment, Lower Total Cost of Ownership
• The bank’s IT department head count was reduced by 42%
• The bank’s back office operations department was completely centralized and head count reduced by over 50%
• The bank’s front office personnel were relieved of back office duties and entirely focused on sales and customer service
• Customer information gathered, stored and analyzed increased by over 70% for retail customers and over 50% for corporate ones
http://www.sirmabc.com/files/Case_Study_Bank_Allianz_v2.ppt
Yup, five or six underpaid and demoralized employees attempting to do the job of ten. Management laying off senior engineers and hiring rookie sales reps.
Don’t get me started on aging hardware and spotty security.
And the executive and managerial people are worth every dime of share holder value… cough… bonus remuneration!
Skippy… answering service… I’m on the beach, yacht, bordello, swanky restaurant, private jet, listing to wife’s new reality show…. your call is important… it may be recorded for training purposes… http://www.youtube.com/watch?v=36ireOG4Q84
What security? Proprietary systems are not security.
Not to worry, Chris, NASDAQ, parts of the NYSE, Soc Gen, and many others (FBI, NSA, etc.) use ClearForest, from the same people who gave the world, Narus (DPI tech to ferret out those pro-democracy types so they can be marked for torture and death in China, Egypt, Bahrain, Myanmar, and elsewhere, now a subsidiary of Boeing).
All financial risk management is grounded in fantasy. Anyone who doubts this need only read Keynes’ Treatise on Probability.
Avoiding large exposures is the only way to intelligently manage risk. Try telling that to Jamie Dimon.
Fraud and bad record-keeping fit together like an iron fist and a velvet glove. It’s all about appearances, all about smoke and mirrors. The objective is to make malice appear like incompetence. It’s an age-old trick.
The stratosphere of banking is inhabited by a small group of individuals who have pulled off a coup. They control government, and as long as this situation exists they will engage in high-risk behavior, because they know when their high-risk gambles pay off, they pocket the proceeds, but when they don’t, it is the public who picks up the tab.
The alpha bankers blame incompetence and government for everything, but in reality it is they who are in full control and pull all the strings.
Exactly.
Ditto!!
Very exactly.
A RISK MEAN THEY ARE ZERO ..SO my opinion is they have to staring not by risk but by one strong corporation who have possibility ..NOT risk BUT WINN bank system have very similar ruls and this is Alpha JOB FOR GO FRONT .. i think MAnagers in bank needed to be ( i geted) ”so i fis not like this they lose because banking become big industry and this mean concurence ..to cover the deficite they have needed not risk but winn ..CORPORATION BRASIL ..CORPORATION RUSSIA .POLONIA …
THINKING ..
ORTHOGAPHICE MISTAKES ..I MAKE SO SORRY
A case in point
http://www.dailymail.co.uk/news/article-2260250/Senior-Lloyds-banking-executive-forced-bosses-blowing-whistle-IT-failures-cost-200million-fix.html
“Once you reach a certain level in banking, you only fail upward.”
There is a recently discovered law of management that has not received the attention it deserves: the Law of Promotion to Mediocrity. This law is very simple and seems to work not just in banking but in every large collective enterprise. It is as follows. A professional is found to do a good job at his/her position, so he/she is promoted. The new responsibilities entail different abilities, for which he/she is not quite as good as he/she was in the former position. The process of promotion continues until the promoted finds himself/herself in a position for which he/she is a complete mess, or an incompetent, so most of the burden of doing well falls on the people he/she is in charge of. When the promoted reaches this position, he/she stays there; no more promotions. There you are: the Law of Promotion to Mediocrity. There is, of course, a scientific study showing this phenomenon; if pressed, I may find the reference. It lends credibility and support to the often held view that bosses are essentially incompetent.
You’re describing the Peter Principle, which was formulated under that name in 1969 (though likely the observation long predates that). Even scarier is the Dilbert Principle, which says that people get promoted in order to get them out of the way.
Thanks for the contextual info. I read a paper a couple yrs ago demonstrating this principle at work, IIRC, both empirically and theoretically.
This is just a rehash of the Peter Principle that everyone rises to the level of their own incompetence – a principle postulated back in the 1960s.
The Peter Principle with a new name. I see it constantly within the IT Department I work within, although to be fair, there are many legitimate and good promotions, too.
One of the bigger problems is that people are brought into the risk management area who have no clue what they’re doing, and they’re brought in because we are so short-handed in the first place. Lack of understanding of “systems” is rampant, all-encompassing policy documentation is sparse, and training is almost non-existent… it costs money, you see… unlike the near complete incompetence :-)
“Lack of understanding of “systems” is rampant”
Hear, hear!
It’s not just in IT either. I see it, for example, in ASIC (application-specific integrated circuit) design. A designer who knows, or at least wants to learn, something about what happens outside his own little niche is much more valuable to a project. They often spot problems that in fairy tale land would be covered in the spec, but in the real world of imperfections often aren’t.
Of course it’s hard to find people like that when management actively discourages that curiosity. Best to keep your people fungible, even if that means barely useful.
Lack of understanding of systems – methinks that is a major problem in the commercialization of the life sciences as well, with potentially much more lethal fall out …
I worked at the top levels of one of the largest IT companies working exclusively with the biggest banks for ten years. All I can say is that it’s an even bigger cluster than everyone says.
What happens is the BOD get fed up with IT, not delivering, excess costs, operational failures etc. They think they need a new CIO, so they fire the existing guy.
The new guy comes in, and needs a good 6 months to figure the place out, find out where the bodies are hidden, find the “third rail” systems (touch them and you die). Then the new guy announces his uber mega-project that will re-align the entire IT landscape from head to toe. He names it something like “Project Next” or “NewBank” or “Project Vishnu” or something equally grandiose.
The rest of IT staff now scramble to see if their application domain, that they’ve spent their entire career milking, is included in the new blueprint. They attend the cluster mega-meetings on the new initiative, usually held in an auditorium. If their pet system is not included, is going to be phased out or replaced, then it’s time for war. They do everything they can to slow down or stop the new initiative. I’ve seen everything done here, putting new applications on a part of the network that doesn’t work right, falsifying data, crashing other applications to take the heat away.
Year One passes with no visible progress. The new CIO can still get away with arm-waving, big promises and slide decks, the branding of the “New New Thing” is going well, the CEO is still mentioning it on the quarterly shareholder calls, but the BOD is already having buyer’s remorse and cringing at the additional costs.
During Year Two, there are a few “wins” that distract the BOD. By the end of Year Two, though, the board gets frustrated with the ever-ballooning costs. They start the search for a new CIO. The CIO knows this and starts working his/her network for the next 18-30 month gig, doing the same thing for the bank down the street.
Meantime, in the trenches, the actual workers keep the 40-year-old mission-critical applications running. There’s only one guy left who understands the core systems and all the kludges and workarounds that have piled up over the decades; he’s 68 years old, and boy, he better not step in front of a bus.
I do data management in the non-profit sector and it’s the same here too, except that NPOs have so few resources that severe system-transition screw-ups can actually tank the entire organization or put it on life support.
“Witness MF Global, where the firm was unable to cope with the transaction volume of its final days and literally did not know where money was at various points in time during the day.”
Wasn’t that a consequence of their business model and deliberately piss-poor oversight policies, though?
Extreme Programming is yet another buzzword for yet another pie-in-the-sky “software development methodology”. The software industry produces such “revolutionary” ideas on a regular basis, and in general buzzwords seem to be their main product (if nothing else you can make money from the books and seminars about them).
Nevertheless there are a number of long recognized good practices for creating good software (including some that got recycled into ‘Extreme Programming’). One of the most important is extensive testing, including corner cases, at all levels (unit, subsystem and system). Writing software to automate these tests is important so that you can perform regular regression tests. Writing the test software may even take more effort than writing the “main” software. It’s the sort of thing that companies are quick to skimp on. Most difficult is writing tests that usefully exercise a large system, but it’s also one of the most important.
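A sketch of what that looks like in practice, using Python’s built-in unittest on a hypothetical day-count function of the sort banking systems are full of. The function and its convention are just an illustration, but the corner cases (leap years, reversed dates) are exactly where unskimped-on regression tests earn their keep:

```python
import unittest
from datetime import date

def actual_360_fraction(start: date, end: date) -> float:
    """Year fraction under the Actual/360 day-count convention,
    commonly used in money-market interest calculations."""
    if end < start:
        raise ValueError("end date precedes start date")
    return (end - start).days / 360.0

class TestDayCount(unittest.TestCase):
    def test_ordinary_period(self):
        # Jan 1 to Jul 1, 2012 is 182 actual days.
        self.assertAlmostEqual(
            actual_360_fraction(date(2012, 1, 1), date(2012, 7, 1)),
            182 / 360)

    def test_leap_day_is_counted(self):
        # Corner case: February 2012 has 29 days.
        self.assertAlmostEqual(
            actual_360_fraction(date(2012, 2, 1), date(2012, 3, 1)),
            29 / 360)

    def test_reversed_dates_rejected(self):
        # Corner case: bad inputs must fail loudly, not return garbage.
        with self.assertRaises(ValueError):
            actual_360_fraction(date(2012, 7, 1), date(2012, 1, 1))
```

Once tests like these exist they can be run automatically on every change, which is what turns them from a one-off check into a regression suite.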
Good documentation is mostly about thoroughly and accurately documenting the interfaces (a practice that would have kept a Mars probe from crashing), rather than every stupid line. All too often though the quality of documentation is measured by word count.
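The Mars probe in question was lost because one team’s module produced thrust data in pound-force units while the consumer assumed newtons — an interface fact nobody enforced. One defensive pattern is to make the unit part of the interface itself, so a mismatch is impossible to miss; a minimal sketch (the class and unit strings are illustrative, not from any particular codebase):

```python
from dataclasses import dataclass

LBF_S_TO_N_S = 4.448222  # 1 pound-force-second in newton-seconds

@dataclass(frozen=True)
class Impulse:
    """An impulse value that carries its unit with it, so a consumer
    can never silently mistake pound-force-seconds for newton-seconds."""
    value: float
    unit: str  # "N*s" or "lbf*s"

    def in_newton_seconds(self) -> float:
        if self.unit == "N*s":
            return self.value
        if self.unit == "lbf*s":
            return self.value * LBF_S_TO_N_S
        raise ValueError(f"unknown unit: {self.unit}")
```

The documentation burden shrinks too: the interface now states its units in the one place the compiler or runtime can actually check, instead of in a prose spec nobody rereads.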
It’s also important that the programmers understand the application, instead of being treated like code monkeys being handed a magic spec. All specs have flaws, and they’re most often found by knowledgeable programmers.
It’s also been shown time and again that low personnel turnover leads to higher quality software. No matter how well documented something is, the best practice is to have it maintained by the original programmers to the extent possible (or at least have the original programmers around to answer questions). Large software contractors who treat programmers as fungible and put the programmers du jour on projects are the worst offenders.
Finally, it strikes me as penny wise and pound foolish to outsource much of the IT work, or even to use off-the-shelf packages for critical functions. No matter how well documented, or supposedly tested, the stuff is, nobody understands it like the original developers.
While not banking, it’s interesting that a number of large companies seem to be pulling their IT work back in house these days. Hilton Hotels pulled it from IBM and GM pulled it from HP. Does anyone know of other examples?
I was a developer for 25 years. The kind of thorough testing that led to pretty reliable software early in my career simply isn’t done anymore. If it passes client testing then it’s good enough now; but clients can’t test nearly as well as developers can. I used to deliver bug-free software; by the end my managers didn’t really care if I tested it or not before I turned it over. The result was garbage that I was ashamed to be associated with. And that included calculation of health insurance claims that were known to be incorrect. As long as no one squawked, no one cared; all that mattered was meeting the deadline and getting through client signoff.
Amen to that!!
I worked in IT for my entire career for a non-US government department, and I was there when we transitioned from in-house systems to COTS-based systems (with, inevitably, enough customisation to make them WORSE than the in-house systems when upgrade time came).
To make matters worse, we were forced by politics to obtain SEPARATE vendors for various functions, all of whom vigorously touted the openness of their systems, and all of whom had systems which assumed that they ALWAYS held the data of record.
I remember raising cross-system data validation as a ‘what are we missing’ topic at a branch open session (quarterly round-table get together with team leaders and a big boss: his idea, and IMHO a good one). The senior IT executive hosting the session admitted he didn’t know what I was thinking of from the bare title of the topic; when I described what I knew of the issue from my own experience with rejected interface transactions he got very quiet and then commented that I had
1. Pointed out a can of worms that would require tens of millions to fix, and
2. Identified a fix that almost certainly couldn’t get through the budgetary approval process (until either top departmental management and/or their political masters got burned by a consequence of the data discrepancies).
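The kind of cross-system validation being described reduces to comparing what each "system of record" believes, key by key. A toy sketch with invented data; real reconciliation jobs do this across millions of rows, which is where the tens of millions in fixes come from:

```python
# Illustrative cross-system reconciliation: when every vendor system
# assumes it holds the data of record, discrepancies only surface
# when you compare them directly. All names and figures are invented.
def reconcile(system_a: dict, system_b: dict) -> dict:
    """Return keys missing from either side, or with mismatched values."""
    issues = {}
    for key in system_a.keys() | system_b.keys():
        a, b = system_a.get(key), system_b.get(key)
        if a is None:
            issues[key] = ("missing in A", b)
        elif b is None:
            issues[key] = ("missing in B", a)
        elif a != b:
            issues[key] = ("mismatch", a, b)
    return issues

billing = {"cust-1": 100.00, "cust-2": 250.00}
crm     = {"cust-1": 100.00, "cust-3": 75.00}
print(reconcile(billing, crm))  # cust-2 absent from CRM, cust-3 absent from billing
```

Rejected interface transactions are the symptom; a report like this is what makes the full size of the can of worms visible.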
…but, but, but what about the stress tests? One can only imagine the intrigues this makes for.
Software is a tool used to achieve an operational end point or aid in a product or service implementation – it is not social – it is a tool.
You know the sound of an empty sales pitch when it amplifies its no-shit-Sherlock aims – reduce operational risks, lower costs, enhance customer service, increase profits, provide drill-down, ease manageability – and on and on – never once going into how the thing will actually do it or how its collateral impacts hit the bottom line.
The commodity plug-and-play black box is always sold hardest to the folks most interested in increasing profit-center profits, without looking at the cost centers’ requirements – audits, risk “management”, contract production, basically the rest of the on-the-ground trench work – even though those are usually the ones keeping the place together and within the direct supply chain. The whole .com bubble and bust was a good example: most of the promise of those companies was optimizing the supply chain and cutting out the “middle-man” – so funny that these entities became the “middle-man” themselves, and from what I saw the products were all copies of each other; of course, the internal names and functional names were changed to disguise the guilty.
What I saw was that ‘top-management’ was looking for ways to avoid the real work of – insurance, financial services or banking. I saw it in the (failed) implementations of six-sigma and ‘black-belt’ implementations that became ways for incompetent poachers and back-stabbers to rise up the ranks via the gaming of these programmatic implementations. The hiding behind the verbiage of these programs enabled them to disguise their incompetence. Institutionally, these dashboard demons could not be bothered with the lower ranks or the business in which they operated.
The melding of the financial services, banking and insurance industries made for some entertainment. Insurance companies are well versed in risk analysis – the gamblers on the other side (financial services) couldn’t care less. Top management in the new insurance industry could no longer name the product they were selling (contracts) because they were no longer run by the same people. The financial services side could not conceive of risks, and the proper (IMO) banks were not given a voice and required too much work for the profit. Both banks and insurance companies were taken over at the highest level by a bunch of incompetent yahoos via the programmatic survival-of-the-fittest (survival of the incompetent) misapplication of project management in the financial arena.
Sprinkle in true incompetence, Gresham’s Dynamic, Control Fraud and you end up with the financial meltdown run by a bunch of folks who are too incompetent to recognize their own incompetence.
“I saw it in the (failed) implementations of six-sigma and ‘black-belt’ implementations that became ways for incompetent poachers and back-stabbers to rise up the ranks via the gaming of these programmatic implementations.”
With enough buzzwords and “processes”, you can make anybody forget what the goal of their work is.
To my knowledge, in the financial crisis, there was exactly one firm that had a solid big data system that gave an integrated real-time picture of firm risks: Goldman Sachs, which averted the near-death experience of other shops (although at risk if other shops went down). Goldman’s system priced every instrument, however exotic, in real-time using the same curves, forward rates, volatilities, correlation matrices, and allowed them to compute risk and test shocks against the whole firm. They were the only ones who said, we’re losing money every day on this when our model says we shouldn’t, let’s take a close look at this… and then cut losses and go short. Unlike Goldman Sachs, which grew organically, every other firm was cobbled together with mergers, and had different groups which did their own analytics in disparate systems which someone then struggled to integrate at the top level in Excel or something. They lost billions because, unlike Goldman, they didn’t use a big data approach. And also because at Goldman, when management says to cut, traders cut. At other banks, people dragged their feet and started negotiating.
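The core of the claim is architectural: one shared set of curves and market data means a firm-wide shock is a single revaluation, not a spreadsheet-merging exercise. A toy sketch of the idea, with invented instruments and numbers, not a representation of Goldman's actual system:

```python
# Toy sketch of "one consistent pricing stack": every desk's positions
# are valued off the same shared market object, so shocking the firm
# is one function call. All instruments and figures are invented.
from dataclasses import dataclass

@dataclass
class Market:
    rate: float      # flat discount rate
    spread: float    # flat credit spread

def pv_bond(notional: float, years: int, mkt: Market) -> float:
    """Discount a zero-coupon notional off the shared curve."""
    return notional / (1 + mkt.rate + mkt.spread) ** years

# (desk, notional, maturity) -- every desk prices off the same Market
positions = [("desk-rates", 10_000_000, 5), ("desk-credit", 4_000_000, 10)]

def firm_value(mkt: Market) -> float:
    return sum(pv_bond(n, t, mkt) for _, n, t in positions)

base = Market(rate=0.03, spread=0.01)
shocked = Market(rate=0.03, spread=0.03)  # credit spreads widen 200bp

print(f"shock P&L: {firm_value(shocked) - firm_value(base):,.0f}")
```

The contrast with the merger-built firms is that each desk there had its own `Market` equivalent, with inconsistent curves, so the shocked firm-wide number could not be computed reliably at all.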
MMT?
Negative, jmd, negative.
FpML
http://en.wikipedia.org/wiki/FpML
It’s also worth noting that Goldman was one of the only places to maintain power when the grid went down in hurricane Sandy.
Divine intervention – they do God’s work.
Everyone who’s been on the inside at Goldman says it’s like a cult, but that generally the culture stresses people across the company sharing knowledge and working with each other.
Though obviously there must be plenty of exceptions, as with H. Paulson’s knifing of J. Corzine.
I also remember that Goldman Sachs had an ex-CEO in the government as Secretary of Treasury. It also got paid 100% on the dollar US government money for its bad deals with AIG. How many billions was that?
But I’m sure ObamaCare’s online health exchanges will be peachy.
We should make predictions on what they’ll say when costs don’t come down.
I think they’ll say “No one could have anticipated….”
I also think the public movement for single-payer will get even stronger. By then they’ll have single-payer in all of Mexico.
My friends keep telling me how much this site is censored, then they put on a demonstration for me today.
Truly a heavily censored, vanilla site.
Just like Huffington Post, Cory Doctorow’s boingboing.net, The Guardian newspaper site, and a bunch of others.
Guess I’ll be skipping this non-free press site for good.
Provide evidence – oh savvy – internet user.
Skippy… Baseless accusations = door meets troll ass.
We have moderation tripwires to catch trolls and spammers. So apparently your “friends” are one, the other, or both.
Act like a troll or spammer, you’ll be treated like one. This is not a chat board, nor is any other top financial blogs’ comment section. This happens to be my private, hosted property. “Free press” means publishers are independent of government and can say what they want as long as it isn’t libelous. It does not mean every Tom, Dick, and Harry gets free, unrestricted access to the publisher’s investment in brand and readership building. Do you have your head so deeply in the sand that you never noticed that in the old days of print media, letters to the editor were screened by the publisher and only a few appeared?
The quality of this comment section is an important asset of the community and I am correctly protective of it. I don’t tolerate abuse, and anyone who has been abusive after being warned to cut it out (people can have bad days) is not welcome here. And that is independent of point of view: I’ve had harsh words with regular commentors who are aligned with this site’s general point of view when they get too aggressive, and more often than not, they’ve gone off in a huff. Sometimes I’ve put them in moderation (some for a short time, some permanently) and some I’ve banned, depending on how out of line their behavior has been.
Go start your own blog and pay for hosting if you don’t like our minimal rules here. See this from the highest traffic finance blog, The Big Picture, for a basis of comparison, and look at how many comments they let through on any post (hint, a fraction of ours. Every comment at The Big Picture is moderated. By contrast, aside from one overly frequent commentor who is in moderation for that reason and gets most of their comments waved through, we have well under one comment per post on average that gets hung up in moderation, so your “heavily censored” is empirically untrue):
Trolls and Asshats: This may be a free country, but The Big Picture is my personal fiefdom. I rule over all as benevolent dictator. I will ban anyone whom I choose from posting comments — usually, for a damned good reason, but on rare occasions, for the exact same reason God created the platypus: because I feel like it.
I encourage a broad range of perspectives, philosophies, market positions, sexual orientations. Dissent is good. I want to see a debate of views, a battle in the market place of ideas ala Thomas Jefferson. You can post on nearly anything, so long as it is at least tangentially related to the topic at hand.
On occasion, I will “unpublish” a comment if I feel it is too impolite, harsh, ad hominem, inappropriate, or off-topic. Off-topic posts have been rising, and I have taken to unpublishing them en masse. Publish too many comments on a given post (3 or 4 relevant comments out of 30 are fine, 10 out of 30 is excessive). It takes me ~10 seconds to un-publish 10 comments. If you find yourself publishing way too many comments, consider this: This humble blog is my forum for expressing my ideas. Get your own damned blog.
Lately, I have been doing more than unpublishing nonsense posts — I simply en masse mark them as spam, and kiss that IP address good bye.
I also have been reviewing the IP addresses of posters, and looking at all of their comments. If they either publish under multiple names (I love a comment and then a subsequent comment agreeing with themselves) or simply post alot of jacked up nonsense, they get the same treatment. They won’t be missed.
Penalty Box: If you are consistently pushing comments that are off topic, flame baiting, or are inappropriate, but do not quite rise to the level of a full Troll, well we have a solution for you as well: The Penalty Box.
Do you engage in false rhetoric? Consistently make straw man arguments, ignore logic, don’t use sources or post junk sources? Your comments will be posted — eventually — typically on a 24 hour delay. You can have your say, but you won’t influence the debate, as it will have moved on without you.
Getting Banned for Life: A few things that will get you permanently banned from commenting at The Big Picture. The fastest way to lose posting privileges is to misrepresent your host’s complex and nuanced views in some inane bumper sticker comment.
Other fast tracks to getting banned:
– Knowingly posting false or malicious material;
– multiple postings under different names;
– generally engaging in troll-like behavior;
– misquoting your host/overlord;
– being impolite in the extreme;
– ad hominem attacks;
– being an asshole.
Right now, someone is reading this and saying to themselves “What does he mean, being an asshole?” If you wondered that to yourself, well the odds strongly favor that you yourself have sphincter-like qualities. Thus, you should consider it likely that you will be banned as a rectoid from posting comments sometime in the near future.
______________
*GYOFB stands for Get Your Own Fucking Blog
http://www.ritholtz.com/blog/2008/05/comments-trolls-asshats/
IT risk is a universal problem. One of the reasons it keeps manifesting itself is that IT is not an area that senior managers and the Board are overly familiar with – i.e. have lived it day to day for some time – and so when IT comes up on the agenda the brain centre for uncertainty (not stochasticity, but Knightian uncertainty) kicks in. This raises a subconscious feeling of disgust; the automatic response to disgust is to remove the disgusting object. How to do that in a meeting? Adopt any superficial solution that sounds like it solves the problem and then don’t think about it again.
The pervasiveness of IT risk is because all this is subconscious.
Concur, having gone through more than one T1 international corporation undergoing complete refit. Being the middleman between the two mindsets is like translating Klingon to a non-Trekkie. If management can’t grok it… how the hell do they even know what they want?
Skippy… oh yeah… a cover for bad decisions, own it if it works out and blame if it goes splat.
Gah. This leads to a conclusion that we should never have any managers who can’t program.
Similar to my conclusion after reading the lead poisoning studies that we should be very suspicious of putting anyone born in the US before 1972 in power over anything.
Another sobering thought:
Since the recent Administration decisions on mortgage forgery have converted land ownership records from paper and ink in county courthouses to database entries in computers owned or run by banks, the 5% of mortgages they’re allowed to convert without penalty can be joined by any number of homes that never had mortgages, and they always have the ‘bad data entry’ defense to stay out of jail.
Some reports on the latest mortgage scam negotiations include depriving wronged homeowners their right to sue. Frankly, since the banks are provided comprehensive defenses, there’s no point in suing.
All power grows out of the point of a pen.
The good news: none of that has any ability to affect state land law; it is impossible for anything the Feds do to remove the right to sue in state court.
If your state courts are as defensive of state land law as the Massachusetts courts (Ibanez, etc.) none of the federal-level coverups will be able to deprive homeowners of their state-level legal rights.
If your state is as corrupt as Florida or Arizona or Pennsylvania, or has legalized MERS (Minnesota), then you have problems.
The 50-state situation is, for once, a help; it means that not every state is forced to suffer just because criminals got hold of the federal courts.