What is enterprise application integration?


In the early 1970s, most large enterprises, whether they were banks, telephone companies or other utilities, used large mainframes to capture, store and process large amounts of data. The data were usually entered in the form of punch-card jobs: computer operators worked at key-punch machines, and their work was processed in batches of cards. Data processing comprised a small number of large computer programs, mainly written in Cobol or assembler, which were run daily or periodically to produce hard-copy reports, statements, accounts and other paper-based output.

The advantages of such systems were clear. They reduced the amount of repetitive effort required from mathematicians and accountants to produce the same results, and they consolidated data across the whole organisation to ease the production of enterprise-wide reports and accounts. Unfortunately there were significant disadvantages: even the simplest new report could require the input of additional data, or the coding of an additional suite of programs, before it could be produced.

In the mid-1970s, the arrival of the microprocessor made it possible to build desktop computers capable of processing reasonable amounts of data using relatively simple languages, such as Basic, and of producing either written reports or screen-based enquiries. Initially these personal computers could not be connected together, nor could they be attached to mainframes as input or enquiry devices, so they shared the same disadvantage: they required additional data input, and some programming, before they could be used.

By the mid-1980s, PC technology had evolved to the point where PCs, mainframes and mini-computers could be connected together on a network.

Sometimes the networks were crude and required heavy-duty specialised cabling, such as Twinax, Coax or Token Ring, and they were usually centralised around a single mainframe or mini used as a data storage device. The reasons for this approach were obvious at the time: PCs had limited local storage (256 kilobytes on a floppy disk or 10 megabytes on a hard drive), and there was little or no capability to share an adjacent PC's resources, such as disks or printers, without significant additional effort and expenditure. As a result, before about 1987 there was little reason to consider the implications of distributed data in any large enterprise, except where branches of the enterprise could not easily be accommodated on the one large machine. In the case of the Chase Manhattan Bank, for example, with its large international branch network, each branch had its own IBM mini-computer; extracts were taken from each branch's Midas system overnight, in the form of printed reports, and re-keyed into a centralised system in New York.

The advent of the PC-processor-based server, and the shrinking size of the mini-computers manufactured by IBM, Digital and Hewlett-Packard, significantly altered the playing field for the large enterprise. Suddenly individual departments were acquiring their own "server" or mini-computer and starting to computerise the records of the business being carried out. This led directly to a huge new market for departmental programming tools (such as Lotus 1-2-3, SuperCalc and Excel) and pre-built applications for the SME market (such as Sage accounting). This in turn led to a significant increase in the distribution of data vital to the efficient operation of a large number of enterprises across mainframes, mini-computers, servers and individual PCs.

The disadvantages of this situation quickly became evident. Some parts of a company's general ledger might be found in a Lotus 1-2-3 spreadsheet, whilst the transaction history was held on an IBM AS/400 and the customer credit exposure history on a departmental Novell server. Companies experienced problems with security, back-up, recovery and consolidation, and inaccuracies were bound to creep into any consolidated view of the data.

Enterprise Application Integration (EAI) aims to eliminate these problems by allowing all the data in a large enterprise to be consolidated, interrogated, reconciled and interpreted. Of course, errors cannot be eliminated where the same data are keyed into different systems, as even the most expert tool cannot be expected to understand that, for example, one system holding the name "Russian Federation of Independent States" might in commercial terms mean the same thing as another's "Confederation of Independent States". The use of ISO codes for some types of standing data (currencies, countries and so on) does help, but there are, for example, several different agencies that each provide their own code to identify bonds, equities, CDs and other debt and equity instruments.
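The reconciliation role of ISO codes can be sketched in a few lines. This is an illustrative example only, with a hypothetical, hand-built mapping table; in practice the mapping would be maintained against the full ISO 4217 currency list and each source system's own labels.

```python
# Hypothetical mapping from system-specific currency labels to ISO 4217
# codes, so that records keyed differently in different systems can be
# reconciled against a single identifier.
ISO_CURRENCY = {
    "US Dollar": "USD",
    "Dollar (US)": "USD",
    "USD": "USD",
    "Pound Sterling": "GBP",
    "GBP": "GBP",
}

def normalise_currency(label: str) -> str:
    """Map one system's free-text currency label to its ISO 4217 code."""
    try:
        return ISO_CURRENCY[label.strip()]
    except KeyError:
        # An unmapped label is flagged for manual review rather than
        # passed through silently: this is where duplication errors hide.
        raise ValueError(f"No ISO code mapped for {label!r}")

# Two systems describing the same currency reconcile to one code:
assert normalise_currency("US Dollar") == normalise_currency("Dollar (US)")
```

The point of the explicit failure on an unknown label is exactly the "Russian Federation" problem above: no tool can safely guess that two unfamiliar names mean the same thing, so unmapped values must be surfaced to a human.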

One way forward that has been identified for these problems of incorrect duplication is the Common Object Request Broker Architecture (CORBA). Applied to data processing, the idea is to have one "view" of each piece of data, exposed through a common code interface available on each platform that is to share it. An example might be to hold currency exchange rates on a central server and export CORBA-based objects to each platform so that distributed applications can access the common data. Unfortunately, the CORBA approach requires a significant investment in new or amended programs on each hardware platform in the enterprise.
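The exchange-rate example can be illustrated in miniature. The sketch below is not real CORBA (which would involve IDL definitions and an ORB on each platform); it simply mimics the underlying idea of one authoritative server-side object, with client-side proxies that delegate every call to it instead of holding their own, potentially stale, copies.

```python
# Illustrative sketch only - not actual CORBA. One central object holds
# the authoritative data; client "stubs" delegate to it, so every
# platform sees the same single view of each rate.
class RateService:
    """Central, authoritative store of exchange rates."""
    def __init__(self):
        self._rates = {}

    def set_rate(self, pair: str, rate: float) -> None:
        self._rates[pair] = rate

    def get_rate(self, pair: str) -> float:
        return self._rates[pair]

class RateProxy:
    """Client-side stub: delegates each call to the central service,
    so no locally duplicated copy of the data ever exists."""
    def __init__(self, service: RateService):
        self._service = service

    def get_rate(self, pair: str) -> float:
        return self._service.get_rate(pair)

server = RateService()
server.set_rate("GBP/USD", 1.62)

# Two "distributed" applications share the same view of the rate:
london, new_york = RateProxy(server), RateProxy(server)
assert london.get_rate("GBP/USD") == new_york.get_rate("GBP/USD")
```

The cost the paragraph mentions is visible even here: every client application has to be written (or rewritten) against the proxy interface, which is the "significant investment in new or amended programs on each hardware platform".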

EAI has been described as "an application that can move data from any application to any other in real time, without doing any hard coding". Globalisation and the intense competition of the markets have made technology, particularly IT, a key factor in the success of every company; conversely, the obsolescence and incompatibility of systems is a threat to every company. After analysing which applications should be integrated, a company will find a large number of solution providers, hardware vendors and software houses to choose from. The main difficulty arises from the types of solutions and software now being marketed as EAI. Many of them were previously known as terminal emulation software, data transfer packages, message management solutions and so on, and have been re-badged as EAI products. From an end user's point of view, such products are better considered components of an EAI project. Unfortunately, even the most advanced data extraction and transportation middleware products, such as TSI's Mercator and Neon's integration servers and adapters, cannot be used "out of the box" without defining the data flows between the adapters. This in itself is creating a market for consultancies to "program" the middleware, in much the same way as freelance consultants have been programming in Cobol and RPG for twenty or thirty years.
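"Defining the data flows between the adapters" is, at its core, declaring how one system's record layout maps onto another's and letting the middleware apply that declaration to every message. The field names below are invented for illustration; real mapping tools such as Mercator express the same idea through graphical mapping definitions rather than code.

```python
# Hypothetical field mapping between two systems' record layouts.
# The middleware applies this declaration to each record in the flow;
# the mapping itself is configuration, not hard-coded logic.
MAPPING = {
    "TradeRef": "deal_id",   # source field -> target field
    "Ccy": "currency",
    "Amt": "amount",
}

def transform(source_record: dict) -> dict:
    """Apply the declared field mapping to one inbound record."""
    return {target: source_record[src] for src, target in MAPPING.items()}

inbound = {"TradeRef": "T1001", "Ccy": "USD", "Amt": 250000}
print(transform(inbound))
# -> {'deal_id': 'T1001', 'currency': 'USD', 'amount': 250000}
```

Even in this toy form it is clear why a consultancy market has grown up around the work: someone with knowledge of both systems must author and maintain every such mapping.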

In fact EAI can end up comprising a huge range of applications based around thin and thick clients, application servers, Remote Procedure Calls (RPCs), messaging and queuing systems, Object Request Brokers (ORBs), Transaction Processing Monitors (TPMs), Object Transaction Monitors (OTMs), integration brokers and database middleware. Some EAI vendors, such as TSI, are also trying to leverage combinations of middleware architectures so as to provide payment, liquidity and other financial services related solutions. Certainly this kind of approach should reduce the lead-time on the development of interfaces to support new international banking practices such as Continuous Linked Settlement. Unfortunately some software vendors appear to believe that the addition of an HTML-based front end is in itself an EAI solution, because it makes the application available across the intranet and the Internet. Of course this approach is inefficient, as it causes all the data captured at each point of entry in the network to be sent across the LAN or WAN, rather than just the component which needs to be shared. It is worth considering, for example, the relative merits of Financial Objects' S2 front end to IBIS, with its intermediate server and transactional interface to the IBIS "end of day" processes, against e-Midas from MKI. The latter places a simple "web enabled" client from Seagull Software over the existing Midas input screens. MKI would no doubt also point out that they have a middleware package, Meridian Middleware, which allows capture of some of the same transaction data on any platform compatible with IBM's MQSeries product; it is not yet a complete replacement for the RPG input functions behind the scenes, but then neither is S2 a comprehensive replacement for IBIS.
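Of the middleware styles listed above, messaging and queuing is worth a brief sketch, since it underpins products such as MQSeries. The example below uses Python's in-process queue purely to illustrate the decoupling idea: the sender puts a message on a queue and carries on, and the receiver drains the queue when ready, so neither side needs to know the other's platform or availability.

```python
# Minimal sketch of the messaging-and-queuing style. In a real product
# the queue would be a persistent, networked queue manager; here an
# in-process queue stands in for it to show the decoupling.
from queue import Queue

payments_queue: Queue = Queue()

def send(msg: dict) -> None:
    """Sender: enqueue the message and return immediately."""
    payments_queue.put(msg)

def receive_all() -> list:
    """Receiver: drain whatever has arrived, whenever convenient."""
    msgs = []
    while not payments_queue.empty():
        msgs.append(payments_queue.get())
    return msgs

send({"type": "payment", "amount": 100})
send({"type": "payment", "amount": 250})
assert [m["amount"] for m in receive_all()] == [100, 250]
```

The design point is the asynchrony: because delivery is deferred and ordered, the sending application never blocks on, or even needs to locate, the receiving one.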

EAI may not be attainable in its purest form, in which a whole organisation's data is available without duplication, omission or error. The key idea is that an organisation should support zero latency in the production of new queries, reports, enquiries and interfaces to external systems. On the way to EAI, however, many milestones are achievable which may be of considerable benefit in their own right.

Dr Cliff Patterson
Sales and Marketing Director
ABS Consultants
Specialists in:
Enterprise Application Integration consultancy and project management.