Enterprise Integration: QoS is What not How…

June 30, 2008

Looking at the reasons why we adopt certain technical approaches to enterprise systems integration, I’m increasingly concerned by a ‘how’ approach: using proprietary, expensive technology to create relatively simple styles of interaction, effectively eclipsing the ‘what’. This ‘how’ approach appears to be grounded in technical policy and commercial structures rooted in the past, when proprietary/vendor solutions had more credibility than home-grown options. However, looking at the ongoing commoditisation of application integration, the dominance of the web, and all manner of open-source technical options, it strikes me that we need to review our position and attempt to find ways of selectively breaking out of the vicious circle of ‘licence renewal’ and ‘false economy’ in favour of a more blended approach.

Systems integration is dominated by TLAs with a heavy vendor influence, and as such it’s easy to get lost in the cloud of complexity associated with MOM, JMS, MQ, XA, SSL, PKI, REST, HTTP, SYNCH, ASYNCH, PERSISTENT, QOS, etc. It’s no surprise, therefore, that systems integration shoulders the burden of the difficult stuff getting brushed under the ‘proverbial carpet’, as opposed to achieving the aspiration of the ‘intelligent application-aware network’. As that carpet gets ‘lumpier’, we throw more TLAs into the mix and add more and more intelligence into the network, resulting in more complexity and a vicious circle. It’s no surprise that we’ve struggled to gain the confidence of the enterprise to the extent where we are in a position to commoditise what has become a very complex array of sticking plasters upon sticking plasters.

Shifting the focus to ‘the web’ and my interactions across that global ‘unreliable’, diverse, evolving network, I note initially that I have a natural understanding of what I want to happen each time I select one of my many applications. My mail client gives me the ability to create asynchronous 1:1 or 1:n distribution flows, with the ability to convey large payloads and attachments. My instant messenger client allows me to engage in synchronous 1:1 or 1:n interactions; my feed-reader will sink event streams at a frequency I define. My blog client lets me cache up a range of 1:n broadcast documents which are periodically published to a hosting platform. My twitter client lets me post informal event snippets. The list goes on.

The key point here is that I care not how any of my selected applications undertake my requested interaction, and in all honesty (even though I’ve implemented countless protocols in my time) I don’t have the time to care: if they are working OK, then I understand the QoS I expect, and if I don’t get that QoS I vote with my feet. We sometimes hear terms like Jabber, XMPP, SMTP, POP3, HTTP, HTTPS, IRC, etc. in association with the applications we use, but I would argue that only a tiny proportion of users of Thunderbird/Outlook actually have any clue about the implication of those SMTP settings at a technical level.

So to my conclusion. Systems integrators (and adopters of such technology in the enterprise) evolved from a time and a market-place differentiated by the ‘how’ factor. Subsequent up-selling on that original platform has created a momentum of expectation and desire in the technology consumer space, by constantly mapping the ‘how’ factor to the ‘what’ factor, preventing a more commoditised alternative from gaining a foothold. This is a highly effective point of attack when coupled with a reminder of achieving ROI on the last n years of similar investment, right?!

The web, by contrast, has simplified and commoditised the same kind of end product, and evolved a user community purely focused on the ‘what’, without a care for the ‘how’ factor. As such it is relatively simple to decouple a user of a web application from any underlying technical infrastructure, so long as the QoS and the ‘what’ factor are maintained. Would I even know if my Thunderbird/Outlook mail client began using a completely different store-n-forward protocol? I don’t believe I would.

I believe we need to bring this learning into the enterprise space, detach our ‘how’ users from the underlying detail, and coach them to become ‘what’ users, such that we in the integration layer can get to work commoditising the technical fabric: we gain the necessary ‘specialisation’ from key vendors where it counts, but in the main we regain control and choice of what it takes to actually pass an ‘xml document’ from source to destination, across a trusted network we control, and guarantee it gets there in one piece.
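To make the ‘what’ concrete, here is a minimal sketch of that one guarantee: an XML document leaves the source and arrives intact, with the ‘how’ (persist, forward, verify, acknowledge) hidden behind a single call. The class and method names are my own invention for illustration, not any product’s API.

```python
import hashlib

class StoreAndForward:
    """Minimal store-and-forward channel with integrity checking."""

    def __init__(self):
        self._store = []  # persisted messages awaiting delivery

    def send(self, xml_doc: str) -> str:
        # persist before transmission, so nothing is lost in flight
        digest = hashlib.sha256(xml_doc.encode()).hexdigest()
        self._store.append((digest, xml_doc))
        return digest  # a receipt the sender can hold on to

    def deliver(self) -> list:
        delivered = []
        for digest, doc in self._store:
            # verify the payload arrives 'in one piece' before acknowledging
            assert hashlib.sha256(doc.encode()).hexdigest() == digest
            delivered.append(doc)
        self._store.clear()  # acknowledged messages leave the store
        return delivered

channel = StoreAndForward()
receipt = channel.send("<order id='42'><item>widget</item></order>")
print(channel.deliver())  # the document, intact, at the destination
```

The point of the sketch is the shape of the contract, not the transport: a ‘what’ user only ever sees `send` and the guarantee behind it.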

We seem to have missed this… it doesn’t cost me six zeros to hook into the web and run an effective real-time business over it, now does it?



ESB Part 3: Product or Mindset

January 22, 2008

Once we have established whether the ESB makes sense in our integration problem space (Part 1), and once we have established whether we want to approach the delivery as a fully distributed, fully centralised or hybrid ESB deployment (Part 2), then we have the relatively trivial task of making our minds up whether we need to buy something, embrace open source, or simply ‘refactor’ what we already have.

This article presents my personal experiences as an Enterprise Architect looking at integration architecture within a very large (LARGE) scale SOA transformation programme. As such my views are influenced by that specific context and I emphasise that certain conclusions I draw here are not entirely relevant for smaller scale or application-centric integration work.

So the landscape I faced consisted of thousands of legacy applications, hundreds of emerging strategic applications, 14 logical groupings of related applications, 160 strategic services offered by the 14 platforms, an average of 10 operations per service, a mandate to introduce a common information model, and a mandate to exploit WS-*/SOAP conventions. Add to this the organisational model in which, suddenly, organisational divisions were aligned to platforms, and as a result engineering work commenced in parallel across the enterprise to provide and consume strategic services based on predefined contracts. So the task of identifying an integration architecture which would support such a heavily distributed implementation model was quite a challenge, and it clearly laid waste to our traditional domain-hub-centric approach, which effectively centralised the integration problems and pushed them through a finite pipeline based on the limited, specialised resource of the hub teams.

Issues ranging from ownership, funding, scheduling, delivery capacity, scalability, runtime capacity, and so on effectively put nails in the coffin of our existing mechanism, and it soon became apparent that our only chance of facilitating such a wide ranging integration storm was full distribution of the infrastructure and therefore the ownership of the problem-space into the active delivery programmes. The bus concept was formed…based on a pragmatic reaction to a whole range of environmental factors – NOT – vendor promises.

Next was the question of ‘do we need to buy new stuff’ or do we just ‘adopt a mindset’. Clearly our massive intellectual and fiscal investment in all manner of middleware to that point provided baseline componentry which could be refactored. There was absolutely no case to justify further investment in emerging pure-play options, especially when one considered that we would not use the majority of the ‘potential’ elements at that stage in our SOA transformation. As such we needed a consistent messaging backbone which could be introduced as an infrastructure backplane to connect the emerging endpoints. We had MOM software all over the place, and there were no real technical problems with it; we just used it badly and inconsistently. Refactoring and introducing more rigour into the exploitation of this component would instantly enable us to leverage all our MOM experience to that point whilst forming our ESB connectivity backplane.
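The backplane role described above can be sketched in a few lines: endpoints publish to named destinations and subscribe to the ones they serve, and the backbone carries messages between domains. This is a conceptual sketch only; the destination names and payloads are invented, and a real MOM layer (JMS, MQ) adds persistence, transactions and administration on top of this shape.

```python
from collections import defaultdict

class MessagingBackplane:
    """Toy topic-based backplane: named destinations, many subscribers."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, destination, handler):
        # an endpoint declares interest in a named destination
        self._subscribers[destination].append(handler)

    def publish(self, destination, message):
        # the backbone fans the message out to every registered endpoint
        for handler in self._subscribers[destination]:
            handler(message)

backplane = MessagingBackplane()
received = []
backplane.subscribe("billing.invoice.create", received.append)
backplane.publish("billing.invoice.create", {"account": "A-1", "amount": 100})
```

The rigour we needed was exactly this: consistent destination naming and a single publish/subscribe contract, rather than each team wiring MOM ad hoc.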

Next came the service container component: in other words, the unit of integration logic we would look to deploy at the endpoints, effectively connecting the MOM backplane to the logical platform offering the service. We examined our current portfolio and found we had a suitable container for that layer of our ESB, again something we had a lot of confidence in using, albeit supporting different patterns to those on which we were about to depend; but overall some confidence is better than no confidence. So we reused an existing component, which I would describe as our lightest integration broker, with efficient integration with our MOM layer.
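The container’s job, reduced to a sketch, is to bind destinations on the backplane to the operations a platform actually offers and dispatch inbound messages accordingly. All names here (the `ServiceContainer` class, the `customer.lookup` operation) are illustrative assumptions, not the product we actually reused.

```python
class ServiceContainer:
    """Toy endpoint container: routes backplane messages to platform logic."""

    def __init__(self, platform_name):
        self.platform = platform_name
        self._operations = {}

    def expose(self, operation, handler):
        # register a platform operation behind a backplane destination
        self._operations[f"{self.platform}.{operation}"] = handler

    def on_message(self, destination, payload):
        # route an inbound backplane message to the owning operation
        handler = self._operations.get(destination)
        if handler is None:
            raise KeyError(f"no operation bound to {destination}")
        return handler(payload)

container = ServiceContainer("customer")
container.expose("lookup", lambda payload: {"id": payload["id"], "status": "active"})
print(container.on_message("customer.lookup", {"id": 7}))
```

Deploying one of these per endpoint, rather than one broker in the middle, is what pushes the integration logic out to the edges.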

At that point we stopped worrying about orchestration, registry, monitoring, management, complex-event-processing, and all the other bells-n-whistles we could add later. The pitch of ESB vendors actually reinforced this view in that the ESB was inherently so modular that you can add and remove both infrastructure service or business service providers at any point :-).

So we refactored existing infrastructure. We combined a simplified infrastructure blueprint with a core set of service design patterns, along with the newly formed protocol and document standards driven out of the strategic architecture.  That was enough to get the integration backbone embedded into the delivery programmes in such a way as to simplify the model of how they would ‘plug’ into the backplane.

Did we suffer for not buying a pure-play?

Nope. We had the basic componentry on the books already, it was the clarity of vision which mattered in the early stages of our decentralised approach. We had that.

Would we have achieved more with adoption of a pure-play on day 1?

Nope. We would still have only used the same base-layers we are using today. We’re still breaking into the higher layers of the ESB food-chain anyhow. A vendor stack would have been gathering dust for 18 months.

There were implementation problems…more later…


ESB Part 2 : The Shapeshifter…

December 21, 2007

So the obvious, and much publicised, question one asks when hearing about the Enterprise Service Bus, is what is it? Is it a style, is it a thing, or is it #*%!!

I have a particular take on this borne out of my own experience in having to justify such a thing within a large scale enterprise architecture programme. As per my earlier post on SOA==WS?, one’s perspective on SOA has a huge bearing on how the ESB shapeshifter can be manifested within your problem-space.

In a traditional EAI landscape, we centralise the integration problem. This means that we enable the interacting systems to utilise local dialects when firing requests into the EAI space. The EAI brokers then deal with the transformation from the external dialect, through the common, canonical dialect, and into the external outbound dialects of the target systems (I am generalising in a big way here, however…). Difficult business integration problems are pushed away from the endpoints into the magic-in-the-middle, and this is where much of the EAI bad press has come from, simply because it trades in people’s pain! Would we need it if integration was easy?
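The in-through-canonical-and-out mediation described above can be sketched as two map operations. The systems (‘crm’, ‘billing’) and field names are invented for illustration; real canonical models also involve structural and semantic transformation, not just renaming.

```python
# each system's local dialect, expressed as field mappings onto one canonical form
CANONICAL_IN = {
    "crm": {"cust_no": "customer_id", "tel": "phone"},
    "billing": {"acct": "customer_id", "phone_num": "phone"},
}

def to_canonical(system, record):
    """Broker step 1: translate a source system's dialect into the canonical form."""
    mapping = CANONICAL_IN[system]
    return {mapping[k]: v for k, v in record.items()}

def from_canonical(system, record):
    """Broker step 2: translate the canonical form into a target system's dialect."""
    reverse = {v: k for k, v in CANONICAL_IN[system].items()}
    return {reverse[k]: v for k, v in record.items()}

# CRM fires a request in its own dialect; billing receives its own dialect
canonical = to_canonical("crm", {"cust_no": "C-9", "tel": "555-0100"})
print(from_canonical("billing", canonical))  # {'acct': 'C-9', 'phone_num': '555-0100'}
```

Note where the pain sits: every dialect mapping lives in the middle, which is precisely the centralisation the next paragraph sets out to reverse.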

 So my primary driver when looking to evolve my enterprise integration roadmap to support the unfolding SOA transformation, was to decentralise the pain and give it back to the owners. Not just to reverse the trend, but to drive convention and middleware infrastructure templates back to the edges of the relevant IT domains such that the runtime becomes federated.

This federation has an implicit but significant reliance upon consistency and contracts at the endpoints, whilst opening out a consistent, scalable messaging conduit (finally!) between the domain brokers. Standing back from this, we have clients and servers addressing the transformation in/out of service contracts at the edge of their domain, with common representation, messaging and infrastructure management in the space between. The EAI Bus architecture was taking shape.

The most significant thing in my opinion was the reduction in diversity by prescribing inter-domain standards. Finally the EAI brokers didn’t need the full range of COTS and Technology adapters they were usually encumbered with, nor did they require the high-end process-management abstractions, and as a result the technical requirements on these components reduced dramatically.

Over time, the movement of endpoint-specific transformation, standard messaging and document conventions, and runtime independence to the endpoints became analogous to the vendorised ESB concept which came along subsequently.

However, my experience places the term ESB (with major emphasis on the ‘E’) as a scalable Enterprise framework, incorporating disciplined distribution of the integration problem space and the consistency evolving in the messaging backbone.

I do see, and understand, alternative perspectives, and I will use the term ‘eSB’ where the casing of the ‘e’ implies the technology being used in a less than Enterprise capacity: where we see a (systems, not Enterprise) architecture incorporating a single messaging and transformation broker supporting service contracts at the edges, implying the clients and servers are already emitting/consuming the expected dialect. I’m less convinced by this model and the shades of grey between it and my own perspective, purely because I am focused on the Enterprise ‘E’ in ESB as opposed to the ‘e’ in eSB…


ESB Part 1 : The Sales-Tool or the Strategic-Backbone

December 19, 2007

The Enterprise Service Bus (ESB) is a much maligned, scorned, mocked and generally over-vendorised concept, which has resulted in its value being shrouded in the usual layers of skepticism. As a former Integration Architect within a large scale Enterprise SOA transformation programme, I have spent a huge amount of time looking at the optimal formation of hundreds of middleware silos as the strategic backplane of the Enterprise SOA service tier.

My interest in ESB started way before the ESB hit the hype-cycle, whilst deputising for an amazingly talented Integration Architect, and initially we were simply looking for an architectural model that would allow controlled decentralisation of the middleware bottleneck responsible for strangling (whilst bankrupting) the wider mobilisation of SOA implementation across our enterprise. The initial term we used, coming from a hub-centric EAI landscape, was EAI Bus. I am referring to an EAI hub proliferation on a major scale in terms of cost and numbers. The net result was the spreading loss-of-control of the business systems to achieve integration on anything other than a tactical basis.

So the EAI Bus referred to the creation of a standardised inter-hub protocol over MOM infrastructure, effectively extending the remit of any domain hub (be that product-line, business-unit, or whatever…) into forming a component part of a larger, addressable enterprise backplane. The motivations were:

  1. Establish a common inter-domain integration standard, leveraging existing investment in domain EAI (BEA) and Enterprise MOM (WebSphere MQ, BEA JMS) technologies.
  2. Extend addressability across the federated hub namespace – effectively defining the common rail across which services (either strategic SOA assets or legacy EAI endpoints) can be located.
  3. Empower the IT domains at the edges, who are seeking to integrate on a strategic service basis, but avoid the inter-domain EAI complexities in favour of working through a local integration unit where possible.
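Motivation 2 above, the federated namespace, can be sketched as a simple resolution step: any service, strategic SOA asset or legacy EAI endpoint, is addressable through its owning domain hub. Domain names, service names and the `hub://` address form are all invented for illustration.

```python
# toy registry: each domain hub owns a set of addressable services
HUB_NAMESPACE = {
    "retail": {"order.create", "order.status"},
    "logistics": {"shipment.book"},
}

def resolve(address):
    """Map a 'domain/service' address onto the hub that owns it."""
    domain, service = address.split("/", 1)
    services = HUB_NAMESPACE.get(domain)
    if services is None or service not in services:
        raise LookupError(f"unresolvable address: {address}")
    return f"hub://{domain}"  # the rail on which to place the message

print(resolve("retail/order.status"))  # hub://retail
```

The useful property is that a consumer needs only the logical address; which hub, which broker, and which transport sit behind it remain the bus’s concern.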

The decentralisation of the integration platform infrastructure and its dissemination into the service provider/consumer endpoints was the fundamental goal. This was an evolution from our EAI reality, and I stress this commenced before the ESB band-wagon kicked in. I’ll be posting additional stages of our ESB evolution story.

The key point I can offer here is that WE knew what we were looking for prior to the ESB arriving, so we had a strong vision and strong motivation that allowed us to refactor existing infrastructure as opposed to re-investing in dreams.