I have recently created a secondary, more specialised blog called ‘Semantic Integration Therapy’, given that my focus is now beginning to shift towards this particular discipline. Semantic Integration, in my terminology and context, relates to achieving more effective application integration and SOA solutions by extending the traditional integration contract (XSD and informal documentation) with a more semantically aware mechanism. As such I’m now deep-diving into the use of XML Schema plus Schematron, RDF, OWL and commercial tooling such as Progress DXSI. There is heavy overlap with the Semantic Web community, but my focus is more on the transactional integration space within the enterprise, as opposed to the holistic principles of the third-generation or semantic web.
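To give a flavour of what ‘extending the contract’ means in practice, here is a minimal sketch of a Schematron-style content rule layered on top of purely structural validation. It is a hand-rolled approximation in Python, not a real Schematron engine, and the element names and the business rule are invented for the example.

```python
import xml.etree.ElementTree as ET

# A document that is structurally valid against an (assumed) XSD:
# every element present and correctly typed. Names are illustrative.
DOC = """
<order>
  <export>true</export>
  <amount>250.00</amount>
</order>
"""

def check_rules(root):
    """A Schematron-style co-occurrence rule that XSD 1.0 cannot
    express: an export order must carry a destination country."""
    errors = []
    if root.findtext("export") == "true" and root.find("destinationCountry") is None:
        errors.append("export orders must specify a destinationCountry")
    return errors

root = ET.fromstring(DOC)
print(check_rules(root))  # the structurally valid document still fails the content rule
```

The point of the sketch is exactly the gap described above: the schema layer passes this document, while the semantic layer rejects it.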
Once we have established whether the ESB makes sense in our integration problem space (Part 1), and whether we want to approach the delivery as a fully distributed, fully centralised or hybrid ESB deployment (Part 2), we are left with the relatively trivial task of deciding whether to buy something, embrace open source, or simply ‘refactor’ what we already have.
This article presents my personal experiences as an Enterprise Architect looking at integration architecture within a very large (LARGE) scale SOA transformation programme. As such, my views are influenced by that specific context, and I emphasise that certain conclusions I draw here may not be entirely relevant for smaller-scale or application-centric integration work.
So the landscape I faced consisted of thousands of legacy applications; hundreds of emerging strategic applications; 14 logical groupings of related applications; 160 strategic services offered by those 14 platforms, averaging 10 operations per service; a mandate to introduce a common information model; and a mandate to exploit WS-*/SOAP conventions. Add to this an organisational model in which divisions were suddenly aligned to platforms, with the result that engineering work commenced in parallel across the enterprise to provide and consume strategic services based on predefined contracts. Identifying an integration architecture to support such a heavily distributed implementation model was quite a challenge, and it clearly laid waste to our traditional domain-hub-centric approach, which effectively centralised the integration problems and pushed them through a finite pipeline based on the limited, specialised resource of the hub teams.
Issues ranging from ownership, funding, scheduling and delivery capacity to scalability and runtime capacity effectively put nails in the coffin of our existing mechanism, and it soon became apparent that our only chance of facilitating such a wide-ranging integration storm was full distribution of the infrastructure, and therefore of ownership of the problem space, into the active delivery programmes. The bus concept was formed… based on a pragmatic reaction to a whole range of environmental factors – NOT vendor promises.
Next was the question of ‘do we need to buy new stuff?’ or do we just ‘adopt a mindset’? Clearly our massive intellectual and fiscal investment in all manner of middleware up to that point provided baseline componentry which could be refactored. There was absolutely no case to justify further investment in emerging pure-play options, especially when one considered that we would not use the majority of the ‘potential’ elements at that stage of our SOA transformation. What we needed was a consistent messaging backbone which could be introduced as an infrastructure backplane to connect the emerging endpoints. We had MOM software all over the place, and there were no real technical problems with it; we just used it badly and inconsistently. Refactoring and introducing more rigour into the exploitation of this component would instantly enable us to leverage all our MOM experience to that point whilst forming our ESB connectivity backplane.
Next came the service container component: in other words, the unit of integration logic we would look to deploy at the endpoints, effectively connecting the MOM backplane to the logical platform offering the service. We examined our current portfolio and found we had a suitable container for that layer of our ESB, again something we had a lot of confidence in using, albeit supporting different patterns to those on which we were about to depend; overall, some confidence is better than no confidence. So we reused an existing component, which I would describe as our lightest integration broker, with efficient integration into our MOM layer.
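Conceptually, the container’s job reduces to very little: pull a message off the backplane and route it to local platform logic. The in-process queue below is a toy stand-in for the MOM layer (in reality a WebSphere MQ / JMS destination), and the handler names are invented.

```python
import queue

# Toy stand-in for the MOM backplane; messages are (service, operation, payload).
backplane = queue.Queue()

# The service container's routing table: (service, operation) -> local handler.
# Handler names and payload formats are invented for illustration.
HANDLERS = {
    ("billing", "getAccount"): lambda payload: f"<account id={payload!r}/>",
}

def container_poll():
    """One dispatch cycle: take a message and route it to platform logic."""
    service, operation, payload = backplane.get()
    handler = HANDLERS[(service, operation)]
    return handler(payload)

backplane.put(("billing", "getAccount", "12345"))
print(container_poll())
```

Keeping the container this thin is what made reuse of an existing lightweight broker plausible: the hard part lives in the handlers and the contract, not in the plumbing.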
At that point we stopped worrying about orchestration, registry, monitoring, management, complex event processing, and all the other bells-n-whistles we could add later. The pitch of the ESB vendors actually reinforced this view: the ESB is inherently so modular that you can add and remove both infrastructure services and business service providers at any point :-).
So we refactored existing infrastructure. We combined a simplified infrastructure blueprint with a core set of service design patterns, along with the newly formed protocol and document standards driven out of the strategic architecture. That was enough to get the integration backbone embedded into the delivery programmes in such a way as to simplify the model of how they would ‘plug’ into the backplane.
Did we suffer for not buying a pure-play?
Nope. We had the basic componentry on the books already, it was the clarity of vision which mattered in the early stages of our decentralised approach. We had that.
Would we have achieved more with adoption of a pure-play on day 1?
Nope. We would still have only used the same base-layers we are using today. We’re still breaking into the higher layers of the ESB food-chain anyhow. A vendor stack would have been gathering dust for 18 months.
There were implementation problems…more later…
The adoption of a common information model is an important consideration within any large-scale application integration scenario, such as that which underpins Enterprise SOA. The common model is the standardised representation of the key information artefacts moving between domain boundaries, and the inclusion of such an artefact within the SOA infrastructure is a natural decision. In conceptual terms this kind of approach makes complete sense: every endpoint maps its own localised dialect into a central, shared, common model, facilitating a more efficient integration design process than the alternative of negotiating an exchange model with every remote provider around every single interface. Whether or not a common model is deliberately injected into an integration scenario, one will evolve as a by-product of the integration work. As such it is far better to retain a level of pragmatic control over the evolution of such a corporate asset than to allow natural selection and incremental evolution to shape it. The main open question with the use of a common model is design-time versus runtime, and the integration methodology applied by the integrators has a bearing on which of these modes can be leveraged.
From experience, however, the problem with a common model can be its lack of accessibility, and the implications of its abstraction for the concrete interfaces engineered through it by local dialects. On the one hand, taking a SOA scenario, one could mandate that all service providers, regardless of internal model and interface technology dialects, expose a single service interface expressed in terms of the common model, upon a single SOA infrastructure blueprint. That way providers have the comfort of sitting all their localised, legacy and evolving assets behind a common interface which becomes the only way to consume that service. Such an approach also implies that providers and consumers can be decoupled during design and delivery, to the extent that the service contract derived from the common model forms the basis of a testable component further down the line…
The alternative to this kind of explicit representation of the common model in service interfaces is the use of the common model as a design ‘platform’, through which integrators assemble and agree service contracts, which are then engineered as consumer-specific runtime interfaces, albeit expressed as variations of the fundamental common model. This is not a point-to-point integration approach, but rather an efficient use of a common model to facilitate integration through a reusable service capable of supporting a range of consumer dialects. To achieve this kind of model there are some prerequisites in the design space which, in my experience, have been difficult to achieve; consequently the extent of my hands-on experience is with the use of the model as a concrete interface standard. (More on this latter option in a subsequent post.)
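Whichever mode is chosen, the endpoint obligation is the same: translate the local dialect into the shared vocabulary. The sketch below shows the easy, structural half of that translation; all field names and the mapping itself are invented for illustration.

```python
# An invented local (CRM) dialect and a mapping into an assumed
# common-model vocabulary. Field names are illustrative only.
CRM_RECORD = {"cust_no": "C-99", "fname": "Ada", "sname": "Lovelace"}

CRM_TO_COMMON = {
    "cust_no": "customerId",
    "fname": "givenName",
    "sname": "familyName",
}

def to_common_model(record, field_map):
    """Structural mapping: rename local fields into common-model terms.
    The semantic half (units, code lists, domain rules) still has to be
    agreed separately; renaming keys is the easy part."""
    return {field_map[k]: v for k, v in record.items()}

print(to_common_model(CRM_RECORD, CRM_TO_COMMON))
```

The comment in the middle is the crux of the first pain point below: the structural rename is mechanical, while the semantic alignment is where endpoints actually struggle.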
So when we push SOA services with interfaces represented explicitly in terms of a common, abstract model, there are pain points:
- All endpoints must achieve seamless mapping into the common syntactic and semantic model. Semantics are always the poor relation: structural mappings are easier to nail than the content-centric domain rules which must also be formalised.
- All service providers must provide an interface presentation with back-end integration into the applications implementing the service logic. Whilst ‘fronting’ an evolving IT stack with new, strategic interfaces is advantageous, the additional ‘layer’ is often seen as an issue, although in my experience the overhead of an additional layer of XML processing is trivial compared to the usual latency of executing business logic in the application tier.
- All consumers must conform to a particular, common, non-native dialect when consuming a remote service. This is the primary area where I feel the negativity is justifiable, especially where a consumer of a strategic service may actually be a transient component targeted for removal in the near future; in such cases, investment in consumer-side integration kit to facilitate interaction with newly established remote services is difficult to justify.
- Nobody likes a common model… as everybody has to do some work. Moving from a traditional EAI mindset, where we brokered all integration solutions centrally and the ‘how’ was hidden within black-box integrations, this debate will always be had. However, given that SOA simply distributes ownership of the common model to the endpoints, as opposed to centralising it in an EAI scenario, I feel this point is relatively trivial.
So I believe I can argue and justify a case for a common model in all cases apart from the consumer-conformance point. The willingness of consumers to ‘take the pain’ of conforming to a foreign model, which may be non-trivial, is a tough nut to crack. Historically I have predominantly been in situations where the consumers of remote services are also providers of strategic services to remote consumers, so investment in new infrastructure to facilitate the ‘provider’ role can be leveraged to support the needs of local applications requiring consumer-side adaptation to interact with remote, common-model-based services.
I have taken that route nonetheless, with an understanding in the back of my mind that there has to be a more effective way of linking strategic services based on a common model with a diverse collection of consumers, given client-side funding and organisational constraints.
In conclusion: using a common model as a concrete interface standard is doable, but it is a pretty heavy-handed, brute-force approach to something which ‘should’ be making life easier. As such I truly believe that a more collaborative framework in the design space will facilitate a more adaptive integration approach, still 100% supportive of a common model, whilst enabling low-cost integration. I will be expanding on this new approach in my next post.
The Enterprise Service Bus (ESB) is a much maligned, scorned, mocked and generally over-vendorised concept, with the result that its value has been shrouded in the usual layers of scepticism. As a former Integration Architect within a large-scale Enterprise SOA transformation programme, I have spent a huge amount of time looking at the optimal formation of hundreds of middleware silos as the strategic backplane of the Enterprise SOA service tier.
My interest in the ESB started way before it hit the hype-cycle, whilst I was deputising for an amazingly talented Integration Architect. Initially we were simply looking for an architectural model that would allow controlled decentralisation of the middleware bottleneck responsible for strangling (whilst bankrupting) the wider mobilisation of SOA implementation across our enterprise. The initial term we used, coming from a hub-centric EAI landscape, was ‘EAI Bus’. I am referring to EAI hub proliferation on a major scale, in terms of both cost and numbers. The net result was a spreading loss of control, leaving the business systems unable to achieve integration on anything other than a tactical basis.
So the EAI Bus referred to the creation of a standardised inter-hub protocol over MOM infrastructure, effectively extending the remit of any domain hub (be that product line, business unit, or whatever…) into forming a component part of a larger, addressable enterprise backplane. The motivations were:
- Establish a common inter-domain integration standard, leveraging existing investment in domain EAI (BEA) and enterprise MOM (WebSphere MQ, BEA JMS) technologies.
- Extend addressability across the federated hub namespace – effectively defining the common rail across which services (either strategic SOA assets or legacy EAI endpoints) can be located.
- Empower the IT domains at the edges, who are seeking to integrate on a strategic service basis but want to avoid the inter-domain EAI complexities, in favour of working through a local integration unit where possible.
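The ‘common rail’ idea of addressability across the federated hub namespace can be pictured as a two-level address: the owning domain plus a local service path. Here is a toy resolver in Python; the address format, hub names and queue URIs are invented for illustration.

```python
# Toy registry for the federated hub namespace: each domain hub owns
# one inbound destination on the MOM backplane. Names are illustrative.
HUB_REGISTRY = {
    "finance": "queue://hub.finance.inbound",
    "customer": "queue://hub.customer.inbound",
}

def resolve(logical_address):
    """Split the domain from the service path and look up the owning
    hub, so any endpoint can locate any service without knowing its
    physical deployment."""
    domain, _, service_path = logical_address.partition(".")
    return HUB_REGISTRY[domain], service_path

print(resolve("finance.billing.getAccount"))
```

The essential property is that consumers address logical services, not physical hubs, which is what lets services (strategic SOA assets or legacy EAI endpoints alike) move without breaking callers.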
The decentralisation of the integration platform infrastructure and its dissemination into the service provider/consumer endpoints was the fundamental goal. This was an evolution from our EAI reality, and I stress that it commenced before the ESB bandwagon kicked in. I’ll be posting additional stages of our ESB evolution story.
The key point I can offer here is that WE knew what we were looking for prior to the ESB’s arrival, so we had the strong vision and strong motivation that allowed us to refactor existing infrastructure as opposed to re-investing in dreams.