So we aspire to loose coupling and reusable, abstracted interface models as a means of implementing our SOA. Why? Well, we’re told that the alternative is bad! That alternative is unconstrained tactical wiring between applications, with the resulting unsustainable wiring being the essence of bad practice. I do agree, up to a point, that unconstrained integration is a bad thing, but there’s also some marketing greyness I need to dispel.
Point-to-point (P2P) or tactical integration is the term used to describe the creation of an application integration solution between two components, where the aspiration, the design, and the solution are only concerned with that specific requirement at that point in time. Shock horror – who would do such a thing?! Well, there are plenty of reasons why this kind of approach may be suitable in some scenarios – in fact, this IS the most popular approach to integration, right?
However, the subtle difference between the archetypal P2P interface and a reusable service lies in how the design is approached – bear in mind that P2P interactions still exist via reusable services too. Has the interface been based on open standards at the infrastructure layer? Has the interface been abstracted at a functional and information level to support additional dimensions (products, customer types, etc.) over time? Whether we use web-service technology or not, we can still create reusable services in the application integration landscape.
Now at the other end of the food chain we have our new friend: the coarse-grained, heavily abstracted, reusable Business Service driven out of the mainstream SOA approach to rewiring the Enterprise. Here we have, from the outside looking in, a single exposure for a complex array of related functions (e.g. multi-dimensional product ordering), based on WSDL/SOAP/XSD/XML/WS-* standards. This kind of approach is the current fashion, and is purported to simplify integration. Wrong!
What we do find is that the new, extensible interface simply creates a thin but strategic veil over the previous P2P interfaces, and effectively creates two areas of complex integration. Firstly, behind the new exposure, the service provider has to manage the mediation of an inbound request across its underlying domain models. Secondly, the consumers of this newly published service have to deal with their own client-side mediation, transforming their localised dialects into a shape which can traverse the wire and be accepted by the remote service provider – or at least by the new strategic facade.
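To make those two mediation layers concrete, here is a minimal sketch in Python. All field names (ORDER_ID, orderRef, and so on) are hypothetical illustrations, not taken from any real service; the point is simply that every runtime request pays for both transformation hops.

```python
# Hedged sketch of the two mediation layers around a "strategic" service facade.
# Field names are hypothetical illustrations only.

def consumer_to_common(local_order: dict) -> dict:
    """Client-side mediation: consumer's local dialect -> common wire model."""
    return {
        "orderRef": local_order["ORDER_ID"],
        "productCode": local_order["PROD"],
    }

def common_to_provider(wire_order: dict) -> dict:
    """Provider-side mediation: common wire model -> provider domain model."""
    return {
        "ref": wire_order["orderRef"],
        "sku": wire_order["productCode"],
    }

# At runtime, every request traverses BOTH hops:
local = {"ORDER_ID": "A-42", "PROD": "WIDGET-9"}
provider_view = common_to_provider(consumer_to_common(local))
print(provider_view)  # -> {'ref': 'A-42', 'sku': 'WIDGET-9'}
```

Trivial here, of course – but multiply the field count by a few hundred and the dialects by every consumer in the estate, and the double mediation becomes a genuine integration cost in its own right.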
My point here is that SOA and its inherent style of wrapping functionality introduce integration challenges in their own right! So it’s not all rosy in the SOA garden, and this is where I’m seeing an opportunity for a hybrid approach… and (apologies for the heresy, I’ll burn in hell if I’m wrong) a resurgence of P2P runtime integration based around a well-managed, reusable service design process.
Eh!? Have I unwittingly turned to the dark-side?
What I mean is that P2P is OK if the cost of change is minimal – and if we minimise the client-specific aspect, we can reduce this cost to a point where it’s comparable to that of the alternative of exposing the common model to the wire. In traditional approaches, the cost of change is high because the entire design of the solution was hardwired to one specific purpose. Introduce a requirement to flex that solution and we have to rip and replace. However, if the P2P ‘design’ is managed correctly and involves the creation of mappings between a common model and the provider domain models, then in addition to exposing that generic interface to the wire, we have a facility to let consumers of the service declaratively derive their own native transformations, which can cut out a transformation step at runtime.
If we use a toolset such as Progress DXSI for capturing the Service Provider mappings into the Common model, and then capturing the Consumer mappings into the Common model, we can fairly simply derive transformations between the Consumer dialect and the Provider dialect. Any change to the provider, or to the Common model, would simply require a regeneration of the transformation code that then executes on the client. This sounds sort of logical… unless my logic has become skewed somehow 🙂
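The derivation idea can be sketched in a few lines of Python. This is emphatically NOT the Progress DXSI API – just an illustration of the principle, with hypothetical field names: if both the consumer-to-common and provider-to-common mappings are captured declaratively, a direct consumer-to-provider transformation can be derived by composing them through the common model, collapsing two runtime hops into one.

```python
# Hedged illustration (not any vendor's API): derive a direct
# consumer->provider mapping by composing two declarative mappings
# through a shared common model. All field names are hypothetical.

# Consumer field -> common-model field
consumer_mapping = {"ORDER_ID": "orderRef", "PROD": "productCode"}
# Provider field -> common-model field
provider_mapping = {"ref": "orderRef", "sku": "productCode"}

def derive_direct_mapping(c2common: dict, p2common: dict) -> dict:
    """Compose via the common model: consumer field -> provider field."""
    # Invert the provider mapping: common-model field -> provider field.
    common_to_provider = {common: prov for prov, common in p2common.items()}
    return {cons: common_to_provider[common] for cons, common in c2common.items()}

def transform(record: dict, mapping: dict) -> dict:
    """Apply the derived mapping in a single runtime hop."""
    return {mapping[field]: value for field, value in record.items()}

direct = derive_direct_mapping(consumer_mapping, provider_mapping)
print(direct)  # -> {'ORDER_ID': 'ref', 'PROD': 'sku'}
print(transform({"ORDER_ID": "A-42", "PROD": "WIDGET-9"}, direct))
# -> {'ref': 'A-42', 'sku': 'WIDGET-9'}
```

When the provider or the common model changes, only the declarative mappings are edited and the direct transformation is re-derived – which is the design-time discipline that keeps the runtime P2P wiring cheap to change.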
So this hybrid approach simply blends the best of a fully decoupled SOA approach with the runtime efficiencies of a tightly coupled P2P approach, because the design framework is declarative and can reduce the cost of change enough to mitigate the usual risk of P2P solutions.
I’m going to explore this in more detail, but I’m confident there’s a way of getting the best from both worlds…unless the SOA police catch me first…