New Semantic Integration Blog…

September 7, 2010

I have recently created a secondary and more specialised blog called ‘Semantic Integration Therapy’, given that my focus is now beginning to shift to this particular discipline. Semantic Integration, in my terminology and context, relates to achieving more effective application integration and SOA solutions by extending the traditional integration contract (XSD and informal documentation) with a more semantically aware mechanism. As such I’m now deep-diving into the use of XML Schema plus Schematron, RDF, OWL and commercial tooling such as Progress DXSI. There is a heavy overlap with the Semantic Web community, but my focus is more on the transactional integration space within the Enterprise, as opposed to the holistic principles of the third-generation or semantic web.
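
To make that concrete, here is a minimal sketch of the kind of constraint I have in mind – a Schematron rule layered over a payload that a permissive XSD would happily accept. The order document and the rule are hypothetical, and I’m using Python’s lxml purely for illustration:

```python
# A hypothetical semantic constraint that XSD alone cannot express:
# broadband orders must carry a lineSpeed element.
from lxml import etree, isoschematron

SCHEMATRON_RULES = etree.XML("""
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="order[productType='BROADBAND']">
      <assert test="lineSpeed">A broadband order must specify a lineSpeed.</assert>
    </rule>
  </pattern>
</schema>
""")

validator = isoschematron.Schematron(SCHEMATRON_RULES)

# Structurally fine (optional elements keep the XSD happy),
# but semantically invalid under the rule above.
payload = etree.XML("<order><productType>BROADBAND</productType></order>")
print(validator.validate(payload))  # False: the semantic layer rejects it
```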

Where a Semantic Contract Fits…

October 10, 2008

I’ve been posting about the rise of the informal semantic contract relating to web services and the deficiencies of XML Schema in adequately communicating the capability of anything other than a trivial service. Formalising a semantic contract by enriching a baseline structural contract (WSDL/XSD) with semantic or content-based constraints effectively creates a smaller window of validity, through which a consumer must navigate their payload when issuing a request. Other factors, such as incremental implementation of a complex business service ‘behind’ the generalised service interface, compound the need for a semantic contract.
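
A toy illustration of that smaller window (payloads and field names invented): every semantically valid request is structurally valid, but not the reverse.

```python
# The structural contract admits everything the semantic contract does,
# but not vice versa - the semantic contract is the narrower window.
def structurally_valid(payload: dict) -> bool:
    # XSD-level check: the required elements are present.
    return {"productType", "quantity"} <= payload.keys()

def semantically_valid(payload: dict) -> bool:
    # Content-based constraints layered on top of the structure.
    return (structurally_valid(payload)
            and payload["productType"] in {"ADSL", "FIBRE"}
            and payload["quantity"] > 0)

request = {"productType": "DIALUP", "quantity": 3}
print(structurally_valid(request))   # True:  passes the structural contract
print(semantically_valid(request))   # False: rejected by the semantic contract
```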

To clarify the relationship between structural and semantic, I happened upon a great picture which I’ve annotated…


The Rise of the Semantic Contract

October 9, 2008

This is my stake in the ground for now. SOA in the marketplace places total emphasis on two things: web services as a basis for communication, and re-use as a basis for convincing the boss to put some cash into your middleware bunker…no…play-pen…err…seat of learning.

In addition, the militant splinter-groups of the new wave of RESTafarians (of whom I am an empathising skeptic on this specific 🙂 point about service specification) call for the death of WSDL and reliance on (WSDL-lite) WADL in the case of the less extremist, or plain old inference from sample instance documents in the case of the hard-core…

I find myself sailing down the no-man’s land between these two polarised viewpoints: I see the need for specification at the more complex end of the interface spectrum, but equally don’t see how specifications help when interface contracts are intuitive enough that decoding the specification is harder than inferring from samples. So there we have a basic mental picture of my map of the universe.

Now I’m getting to the point.

I’m now convinced that both SOA’s push for re-use established through WSDL-everywhere, and the more recent RESTafarian voices calling for unspecification, have flaws when we are attempting to open up a generalised interface into a service endpoint capable of dealing with a range of entity variants (say, product types for example).

My view is that the static and limited semantic capability of XML Schema in the SOA landscape, and the inability of humans to infer correctness without a large number of complex instance document snapshots in the RESTian landscape, lead me to the conclusion that there is a vast, yawning, gaping chasm of understanding in what constitutes an effective contract for locking down the permissible value permutations – aka the semantic contract.

I’ve seen MS-Word. I’ve seen MS-Excel. I’ve seen bleeding-eyes-on-5-hour-conference-calls-relating-to-who-means-what-when-we-say-customer, and best of all I’ve seen hardwired logic constructed in stove-pipes behind re-usable interfaces, aka lipstick on the pig.

I reckon the semantic contract – the contract locking down the permissible instances – is far more important than the outer structural contract, whose value decays as the level of re-use and inherent complexity of the interface increases. In addition, there are likely to be multiple iterations/increments of a semantic contract within the context of a single structural contract as service functionality grows over successive iterations – adding product support incrementally to an ordering service, for example. This leads to the notion of the cable cross-section:
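
Sketching what I mean by those increments (product names invented): the structural contract never changes, while each iteration of the semantic contract widens what the provider will actually service.

```python
# Successive semantic-contract increments inside one unchanged
# structural contract, as product support is added iteration by iteration.
SEMANTIC_INCREMENTS = {
    "iteration-1": {"ADSL"},                   # initial product support
    "iteration-2": {"ADSL", "FIBRE"},          # fibre added; structure unchanged
    "iteration-3": {"ADSL", "FIBRE", "VOIP"},  # voice added; structure unchanged
}

def accepted(payload: dict, iteration: str) -> bool:
    # The structural contract has allowed any productType string since day one;
    # the semantic contract decides what is actually serviceable right now.
    return payload["productType"] in SEMANTIC_INCREMENTS[iteration]

order = {"productType": "FIBRE"}
print(accepted(order, "iteration-1"))  # False: not yet serviced
print(accepted(order, "iteration-2"))  # True:  within this increment
```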

In the SOA context…WSDL drives tooling to abstract me from the structural contract. But the formation of the semantic contract as the expression of what the provider is willing to service via that re-usable and loose structural contract is the key to effective integration.

If we don’t pay this enough respect, we’ll be using our system testing to mop up the simple instance-data problems that could easily have been avoided had we formalised the semantic contract earlier in the development lifecycle…


Enterprise SOA, Continuous Integration and DXSI

September 5, 2008

Creating an approach to CI’ing large-scale enterprise SOA initiatives has unearthed a potentially significant efficiency gain in the semantic layer. Semantics relate to instance data – and specifically, in the context of re-usable, extensible service interfaces, the semantic challenge eclipses that of achieving syntactic alignment between consumer and provider.

My experience shows that the vast majority of integration failures picked up in testing environments (having taken the hit of mobilising a complex deployment of a range of components) relate to data/semantics, not syntax.

As such I’ve been focusing on how to front-load verification that a consumer ‘understands’ the provider, structurally and semantically, from day 1 of the design process. The CI framework I’m putting together makes use of a traditional set of artifact presence/quality assessments, but significantly introduces the concept of the Semantic Mock (SMOCK) – an executable component based on the service contract, with the addition of an evolving set of semantic expressions and constraints.

This SMOCK artifact allows a service provider to incrementally evolve the detail of the SMOCK whilst the CI framework automatically acquires consumer artifacts such as static instance docs or dynamic harnesses – both manifesting earlier in the delivery process than the final service implementation (and I mean on day 1 or 2 of a 90-day cycle, as opposed to problems being identified through fall-out in formal test environments or, worse, in production).
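
To give a flavour of the idea (this is my own toy sketch, not DXSI itself, and all names are invented): the provider publishes an evolving set of semantic constraints, and the CI build replays each consumer’s sample instance documents against them.

```python
# A toy SMOCK-style check: consumer samples are verified against the
# provider's evolving semantic constraints on every CI build.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    name: str
    check: Callable[[dict], bool]

# The provider evolves this list incrementally during the delivery cycle.
SMOCK_CONSTRAINTS = [
    Constraint("productType-supported", lambda p: p.get("productType") in {"ADSL"}),
    Constraint("quantity-positive",     lambda p: p.get("quantity", 0) > 0),
]

def verify_consumer(samples: list[dict]) -> list[str]:
    """Return one failure message per violated constraint, across all samples."""
    failures = []
    for i, sample in enumerate(samples):
        for c in SMOCK_CONSTRAINTS:
            if not c.check(sample):
                failures.append(f"sample {i}: violates '{c.name}'")
    return failures

# Consumer artifacts harvested by the CI framework on day 1 or 2.
consumer_samples = [{"productType": "FIBRE", "quantity": 1}]
print(verify_consumer(consumer_samples))  # ["sample 0: violates 'productType-supported'"]
```

That list of failures is precisely the raw material for the compliance report mentioned below.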

Over time, as both consumer and provider evolve through and beyond the SMOCK phase, confidence in design integrity improves dramatically – simply because we’ve had continuous automated verification (and hence integration) of consumer and provider ‘contractual bindings’ for weeks or months. This ultimately leads to formal testing resources and time being spent adding value, as opposed to fire-fighting and kicking back avoidable broken interfaces.

The tool I’m using to prototype this SMOCK is Progress DXSI. This semantic integration capability occupies a significant niche by focusing on the semantic or data contract associated with all but the most trivial service interfaces. DXSI allows a provider domain-expert to enrich base artifacts (WSDL/XSD) and export runnable SMOCK components, which can then be automatically acquired, hosted and exercised (by my CI environment) to verify the artifacts published by prospective consumers of the service. Best of all, it kicks back compliance reports based on the semantic constraints exercised in each ‘test case’, such that my ‘CI Build Report’ includes a definition of why ‘your’ understanding of ‘my’ semantic contract is flawed…

Beyond SMOCK verification, DXSI also allows me to make a seamless transition into a production runtime – but that’s another story…


Enterprise Integration: SOA “re-use” is a wolf?

July 16, 2008

Is our focus on achieving service re-use actually harming the work we’re doing in the creation of the SOA? Are we so focused on the re-use model that we’re damaging the implementation of the SOA blueprint? Let me try to explain why I believe this is the case.

Re-use is a measure of what? In some cases it means how much common code is shared and leveraged in new software developments. In other cases it means the more literal measure of concurrent exploitation of a shared resource. In our SOA governance structure, re-use is definitely analogous to the latter case – the one-size-fits-all, coarse-grained, generic service that enables me to service all variants of any particular requirement through a single, common service interface. Breathe…!

That’s great for the provider, who can now hide behind the ‘re-usable’ facade, pointing knowingly at the SOA governance literature every time you try to have the conversation about the fact that his service requires a PhD in data-modelling and XML cryptography to use.

“But it’s re-usable,” comes the reply as you weep, holding out your outstretched arms, cradling the tangled reams of XML-embossed printer paper that represent the only ticket into the service you need to use to avoid being branded a heretic. “Are you challenging the SOA governance policy? Let me just make that call to the Department of SOA Enforcement…err, I mean the Chief Architect…what’s your name again?” comes the prompt follow-up as the hand reaches for the phone…

I’m now convinced that re-use is a proverbial wolf in sheep’s clothing. We must look more closely at the cost of creating and operating these ‘jack-of-all-trades’ services, not just at the fact that we can create them. If it transpires that we’re incurring more cost (both financial and operational) on both the provider side and the consumer side, by virtue of aspiring to ‘re-use through generalisation’, then we have truly lost the plot. Could this be part of the reason that SOA ROI is such a difficult subject to discuss, and often results in a “year-n payback” kind of response?

I also think there’s an analogy to make here. Take one of our local government services, considered part of the service fabric of our community. If the services are made too generalised, and therefore spawn highly complex forms and a high level of complex dialogue in face-to-face scenarios, who is that helping? Yes, we can say that we’ve consolidated ‘n’ simpler services into a single generalised service and saved on office infrastructure and so forth. But if we then increase the cost of processing requests for that generalised service, both in terms of the ‘steps’ to get from the generalised input to the specific action (therefore requiring larger offices in which to house the longer queues) and in terms of now having to cater for the increased and excessive fall-out volumes arising from the fact that it’s just so damn complex to fill the forms in (therefore requiring even larger offices in which to house the even longer queues)…then we have really missed the point about re-use.

It makes complete sense to generalise and re-use in the SOA design-space, so that we can converge similar designs to a common reference model and avoid the unconstrained artistry of technicians with deadlines. In terms of service contracts and interface specifications, though, translating that design-time re-use directly to a wire-exposed endpoint seems like stopping short of the ultimate goal: accelerating re-use by empowering consumers to bond more efficiently with the service.


Enterprise Integration: Don’t Look Down!

July 4, 2008

Can SOA truly be successful if service consumers have to be technology consumers? The service layer is supposed to insulate us from the technical complexities and dependencies of the enabling technology, but more and more I see the technology itself becoming the centre of attention.

The promise of Web Services, whilst standardising the logical notion of integration, has embroiled us in a complexity relating not to the ‘act’ of exchanging documents, but to the diversity within the Enterprise, the various technologies and tools used across a widely disparate ecosystem, and the debate over the finer details of which interpretations of which standards we want to use.

More significantly, the large deployed base of messaging software, and the service endpoints exposed over MQ or JMS, are left out of the handle-cranking associated with the synchronous style of endpoint. As such, a SOA layering ‘consistency’ across such a diverse ecosystem is a myth in my experience. We’re still struggling to find the SOA ‘blue touch paper’ despite all of the top-down justification and policy.

I believe that until we push the technology further down towards the network, such that it becomes irrelevant to the service consumer, and raise the service interaction higher up, decoupling the ‘interaction’ from the ‘technology’, we are going to struggle to justify the benefit of service orientation. More significantly, we’ll continue to struggle to justify the inevitable rework and technical implications of that service orientation in mandating conformance to brittle and transient technical standards.

I’m going to explore an approach to doing this – encapsulating middleware facilities as RESTian resources, and then looking at the bindings between WSDL-generated stubs and these infrastructure resources…effectively removing technology concerns (apart from the obvious RESTian implications) from the invocation of a web service. Various header indicators can flex QoS expectations in the service invocation (i.e. sync or async, timeouts, exception sinks etc.), but that has no relationship to any given protocol or infrastructure type. Furthermore, the existence of such a set of ‘resource primitives’ would enable direct interaction where WSDL-based integration does not yet exist – letting me resolve, send, receive and validate through direct interaction with RESTian services from any style of client-side application.
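
As a first sketch of what one of those resource primitives might look like from the client side (the endpoint URI and header names are entirely invented):

```python
# A middleware 'send' facility exposed as a plain HTTP resource; QoS
# expectations travel as headers, with no MQ/JMS/WS-* visible to the caller.
import urllib.request

def send(document: bytes, interaction: str = "async",
         timeout_ms: int = 5000, exception_sink: str = "/sinks/default") -> int:
    req = urllib.request.Request(
        "http://middleware.example/resources/send",   # invented resource URI
        data=document,
        headers={
            "Content-Type": "application/xml",
            "X-Interaction-Style": interaction,        # sync or async
            "X-Timeout-Ms": str(timeout_ms),
            "X-Exception-Sink": exception_sink,        # where faults get routed
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as response:
        return response.status

# send(b"<order/>")  # not run here: the endpoint above doesn't exist
```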

This is motivated purely by the belief that, much like the chap in the picture, our focus is on the endpoint and not what lies beneath…


Enterprise Integration: The Target Architecture

June 30, 2008

Refactoring architectural roadmaps. Where to start after time in the wilderness? Surveying the scale of the landscape, evolving mass of vendors, old and new initiatives and legacy sprawl. Getting that cold feeling in the pit of the stomach, like the one you get when the ‘flight’ option is removed from the ‘fight or flight’ juncture…

So how do you actually make headway and add real value to the enterprise?

My revised target architecture is summarised in the following image. We need to keep the technology-partner trolls under their bridges, and make sure our diverse users know where the bridges are, how wide they are, how much it costs to build one, what it costs to keep a bridge safe, and make them forget about the noises down below…

Phase 2 will entail sealing the foundations in concrete we mixed ourselves, and re-routing the rivers – leaving no space for ‘noise’ from the space beneath…happy times.

I’m starting to see the light…
