ESB Part 3: Product or Mindset

January 22, 2008

Once we have established whether an ESB makes sense in our integration problem space (Part 1), and whether we want to approach delivery as a fully distributed, fully centralised or hybrid ESB deployment (Part 2), we have the relatively trivial task of making up our minds whether we need to buy something, embrace open source, or simply ‘refactor’ what we already have.

This article presents my personal experiences as an Enterprise Architect looking at integration architecture within a very large (LARGE) scale SOA transformation programme. As such my views are influenced by that specific context and I emphasise that certain conclusions I draw here are not entirely relevant for smaller scale or application-centric integration work.

So the landscape I faced consisted of thousands of legacy applications, hundreds of emerging strategic applications, 14 logical groupings of related applications, 160 strategic services offered by those 14 platforms, an average of 10 operations per service, a mandate to introduce a common information model, and a mandate to exploit WS-*/SOAP conventions. Add to this an organisational model in which, suddenly, organisational divisions were aligned to platforms, so engineering work commenced in parallel across the enterprise to provide and consume strategic services based on predefined contracts. Identifying an integration architecture to support such a heavily distributed implementation model was quite a challenge, and it clearly laid waste to our traditional domain-hub-centric approach, which effectively centralised the integration problems and pushed them through a finite pipeline based on the limited, specialised resource of the hub teams.

Issues ranging from ownership, funding, scheduling, delivery capacity, scalability and runtime capacity effectively put nails in the coffin of our existing mechanism, and it soon became apparent that our only chance of facilitating such a wide-ranging integration storm was full distribution of the infrastructure – and therefore ownership of the problem space – into the active delivery programmes. The bus concept was formed…as a pragmatic reaction to a whole range of environmental factors – NOT vendor promises.

Next came the question of whether we needed to ‘buy new stuff’ or just ‘adopt a mindset’. Clearly our massive intellectual and fiscal investment in all manner of middleware to that point provided baseline componentry which could be refactored. There was absolutely no case to justify further investment in emerging pure-play options, especially when one considered that we would not use the majority of the ‘potential’ elements at that stage in our SOA transformation. What we needed was a consistent messaging backbone which could be introduced as an infrastructure backplane to connect the emerging endpoints. We had MOM software all over the place, and there were no real technical problems with it; we just used it badly and inconsistently. Refactoring it and introducing more rigour into its exploitation would instantly let us leverage all our MOM experience to that point whilst forming our ESB connectivity backplane.

Next came the service container component – in other words, the unit of integration logic we would deploy at the endpoints, effectively connecting the MOM backplane to the logical platform offering each service. We examined our current portfolio and found a suitable container for that layer of our ESB – again, something we had a lot of confidence in using, albeit supporting different patterns to those we were about to depend on; but some confidence is better than none. So we reused an existing component, which I would describe as our lightest integration broker, with efficient integration into our MOM layer.

At that point we stopped worrying about orchestration, registry, monitoring, management, complex event processing and all the other bells-and-whistles we could add later. The pitch of the ESB vendors actually reinforced this view: the ESB is inherently so modular that you can add and remove both infrastructure and business service providers at any point :-).

So we refactored existing infrastructure. We combined a simplified infrastructure blueprint with a core set of service design patterns, along with the newly formed protocol and document standards driven out of the strategic architecture. That was enough to embed the integration backbone into the delivery programmes in a way that simplified how they would ‘plug’ into the backplane.

Did we suffer for not buying a pure-play?

Nope. We had the basic componentry on the books already, it was the clarity of vision which mattered in the early stages of our decentralised approach. We had that.

Would we have achieved more with adoption of a pure-play on day 1?

Nope. We would still have only used the same base-layers we are using today. We’re still breaking into the higher layers of the ESB food-chain anyhow. A vendor stack would have been gathering dust for 18 months.

There were implementation problems…more later…

SOA and the Common Information Model

January 22, 2008

The adoption of a common information model is an important consideration within any large-scale application integration scenario such as that which underpins Enterprise SOA. The common model is the standardised representation of the key information artefacts moving between domain boundaries, and the inclusion of such an artefact within the SOA infrastructure is a natural decision. In conceptual terms this kind of approach makes complete sense: every endpoint maps its own localised dialect into a central, shared, common model, giving a more efficient integration design process than the alternative of negotiating an exchange model with every remote provider around every single interface. On the face of it, whether or not a common model is injected into an integration scenario, one will evolve as a by-product of the integration work. As such it is far better to retain a level of pragmatic control over the evolution of such a corporate asset than to allow natural selection and incremental evolution to shape it. The main problem with the use of a common model is the question of design-time versus runtime, and the integration methodology applied by the integrators has a bearing on which of these modes can be leveraged.

From experience, however, the problem with a common model can be its lack of accessibility and the implications of its abstraction on the concrete interfaces engineered through it by local dialects. On the one hand, taking a SOA scenario, one could mandate that all service providers, regardless of internal model and interface technology dialects, expose a single service interface expressed in terms of the common model, upon a single SOA infrastructure blueprint. That way providers have the comfort of sitting all their localised, legacy and evolving assets behind a common interface which becomes the only way to consume that service. Such an approach also implies that providers and consumers can be decoupled during design and delivery, to the extent that the service contract derived from the common model would form the basis of a testable component further down the line…

The alternative to this kind of explicit representation of the common model in service interfaces is the adoption of the common model as a design ‘platform’, through which integrators assemble and agree service contracts, which are then engineered as consumer-specific runtime interfaces – albeit expressed as variations of a fundamental common model. This is not a p2p integration approach, but a more efficient use of a common model, facilitating integration through a reusable service capable of supporting a range of consumer dialects. To achieve this kind of model there are some prerequisites in the design space which, in my experience, have been difficult to achieve – such that the extent of my hands-on experience is the use of the model as a concrete interface standard. (More on this latter option in a subsequent post.)

So when we push SOA services with interfaces represented explicitly in terms of a common, abstract model, there are pain points:

  1. All endpoints must achieve seamless mapping into the common syntactic and semantic model. Semantics are always the poor relation: structural mappings are easier to nail than the content-centric domain rules which must also be formalised.
  2. All service providers must provide an interface presentation with back-end integration into the applications implementing the service logic. Whilst ‘fronting’ an evolving IT stack with new, strategic interfaces is advantageous, the additional ‘layer’ is often seen as an issue – although in my experience the overhead of an extra layer of XML processing is trivial compared to the usual latency of executing business logic in the application tier.
  3. All consumers must conform to a particular, common/non-native dialect when consuming a remote service. This is the primary area where I feel the negativity is justifiable, especially where a consumer of a strategic service may actually be a transient component targeted for removal in the near future; investment in consumer-side integration kit to facilitate interaction with newly established remote services is then difficult to justify.
  4. Nobody likes a common model…as everybody has to do some work. Coming from a traditional EAI mindset – where all integration solutions are brokered centrally and the ‘how’ is hidden within black-box integrations – there is always this debate to be had. However, since SOA simply distributes ownership of the common model to the endpoints rather than centralising it as EAI does, I feel this point is relatively trivial.
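To make point 1 concrete, here is a minimal sketch of an endpoint mapping its local dialect into the common model. The field names and the upper-casing rule are invented for illustration, not taken from any real model:

```ruby
# Hypothetical endpoint mapper: local dialect -> common model.
# Field names and the normalisation rule are illustrative only.
class CommonModelMapper
  # Structural mapping (rename) plus a simple semantic rule
  # (customer references are upper-case in the common model).
  def self.to_common(local)
    {
      "CustomerId" => local["custRef"].to_s.upcase,
      "OrderTotal" => local["amt"]
    }
  end
end
```

The structural renames are the easy half; in practice it is the semantic rules (the upcase here standing in for real domain rules) which take the effort to formalise.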

So I believe I can argue and justify a case for a common model in all cases apart from point 4. The willingness of consumers to ‘take the pain’ of conforming to a foreign model – which may be non-trivial – is a tough nut to crack. Historically I have predominantly been in situations where the consumers of remote services are also providers of strategic services to remote consumers, so investment in new infrastructure to support the ‘provider’ role can be leveraged to meet the needs of local applications requiring consumer-side adaptation to interact with remote, common-model-based services.

However I have done so with an understanding, in the back of my mind, that there has to be a more effective way of linking strategic services based on a common model with a diverse collection of consumers under client-side funding and organisational constraints.

In conclusion: using a common model as a concrete interface standard is doable, but it is a pretty heavy-handed, brute-force approach to something which ‘should’ be making life easier. As such I truly believe that a more collaborative framework in the design space will facilitate a more adaptive integration approach – still 100% supportive of a common model, whilst facilitating low-cost integration. I will expand on this new approach in my next post.

Summer Beach Rugby…

January 10, 2008

A rare log entry about R&R. My days of full-contact rugby ended after a pair of serious injuries about 10 years ago. I’ve since attempted to get back into the game but just can’t get motivated enough, simply because it now takes longer than a week for me to recover from each game…yes, I’m getting old. That ain’t good when you’ve gotta play each week. So gradually I became deskbound and my active/potato ratio began to take a turn for the worse. Recently my brothers and a group of friends came up with a masterstroke: we’d play informal touch rugby as a means of getting fit without getting broken. It started small – and grew – and now we’re turning out weekly with an extended squad of about 20 guys. If you enjoy rugby as we do, but can no longer commit to the level required for full contact, then touch rugby is an excellent way of getting a fix – and a fast, skilful, addictive game too.

The beauty of this plan is that now our standard is improving we’ve broadened our horizons, and are heading to the South of France to compete in a sponsored tournament circuit. Combining this with a family weekend in the sun – excellent. I never would have thought it – competitive touch-and-pass rugby on a beach in the sun with no physical risk to life and limb.


This link provides a taster

and so does this


Ruby/Rails URI Patterns and JSR311

January 10, 2008

I have been working on a Rails project mapping a uniform REST interface onto complex resource representations assembled at runtime; as such, no object/relational mapping is used to pull information from a local datasource. Instead I use ‘virtual’ resource representations which are fulfilled on demand, within synch or asynch interactions, from complex back-end data sources and interfaces.

As such the primary Rails convention – mapping resource URIs to controllers and actions, and through an object/relational layer to a datasource – is effectively useless in my scenario. Instead of mapping each resource to a datasource, I map URIs to a generic controller, still supporting the GET, POST, PUT and DELETE semantics, but essentially data-driving a dynamic integration tier with the resource context.

So if I were resource-enabling a database with 2 tables, customers and orders, then I’d use the following routes.rb entries (along with a Rails resource scaffold) to build my URI-to-datasource relationships.

map.resources :customers
map.resources :orders

This would route requests for http://domain:port/customers to the Customers controller, and http://domain:port/orders to the Orders controller, where each would sit across O/R mappings to specific tables in my database. Nice and easy!

However, in my world the Customers resource and the Orders resource are virtual within the front-end runtime, and must be instantiated by using a dynamic mapping algorithm to initiate back-end integration transactions over a range of mechanisms, producing the correct representations for each resource type (focusing on GETs for the sake of this example). As such it makes sense for me to have a single VirtualResource controller, which uses request context information to infer the correct representation algorithm. Rails has a neat way of enabling this in the routing table:

map.resources :operations, :controller=>"virtualresources"
map.resources :customers, :controller=>"virtualresources"

This enables me to pipe a range of URI patterns into my virtual handler and manage a lot of complexity through very few generalised routines.
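Outside Rails, the essence of that generalised routine can be sketched in plain Ruby – the resource names and canned handlers below are invented for illustration:

```ruby
# Plain-Ruby sketch of a single virtual-resource dispatcher: the resource
# type inferred from the URI selects the back-end representation algorithm.
# Resource names and handlers here are illustrative only.
class VirtualResourceDispatcher
  HANDLERS = {
    "customers"  => lambda { |id| { "type" => "Customer",  "id" => id } },
    "operations" => lambda { |id| { "type" => "Operation", "id" => id } }
  }

  # GET /<resource>/<id> -> delegate to the registered handler.
  def self.get(path)
    _, resource, id = path.split("/")
    handler = HANDLERS.fetch(resource) { raise "unknown resource: #{resource}" }
    handler.call(id)
  end
end
```

In the real system each handler would kick off a back-end integration transaction rather than return a canned hash, but the shape of the dispatch is the same.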

At this point I’m considering the migration across to Jersey on Glassfish – using the JSR311 annotation scheme for this kind of URI pattern matching.

 // One URI pattern is bound per resource class, e.g.:
 @Path("/widgets")
 public class Widget { /* GET/POST handler methods here */ }

Only problem is – at the moment I see no way of assigning multiple URI patterns to a single virtual resource class in the way I have with Rails. Only one URI pattern is enabled per POJO class – which would mean I’d have to create many proxy classes into my virtual resource handler. I’m gonna have to dig a little deeper here, but I hope this kind of flexibility, learned from the Rails community, will be factored into the JSR spec.

Point-2-Point Integration versus Incremental Service Reuse…?

January 9, 2008

Service and asset reuse is the cornerstone of most SOA business cases. From the top down, with an Enterprise Architecture perspective, it makes complete sense to group analogous integration projects into a common service-creation programme, and inject governance and policy to ensure the resulting service is representative of the combined needs of the various integrated endpoints. The traditional point-to-point approach – funding just enough integration to get you to the next problem – is therefore frowned upon, and we often hear how P2P sits at the opposite end of the CIO ‘must-have’ scale to the more saleable SOA reuse story.

So what is so bad about an incremental P2P integration approach? Clearly the worst-case scenario is a relatively organic integration landscape, with reactive, narrow spikes evolving between systems over time and arguably limited vision in terms of the long game. Individual stakeholders fund their specific integration projects based on a simpler, and therefore relatively quantifiable, cost/benefit story. Exponentially increasing maintenance cost, lack of clear ownership/sharing of the integration assets and erosion of agility generally grind this one into the ground over time.

SOA as a strategic programme would aim to get all those stakeholders into one room, make them link arms, make them agree on their combined ‘requirement’, and fund that through a business case of lower incremental cost over time based on an initial capital injection to get connected to the SOA backplane. However, in reality, even within an Enterprise SOA transformation programme where all the policy and governance frameworks are in place, I still see a P2P approach to integration, albeit under the thin veil of ‘strategic integration’ using approved design patterns and models.

The result is simply down to the fact that, despite an aggressive drive towards standardisation and conformance, the business is still operating on departmental targets, budgets, risk and accountability. As such it is almost impossible to find an optimal intersection between the moving parts of the organisation such that real service reuse can be achieved initially. So the incremental SOA implementation, funded by specific stakeholders, is intended to grow the reusable assets over time, based on the assumption that all increments will have the big picture in mind and not compromise the quality of the reuse potential. Yeah right! What we actually see is successive increments bending and skewing the core services such that incompatibility and runtime duplication begin to emerge. This is a real problem, as it is little better than an explicit P2P approach. The significance of the issue is related to the scale of the enterprise, as well as the level of planned reuse for any particular component, but it takes a very strong governance body to win the argument between departmental bonus (based on delivery and customer benefit) and architectural purity with long-game ROI.

My point is simple. There is little or no difference between the explicit P2P and incremental service reuse approaches – other than that you get the former for nothing, while the latter carries the startup cost of a SOA programme. If they deliver the same result…then it becomes a perfect candidate for a Dilbert sketch…

Rails, JSR311, Restlet and Jersey

January 8, 2008

In an earlier post I declared my intention to depart from using Ruby/Rails as a means of creating RESTful web services fronting what I regard as a more complex set of resource implementations than Rails appears comfortable with.

Since that time I’ve been investigating JSR-311 and two early implementations on the Java platform: Restlet and Jersey. The creator of Restlet (Jérôme Louvel) is a member of the JSR-311 expert group working on the evolution of the spec, and as such is well placed to oversee the development of his very tidy Restlet framework. Jersey, on the other hand, is the Sun reference implementation of the spec, and relies more on convention and POJO annotation in place of the more structured, prescriptive framework put together in Restlet.

My initial feeling was that, moving up the food-chain from Rails (I stress that by ‘food-chain’ I mean an order of relative capability within my specific problem space, with no disrespect to the Rails community), Restlet would be the more natural home or stepping stone towards an Enterprise-class framework – which is effectively where I am exploring the applicability of RESTian principles…as an alternative to Enterprise SOA WS-*. However, as I have begun to dig deeper into the real requirements of ‘my next framework’, I am beginning to favour Jersey as my next hop. But I’m currently unsure of the relative merits…


Well, Restlet offers a neat and slick way for me to assemble components with certain roles into chains behind URI patterns, enabling a lightweight RESTful facade to be created quickly, but with a little more enterprise-integration power under the hood than Rails. However, the current evolution of Restlet appears a little limited in the area of connector classes and the tools that help me do the real work of applying my RESTful uniform interface to my complex enterprise engine room. As a result, I believe I’d be dropping back to engineering the guts of my integration code on an alternative platform such as Glassfish. For that reason, I’m starting to think the Jersey option may initially offer the same ability to create a resource facade, but also be a natural point of extension when I need to bolt heavier back-end integrations into the RESTful presentation. As such I can’t help but feel that starting out with Jersey on Glassfish would be a foundation I could grow, as opposed to taking a step forward with Restlet but then relying on Glassfish-hosted components down the line…

One thing is certain in my mind right now: RESTful principles applied to the Enterprise need more than Rails. I’m still unsure of the Restlet/Jersey route…but Jersey on Glassfish feels like it may just withstand a little more heat in an Enterprise scenario…

Glassfish v2 UR1 and MySQL Connection Pool

January 8, 2008

I’ve just spent the last 2 days getting a MySQL connection pool enabled on Glassfish. I need to caveat what I’m about to say with the fact that this is the first time I’ve attempted this – hailing from a Ruby/Rails background – so there may be obvious points I’ve missed; but then again I’ve been on the Sun site, Google and all sorts of technical resources for 2 days trying to find a clear example of this. My initial goal is simply to prove connectivity between user, servlet, JDBC connection pool and datasource in a very simple way. (My intended next steps will introduce more advanced data access and persistence methods.)

The scenario is simple: a Glassfish-hosted HTTP servlet which relies upon a container connection pool to access a simple MySQL 5.1 datasource. I had the initial direct JDBC connection working immediately, such that my servlet created all necessary connectivity with the datasource in-line. Here is a snippet from my servlet doGet() method:

String url = "jdbc:mysql://localhost:3306/mysql_database_name";
String query = "SELECT * FROM APP_TEMPLATES";
try {
  Class.forName("com.mysql.jdbc.Driver");
  Connection con = DriverManager.getConnection(url, "user", "password");
  Statement stmt = con.createStatement();
  ResultSet rs = stmt.executeQuery(query);
  printResultSet(resp, rs);
} catch (Exception e) {
  throw new ServletException(e.getMessage());
} // end try/catch

The next step was to drop the connectivity down into a container-managed scheme, and the JDBC Resources and JDBC Connection Pools in Glassfish seemed like the best way to go. This is where my pain started. I was only able to find partial configuration examples, where some code samples showed the servlet end (i.e. JNDI resolution to a notional JDBC resource) and others showed XML container descriptors used by a notional client. I had to make assumptions about the linkages between the configs…this being my first attempt at configuring one of these things.

So, I followed the basic Glassfish administration steps of creating a connection pool using the Administration Console (http://localhost:4848). The creation of the pool is relatively straightforward. First, specify the basics:

[Screenshot: connection pool basics]

Then, clicking Next, fill out any connector/driver-specific attributes. Most of the important settings are right at the bottom of the list of properties – I left all of the upper properties at the default values specified by the driver type.

[Screenshot: connection pool advanced settings]

Once you have completed the advanced settings you can save, and test the pool’s connectivity with the datasource using the PING option at the top of the page:

[Screenshot: connection pool ping]

Once you have a fully functional container connection pool, the next step is to declare the JDBC resource which presents the pool to the application code. For this you must use the other administration option in the JDBC section:

[Screenshot: JDBC resource]

This is pretty straightforward – and the JNDI name must have the ‘jdbc/’ prefix to show up in the correct context branch when you locate the resource programmatically (more on this later). At this point you have the Glassfish container configured correctly to offer you a JDBC connection pool and resource.

Turning my attention to the application code, I then modified my servlet to do two things:

1. Introduce an init() method to resolve and obtain the required reference to the connection pool via the JNDI resource name configured through the container admin earlier. Assume the name of my JDBC resource is “jdbc/myJDBCResourceName”.

private DataSource pool; // container-managed connection pool reference

public void init() throws ServletException {
  try {
    Context env = (Context) new InitialContext().lookup("java:comp/env");
    pool = (DataSource) env.lookup("jdbc/myJDBCResourceName");
    if (pool == null) throw new ServletException("unknown DataSource");
  } catch (NamingException ne) {
    throw new ServletException(ne.getMessage());
  }
}

 2. Modify my doGet() method to take advantage of the pool object, thus replacing the earlier direct driver connection. Notice that the pool object is my reference to the underlying array of connections at runtime. This really simplifies my doGet() connectivity code:

Connection conn = null; Statement stmt = null;
try {
  conn = pool.getConnection();
  stmt = conn.createStatement();
  ResultSet rs = stmt.executeQuery(query);
  // Do Stuff With the Result Set
} catch (Exception e) {
  throw new ServletException(e.getMessage());
} finally {
  try { if (stmt != null) stmt.close(); if (conn != null) conn.close(); }
  catch (SQLException sqle) { /* ignore close failures */ }
}

Here I began to hit problems: each time I deployed and executed my application I saw JNDI resolution exceptions telling me that ‘no object is bound to java:comp/env/jdbc/myJDBCResourceName’. This is where my lack of familiarity with container deployment descriptors hit very hard!

I then learned that I had to include references to the JDBC resources in the deployment descriptors web.xml and sun-web.xml. I found various instructions on how to go about this – and NetBeans 6.0 (my IDE) provides a nice visual overlay for the XML files in question. The first file, web.xml, requires the registration of an entry in the Resource References section. The reason, I believe, is to provide a linkage between the resource expression used in the web-app code and a known resource in the container config.

Similarly, sun-web.xml provides the more specific container mappings between ‘abstract’ resource references in the web application and specific resources named within the container (the Glassfish config). As such a similar entry is needed in this secondary file…and this is where I had most difficulty.

  • In simple terms my code was referencing pool = (DataSource) env.lookup("jdbc/myJDBCResourceName");
  • I had configured a JDBC resource named "jdbc/myJDBCResourceName" in the Glassfish container, which mapped to a connection pool called MyConnectionPool, which I had confirmed as working correctly.
  • I had a web.xml resource reference of:
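(The raw XML was lost from the original entry; reconstructed here, it takes the standard Servlet resource-ref shape, with the usual res-type/res-auth values for a pooled DataSource:)

```xml
<!-- web.xml: reconstructed standard resource-ref entry -->
<resource-ref>
  <res-ref-name>jdbc/myJDBCResourceName</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
```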


  • I had a sun-web.xml Resource-Reference of:
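(Reconstructed here – this entry carried my mistaken mapping down to the pool:)

```xml
<!-- sun-web.xml: reconstructed; this jndi-name was my mistake -->
<resource-ref>
  <res-ref-name>jdbc/myJDBCResourceName</res-ref-name>
  <jndi-name>MyConnectionPool</jndi-name>
</resource-ref>
```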


So what was the problem? I was constantly informed that my servlet could not locate the “MyConnectionPool” object…but all of my mappings from application code down to container were intact! Frustration was not the word.

However, I then learned that I had made a mistake when inferring the meaning of one of the settings – specifically the sun-web.xml resource reference of <jndi-name>MyConnectionPool</jndi-name>.

I had made a false assumption that the two levels of XML descriptor were there to enable an APPLICATION-to-RESOURCE, and then a RESOURCE-to-CONNECTION-POOL mapping. However, the connection pool should not be referenced at all; in fact, the JNDI name was required to be the same as the resource name. The following tweak to sun-web.xml fixed the problem and my servlet worked perfectly:
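(Reconstructed: the corrected entry simply repeats the resource name as the JNDI name.)

```xml
<!-- sun-web.xml: reconstructed; jndi-name now matches the resource name -->
<resource-ref>
  <res-ref-name>jdbc/myJDBCResourceName</res-ref-name>
  <jndi-name>jdbc/myJDBCResourceName</jndi-name>
</resource-ref>
```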


At this point I lost the plot in terms of understanding the need for this second descriptor – which effectively has to have the same JNDI name specified twice, and is therefore a null mapping and just unnecessary complexity!?! I then realised that I could decouple the resource reference used in the application code from the one at the web.xml level, and again from the sun-web.xml level, using the following configuration:
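(Reconstructed sketch of the decoupled configuration: the servlet looks up ‘aResourceName’, web.xml declares that abstract name, and sun-web.xml maps it onto the real container resource.)

```xml
<!-- servlet code: pool = (DataSource) env.lookup("aResourceName"); -->

<!-- web.xml: declare the abstract reference used by the code -->
<resource-ref>
  <res-ref-name>aResourceName</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

<!-- sun-web.xml: map the abstract reference onto the container resource -->
<resource-ref>
  <res-ref-name>aResourceName</res-ref-name>
  <jndi-name>jdbc/myJDBCResourceName</jndi-name>
</resource-ref>
```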





With this configuration my servlet connectivity routine uses the new resource reference ‘aResourceName’, which has no relationship to the container JNDI resource name; it is the XML descriptors which establish the linkage at runtime. With this config in place the container connection pool works fine, although I’m currently at a loss to understand the relative merits of each approach. I’d appreciate input from anyone who knows the official line on how the servlet, web.xml, sun-web.xml and container JNDI naming conventions should be established – but at least I have a way forwards now.