Spring MVC: Serving Static and Dynamic Content

April 6, 2010

I have been revisiting Spring MVC recently as a basis for creating a web application. I had a positive experience some time ago building out a RESTful abstraction layer across a range of disparate data sources, primarily for programmatic access, so I wanted to build on that learning and create a fully featured web app this time around.

Whilst I really like the Spring MVC framework, I am completely amazed that in some cases it is almost impossible to find simple explanations of relatively simple things involved in building out a web application. This post is motivated by that lack of clarity; after a few days of googling, I now believe I’ve evolved at least one of the myriad common-sense approaches to serving up both static and dynamic content from a Spring MVC application.

Firstly my environment:

Mac OS/X 10.6
Java 1.6.0_17
Spring 2.5.5
Tomcat 6.0.24

Secondly, the goal: to create a simple presentation layer with a largely static HTML homepage, complete with embedded images, linking to Spring controllers that deliver dynamic content through JSPs. When I started down this road I had assumed this would be a pretty common requirement, but my experiences on Google have proven that wrong, so here goes…

As a backdrop for my own variations I used the very helpful article “Developing a Spring MVC Application step-by-step”, which is great, but lulled me into a false sense of security before dropping me flat just as the real questions started emerging. I followed all the steps in that tutorial and progressed very quickly to section 2, at which point I stopped, because I was less concerned with the addition of business logic, service POJOs and a persistence tier than I was with breaking away from the text-only presentation layer used in the example. So I have to point out – I’m not a presentation guy, and I rarely move into the front-end discipline, but in this situation I have to – and as a result my immediate inclination is to want to know how this static/dynamic stuff works.

So at section 2.1 in the tutorial I started to feel uneasy! Before I delve into this in detail let me backtrack.

1. I have a basic project structure as per the tutorial, with a web.xml and a <servlet>-context.xml for my primary controller. The relationship between these config files is covered in detail in the tutorial.

2. All of my JSPs live in the WEB-INF/jsp sub-folder of my source tree, and are packaged into that location in the WAR I deploy into Tomcat (I cover how these are located when I explain my view resolver later in the article).

3. I have a single index.jsp in the war/ root folder, and this index.jsp is referred to in my web.xml as follows:
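The snippet itself didn’t survive the blog formatting, but a sketch of the web.xml wiring being described might look roughly like this (the DispatcherServlet class is standard Spring; the servlet name and layout are assumptions based on the frontEndController discussed below):

```xml
<!-- Route all *.htm requests into the Spring MVC dispatcher -->
<servlet>
    <servlet-name>frontEndController</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>frontEndController</servlet-name>
    <url-pattern>*.htm</url-pattern>
</servlet-mapping>

<!-- The physical index.jsp in the WAR root is the welcome file -->
<welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
</welcome-file-list>
```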


No surprises here – I’m pushing all the *.htm URLs entering my application context (i.e. requests arriving at Tomcat for host/port/context-root) towards my frontEndController, which I’ll come back to in a short while. More specifically, you’ll see that I’m using the *.htm naming convention in my presentation layer for all the links into my application. In other words I’m not allowing users to directly reference *.jsp files in my app – I could use any suffix for that matter (*.private for example), given it is only relevant for routing requests for content into my controller so my MVC engine can deal with them.

Now – I did have a static HTML index.html as my welcome page in the WAR root folder, from where it was directly accessible by Tomcat. As such I could type http://localhost:8080/myapp/index.html and get my page back OK, but there is a Spring MVC/JSP convention for not doing this – instead redirecting the request for the ‘welcome’ page into the MVC/JSP engine such that it becomes instrumented and visible to the Spring context. This is covered in section 2.1 of the tutorial – but for the record my war/index.html is now war/index.jsp, and it contains the following:

<%@ include file="/WEB-INF/jsp/include.jsp" %>
<%-- Redirected – we can't set the welcome page to a virtual URL. --%>
<c:redirect url="/index.htm"/>
where include.jsp pulls in the necessary JSP tag libraries to support features such as this redirect:

<%@ page session="false"%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>

So what we have here is that a request to my application for ‘/’ or ‘index.jsp’ redirects to a request for the virtual ‘index.htm’ page. This forces the request into my Spring MVC controller ‘frontEndController’ on the basis of the *.htm pattern match. At that point I simply wanted to return a JSP containing HTML source, but also referencing static images. Try as I might, I could not find a simple explanation of how to achieve this!

So here it is. Firstly – only requests for dynamic content are pushed into the frontEndController. The *.htm pattern is a ‘codename’ in my application context for ‘give me some JSP content’. Fine. But what about the statics that those dynamics might require? WATCH OUT: I initially used a /ui/* pattern in my <servlet>-context.xml, which caused me problems with static content. Any embedded CSS or IMG resources also fall under that path name, and as such all requests hitting Tomcat for /ui/page.htm or /ui/img/banner.jpg were being pushed into my frontEndController – from where I could not serve the images, nor did I want to. Changing to the *.htm pattern means that only the .htm page names are pushed into the MVC controller, and all embedded resources can be managed separately.

There are three key messages here. Firstly, my index.jsp in WEB-INF is not really dynamic (other than that I have used a templating solution to assemble my pages, but that is irrelevant here). By handling it this way I have ensured that all HTML content is generated/served under the control of my MVC engine – and therefore visible to all the nice things such as logging, access control, and all the other stuff I ain’t even thought of yet that I’m able to implement as cross-cutters in my MVC context. I don’t have a blind spot where, for example, my static content may be being hammered while I’m seeing low usage on my MVC-generated content.

Secondly, you do NOT have to implement a custom controller for JSPs which are largely static content. The normal flow is that the <servlet>-context.xml for the relevant controller declared in web.xml offers a second level of routing. Normal convention would be to add an entry such as:

<property name="urlMap">
    <map>
        <entry key="/index.htm" value-ref="homepageController"/>
    </map>
</property>

which pushes such requests to the POJO controller declared in that same XML file as below:

<bean name="homepageController"
      class="com.myapp.web.HomepageController" />

But this forces me to implement a Java object, HomepageController, that effectively does ‘nothing’ other than return a ‘view’ name of ‘index’, which then returns the ‘static’ index.jsp to the client. Have no fear, there is a better way – something called the UrlFilenameViewController. For any ‘pages’ that I want to serve in my MVC engine which are static, I can re-wire the request as follows:

<property name="urlMap">
    <map>
        <entry key="/index.htm" value-ref="urlFilenameViewController" />
        <entry key="/static-about.htm" value-ref="urlFilenameViewController" />
        <entry key="/termsandconditions.htm" value-ref="urlFilenameViewController" />
    </map>
</property>

<!-- For direct mapping between URL (i.e. index.htm -> index) and the JSP to render -->
<bean id="urlFilenameViewController"
      class="org.springframework.web.servlet.mvc.UrlFilenameViewController" />

By convention this means that I need three JSP files in WEB-INF/jsp: index.jsp, static-about.jsp, and termsandconditions.jsp. The UrlFilenameViewController simply converts the requested resource name into a view-name token, which is passed to the view resolver in the same servlet context; the resolver then needs to be configured to look in the WEB-INF/jsp folder, as such:

<bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
    <property name="prefix" value="/WEB-INF/jsp/" />
    <property name="suffix" value=".jsp" />
</bean>

Hey presto! We now have a consistent mechanism for serving static HTML content as JSP from within the Spring MVC engine. I like symmetry, and as such like to have my static and dynamic pages all managed in a consistent way – but there may be plenty of counter-arguments suggesting this approach has problems. (If so I’m very keen to hear them!)

Thirdly, static content such as images and CSS should NOT live inside the WEB-INF folder; instead, such static resources referenced by the returned JSP pages should live in the WAR root folder, where Tomcat can serve them directly to the user without touching the controller back-end. As such I have a war/img and a war/css folder alongside my war/WEB-INF folder. Any HTML content I generate from my JSP handling framework refers to static resources like so:

<img src="img/application-logo.jpg" alt="some text"/>

which means that User-Agents will resolve the address for those subsidiary resources relative to the application context root (e.g. http://localhost:8080/myapp/img/application-logo.jpg), which in turn means that Tomcat can serve them directly from the application without touching the Spring MVC engine. I now have a seamless framework for taking all page requests into my controller architecture (even pages that may be largely static, which I still want managed in a consistent way), and for storing all supporting collateral in a simple place from where it can be served up.

This may all seem trivial or common-sense, but it has taken me a long time to grind through the frustrating combination of the many configuration options offered by the Spring MVC framework and a lack of clear explanations of how to achieve this effectively, including images and other static resources.

Hope this is useful.


The CI, TDD and OO Love Triangle

July 14, 2009

I would not describe myself as a purist in any aspect of my life, least of all my software engineering practices. I’ve been re-establishing myself as a coder after a long while in the more abstract realms of architecture and design, and on the return-path I’m seeing things in a somewhat different light, and I think it’s an interesting observation hence this post.

Object Orientation was an approach I previously adopted to partition my problem-space into a group of components – so I could start hacking, rather than putting on the ‘everything is an object and so is its mother’ spectacles. I never quite got my head around why colleagues and ‘real developers’ would sit for hours debating the 99th level of object decomposition, or whether an object had to be so fine-grained it should actually contain no code.

My return journey to the world of software development has intersected with new (I say ‘new’ given I was a full-on hacker in the mid 90’s, so this is new to me) and emerging trends around Continuous Integration and Test-Driven Development, and I have to say these intersections have made a huge impression on me, to the extent that I’m now passionate about the benefit brought about by these techniques. Maybe I’m so pro-CI and pro-TDD because I lived through the appalling software engineering practices of the 80’s and early 90’s, or maybe it’s just because it makes so…much…sense.

So to my point. Continuous Integration has a simple benefit. You know when your code-base has been tainted. The more frequently you get to know that the better. I’ll avoid quoting the scientists who waffle about cost exponentials for fixing defects today versus tomorrow blah. No – it just…makes….so….much….sense ! I didn’t need a PhD in chin rubbing to work that out.

Next, Test-Driven Development. Right, this one took time – just a little time – to get my head around, even though I was convinced that CI was the way to go, no questions asked. I found myself asking the stereotypical questions like “won’t I be only 50% as productive if I spend half my time writing tests for the code I am writing?”. Let me just pause while I give myself 10 lashes for being so narrow-minded. THWACK, THWACK, THWACK, THWACK, THWACK, THWACK, THWACK, THWACK, THWACK, THWACK…and one for luck, THHHHHWACK!! It’s a natural inclination to feel like test-cases are just code that you’d traditionally throw into the code-base, but when you consider that you’re actually building inherent certainty into the code-base at every step of the way, the lightbulb goes on in a big way. When I sat back and appreciated that I wouldn’t have to run huge amounts of code to verify and exercise a small method I’d just changed – it….just….makes….so….much….sense. All the regression testing scenarios I was carrying round in my head! The lack of repeatability in my approach to testing! The gradual feeling of dread I’d become used to after deploying a stable version of code, in case I had to, god forbid, change it at some point! It all cries out for an inherent approach that locks down every function you implement as you implement it – such that you can purge all that baggage from your poor little cranial-walnut. It….just….makes….so….much…sense. I was soon a TDD convert, and only at that point did it strike me like a bolt from the blue… finally my life made sense… finally I could truly embrace the love-triangle of CI->TDD->OO.

The effectiveness of my CI/TDD regime relied totally on the level of granularity I could descend to with my class definitions, and the simplification of each and every function to its thinnest possible form. Real, fundamental Object Orientation finally became the underpinning foundation – notice I place it in that underpinning role – as I’ve only now seen OO as an enabler for the ‘common-sense’ and ‘value-adding’ techniques of CI and TDD. I’m a little weird that way – but I found my way into rigorous OO not via the brain-washing of the chin-rubbing ranks of the “my object is more objecty than yours” brigade, but instead through the simple realisation that I can squeeze ever more value from my CI/TDD framework if I force my test-cases into finer and finer levels of granularity (within reason of course, I ain’t lost my roots yet!)

So there you have it: CI drives me to TDD, and TDD drives me to appreciate why OO should exist – NOT vice versa…

VMWare Fusion XP VM Losing DNS !

October 25, 2008

I’ve been running OS X, VMware Fusion 1.x and XP SP2 & SP3 for over a year and it’s been ROCK SOLID! I run a web-connected OS X host, and an XP VM VPN’d into a corporate network all day, every day, and I have not had a single problem. Until this week… when my calm seas were interrupted…

Out of the blue I noticed that the XP SP3 VM was failing to resolve DNS queries. OK. Why? I ran a number of diagnostics and checked driver versions, and everything checked out in terms of VM integrity. No idea. The worst kind of problem…

I checked out the net and located numerous threads deliberating over the XP “Unable to flush DNS cache” type of error, with long and elaborate threads descending into comparisons of router firmware versions and other such infinite variables. Eject… eject…

It was only when I ran the VMware packet sniffer on the OS X host that I could see the XP VM was issuing DNS requests, and the responses from the OS X host were being dispatched. From my understanding of low-level IP it appeared that all was performing as expected. However, I then started thinking about reasons why UDP packets might be neglected by the XP VM’s IP stack… BINGO!

I then checked the Windows XP Event Viewer under the Security event list, and there I saw all my DNS responses (from the OS X host) arriving back at my XP VM as UDP packets, all being summarily discarded by my failed/corrupted firewall. A couple of minutes later, having run the ‘support’ utility from the firewall supplier, the flood-gates opened and UDP/DNS was back in business.

Symptoms I encountered in XP:

  1. DNS resolution (i.e. ping www.xyz.com) within the XP VM failing, but direct addressing (i.e. pinging an IP address) working OK.
  2. nslookup in the console returned ‘no response from server’ errors in response to queries.
  3. Right-clicking the network connection icon in XP and executing Repair proceeded through all steps apart from the final DNS cache flush, at which point “Unable to repair connection” was returned.
  4. ipconfig /registerdns failed with a non-specific error

In hindsight the symptoms all point to the UDP return path and the firewall, but verifying the request path with the OS X VMware vmnet-sniffer utility (located in the /Library/Application Support/VMWare Fusion folder) made this a whole lot simpler.

Where a Semantic Contract Fits…

October 10, 2008

I’ve been posting about the rise of the informal semantic contract relating to web services and the deficiencies of XML Schema in adequately communicating the capability of anything other than a trivial service. Formalising a semantic contract by enriching a baseline structural contract (WSDL/XSD) with semantic or content-based constraints effectively creates a smaller window of well-formedness through which a consumer must navigate their payload in issuing a request. Other factors, such as incremental implementation of a complex business service ‘behind’ the generalised service interface, compound the need for a semantic contract.
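As a toy illustration of that structural/semantic distinction (all field names and rules here are invented for the example): a payload can pass the structural contract yet still be rejected by the semantic one.

```ruby
# Structural check: the fields exist and have the right shapes
# (roughly what XSD validation gives you).
def structurally_valid?(order)
  order.key?(:account) && order.key?(:quantity) && order[:quantity].is_a?(Integer)
end

# Semantic check: content-based constraints a schema cannot express,
# e.g. business rules about the values themselves.
def semantically_valid?(order)
  structurally_valid?(order) &&
    order[:quantity] > 0 &&
    order[:account].start_with?("ACC-")
end

order = { :account => "XX-123", :quantity => 5 }
puts structurally_valid?(order)   # true  - passes the structural contract
puts semantically_valid?(order)   # false - rejected by the semantic contract
```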

To clarify the relationship between structural and semantic, I happened upon a great picture which I’ve annotated…

Enterprise SOA, Continuous Integration and DXSI

September 5, 2008

Creating an approach to CI’ing large scale enterprise SOA initiatives has unearthed a potentially significant efficiency gain in the semantic layer. Semantics relate to instance data – and specifically in the context of re-usable, extensible service interfaces the semantic challenge eclipses that of achieving syntactical alignment between consumer and provider.

Evidence shows that the vast proportion of integration failures picked up in testing environments (having taken the hit to mobilise a complex deployment of a range of components) are related to data/semantics, not syntax.

As such I’ve been focusing on how to front-end the verification that a consumer ‘understands’ the provider, structurally and semantically, from day 1 of the design process. The CI framework I’m putting together makes use of a traditional set of artifact presence/quality assessments, but significantly introduces the concept of the Semantic Mock (SMOCK) – an executable component based on the service contract plus an evolving set of semantic expressions and constraints.

This SMOCK artifact allows a service provider to incrementally evolve the detail of the SMOCK, whilst the CI framework automatically acquires consumer artifacts such as static instance docs or dynamic harnesses – both manifesting earlier in the delivery process than the final service implementation (and I mean on day 1 or 2 of a 90-day cycle, as opposed to being identified through fall-out in formal test environments or, worse, in production).
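A minimal sketch of the SMOCK idea in Ruby (all names are invented for illustration; this is not DXSI’s API): a stand-in for the real service that carries an evolving list of semantic constraints and kicks back a compliance report for each consumer artifact it is fed.

```ruby
# A semantic mock: no business logic, just the contract plus constraints.
class SemanticMock
  def initialize
    @constraints = {}
  end

  # The provider incrementally adds named semantic rules as the design evolves.
  def add_constraint(name, &rule)
    @constraints[name] = rule
  end

  # Exercise a consumer's sample payload; report which constraints it violates.
  def verify(payload)
    @constraints.reject { |name, rule| rule.call(payload) }.keys
  end
end

smock = SemanticMock.new
smock.add_constraint("quantity must be positive") { |p| p[:quantity].to_i > 0 }
smock.add_constraint("currency must be an ISO code") { |p| p[:currency] =~ /\A[A-Z]{3}\z/ }

# A consumer's static instance doc, as acquired by the CI framework
failures = smock.verify({ :quantity => 0, :currency => "GBP" })
puts failures.inspect
```

The compliance report (here just the list of failed constraint names) is what would feed the ‘CI Build Report’ described below.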

Over time, as both consumer and provider evolve through and beyond the SMOCK phase, the level of confidence in design integrity is exponentially improved – simply based on the fact that we’ve had continuous automated verification (and hence integration) of consumer and provider ‘contractual bindings’ for weeks or months. This ultimately leads to more effective use of formal testing resource and time in adding value, as opposed to fire-fighting and kicking back avoidable broken interfaces.

The tool I’m using to prototype this SMOCK is Progress DXSI. This semantic integration capability occupies a significant niche by focusing on the semantic or data contract associated with all but the most trivial service interfaces. DXSI allows a provider domain-expert to enrich base artifacts (WSDL/XSD) and export runnable SMOCK components, which can then be automatically acquired, hosted and exercised (by my CI environment) to verify consumer artifacts published by prospective consumers of the service. Best of all, it kicks back compliance reports based on the semantic constraints exercised in each ‘test case’, such that my ‘CI Build Report’ includes a definition of why ‘your’ understanding of ‘my’ semantic contract is flawed…

Beyond SMOCK verification – DXSI also allows me to make a seamless transition into a production runtime too but that’s another story…


Ruby Wrapper for WS-I Analyzer tools

August 19, 2008

I’ve been developing some scripting to enable me to assess the integrity of integration artifacts created across a range of development teams in large-scale SOA integration programmes. One of the most basic forms of assessment is the WS-Interoperability (WS-I) analysis of WSDL, to identify any specific non-compliances at the grass roots. (I can hear the RESTian hordes tooling up to brow-beat me into submission as I type this… and YES I understand, YOU don’t like WSDL, nor do I, but they DO exist so I’m gonna analyse ’em! Right – now that’s out of the way, on with the WSDL baiting…).

There’s a set of Java/C# tools out there from the clever folk over at WS-I (http://www.ws-i.org/deliverables/workinggroup.aspx?wg=testingtools) which are already used informally within our organisation, and I’ve created a Ruby-based wrapper for them. It’s not mature enough to be packaged as a Ruby Gem, and it does rely on an installation of the Java tools in some folder on your machine – but the following Ruby source does allow you to spin up objects in your Ruby code and kick it to analyse your WSDL artifacts without being exposed to Java code. In line with my usual train-of-thought programming style and my few months of Ruby exposure, it’s not pretty, but it works well enough to be parked for a while…

Usage is pretty simple. The analyzer object can be initialised once and used thereafter to manage the analysis of many discrete WSDL resources. With each call to WSIAnalyzer#analyze_wsdl the first parameter is a file: or http: URI to a WSDL, and the second is a path/folder into which the outputs from this analysis will be placed. In any given WSDL the analyzer iterates through every <definitions/services> node, creating a report for each node. The outputs (in the output folder) are:

  1. A copy of the WSDL source
  2. A copy of the WS-I Tools configuration file generated by this script, one per <definitions/service> element in the source
  3. A WS-I analysis report generated by the underlying WS-I tool – one per <definitions/service> element in the source WSDL

a = WSIAnalyzer.new
a.analyze_wsdl("http://www.domain.com/someservice.wsdl", "c:/dev/wsdloutputs")
a.reports.each_pair do |r,s|
  puts "WSDL [#{a.wsdl_uri}] Report [#{r}] Status [#{s}]"
end

The source:

require 'logger'
require 'open-uri'
require 'rexml/document'

WSI_ANALYZER_CONFIG_TEMPLATE = %{<?xml version="1.0" encoding="UTF-8"?>
<wsi-analyzerConfig:configuration name="WS-I Basic Profile Analyzer Configuration" xmlns:wsi-analyzerConfig="http://www.ws-i.org/testing/2004/07/analyzerConfig/">
<wsi-analyzerConfig:description />
<wsi-analyzerConfig:assertionResults type="all" messageEntry="true" failureMessage="true"/>
<wsi-analyzerConfig:reportFile replace="true" location="xxxxxxxx">
<wsi-analyzerConfig:addStyleSheet href="" type="text/xsl"/>
</wsi-analyzerConfig:reportFile>
<wsi-analyzerConfig:testAssertionsFile />
<wsi-analyzerConfig:wsdlElement type="port" parentElementName="" namespace="" />
<wsi-analyzerConfig:wsdlURI />
</wsi-analyzerConfig:configuration>}

#Environment variable tag used to locate the tools at runtime
WSI_HOME_TAG = "WSI_HOME"
#This must be changed to point to the physical root of the wsi-installation
WSI_HOME_VAL = "c:/dev/wsi-test-tools"
#The java portion of the distribution, which hosts bin/Analyzer.bat
WSI_JAVA_HOME_VAL = "#{WSI_HOME_VAL}/java"
WSI_JAVA_OPTS_VAL = " -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser"
WSI_TEST_ASSERTIONS_FILE = "#{WSI_HOME_VAL}/common/profiles/SSBP10_BP11_TAD.xml"
WSI_STYLESHEET_FILE = "#{WSI_HOME_VAL}/common/xsl/report.xsl"
WSI_EXECUTION_COMMAND = "#{WSI_JAVA_HOME_VAL}/bin/Analyzer.bat -config "
#Jars shipped with the java tools, added to the classpath by configure_environment
WSIClasspath = Dir.glob("#{WSI_JAVA_HOME_VAL}/lib/*.jar")


class WSIAnalyzer
VERSION = "1.0.0"

attr_reader :wsdl_uri
attr_reader :wsdl_name
attr_reader :wsdl_source
attr_reader :analyzer_config_uri
attr_reader :analyzer_config_source
attr_reader :wsdl_namespace
attr_reader :wsdl_service_name
attr_reader :wsdl_port_name
attr_reader :workspace
attr_reader :report_filename
attr_reader :wsi_approved
attr_reader :wsdl_service_declarations
attr_reader :reports
attr_reader :errors

def initialize
  @log = Logger.new(STDOUT)
  @log.level = Logger::DEBUG
  @reports = {}
  @errors = []
  @wsdl_service_declarations = {}
  #Check the installation location to ensure the wsi-test-tools are installed on the host
  if not File.exists?(WSI_HOME_VAL)
    @log.fatal("Unable to locate WSI-Test-Tools installation at [#{WSI_HOME_VAL}]")
    return nil
  end
  #Ensure environment variables are in place
  if ENV[WSI_HOME_TAG].nil?
    @log.warn("No WSI-Test-Tools environment variables present [#{WSI_HOME_TAG}]")
    configure_environment
  end
rescue => ex
  @log.error("Unable to initialise WSIAnalyzer: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
  return nil
end

def analyze_wsdl(wsdl_uri, workspace)
  @wsdl_uri = wsdl_uri
  @workspace = workspace
  @wsdl_name = File.basename(@wsdl_uri)
  @log.info("Analyzing WSDL[#{@wsdl_name}] from URI[#{@wsdl_uri}] in workspace [#{@workspace}]")
  #obtain the wsdl source
  acquire_wsdl
  #obtain key wsdl attributes required to create ws-i configuration
  extract_target_elements
  #There may be more than one pass to make here so iterate
  @wsdl_service_declarations.each_pair do |svc, port|
    @log.info("Executing WS-I analysis for WSDL [#{@wsdl_name}] Service[#{svc}] Port[#{port}]")
    #create dynamic ws-i configuration
    create_dynamic_wsi_config(svc, port)
    #execute the analysis
    execute_wsi_analyzer
  end
  return true
rescue => ex
  @log.error("Unable to analyze WSDL [#{@wsdl_uri}]: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
  return false
end


def execute_wsi_analyzer
  #Now kick the external WSI script to generate a report
  commandline="#{WSI_EXECUTION_COMMAND} #{@analyzer_config_uri}"
  @log.info("Executing WS-I Analyzer with shell [#{commandline}]")
  system(commandline)
  #Verify if a report has been created
  if(not File.exists?(@report_filename))
    @log.warn("No report file [#{@report_filename}] produced by WS-I Analyzer")
    return false
  end
  @log.info("Scanning for WS-I summary status...")
  dom=REXML::Document.new File.new(@report_filename)
  if dom.elements["report/summary"].nil?
    if not dom.elements["report/analyzerFailure"].nil?
      #A failure code has been identified in the report document
      msg = dom.elements["report/analyzerFailure"].text.to_s.strip
      @log.warn("WSDL [#{@wsdl_name}] WS-I Failure Report [#{msg}] ")
      @errors << msg
    end
  else
    #Summary status is passed (not sure if summary can be failed/error)
    status = dom.elements["report/summary"].attributes["result"]
    @wsi_approved = (status == "passed")
    @log.info("WS-I Approval Status [#{status}] for WSDL [#{@wsdl_uri}]")
    #Add a report summary to the list of reports
    @reports[@report_filename] = status
  end
  return true
rescue => ex
  #Unable to acquire status - treat report as invalid/missing
  @log.error("Unable to complete WS-I Analysis of [#{@wsdl_name}]: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
  return false
end

def create_dynamic_wsi_config(svc, port)
  @report_filename = "#{@workspace}/#{svc}-report.xml"
  @analyzer_config_uri = "#{@workspace}/#{svc}-config.xml"
  #Build a per-service config document from the template: point the report at
  #this workspace and identify the service/port under analysis
  cdom = REXML::Document.new(WSI_ANALYZER_CONFIG_TEMPLATE)
  cdom.root.elements["wsi-analyzerConfig:reportFile"].attributes["location"] = @report_filename
  wel = cdom.root.elements["wsi-analyzerConfig:wsdlElement"]
  wel.attributes["parentElementName"] = svc
  wel.attributes["namespace"] = @wsdl_namespace
  wel.text = port
  cdom.root.elements["wsi-analyzerConfig:wsdlURI"].text = @wsdl_uri
  @log.info("WS-I configured to report to [#{@report_filename}]")
  #Now write the configuration file into the workspace of the artifact
  open("#{@analyzer_config_uri}",'w'){ |f| f << cdom.to_s }
  @log.info("Written dynamic WS-I config into [#{@analyzer_config_uri}]")
rescue => ex
  @log.error("Unable to generate dynamic config for WS-I: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
end

def extract_target_elements
  #Need the servicename, portname and other stuff from the wsdl
  dom = REXML::Document.new(@wsdl_source)
  @wsdl_namespace = dom.root.attributes["targetNamespace"]
  @wsdl_service_declarations = {}
  #Get the DEFINITIONS/SERVICE/PORT.name attribute
  #TODO: Need to ensure I can handle a WSDL with multiple SERVICE/PORT combinations
  dom.elements.each("definitions/service") do |servicenode|
    servicenode.elements.each("port") do |portnode|
      @wsdl_service_declarations[servicenode.attributes["name"]] = portnode.attributes["name"]
    end
  end
  @log.info("Extracted Namespace[#{@wsdl_namespace}] Services[#{@wsdl_service_declarations.inspect}] from WSDL")
  if @wsdl_service_declarations.size<1
    raise "No Service elements [definitions/service] detected in WSDL [#{@wsdl_name}]"
  end
rescue => ex
  @log.error("Unable to extract WSDL markers for WS-I: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
end

def acquire_wsdl
  @wsdl_source = open(@wsdl_uri).read
  @log.info "Acquired WSDL [#{@wsdl_uri}] [#{@wsdl_source.length}] bytes of WSDL"
rescue => ex
  @log.error("Unable to acquire WSDL [#{@wsdl_uri}]: Ex #{ex.message}\n"+ex.backtrace.join("\n"))
end

def configure_environment
  #display WSI environment variables
  ENV[WSI_HOME_TAG] = WSI_HOME_VAL
  ENV["WSI_JAVA_OPTS"] = WSI_JAVA_OPTS_VAL
  @log.debug("WSI environment: #{WSI_HOME_TAG}=[#{ENV[WSI_HOME_TAG]}]")
  #As we are modifying the path environment variable, we need to ensure that the correct delimiter is used
  delimiter = ":"
  unless ENV["OS"].nil?
    unless ENV["OS"].downcase.index("windows").nil?
      delimiter = ";"
    end
  end
  #Now add the path segments
  WSIClasspath.each do |val|
    ENV["CLASSPATH"] = "#{ENV['CLASSPATH']}#{delimiter}#{val}"
  end
end
end


VMWare Fusion and Intermittent XP VM Networking

August 19, 2008

This is a strange one that’s been perplexing me. VMWare Fusion 1.1.3, running WinXP SP2 solid as a rock for nearly 9 months. I use a NAT networking configuration on my XP VM, relying on my Macbook Pro host to establish a wired/wireless connection to the web, which is then shared by the VM. As I say – s-o-l-i-d as a rock!

Recently though, I noticed an increasing trend of the XP VM networking manager informing me that the XP networking connection was only partially configured after a failed initialisation, and therefore offered limited functionality – which meant NO connection in reality.

I trawled the web/forums for support but predictably stumbled across the same mix of shared affliction and useless assistance – the “…er, reinstall everything and that should do it” from “yours faithfully, the new guy on the support desk just trying to make a living by reading page 1 of the firefighting guide”, or the equally ridiculous and unhelpful help from the geek who believes in packing as many three-letter acronyms into every uttered statement as possible.

Ultimately I drew a blank on the support front, which was very disappointing. I did, however, happen across an innocuous statement relating to DHCP and some kind of limitation when acquiring IP addresses over a wireless link. As such I began musing about whether ‘things’ or ‘services’ in my XP startup might be interfering with my XP IP stack obtaining its full configuration. Logic was screaming at me that all services using IP rely on the underlying IP stack to obtain its address, which is then shared by port-specific socket users… but still I went with the flow…

What follows cannot be explained (by me, anyhow) in scientific terms. I opened the XP service manager and stopped a range of services like Postgres, MySQL, SQL Server, and some others whose use I wasn’t clear on. That was the only change I made.

To my surprise, my rock-solid XP VM stability returned with guaranteed networking every time ! I offer this post not as a technically enriching article, but as a last stop for those as desperate as I was when afflicted by this intermittent yet highly painful symptom !
