Web Services between Dot Net and Not Net

Published on 07 January 2008

If you've only ever worked with Web Services in Dot Net, you could be forgiven for expecting it to be easy to use Web Services to interface with other platforms. In Visual Studio, it's all a bit Fisher-Price: you define your Web Methods, then add a Web Reference to the client and everything ticks along nicely. You don't even see any XML.

Recently I've been working for a client getting a Dot Net Web Service to work with a third-party system built in Perl. I have now discovered there are two sorts of web services:

  1. Mickey Mouse Web Services in which the client and server both run Dot Net
  2. Proper, Serious Web Services in which the server runs Dot Net and the client runs Not Net (anything else).

Other people in the industry seem to have noticed this too, and the current official term for Proper Serious Web Services is 'Interoperable Web Services'; that is, Web Services That Actually Operate.

Other people have already written lots of advice for building Interoperable Web Services. Here are a few articles I've found useful:

*   Returning DataSets from WebServices is the Spawn of Satan and Represents All That Is Truly Evil in the World (from Scott Hanselman's blog)
*   Top 5 Web Service Mistakes (by Paul Ballard, at theserverside.net)
*   Top Ten Tips for Web Services Interoperability (by Simon Guest at Microsoft)

One piece of advice that keeps cropping up for building Interoperable Web Services is to build them 'Contract First'. The teams working on the client and server ends of the web service get together and agree the XSDs that define the request and response of each Web Method. This irons out any problems with supported or unsupported types at an earlier stage. Code is then generated from the XSDs (or the WSDL) rather than the other way round.
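To make 'Contract First' concrete, here's a sketch of the kind of request schema the two teams might agree on. All the names here (the namespace, the `TestRequest` element, its fields) are invented for illustration, not taken from the actual project:

```xml
<!-- Hypothetical request contract for a 'Test' method; all names are examples -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/webservices"
           elementFormDefault="qualified">
  <xs:element name="TestRequest">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="CustomerId" type="xs:int"/>
        <xs:element name="Comment" type="xs:string" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>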

The third-party people we were working with suggested the contract-first approach, so we ended up swapping XSDs back and forth. This worked pretty well. When it came to code generation, Visual Studio 2005 provides two command-line utilities, xsd.exe and wsdl.exe, for generating code from XSDs or WSDL files.

We went down the xsd.exe route, as we didn't have any tools handy for building the WSDL from scratch. We then built the Web Methods using the objects that xsd.exe had generated. This helped somewhat, but we still had a series of 'interoperability' problems getting our Interoperable Web Services to interoperate. I'll describe some of them to give an idea of how inoperable interoperability can be:
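For reference, the invocations look roughly like this (the file and namespace names are invented; run each tool with /? for the full switch list):

```
rem Generate C# classes from a schema (file and namespace names are examples)
xsd.exe MessageTypes.xsd /classes /namespace:MyCompany.WebService.Messages

rem Or, if you have a WSDL file, generate a server-side skeleton from it instead
wsdl.exe /server MyService.wsdl
```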

**Problems with Blank Namespaces**

We had a proper Namespace for the Web Methods (which was defined using a [WebService(Namespace = "something")] attribute on the class that held the Web Methods) but some of the classes generated by xsd.exe had XmlRootAttribute() attributes that specified a blank namespace, like this:

```
[XmlRootAttribute(Namespace="", IsNullable=false)]
```

It turned out that this led to the web service expecting a blank xmlns attribute on the method element in the incoming SOAP message, like this:

```
<[method name] xmlns="">
  ...
</[method name]>
```

When Dot Net tried to call this web method, it had no problems (because it was following the WSDL exactly and so put the blank namespace in). But the third-party Perl guys were hand-coding their request code and were getting tripped up by the lack of an xmlns="" in the SOAP request they were sending. Eventually we figured it out and removed all the blank namespace definitions from the code, so that the XmlRootAttribute looked like this:

```
[XmlRootAttribute(IsNullable=false)]
```

We also had to change some of the XmlElementAttribute attributes from

```
[XmlElementAttribute(Namespace="")]
```

to

```
[XmlElementAttribute()]
```

(If we had used targetNamespace in our XSDs, or built a WSDL file and generated code from that, we probably would have avoided this issue.)
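The effect of that blank namespace is easy to reproduce outside Dot Net. This little Python sketch (my own illustration, not part of either system) parses two SOAP-ish envelopes that differ only in the xmlns="" on the method element, and shows that the method element ends up in a different namespace in each case, which is why the two ends disagreed about what a valid request looked like:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Two envelopes that differ only in the xmlns="" on the method element.
with_blank = f'<Envelope xmlns="{SOAP_NS}"><Body><Test xmlns=""/></Body></Envelope>'
without_blank = f'<Envelope xmlns="{SOAP_NS}"><Body><Test/></Body></Envelope>'

def method_tag(xml):
    """Return the fully-qualified tag of the first element inside the Body."""
    root = ET.fromstring(xml)
    body = root.find(f"{{{SOAP_NS}}}Body")
    return body[0].tag

print(method_tag(with_blank))     # Test  -- xmlns="" puts it in no namespace
print(method_tag(without_blank))  # {http://schemas.xmlsoap.org/soap/envelope/}Test
```

With xmlns="" the method element is in no namespace at all; without it, it silently inherits the envelope's default namespace, so the two requests are not equivalent even though they look almost identical.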
**Problems with SOAPAction**  
In SOAP 1.1, there is an HTTP header called SOAPAction that is supposed to get sent along with the SOAP request XML. For example, if you browse to a Dot Net asmx file and look at the example SOAP 1.1 request, the top of it looks like this:

```
POST [webservice url] HTTP/1.1
Host: [host]
Content-Type: text/xml; charset=utf-8
Content-Length: [length]
SOAPAction: "[namespace]/[method name]"

... then all the xml stuff ...
```

where the bits between the square brackets [ ] are filled in with the right values.

SOAPAction is a bit odd because it plays an important part, but it's not actually in the XML bit of the SOAP request. It's there so that the server can route the request to the right method without having to parse the SOAP to find the method name. But XML fans were a bit miffed about having an important part of their SOAP system not actually in the XML at all, so it was dropped from SOAP 1.2.
The problem we had with SOAPAction is that although it is mentioned in the official WSDL 1.1 definition, no specific format is defined. And guess what?  

*   Some platforms, such as CGI web services and Perl, use "<namespace>#<method name>" as the format, e.g. "something.com#test"
*   Dot Net uses the format "<namespace>/<method name>", e.g. "something.com/test"
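The difference is easy to capture in code. Here's a small Python sketch (the function name and parameters are mine, not from the project) that builds a SOAPAction value in each convention:

```python
def soap_action(namespace, method, style="dotnet"):
    """Build a SOAPAction HTTP header value in one of the two conventions."""
    # SOAP 1.1 expects the header value to be wrapped in double quotes
    separator = "/" if style == "dotnet" else "#"
    return f'"{namespace}{separator}{method}"'

print(soap_action("something.com", "test"))                # "something.com/test"
print(soap_action("something.com", "test", style="perl"))  # "something.com#test"
```

A one-character difference, but since the server uses SOAPAction to route the request, getting it wrong means the call never reaches your Web Method at all.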

The Perl guys thought we were doing it wrong; we weren't sure why our Web Methods were insisting on a slash instead of a hash; we couldn't find any definitive specification of what format it was supposed to be in; and it went back and forth for a while. It turns out the SOAPAction in Dot Net can be overridden by manually changing the WSDL. In the end, though, there was a little line of Perl code that got Perl to use the Dot Net format for SOAPAction, and that solved the problem.  
If you're using Perl's SOAP::Lite library to call Dot Net web services, this page may help you:  
[Simplified SOAP Development with SOAP::Lite](http://www.perfectxml.com/articles/perl/soaplite.asp) at PerfectXML  
**Problems with Geography**  
The Geography problem was that we had two different teams, in different organisations, trying to work together to solve plumbing issues in low-level HTTP and SOAP. In the end, most of the problems turned out to be pretty trivial. However, because of the time lag between us putting up a new version of our web services, the other team trying to call it using Perl, and them getting back to us with the results, what should have been trivial troubleshooting took days.  
In the end, even though we didn't know any Perl, installing it on our local network and running the test code that the Perl guys had provided proved pretty helpful. The benefit of being able to run the Perl code in-house whenever we wanted, and see the results immediately, easily offset the cost of not having any Perl skills. When we did need to change the Perl code, a bit of googling always led us in the right direction.  
So there's probably a more general lesson there: when you're in a situation like this, even if you've never used the other platform, it's likely to be worth setting it up locally, just so ideas can be checked and tested in one location instead of two locations having to work together.