Tuesday, December 14, 2010

Adventures with BizTalk: HTTP "GET" Part 4: Custom HTTP Adapter

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

So, if we can't use the HTTP Send Adapter to retrieve files from a dynamically-obtained URL, can we override the behaviour in a custom HTTP Adapter?

Hmm... Well, first of all, writing a custom Adapter for BizTalk isn't something that I'd normally embark upon lightly. After all, with the OOTB BizTalk Adapters and 3rd-party Adapters available, why write your own that you then have to test, support and maintain, particularly just to retrieve a file?!

So, in this case, I wasn't REALLY considering writing a custom HTTP Adapter... What I did briefly consider though was whether we could leverage the existing BizTalk HTTP Send Adapter, and just override its use of the "POST" HTTP verb.

The plan was coming together nicely: I'd identified the piece of code in the BizTalk HTTP Send Adapter that hardcodes the HTTP verb... It's in Microsoft.BizTalk.HttpTransport.dll, in the HttpAsyncTransmitter class's SetRequestProperties method:

request.Method = "POST"

So temptingly close, yet so far. Alas, the majority of the classes in Microsoft.BizTalk.HttpTransport.dll are sealed, and HttpAsyncTransmitter is one of them... So I couldn't inherit from it and override this one line!

I was left with the alternative of potentially disassembling the entire DLL and using it as the basis for writing my custom Adapter, or just going it alone...

Although I was already strongly leaning away from the custom Adapter approach, this really hammered the final nail in place...

So, it seems as though we've exhausted the alternatives for leveraging the OOTB BizTalk HTTP Adapters...

[UPDATE]

Another alternative to my rather brutal efforts to extend the OOTB BizTalk Adapter is to use the HTTP Adapter sample in the BizTalk SDK, also known as the "HTTP.NET" Adapter.

This sample is a rather rudimentary custom .NET implementation of an HTTP Adapter, so it's nowhere near as fully-featured as the OOTB HTTP Adapter, but it's a good start. And the good part is that we have the full source code, so we can change it to suit our needs.

The HTTP.NET Adapter implementation already supports specifying the HTTP verb as part of its configuration [why this hasn't been worked back into the OOTB HTTP Adapter I'm not sure].

In our case, the only change required came down to modifying the SendHttpRequest method in the HttpTransmitterEndpoint class (excerpt below):

private WebResponse SendHttpRequest(IBaseMessage msg, HttpAdapterProperties config)
{
  // ...
  // Original code...
  // Note: both the body part and the stream may be null, so we
  // need to check for them
  string charset = string.Empty;
  IBaseMessagePart bodyPart = msg.BodyPart;
  Stream btsStream;

  if (request.Method != "GET")
  {
    // Original code...
    if (null != bodyPart && (null != (btsStream = bodyPart.GetOriginalDataStream())))
    {
      // ...
    }
  }

  return request.GetResponse();
}


The only change we've made is to wrap the code that deals with the request message body inside a check against the HTTP method (verb). If it's GET, we don't want to pass a request message body. Otherwise, we play on as per the original code.

This change could certainly be tidied up: there may well be instances where we DO want to pass a body in the request message even when we're using GET, so you could introduce another configuration property such as "SuppressMessageBody" to cater for this.
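
As a rough sketch, and assuming a new "SuppressMessageBody" property were added to the HttpAdapterProperties configuration class (it isn't part of the SDK sample as shipped), the relevant part of the excerpt above might become something like:

  // Hypothetical: drive body suppression from configuration rather than
  // hardcoding it against the GET verb. "SuppressMessageBody" is an assumed
  // new property, not part of the SDK sample.
  if (!config.SuppressMessageBody)
  {
    // Original code...
    if (null != bodyPart && (null != (btsStream = bodyPart.GetOriginalDataStream())))
    {
      // ...
    }
  }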

The end result, however, was that this change allowed us to achieve what we were after.

Looking back at it, I'm still not sure where my preference would be: I really do prefer to use OOTB Adapters where possible, not write and maintain my own, particularly when the custom version is a lot less feature-rich than the OOTB Adapter. However, I do kind of like using Adapters over custom .NET code... Now if the OOTB HTTP Adapter just included configuration properties for HTTP verb and message body suppression... Isn't this kind of requirement likely to become more prevalent with the increased focus on using REST?

When is String.Empty != String.Empty?

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

I've just spent the last few days at a client site investigating a particularly obscure issue...

We've been working with a third-party API developed in .NET 1.1. A while ago, some ASMX services were built on top of the API, in VS2003 / ASP.NET 1.1. We wanted to upgrade these to VS2008, and eventually to WCF instead of ASMX.

So, I ran the upgrade on my development machine, everything upgraded fine, the services were functional, and I could successfully call the .NET 1.1 API. I deployed the services to an integration web server, and they stopped working with an obscure custom API error. I then spent a day reverse engineering the relevant bits of the API out into plain code using Reflector and the FileGenerator add-in (once again, yay for Reflector!), only to discover that the issue was with the following lines of code:

Private _OpenTags As String = ""
'...
' A reference comparison ("Is"), not a value comparison - it relies on ""
' being interned to the same instance as String.Empty
If Not _OpenTags Is String.Empty Then
  '...
End If

When the value of _OpenTags was "", the If test was returning False on my development machine (expected), True on the integration web server (unexpected).

OK... so after investigating potential differences in Framework configuration (eg, does Culture make a difference to how empty strings are evaluated?), and OS (development = WinXP, integration = Win2k3), I abandoned those paths and investigated the Framework versions on each machine.

Development machine had .NET 1.1, .NET 2.0 SP2, .NET 3.0 SP2, and .NET 3.5 SP1.

Integration machine had .NET 1.1, .NET 2.0 SP1, .NET 3.0 SP1, .NET 3.5 (no SP1).

OK... so, check out the list of fixes in .NET 3.5 SP1 (which also happens to include .NET 2.0 SP2 and .NET 3.0 SP2). Nothing there...

So, I built a test Win2k3 server, installed the same framework versions as the integration server, and ran my test... Sure enough, _OpenTags wasn't "equal to" (at least in a reference sense...) String.Empty, even when it was set to "". Installed .NET 3.5 SP1, and re-ran the test... Success: _OpenTags was String.Empty when set to "".

Eventually, to justify my desire to install .NET 3.5 SP1 on the integration server, I found the following in the .NET Framework 3.5 documentation:

http://msdn.microsoft.com/en-us/library/system.string.intern.aspx

The relevant part is the "Version Considerations" section, particularly the following statement:

In the .NET Framework version 3.5 Service Pack 1, the Intern method reverts to its behavior in the .NET Framework version 1.0 and .NET Framework version 1.1 with regard to interning the empty string.

... (example)

In the .NET Framework 1.0, .NET Framework 1.1, and .NET Framework 3.5 SP1, str1 and str2 are equal. In the .NET Framework version 2.0 Service Pack 1 and .NET Framework version 3.0, str1 and str2 are not equal.

Hmm... that would explain it... Thanks Microsoft. I haven't been able to find this listed in the "Breaking Changes" for .NET 2.0, maybe because it's so obscure, or just slipped their minds...

Note that it also depends on the way that you compare your string to String.Empty... In my case, as I was working with a third-party API, I didn't have any control over that...
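
To illustrate the difference (a contrived C# snippet of my own, not the third-party API's code):

string openTags = "";

// Value comparison: true whenever openTags contains "", regardless of how
// (or whether) the empty string has been interned.
bool valueEqual = (openTags == string.Empty);

// Reference comparison (the equivalent of the VB "Is" operator the API was
// using): true only if openTags and String.Empty refer to the same object,
// which is where the interning behaviour described above comes into play.
bool referenceEqual = object.ReferenceEquals(openTags, string.Empty);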

Thanks for listening...

Thursday, December 9, 2010

CloudApp Part 1: Business Purpose & Architecture

A week or 2 ago I mentioned building my first app in the cloud and that I'd be blogging about some of the challenges this had posed. Well, I thought a good start (like in any project) would be to provide an overview of the business purpose the app was trying to fulfil, and the architecture we were proposing to employ.

Purpose

Although we started with a very lofty goal of providing a resource management application to help manage the resources within Chamonix, we decided that our first version should focus on capturing information about our people - things like basic resource profile information (name, office etc), as well as more interesting things like skills and education. This was a basic need for our sales guys to be able to know who had experience in what, whether it be technical, business, or industry oriented. Our plans were to build this basic data capture capability, and over time to flesh this out with further capabilities such as resource planning (potentially based on skills and availability).

Obviously a very important secondary goal for this exercise was to explore building apps in the cloud and get a better grip on some of the potential challenges - as well as some of the opportunities this presents. One of those opportunities (that we're still in the process of evolving) was the ability to allow users to sign in to our app using federated identity - eg Google, Facebook, or Windows Live. And of course while we were at it, if possible, to pull in information from their profile, such as their profile picture. As you'll see as we go along, this is getting easier and easier, but it also raises its own considerations.

Architecture

So, this is the high-level technology architecture we're using:
  • Data Store: SQL Server -> SQL Azure + Azure Storage
  • Data Access: Entity Framework 4.0
  • Business: WCF RIA Services
  • Presentation: ASP.NET 4.0
  • Authentication + Authorisation: Forms -> Azure Access Control Service
  • Hosting: IIS -> Azure Web Role
There are a few things to note here. Firstly, the arrows indicate roughly where we've made changes to the app in order to deploy it to the cloud. Secondly... Entity Framework 4.0 + WCF RIA Services + ASP.NET isn't such a common combination, but I wanted to try it out. You'll see my observations on the results in a future post...

Next time I'll get into some of the fun of building the parts of this app.

Monday, December 6, 2010

Adventures with BizTalk: HTTP "GET" Part 3: BizTalk HTTP Send Adapter

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

In my last post on this subject, I investigated the use of the BizTalk HTTP Receive Adapter for retrieving files from a dynamically-obtained URL, with limited success. Today I'll look at the HTTP Send Adapter as an alternative option...

So, where the HTTP Receive Adapter polls a remote HTTP location looking for files that match its configuration and then retrieves them into the BizTalk MessageBox, the HTTP Send Adapter is (typically) used to send a message to a remote HTTP location. Most often this would be initiated through either the BizTalk messaging or orchestration engine, with the message to be sent constructed by the relevant engine before being POSTed to the remote URL.

And here's the problem with the HTTP Send Adapter... Even though HTTP includes a number of verbs that a client can use to initiate a request to a remote HTTP location, the BizTalk Send Adapter only supports "POST" - and what we want in this scenario is "GET". So although you could quite reasonably expect that you could use the HTTP Send Adapter to send out a request for a file via HTTP "GET", and to receive the response (the file itself) back, that's not the way the Adapter works. In fact, alas, the HTTP verb ("GET", "POST" etc) is hardcoded inside the Adapter's .NET code to only send messages via "POST".

Hmmm, so if using the OOTB HTTP Send Adapter isn't going to work, might we not be able to override this behaviour in a custom Adapter? Well, that's a question for next time...

Adventures with BizTalk: HTTP "GET" Part 2: BizTalk HTTP Receive Adapter

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

Last time I gave an overview of some potential different approaches to achieving a dynamic HTTP GET using BizTalk.

In this post I'll talk about the first one of those, using the BizTalk HTTP Receive Adapter.

As you probably know, BizTalk includes an HTTP Receive Adapter, which is designed for the scenario where you point it at an HTTP URL, and the Adapter polls that location looking for files that match some criteria on a periodic basis. The idea of this (similar to other Receive Adapters) is that when a file matches, BizTalk pulls that file in through the Receive Location configured with the Adapter (using HTTP GET), and it can then be used by the BizTalk messaging and/or orchestration engines.

There are two main problems with using this Adapter for our scenario.

Firstly, the intended use for the HTTP Receive Adapter is when you know in advance what the URL you're going to be polling is, and can configure it "statically". In our case, we don't know this URL until we're processing our "source" message in our orchestration instance. And although you can dynamically set transport properties for Send Ports, it's not possible to do the same for Receive Ports / Locations.

You could do something tricky like dynamically creating Receive Ports / Locations, and indeed that's something I've considered in the past (but with the FTP Adapter) when you have the scenario of files that you want to receive from a fixed location, but whose file name changes each day. You can certainly achieve this using the classes in the BizTalk Explorer Object Model... The problem though is that the Explorer OM is only supported in a 32-bit process, and ideally we don't want to constrain ourselves in that way.
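
For completeness, here's roughly what that looks like (a heavily simplified sketch only: the names, connection string and URL are illustrative, and the receive handler and pipeline assignments are omitted):

using Microsoft.BizTalk.ExplorerOM;

BtsCatalogExplorer catalog = new BtsCatalogExplorer();
catalog.ConnectionString = "Server=.;Initial Catalog=BizTalkMgmtDb;Integrated Security=SSPI;";

// Create a new one-way receive port and location for the URL we've just
// extracted from the "source" message.
ReceivePort port = catalog.AddNewReceivePort(false);
port.Name = "DynamicFileRetrievalPort";

ReceiveLocation location = port.AddNewReceiveLocation();
location.Name = "DynamicFileRetrievalLocation";
location.Address = "http://remote.example.com/files/design.zip";

// Find the HTTP transport among the registered protocol types.
foreach (ProtocolType protocol in catalog.ProtocolTypes)
{
  if (protocol.Name == "HTTP")
  {
    location.TransportType = protocol;
    break;
  }
}

// Receive handler and pipeline assignments omitted for brevity...
location.Enable = true;

catalog.SaveChanges();
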
The second problem with this approach is that the HTTP Receive Adapter is a Receive Adapter, so it's mainly of use when you want to poll a location, find a file that matches, receive the file into BizTalk, and activate an orchestration instance based on the file having been received. In our case though, we're already in an orchestration instance... so how do we trigger the Receive Adapter to start polling for a file, and even once it's been received by BizTalk, how can it be received by our "source" orchestration instance and/or by another dedicated orchestration that is somehow tied to our "source" instance?

Again, there are certainly ways that you can achieve this:

(a) Using dynamically created Receive Ports / Locations, and having the "source" orchestration Listen to the BizTalk MessageBox for a message that matches the file you're waiting for;
(b) Having a separate orchestration that "dumbly" receives the file (asynchronously) and then sends a "trigger" message that the "source" orchestration Listens for;
(c) If the "source" orchestration really doesn't care whether the file is received or not before continuing, it can send a message to the MessageBox that triggers a separate orchestration that asynchronously retrieves the file and performs any processing required.

I guess the main take-away from these two problems is that neither of them is exactly "easy" to overcome, and both can potentially make the solution much more complicated (and less scalable) than is ideal.

Surely there must be a better alternative?

Wednesday, November 24, 2010

Adventures with BizTalk: HTTP "GET" Part 1

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]


A while ago I was involved in a BizTalk project where we had a conceptually simple (and I would expect common) requirement: We would be receiving messages from an external party, and as part of the orchestration processing of those messages, we needed to use information in the message to dynamically retrieve one or more files from an external website based on a provided URL, and then write them out to a pre-configured location in the filesystem.
 
I'll point out at this stage that the problem I'm attempting to solve here is actually getting the remote file: the parts about extracting the URL from the original incoming message, and writing the retrieved file out to the filesystem, are trivial...
 
Anyway, it couldn't be that hard to retrieve a remote file, could it? After all, BizTalk is all about connectivity, and has a host of adapters that should be able to solve this problem!
 
Over the next few posts, I'll describe my adventures in devising an appropriate solution to this problem, and you'll see that it wasn't as easy as it sounds (or as it should be)!
 
Potential Solutions


To start off, in this post I'll list each of the potential solutions I considered:
  • The BizTalk HTTP Receive Adapter
  • The BizTalk HTTP Send Adapter
  • A custom HTTP Adapter (including the "HTTP.NET" Adapter sample from the BizTalk SDK)
That's it for this time, next time I'll start making my way through each of the solutions, discarding them at will!!!

Getting started in the Cloud

I haven't posted in a while because I've just started with my new company, Chamonix IT Consulting. Chamonix covers a number of core competencies such as architecture, systems integration, BI, portals and collaboration, and traditional app dev. However, one of the most exciting competencies is our focus on cloud computing. In fact, Chamonix practices what it preaches, and all of our LOB systems are cloud-based - we don't have a single on-premise server to host our LOB systems.

Over the last few weeks I've been rapidly ascending into the cloud and getting my head around what it all means from an architectural and business perspective, what its strengths and weaknesses are, and when and how a client might consider a cloud solution as opposed to a traditional on-premise one. Having all of your LOB systems cloud-based certainly helps in this regard, as you experience the pleasure and the pain first-hand.

To get a deeper perspective on some of the challenges for developing in the cloud, we decided to prototype a relatively simple resource management application that we'd deploy to Microsoft's Windows Azure platform. It's certainly proved to be an eye-opening exercise with a number of challenges, some related to cloud technologies, some related to emerging technologies on the Microsoft platform that could be used on-premise or in the cloud.

Over the next few weeks I'll post about some of our experiences, some of the challenges and how we've overcome them, and my overall take on whether we were successful in what we set out to achieve or not. So you know what's in store, here's a summary of some of the technologies we've touched and I'll be mentioning:

  • SQL Azure
  • Windows Azure (Web Role)
  • Azure Platform AppFabric Access Control Service
  • OpenID & oData
  • ASP.NET 4.0
  • Entity Framework
  • WCF RIA Services
Until next time!

Friday, November 5, 2010

A few of my not so favourite things...

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

ASP.NET LoginName control casing

The ASP.NET LoginName control displays the name of the logged-in user. Unfortunately, it displays the name of the logged-in user using whatever the user typed in to the log-in form... so, if we have a user named "David", and I type in "daVID" into the log-in form, the LoginName control will faithfully display my name as "daVID", rather than retrieving and displaying my name using the casing it was defined with.

I had a client that was unhappy with this behaviour. There are a number of work-arounds including deriving a custom LoginName control from the ASP.NET LoginName control. In the end, I went with an approach that handled the OnAuthenticate event of the Login control on the log-in form (ie, the control the user types their username and password into), validated the user's credentials, and if valid, retrieved the user from the membership store and set the value of the Login control's UserName property to the value retrieved. The Login control then takes over and creates the FormsAuthentication cookie etc using the correctly cased user name, and the LoginName control uses that throughout the lifetime of the login...
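
In code, the approach looks something like this (a minimal sketch: the control IDs are assumed, error handling is omitted, and it presumes the standard ASP.NET Membership provider):

using System.Web.Security;
using System.Web.UI.WebControls;

protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
{
  // Validate the credentials exactly as the user typed them...
  if (Membership.ValidateUser(Login1.UserName, Login1.Password))
  {
    // ...then pull the user back out of the membership store and overwrite the
    // typed username with the correctly-cased version. The Login control then
    // creates the FormsAuthentication cookie using this value, and the
    // LoginName control displays it for the rest of the session.
    MembershipUser user = Membership.GetUser(Login1.UserName);
    if (user != null)
    {
      Login1.UserName = user.UserName;
    }

    e.Authenticated = true;
  }
  else
  {
    e.Authenticated = false;
  }
}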

A small but annoying "feature"...

IIS 5.1 MaxConnections

I was creating a test harness for conducting some performance benchmarking of a BizTalk solution using LoadGen. The development environment in this case was based on Windows XP, and hence IIS 5.1, whereas the actual test environment was based on Windows Server 2003, and hence IIS 6.0.

Whilst developing the test harness however, I was encountering "Access denied" errors back from IIS whenever I ramped up the number of messages I was sending to the WCF endpoint BizTalk was exposing...

After checking the obvious security-related bits, my first thought was that it was something BizTalk-specific that was causing message throttling to occur. It wasn't. Nor was it some WCF-level setting.

In the end it turned out to be an IIS 5.1 limit on the number of "active" connections it allows at any one time: 10, by default. Because LoadGen was spinning up multiple threads to load the endpoint, and the response from BizTalk was taking a while to be generated (from another system) and IIS was holding the connection open, I was running into this limit.

Apparently you can raise the limit to a maximum of 40 concurrent active connections, using the following command:

cscript adsutil.vbs SET w3svc/MaxConnections 40
iisreset

IIS 6.0+ doesn't suffer from this limitation, as far as I've read.

MS DTC on Windows XP & Vista: Error Message 5: Access is Denied

If you receive something like this:

ERROR MESSAGE 5 - ERROR MESSAGE 5 - Access is Denied
Invoking RPC method on TURTLE86
Problem:fail to invoke remote RPC method
Error(0x5) at dtcping.cpp @303
-->RPC pinging exception
-->5(Access is denied.)

This error will only occur if the destination machine is a Windows XP or Windows Vista machine. It's an additional security restriction in the RPC layer that is configured on client operating systems. More details on this security aspect are described in the "RPC Interface Restriction" article on TechNet: http://technet.microsoft.com/en-us/library/cc781010.aspx.

To get rid of this error just follow these steps to configure the registry key and REBOOT the machine:

1. Click Start, click Run, type Regedit, and then click OK.
2. Locate and then click the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT
3. On the Edit menu, point to New, and then click Key. Note: If the RPC registry key already exists, go to step 5.
4. Type RPC, and then press ENTER.
5. Click RPC.
6. On the Edit menu, point to New, and then click DWORD Value.
7. Type RestrictRemoteClients, and then press ENTER.
8. Click RestrictRemoteClients.
9. On the Edit menu, click Modify.
10. In the Value data box, type 0, and then click OK. Note: To enable the RestrictRemoteClients setting, type 1.
11. Close Registry Editor and restart the computer.

From http://blogs.msdn.com/distributedservices/archive/2008/11/12/troubleshooting-msdtc-issues-with-the-dtcping-tool.aspx

Thursday, November 4, 2010

A few of my favourite things

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]


Deployment Framework for BizTalk
http://www.codeplex.com/biztalkdeployment

As if BizTalk development wasn't tricky enough, deploying BizTalk solutions can be a very painful exercise, particularly when you're doing it repeatedly.

The Deployment Framework for BizTalk is a blessing, providing a highly configurable MSBuild-based deployment framework that's integrated right into Visual Studio, making deployment as simple as clicking a toolbar button. On top of this, it also provides a suite of extra features over more manual BizTalk deployment techniques, including generation of server installation MSIs and SSO-based runtime configuration.

VB.NET XML Literals & Linq to XML

VB.NET seems to receive less attention than C# in many cases, but one case where it surpasses C# in .NET 3.5 is the ability to create XML literals. Using XML literals you can declare an implicitly typed variable, assign it an XML literal, and its type will be inferred from the assignment.

Add to this Linq to XML, which brings the power of LINQ to manipulating XML fragments, and you can do some pretty cool stuff.

The following example demonstrates both to create an XML representation of a set of search criteria.

Assuming we have a variable called searchCriteria that is an array of SearchCriterion objects with Name and Value properties, it transposes this array into an XML representation (in my case, for logging purposes).

Dim searchCriteriaXml = <searchCriteria><%= From c In searchCriteria Select <searchCriterion name=<%= c.Name %> value=<%= c.Value %> /> %> </searchCriteria>

The result of this is something that looks like the following:

<searchcriteria>
  <searchcriterion name="..." value="...">
  <searchcriterion name="..." value="...">
</searchcriteria>

Resize a VHD
http://vmtoolkit.com/forums/post/331.aspx
http://xtralogic.com/products_vhd_utility.shtml

For ages I thought it wasn't possible to resize a virtual hard disk that had been created at a particular fixed size... I thought it was stuck that way. We had several virtual development environments that had been created based on a pathetically small Windows XP virtual image, and they were, as far as I thought, marooned on a 7GB C: drive.

I'm not sure what caused me to take another look, but I'm glad I did... I won't post the exact process, but using information and the tools from the URLs above, I finally managed to resize these VHDs. Hurray!

Taking an ASP.NET application offline
http://weblogs.asp.net/scottgu/archive/2005/10/06/426755.aspx


Did you know you can take an ASP.NET application "offline" by placing a file named App_Offline.htm in the root of the virtual directory for the application? I didn't, until recently...

Entity Framework

I trialled the EF 1.0 on a project recently. I'd been looking to try it out for a while to see how it compared to our MyGeneration-based DAL approach, but had struggled to find a good place for it. I'd done a fair bit of reading in the meantime (including Julia Lerman's outstanding Programming Entity Framework book), so knew heading in that it's a huge topic of itself, and also, being a v1.0, not the finished product yet.

Given all that, and working within its limitations, I have to say it was actually on the whole a very pleasant experience to use, and it fit the bill in this case very nicely.

There are still a raft of issues and limitations with v1.0 that mean that you need to evaluate whether it's the right fit for your situation (many of which are being addressed by v2.0 = EF 4.0, released with .NET 4.0), but I have to say that in my case it was very "nifty" to use and to write Linq to Entities queries against the conceptual model, and performed very well, especially if you optimise the model and the Linq to Entities queries.

So all in all I'd say that in my experience it's not as "bad" as the rap it sometimes seems to get, you just need to know what to expect heading in, and evaluate whether it's really the right fit for your purpose.

WCFExtras
http://www.codeplex.com/WCFExtras

Provides the following:
  • SOAP Header support for WCF
  • Adding WSDL Documentation from Source Code XML Comments
  • Override SOAP Address Location URL
  • Single WSDL file for better compatibility with older SOAP tools.
Of these, I've utilised it for the WSDL documentation from source code (it does however require that you deploy the VS-generated .xml documentation file alongside your assemblies in the bin folder) and the single WSDL file option.

WSCF.blue
http://www.codeplex.com/WSCFblue/

I really like the philosophy of WSCF.blue: Develop the contract for WCF services (schemas and WSDL) first, and then generate the implementation code (.NET data / message / service contracts & interfaces) from it. It's kind of the reverse to the "traditional" code-first approach.

The download is an add-in for VS2008 that automates a fair bit of this for you. Again, I haven't had a chance to actually use this in anger, and I'm interested if any of these sorts of capabilities will be built into WCF 4 / VS 2010, but I like the idea...

XSLT Profiler
http://code.msdn.microsoft.com/xsltprofiler

It's been around for a while now, but I only just had a reason to use it. Essentially, it does what the name suggests, it profiles the performance of your XSLT and gives you a raft of information on where it's running slow. I used it to compare two different XSLT approaches to produce the same result to determine which was more speedy!

Microsoft Architecture Journal
http://msdn.microsoft.com/en-us/architecture/bb410935.aspx

Sometimes a bit dry, but usually filled with interesting articles that are less technically-focused than MSDNMag (which is also brilliant).

Wednesday, November 3, 2010

Technical Pain Points

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

Some recent technical pain points...


Visual Studio 2008 Web Application Projects & Profile Properties

To cut a long story short, VS2008 Web Application Projects don't natively support ASP.NET Profile Properties. In VS2008 Web Site Projects (which, in case you hadn't heard me ranting previously, I loathe), the "Profile" class is dynamically generated behind the scenes when the Profile is set up in the config file. However, the same doesn't occur in Web Application Projects. You can utilise this tool [http://code.msdn.microsoft.com/WebProfileBuilder] to enable support for building the Profile class as part of the build process. Think hard about using Profile Properties though: should these properties really be "first-class citizens" of your underlying database schema, rather than "tacked-on"?
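
As an aside, even without a generated Profile class you can still get at profile properties in a Web Application Project through the underlying ProfileBase API; you just lose the strong typing (the property name below is purely illustrative):

using System.Web;

// "PreferredTheme" is just an example property assumed to be defined in the
// <profile> section of web.config.
string theme = (string)HttpContext.Current.Profile.GetPropertyValue("PreferredTheme");
HttpContext.Current.Profile.SetPropertyValue("PreferredTheme", "HighContrast");
HttpContext.Current.Profile.Save();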

Using .NET TransactionScope with SQL Server 2000

Another goodie. TransactionScope, introduced in .NET 2.0, makes transactional .NET code a breeze! It uses a lightweight transaction in most cases, until the transaction requires escalation to a distributed transaction. Unfortunately, one of the cases where it doesn't use a lightweight transaction is when you're working against a SQL Server 2000 database. Yes, even if you're only accessing a single database on the SQL Server 2000 instance, you still start out with a distributed transaction: which means MS DTC becomes involved, and must be suitably configured on both the web server and database server.
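
To make the escalation point concrete, here's a trivial sketch (the connection string and SQL are placeholders): the same code uses a lightweight transaction against SQL Server 2005+, but is promoted to a distributed transaction against SQL Server 2000 even though only one connection is involved.

using System.Data.SqlClient;
using System.Transactions;

string connectionString = "..."; // points at the SQL Server 2000 database

using (TransactionScope scope = new TransactionScope())
using (SqlConnection connection = new SqlConnection(connectionString))
{
  // Opening the connection inside the scope enlists it in the ambient
  // transaction; against SQL Server 2000 this means a DTC transaction,
  // so MS DTC must be running and configured on both servers.
  connection.Open();

  using (SqlCommand command = new SqlCommand("UPDATE dbo.SomeTable SET SomeColumn = 1", connection))
  {
    command.ExecuteNonQuery();
  }

  scope.Complete();
}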

MS DTC Authentication between a Computer in a Domain and a Computer not in a Domain

Following on from the previous item...

Of course, in our situation, we were deploying to an environment where we had a database server that was a member of the corporate domain, and a web server that was in the DMZ, and not a member of the domain. There are 3 options for authentication between DTCs: Mutual Authentication (preferred), Incoming Caller Authentication, and No Authentication. Because in our scenario there's no common point of reference for authentication between the DTCs (and no trust can be established), we had to go with No Authentication. We then had to do what we could to further secure DTC communication between the web and database servers through firewall rules restricting ports and IPs.

SQL Server Collation Differences between the Server and the Database

SQL Server has a default collation for the server instance. By default, when you create a new database, it uses this collation. However, it's also possible to specify a different collation for the database (it's also possible to specify a different collation again on a column-by-column basis inside tables within the database, but that's just an aside).

Normally this isn't a problem (other than it's nice to decide on a collation and stick with it unless you really need a different collation). However, when you have stored procedures or functions in your database that create and use temporary tables, a difference in collation between the database and the server instance can be a problem, particularly if you're trying to join between tables in your database and the temporary tables you've created. In this case, you'll get a collation mismatch error.

The workaround is to specify the COLLATE DATABASE_DEFAULT statement for collation-aware columns when creating temporary tables in tempdb. This will ensure that the temporary table column collation matches that used in the database. Then you'll only have a problem if for some reason you've used yet another collation for the specific columns in your database you're joining on... Yay for collation!

Tuesday, November 2, 2010

Back online

Hmmm... So, you may have noticed I haven't posted in a while. In fact, you probably won't have noticed, because you don't exist, but anyway...

I've been busy...

For the last few months I've been working on a large app dev and integration project for a Defence client. The project has been focused around the provision of a web-based application to manage design data related to the electrical system for products that are assembled by the client. The most challenging (and hence interesting) part of the project has been that the design for the electrical system actually comes from an international design authority. So the majority of design data within the application, as well as design drawing files, are actually authored in another country, and need to be integrated into our application. I won't go into the details of the business, communication, and technical challenges (which were many), but in the end our solution has been based on a combination of system and human processes supported by technologies including SQL Server (database engine, integration services, and transactional replication), Oracle (database engine), .NET (ASP.NET, WCF) and BizTalk. The solution is now nearing system and acceptance testing, and initial results and feedback have been very positive.

Right in the middle of that project, I had the opportunity to start a new role with Chamonix IT Consulting (http://www.chamonix.com.au/). I've just started with Chamonix this month, and my focus here will be on enterprise architecture, integration, and the Cloud. It's a very exciting opportunity for me, particularly given that Chamonix is really just getting started - I'm looking forward to helping create the culture of a new IT consulting business, working with some great people, and with a particular focus on the Cloud.

So, I hope that explains in part why I haven't posted in a while.

What you should see in the next few days and weeks are quite a number of posts I've had in my backlog, so hopefully I'll be as good as my word and you'll see them soon. There's (I think) some pretty cool content to come, including some experiences I've had with BizTalk, WCF, ASP.NET, and Entity Framework over the last few months. I'll be back soon!

Sunday, August 15, 2010

My Love / Hate Relationship with Microsoft ESB Guidance 1.0

The ESB Guidance 1.0 for BizTalk Server 2006 R2 has been around for a while (in fact it's now been replaced by version 2.0 for BizTalk 2009). Not only does it offer an Enterprise Service Bus framework on top of BizTalk, but it also includes a very comprehensive framework for managing exceptions encountered by BizTalk orchestrations and messaging. 

I've had a few encounters with version 1.0, some of them good, some of them... not so good.

Encounter 1

My first time was as part of a POC for a client who I'd recently implemented a set of BizTalk 2006 R2 environments for, and they were looking to leverage a standard framework for exception handling. I'd been reading up on the ESB Guidance 1.0 at the time, and thought it could be just the thing to fill the gap.

The results of the POC were very disappointing.

Issues encountered. During the 3 days of the POC, 4 reasonably significant issues were encountered with the ESB Guidance code. These included:
  • Installation on Windows XP was not straight-forward and the install scripts utilised Windows Server features. I know a Windows XP development environment for BizTalk is less than ideal (trust me, I know), but this client had a tight restriction on what OSes could be used for what purposes, and we had to use Windows XP.
  • The pre-built ESB Guidance binaries signed with the Microsoft key incorrectly referenced a component using the source key (issue logged with Microsoft). This effectively means we couldn't use the pre-built binaries, and had to recompile and deploy based on the source code (something the client wasn't overly keen on doing - maintaining their "own" version of the source).
  • The ESB Management Console assumed UTC time offsets to be integer values (we’re +9.5h).
  • The date/time of exceptions submitted to the database used MM/dd/yyyy format.
Deprecation by Microsoft. The ESB Guidance 1.0 is based on BizTalk Server 2006 R2. With the release of BizTalk Server 2009, Microsoft released version 2.0 of the Guidance, named ESB Toolkit 2.0. They also announced plans for the rapid deprecation of ESB Guidance 1.0 (in fact it's gone from Codeplex).

Unproven feature: Resubmission. Resubmission through WCF ESB itinerary-based receive locations and HTTP-based receive locations was unsuccessful.

Requirement for Dundas Charts. The ESB Management Console utilises a third-party ASP.NET charting control package called Dundas Charts, which we were able to download a trial version of, but which the client was unwilling to buy licenses for.

Encounter 2

Following my first encounter I was most disheartened. However, as it had occurred many months ago, on a more recent engagement I decided to revisit the Guidance, again with a focus on using it for exception handling (yea, I know, why use all that other great stuff in the box?)

This time however I decided to just install the exception handling components from the separate MSI, not the "install everything" MSI. And much to my surprise, this MSI didn't suffer from some of the problems I'd encountered previously.

So, despite its deprecation, the client decided to implement its BizTalk exception handling strategy based on the ESB Guidance components. And it worked great! We had email notifications going to specific mailing lists when key exceptions occurred, we had all exceptions being logged to the ESB faults database... It was just what had been missing. Until...

Encounter 3

This encounter just happened (this week in fact), and it's still ongoing. I was assisting another developer in implementing the ESB exception handling components in an existing BizTalk application, and to test that the framework was working, we decided one of the easiest ways would be to simply "turn off" an endpoint that a Send Port was targeting. Our orchestration was suspended, not as we expected from our own Suspend shape in our exception handler (post ESB), but instead because of a failure in the ESB exception handling components themselves.

Fortunately BizTalk logged the details to the Windows event log when it suspended the orchestration instance, with an error message along the lines of:
Inner exception: Error 115001: An unexpected error occurred while attempting to create the ESB Fault Message.

Exception type: CreateFaultMessageException
Source: Microsoft.Practices.ESB.ExceptionHandling
Target Site: Microsoft.XLANGs.BaseTypes.XLANGMessage CreateFaultMessage()
Additional error information: An error occurred while parsing EntityName. Line 6, position 106.

The exception coming back from the "missing" endpoint was actually an EndpointNotFoundException, but for some reason the ESB exception handling components were struggling to create the initial FaultMessage in our expression shape in the scope exception handler.

Fortunately the ESB Guidance 1.0 also came with source code (it would have been close to useless without the source for all the bugs that needed to be fixed otherwise), so I was able to dig through the source for the CreateFaultMessage method and see what it was doing. Nothing really leapt out at me, but I could trace the exception to its attempt to create the initial FaultMessage from some template XML it was loading from a resource file. Something that was getting injected into one of the placeholders in this template was causing the exception, which given the "An error occurred while parsing EntityName" part, appeared to be related to XML content.

I reconstructed each of the values that were substituted and slowly built up the XmlDocument until it broke... It broke when I passed in the value for the placeholder that's populated from the exception's Message property. For whatever reason, for this particular EndpointNotFoundException (I don't know if it's something that BizTalk does or we were just lucky, because I haven't seen this behaviour for "standard" EndpointNotFoundExceptions), the Message property had the full stack trace in it as well as the actual exception message... And of course the stack trace included unescaped "&" characters, which need to be escaped as "&amp;" in XML. The ESB component wasn't doing this, hence the issue loading the XmlDocument object.

That seems pretty dumb, I thought. So I checked out the corresponding class in the ESB Toolkit 2.0 (via Reflector), and sure enough, it comes with a handy "CleanForXml" method that the exception's Message property gets passed through to escape XML reserved characters. So obviously someone noticed at some point and this has been fixed in version 2.0.
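
For what it's worth, the fix itself is trivial. The following isn't the Toolkit's actual CleanForXml implementation (and the element names are made up), it just illustrates the kind of escaping that was missing, using SecurityElement.Escape to handle the XML reserved characters:

using System.Security;
using System.Xml;

string rawMessage = "EndpointNotFoundException: could not connect to http://host/service & here comes the stack trace...";
string safeMessage = SecurityElement.Escape(rawMessage);

// The escaped value can now be substituted into the template XML safely.
XmlDocument faultDocument = new XmlDocument();
faultDocument.LoadXml("<Fault><Description>" + safeMessage + "</Description></Fault>");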

Anyway, now we're left in a bit of a dilemma... do we (a) fix the ESB 1.0 source code, but have to maintain our "own" version of the source, (b) stop using ESB 1.0 for exception handling and go back to the dark days of, well, suspended service instance mania, or (c) wait until the client upgrades to BizTalk 2009+ so we can use version 2.0 instead... [ignoring (d) cross our fingers and hope that there's never an XML reserved character in an exception message].

I'll let you know what we decide, but this has once again soured my temporarily restored faith...

Friday, July 23, 2010

BizTalk is not a part-time job

Over the last few years I've been engaged by a number of clients to work on projects involving Microsoft BizTalk Server. As I was embarking on this journey, one of my colleagues who had previously worked with BizTalk encouraged me with "You've done everything except BizTalk!"

What he said was true, I had done a lot of work with many of the technologies that support BizTalk such as XML, XPath, XML Schema, SQL Server, web services and WCF, and .NET. Having worked with these technologies did give me a very good grounding for getting started with BizTalk, and I was in most cases able to satisfy the requirements of each client in their use of BizTalk. It probably "helped" that the use of BizTalk is still somewhat in its infancy here, and hence clients' use (or expectations) for it is in many cases rather immature.

However, as I've come to appreciate the breadth of what BizTalk has to offer and the depth of knowledge required to fully leverage it, I've realised that working with BizTalk is not something that should be a part-time job.

Working with BizTalk is not something that you can pick up for 6 months, put down for another 6 months, and pick straight up again like other technologies I've worked with. Even though those other technologies (like .NET) actually change pace more rapidly than BizTalk does, I think it's more due to the underlying skillset and philosophy required to use BizTalk well. It's completely different to a typical "app dev" approach (which I've also had a good deal of experience with).

In the majority of app dev projects I've worked on, although the specifics of the functionality are different from client to client, there is usually a consistent set of patterns and architecture you can follow that you know will meet 90% of requirements.

BizTalk projects on the other hand are more usually EAI or SOA related: you're connecting a client's enterprise systems to their other enterprise systems or their partners' enterprise systems, or providing enterprise services that will be leveraged by other systems. The challenges are usually different on each engagement, and (at least I've found) require you to "think outside the box" more often than not. It's one of the things that most attracts me to working on projects involving BizTalk.

This is really where working with BizTalk shouldn't be part-time though: because in my opinion, the only way to learn what the product can do (out of the box and through extension where required) is to work with the product full-time. There is actually an excellent community and body of knowledge around using BizTalk (despite the expectation I was given when I first started working with it), but it really is a full-time job to keep up and stay across it all, and to get to play with interesting ideas and leverage them on clients' projects.

I know that to experienced BizTalk consultants I'm probably not saying anything new, this has just been my observation from my last few years. If you're serious about BizTalk, it's worth considering where it lies within your organisation's strategic direction and your own personal and professional development goals, whether your organisation is a consultancy or a direct consumer of BizTalk.

PS: And BizTalk administration is another job that shouldn't be part-time, but often is... But that's a story for another time.

Tuesday, July 13, 2010

Hi there!


Hi, and welcome to my first blog post!

My name is Dave Sampson, and I'm a software IT consultant located in Adelaide, Australia. I've been in the IT industry over 10 years, working in government, and as part of both small and large IT shops. Technically, I've focused mainly on the Microsoft platform, but more recently have been expanding my horizons into the Oracle / Java world. My focus is primarily on application and connected systems development, which on the Microsoft platform means a whole lot of .NET, SQL Server, WCF, and BizTalk Server.

Anyway, this blog is intended to share my experiences through my consulting career. There'll be a whole lot of deep technical stuff that you may or may not find useful, but from the fact that I'm posting it, I've probably found it useful at some stage. Hopefully there'll also be just as much consulting and professional skills sharing, because I'm always keen to learn and share about new, better, or just different ways of doing things.

I'm going to try to commit to posting once a week about something. As I'm moving from an internal blog, some of my posts will likely be re-posts of old [censored internal] material that I still think is relevant. And moving forward, you'll hopefully hear all about the ups and downs of my consulting career and the technical and professional challenges I've faced.

So anyway, after putting it off for a few weeks now, here's my first post. Hopefully it won't be so long til the next one.

See you next time!