Wednesday, September 14, 2011

BizTalk SSO Application Configuration Usage Note

I've been using the SSO Application Configuration tool for storing key/value config data for my BizTalk solutions, described here: http://blogs.msdn.com/b/teekamg/archive/2009/08/19/sso-configuration-application-mmc-snap-in.aspx, and available here: http://www.microsoft.com/download/en/details.aspx?id=14524.

I discovered today that an assumption I’d been making about this tool was incorrect.

You can export the configuration for a particular application to an encrypted “.sso” file (you need to provide a password during the export process). Commonly I do this and include the file in my Visual Studio solution, so that others working on the project can use it to subsequently import the same settings into their local SSO Application Configuration tool. It also enables the settings to be under source control.

The assumption I’d been making was that if you import the .sso file over the top of an existing application, it overwrites the existing settings with those in the file. This is not correct. The import only adds new key/value pairs that are not present in the application you’re importing into – it doesn’t delete key/value pairs that are no longer present in the file, and it doesn’t update existing key/value pairs if the value in the file is different to the existing value. I don’t like it.

This behaviour is documented in the Readme.txt that comes with the tool, but I must admit I hadn’t ever paid it much attention:

4.  Export Configuration Application

... if you add key/value pairs to an application and you wish to export the new values, export the application and when you import only the new key/values will be imported.  It is important to note that if existing values are changed then you will need to open the snap-in in the other environment and make those changes manually.  This was done intentionally because there are times that the same keys will have different values for different environments.
5.  Import Configuration Application
... If the application was new for this environment you will see a new application in the application tree.  If this application already existed you will see new key/pairs added to the existing application.

The only safe way to ensure that you have a verbatim copy of the settings in the exported .sso file is to delete the application first, then import it.

BTW, I came across this because I’m looking at trialing the msbuild task that comes with this tool to take care of the import as part of a dev deploy. Looks like I'll probably need to also include a build task to delete the SSO affiliate application first...
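If I end up doing that, I'm thinking something along these lines might work (a sketch only - I haven't tried it yet, and the application name is illustrative; ssomanage.exe is the Enterprise SSO command-line utility):

<Target Name="ResetSsoApplicationConfig">
  <!-- Delete the affiliate application first, so the subsequent import produces a verbatim copy of the .sso file. -->
  <Exec Command="&quot;$(CommonProgramFiles)\Enterprise Single Sign-On\ssomanage.exe&quot; -deleteapp MyApplication"
        ContinueOnError="true" />
</Target>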

Code Contracts in .NET 4.0

Today I started using a new .NET 4.0 feature called “code contracts” in my Visual Studio solutions. What these do is enable you to enforce preconditions and postconditions in your .NET methods. So for instance, a method that previously looked like this:

public static string GetValue(string key)
{
  if (string.IsNullOrEmpty(key))
  {
    throw new ArgumentException("Key is required.");
  }


  string result = default(string);

  // Code to determine result...

  if (string.IsNullOrEmpty(result))
  {
    throw new Exception("Unable to determine Value based on Key.");
  }


  return result;
}


Can be simplified to this:

using System.Diagnostics.Contracts;

//...

public static string GetValue(string key)
{
  Contract.Requires(
    !string.IsNullOrEmpty(key),
    "Key is required.");
  Contract.Ensures(
    !string.IsNullOrEmpty(Contract.Result<string>()),
    "Unable to determine Value based on Key.");

  // Code to determine and return result...
}

Obviously the more checks your method has, the more it simplifies things. Requires checks a precondition, and Ensures checks a postcondition.

Support for code contracts is baked in to .NET 4.0, but you need to install an additional component to enable design-time support in Visual Studio. You can download it from here: http://msdn.microsoft.com/en-us/devlabs/dd491992.aspx. Make sure you close any running instances of VS prior to installing.

After the install you’ll have an extra "Code Contracts" page in your project’s property pages. The only settings I’ve changed are:
  • Assembly Mode: Standard Contract Requires – according to the doco it sounds as though you should use this unless you have a good reason not to.
  • Perform Runtime Contract Checking: Checked – without this, your contracts won’t be enforced.
  • Assert on Contract Failure: Checked – I check this for my Debug build so that I get a visible Assert message when my contracts are violated.
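As a quick illustration of that last setting, here's a hypothetical caller of the GetValue method from the example above:

// With "Perform Runtime Contract Checking" enabled, this call violates the
// Contract.Requires precondition. In a Debug build with "Assert on Contract
// Failure" checked you'll get an assertion dialog; otherwise a contract
// failure exception is raised at run-time.
string value = GetValue(null);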
I’ve only just started using this, so my understanding will probably evolve as time goes on, but it looks pretty useful.

Thursday, August 11, 2011

Repairing & Configuring the SharePoint 2010 User Profile Service

I've been working on and off with SharePoint for a few years now. At the start of this year I had the opportunity to build a "kitchen sink" SharePoint 2010 VM using VirtualBox. At the time I configured most SharePoint features, but one thing I couldn't get working was SharePoint 2010's User Profile Service, which provides data synchronisation between user directories such as Windows Active Directory and the SharePoint user profile store. Earlier this month though I finally needed this feature, so had to revisit my configuration woes and battle through until I could get it working.

First of all, I'm extremely indebted to the following amazing blog posts and articles that helped hugely in rectifying the various ailments my UPS config was suffering from:

  1. Harbar.net: Rational Guide to implementing SharePoint Server 2010 User Profile Synchronization
  2. MSDN: Configure profile synchronization (SharePoint Server 2010)
  3. Clever Workarounds: More User Profile Sync issues in SP2010: Certificate Provisioning Fun
  4. When Technology Works: User Profile Sync provisioning remains in ‘Starting’ status (stops at ULS Eventid 9qh1 ILM Configuration: Configuring Certificate)
  5. Harbar.Net: “Stuck on Starting”: Common Issues with SharePoint Server 2010 User Profile Synchronization
Similar to the circumstances in posts (3) and (4) above, my UPS was stuck in the "Starting" state, with an error in "configuring certificate". What I ended up needing to do was to delete my UPS service application completely and start again. So that I could start with a truly clean slate, I followed the steps in (4) to delete the duplicate certificates that had been created in both the Trusted Root Certification Authorities store and the Personal Certificates store. I then used the PowerShell script in How to reset the Sync Machine Instance to unconfigure the UPS service application, and then deleted the UPS service application from SharePoint Central Administration. I then followed the steps in (1) exactly, and soon had a fully functional UPS! In particular, I found it was important to:

  • Delegate control for "Replicating Directory Changes" in Active Directory Users & Computers.
  • Add the "Allow" right for "Replicating Directory Changes" to the Configuration container in ADSI Edit.
  • Add the "Allow Logon Locally" right for the SharePoint farm service account in group policy.
  • Ensure the SharePoint farm service account is a member of the local Administrators group while configuring the UPS.
I also found that having the ULS Viewer running during configuration of the UPS was VERY informative and let me know that it WAS progressing (as per How to view progress of UPS provisioning).

One last thing I needed to do was to ensure that the FIM services (Forefront Identity Manager Service and Forefront Identity Manager Synchronisation Service) were set to "Delayed Start" in the Services applet.

HTH!

Thursday, July 28, 2011

BizTalk ESB Toolkit 2.1 Exception Handling bits


A few bits & pieces gleaned from using the Microsoft BizTalk ESB Toolkit 2.1 to provide a standard exception handling framework for BizTalk 2010.

ESB Fault Message Infinite Loop (CPU 100%)

There are certain conditions under which creating an instance of the ESB Fault message using a call similar to the following will cause your orchestration to enter an infinite loop, and your server's CPU to hit 100%:

faultMsg = Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.CreateFaultMessage();

While the call is perfectly valid, there is a bug in the ESB framework that under certain conditions will cause the CreateFaultMessage method to enter an infinite loop. The conditions are either:
  1. You call CreateFaultMessage outside an exception handling block inside a Scope shape.
  2. You call CreateFaultMessage after catching an exception that derives from Microsoft.XLANGs.BaseTypes.XLANGsException.
(1) means that you can't use CreateFaultMessage to create an instance of the ESB fault message schema outside an exception handling block. You can work around this by defining and throwing your own custom exception at the point where you would otherwise have called CreateFaultMessage, and then leave it to an exception handling block that catches your custom exception to call CreateFaultMessage and perform your exception handling pattern... I think this is probably a pretty good pattern anyway.
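For what it's worth, the exception handler's Construct Message / Message Assignment shape might then contain something along these lines (message and variable names are purely illustrative):

// Safe here because we're inside an exception handling block, catching our own
// custom exception rather than an XLANGsException:
faultMsg = Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.CreateFaultMessage();
faultMsg.Body.FailureCategory = "MyApplication.Faults";
// Attach the message that was being processed when the exception occurred:
Microsoft.Practices.ESB.ExceptionHandling.ExceptionMgmt.AddMessage(faultMsg, originalRequestMsg);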

(2) means that you have to be careful with what you catch and handle in your exception handling block, and if it derives from Microsoft.XLANGs.BaseTypes.XLANGsException, don't call CreateFaultMessage.

The following post has some suggestions for how to rectify this bug in the source: http://www.bizbert.com/bizbert/2011/05/06/Improving+The+ESB+Toolkit+Fixing+The+Endless+Loop+Bug+When+Creating+Fault+Messages.aspx

Creating Custom Exceptions for use with ESB Toolkit

If you decide to head down the path of defining and throwing your own custom exceptions for use with the ESB exception management framework, you need to follow certain rules in the custom exceptions:
  1. Decorate your class with SerializableAttribute.
  2. Inherit from System.Exception.
  3. Define a protected deserialization constructor.
For example:

using System;
using System.Runtime.Serialization;

[Serializable]
public class MyException : System.Exception
{
    internal MyException() : base() { }
    internal MyException(string message) : base(message) { }
    protected MyException(SerializationInfo info, StreamingContext context) : base(info, context) { }
}

Also note if you define any custom properties in your custom exception, these will need to be catered for in the deserialization constructor and by overriding the GetObjectData method, for example:

[SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)]
protected MyException(SerializationInfo info, StreamingContext context) : base(info, context)
{
  this.mScope = info.GetString("Scope");
}


[SecurityPermissionAttribute(SecurityAction.Demand, SerializationFormatter = true)]
public override void GetObjectData(SerializationInfo info, StreamingContext context)
{
  if (info == null)
  {
    throw new ArgumentNullException("info");
  }
  info.AddValue("Scope", this.Scope);
  base.GetObjectData(info, context);
}


Beware Null Values in Fault Message Properties

After you've created your instance of the ESB fault message using CreateFaultMessage, you would normally set properties of the message using its distinguished fields. Just beware setting any of these values to a null value - this causes the serialization of the message to fail. I usually use some sort of helper function that checks if the value that will be populated into the property is null and uses a default value if it is, for example:

faultMessage.Body.FailureCategory = MyExceptionManager.EsbPropertyProvider.GetFailureCategory(caughtException, MyExceptionManager.FailureCategories.Default);
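The helper referred to above is just my own wrapper (the MyExceptionManager names are mine, not part of the ESB Toolkit); the essence of it is simply something like:

public static string ValueOrDefault(string value, string defaultValue)
{
  // Never return null - a null distinguished field value breaks serialization
  // of the fault message.
  return string.IsNullOrEmpty(value) ? defaultValue : value;
}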

Writing Exception Details to the Windows Event Log

Lastly, a rather obscure one, but the ESB framework provides a helper function for writing exception details to the Windows Application event log. You need to add a reference to Microsoft.Practices.ESB.Exception.Management.dll, then in an expression shape you can use:

Microsoft.Practices.ESB.Exception.Management.EventLogger.LogMessage(
  exceptionToHandle.Message,
  System.Diagnostics.EventLogEntryType.Error,
  (System.Int32)Microsoft.Practices.ESB.Exception.Management.EventLogger.EventId.Default);


HTH!

Thursday, July 21, 2011

Issue opening orchestrations in Visual Studio 2010

This was something that had cropped up now and then when designing BizTalk orchestrations in Visual Studio 2010...

Once the orchestration had been opened in the "source" view (to edit the raw XML), from that point onwards Visual Studio 2010 would open the orchestration in text view all the time... The workaround was to use "Open With" and choose the designer...

It wasn't until I came across this post by Randal van Splunteren that I discovered a permanent way to fix the issue: Edit the .btproj file in Notepad and remove the <SubType>Designer</SubType> tags associated with each orchestration that suffers from the issue.
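For reference, the entry for an orchestration in the .btproj typically looks something like the following (names are illustrative) - it's the SubType element that gets removed:

<XLang Include="MyOrchestration.odx">
  <TypeName>MyOrchestration</TypeName>
  <Namespace>MyProject.Orchestrations</Namespace>
  <SubType>Designer</SubType>
</XLang>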

Thanks Randal!

Wednesday, July 20, 2011

WCF, Enterprise Library & Cruise Control

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

I recently had an interesting experience with Cruise Control automated builds. The scenario was this:
  • A set of web services implemented in WCF, with the constituent parts separated out into distinct projects in the Visual Studio solution: Common, Contracts, Implementation, Host.
  • The Exception Handling Application Block (EHAB) from the Microsoft Enterprise Library 4.1 was used for its great configurable exception handling & logging and exception shielding features.
  • We had automated builds set up on a build server using Cruise Control. Anytime you checked in changes to the solution, the build server would rebuild a new version and make it available for deployment to the "official" dev, test, and production environments. [This has since been changed, because it was kind of overkill and led to about a million (overstating) builds per day]
I was working on a local dev machine, modifying the web services, running them locally, and performing my unit tests successfully. I checked in my changes and asked for the latest build to be deployed to the "dev" web server, then ran the same unit tests. They ran fine until I executed a test that was intended to produce an error that needed to be handled by the EHAB configuration. Instead of the logging and custom Fault I was expecting, I got the default "shielded" Fault and exception message produced by EHAB: "An error has occurred while consuming this service. Please contact your administrator for more information. Error ID: {handlingInstanceID}".

Hmm... I double-checked that I was indeed sending in exactly the same "bad" request, that should be generating the exception I was expecting, and that the EHAB configuration should be handling it. Yes indeed.

To cut a long story of investigation and frustration short, it came down to a "clash" between the automated build in Cruise Control and my EHAB configuration.

My EHAB configuration was attempting to transform from a particular Exception type to a particular (custom) Fault Contract type using the Fault Contract Exception Handler. The EHAB configuration was referring to the Fault Contract type using the fully qualified strong name of the assembly, including the version number.

Now here's where Cruise Control was coming into the picture. The "standard" Cruise Control build script was, prior to build, performing a substitution within any AssemblyInfo files it was building, to replace the text "1.0.0.0" with the current Cruise Control build number. In my case, as this was a new solution, I hadn't changed the AssemblyInfo version numbers from their defaults of 1.0.0.0, and hence when my solution was built by Cruise Control, the assemblies ended up with the Cruise Control-generated version numbers. Of course, this led to EHAB looking for the assembly within which my Fault Contracts were located with a particular version (1.0.0.0), and the actual assembly that was deployed had a version number nothing like this.

Although I kind of objected to the rather agricultural textual "find-and-replace" in the Cruise Control build script, I wasn't in a position to be able to change it. The solution ended up being to modify my EHAB configuration to include the "short" version of the Fault Contract & Assembly name, rather than the "long" version (I'm sure there are official names for these). So, instead of something that looked like this:

XYZ.Service.Contract.ServiceOperationFault, XYZ.Service.Contract, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...

I replaced it with something like this:

XYZ.Service.Contract.ServiceOperationFault, XYZ.Service.Contract

This works fine in my case because the assembly is deployed alongside everything else, in the bin folder for the web services.

Concatenate values in a SELECT statement

I was digging through some old (very old) notes I had on SQL Server 7.0 and came across this one and thought I'd post it for my own reference... (updated a bit).

To concatenate the values of a particular column in a SELECT statement, do something like this:

DECLARE @technicianNames VARCHAR(max)
SET @technicianNames = ''

SELECT @technicianNames = @technicianNames + t.TechnicianName + ','
  FROM dbo.Technician t
 ORDER BY t.TechnicianName ASC

IF Len(@technicianNames) > 0
BEGIN
  SET @technicianNames = Left(@technicianNames, Len(@technicianNames) - 1)
END

Say you had the following values in the Technician table:

TechnicianName
--------------
Dave
Trevor
Agnes

The value of @technicianNames would be "Dave,Trevor,Agnes".
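For what it's worth, from SQL Server 2005 onwards the same thing can be done in a single statement using FOR XML PATH - a sketch, assuming the same dbo.Technician table:

DECLARE @technicianNames VARCHAR(MAX)

-- Build the comma-separated list, then STUFF removes the leading comma.
SELECT @technicianNames = STUFF(
    (SELECT ',' + t.TechnicianName
       FROM dbo.Technician t
      ORDER BY t.TechnicianName ASC
        FOR XML PATH('')), 1, 1, '')

(Note that FOR XML PATH will entity-encode characters like & and <, so it's best suited to simple values like these.)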

Hopefully it's useful to someone else too... There may well be a better way to do this in the post SQL Server 7.0 world, if there is, please let me know!

Tuesday, July 5, 2011

Not so RelativeSearchPath

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

Another .NET version compatibility issue encountered working with the same third party API as described in my earlier post When is String.Empty != String.Empty?

Under certain conditions when calling this third party .NET 1.1 API from .NET 3.5 we were receiving an exception "Invalid directory on URL". Fortunately the stack trace included enough information for me to whip out my best friend Reflector to reflect inside the API code to see what was going on.

The exception occurred when the API was trying to dynamically load another DLL using Activator.CreateInstanceFrom. In particular, it constructed the path to the DLL using the following:

AppDomain.CurrentDomain.BaseDirectory + AppDomain.CurrentDomain.RelativeSearchPath

This seems to be a fairly common practice where this type of dynamic loading is required and you need to construct the path at run-time. Unfortunately, most of the examples on the web use exactly this approach, string concatenation, to construct the path, and don't construct it (or check that it's valid) using System.IO.Path (another of my favourite friends).

When I checked out what the result of the AppDomain.CurrentDomain.BaseDirectory + AppDomain.CurrentDomain.RelativeSearchPath line was, I was somewhat bemused: It was something of the form "c:\projects\webapp\c:\projects\webapp\bin\" (with names changed to protect the innocent).

Huh? Surely that couldn't be right, otherwise it would never have worked!

I whipped up a simple ASP.NET 3.5 web app and examined the values of the two properties used to construct the path, and sure enough they were "c:\projects\webapp\"  and "c:\projects\webapp\bin\" respectively. By this stage, I was assuming that the .NET 1.1 API was expecting RelativeSearchPath to be simply "bin\"...

So, next stop, whip up a simple ASP.NET 1.1 web app and check out the values for the two properties... Hmm, interesting: as the API expected, they were "c:\projects\webapp\" and "bin\" respectively...

So, it would seem that under ASP.NET 2.0+, when the AppDomain is initialised for your web app, the RelativeSearchPath is actually evaluated to the complete physical path to the web app's bin folder... Yay... not so "relative"...

My work-around in this case (as I can't change the third-party API) is to change the AppDomain's RelativeSearchPath just before the call to the API to be "bin\", and just afterwards to be whatever it was before the call... Not pretty, but it works. What I'd really like to understand is why under ASP.NET 2.0+ it's not relative! My suspicion is that it may be initialised to "~/bin" by ASP.NET when the AppDomain starts, and is somehow evaluated to the absolute path as a result of the inclusion of the "~", but I can't be sure...
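For what it's worth, the workaround looks roughly like this (a sketch only; AppendPrivatePath and ClearPrivatePath are marked obsolete from .NET 2.0 onwards, but they're the only way I know of to influence RelativeSearchPath at run-time):

using System;

static void CallApiWithBinRelativeSearchPath(Action callThirdPartyApi)
{
  string original = AppDomain.CurrentDomain.RelativeSearchPath;

  // Temporarily make the search path genuinely relative, as the .NET 1.1 API expects.
  AppDomain.CurrentDomain.ClearPrivatePath();
  AppDomain.CurrentDomain.AppendPrivatePath("bin");
  try
  {
    callThirdPartyApi();
  }
  finally
  {
    // Put back whatever was there before the call.
    AppDomain.CurrentDomain.ClearPrivatePath();
    AppDomain.CurrentDomain.AppendPrivatePath(original);
  }
}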

Anyway, thanks for listening...

BizTalk backups to network share

Over the last few months, as well as doing actual development work, I've been assisting a client build and configure a set of BizTalk environments covering everything from development, system integration testing, user acceptance testing and training through to pre-production, production and disaster recovery.

One of the final steps in our build for certain environments has been to configure the BizTalk backup SQL job to regularly backup the BizTalk databases to a network share. Not only is this good practice, but it's a mandatory part of setting up BizTalk log shipping as part of a DR capability.

We created a hidden ($) network share, and assigned "Full Control" permissions to a specially-created "BizTalk_Backups" Active Directory group - both at the share level and the filesystem level. We then placed the service account used to execute the SQL job in this group.

However, when it came to executing the BizTalk backup job, we encountered an "Access denied" type error: "BackupDiskFile::CreateMedia: Backup device '...' failed to create. Operating system error 5(failed to retrieve text for this error. Reason: 15105)."

We double-checked the permissions we'd configured, re-created the share not hidden, even checked whether the same service account could write to a different share on the same server... Nothing succeeded.

What did work however was adding the "Everyone" group with "Full Control" permissions on the share and filesystem... but hang on, the SQL service account was a member of the "BizTalk_Backups" group which already had "Full Control" permissions to the share etc... Hmmm... So, we removed "Everyone", and explicitly added the SQL service account with "Full Control" permissions to the share and filesystem... and it worked!

We're still not sure why exactly, but it seems as though the account needed to be added explicitly, rather than via membership in a group... well, other than the "Everyone" group... So problem solved, but a mystery nonetheless. Interested to hear if anyone else has had a similar experience.

UPDATE: I was speaking recently with a colleague of mine who suggested the issue may have been a result of not having restarted the SQL Server and SQL Agent services after we'd added the SQL service account to the "BizTalk_Backups" group. These services may have been caching the group membership of the SQL service account - and a restart of the service may have caused this to be refreshed. I haven't had a chance to check this out, but it sounds plausible.

Sunday, April 17, 2011

Fun with the BizTalk 2010 Oracle E-Business Suite Adapter

I recently had an opportunity to use the BizTalk 2010 Oracle E-Business Suite WCF adapter, and thought I'd share some useful tips for getting it up and running and connected to an EBS instance.

We'd already installed the WCF adapters that come as part of the "Install BizTalk Adapters" (aka Adapter Pack) in the BizTalk installation media.

Next thing was that we needed the Oracle database client installed on the BizTalk dev machine and to be able to establish connections to Oracle databases. We were using Windows Server 2008 R2 x64 for our development OS, but had actually had quite a bit of trouble first locating and then getting the x64 Oracle client to work with BizTalk. It was fine from Visual Studio as it's a 32-bit application and uses the 32-bit Oracle client, but as soon as we attempted to interact with Oracle from the BizTalk runtime via a 64-bit host, no dice.

In the end we created a 32-bit BizTalk host and configured the Oracle adapters (Database and EBS) to use the 32-bit host for their receive & send handlers. This saved us wasting a lot more time attempting to get x64 Oracle client to work... unfortunately, there doesn't seem to be a heap of good documentation on this scenario anywhere.

Once we'd overcome that hurdle, the next was in establishing a connection to E-Business Suite itself. The documentation for the EBS adapter is VERY good, but there are a few things that can be a bit confusing when you're starting out.

First of all, when you're reading the documentation that comes with the adapter, make sure you know the "model" that the section you're reading relates to: the adapter can be used as a pure WCF adapter (ie, straight from .NET) or from BizTalk... there are some great tutorials in the documentation, but if you're trying to work with the adapter from BizTalk, the tutorials on getting it to work from .NET probably are going to be slightly less relevant and possibly even more confusing.

The next point of confusion is configuring the adapter binding itself. There are a host of properties to configure, and I'll list the key properties we used below. In particular though, note the fact that you need to specify 2 different sets of credentials when configuring the adapter: a set of credentials to connect to the Oracle database underlying E-Business Suite, and then a set of credentials for connecting to the E-Business Suite application itself.

The key properties we configured were:

Binding:
  • Application Short Name - For example, "SQLGL". You can identify the Application Short Name from the Action property of the binding that is generated within Visual Studio - something that looks like eg "InterfaceTables/Select/SQLGL/GL/GL_INTERFACE".
  • Client Credential Type - Identifies the type of credentials you're going to be using & where you'll specify them.
  • Oracle E-Business Suite Organization ID - Identifies the Organization you're interacting with in E-Business Suite. This will be a number, something like "83".
  • Oracle E-Business Suite Responsibility Key - Identifies the responsibility the E-Business Suite user has to interact with the application identified by Application Short Name for the Oracle E-Business Suite Organization ID. Will be something like "1_GL_SUPER_XXX".
  • Oracle Password - Password for the credential type NOT specified by the binding Client Credential Type property.
  • Oracle Username - Username for the credential type NOT specified by the binding Client Credential Type property.
  • Use Ambient Transaction - In our case set to False, as we didn't want our interactions with E-Business Suite to be wrapped in an encompassing transaction.
  • Enable BizTalk Compatibility Mode - Set to True in our case because we were using the adapter from BizTalk.
Credentials:
  • Username - Username for the credential type specified by the binding Client Credential Type property.
  • Password - Password for the credential type specified by the binding Client Credential Type property.
Your E-Business Suite administrator should be able to assist you with locating the correct values for each of these properties, but the 2 most important things to note are:
  • You need to enter credentials into the binding Oracle Password / Oracle Username properties and also into the credentials Username / Password properties. If you set the binding Client Credential Type property to "Database", then the Oracle Password / Oracle Username binding properties should be set to the E-Business Suite application-level credentials, and the credentials Username / Password properties should be set to the Oracle database-level credentials; if you set this property to "EBusiness", the Oracle Password / Oracle Username binding properties should be set to the Oracle database-level credentials, and the credentials Username / Password properties should be set to the E-Business Suite application-level credentials.
  • The specific credentials you choose depend on the values you specify for the Application Short Name, Oracle E-Business Suite Organization ID and Oracle E-Business Suite Responsibility Key properties: the E-Business Suite user you choose must have the responsibility identified by the Oracle E-Business Suite Responsibility Key property in the organization identified by the Oracle E-Business Suite Organization ID property for the application identified by the Application Short Name property.
Once we'd actually configured the adapter correctly, using it was very straight-forward, and the adapter documentation provides some great examples of each scenario.

HTH!

Adventures with BizTalk: HTTP "GET" Part 6: Custom .NET Code

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

So, we've seen that we can retrieve files from a dynamically-obtained URL inside BizTalk using the WCF-Custom Adapter, but it's not exactly straight-forward.

In the absence of any other bright ideas, next step is resorting to custom .NET code, so let's look at how we might achieve the desired result using this approach.

So, the basic design of the utility we'll build is that we'll pass in the URL of the file we want retrieved, and the utility will take care of retrieving the file and populating a BizTalk message instance with the results. That way, similar to the previous WCF-Custom Adapter approach, we can then do whatever we want with the message back in our orchestration.

.NET Component

Without further ado, here's the code for the .NET component that will do our job for us:

using System;
using System.IO;
using System.Net;
using Microsoft.XLANGs.BaseTypes;
// The FILE and HTTP property schemas used below come from a reference to
// Microsoft.BizTalk.GlobalPropertySchemas.dll.

public class WebUtility
{
  public static void Get(string sourceUrl, XLANGMessage targetMessage)
  {
    if (targetMessage.Count == 0)
    {
      throw new ArgumentException("Parameter 'targetMessage' must have at least one message part.", "targetMessage");
    }

    long bytesProcessed = 0;

    // Assign values to these objects here so that they can be referenced in the finally block.
    Stream retrievedStream = null;
    WebResponse response = null;
    MemoryStream memStream = null;

    // Use a try/catch/finally block as both the WebRequest and Stream classes throw exceptions upon error.
    try
    {
      // Create a request for the specified web-based resource.
      WebRequest request = WebRequest.Create(sourceUrl);
      if (request != null)
      {
        // Send the request to the server and retrieve the WebResponse object.
        response = request.GetResponse();
        if (response == null)
        {
          throw new System.Exception("GetResponse returned null.");
        }

        // Once the WebResponse object has been retrieved,
        // get the stream object associated with the response's data.
        retrievedStream = response.GetResponseStream();

        memStream = new MemoryStream();

        // Allocate a 1k buffer.
        byte[] buffer = new byte[1024];
        int bytesRead;

        // Simple do/while loop to read from stream until no bytes are returned.
        do
        {
          // Read data (up to 1k) from the stream
          bytesRead = retrievedStream.Read(buffer, 0, buffer.Length);

          // Write the data to the local file
          memStream.Write(buffer, 0, bytesRead);

          // Increment total bytes processed
          bytesProcessed += bytesRead;
        } while (bytesRead > 0);

        memStream.Seek(0, SeekOrigin.Begin);

        //Load the Binary representation into the first message part:
        targetMessage[0].LoadFrom(memStream);

        //Set properties of the message being returned.
        targetMessage.SetPropertyValue(typeof(FILE.ReceivedFileName), response.ResponseUri.Segments[response.ResponseUri.Segments.GetUpperBound(0)]);
        targetMessage.SetPropertyValue(typeof(HTTP.ContentType), response.ContentType);
      }
    }
    finally
    {
      // Close the response and stream objects here to make sure they're closed even if an exception
      // is thrown at some point.
      if (response != null) response.Close();
      if (retrievedStream != null) retrievedStream.Close();
      if (memStream != null) memStream.Close();
    }
  }
}

The key parts here are:
  • The Get method accepts two parameters, the url to retrieve the file from, and the target XLANGMessage instance that we want to populate with the retrieved file. It's important to note that we force this to be passed in from the calling orchestration rather than being constructed in our component. This forces the orchestration to control the lifetime of the message instance, as opposed to the custom .NET component, and is (apparently) a recommended practice.
  • We use the .NET WebRequest and WebResponse to retrieve the file from the specified URL.
  • We then force the entire response to be loaded into a MemoryStream that can be better consumed by BizTalk.
  • We use the LoadFrom method of the XLANGMessage Part to load the first Part of the provided target XLANGMessage instance from the MemoryStream.
  • We set a couple of Message Context properties for good measure that might be of use to the calling orchestration, FILE.ReceivedFileName and HTTP.ContentType.
Usage

To use this component from an orchestration, we need to:
  • Declare a message variable of type System.Xml.XmlDocument, eg "myMessage".
  • Use a Construct Message and Message Assignment shape with the following code:
myMessage = new System.Xml.XmlDocument();
WebUtility.Get("http://localhost/vdir/some-file.txt", myMessage);
  • Replacing the hard-coded URL with our dynamically-obtained URL.
  • After this Construct Message shape, we'll then have the dynamically-retrieved file in our "myMessage" message variable, ready to do what we'd like with it.
Introspection...

Pros:
  • Much, much simpler to implement than the WCF-Custom approach.
Cons:
  • We lose the "power" of using a BizTalk Adapter. If we wanted to extend this to allow for a proxy server, we'd need to extend our code accordingly, etc...
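By way of illustration only (this is hypothetical and not part of the component above), adding proxy support would mean something like the following after the WebRequest.Create call:

// Route the request via a proxy server (address is illustrative).
request.Proxy = new System.Net.WebProxy("http://myproxy:8080", true);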
Overall though, the simplicity of this approach probably wins out over the WCF-Custom solution. I still don't particularly like moving away from using BizTalk's Adapters (after all, isn't that what BizTalk's meant to be good at?), but the simplicity of this approach makes it much more straight-forward to get up and running with.

Thursday, April 14, 2011

Forcing a .NET Application to Execute as 32-bit

I recently had a requirement to force a .NET application (EXE) to execute as a 32-bit application on a 64-bit operating system. To cut a long and painful story short, I needed the application to execute as a 32-bit application because only the 32-bit version of the Oracle database client was installed, and when the application executed in its default form, it attempted (and failed) to use the 64-bit Oracle database client. Anyway, that's a story for another day.

During the course of solving this problem, I came across a .NET utility I've not used previously: CORFLAGS.EXE. This utility essentially lets you force a .NET assembly (EXE or DLL) to execute as 32-bit or 64-bit.

In my case, all I needed to do was execute the following command on the target .NET EXE file:

CORFLAGS.EXE <target-exe> /32BIT+

where <target-exe> is the path to the .NET EXE to set the 32-bit flag on.

There are a number of other options the utility accepts.
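For reference, a couple of other usages (run from a Visual Studio or Windows SDK command prompt):

CORFLAGS.EXE <target-exe>
CORFLAGS.EXE <target-exe> /32BIT-

The first simply displays the current header flags for the assembly, and the second clears the 32-bit flag again.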

You can find CORFLAGS.EXE in the Windows SDK, typically installed to eg %programfiles%\Microsoft SDKs\Windows\v7.0A\bin.

Adventures with BizTalk: HTTP "GET" Part 5: WCF-Custom Adapter

[Note: This post is based upon an old blog post that I'm migrating for reference purposes, so some of the content might be a bit out of date. Still, hopefully it might help someone sometime...]

[Note 2: Since I originally wrote this blog post, the following Microsoft article provides a very good description of some of the techniques applied below: http://social.technet.microsoft.com/wiki/contents/articles/invoking-restful-web-services-with-biztalk-server-2010.aspx]

OK, so we've essentially ruled out the BizTalk HTTP Adapter for retrieving files from a dynamically-obtained URL, so where to next?

Well, one of the options that searching the net suggested was the use of the WCF-Custom Adapter. Most of these were in the context of getting BizTalk to talk to REST-based services via HTTP, but that's half the battle: Communication with REST-based services relies on being able to control the HTTP verb, as that's an important part of REST.

So, the first thing we needed was to replace the Adapter on the BizTalk Send Port for retrieving the remote file with the WCF-Custom Adapter. Once we've done that, we need to be able to control the HTTP verb used by the Adapter...

HttpVerbBehavior

This is where WCF extensibility really shines. We can create a custom WCF Behavior and associate it with the Endpoint within the WCF-Custom Adapter configuration. This WCF Behavior, which we'll refer to as the HttpVerbBehavior, implements a WCF Message Inspector to enable specifying the HTTP verb through the WCF endpoint configuration. The implementation is beyond the scope of this post, but is well described in a number of articles elsewhere. The key piece of code looks like:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class VerbMessageInspector : IClientMessageInspector
{
  // ...
  public object BeforeSendRequest(ref System.ServiceModel.Channels.Message request, IClientChannel channel)
  {
    HttpRequestMessageProperty mp = null;
    if (request.Properties.ContainsKey(HttpRequestMessageProperty.Name))
    {
      mp = (HttpRequestMessageProperty)request.Properties[HttpRequestMessageProperty.Name];
    }
    else
    {
      mp = new HttpRequestMessageProperty();
      request.Properties.Add(HttpRequestMessageProperty.Name, mp);
    }

    // Set the HTTP verb specified through configuration.
    mp.Method = this._verb;

    if (mp.Method == "GET")
    {
      // GET requests have no entity body, so suppress it and replace the
      // outgoing message with an empty MessageVersion.None message.
      mp.SuppressEntityBody = true;

      Message msg = Message.CreateMessage(MessageVersion.None, "*");
      msg.Properties.Add(HttpRequestMessageProperty.Name, mp);

      request = msg;
    }
    return null;
  }
  //...
}

The key parts are that we set the Method to the HTTP verb specified through configuration, and if the verb specified was "GET", we suppress the message body and ensure that the MessageVersion is set to None.

Once the HttpVerbBehavior is compiled and added to the GAC, it also needs to be registered in the behaviorExtensions section (under system.serviceModel/extensions) of the machine.config file. You also need to restart the BizTalk host instances, and re-open the BizTalk Admin console before it will show up. Once all that's done, you can go to the Behavior tab of the Send Port's WCF-Custom Adapter configuration, and add and configure the httpVerbBehavior.
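The registration entry itself looks something like this (the type and assembly names here are illustrative placeholders for your own behavior extension element):

<configuration>
  <system.serviceModel>
    <extensions>
      <behaviorExtensions>
        <add name="httpVerbBehavior"
             type="MyCompany.Wcf.HttpVerbBehaviorElement, MyCompany.Wcf, Version=1.0.0.0, Culture=neutral, PublicKeyToken=..." />
      </behaviorExtensions>
    </extensions>
  </system.serviceModel>
</configuration>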

WrappedTextMessageEncoder

If we were dealing with XML-based content (such as in the REST scenario), we'd be very close to done. However, in our case we're not, we're dealing with binary content being returned... If you try to send a message out through the Send Port as it's currently configured, the message will successfully reach the remote URL via HTTP GET, and the file will be returned: but it won't make it into BizTalk. Actually, it won't even make it into the client-side WCF endpoint, which needs to process the response before BizTalk can take over.

The reason for this is kind of involved, and it's to do with the Message Encoder that is being used by the WCF-Custom adapter. WCF is (largely) Message-based, and WCF Messages are (largely) assumed to be XML-based. So when the client-side WCF endpoint receives a response that is not XML-based, it doesn't know what to do, and bails out. The layer this happens at is the WCF Message Encoder, and we can get around it by creating our own. The idea behind our WrappedTextMessageEncoder is that it will receive the binary contents of the file, Base-64 encode them, and wrap them in a configurable "wrapper" XML element, before sending the message on through the WCF infrastructure. It's a pain, but something that seems to be required to get any further in our scenario.

There are plenty of articles elsewhere that provide more information on creating custom WCF Message Encoders. The guts of our WrappedTextMessageEncoder though is really the following excerpt:

using System;
using System.IO;
using System.ServiceModel.Channels;
using System.Xml;

public class WrappedTextMessageEncoder : MessageEncoder
{
  //...


  public override Message ReadMessage(Stream stream, int maxSizeOfHeaders, string contentType)
  {
    XmlReader reader;

    if (!string.IsNullOrEmpty(this.factory.InboundWrapElementName))
    {
      byte[] contents = new byte[stream.Length];
      int bytesRead = stream.Read(contents, 0, (int)stream.Length);

      string encodedContents = System.Convert.ToBase64String(contents);
      // Wrap the Base-64 encoded contents in the configured element, including the closing tag.
      string wrappedContents = String.Format("<{0}>{1}</{0}>", this.factory.InboundWrapElementName, encodedContents);

      StringReader sr = new StringReader(wrappedContents);

      reader = XmlReader.Create(sr);
    }
    else
    {
      reader = XmlReader.Create(stream);
    }
    return Message.CreateMessage(reader, maxSizeOfHeaders, this.MessageVersion);
  }
  //...
}

The exciting bits are that we allow the configuration of an "InboundWrapElementName" property, and if it's specified, we Base-64 encode the contents of the incoming Stream and then wrap them in an XML element named according to the "InboundWrapElementName". It's basic, but works...

Again, we need to deploy the WrappedTextMessageEncoder to the GAC, register it in the bindingElementExtensions section (under system.serviceModel/extensions) of the machine.config file, and refresh BizTalk. Then we can configure it through the Binding tab of the Send Port's WCF-Custom Adapter configuration (or via the Import/Export tab if it doesn't show up on the Binding tab).

So what we'll have now is a response message that makes it through the WCF client, and can potentially be consumed by the BizTalk WCF-Custom Adapter back into the Send Port. Unfortunately though it's now Base-64 encoded and wrapped in an XML element, which we need to strip out to get it back to a usable form!

Send Port Configuration

To recap on our Send Port configuration, at this stage we should have:
  • Transport Type: WCF-Custom
    • Endpoint Address: Hard-coded for now, but will eventually be dynamically specified within orchestration
    • Binding: wrappedTextMessageEncoding + httpTransport
    • Behavior: httpVerbBehavior
  • Send Pipeline: PassThruTransmit
  • Receive Pipeline: PassThruReceive
One thing I haven't mentioned before now is using the PassThru pipelines for the Send/Receive... as we're not actually sending any message body out, and as we're not anticipating doing anything useful in the pipeline with the message body in, we don't need anything more than these... the XML pipelines would just add overhead that doesn't add any value...

To get the message back to its original state (ie, just the binary contents, not wrapped, not Base-64 encoded), I originally thought (and implemented) that I'd need to strip this out in the orchestration, or at best in a custom pipeline component. Fortunately though, we can do even better!

In addition to the configuration above, we also want to apply the following:
  • Transport Type: WCF-Custom
    • Messages:
      • Inbound BizTalk Message Body:
        • Path:
          • Body path expression: /*[1]
          • Node encoding: Base64
This does our job for us! How cool is that? We tell BizTalk that the message body is located in the first child node of the incoming message, and that the message body is Base-64 encoded. BizTalk then extracts the message body, and decodes it for us!!!

Send / Receive inside the BizTalk Orchestration

So, now we've got the response message back into the BizTalk MessageBox through our WCF-Custom Send Port, how do we send the request message and receive the response message inside the orchestration?

Well, this bit is actually pretty straight-forward:
  • Define a new Port Type, eg GetFilePortType, with a single Operation, eg GetFile, and with a Request and Response both of type System.Xml.XmlDocument.
  • Define a Port, eg GetFilePort, of type GetFilePortType.
  • Define two Message Variables, eg getFileOutRequest and getFileOutResponse, both of type System.Xml.XmlDocument.
  • Use a Construct Message and Message Assignment shape to construct the request message, getFileOutRequest. The Message Assignment shape should include the following code:
getFileOutRequest = new System.Xml.XmlDocument();
getFileOutRequest.LoadXml("<dummy/>");
  • This initialises the getFileOutRequest message variable, and gives it some content. As this content will be suppressed by the HttpVerbBehavior, it could be anything, but it needs to be something (ie, not empty).
  • Use a Send shape to send getFileOutRequest out through the Request operation message of the GetFilePort's GetFile operation.
  • Use a Receive shape to receive the Response operation message from the GetFilePort's GetFile operation into the getFileOutResponse message variable.
  • The getFileOutResponse message variable will now contain the bytes of the retrieved file. You can now do what you like with it, which may include sending it out to the filesystem via a Send Port using the FILE Adapter and a PassThruTransmit pipeline...
Introspection...

So, there you have it, retrieving a remote file using the WCF-Custom Adapter. What do you think? My thoughts are:

Pros:
  • We make use of an existing BizTalk Adapter, and hence can benefit from traditional BizTalk Adapter scalability and configuration (eg, setting a proxy, backup transports etc).
  • We can, if required, extend and customize the behaviour through WCF extensibility points such as custom behaviors.
Cons:
  • There are lots of moving pieces to develop, test, deploy, and support and maintain. Just by the length of this post, you should get a feel for how involved this solution is...
All in all, when it came down to it, this option just seemed like too much hard work for something that should be a simple requirement. Don't get me wrong, I really like taking advantage of the BizTalk Adapters where possible, for the reasons listed above. But at the end of the day, this solution really is pretty complicated to comprehend, let alone build and maintain - for such a "simple" requirement.

So, what else can we try?