Thursday, April 22, 2010

Complex Exchange migrations, part 1

One fun thing about being where I'm at is that the previous IT regime was -- how shall I put this -- less than competent. That leaves me in the position of working with two forests and an Exchange 2003 implementation that likes to go down harder than a drunken baby in a roller derby. To fix this, I've got a plan of attack that involves migrating all of the users into the new, working, 2008 R2 functional level forest (we'll call it new.company.com) and onto a shiny new Exchange 2010 server (mail.company.com). Because this is a staged rollout happening at the same time as a Windows 7 deployment, we have to maintain coexistence, and both forests, for some time. That means I'm doing a cross-forest Exchange migration from a legacy version, which is what I like to think of as "complex."

In the next posts on this topic, I'm going to walk you through how to pull this off in a mid-sized business. The basic steps are:

  • Install Exchange 2010 in the new domain;
  • Figure out how to synchronize the GALs for mail flow;
  • Create cross-forest accounts and connectors;
  • Script the movement of mailboxes;
  • Clean up any messes that arise.

I'll assume you can figure out how to install 2010; it's pretty straightforward. I'll pick up with GAL synchronization and exactly what you need to create in the other domain to get mail flowing. I personally used ILM (Identity Lifecycle Manager), but I now know enough about the schema to provide you with a PowerShell script to sync the GAL.
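
To give you a preview of where we're headed, the core of a bare-bones GAL sync is just creating a mail contact in the new forest for each legacy mailbox. Here's a rough sketch, not the finished script: it assumes you've already exported the legacy GAL to users.csv with Name and SmtpAddress columns (CSVDE against old.company.com can produce something like this), and that you've created an OU to hold the contacts. Run it from the Exchange 2010 Management Shell:

# Sketch: create a mail contact in new.company.com for each legacy mailbox,
# so legacy users show up in the new GAL and mail routes back to the old
# forest. The CSV columns and the OU name here are placeholders.
Import-Csv .\users.csv | ForEach-Object {
    New-MailContact -Name $_.Name `
        -ExternalEmailAddress $_.SmtpAddress `
        -OrganizationalUnit "new.company.com/Legacy Contacts"
}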

Thursday, April 15, 2010

Adding the BizTalk Adapter Pack "Add Adapter Service Reference" plugin to Visual Studio 2010

Visual Studio 2010 is out at last, and it's awesome. It's got a slick new interface design, support for C# 4.0, and impressive integration with Team Foundation Server. What it doesn't have, however, is out-of-the-box backwards compatibility with certain key components -- such as the BizTalk Adapter Pack and its required component, the WCF LOB Adapter. This means you lose the ability to consume adapters with the Add Adapter Service Reference wizard. Now, you could wait for an official release that's compatible with VS 2010, or you could follow my instructions here and magically enable the Adapter Pack for 2010! Please note that these instructions involve editing machine.config and the registry, so remember to back things up first. They work for me, but they could break something on your machine.

Phase 1: The Registry
 
The WCF LOB Adapter registers its Visual Studio add-in through the registry. It actually does this in an "unofficial" way, which is to say that it adds itself as a Visual Studio feature package, then adds a reference to itself in the Menus key. To get the menu option to appear, you need to copy these registration entries from your Visual Studio 2008 registry tree to your Visual Studio 2010 one. Luckily for you, I've written a registry script that'll do that very thing.

Copy the script below into a file with a .reg extension, then double-click it to import it into the registry (elevating, of course, if you're on Vista or 7). You may need to make a few adjustments before importing it, such as:
  • Removing the Wow6432Node and SysWOW64 references from the script if you're using a 32-bit operating system.
  • Making sure you have the 64-bit version of the WCF LOB Adapter installed if you're using a 64-bit OS (which, I should note, is the only supported configuration).
The script follows.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\Packages\{fff9759c-767b-4327-b8c2-f2ff2e36144d}]
@="Microsoft.ServiceModel.Channels.Tools.PlugInPackage.PlugInPackage, Microsoft.Channels.Tools.PlugInPackage, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null"
"InprocServer32"="C:\\Windows\\SysWOW64\\mscoree.dll"
"Class"="Microsoft.ServiceModel.Channels.Tools.PlugInPackage.PlugInPackage"
"CodeBase"="C:\\Program Files\\WCF LOB Adapter SDK\\Tools\\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll"
"ID"=dword:00000001
"MinEdition"="Standard"
"ProductVersion"="2.0.0.0"
"ProductName"="Add Adapter Service Reference"
"CompanyName"="Microsoft"

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\Menus]
"{fff9759c-767b-4327-b8c2-f2ff2e36144d}"=", 1000, 1"

Once the script is imported, you'll see the Add Adapter Service Reference item in Solution Explorer's context menu when you run VS 2010. However, when you click it, you'll be told there are no valid adapters installed. To solve this, we need to proceed to...

Phase 2: machine.config

Because VS 2010 runs as a .NET Framework 4.0 application, the plugin binds using the 4.0 configuration files. You can verify this yourself by turning on assembly binding logging (http://msdn.microsoft.com/en-us/library/e74a18c4.aspx) and viewing the logs for devenv.exe's binds to Microsoft.ServiceModel.Channels.Tools.PlugInPackage.
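
If you've never turned fusion logging on before, here's one way to do it from an elevated PowerShell prompt. These are the standard values under HKLM\SOFTWARE\Microsoft\Fusion -- generic .NET plumbing, nothing Adapter Pack-specific -- though note that on a 64-bit OS a 32-bit process like devenv.exe reads the Wow6432Node copy of that key, so you may need to set both. Set EnableLog back to 0 when you're done, because logging every bind is slow. The C:\FusionLogs path is just my choice; view the results with fuslogvw.exe or a text editor.

# Turn on assembly bind logging for all binds (ForceLog), not just failures,
# and write the logs to C:\FusionLogs. Run elevated.
New-Item -Path "C:\FusionLogs" -ItemType Directory -Force | Out-Null
$fusion = "HKLM:\SOFTWARE\Microsoft\Fusion"
New-ItemProperty -Path $fusion -Name EnableLog -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $fusion -Name ForceLog  -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $fusion -Name LogPath   -Value "C:\FusionLogs\" -PropertyType String -Force | Out-Null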

Here's what that looks like in VS 2008 -- the lines to note are the ones showing which runtime and machine.config are in play:

*** Assembly Binder Log Entry (4/15/2010 @ 11:35:25 AM) ***
The operation was successful.
Bind result: hr = 0x0. The operation completed successfully.
Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll
Running under executable C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe
--- A detailed error log follows.
=== Pre-bind state information ===
LOG: User = APIMEM\pchipman
LOG: Where-ref bind. Location = C:\Program Files\WCF LOB Adapter SDK\Tools\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll
LOG: Appbase = file:///C:/Program Files (x86)/Microsoft Visual Studio 9.0/Common7/IDE/
LOG: Initial PrivatePath = NULL
LOG: Dynamic Base = NULL
LOG: Cache Base = NULL
LOG: AppName = NULL
Calling assembly : (Unknown).
===
LOG: This bind starts in LoadFrom load context.
WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
LOG: Using application configuration file: C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.Config
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v2.0.50727\config\machine.config.
LOG: Attempting download of new URL file:///C:/Program Files/WCF LOB Adapter SDK/Tools/Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll.
LOG: Assembly download was successful. Attempting setup of file: C:\Program Files\WCF LOB Adapter SDK\Tools\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll
LOG: Entering run-from-source setup phase.
LOG: Assembly Name is: Microsoft.ServiceModel.Channels.Tools.PlugInPackage, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: Re-apply policy for where-ref bind.
LOG: Post-policy reference: Microsoft.ServiceModel.Channels.Tools.PlugInPackage, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: Found assembly by looking in the GAC.
LOG: Switch from LoadFrom context to default context.
LOG: Binding succeeds. Returns assembly from C:\Windows\assembly\GAC_MSIL\Microsoft.ServiceModel.Channels.Tools.PlugInPackage\3.0.0.0__31bf3856ad364e35\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll.
LOG: Assembly is loaded in default load context.

And what it looks like in VS 2010:

*** Assembly Binder Log Entry (4/15/2010 @ 10:53:05 AM) ***
The operation was successful.
Bind result: hr = 0x0. The operation completed successfully.
Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
Running under executable C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe
--- A detailed error log follows.
=== Pre-bind state information ===
LOG: Where-ref bind. Location = C:\Program Files\WCF LOB Adapter SDK\Tools\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll
LOG: Appbase = file:///C:/Program Files (x86)/Microsoft Visual Studio 10.0/Common7/IDE/
LOG: Initial PrivatePath = NULL
LOG: Dynamic Base = NULL
LOG: Cache Base = NULL
LOG: AppName = devenv.exe
Calling assembly : (Unknown).
===
LOG: This bind starts in LoadFrom load context.
WRN: Native image will not be probed in LoadFrom context. Native image will only be probed in default load context, like with Assembly.Load().
LOG: Using application configuration file: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe.Config
LOG: Using host configuration file:
LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\v4.0.30319\config\machine.config.
LOG: Attempting download of new URL file:///C:/Program Files/WCF LOB Adapter SDK/Tools/Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll.
LOG: Assembly download was successful. Attempting setup of file: C:\Program Files\WCF LOB Adapter SDK\Tools\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll
LOG: Entering run-from-source setup phase.
LOG: Assembly Name is: Microsoft.ServiceModel.Channels.Tools.PlugInPackage, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: Re-apply policy for where-ref bind.
LOG: Post-policy reference: Microsoft.ServiceModel.Channels.Tools.PlugInPackage, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
LOG: Found assembly by looking in the GAC.
LOG: Switch from LoadFrom context to default context.
LOG: Binding succeeds. Returns assembly from C:\Windows\assembly\GAC_MSIL\Microsoft.ServiceModel.Channels.Tools.PlugInPackage\3.0.0.0__31bf3856ad364e35\Microsoft.ServiceModel.Channels.Tools.PlugInPackage.dll.
LOG: Assembly is loaded in default load context.

As you can see, we're using the 4.0 configuration file in 2010, which means we need to copy the magic words from our 2.0 configuration file so our plugin can see the valid adapters. Note that this doesn't necessarily mean that 4.0 programs will be able to consume adapters -- that's an investigation for another day.

The magic words, in this case, live in the system.serviceModel section of the machine.config tree. You want to copy any Microsoft.Adapters references in the extensions/bindingElementExtensions and extensions/bindingExtensions nodes. You'll also need to copy any client/endpoint nodes that make similar references. For example, to get SAP bindings to work, you need to merge the following XML into your 4.0 machine.config file:

<system.serviceModel>
    <extensions>
        <bindingElementExtensions>
            <add name="sapAdapter" type="Microsoft.Adapters.SAP.SAPAdapterExtensionElement, Microsoft.Adapters.SAP, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </bindingElementExtensions>
        <bindingExtensions>
            <add name="sapBinding" type="Microsoft.Adapters.SAP.SAPAdapterBindingSection, Microsoft.Adapters.SAP, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        </bindingExtensions>
    </extensions>
    <client>
        <endpoint binding="sapBinding" contract="IMetadataExchange" name="sap"/>
    </client>
</system.serviceModel>

Now, you can use the Add Adapter Service Reference wizard in VS 2010 with your older Framework projects.

(Oh, and by the way, I want you to know how much effort it took to make this look pretty. Stupid HTML editor...)

Wednesday, April 14, 2010

Beware CORRIDOR MRO software

I want to warn people today about a specific piece of software known as CORRIDOR. Produced by Continuum Applied Technology, this program is a mess of poor design decisions that I unfortunately have to deal with. I'll set aside what I consider flaws that are really more matters of taste, such as using Oracle as the back-end database (I could spend years ranting about how Oracle and PL/SQL need to be set on fire; use DB2, PostgreSQL, or SQL Server instead) and thinking that putting the word "Please" in front of a UI resource string somehow makes the system user-friendly. I'll also remain mum on the pricing model, save to say that you'll be paying Continuum forever for practically anything you need if you actually buy the software. You may actually find SAP Business One is cheaper (!!!) over time. Instead, I'm going to focus on the major flaws in the product's architecture.

First, CORRIDOR is constructed as a C++ user interface layer over what appears to be a business object layer (called "Rita," at least to those of us using the API). These two components form the UI, which connects and writes directly to the Oracle database using the full version of a specific Oracle client. This two-tier architecture is a major maintenance problem on the end user and ISV side, because we can't run different client versions against different server versions. We can't upgrade the Oracle client without possibly breaking its ABI to the Rita layer. We can't upgrade Rita without possibly breaking its ABI to the actual UI program. Perhaps most importantly, we can't upgrade the database at all -- whether in schema or version -- without possibly breaking everything. So, if I want to upgrade my version of Oracle to fix a security flaw, I'd better hope it doesn't need a new client, because then I'm stuck. Likewise, I can't run multiple minor versions of the client software if there have been any database changes, because the logical interface between Rita and the database will no longer function. Finally, this means that any application that uses the poorly-documented API (more on that later) has to have:
  • A full installation of the CORRIDOR software, both UI and Rita
  • A full installation of the Oracle client
  • Some extra undocumented dependencies in the CORRIDOR directory if they're a .NET application
This is one of the reasons why the three-tier model, where clients talk to application servers and application servers talk to databases, all through fixed interfaces, is popular. If CORRIDOR were a proper three-tier application, API programs wouldn't need the entire copy of CORRIDOR to work. They'd just need the DLL that defines the interface and a pointer to the server. Additionally, if the interface is properly versioned, you'd be able to run different client versions and, at worst, get a message that your client is no longer supported.

Another major issue with CORRIDOR is the API. Any time you want to automate a process, you need to use the API, which is available in C++, Java, and .NET CLR versions. The C++ API is essentially a set of calls directly into the Rita DLL, while the Java and .NET versions are wrapper classes atop that DLL. Unfortunately, the .NET wrapper is very thin. Oracle exceptions are manifested as native memory corruption exceptions, regardless of their cause. Major issues manifest as requested (!!!) abnormal program terminations in the C++ runtime used by the Rita DLL, which means your program crashes unpredictably and irrevocably. Communication is handled by passing Hashtables full of magic constant strings that identify what you want to do.
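
To give you a concrete feel for that pattern, here's a purely hypothetical illustration. These request and key names are made up, not CORRIDOR's real ones -- which is rather the point, since you'd be guessing at the real ones too:

# Hypothetical sketch of the stringly-typed request pattern; none of these
# names are CORRIDOR's. A PowerShell hashtable is a System.Collections.Hashtable,
# the same type the .NET wrapper traffics in.
$request = @{
    "REQUEST_TYPE" = "ITEM_UPDATE"   # a magic string selects the operation
    "LOT_NUMBER"   = "12345"         # more magic strings name each parameter
    "QUANTITY"     = "10"
}
# The hashtable then gets handed to an Execute-style entry point on the
# wrapper. There's no typed contract anywhere, so a typo in any key fails
# only at runtime -- and per the above, "fails" may mean a native memory
# corruption exception.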

All of that would be fine to suffer through except for the fact that the API isn't really documented. At all. Oh, sure, there's a help file that explains what parameters are used by each "request type," but key information is missing. For example, if the Oracle connection fails, you get an exception indicating that native memory is corrupted. You don't get any indication as to why, exactly, the Oracle connection failed. One common cause, as I discovered, is that you need to encrypt the password before passing it in, using an undocumented function. Other critical missing information includes the format and type of the primary key used to select items in the database (it's claimed to be the "lot number," but the API's notion of a lot number is different from the UI's, and it's not a unique key in any event) and the meaning of certain terms (e.g., "system status" with regard to an item, and "lot extension numbers"). Some of the information is just patently incorrect, such as the required parameters for certain API calls.

Normally, none of this would really annoy me, because I'd just skip past the program and go right into the database. However, indications thus far are that such access isn't allowed -- not that it's unsupported, but that you can't get the username and password for direct table access even if you want it. While there are definitely ways around that, this sort of disregard for the needs of users and ISVs is reprehensible. If I purchase your software outside of a SaaS agreement, it's mine to do with as I please. If I have to buy the database software, you better believe I have the right to access the database as I see fit.

I said I wasn't going to rant about the cost of CORRIDOR, but I'm going to break that promise and point out that you have to pay for every API that you want to use and for which you want support. That means that, normally, you can't use the API for anything. You have to pay for that access. In fact, almost everything you want to do that isn't "use the UI" is a pay feature. It's nice to know that Continuum is taking a cue from Oracle in selling crippled software, then charging for upgrades to the full version. I expect full API and database access from ERP software. SAP's been doing that for years. It's a standard feature at this point, not a pricey value-add.

So, to sum up, I really dislike CORRIDOR, and I don't feel it's appropriate for production environment use. Some of my associates are looking at other programs, such as Pentagon and Quantum, and I'm sure I'll have some commentary on them as well. I would say that, for now, you should avoid CORRIDOR -- and in general, you should always remember that ERP software packages are like sewers. Some of them are nicer than others, but they all still stink.

Tuesday, April 13, 2010

DPM 2010 RC and the MSDEWriter for SQL Server 2000 on a mixed platform

Lately I've been playing around with the DPM 2010 release candidate. For those who don't know, Data Protection Manager (DPM) 2010 is Microsoft's latest enterprise backup software. One of the neat features it offers over junk such as Asigra (offered by a variety of third-party service providers, including one local to the Memphis area called Electronic Vaulting Services) and more useful software such as Backup Exec is its deep integration with Windows -- specifically, with the wonderful Volume Shadow Copy Service (VSS). Shadow Copy allows you to make real-time snapshots of data, which is quite handy when backing up systems such as SQL Server and Exchange. Both have built-in backup functionality, but those "backups" -- data dumps, really -- have to be created first, and then the resulting files have to be backed up. This can create a long delay between the snapshot time and the actual backup time, exposing you to risk. Also, without VSS, you can't do any real-time copying of changes. In a 24/7 operation such as the one I support, we can't afford an entire day of exposure. We'd like zero minutes of exposure (hot standby), but that's a topic for another day.

Generally, DPM 2010 is pretty slick. It automatically detects tape libraries and intelligently manages them, letting you know via alerts in its console or in System Center Operations Manager when a tape needs to be swapped out. You can specify short-term and long-term retention goals, which means you can hold data on disk, then periodically do full backups to tape. The real downsides to the RC are that the disk backups require full ownership of an entire disk or array (partitions aren't sufficient), USB and 1394 devices aren't supported, and some of the functionality doesn't work -- such as the AD schema extensions for self-service restores by users. Also, it's an agent-based backup solution, so it won't work on any systems that aren't Windows-based. You can work around this by writing batch files or PS scripts to copy snapshots to a Windows server for backup, but that's clunky and suffers from the time delay issue I noted earlier.

One fun feature of DPM is that it ships both 32-bit and 64-bit agents and selects between them based on the architecture of the target OS. I say this feature is "fun" because it can cause some issues with mixed-architecture software. If, for instance, you run 32-bit SQL Server 2000 on a 64-bit platform, the DPM agent can't see the 32-bit instances, because it talks to the 64-bit VSS service. The 64-bit service only looks for 64-bit sources, so your instances never show up in DPM. The way to get around this, provided everything else is working (i.e., running vssadmin list writers from a command prompt shows the MSDEWriter, which is the SQL Server 2000 VSS "driver"), is a registry edit.
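
A quick way to check that prerequisite from an elevated command prompt (this also works in PowerShell):

vssadmin list writers | findstr MSDEWriter

If nothing comes back, your problem is VSS itself, and the registry edit below won't help.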

In the registry, go to HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server. Create a new key here with the name of your instance. Under that, create a key named "MSSQLServer", and under that, another key named "CurrentVersion". At this point, you should be at HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\<your instance name>\MSSQLServer\CurrentVersion. Create a new string value here named "CurrentVersion" and set it to "8.00.194". The astute will note that this matches the structure you can find under the Wow6432Node -- basically, we're copying just enough of that structure to let the 64-bit VSS system "see" the instance.
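
If you'd rather script the edit than click through regedit (say, you have several instances to fix), something like this sketch should do it -- assuming the box has PowerShell, and that you run it from a 64-bit session so the keys land in the 64-bit view of the registry rather than Wow6432Node. MYINSTANCE is a placeholder for your actual instance name; back things up first.

# Sketch: recreate the registry structure described above so 64-bit VSS
# can "see" the 32-bit SQL Server 2000 instance.
$instance = "MYINSTANCE"   # placeholder -- use your real instance name
$path = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$instance\MSSQLServer\CurrentVersion"

# -Force creates the intermediate keys (instance name, MSSQLServer) as needed
New-Item -Path $path -Force | Out-Null

# The same version string found under the instance's Wow6432Node entry
New-ItemProperty -Path $path -Name "CurrentVersion" -Value "8.00.194" `
    -PropertyType String -Force | Out-Null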

Now, the next time you connect with DPM to set up protection, you should see the instance you created in the registry. Props go to Hitesh Sharma of Microsoft for this (http://www.eggheadcafe.com/software/aspnet/32831431/data-protection-manager-n.aspx is the original thread where the solution became apparent).