
Thursday, December 10, 2009

Visual SVN :: Setting up Subversion the easy way

I had to install a subversion repository on a server today, and I thought it was going to be a headache because the last time I dealt with Subversion was years ago and I was a rather uninterested spectator at the time. I had seen Rohland using VisualSVN before, so I thought I'd give it a try.

It was so simple to install, there's barely anything worth mentioning. Download the file, load up the documentation, follow the steps and there you go. I created an administrators group, a user to slot into that group, and removed access to "Everyone", and that was the access side done. I created a repository, let the tool create the default structure, and that was pretty much the sum total of the installation.

Note that any settings you apply during creation are not cast in stone - once the installation is complete you can always change your configuration by loading up the VisualSVN Server snap-in, right clicking "VisualSVN Server" and clicking "Properties".

The only difference in my configuration was changing the port - instead of using 443 I wanted to use a different port number. I stopped the VisualSVN service, changed the value in the configuration (see previous paragraph), and restarted the service. My repository was then available via https://myserver:7443/svn/myrepo/trunk/.

A total learning curve of about 3 minutes. Fantastic.

Monday, November 23, 2009

nServiceBus - The Good and the Bad

We've been using nServiceBus at work for the last week or two to send messages between a client and server in a (hopefully) robust fashion.

As a tool, I really like nServiceBus. Once you have it up and running, the use of MSMQ is just brilliant - force a crash in your application and when you start up again your messages are still there, ready to be processed - right out of the box. Wrap them in a transaction (MSMQ supports MSDTC) and if anything goes wrong, the sending of the message gets rolled back. Awesome stuff - it's definitely something I will use again.

But that's not the point of this post. I'm going to point out all the BAD stuff - in the hope that someone else doesn't have the same fights I did. Let me say upfront that I recommend using it - just be aware that the documentation isn't just bad, it's DISMAL. The author seems to assume that we all understand how the bus works and how to use it - methods are undocumented and even the documentation on how to get it up and running is hopelessly inadequate. So, here goes.


The nServiceBus site does have some documentation regarding publish/subscribe scenarios. However, the way it's worded just didn't make sense to me, and it was Rohland who finally worked out what the hell they mean.

Exhibit A
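A sketch of the MsmqTransportConfig section in question (the queue names and numeric values here are illustrative):

```xml
<MsmqTransportConfig
    InputQueue="myqueue"
    ErrorQueue="error"
    NumberOfWorkerThreads="1"
    MaxRetries="5" />
```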

The above configuration specifies the INPUT to your assembly's message handler. In this example, the application using this configuration will listen for messages on the MSMQ "myqueue" on the local machine. It will NOT place messages on that queue - this is the queue it will pick messages up from.

Exhibit B

  <UnicastBusConfig DistributorControlAddress="" DistributorDataAddress="">
    <MessageEndpointMappings>
      <add Messages="My.Messages" Endpoint="somequeue" />
    </MessageEndpointMappings>
  </UnicastBusConfig>
This configuration specifies where your application will publish messages TO! In this example, any messages in the "My.Messages" assembly will be placed on the "somequeue" MSMQ on the local machine. You can specify message types instead of assemblies if you want to. I quite like this - it makes it very easy to publish different messages to different queues, and it's all configurable. Also note that the end point doesn't have to be local - you can specify remote machines as per the nServiceBus documentation.

Message Handlers

In order to receive messages, the assembly that handles those messages needs to incorporate a message handler class, with support for the message types you want to receive. For example, if you want to subscribe to message types "A" and "B", you can create a message handler like so:
public class MessageHandler : IMessageHandler<A>, IMessageHandler<B>
{
    public void Handle(A message)
    {
        // handle messages of type A
    }

    public void Handle(B message)
    {
        // handle messages of type B
    }
}

You don't hook this class up anywhere - nServiceBus will find it and instantiate it. I don't like that. It makes it really easy to make mistakes - make a typo in your namespace and your event handler is never invoked, and you sit there scratching your head wondering why until you finally realise your config is fine, and your typing sucks. Another downer is that the handler runs on a separate thread to your application, so you need to invoke a delegate on the application's main thread to handle the message, which can be a little iffy in forms development - you need to get a handle on the form that you want to use, like so:
FormMain fm = FormMain.GetInstance();
fm.Invoke((MethodInvoker)delegate { fm.AddInboundMessageHandler(message); });

Enumerations and/or More Complex Data Structures

One problem I found, when using version 1.9, was when sending an enumeration of custom objects via a message. My message class contained an array of custom objects, which themselves had two properties: a string property and a byte array property. Nothing complex here. However, when I picked up the message on the subscriber, the collection was there, with the correct number, but the properties of the objects in the array were always null. I just couldn't get this to work at all. Converting the array to a dictionary was even worse - I got exceptions from nServiceBus, which seemed to fall over on some rather dodgy reflection code.

One thing I did find out is that your sub-entities do need to have a default, parameterless constructor, which makes sense as nServiceBus needs to recreate the serialized objects from the queue on the other end. However, try as I might, I never got this working on 1.9. After upgrading to version 2.0, however, it all started working. I did try converting my array into a typed list for convenience, but this didn't work off the bat and wasn't that important to me, so I reverted back to an array.
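For illustration, a sketch of the kind of message and sub-entity I was dealing with (the names are made up; the important detail is the parameterless constructor on the sub-entity):

```csharp
public class MyMessage : IMessage
{
    public MyEntity[] Entities { get; set; }
}

public class MyEntity
{
    // nServiceBus needs this to recreate the object on the subscriber
    public MyEntity() { }

    public string Name { get; set; }
    public byte[] Data { get; set; }
}
```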

That's all I can think of for now - I'm sure I'll update this post in the future.

References

When I started looking at this there was NOTHING out there - but people are certainly using it. Here are some decent articles on nServiceBus:

Monday, November 16, 2009

MS DTC Timeout error

I had a very frustrating Friday evening fighting with MSDTC. I made a one-line, 10 second change to a page I was working on. It was one of those things that you really don't even need to test usually, but I always test, no matter how small the change, and 2.5 hours later I was still pulling my hair out. I just kept getting MSDTC timeout exceptions.

Now, this was a page that had been working literally hours before, with no other code changes. The ONLY difference I could think of, was that I had applied windows updates to my machine - but this turned out to be a red herring. The same page, with the same code, worked on a colleague's machine. I checked settings, triple checked settings, flushed my DNS cache, restarted my machine, all to no avail. Comment out the TransactionScope - it all worked fine, put it back in, boom!

Anyway, in case this happens to anyone else ever - this turned out to be a DNS issue on the SERVER. Logging into the server I could ping my machine by IP, but pinging by host name failed. Finally, I was getting somewhere. It turned out there was a dodgy DNS entry - flushing the DNS cache on the server worked and everything returned to normal. What a waste of 2.5 hours.

Friday, November 13, 2009

Moq :: Ignoring Arguments

I've always used Rhino Mocks as a mocking framework, but at work we recently made the decision to give Moq a try.

My initial reaction is: WOW! It is just SO much easier to use. I've always found the setting up of mocks and verification of method calls a tedious task, but using Moq I've found it's really simple. It's a far more intuitive framework, and once I got past my Rhino Mocks habits, I found that the amount of code required for mocking is drastically reduced.

One very simple thing that stumped me today though, was trying to verify a method was called, but telling Moq to ignore the argument that was passed. In this case Rhino Mocks is a little more intuitive - it has an IgnoreArguments() method that chains off the setup. In the end though, the Moq implementation is actually easier - you just make your Setup or Verify call and use the "It" class to generate your stub.

In my example:
  myMockedClass.Verify(x => x.Connect(It.IsAny<MyArgumentType>()));
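The "It" class also lets you constrain the match rather than ignore the argument completely - a sketch, assuming a hypothetical Port property on MyArgumentType:

```csharp
// verify Connect was called with an argument satisfying a predicate
myMockedClass.Verify(x => x.Connect(It.Is<MyArgumentType>(a => a.Port == 443)));
```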

Tuesday, October 6, 2009

.NET Assembly Probing

I hate the way Windows Forms applications get built with all their binaries in a single folder. I also hate the publish options in Visual Studio - those crappy .application files stink in so many ways. I prefer to just deploy the executable and related assemblies using some kind of packaging tool.

The problem here is that, by default, all the DLLs need to be in the base appdomain directory, which can end up being a horrible mess in the base install directory. There is a configuration option you can use with your application, though, to tell it to search other folders for required assemblies, like so:
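A minimal sketch of the probing configuration (the privatePath folder names are just examples - they are relative to the application base directory):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- also search these subfolders when resolving assemblies -->
      <probing privatePath="lib;lib\thirdparty" />
    </assemblyBinding>
  </runtime>
</configuration>
```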


WatiN with CruiseControl running as a service

We've started using WatiN for our automated tests: nice and easy to use and integrates easily, but I did hit one stumbling block: CruiseControl.NET. When running tests locally, there were no issues - check the code in and bang, the tests fail. It was immediately apparent that it was a security thing - the CruiseControl.NET service runs by default with the SYSTEM user, but I could find little to no help anywhere on how to set this up correctly.

One solution was to just tick the service option to "Allow service to interact with the desktop", but this didn't work for me - the failure remained the same when the browser instance was being created.

It took a few hours of testing (I ended up using a local Scheduled Task to test this out) and I finally found a fairly easy way to get it to work - on your build server you need to use a local account that the service runs under. I used a local admin account - you might not be able to do this in which case you're on your own (it'll still work - you'll just need to be more careful about how you apply permissions), but the steps to set this up are as follows:
  • Create a local admin account (e.g. CCAdmin) - make sure you set it so the password does not need to be changed and does not expire
  • Log out, and log back in using the new account
  • Open up IE (or whatever your testing browser is) and go through all the dialog screens. Set the home page to blank. If you're using Firefox, make sure all the required plugins are installed.
  • Open up services.msc, and stop the CruiseControl.NET service. Right-click it and go to Properties
  • Go to the Log On tab, and instead of a Local System account, set it so the service uses your new account
  • If you have not created an admin account, make sure the account has all necessary security permissions
  • Click OK, and fire up the service again

Tuesday, September 29, 2009


I've been fighting with Selenium and automated testing lately, and I just couldn't shake the feeling that it just wasn't the right tool for the job. The IDE is flakey in terms of files you save (making changes to the project often results in JS errors in the tests), the server is a java server which I had to jump through hoops to get running as a Windows service, and the loading of a browser for the tests is hellishly slow.

It really is a great tool, and I love its ease of use for development (I use it to autocomplete forms instead of typing all that crap in over and over again), but as an automated testing suite on a Windows technology stack, I'm just not a huge fan.

So today, I downloaded WatiN and gave it a run. I had the automated nUnit side up and running in a matter of minutes - the tests ran fast and the API is intuitive and easy to use.

The test recorder is not as good as Selenium's, but it's decent enough, and there is a beta version out that is supposed to be a vast improvement. I did download and try the Beta version, but it wouldn't even run for me, so I guess it's REALLY in Beta. Anyway, I'm more interested in the CruiseControl automated side of testing than visual testing, so it's easy, understandable unit test code that I want to be able to produce - and for this WatiN wins hands down.

Friday, September 18, 2009

Running Selenium RC as a Windows Service

We've recently started using Selenium as a testing tool at work. The IDE is great, particularly for filling in forms during development, but I'm a stickler for automated testing.

Selenium does offer Selenium RC, which combined with nUnit allows for this in a variety of programming languages. The problem here is that the server is a jar file, and I wanted it running as a Windows service on our build server.

Anyway, I found a great article that shows how to run just about anything as a Windows service - and I successfully used it to get Selenium RC running as a Windows service on our build server. Here are the steps:
  1. Install the latest Java runtime if you don't have it installed already
  2. Install the Selenium server (e.g. C:\Selenium\selenium-server-1.0.1)
  3. Download srvany.exe and instsrv.exe (these are part of the Microsoft Windows Resource Kit - you will need to download the correct version for your OS) and copy these files to a location on your server (e.g. C:\reskit)
  4. Browse to this folder, and type: instsrv "Selenium RC" "C:\reskit\srvany.exe" - this will create a Windows service named "Selenium RC", although it won't start up yet
  5. Open up regedit, and browse to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Selenium RC
  6. Right-click "Selenium RC" in the tree, and click New - Key, and add a key called "Parameters"
  7. Open Parameters, and create a new string value called "Application"
  8. Add the following to the data value for the new string value: "C:\Program Files (x86)\Java\jre6\bin\java.exe" -jar "C:\Selenium\selenium-server-1.0.1\selenium-server.jar", substituting paths to the java exe and the selenium server jar file where appropriate
  9. Load up the windows services console (services.msc) and start the service

Thursday, September 10, 2009

SQL Server: Finding tables by column

Assuming you use a decent naming convention in your database, it's sometimes useful to be able to find a list of tables with a common column - for example if you want to write a script to clean out all data related to a specific item, referenced by foreign key columns of the same or a similar name.
SELECT DISTINCT
    c.table_name
FROM
    information_schema.columns c
    INNER JOIN information_schema.tables t 
      ON c.table_name = t.table_name 
WHERE
    c.column_name LIKE '%my_column_name%'
    AND t.table_type = 'BASE TABLE'

Thursday, August 27, 2009

Roll on ASP.NET 4

Scott Guthrie has started creating posts about the upcoming features of VS 2010 and .NET 4.

I've never been a Microsoft fanboy, but you have to give it to them - they really seem to have listened to their users over the last few years as they continue to address the issues people have griped about. The first two posts indicate more good things to come - the web.config files have been cleaned up (at last!) and it appears there are some great project templates coming. What I am really excited about, though, is the complete control of client IDs - the whole UniqueID/ClientID pain in the ass has always bothered me about ASP.NET, and it looks like that has finally been addressed in .NET 4. Flubba dubba.

Friday, August 21, 2009

SQL Server: Currently Executing Queries

I found this great article by Ian Stirk regarding finding queries that are currently executing on SQL Server. The article contains the following VERY useful query:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    -- Do not lock anything, and do not get held up by any locks.

    -- What SQL Statements Are Currently Running?
    SELECT [Spid] = session_Id
 , ecid
 , [Database] = DB_NAME(sp.dbid)
 , [User] = nt_username
 , [Status] = er.status
 , [Wait] = wait_type
 , [Individual Query] = SUBSTRING (qt.text, 
        er.statement_start_offset/2, 
        (CASE WHEN er.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE er.statement_end_offset END - 
            er.statement_start_offset)/2)
 ,[Parent Query] = qt.text
 , Program = program_name
 , Hostname
 , nt_domain
 , start_time
    FROM sys.dm_exec_requests er
    INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
    CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
    WHERE session_Id > 50              -- Ignore system spids.
    AND session_Id NOT IN (@@SPID)     -- Ignore this current statement.
    ORDER BY 1, 2

Friday, August 14, 2009

SQL Server: Changing Object Schema

I'd forgotten the syntax for moving an object to a different schema:
alter schema NewSchema transfer OldSchema.ObjectName

Thursday, August 13, 2009

SQL Server: Finding and Killing Database Connections

This is a query I use often to check what active connections exist for a specified database:
select  spid, sp.cmd, sp.hostname, sp.loginame, sp.nt_domain, sp.nt_username, sp.program_name
from    master.dbo.sysprocesses sp
where   db_name(dbid) = 'mydatabase'
and DBID <> 0  
and spid <> @@spid 

If I want to take the database offline I'll then kill these processes using the spid.
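For example, if the query above returned a spid of 53 (a made-up value - substitute the spid from your own results):

```sql
kill 53
```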

I also found a great post today that had a nice way of killing ALL active connections, by running the following sql:

alter database dbName set single_user with rollback immediate  

This can then be reverted out with

alter database dbName set multi_user with rollback immediate 

Friday, August 7, 2009

Backing up and Restoring SQL Server Databases with .NET

I needed to create a tool that would do some backing up and restoring of databases as part of a long-running job this week. I had heard it was fairly simple C# code, but I was pleasantly surprised when I realised just HOW simple it is.

The namespaces of the SMO libraries required changed between 2005 and 2008, so if you're using the SQL Server 2008 objects, you need to reference the following libraries (usually located in C:\Program Files\Microsoft SQL Server\100\SDK\Assemblies\). If you're using SQL Server 2005 you only need the first two.

Microsoft.SqlServer.ConnectionInfo
Microsoft.SqlServer.Smo
Microsoft.SqlServer.SmoExtended
Microsoft.SqlServer.Management.Sdk.Sfc

Backing Up Code

SqlConnection conn = new SqlConnection("ConnectionString!");
Server dbServer = new Server(new ServerConnection(conn));
Backup backupMgr = new Backup();
backupMgr.Devices.AddDevice(@"E:\Backups\YourFile.bak", DeviceType.File);
backupMgr.Database = conn.Database;
backupMgr.Action = BackupActionType.Database;
backupMgr.SqlBackup(dbServer);

Restoring Code

SqlConnection conn = new SqlConnection("ConnectionString!");
Server dbServer = new Server(new ServerConnection(conn));
Restore restoreMgr = new Restore();
restoreMgr.Devices.AddDevice(@"E:\Backup\MyFile.bak", DeviceType.File);
restoreMgr.Database = conn.Database;
restoreMgr.Action = RestoreActionType.Database;
restoreMgr.SqlRestore(dbServer);

Wednesday, August 5, 2009

Building Code Documentation with SandCastle

I've always used nDoc for building code documentation, and then (around 2 years ago) I shifted to Sandcastle Help File Builder. We're wanting to start documenting code at work, so today I downloaded the latest version (along with Sandcastle) to give the latest versions a whirl.

I was pleasantly surprised. The last version of SHFB I used was just an nDoc clone, but it's moved on a lot since then. It's got a whole pile of nice features now, including caching of all the stuff that used to make a build really slow; HTMLHelp 2 output; and more.

The best addition though is the support for static content. I actually couldn't figure out how to do it at first - in nDoc there used to be an AdditionalContent option (or something to that effect), and I knew the latest version of SHFB had support for this but I couldn't find an option for it. Google returned pre-2006 results which were incorrect. It was staring me in the face all along though - the newest version works like a VS solution: all you need to do is add the files to your project and they get automatically incorporated. Very nice.

Overall, I'm very impressed - it's literally hundreds of times faster than the previous version I used (if you enable the custom build components that come built-in) and it creates a far slicker output file.

Update: Error Using MSBuild

The latest version of SHFB no longer has a console application included - you need to use MSBuild instead. However, when trying to hook up my project to MSBuild as per the documentation, I kept getting the following error:

error MSB4057: The target "Build" does not exist in the project.

I think there's a bug in SHFB where it isn't building the project file correctly. Adding the following line at the bottom of my .shfbproj file sorted the issue out:

  <Import Project="$(SHFBROOT)\SandcastleHelpFileBuilder.targets" />
(You can add this directly above the closing </Project> tag.)

Tuesday, July 21, 2009

SQL Server Locking, Blocking and Waiting

Being able to see what is blocking or waiting on a SQL Server machine is essential for most developers these days. I find myself using the following queries almost daily at the moment, so I thought I'd post them here:

select
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    tx.[text] as ExecutingSQL
from sys.dm_os_waiting_tasks wt
inner join sys.dm_exec_connections ec on wt.session_id = ec.session_id
cross apply 
(
    select * from sys.dm_exec_sql_text(ec.most_recent_sql_handle)
) as tx
where wt.session_id > 50 and wt.wait_duration_ms > 0

select
    Blocked.session_id as Blocked_Session_ID
    ,Blocked_SQL.text as Blocked_SQL
    ,waits.wait_type as Blocked_Resource
    ,Blocking.session_id as Blocking_Session_ID
    ,Blocking_SQL.text as Blocking_SQL
from sys.dm_exec_connections as Blocking
inner join sys.dm_exec_requests as Blocked on Blocked.blocking_session_id = Blocking.session_id
cross apply
(
    select * from sys.dm_exec_sql_text(Blocking.most_recent_sql_handle)
) AS Blocking_SQL
cross apply
(
    select * from sys.dm_exec_sql_text(Blocked.sql_handle)
) as Blocked_SQL
inner join sys.dm_os_waiting_tasks as waits on waits.session_id = Blocked.session_id

Wednesday, July 1, 2009

SQL Server IDENTITY Columns

IDENTITY columns can be a real pain in the ass when data gets out of sync. GUIDs are generally a much better option when it comes to replication, but you aren't always in control of data structures and sometimes you inherit old systems that weren't designed to be replicated.

That being said, here are some tips on how to get around problems with IDENTITY columns and getting data in sync:


Inserting explicit IDENTITY values

You can temporarily allow explicit values to be inserted into an IDENTITY column with the following statement:
SET IDENTITY_INSERT <table> ON
You can now insert the identity values into your table. Don't forget to run the same statement afterwards, substituting OFF for ON!
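A quick sketch of the round trip, using a hypothetical dbo.MyTable whose Id column is the IDENTITY:

```sql
set identity_insert dbo.MyTable on

-- the explicit Id value is accepted while identity_insert is on
insert into dbo.MyTable (Id, Name)
values (100, 'explicitly inserted identity value')

set identity_insert dbo.MyTable off
```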

Reseeding the IDENTITY value

For example, if you want to reset the current identity value on a table to 100, you can use the following statement:
 DBCC CHECKIDENT ('schema.table', RESEED, 100)

Friday, June 19, 2009

The ASP.NET Page Life Cycle

The ASP.NET life cycle is as much a fantastic model for web developers as it is a pain in the butt. Whenever I change jobs, I encounter different approaches to the implementation of pages and controls, rarely with any regard to the correct order of doing things and usually with code that is very difficult to maintain as a result. If you can't rely on your controls to obey the page life cycle in terms of rendering, you've got a problem. For example, setting presentation properties during page load may cause trouble if developers then set the same properties in event handlers or during PreRender.

So, when should you do things? As a rule of thumb, I try to do data binding early (OnInit) and presentation setting late (OnPreRender, or in controls that output custom markup in the Render override). It's a tricky topic and something that seems to catch most developers - the concept of top-down development with classic ASP and PHP just doesn't apply.
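A sketch of that rule of thumb in a page class (myGrid, emptyLabel and LoadCustomers are made-up names for illustration):

```csharp
public partial class MyPage : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // bind data early, so control state is in place before postback events fire
        myGrid.DataSource = LoadCustomers();
        myGrid.DataBind();
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        // set presentation properties late, after all event handlers have run
        emptyLabel.Visible = (myGrid.Rows.Count == 0);
    }
}
```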

There's an excellent article on MSDN about the order of events here:

For my future reference, some items to consider:
  • Init and Load need to be handled carefully: the Init event for controls fires BEFORE the Init event for the Page; the Load event for controls fires AFTER the Load event for the Page.

Tuesday, June 2, 2009

SQL Server Express Remote Connections

I've been doing some work with SQL Server replication over the last few days, and in order to achieve this I was replicating between one of our DB servers and a SQL Server Express instance on my desktop. To my surprise, setting up remote connections to Express is actually a major pain in the butt, so I thought I'd document it here.


Firewall

The first step is to ensure that remote connections are allowed through your firewall. I was using the default Windows firewall, so I set up a new exception: I called it "SQL Server", with port number 1433, protocol TCP, and I changed the scope to be "My network (subnet) only".

SQL Server Configuration

There are some hidden settings that you need to enable before SQL Server Express will allow remote connections. You will need to load up the SQL Server Configuration Manager and do the following:
  • Under SQL Server 2005 Network Configuration, mark the TCP/IP status as Enabled
  • Right-click TCP/IP and go to Properties
  • Go to the IP Addresses tab, and scroll down to IP All
  • Remove the value in TCP Dynamic Ports, and enter "1433" (same value as you used in firewall) in the TCP Port value
SQL Server

The next step is to ensure SQL Server itself allows remote connections. From within SQL Server, right-click the SQL Server instance, click Properties, go to Connections, and check "Allow remote connections to this server". Finally, restart the Express service and you should be able to connect to your machine's SQL Server instance remotely.

Thursday, May 21, 2009

Protecting .NET Code with Dotfuscator

Although .NET compiles into binary .dlls, these assemblies are extremely easy to reverse-engineer using Reflector, or any other decent Reflection tool. Although it's not completely secure, the simplest way to protect your code is to run it through the free obfuscation tool that comes with Visual Studio: Dotfuscator Community Edition. This is available on your programs menu under Visual Studio Tools.

All you need to do is open up the program and load up your assembly. There are piles of options available, but the default settings are generally fine for most cases. Under the Input tab you can add all your input assemblies, provide an output folder under the Build tab, and hit the run button - this will build copies of your DLLs in which the namespaces, classes, methods, etc. are obfuscated into code that is extremely difficult to read. Of course, it is still readable: it can be decompiled and debugged, but it's a LOT harder to use, particularly for larger projects.

Some notes on the tool:
  • when specifying the output directories it sometimes had issues with long folder names - using "C:\Temp" was a simpler option and the build worked flawlessly from there
  • the community edition can be upgraded to the Enhanced Edition, just by registering (which is free)

Wednesday, May 20, 2009

Using Microsoft's Performance Monitor Tool

Knowing how to use PerfMon can be absolutely critical when faced with unknown performance problems on large information systems. When you're faced with a performance bottleneck in a large server farm, it can be very difficult to track down issues, as minor hiccups can have knock-on effects that are far more apparent to the user than on the server experiencing the source of the error, while other servers in the chain sit waiting for the problem to be resolved. PerfMon can be used to track hundreds of performance counters; this article purely serves as an introduction on how to use the tool, as it isn't apparent when you load it up.

When you run PerfMon, you can see current statistics and the counters currently running, but it doesn't provide an option to "Save" - it doesn't quite work like most applications. In order to start your own log, you need to expand the "Performance Logs and Alerts" tree item, and then select Counter Logs, Alerts, etc., depending on what you want to log. In the main display area, you can then right-click and create a new log.

If you are creating a Counter Log, you will see a log file name, and you will have the ability to add Objects or Counters to that log file. I usually select individual Counters rather than whole Objects, so I can zone in on the exact items I need to check.

One item to note is that by default, PerfMon will log to a binary file. You can change it to log directly to a .csv file or even to a database. However, if you do log to a binary file format, you can always use the command-line "relog" tool to convert it into other formats. For example, to convert to csv:
  relog MyLogFile.blg -f csv -o MyLogFile.csv
Once you have finished setting logging options, the log is then saved on the machine. You can stop and start logging by right-clicking on the log and clicking Start/Stop. The settings can be exported and imported to/from html format, and you can adjust properties of the log.

Tuesday, May 19, 2009

Encrypting cookies with ASP.NET

I hadn't noticed it before, but ASP.NET provides a really simple way to encrypt your cookies. Cryptography is a field best left to the experts, but for simple encryption purposes this method is perfectly adequate.

First off, you will need to add an entry to your machine/web.config:
  <machineKey validation="SHA1" decryption="AES" />
You can then encrypt/decrypt as follows:
  // encryption
  var ticket = new FormsAuthenticationTicket(2, "", DateTime.Now, DateTime.Now.AddMinutes(10), false, "mycookievalue");
  var encryptedData = FormsAuthentication.Encrypt(ticket);

  // decryption
  string myValue = FormsAuthentication.Decrypt(encryptedData).UserData;

SQL Server Query Plans

The caching of query plans in SQL Server is extremely important when it comes to application performance. The way this works is not very well understood - I still don't get all the intricacies of it but as a general rule of thumb, it's a good move to either:
  1. Use stored procedures: these get pre-compiled and allow the re-use of execution plans. They allow for parameters, allowing for a "shared" execution plan
  2. If the use of stored procs is not possible or not part of your design, use sp_executesql - do NOT use EXEC when running dynamic sql. sp_executesql, unlike EXEC, can be parameterised and therefore also allows for "shared" execution plans.
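A sketch of the parameterised form (MyTable and @id are made up for illustration) - the same cached plan is re-used no matter what value is passed:

```sql
exec sp_executesql
    N'select * from MyTable where Id = @id',  -- statement with parameter placeholder
    N'@id int',                               -- parameter declaration
    @id = 42                                  -- parameter value
```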
You can analyse the cached execution plans on SQL Server with the following statement:
with CachedPlans as (
select top 100
    p.usecounts,
    p.cacheobjtype,
    p.objtype,
    left([sql].[text], 100) as [text]
from sys.dm_exec_cached_plans p
outer apply sys.dm_exec_sql_text (p.plan_handle) [sql]
)
select * from CachedPlans
where [text] like '%Select * from MyTable%'
order by usecounts desc

Friday, May 15, 2009

JavaScript Dimensions and Element Positioning

Now that the main browsers have (sort of) merged in terms of DOM element positioning, it's a lot easier to position your elements on a web page than it used to be. I still forget off-hand what is what, so here's a table of the document objects and their properties that are useful when writing dynamic element sizing and/or positioning.

Screen, Document and Element Sizes

screen.width / screen.height - the width and height of the user's screen, e.g. 1440 x 900
screen.availWidth / screen.availHeight - the width and height of the user's screen excluding taskbars
document.body.scrollWidth / document.body.scrollHeight - the actual width and height of the document, including any scrolled content
document.body.clientWidth / document.body.clientHeight - the width and height of the visible document, not taking any scrolling into account
element.offsetWidth / element.offsetHeight - the width and height of an element. All style elements MUST be applied before getting these values, including display, visibility, etc.

Element Positioning

Unfortunately, getting an element's position isn't quite as easy, due to differing implementations across browsers. The following JavaScript function can be used to get the exact co-ordinates of an element within the document.

// gets the absolute position of an element within the document
function getAbsolutePosition(element) {
    var left = 0;
    var top = 0;
    // internet explorer
    if (element.offsetParent) {
        while (element.offsetParent) {
            left += element.offsetLeft;
            top += element.offsetTop;
            element = element.offsetParent;
        }
    }
    // other browsers
    else if (element.x) {
        left = element.x;
        top = element.y;
    }
    return { x: left, y: top };
}
For example, if you have an element with id 'myElement':
var pos = getAbsolutePosition(document.getElementById('myElement'));
var left = pos.x;
var top = pos.y;
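Outside a browser it's hard to see what the loop is actually doing, so here's a sketch using plain objects (hypothetical mocks standing in for DOM elements, not real ones) to show how the offsets accumulate up the offsetParent chain:

```javascript
// same logic as the function above, repeated so this sketch is self-contained
function getAbsolutePosition(element) {
    var left = 0;
    var top = 0;
    if (element.offsetParent) {
        while (element.offsetParent) {
            left += element.offsetLeft;
            top += element.offsetTop;
            element = element.offsetParent;
        }
    } else if (element.x) {
        left = element.x;
        top = element.y;
    }
    return { x: left, y: top };
}

// mock objects: a child nested inside a positioned parent, inside the body
var body = { offsetLeft: 0, offsetTop: 0, offsetParent: null };
var parent = { offsetLeft: 10, offsetTop: 20, offsetParent: body };
var child = { offsetLeft: 5, offsetTop: 7, offsetParent: parent };

var pos = getAbsolutePosition(child);
console.log(pos.x, pos.y); // 15 27 - the offsets accumulate up the chain
```

Each iteration adds the current element's offset and then walks up to its offsetParent, which is why the child's position is the sum of its own offset and its parent's.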

Thursday, May 14, 2009

Custom Dictionary Sections in .NET Config Files

When you need to add complex configuration structures to .NET config files, you will generally create your own custom configuration section classes, and implement them within your application. However, if you just need a standard key/value pair, you don't need a custom configuration type at all. Instead, you can just define a section using the System.Configuration.DictionarySectionHandler, and there you go - no code required.


Say, for instance, you want a list of status codes that get checked by your application:

In the App.config, define the section and implement the required values:
<configSections>
  <section name="StatusCodes" type="System.Configuration.DictionarySectionHandler" />
</configSections>
<StatusCodes>
  <clear />
  <add key="2" value="Two" />
  <add key="3" value="Three" />
  <add key="4" value="Four" />
  <add key="5" value="Five" />
</StatusCodes>
To read these values in code, all you need to do is the following, and you have a Hashtable containing all the values defined in the config file:
Hashtable statusCodes = ConfigurationManager.GetSection("StatusCodes") as Hashtable;

Monday, May 11, 2009

Creating and Consuming .NET Web Services

I quite liked this article by Dimitrios Markatos:

FxCop: Excluding rules in code

One of the tools we use at my current job to validate code as part of our automated build is FxCop. It can be a pain, but we've found it very useful in terms of standardising our code and rooting out all those unused variables that seem to grow to epic proportions in a project of any decent size.

However, as with all tools, it's not perfect, and sometimes it throws errors that really aren't of any significance and can be safely ignored. This can be done from within the FxCop project, but it can also be done via code, which means it will be forever ignored, even if the FxCop project file changes.

This is a 2-step process:
  1. Declare conditional compile symbol for your project named CODE_ANALYSIS (In Visual Studio 2005 under the Build Tab of the project properties, there is a "Conditional compilation symbols" input field)
  2. Mark your method or property with a "SuppressMessage" attribute (from the System.Diagnostics.CodeAnalysis namespace), as follows:
[SuppressMessage("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")]
public string MyProperty
{
    get { return "123"; }
}
The first parameter is the category of the rule, and the second is the rule code and name, separated by a colon.

Saturday, May 9, 2009

Google syntax highlighter

In setting up this blog, I was weighing up my options as to how to highlight code. You can do it manually by setting the colour of individual words, but that is just a pain in the ass, particularly for larger posts so I considered writing a JavaScript class for doing it. Fortunately, sense prevailed and I searched the web first, and I came across the Google syntax highlighter. This thing is awesome. It handles a number of different languages, displays line numbers automatically, and even has a number of utility options you can display like printing, clipboard copying, and more. Great stuff. Best of all, it's so easy to use. All you need to do is include the JavaScript files and the css file, and mark your code with some simple attributes:
 <pre name="code" class="brush: html">
   ... some code here ...
 </pre>
Finally, add some JavaScript to the end of your page, and that's it.
  SyntaxHighlighter.config.bloggerMode = true;
  SyntaxHighlighter.config.clipboardSwf = '';
  SyntaxHighlighter.all();
You don't need the first two lines of this script unless you're adding it to a blog. If you ARE adding it to a blog, you'll need to host the .css and .js files on an external site and add them to your blog template. The rest of the instructions remain the same.
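Putting it all together, a minimal page might look something like this. Note the file paths are placeholders (point them at wherever you host the SyntaxHighlighter files), and you'll need one brush script per language you want highlighted:

```html
<link href="styles/shCore.css" rel="stylesheet" type="text/css" />
<script src="scripts/shCore.js" type="text/javascript"></script>
<script src="scripts/shBrushXml.js" type="text/javascript"></script>

<pre name="code" class="brush: html">
  ... some code here ...
</pre>

<script type="text/javascript">
  SyntaxHighlighter.all();
</script>
```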


I'm creating this blog as a repository for anything technical I come across. As a programmer, this will generally involve discussions surrounding code, but I'll use it as a store for anything to do with computers.