The worst program I ever worked on

The worst program I ever worked on was something I was asked to maintain once. It consisted of two parts. The first was a web application written in ASP. The second portion was essentially Microsoft Reporting Services reimplemented in 80,000 lines of VB.NET.

The first thing I did was chuck it into VS2010 and run some code metrics over it. The results: ten or so methods had 2000+ lines of code each. The maintainability index was 0 (a number between 0 and 100, where 0 is unmaintainable). The worst function had a cyclomatic complexity of 2700 (the worst I had ever seen on a function before was 750 odd). It was full of nested in-line dynamic SQL, all of which referred to tables with 100+ columns bearing helpful names like sdf_324. There were about 5000 stored procedures, most of which were 90% similar to others with a similar naming scheme. There were no foreign key constraints in the database, and every query, including updates, inserts and deletes, used NOLOCK (so no data integrity). It all lived in a single 80,000 line file, which crashed VS every time you tried to make a simple edit.

I essentially told my boss I would quit over it, as there was no way I could support it without other aspects of my work suffering. Thankfully it was put in the too-hard basket and nobody else had to endure my pain. I ended up peer reviewing the changes the guy made some time later, and a single column update touched in the order of 500 lines of code.

There was one interesting thing I found with it, however: there was so much repeated/nested if code in its methods that if you held down page down, it looked like the page was moving the other way, similar to how a wheel on TV can look like it's spinning backwards.

Clean Repository Data Access in C#

Mostly as a self-reference, here is an extremely clean data access pattern possible using C# and Entity Framework. It saves you the effort of mocking the database context, as the code you end up writing is so simple it is all compile-time checked.

Essentially you define a very simple class which provides a single method for getting data (although you may want a save method too), and make sure you add an interface to make unit testing/mocking easier.

public interface IUrlRepository
{
	IQueryable<Url> GetUrl();
	void Save(Url url);
}

public class UrlRepository : IUrlRepository
{
	private readonly DbContext _context;

	public UrlRepository()
	{
		_context = new DbContext();
	}

	public IQueryable<Url> GetUrl()
	{
		return from u in _context.Set<Url>()
			   select u;
	}

	public void Save(Url url)
	{
		_context.Set<Url>().Add(url);
		_context.SaveChanges();
	}
}

As you can see, rather than returning a list you return an IQueryable. Because Entity Framework is lazy, you can then add extension methods over the return type like so.

public static class UrlRepositoryExtension
{
	public static IQueryable<Url> ByCreatedBy(this IQueryable<Url> url, string user)
	{
		return url.Where(p => p.Created_By.Equals(user));
	}

	public static IQueryable<Url> OrderByCreateDate(this IQueryable<Url> url)
	{
		return url.OrderByDescending(x => x.Create_Date);
	}
}

With this you end up with a very nice method of running queries over your data.

var url = _urlRepo.GetUrl().OrderByCreateDate();

Since it can all be chained you can just add more filters easily as well.

var url = _urlRepo.GetUrl().OrderByCreateDate().ByCreatedBy("Ben Boyter");

What about joins, I hear you ask? Well, thankfully this pattern takes care of those too. Just have two repositories, pull the full data set from each and do the following.

var users = _userRepo.GetUser();
var locations = _locationRepo.GetLocation();

var result = from user in users
             join location in locations on user.locationid equals
             where == "Parramatta"
             select user;

The best thing is that it's all lazily evaluated, so you don't end up pulling the full data set back into memory. Of course at a large enough scale you will probably hit some sort of leaky abstraction issue and end up rewriting it in pure SQL at some point, but for getting started this method of data access is incredibly powerful with few chances for errors.

Finally, you get the advantage that you can write pure unit tests over your joins. Because you can mock the response from your repository easily, you don't have to create a seed database and provide a connection. This is fantastic for TDD, especially when running offline or on your local machine.
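As a sketch of that mocking claim (the FakeUrlRepository name and its in-memory list are assumptions for this example, not part of the pattern above), a hand-rolled fake can implement the same interface without touching a database:

```csharp
using System.Collections.Generic;
using System.Linq;

// A hand-rolled fake for tests: same interface, no database connection.
// FakeUrlRepository is invented for illustration.
public class FakeUrlRepository : IUrlRepository
{
	private readonly List<Url> _urls = new List<Url>();

	public IQueryable<Url> GetUrl()
	{
		// AsQueryable lets the same extension methods run over in-memory data
		return _urls.AsQueryable();
	}

	public void Save(Url url)
	{
		_urls.Add(url);
	}
}
```

A test can then Save a couple of Url objects and assert on the result of GetUrl().ByCreatedBy(...) directly, with no seed database or connection string involved.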

jQuery Hello World Plugin

I was doing a simple job test the other day and one of the questions involved creating a simple jQuery plugin. Having never created one myself, I had to look into how to do it. I couldn't find a dead simple hello world plugin example, so I thought I would create one here for people to look at.

(function( $ ){
  $.fn.HelloWorld = function() {
    return $(this).html('Hello World!');
  };
})( jQuery );

The above essentially just attaches a new function called HelloWorld to the basic jQuery object. You can then call it using the below.

$(document).ready(function () {
  $('body').HelloWorld();
});

What the above does is call the HelloWorld function on a jQuery object wrapping the body element. This in turn calls our plugin, which sets "Hello World!" as the body element's content. The result is that you see the text "Hello World!" appear in the body.

You can download the jQuery hello world plugin and supporting files here to run it yourself if you feel so inclined.

How I store Enumerations in the Database

One of the things I come across in databases now and then is a collection of small tables with names like "MessageType". You have a look in them and each turns out to have 6 or so rows with no foreign key relationships. Every single time, it turns out someone had the idea to store an enumeration (enum) type in the database. Not a bad idea, as it turns out, since you can add sort options, soft deletes and the like, but the implementation of a single table for each enum is flawed.

The following is how I deal with it (probably not ideal, but it works well for me). Essentially you define two tables in the database with names like Lookup and Value. Inside Lookup you have something similar to the following.

|id    |
|name  |

This is basically a representation of the enum name. Id is usually an auto-incrementing id to make joins easy, while name is the primary key. This is the definition of the enum, i.e. the name part in the database, or in our example "Message".

Then you add the enum values to your Value table, which looks similar to the below.

|id       |
|lookupid |
|name     |
|deleted  |

Then through the power of a simple join you can get your enum values,

SELECT * FROM Value v INNER JOIN Lookup l ON l.id = v.lookupid WHERE l.name = '?';

Adding a simple index over lookupid and id ensures that any lookups are pretty fast (it's a very simple join), and you are away. A single place to look for your enum values, the ability to add all sorts of extra metadata to your enums, and no more creation of dozens of small tables in your database.
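To make that concrete, seeding the two tables for this article's "Message" example might look like the below (the value names Email and SMS are invented purely for illustration):

```sql
-- Define the enum itself in Lookup
INSERT INTO Lookup (id, name) VALUES (1, 'Message');

-- Add its values to Value, with the soft-delete flag off
INSERT INTO Value (id, lookupid, name, deleted) VALUES (1, 1, 'Email', 0);
INSERT INTO Value (id, lookupid, name, deleted) VALUES (2, 1, 'SMS', 0);

-- The join from above then returns just this enum's live values
SELECT v.name
FROM Value v INNER JOIN Lookup l ON l.id = v.lookupid
WHERE l.name = 'Message' AND v.deleted = 0;
```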