Monthly Archives: January 2014

Migrations vs. DatabaseInitializer

An index of the HomeLibrary application posts can be found here.

As I mentioned in my previous post, I ran into an interesting situation with Entity Framework 6. It went a little like this…

I kicked off my data layer with some domain objects, a HomeLibraryContext class (deriving from DbContext) and a HomeLibraryInitializer class which inherits from DropCreateDatabaseAlways. The HomeLibraryInitializer contains a Seed method which inserts some data into the newly created database.

I also decided that I was going to use EntityFramework Migrations to enable me to make changes to the database as I evolve my domain model. To that end, I enabled migrations on the HomeLibrary.EF project using the Package Manager Console and created the first Migration:
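For reference, these are the standard EF6 Package Manager Console commands for that step (the project name here is from my solution; adjust it for yours):

```
PM> Enable-Migrations -ProjectName HomeLibrary.EF
PM> Add-Migration Initial
```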

internal sealed class Configuration : DbMigrationsConfiguration<Db.HomeLibraryContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }

    protected override void Seed(Db.HomeLibraryContext context)
    {
        new List<Person>
            {
                new Person {FirstName = "Terry", LastName = "Halpin"},
                new Person {FirstName = "Alan", LastName = "Turing"}
            }.ForEach(p => context.People.Add(p));
    }
}


The initial Migration:

public partial class Initial : DbMigration
{
    public override void Up()
    {
        CreateTable(
            "dbo.BookCovers",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    BookId = c.Int(nullable: false),
                    Edition = c.Int(nullable: false),
                    Cover = c.Binary(maxLength: 4000),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Books", t => t.BookId, cascadeDelete: true)
            .Index(t => t.BookId);

        CreateTable(
            "dbo.Books",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Title = c.String(nullable: false, maxLength: 4000),
                    Edition = c.Int(nullable: false),
                    PublisherId = c.Int(nullable: false),
                    TypeOfBook = c.Int(nullable: false),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Publishers", t => t.PublisherId, cascadeDelete: true)
            .Index(t => t.PublisherId);

        CreateTable(
            "dbo.People",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Email = c.String(nullable: false, maxLength: 4000),
                    IsAuthor = c.Boolean(nullable: false),
                    FirstName = c.String(nullable: false, maxLength: 4000),
                    LastName = c.String(nullable: false, maxLength: 4000),
                    Sobriquet = c.String(maxLength: 4000),
                })
            .PrimaryKey(t => t.Id);

        CreateTable(
            "dbo.Lendings",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    BookId = c.Int(nullable: false),
                    BorrowerId = c.Int(nullable: false),
                    DateLent = c.DateTime(nullable: false),
                    DueDate = c.DateTime(),
                    ReturnDate = c.DateTime(),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Books", t => t.BookId, cascadeDelete: true)
            .ForeignKey("dbo.People", t => t.BorrowerId, cascadeDelete: true)
            .Index(t => t.BookId)
            .Index(t => t.BorrowerId);

        CreateTable(
            "dbo.Comments",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    BookId = c.Int(nullable: false),
                    CommentText = c.String(nullable: false, maxLength: 4000),
                })
            .PrimaryKey(t => t.Id)
            .ForeignKey("dbo.Books", t => t.BookId, cascadeDelete: true)
            .Index(t => t.BookId);

        CreateTable(
            "dbo.Publishers",
            c => new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(nullable: false, maxLength: 4000),
                })
            .PrimaryKey(t => t.Id);

        CreateTable(
            "dbo.PersonBooks",
            c => new
                {
                    Person_Id = c.Int(nullable: false),
                    Book_Id = c.Int(nullable: false),
                })
            .PrimaryKey(t => new { t.Person_Id, t.Book_Id })
            .ForeignKey("dbo.People", t => t.Person_Id, cascadeDelete: true)
            .ForeignKey("dbo.Books", t => t.Book_Id, cascadeDelete: true)
            .Index(t => t.Person_Id)
            .Index(t => t.Book_Id);
    }

    public override void Down()
    {
        DropForeignKey("dbo.Books", "PublisherId", "dbo.Publishers");
        DropForeignKey("dbo.BookCovers", "BookId", "dbo.Books");
        DropForeignKey("dbo.Comments", "BookId", "dbo.Books");
        DropForeignKey("dbo.Lendings", "BorrowerId", "dbo.People");
        DropForeignKey("dbo.Lendings", "BookId", "dbo.Books");
        DropForeignKey("dbo.PersonBooks", "Book_Id", "dbo.Books");
        DropForeignKey("dbo.PersonBooks", "Person_Id", "dbo.People");
        DropIndex("dbo.Books", new[] { "PublisherId" });
        DropIndex("dbo.BookCovers", new[] { "BookId" });
        DropIndex("dbo.Comments", new[] { "BookId" });
        DropIndex("dbo.Lendings", new[] { "BorrowerId" });
        DropIndex("dbo.Lendings", new[] { "BookId" });
        DropIndex("dbo.PersonBooks", new[] { "Book_Id" });
        DropIndex("dbo.PersonBooks", new[] { "Person_Id" });
        DropTable("dbo.PersonBooks");
        DropTable("dbo.Publishers");
        DropTable("dbo.Comments");
        DropTable("dbo.Lendings");
        DropTable("dbo.People");
        DropTable("dbo.Books");
        DropTable("dbo.BookCovers");
    }
}

I then started running into difficulties. Strange things were happening and the SQL Server Compact database was not responding to my Migrations commands in the way I expected. So, I turned to Google.

As it turns out, there are two options for seeding the database using Code First and they are mutually exclusive:

  1. The original EF way of creating an Initializer which inherits from DropCreateDatabaseAlways or DropCreateDatabaseIfModelChanges. You can see the code for this option in my last post.
  2. Using Migrations, which relies on the Seed method in the Configuration class (which inherits from DbMigrationsConfiguration).

For this project, I preferred the original EF way, as I have found it much simpler to work with. I am, however, going to try to have my cake and eat it too. When I made this decision, I found that all I had to do to disable Migrations was exclude the Migrations directory from my solution. If I do want to change the schema again and use Migrations to create a Migration, I can just include that folder again, comment out the code which sets the DatabaseInitializer, and create a new Migration based on the current state of the domain classes. Let’s see whether this approach continues to work!

Home Library

An index of the HomeLibrary application posts can be found here.

This year I have set a little project for myself. A bit of background first. Last year I released a project called WinformsMVP, which is an MVP framework for the Winforms platform. The example code that I provided with the source was quite trivial. Prior to that, in my first year as a programmer, I created a Winforms application which assisted my study for the Winforms MCTS certification. I thought it would be nice to take that application and re-implement it using the WinformsMVP framework. I will also take the opportunity to use up-to-date tools like Entity Framework 6 (Code First) and perhaps a few other little utilities which I have come across in my travels.

I have already made a start on the application, which I am going to call Home Library. It is basically an application which individuals can use to track their lendings of books to others. I’ve lost many books over the years because they were not returned and I did not track who I lent them to. This application helped me address that, and it was fun to make.

The code for this project is available at this GitHub repository. I plan to tag the code at various milestones, and the tag for the current state of the code is called DataAccessAndDomain_1. Mind you, that doesn’t mean I won’t go back and refactor code. But I think tagging it at various milestones will be helpful as I blog about the project as it progresses.

I don’t have much in the way of Business Analysis skills; and in any case, I am the client and subject-matter expert here. So I got to work and created the domain classes which I used to generate my database. The database schema looks like this (diagram generated using the Entity Framework Power Tools): Home Library Schema

To give you an idea of the domain classes, here are a couple which I have created:

public class Book
{
    public int Id { get; set; }
    public string Title { get; set; }
    public Edition Edition { get; set; }
    public Publisher Publisher { get; set; }
    public int PublisherId { get; set; }
    public BookType TypeOfBook { get; set; }

    public virtual ICollection<Person> Authors { get; set; }
    public virtual ICollection<Comment> Comments { get; set; }
    public virtual ICollection<BookCover> Covers { get; set; }
    public virtual ICollection<Lending> Lendings { get; set; }
}

public class Person
{
    public int Id { get; set; }
    public string Email { get; set; }
    public bool IsAuthor { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Sobriquet { get; set; }

    public virtual ICollection<Lending> Lendings { get; set; }
    public virtual ICollection<Book> Books { get; set; }
}

Those classes are in a separate project called HomeLibrary.Model.
You can see in the Book class that I have used a couple of enums. As Entity Framework 6 supports enums, it made sense to use them for abstractions which represent a finite number of options. The BookType enum looks like this:

public enum BookType
{
    TextBook = 0,
    Novel = 1
}

For the classes which will do the actual querying, I created a separate project called HomeLibrary.Model.EF. The HomeLibraryContext (which inherits from DbContext) is as follows:

public class HomeLibraryContext : DbContext
{
    //  DbSets go here
    public DbSet<Book> Books { get; set; }
    public DbSet<BookCover> BookCovers { get; set; }
    public DbSet<Comment> Comments { get; set; }
    public DbSet<Lending> Lendings { get; set; }
    public DbSet<Person> People { get; set; }
    public DbSet<Publisher> Publishers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        //  set up the Publishers table
        modelBuilder.Entity<Publisher>().HasKey(p => p.Id);
        modelBuilder.Entity<Publisher>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        modelBuilder.Entity<Publisher>().Property(p => p.Name).IsRequired().IsVariableLength();

        //  set up the People table
        modelBuilder.Entity<Person>().HasKey(p => p.Id);
        modelBuilder.Entity<Person>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        modelBuilder.Entity<Person>().Property(p => p.Email).IsRequired().IsVariableLength();
        modelBuilder.Entity<Person>().Property(p => p.FirstName).IsRequired().IsVariableLength();
        modelBuilder.Entity<Person>().Property(p => p.LastName).IsRequired().IsVariableLength();
        modelBuilder.Entity<Person>().Property(p => p.Sobriquet).IsOptional().IsVariableLength();

        //  set up the Comment table
        modelBuilder.Entity<Comment>().HasKey(p => p.Id);
        modelBuilder.Entity<Comment>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        modelBuilder.Entity<Comment>().Property(c => c.CommentText).IsRequired().IsVariableLength();

        //  set up the Book table
        modelBuilder.Entity<Book>().HasKey(p => p.Id);
        modelBuilder.Entity<Book>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        modelBuilder.Entity<Book>().Property(c => c.Title).IsRequired().IsVariableLength();

        modelBuilder.Entity<BookCover>().HasKey(p => p.Id);
        modelBuilder.Entity<BookCover>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);

        modelBuilder.Entity<Lending>().HasKey(p => p.Id);
        modelBuilder.Entity<Lending>().Property(p => p.Id).HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
    }
}


You can see that I have opted to use the Fluent API for setting up the various tables, rather than the attribute-based (data annotations) API. Otherwise, it is a very straightforward context.

I faced an interesting scenario with seeding which I will leave for a separate blog post. The upshot of it was that I elected to do the database creation and seeding using the original EF Code First method of inheriting from a class which implements IDatabaseInitializer. As I wanted to drop and re-create the database each time I ran the application during development, I opted to inherit from DropCreateDatabaseAlways. The HomeLibraryInitializer looks like:

public class HomeLibraryInitializer : DropCreateDatabaseAlways<HomeLibraryContext>
{
    protected override void Seed(HomeLibraryContext context)
    {
        new List<Person>
            {
                new Person { FirstName = "Terry", LastName = "Halpin", Email = "hi", IsAuthor = false },
                new Person { FirstName = "Alan", LastName = "Turing", Email = "hi", IsAuthor = false }
            }.ForEach(p => context.People.Add(p));
    }
}



Stay tuned for future posts!

Extant – Testing for null and undefined in One Fell Swoop

I’m currently reading the book Functional JavaScript: Introducing Functional Programming with Underscore.js

Early on in the first chapter, I stumbled upon an awesome line of code which is beautiful in its simplicity and effectiveness. One of the foibles of JavaScript is that you have to test for both null values and the undefined value. The following function does that in one sweet line of code:

function existy(x) {
    return x != null;
}

The loose inequality operator (!=) applies JavaScript’s abstract equality rules, and those rules say that null and undefined are loosely equal to each other and to nothing else. So if x is null or undefined, x != null evaluates to false; for any other value, including falsey ones like 0 and the empty string, it evaluates to true.
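A quick sketch of how null and undefined behave under loose equality (you can paste this into any JavaScript console):

```javascript
// null and undefined are loosely equal to each other, and to nothing else.
console.log(null == undefined);  // true
console.log(null == 0);          // false
console.log(null == false);      // false
console.log(undefined == '');    // false
console.log(0 == '');            // true (other falsey values DO coerce to each other)
```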
The only problem with it is the name. I’m not really sold on existy. So whenever I use that function, it will be called extant. It can be used as follows:

extant(null);               //=> false
extant(undefined);          //=> false
extant({}.notHere);         //=> false
extant((function(){})());   //=> false
extant(0);                  //=> true
extant(false);              //=> true

The thing I love about that function is that it abstracts away the annoyance of checking for both undefined and null into one short, sharp function call. Admittedly, I tended to test for undefined more than null, as null is something typically set by developers, whereas undefined is set by the environment. But still, it’s a lot nicer than:

if(typeof someVariable === 'undefined')
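As a sketch of the difference, here is extant (as defined above) applied to falsey-but-present values, which a plain truthiness check like if (x) { ... } would reject:

```javascript
function extant(x) {
    return x != null;
}

// Falsey values that nevertheless "exist" pass the test.
console.log(extant(0));         // true
console.log(extant(''));        // true
console.log(extant(false));     // true

// Only the two "absent" values fail it.
console.log(extant(null));      // false
console.log(extant(undefined)); // false
```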

Robocopy Powershell Script for Home Back-up

In this post, I am going to share my backup script for my home data needs. First, I will explain the problem that my script addresses.

When I built my computer a little over a year ago, I decided I was going to do backup properly and purchased a licence for Acronis True Image Home 2012. This product has proved to be quite a disappointment. Acronis does a fine job (I guess) of backing up a system drive, so I still use it to create images of my C drive. This is important because the operating system is running on an SSD, and SSDs are known to drop dead without any advance warning. So having an image of your system drive is imperative if it is running on an SSD.

Data is a different story. The problem with an application like Acronis is that it doesn’t offer a way of mirroring a data drive. All of its backup types result in a container/image being created with a .tib extension. I want to be able to mirror my data and to be able to surgically pick and choose which directories get backed up.

So I started hunting around and came across this very cool script. In the event that that blog disappears, you can download the original script by clicking the following button:

That was my starting point. I finessed the script a little to tailor it more to my needs and to make it more flexible. The script in its original form is hard-coded to one directory, so if you had a swag of different directory trees which you wanted to back up, you would have to create a separate version of that script for each of them (which is what I originally did). My amended version is more generalised and includes some input parameters, so that it can be re-used for various directory trees. I also changed the flags which Robocopy is called with to:

  • suit my goal of creating a mirror image of whatever directory I am backing up; and
  • to meet my logging needs.

My script is as follows:

## ================================================================
## Script name: MirrorDirectories.ps1
## ================================================================

## This script mirrors a directory tree from source to destination with the Windows built-in command robocopy.
## Exit codes from robocopy are logged to the Windows Event Log.

## Usage: Run with administrative rights in Windows Task Scheduler or Administrator:PowerShell.
## If not executed with administrative privileges, the script will not write to the event log.

## Amended by David Alan Rogers Esq. tailored for his devious designs!!!

## ================================================================
## Change these parameters as relevant
## ================================================================

Param([string]$dirToBackup, [string]$fullPathOfDirToBackup, [string]$jobName, [string]$nasShare, [string]$logDirectory)

## Name of the job, name of source in Windows Event Log and name of robocopy logfile.
$JOB = $jobName + ($dirToBackup -replace "\s+", "")

## Source directory
$SOURCE = $fullPathOfDirToBackup

## Destination directory. Files in this directory will mirror the source directory. Extra files will be deleted!
$DESTINATION = join-path -path $nasShare -childpath $dirToBackup
Write-Host "JOB: $JOB"
Write-Host "SOURCE: $SOURCE"

## Path to robocopy logfile
$LOGFILE = join-path -path $logDirectory -childpath $JOB
## Log events from the script to this location
$SCRIPTLOG = $LOGFILE + "-scriptlog.log"

## Mirror a directory tree. Equivalent to /e /purge
## /e		: Copies subdirectories. Note that this option includes empty directories.
## /purge	: Deletes destination files and directories that no longer exist in the source.
$WHAT = @("/MIR")

## /R:3		: Retry open files 3 times.
## /W:5 	: Wait 5 seconds between tries.
## /FFT 	: Assume FAT file times (2-second granularity). The target folder is ext2/ext3, and those file systems also implement file times with 2-second granularity; NTFS does not.
## /Z		: Ensures robocopy can resume the transfer of a large file in mid-file instead of restarting.
## /XA:H	: Makes robocopy ignore hidden files; usually these will be system files that we're not interested in.
## /COPY:DT : Turns off attribute copying. /COPY:DAT copies file attributes and is the default; removing the A prevents attributes being copied.
## /NP      : No progress - don’t display % copied.
$OPTIONS = @("/R:3","/W:5","/FFT","/Z","/XA:H","/COPY:DT","/NP")

## This will create a timestamp like yyyy-mm-dd
$TIMESTAMP = get-date -uformat "%Y-%m-%d"

## This will get the time like HH:MM:SS
$TIME = get-date -uformat "%T"

## Append to robocopy logfile with timestamp
$LOG = "/LOG+:$LOGFILE`-Robocopy`-$TIMESTAMP.log"

## Wrap all above arguments
$cmdArgs = @("$SOURCE","$DESTINATION",$WHAT,$OPTIONS,$LOG)

## ================================================================

## Start the robocopy with above parameters and log errors in the Windows Event Log.
& C:\Windows\SysWOW64\Robocopy.exe @cmdArgs

## Get LastExitCode and store in variable
$ExitCode = $LastExitCode

Write-Host "ExitCode: $ExitCode"

## Message descriptions for each ExitCode.
$MSG = @{
"16"="Serious error. robocopy did not copy any files.`n
Examine the output log: $LOGFILE`-Robocopy`-$TIMESTAMP.log";
"8"="Some files or directories could not be copied (copy errors occurred and the retry limit was exceeded).`n
Check these errors further: $LOGFILE`-Robocopy`-$TIMESTAMP.log";
"4"="Some Mismatched files or directories were detected.`n
Examine the output log: $LOGFILE`-Robocopy`-$TIMESTAMP.log.`
Housekeeping is probably necessary.";
"2"="Some Extra files or directories were detected and removed in $DESTINATION.`n
Check the output log for details: $LOGFILE`-Robocopy`-$TIMESTAMP.log";
"1"="New files from $SOURCE copied to $DESTINATION.`n
Check the output log for details: $LOGFILE`-Robocopy`-$TIMESTAMP.log";
"0"="$SOURCE and $DESTINATION in sync. No files copied.`n
Check the output log for details: $LOGFILE`-Robocopy`-$TIMESTAMP.log"
}

## EventLog EntryType for each ExitCode.
$MSGType = @{
"16"="Error";
"8"="Error";
"4"="Warning";
"2"="Information";
"1"="Information";
"0"="Information"
}

## Function to see if running with administrator privileges
function Test-Administrator
{
    $user = [Security.Principal.WindowsIdentity]::GetCurrent();
    (New-Object Security.Principal.WindowsPrincipal $user).IsInRole([Security.Principal.WindowsBuiltinRole]::Administrator)
}

## If running with administrator privileges
If (Test-Administrator -eq $True) {
	"Has administrator privileges"
	## Create EventLog Source if it does not already exist
	if ([System.Diagnostics.EventLog]::SourceExists("$JOB") -eq $false) {
		"Creating EventLog Source `"$JOB`""
		[System.Diagnostics.EventLog]::CreateEventSource("$JOB", "Application")
	}
	## Write known ExitCodes to EventLog
	if ($MSG."$ExitCode" -gt $null) {
		Write-EventLog -LogName Application -Source $JOB -EventID $ExitCode -EntryType $MSGType."$ExitCode" -Message $MSG."$ExitCode"
	}
	## Write unknown ExitCodes to EventLog
	else {
		Write-EventLog -LogName Application -Source $JOB -EventID $ExitCode -EntryType Warning -Message "Unknown ExitCode. EventID equals ExitCode"
	}
}
## If not running with administrator privileges
else {
	## Write to screen and logfile
	Add-content $SCRIPTLOG "$TIMESTAMP $TIME No administrator privileges" -PassThru
	Add-content $SCRIPTLOG "$TIMESTAMP $TIME Cannot write to EventLog" -PassThru
	## Write known ExitCodes to screen and logfile
	if ($MSG."$ExitCode" -gt $null) {
		Add-content $SCRIPTLOG "$TIMESTAMP $TIME Printing message to logfile:" -PassThru
		Add-content $SCRIPTLOG ($TIMESTAMP + ' ' + $TIME + ' ' + $MSG."$ExitCode") -PassThru
		Add-content $SCRIPTLOG "$TIMESTAMP $TIME ExitCode`=$ExitCode" -PassThru
	}
	## Write unknown ExitCodes to screen and logfile
	else {
		Add-content $SCRIPTLOG "$TIMESTAMP $TIME ExitCode`=$ExitCode (UNKNOWN)" -PassThru
	}
	Add-content $SCRIPTLOG ""
}

In order to use that script, I have created another script that feeds it the required parameters. That script is as follows:

## Common variables for the backup operations
$InvokedFrom = (Split-Path $MyInvocation.InvocationName)
$MirrorScriptPath = join-path -path $InvokedFrom -childpath MirrorDirectories.ps1
$JobName = "EDriveBakJob-"
$NasDirectoryForBaks = "\\BACKUPNAS\plaguisebak"
$logDirectory = "H:\TestThing"

## Begin to massage variables into the final string which will be executed
$constantVariables = [string]::Format(" -jobName '{0}' -nasShare '{1}' -logDirectory '{2}'", $JobName, $NasDirectoryForBaks, $logDirectory)
$scriptPlusFolderSpecificVariables = $MirrorScriptPath + " -dirToBackup '{0}' -fullPathOfDirToBackup '{1}'" + $constantVariables

## A function to provide the completely finished string to be executed
Function Get-Full-Line-To-Execute([string]$DirName, [string]$DirFullPath) {
    $returnString = [string]::Format($scriptPlusFolderSpecificVariables, $DirName, $DirFullPath)
    return $returnString
}

## ******************************************** Backup Operations ********************************************
## E:\Documents
$DirectoryName = "Documents"
$DirectoryFullPath = "E:\Documents"
$ExePlusArgsDocuments = Get-Full-Line-To-Execute $DirectoryName $DirectoryFullPath
write-host $ExePlusArgsDocuments "`r`n"
invoke-expression -Command $ExePlusArgsDocuments

write-host "$DirectoryName directory done!`r`n"

## E:\Jeremia
$DirectoryName = "Jeremia"
$DirectoryFullPath = "E:\Jeremia"
$ExePlusArgsJeremia = Get-Full-Line-To-Execute $DirectoryName $DirectoryFullPath
write-host $ExePlusArgsJeremia "`r`n"
invoke-expression -Command $ExePlusArgsJeremia

write-host "$DirectoryName directory done!`r`n"

## E:\Mozilla
$DirectoryName = "Mozilla"
$DirectoryFullPath = "E:\Mozilla"
$ExePlusArgsMozilla = Get-Full-Line-To-Execute $DirectoryName $DirectoryFullPath
write-host $ExePlusArgsMozilla "`r`n"
invoke-expression -Command $ExePlusArgsMozilla

write-host "$DirectoryName directory done!`r`n"

As you can see, all I have to do to add a directory to the backup operation is to create another section under the area delineated by the Backup Operations comment.

A few comments about that calling script:

  1. I keep this script in the same directory as the MirrorDirectories.ps1 script. This can be changed, but you’ll have to set the $MirrorScriptPath variable to the full path of its location.
  2. $JobName is set to whatever tickles your fancy.
  3. $NasDirectoryForBaks is set to the overarching backup directory which will contain all of the directories which I back up.
  4. $logDirectory will contain the logs which Robocopy writes out.

To explain the paths a little more, the overarching directory will be something like \\BACKUPNAS\plaguisebak (in my environment that is a share on a QNAP NAS). Then, in each operation, a target folder is specified, such that the full path will be the path to the overarching directory plus the target folder. For example, if I were backing up E:\Code, $NasDirectoryForBaks would be set to \\BACKUPNAS\plaguisebak and $DirectoryFullPath (lower in the script) to E:\Code, with the $DirectoryName variable set to Code. This results in E:\Code being mirrored to \\BACKUPNAS\plaguisebak\Code. It is important to do that, because if you set the target of each backup operation to \\BACKUPNAS\plaguisebak without any subfolder target, each backup operation will delete and overwrite whatever is in \\BACKUPNAS\plaguisebak.

As Niklas notes in the blog post in which he explains his script, the interesting aspect of it is the fact that it writes messages to the Windows Event Log. If something goes wrong, I can look there and see what error code Robocopy exited with. Here, we can see that the Robocopy operation exited with a code of 1 and the path to the log file is displayed:
Backup Succeeded

In this case, there was a problem (the SQL Server service was still running), and it exited in an error state with a code of 8:
Backup Failed


A quick warning about my script. When I say mirror, I mean mirror. So, if you delete a file or directory from the source, the next time you run the script it will be removed from the destination (the backup location on my NAS). If you do want to retain a copy of something for long-term backup but want to remove it from your day-to-day system, you just need to copy it from either location to a third backup location. This is not a common occurrence for me. But what it does mean is that before I delete something from my machine, I have a think about whether I want to store it elsewhere for long-term persistence.

Get my scripts:

jQuery UI DatePicker with ASP.NET MVC

On a project last year I had to use the jQuery UI datepicker with ASP.NET MVC 3. I always intended to record the steps with explanations when I got a chance, and last week I managed to find the time to do so:

One thing that I did in the implementation was include a colon in the Display attribute of my model:

        [Display(Name="Date of Birth: ")]
        public DateTime DateOfBirth { get; set; }

That leads to the non-optimal scenario whereby a colon shows up in the validation message. In the video, I demonstrated how you can fix this in client-side validation. But I forgot to demonstrate how to fix it in server-side validation (on the off chance that your users have JavaScript disabled). You can use the following code in the action method of the controller:

            if (!ModelState.IsValid)
            {
                //  HACK: This will remove the colon for server-side validation. I recommend not including a colon in the
                //  Display attribute on the relevant member in your model. Just put the colon in the Razor view.
                ModelState["DateOfBirth"].Errors.Clear();
                ModelState["DateOfBirth"].Errors.Add(string.Format("The value '{0}' is not valid for Date of Birth.", Request.Form["DateOfBirth"]));
            }

As I mention in that comment, I believe that a colon should not be included in the model and could easily be added in the Razor view instead. That would obviate the need for the hack entirely.

Get the code: