FluentValidation and Emails with White Space

Quite often, users will copy and paste an email address into an email text input, and that address will come with trailing white space. If it gets past your client-side validation, you need to trim it at the server, which is usually not a problem. But if you are using the email validator in FluentValidation, such an address will fail validation, and the message sent back to the user is a generic one saying the email address is not valid. It won’t mention the trailing space, and most users take quite a while to work out that trailing whitespace is the culprit. If our help desk is anything to go by, many users never figure it out at all, so it fell to me to address this.

After a lengthy discussion with our domain experts, we decided that we did not want an email address to be considered invalid just because of a trailing space. Technically it is invalid, but it is easy enough to trim in the Services layer; we just have to get past server-side validation first. My existing validator, which was returning an error message about an invalid email address, looked something like this:

RuleFor(m => m.NewEmail)
	.EmailAddress()	
	.WithMessage(ValidationConstants.SymbolIsNotAValidEmailAddress, x => x.NewEmail)
	.WithName("NewEmail"); 

As I mentioned above, the EmailAddress() validator will fail an email address with a trailing space. Before I move on to the custom validator I ended up writing, I just want to quickly note something we tried which turned out to be a bad idea. First, I tried the following:

RuleFor(m => m.NewEmail.Trim())
	.EmailAddress()
	.When(m => m.NewEmail != null)
	.WithMessage(ValidationConstants.SymbolIsNotAValidEmailAddress, x => x.NewEmail)
	.WithName("NewEmail"); 

Trimming the email property inside the RuleFor expression (in the first line) immediately felt wrong. We also ran into a few problems with that approach, which are beyond the scope of this post. But most importantly, mutating the property in the RuleFor lambda is not the way FluentValidation is meant to be used.

The right approach, the way we saw it, was to write a custom validator which would validate the email in the same way the normal EmailAddress validator does, except that it permits trailing whitespace. Step 1 in this process was to grab the source of the EmailAddress validator from the FluentValidation repository. Step 2 was to create the new validator itself, based on the EmailValidator from Step 1, which I creatively named NonTrimmedEmailValidator:

public class NonTrimmedEmailValidator : PropertyValidator, IRegularExpressionValidator, IEmailValidator
{
    private readonly Regex regex;

    // The same RFC-flavoured pattern the stock EmailValidator uses.
    const string expression = @"^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|\.|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+([a-z]+|\d|-|\.{0,1}|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])?([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))$";

    public NonTrimmedEmailValidator()
        : base(new LanguageStringSource("NonTrimmedEmailValidator"))
    {
        regex = new Regex(expression, RegexOptions.IgnoreCase);
    }

    protected override bool IsValid(PropertyValidatorContext context)
    {
        if (context.PropertyValue == null) return true;

        // We are only interested in whether the trimmed string is a valid email address.
        var trimmedEmail = ((string)context.PropertyValue).Trim();

        return regex.IsMatch(trimmedEmail);
    }

    public string Expression
    {
        get { return expression; }
    }
}

Step 3 was to create an extension method to make it usable like the other fluent validators:

public static class FluentValidationExtensions
{
    // Constrained to string properties, since the validator casts the property value to string.
    public static IRuleBuilderOptions<T, string> NonTrimmedStringIsValidEmailAddressWhenTrimmed<T>(this IRuleBuilder<T, string> ruleBuilder)
    {
        return ruleBuilder.SetValidator(new NonTrimmedEmailValidator());
    }
}
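
As a quick sanity check, here is a test sketch. This is my own, not part of the original solution; it assumes a hypothetical ChangeEmailValidator which applies the extension method to its NewEmail rule, and it uses FluentValidation’s test helper extensions.

using FluentValidation.TestHelper;

public class NonTrimmedEmailValidatorTests
{
    // Decorate with your test framework's attribute of choice (e.g. [Fact] for xUnit).
    public void Email_With_Trailing_Space_Passes_Validation()
    {
        var validator = new ChangeEmailValidator(); // hypothetical validator class

        // The stock EmailAddress() validator would fail this value because of the trailing space.
        validator.ShouldNotHaveValidationErrorFor(x => x.NewEmail, "jane@example.com ");
    }
}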

Now we can validate objects whose email-address properties arrive with a trailing space:

RuleFor(m => m.NewEmail)
	.NonTrimmedStringIsValidEmailAddressWhenTrimmed()
	.WithMessage(ValidationConstants.SymbolIsNotAValidEmailAddress, x => x.NewEmail)
	.WithName("NewEmail");

And, as I was doing anyway, trim the string in the Services layer. Now the user no longer gets a validation failure when they copy and paste an email address from somewhere and it happens to bring a bit of white space along on the end.
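
For completeness, here is roughly what that Services-layer trim looks like. This is a minimal sketch with hypothetical names, just to show where the normalisation happens.

// Hypothetical command and service names; the point is only where the Trim happens.
public class ChangeEmailCommand
{
    public int UserId { get; set; }
    public string NewEmail { get; set; }
}

public class EmailService
{
    public void ChangeEmail(ChangeEmailCommand command)
    {
        // By the time we get here, NonTrimmedEmailValidator has passed, so the
        // address is valid once trimmed. Normalise it before it goes any further.
        var email = command.NewEmail == null ? null : command.NewEmail.Trim();

        // ... persist the trimmed address via your repository/ORM of choice.
    }
}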

Parsing a Comma Delimited List in SQL Server Scripts – Cleverly

There’s an external guy we use for really complex SQL. We have a report which has been the ruin of many a developer; after many failures, it became apparent that our own developers could not do it. That’s when you need to get in a proper SQL merchant: someone who thinks in terms of sets, rather than while loops.

But I digress. In one of his scripts, I saw something I found very interesting: a few short lines of code which parse a comma-delimited list (passed in as a parameter). Now, I’ve seen many a function created for this purpose, and they all roughly matched what I would have written myself. But this guy… This is how he wrote it (inserting the result into a table variable called @HospitalIDTable):

declare @HospitalIds char(13) = '1,2,3,4,5,6,9' -- pretend this is a parameter
declare @XML as xml
declare @Delimiter as char(1) = ','

-- Turn '1,2,3' into '<X>1</X><X>2</X><X>3</X>' and cast it to xml.
set @XML = cast(('<X>' + replace(@HospitalIds, @Delimiter, '</X><X>') + '</X>') as xml)

declare @HospitalIDTable table (HospitalId int)

-- Shred the xml: one row per <X> node, each cast to an int.
insert into @HospitalIDTable
select N.value('.', 'INT') as HospitalId
from @XML.nodes('X') as T(N)

If you output the @XML variable, you will see:

<X>1</X><X>2</X><X>3</X><X>4</X><X>5</X><X>6</X><X>9</X>

Now, the final SELECT statement had me stumped at first. I know N as the prefix which makes a string literal unicode, but value??? I need to research that. Back in a tick.

OK. So value() is one of the xml data type methods, and it takes an XQuery expression. I should have known that, because I’ve used it before. Calling it off the N prefix is what threw me, but it turns out that N here has nothing to do with unicode: in @XML.nodes('X') AS T(N), T is the alias for the rowset which nodes() returns, and N is the alias for its single column, which holds one xml node per row. So N.value('.', 'INT') calls value() on each <X> node and casts its text to an INT. It is as cool as heck, and if you do a SELECT on the @HospitalIDTable table variable, you get your list back as a column of data.

Very concise and cool.
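
For us middle-tier devs, here is the same split in C#, just to ground what the SQL is achieving. This is my own sketch, not part of his script.

using System;
using System.Linq;

class SplitDemo
{
    static void Main()
    {
        // The same comma-delimited list, parsed with Split and int.Parse.
        var hospitalIds = "1,2,3,4,5,6,9".Split(',').Select(int.Parse).ToList();

        // Prints the ids as a column of data, mirroring the @HospitalIDTable rows.
        hospitalIds.ForEach(Console.WriteLine);
    }
}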

Experts like this guy make light work of complex SQL. We’ve had several devs look at his scripts, and everyone agrees that their relationship with SQL is not on the same level as this guy’s. We’re front-end/middle-tier devs. What did you expect?!

Why Cat?

Sometimes, when newcomers come to functional programming, they see things which seem strange to them; things which don’t make sense from an object-oriented perspective. I’m going to take an example of this and show you what I mean. Let’s say we have the following JavaScript function cat, which takes multiple arrays and merges them all into one (I am using the Ramda library to assist; anything with the R prefix is a Ramda function):

var cat = function () {
  // arguments is array-like, so Ramda's head and tail work on it directly.
  var head = R.head(arguments);
  if (head)
    // Concatenate the first array with the rest of the arguments.
    return head.concat.apply(head, R.tail(arguments));
  else
    return [];
};

The question may be posed: why bother with that function when JavaScript has concat built in as a method on the Array prototype? The big difference is that concat is called on the object (the instance array), whereas cat operates on whatever is passed to it:

// concat
[2, 1].concat([7, 6, 3, 4], [9, 6, 8]);
// returns [2, 1, 7, 6, 3, 4, 9, 6, 8]

// cat
cat([2, 1], [7, 6, 3, 4], [9, 6, 8])
// also returns [2, 1, 7, 6, 3, 4, 9, 6, 8]

It’s all about composability. Functional programming is a very “composable” paradigm. Functions (like cat) operate on whatever is passed to them as parameters; they do not operate on objects upon which they are methods. They are “free-standing” in nature. Let’s see cat get composed as part of a bigger pipeline. Here, we have a function called splat. splat takes a function and returns a new function which, given a single array, spreads that array out as the individual arguments to the original function (think Function.prototype.apply, wrapped up as a reusable function):

var splat = function (fun) {
	return function(array) {
		return fun.apply(null, array);
	};
};

Now, let’s say we want to use cat to merge some arrays, but the arrays are buried inside a larger array as its elements. Here’s how we could solve that:

var flatCat = splat(cat);
var arraysToMerge = [[7, 6, 3, 4], [2, 1], [77, 2, 45]]; // the arrays to merge are the elements of this larger array
var result = flatCat(arraysToMerge);
// result will be [7, 6, 3, 4, 2, 1, 77, 2, 45];

This works nicely with cat, but you cannot pass Array.prototype.concat into splat. It just won’t work; its object-focused nature is its undoing. splat calls fun.apply(null, array), so concat ends up being invoked with this set to null, and concat needs an actual array as its this. Run it yourself with concat instead of cat to see it bomb out with a TypeError, and you’ll understand why it failed.

So, whilst our first glance at cat seemed to size it up as a pointless re-working of Array.prototype.concat, I hope I’ve demonstrated that by tweaking the concept into a function which stands alone (separate from any “owning” object), we can use it as a composable part in a larger operation. Just one cog in the functional pipeline.

Just remember, cat operates on whatever is passed to it.
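
If it helps to see the same idea outside JavaScript, here is a rough C# analogue. It is my own sketch, not from the original material: a free-standing Cat built on LINQ’s SelectMany, which composes just as happily because it too operates only on what is passed in.

using System;
using System.Linq;

static class Sequences
{
    // Free-standing: operates on whatever is passed to it, not on an owning instance.
    public static int[] Cat(params int[][] arrays)
    {
        return arrays.SelectMany(a => a).ToArray();
    }
}

class Program
{
    static void Main()
    {
        var result = Sequences.Cat(new[] { 2, 1 }, new[] { 7, 6, 3, 4 }, new[] { 9, 6, 8 });
        Console.WriteLine(string.Join(", ", result)); // 2, 1, 7, 6, 3, 4, 9, 6, 8
    }
}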

Consuming a WebAPI Endpoint on Localhost from the Visual Studio Android Emulator

It is highly likely that any Xamarin Android application that you build will be consuming a RESTful API. And when you are developing the application, you don’t want to have to deploy that API to the Internet somewhere. You just want to run it up on localhost. It will make for a much faster development experience, as you will be able to iterate a lot quicker.

There’s not an abundance of help from the community on this and it took me about a day to figure it all out. So, hopefully this post will short-circuit the task for you.

My first port of call was this great post by the formidable Brian Noyes.

But it did not get me the whole way there. It leaves out a few important details, such as the fact that you need to run Visual Studio (“VS”) with elevated privileges to host the API on an IP address other than localhost. Otherwise, you will get the following error message: Unable to connect to web server ‘IIS Express’.

So let’s get to it…

You’ll need to install the Visual Studio Emulator for Android, which you can read about here. Note that it uses Hyper-V. This is great, but it has a consequence: if you have VMware Player on your machine, you can forget about running any VMware VMs, as they are incompatible with Hyper-V. It is one virtualization technology or the other. As the Visual Studio Emulator for Android is so good and fast, I ditched VMware on my system.

So, here are the steps you need to take. I’ll assume you have already created an ASP.NET Core WebAPI project running on the full .NET Framework (not .NET Core).

  1. In this step, you need to configure your WebAPI project to serve up the API on a particular IP address (which is not localhost). Your Xamarin app needs to hit that IP address during development; obviously, once production-ready, it will hit the full URL of your production deployment of the Web API application. (I’m guessing you have already worked out that the localhost of your Android app is the Android device itself, so localhost won’t help you.) To host the Web API on such an IP address, open the applicationhost.config file, which lives in a subfolder called config under the hidden .vs folder in the root directory of your ASP.NET Core solution. Look for the bindings element inside the site element for your site and add a binding for 169.254.80.80; something like:
        <bindings>
           <binding protocol="http" bindingInformation="*:57971:localhost" />					
           <binding protocol="http" bindingInformation="*:57971:169.254.80.80" />
        </bindings>
    

    Note the IP address 169.254.80.80, which I will discuss in the next step. Remember, you have to run Visual Studio with elevated privileges to host the Web API on that IP address. So when you fire up Visual Studio, do it with Run as Administrator.

  2. From Noyes’ post: “With the VS Android Emulator running, click on the double caret >> button in the side toolbar. This opens up the settings UI, one of which is for Network.” Now, look through the Desktop Adapters for one which has the IP address 169.254.80.80. For me, that was Desktop Adapter #4; it may have a different number for you. The mistake I made was that Noyes’ article stressed the use of Desktop Adapter #2, but it is the IP address which matters, not the adapter number. So this step is really just to confirm that one of the Desktop Adapters has the address 169.254.80.80. Once you have confirmed that, you can move to the next step. Oh, and obviously all calls to your local WebAPI in your Xamarin app should be made to that IP address (see the sketch after this list).
  3. The last thing you need to do is open a port in your dev machine’s firewall for the port on which your WebAPI project is listening. So, if you are running it on IIS Express on port 57971 (as in the binding example above), you need to create an inbound firewall rule permitting TCP connections to that port. This makes perfect sense, because the emulator is, to all intents and purposes, a different machine. It’s a standalone virtual machine running in Hyper-V, and it is calling into your dev machine’s localhost services externally.
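
To make this concrete, here is a minimal sketch of a client class in the Xamarin app. The port (57971) comes from the binding example in step 1; the class name and route are hypothetical placeholders for whatever your API exposes.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class PatientsApiClient
{
    // 169.254.80.80 is the desktop adapter address from step 2. Using localhost
    // here would resolve to the emulator itself, which is why we can't use it.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("http://169.254.80.80:57971/")
    };

    public Task<string> GetPatientsJsonAsync()
    {
        // Hypothetical endpoint; substitute your own controller route.
        return Client.GetStringAsync("api/patients");
    }
}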

There aren’t many steps, but the trick is figuring them out! Hopefully this saves you some time!

NDepend v2017

With NDepend v2017 being released recently, I thought I had to give it a test drive. Whilst I’m not an architect, sometimes I have to do a bit of architecture. Especially with home projects. It’s not like I can call Architects-R-Us and have them send an Architect around to produce an architecture for every app idea I have. And so I have a slowly growing interest in this space. This time I thought I’d write more of a tutorial about how to get going with NDepend.

Once you’ve downloaded NDepend and got the exe on your hard drive, you fire it up and point it at the sln file for a Visual Studio solution which you want to profile. It will pick up the projects in your solution and the dialog will look a little like this:

Figure 1 – Analyze Assemblies dialog

Click the Analyse 10 .NET Assemblies button and NDepend will perform its analysis and come back with another dialog:

Figure 2 – Analysis Complete dialog

The other thing it did was to launch an html NDepend Report Summary, because I had the Build Report checkbox ticked in the first dialog. More on that report summary later. So I clicked on the Show NDepend Interactive Graph button:

Figure 3 – Assemblies Graph

A dependency graph is displayed, and it is interactive! I clicked on the UI node (far left) and it lit up the map with the dependent assemblies. More importantly, it showed a couple of yellow windows positioned at the left and foot of the screen. The window at the foot has help topics relevant to the current context. The window on the left is the most interesting: it is scoped to the selected graph node and provides a heap of at-a-glance information about that node and its descendants. Let’s try again and click a node lower in the graph.

Figure 4 – Assemblies Graph with Descendant Selected

Figure 5 – The Dashboard

You can see that the ancestor nodes in the graph are green and the descendant nodes are blue. And again, the pop-up window (this time on the right) contains information scoped to the selected graph node (in this case, HomeLibrary.UiModel, a project which contains my DTOs). Despite a couple of issues in the code, I’m happy with what this graph is telling me. The dependencies are pretty much what I intended, and I’m getting a good 50,000ft view of them. So let’s poke around a bit more. I’ll click on the Dashboard tab at the foot of the main pane. This shows me a plethora of information, and the thing I really like about it is the use of colour. Red – bad. Yellow – warn. Green – pass. At a glance, I can get a feel for the health of the application.

Figure 6 – Quality Gates

Figure 7 – Rule Violation

I can see I have 2 Quality Gate fails. That basically means I cannot release my application, not until that number goes down to 0. Lucky this is just a sample application. So, let’s take a look at what I need to fix! After clicking on the little red “2” link, I found that one of the two quality gates was a collection of 6 rules which I had transgressed.
There was one which immediately interested me, so I investigated further. It was called Attribute class name should be suffixed with ‘Attribute’. Absolutely! I would never want to do otherwise. So, I double-clicked that rule to discover the culprit:

Looks like NoReorder was a class that was generated by JetBrains (probably ReSharper). Being a generated class, I feel like I’m cool with this. And so here is where things get interesting. I’m going to try something here. If you look at the image immediately above, you can see some text which looks like a LINQ query. That’s called CQLinq, and it is what NDepend uses to return the results which show whether rules have been violated or not. As an extensibility point, NDepend allows you to modify these queries and save them, per project. So, I made a quick and small modification to the query, taking advantage of the intellisense which is available in that context:

{
	// <Name>Attribute class name should be suffixed with 'Attribute'</Name>
	warnif count > 0 from t in Application.Types where 
	   t.IsAttributeClass && 
	  !t.NameLike (@"Attribute$") &&
	  t.FullName != "KesselRun.HomeLibrary.UiModel.Annotations.NoReorder"
	select new {
	   t,
	   Debt = 5.ToMinutes().ToDebt(),
	   Severity = Severity.Major
	}
}

I added the clause t.FullName != "KesselRun.HomeLibrary.UiModel.Annotations.NoReorder". When I saved the query, that rule was no longer violated. I’m not sure if that is NDepend’s recommended way of handling such cases, but it is pretty cool that you can have such fine-grained control.

Moving on from Quality Gates, I want to take a look at a new feature in NDepend 2017 called Technical Debt. They’ve taken this well-known concept and applied it in an innovative way. You can see in the Dashboard screenshot above a section called Debt; scoped to the overall solution, my app is receiving a B. But you can also see how this value changes depending on where your context is scoped. If you take a look at the two dependency graph screenshots above which have the yellow information windows, you will notice a Debt Rating item, scoped to whatever I clicked. So what is this?

In a nutshell, they have used some algorithms to quantify the technical debt for each issue and rule violation. And that makes sense: if NDepend has found a problem, presumably there will be a time cost associated with fixing it. Even cooler, they have added an interest component. Every year the problem remains unfixed, the “time to fix” grows. The debt grows with interest! That’s darn smart, because that’s how it works in the real world: the longer a code smell or bad design hangs around, the harder it becomes to fix. Quantifying debt like this is great, because you can use it to buttress the business case for fixing bad code (which, no doubt, you have inherited :)).

As with all things NDepend, the Debt data can be viewed from many different perspectives:

Figure 8 – Technical Debt Representations

The last feature I want to mention is the html report that NDepend can generate when it performs its analysis. The easiest way to get one is to check the Build Report checkbox (under the Analyze button – see Figure 1 at the top of this post for that checkbox):

Figure 9 – Build Report

The report is kinda like the Dashboard, but in html format. Here’s an online example of a report from an analysis that was run on NodaTime. In addition to the sections of the report on the front page (such as Diagrams, Quality Gates Summary and Rules Summary), there is a fly-out menu on the left which gives you even more information, including items such as Dead Code, Hot Spots and even Trend Charts. This report is something you can host on a server, so it can easily be made available to stakeholders and others in your organisation.

As an overall observation, I should also point out that the UX of NDepend is really innovative and clever. With that much information coming at you, it would be very daunting and confusing if the user interface were not good. But the NDepend user interface is excellent; it’s very intuitive. Navigating around is so easy that you only have to use it for half an hour or so before you feel really comfortable with all that data. That is a small amount of time in which to get on top of a code analysis tool like NDepend. As time goes by, you become even more comfortable with it and stumble upon more of its hidden gems. And that is before I have even referred to the documentation on the NDepend site, which is very detailed and complete.

There’s so much more about this tool which could be covered, such as Continuous Integration, Visual Studio Team Services build integration and Code Diffs… I’ve only really scratched the surface.

For people like me, whose role slowly bleeds its way into the architecture space, we need all the help we can get, and tools like NDepend really deliver it. Not only is NDepend a hands-on tool which gives a great overview of the health of your codebase, it also acts as a tutor, giving advice and teaching with problem descriptions and suggestions. ReSharper improved my coding skills; NDepend will improve my design/architecture skills.