Query Results

CQRS – Command – Query Responsibility Segregation

If this is brand new to you, I would encourage reading Dino Esposito’s exposition of it in MSDN magazine – here.  A little history goes a long way!

Just wanted to provide a little commentary today on my take on query results.  When I opt into CQRS classes in my APIs, the only way I’ve done my query result statuses (thus far) is to have an enumeration representing the possible states of the query result.  Some projects might choose to have a unique status type per query (using some fancy generic magic and such), but I never quite found that appealing.  As an example, here’s the typical bare minimum I would need for a controller (or any other interested class) to diagnose a query result:
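A minimal sketch of what I mean (NotYetProcessed and NoResultData are the statuses discussed below; the other member names are illustrative, not prescriptive):

```csharp
// One shared status enum for every query result in the API.
// NotYetProcessed is deliberately the default (value 0) - see below.
public enum QueryResultStatus
{
    NotYetProcessed = 0,  // a handler never set a status: fail fast
    Succeeded,
    NoResultData,         // the query ran fine, but there is nothing to return
    ValidationError
}

// The query result carries the status alongside its payload.
public class QueryResult<T>
{
    public QueryResultStatus Status { get; set; }
    public T Data { get; set; }
}
```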

If a query is particularly long-lived, or was forced to cancel or somehow returned as incomplete from a database, NotYetProcessed is our first line of defense.  It’s also super handy for new handlers: I’ll typically forget to set a status on my first run through new handler code, and inevitably a switch statement in an extension method catches it and immediately alerts me to the mistake.  I rather enjoy the fail-fast behavior of having a default of NotYetProcessed.

More importantly, being able to write extension methods against this enum allows me massive reuse across related projects.  Repos can map their states to it; controllers can return status codes based on it.  It’s fully standalone, matches the spirit of query objects perfectly, and is fully encapsulated.

A comment on the NoResultData status.  I keep this because checking for null or empty isn’t particularly elegant, and in some cases it’s a performance benefit to bypass serialization altogether if we know ahead of time that there’s no concrete result (even though a given routine may prefer to return Enumerable.Empty&lt;T&gt;()).
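To make the reuse concrete, here’s a sketch of the kind of extension method I mean, mapping the status enum described above to HTTP status codes (the enum is abbreviated here, and the specific mapping choices are mine, not gospel):

```csharp
using System;

// Abbreviated form of the status enum described above.
public enum QueryResultStatus { NotYetProcessed, Succeeded, NoResultData, ValidationError }

public static class QueryResultStatusExtensions
{
    // One place to translate query result states into HTTP status codes,
    // reusable by every controller in every related project.
    public static int ToHttpStatusCode(this QueryResultStatus status)
    {
        switch (status)
        {
            case QueryResultStatus.Succeeded:       return 200;
            case QueryResultStatus.NoResultData:    return 204; // bypass serialization entirely
            case QueryResultStatus.ValidationError: return 400;
            case QueryResultStatus.NotYetProcessed:
                // Fail fast - a handler forgot to set the status.
                throw new InvalidOperationException("Query was never processed.");
            default:                                return 500;
        }
    }
}
```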

I’ll be committing some sample usage to my ApiKickstart repo.  Have a look!

Today’s music – threw on some YouTube randomness and ended up here:


Continuous Improvement

When I first started programming, I was under some illusion that one day, once you learned enough about the language, runtime, or design patterns, you could hammer out some brilliant lines of code, they would work almost magically, and you’d never need to touch them again!

Boy, was I wrong.

Turns out, there’s this word.  You all know it.


We have a similar word in Lean/Six Sigma: ‘Kaizen’.  In short, it means continuous improvement, but it has far-reaching implications.  The word has its roots in Japanese culture and business (books written back in the ’80s reference it), representing a philosophy of using data to constantly improve processes.  The major focus here is that it represents incremental improvement, not “big bang” improvement.  In the image below, the blue sections are slow, gradual improvements, intermingled with large jumps representing major advancements (red).

A more traditional visual of Kaizen may look something like this –

The key takeaway is that improvement happens gradually, and constantly.

A key tenet of Agile development (as argued by Dave Thomas of Agile fame) is that Agile software development is merely the process of making a small change, pausing to take a look at whether the changes have had a positive impact, and then course correcting if necessary.  Looks like the chart above doesn’t it?

A major component of this is refactoring.  In my opinion, every time a development team touches a codebase, they should leave the code in a better state than when they found it.  The goal here should be to improve readability, performance, organization, and overall flexibility of the code.  A Six Sigma-minded company pursues these opportunities to reduce waste, and only the worst run companies believe that waste is only financial.

Waste takes many forms – wasted time, effort, and talent, and all 3 of these are extremely relevant in software.

Wasted time results in delayed project deadlines, compressed testing, inter-personal frustrations, and a more rushed workflow in general, and rushing does not work in IT.  Sales people and high powered business folks may work better under tremendous pressure, but trust me, developers don’t.  Coding is a calculated and involved mental activity.

Wasted effort turns a 1 hour task into a 4 hour one.  It means you spent 3 hours copy pasting data from a spreadsheet into a custom JSON structure used by your app, only to find that in the end, it needed to be CSV format, and you could have just done a straight export from Excel.  Additionally, developers love efficiency hacks.  If wasted effort becomes a routine occurrence, they will rapidly become frustrated with their job, and will have reduced output and problem solving ability.  This makes for reduced team morale, and potentially increased turnover – something that HR teams may raise their eyebrow at you for.

Wasted talent is a truly hidden waste, and can silently kill large companies who are not prepared to respond.  A good friend of mine works extensively in finding and retaining the best talent for his business teams, and we’ve discussed this at length.  Hopefully I don’t need to justify this point, but if you think that all developers are worth the exact same amount, you have much more to learn about high quality software development.  Steve Jobs could probably explain this better than I could.

Refactoring took me many years to fully appreciate, and I must admit, I’ve really come to love it.  It can feel like its own art form sometimes.  Now, if you don’t have unit tests as insurance for all of your changes, you should probably go back to basics and get those up and running.  “Deploy with confidence” and all that.

There’s a ton of great material out there on refactoring, and I have yet to pick up my copy of the Fowler book on it.  But I’m keeping it simple: know your code smells, and make a series of small improvements in an effort to reduce waste.  If a design problem presents itself, maybe integrate a design pattern and assess the solution in a code review.  Trust in the process, and your codebase will improve over time.  The next developer will love you for it!


DI, IoC, and Others


Dependency Inversion is all about getting rid of hard dependencies on your concrete types.  When you new up an object, you take responsibility for its initialization and its lifetime, and you take a hard dependency on its concrete implementation.  We want to eliminate the ‘new’ keyword from all around our code.  This can be achieved with IoC Containers, or with Service Locators, which are an older and less featured alternative.  IoC containers exist specifically to manage object lifetime and initialization – they provide you a concrete type based on the registration of an interface, thus ‘inverting’ the normal control flow of new’ing an object and then calling a method.  Instead, you explicitly declare your dependency against an interface in your constructor, then go about normal business calling methods.  The inversion takes place because the actual object is instantiated and initialized in a dedicated object elsewhere in the app, thus following closer to Single Responsibility.

A colleague recently linked an article on dependency injection vs inversion, and how Service Locators compare to constructor injection, IoC, and the like.  It was a decent article, and I’d like to clarify some points which others found confusing.  Since this is all about the ‘D’ in ‘S.O.L.I.D.’, I’d like to start us off at square one to make sure we all start on even footing, especially if you’re new to the subject.


Dependency Inversion.

Before I throw the Wikipedia definition at you, let’s look at some pseudo-code you’d find in a perfectly average controller method.


public IHttpActionResult GetAllPeople()
{
    PeopleManager manager = new PeopleManager();

    var allPeople = manager.GetAllPeople();

    return new HttpOkObjectResult(allPeople);
}

Even though it’s pseudo-code, the point is that you’ll typically find an instantiation of a service, a call into one or two of those service’s methods, then a return.

What’s the issue?

The issue is that you are not the only developer on the team, and the code inside PeopleManager will change.  Maybe some preconfiguration object will be required, maybe code might have to run on initialization in order to cache data inside the manager, perhaps some code will need to be disposed, prompting the use of a using statement.

If implementation code inside PeopleManager changes, will it break your controller code?  If the answer here is yes, we need to revisit our Single Responsibility principle!  Controllers are not for managing logic or excessive parsing and mapping.  Controllers should be the thinnest possible layer between HTTP and your app services.  They exist only to bind HTTP and/or route data to a service request of some sort.  They should keep your services HTTP-ignorant and hand off your request, not manage the consistency of your services.

On the subject of consistency, what happens when you foreach through a new List&lt;T&gt;()?


This isn’t a technical question; it’s more of a philosophical one.  If you foreach through a new List, no exception is thrown.  There aren’t any elements inside, but you don’t get a NullReferenceException either.

The List, along with the overwhelming majority of modern .Net Types, initializes safely, keeping itself consistent for use.

This means that even though the backing array for List has no elements, and was not provided any information in the constructor, it did not null itself, and is rendered safe for use even in a brand new higher level object.

Objects are also responsible for keeping themselves in a consistent state (as much as they can given constraints and reasonable control over their internals).  That is to say the List is kept safe and consistent by exposing methods which express behaviour.  Add(), Contains(), Reverse() – all of these are clear with intent, do not violate SRP, and leave the object in a consistent state.

I say “reasonable control”, because external actors might interact with the List, and Add() a null value.  This might negatively impact external code (attempting to access a null inside the foreach), but the List itself doesn’t blow up if nulls are passed to it.  Methods expose intent and behavior.  I can’t just reach into the List and set its backing array to null.

Code which uses the .Net List takes on the responsibility of initializing it properly, in the correct scope.
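To make that concrete, this compiles and runs without a single exception:

```csharp
using System;
using System.Collections.Generic;

var people = new List<string>();  // no constructor arguments at all

// The backing store is empty but consistent: the loop body simply
// never executes, and no NullReferenceException occurs.
foreach (var person in people)
{
    Console.WriteLine(person);
}

Console.WriteLine(people.Count);  // 0
```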

That’s all well and good, because List is a .Net type, and is a Type which is freely available to all of your architectural layers almost by definition, but extend the logic:

All of my controllers are responsible for initializing app services properly, in the correct scope.

Whoa!  Go ask your product owner what’s required to create a new &lt;business entity here&gt;.  A new customer?  Your product owner will tell you they need to agree that they are 18 or older, not that checkbox with id ‘chkBoxOver18’.checked() == true.  Same goes for your controllers.  They receive some bound data regarding the new customer details.  Should they be concerned whether the Customer Registration service requires a separate logging connection string?  Or that it should be used as a singleton?  Or that it’s unsafe to use it as a singleton?  Or that it has an array which is initialized as null, so if they use Magical Property A, they need to new up an array in Magical Property B? (I actually observed this in production code.)  Your controller’s responsibility is, in loose talk, “I bind some data from HTTP, make sure it’s valid, and pass it off to one of our app services.  The rest is their problem.”  (A higher complexity enterprise app will generally use a request-response object type of pattern, but that’s out of scope for today.)

We’ve made one consistent definition, but the issue arises that in our second case, extending the definition violated SRP of our controllers.

Inversion of Control containers were born to alleviate the issue of instantiating complex objects.  They achieve this through a technique called Dependency Injection – which you can think of as constructor injection, though it’s not technically limited to constructors.

If your controller says, “I don’t care about telling the PeopleManager how to do its job.  My job is to let them know that I have the data they require to add a person.”

Here is how that is expressed:

public class PeopleController
{
    private readonly PeopleManager manager;

    public PeopleController(PeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

We move PeopleManager to the constructor, and the controller now explicitly exposes its dependencies.  Can a .Net FileStream exist without being given a file path or some sort of file handle?  No!

9 constructors – all require parameters.

Likewise, your ‘PeopleController’ cannot exist without being given a reference to a PeopleManager to work against.  So where does this magical constructor parameter come from?

IoC containers handle object lifetime, initialization, and resolution.  In .Net Core, this is handled in Startup.cs.  Various registrations are made so that whenever an object asks for a Type, the IoC container consults an internal directory of which interfaces are registered to which implementations.
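In sketch form, those registrations look something like this in ConfigureServices (the concrete type names here are stand-ins for whatever your app actually registers):

```csharp
// In Startup.cs - requires Microsoft.Extensions.DependencyInjection.
public void ConfigureServices(IServiceCollection services)
{
    // A fresh PeopleManager is created every time an IPeopleManager is resolved.
    services.AddTransient<IPeopleManager, PeopleManager>();

    // One shared instance for the life of the app - safe once its
    // startup configuration has run.
    services.AddSingleton<IDbFileStorage, DbFileStorage>();
}
```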


Transient means the resolved object is instantiated fresh for each individual resolution.  You can see above that IDbFileStorage only requires some startup configuration code, but is then safe to hold in memory as a singleton.

The root of the Dependency Inversion Principle lies in the fact that Types should rely on abstractions, not concrete implementations.

This means that aside from all of this IoC stuff, the really good part of this principle only requires us to add a single letter!


public class PeopleController
{
    private readonly IPeopleManager manager;

    public PeopleController(IPeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

There!  Instead of PeopleManager, we code against the abstraction – IPeopleManager.  This has a huge impact.  IoC is just one of the most common ways to achieve this (and it’s a soft requirement in .Net Core).  Tomorrow, when an additional logging configuration object is required in the PeopleManager constructor, you don’t have to shotgun-surgery all of your controller methods.  The change is confined to one place, and unforeseen breaks are easily fixed, without unexpected consequences in code which utilizes your manager.

Service Locators do something similar, but without constructor injection.  Conceptually, all they really do is expose a static reference which will give you the Type you ask for. I would submit that constructor injection is amazingly useful in writing transparent, expressive code, especially as workflows begin to traverse different services, or as services require other services and so on.
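As a bare-bones sketch of the difference (ServiceLocator here is a hypothetical type I’m inventing for illustration, not a real framework class):

```csharp
using System;
using System.Collections.Generic;

public interface IPeopleManager { IEnumerable<string> GetAllPeople(); }

// A hypothetical locator: nothing more than a static registry lookup.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Registry = new Dictionary<Type, object>();
    public static void Register<T>(T instance) { Registry[typeof(T)] = instance; }
    public static T Resolve<T>() { return (T)Registry[typeof(T)]; }
}

public class PeopleController
{
    // The dependency is buried in the method body - nothing in the
    // constructor tells callers (or tests) that IPeopleManager is required.
    public IEnumerable<string> GetAllPeople()
    {
        return ServiceLocator.Resolve<IPeopleManager>().GetAllPeople();
    }
}
```

Compare with the constructor-injected controller above, which declares its dependency right in its signature.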

In the end, we’ve reduced the amount of code in our controller, relocated code to a place which is closer to its responsibility and intent, and made our end result significantly easier to read and refactor – and not to mention, test!  All of these contribute to a more Agile codebase.

What I’m listening to right now:


There is bad code, and then there is…

…this gem:

public static void Do(ICollection&lt;Person&gt; people, string evaluationLogic)
{
    for (int i = 0; i &lt; people.ToList().Count; i++)
    {
        if (evaluationLogic == null) break;
        if (evaluationLogic == string.Empty) people.ToList()[i].Name = "Bob";
        if (evaluationLogic.Length &gt;= 0) people.ToList()[i].Name = "Jim";
    }
}

(mind the tab spacing – I’m still working out the kinks in some code formatters)
Pseudo-coded from actual production code.  In a site that was written this side of the 2000s.  I will follow up this post with my commentary on hiring practices and some ways to improve getting good coding talent on your team, but suffice it to say – whoever wrote this should not be writing production code.

Let me reiterate this – I actually found this live in production.

Where to even begin?  The Do() method is obviously a shell – but the point is you are given an ICollection – which has no indexer.  Some developers reach straight for a ToList() conversion, while some (like me) try to stick as closely as possible to the least specific collection interface – which ends up being IEnumerable the vast majority of the time.  The advantage of coding against IEnumerable is that it forces you to think of the target collection as an abstraction, allowing you to use the most functional and/or expressive LINQ statements to encapsulate your intent, without getting buried in nested for-loops.  Let’s call out the specifics:

1. Repeated ToList() conversion.


new List&lt;Person&gt;();

is valid C#.  It builds and runs.  It also does nothing for you as a developer.


var content = new List&lt;Person&gt;();

instructs the runtime to allocate a new collection object on the heap and store a reference to it in your given variable – here, ‘content’.  The first snippet simply allocates an object but never captures a reference to it – so the GC will pick it up whenever it feels like.

When you feed a ToList() conversion into that for-loop’s condition, you’re doing the exact same thing on every iteration – allocating a brand new list onto the heap, reading one value for a brief instant, then immediately releasing the reference.  The conversion is wasteful, and needlessly stresses out the GC.

2. For vs ForEach.

The posted code seemed to use the poor ToList() conversion because of the need for the ‘i’ indexer – but when working with the collection interfaces, a foreach more than likely suffices.  The only reason not to use one is if you need to insert a new object or reassign the element reference itself – you can’t do that through the foreach iteration variable.

3. Multiple redundant evaluations.

Notice the three if() checks?  Redundant evaluations.  The code I assessed had mutually exclusive conditions, but was constantly rechecking them as part of normal loop flow.  An else or a switch is everyday practice.

4. Break.

Code smell.  Outside of something algorithmic in nature (say, a performance-optimized extension method, or just a highly performant loop), I find break awkward in everyday code.  If you’re breaking out of a for loop under a condition, then most of the time you can easily migrate to a LINQ statement that neatly encapsulates your intent.  Please don’t make me figure out your 10 lines of micro-optimized looping code (read: premature optimization), I have other things to do with my time.
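For what it’s worth, here’s roughly how I’d rewrite the gem above – assuming, as the original production code implied, that the conditions were meant to be mutually exclusive:

```csharp
using System.Collections.Generic;

public class Person { public string Name { get; set; } }

public static class PeopleExtensions
{
    public static void Do(IEnumerable<Person> people, string evaluationLogic)
    {
        // Guard clause replaces the per-iteration null check and the break.
        if (evaluationLogic == null) return;

        // Mutually exclusive conditions, evaluated once instead of per element.
        var newName = evaluationLogic.Length == 0 ? "Bob" : "Jim";

        // foreach against the abstraction: no ToList(), no indexer, no GC churn.
        foreach (var person in people)
        {
            person.Name = newName;
        }
    }
}
```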



Thoughts on ASP.Net Core & DI

Microsoft is making big changes to the way they are building and shipping software.

Personally, I think it’s a huge win for developers.  It’s only a matter of time until .Net Core targets ARM processors and all of my homemade applications talk to each other via distributed messaging and scheduling on a Raspberry Pi network.

But the paradigm shift in project structure and style leaves some polarized in their opinion.  The developers who understand the power of Open Source are generally immediately on board.  The developers who still think the best answer to any problem is a stored procedure simply can’t comprehend why you would do anything in .Net Core when the existing .Net Framework “works”.

“Well, it works!”

Ever heard that?

That statement eventually gives birth to the infamous “Well, it works on MY machine…”

Let me tell you something about my philosophy on development.  What really fuels why I do what I do.

The art of software development, in my opinion, is being able to design an evolving solution which adapts to an evolving problem.  I might even give that as a definition to “Agile Code”, but I’ll leave that for another discussion.  I don’t mean an evolving solution as in having to touch the code every second of every day in order to meet new requirements – I mean touching it a handful of times, and the answer to every new requirement is “sure, no problem” as opposed to “Crap – that code is a nightmare to modify for even the smallest tweaks”.

.Net Core facilitates this in so many ways – between amped-up dependency management, DI as a requirement, and a middleware-styled approach.  Developers have known for decades this is how to build software, and for years have tried to shoehorn this paradigm into the .Net Framework via OWIN and a myriad of IoC containers.  Greg Young, an architect whom I have the utmost respect for, has spoken out against DI containers (specifically the proposed benefit of hot-swapping implementations at runtime), but after being confronted with some very challenging requirements, I honestly can’t build an app nowadays without one.  Even for simple apps I make myself – I decide to switch up implementations and benchmark them against each other, but I don’t want to delete code that I’ve written on my own time for fear of needing it at a later time (no TFS at home… yet).

The most important aspect of .Net Core, in my opinion, is it forces you to think in terms of abstractions.

It’s disheartening when I’m working with other developers who:

A) Claim to be C# developers and can’t define “coding against an abstraction”

B) Don’t understand how to properly separate the concerns of code

C) Believe that offloading business logic to the database is a good decision in the name of performance

I have to catch myself here.  It’s easy to slip into a cynical view of others and begin to harshly criticize their talent as I put on my headphones for a three hour refactoring session.  That’s not me.  I believe anyone can code.  I believe anyone can be a good coder.  Good developers, and high performing people in general, are good thinkers.  They know what they don’t know.  They never settle for a single best solution, they pragmatically select the best tool for the job, critically assessing their problem and any potential solutions.


This is how I mentor the younger developers that drop their jaws when they see Startup.cs for the first time:

Ignore this entire file.  You need to know two things.  You configure services and you use middleware (thanks Daniel Roth!).

What is it that I need this package/code to do?

Pick the best tool for the problem, and drop it specifically where it belongs.  Concretely, this means thinking about your problem space.  There’s a 99.9% chance you are not the first person to encounter this problem.  What is the most elegant and reusable solution?

This gets the developer thinking about scoping and narrowing their development focus.  Too often they jump immediately to code that actually takes input A and outputs B – it’s our nature.  Usually, as I pester them with questions, they end up verbalizing the abstraction without even realizing it, and half the time the words they use best describe the interface!

Dev: “Well, I’m really only using this code to provide data to my view model.”

Me: “Right – you didn’t even say web service in that sentence.”

Dev: “So it’s a ‘Data Provider’, but it will go to the web.  So it’s a Web Data Provider.”

Me: “For local debugging though, you need to be able to provide some hardcoded values, and it shouldn’t impact or modify any other code.”

(Blank stare, moderate pause)

Dev: “…Should that be a hard coded data provider?”

Boom.  My job here is done.
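In code, that conversation lands on something like this (all of these names are invented for illustration):

```csharp
using System.Collections.Generic;

public interface IDataProvider
{
    IEnumerable<string> GetItems();
}

// Production implementation: calls out to the web service.
public class WebDataProvider : IDataProvider
{
    public IEnumerable<string> GetItems()
    {
        // Real HTTP call omitted in this sketch.
        throw new System.NotImplementedException();
    }
}

// Local debugging implementation: hardcoded values, and not a single
// line of consuming code has to change when we swap it in.
public class HardCodedDataProvider : IDataProvider
{
    public IEnumerable<string> GetItems()
    {
        return new[] { "Alice", "Bob" };
    }
}
```

Swapping between the two is then a one-line change in the container registration – the view model only ever sees IDataProvider.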

For anyone used to working with repositories, DI, and OWIN/MVC, this stuff is child’s play.  The junior developer (and indeed, the fully mediocre developer) needs a hand grasping these concepts.  I find that guiding them through a discussion which allows them to discover the solution presents the most benefit.  Simply telling them what to do and how to do it trains a monkey, not a problem solver.  Anyone can write 500 lines of code in Page_Load.  They need to understand the ‘why’.  Personally, teaching on the job is one of my favorite things to do – there’s simply no substitute for the happiness that hits a developer when they realize the power that a new technique has awarded them.

More on this at a later point, but for now, understand the danger that you take on by using that new() keyword.  You may be stuck with that code block for a long, long time.


On to today’s music.  I found some more somber sounding folk-pop stuff.  The EP by Lewis Del Mar was a really great find!  (Minor language disclaimer).

