My team and I are in the midst of a transition into Agile-scrum.  It has been advertised to the rest of the company, and our executive management is pushing for it.

Thus far, we haven’t received any official process or reorganization to align with the vision, so we don’t have product owners, demos, retros, or discovery meetings.  We still do old-school waterfall requirements documents (which, by the way, everyone wants to change right now, while we’re firming up testing).  We joke that we’re in this purgatory of “water-scrum-fall”.

“That team has scrum meetings, so they must be Agile, right?”

I thought “water-scrum-fall” was a word we simply made up.  Turns out, it’s a real thing!

I recently watched a dev talk about scaling Agile and how to properly integrate Agile-scrum in the enterprise.  Sure enough, at 2:10:

It’s a real thing!!!

Wow.  To see our current project structure in this diagram blew me away.  I wasn’t crazy!  As a younger dev, I once got reprimanded for asking about requirements documents.  I now understand.  BDUF (big design up front) is basically a failure from the outset.  Without a product owner to act as the sole gate for prioritizing work, everyone and their mother has input on what the feature “should look like”.

What this results in is a never-ending cycle of “sprints” (I use the term loosely in this context), with the work of sprint 5 modifying or undoing the work of sprint 2, and so on.  Without a feature-driven delivery date and a rapid push to production, water-scrum-fall is less productive than either scrum or waterfall.  If I had to pick a camp, I could simply decide never to deviate from requirements, and save myself the time of a scrum meeting (which ends up evolving into just another one-hour meeting where 12 people show up).  Or perhaps I would lock stakeholders into a meeting room and refuse to let them out until they’ve prioritized a backlog for the team to accomplish a deliverable feature.

Being in the middle, as the diagram shows, is actually less productive than going all in on either methodology.

Jez mentions value stream mapping – a critical skill that any Six Sigma belt will have, but that businesses outside of manufacturing generally don’t utilize.  Sure, they can define it, but they miss the point.  You need to understand who your customers are, internal and external, and what their inputs are.  These will also differ at the team level, so it’s important for all members to understand these concepts, just as they would be expected to understand a user story.

As he says, the lead time on a certain process may be perceived as 9 months, and people have become accustomed to ignoring it.  If that’s where 80% of the delay is, focus all of your energy on cutting the red tape and getting straight to value!  When you have centralized project management setting software deadlines from an armchair, you will absolutely get incorrect estimates and budget overruns.  Agile software is improved bit by bit, constantly re-evaluated and measured.  This is also akin to how Six Sigma belts seek to achieve change in their organizations.  They never aim for a massive change in one fell swoop.  They expect a domino effect of smaller changes.  Instinctively, they understand that impactful change happens over time, incrementally.  One day you realize, “I don’t even need to do <task/feature/process> because of all the smaller improvements we’ve made to <other tasks/features/processes>.”

Where then, do you begin with your improvements? The answer: low hanging fruit.

You may have heard of the 80/20 rule.  This is a staple in Six Sigma.  Understand that 80% of X may be caused by 20% of Y, or that 20% of your problems are impacting 80% of your performance hits.  It changes the way that you use data to drive action and continuous improvement.  I wouldn’t micro-optimize code without justification for a real benefit, so why would you think that having scrum meetings and calling your project phases ‘sprints’ will magically speed up development?

I hope it goes without saying that with my lean background and love of automated testing, I absolutely prefer an Agile scrum methodology, with a heavy focus on automated testability against a robust backend.  If you’re a team lead or scrum master who has some power and say over how you execute your agile processes, definitely give that video a watch.

URNs & URIs, oh my!

Being more of a pure C# backend kind of guy, I’ve had to integrate with a third-party SAML system recently.  I’d never worked with SAML before, so I dug into the spec and did the standard Googling.

As is not uncommon, I ended up on a wild goose chase of hyperlinks, after trying to find out what this meant:


It’s tucked inside standard SAML nodes (SAML itself is built on top of XML).  I figured I’d share with you what this whole “urn” moniker is all about.

All developers know what a URL is.  Most would call it a hyperlink or “web link” of some sort.  You’d be half right.  It stands for Uniform Resource Locator – and technically, it’s a subset of a URI – Uniform Resource Identifier.  A URI fits the following pattern (see RFC 3986):

scheme ":" ["//" authority] path ["?" query] ["#" fragment]

(grabbed from Wikipedia)

So our standardized definition would be “a compact sequence of characters that identifies an abstract or physical resource.”  The “physical or abstract” part is important – the target resource could be a physical file, or could return a representation of a logical piece of data.  Remember, to good .Net developers, XML and JSON are merely the representations of our objects – they are the means, not the ends.

So, a URL is a type of URI – specifically a URI that is located over a network.  A URL could be any of these:

https://example.com:8080/over/there?name=ferret#nose
ftp://ftp.example.org/pub/files/readme.txt
ldap://ldap.example.net/c=GB?objectClass?one
The first section is our network protocol (referred to as the scheme), followed by host, port, path, query, and fragment.

Being a visual person, the image made much more sense than delving through a spec somewhere trying to make sense of all the terminology.


I’ve always viewed the Uri class in .Net as not adding much value – but after better understanding the URI structure, I have a whole new appreciation for it!  Take a look at these:

Uri ftpUrl = new Uri("ftp://ftp.funet.fi/pub/standards/RFC/rfc959.txt");
Uri ldapUrl = new Uri("ldap://[2001:db8::7]/c=GB?objectClass?one");

Yields some pretty useful information automatically, without tedious parsing!
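For instance, here’s a minimal console sketch of what those two instances expose (the values in the comments follow System.Uri’s documented behavior, including its knowledge of well-known default ports):

```csharp
using System;

class UriDemo
{
    static void Main()
    {
        var ftpUrl = new Uri("ftp://ftp.funet.fi/pub/standards/RFC/rfc959.txt");

        Console.WriteLine(ftpUrl.Scheme);       // "ftp"
        Console.WriteLine(ftpUrl.Host);         // "ftp.funet.fi"
        Console.WriteLine(ftpUrl.Port);         // 21 – the default ftp port, inferred for us
        Console.WriteLine(ftpUrl.AbsolutePath); // "/pub/standards/RFC/rfc959.txt"

        var ldapUrl = new Uri("ldap://[2001:db8::7]/c=GB?objectClass?one");

        Console.WriteLine(ldapUrl.Scheme);      // "ldap"
        Console.WriteLine(ldapUrl.Query);       // the query portion, leading '?' included
    }
}
```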


A URL is a type of URI, and, as it turns out, a URN is a type of URI!  A Uniform Resource Name is still a type of identifier – one drawn from a hierarchical namespace.

For example: urn:isbn:0451450523

A URN is still supposed to be a unique identifier, so the above line implies that the ISBN refers to one book, and one book only.  So when it came time to identify the SAML protocol within a SAML message itself, it made sense to use an accepted standard instead of something custom.  OASIS can manage its own naming schema and conventions under the “urn:oasis” namespace.
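Incidentally, the .Net Uri class from earlier handles URNs too – a small sketch (the AbsolutePath comment is my reading of how Uri treats non-hierarchical schemes):

```csharp
using System;

class UrnDemo
{
    static void Main()
    {
        var isbnUrn = new Uri("urn:isbn:0451450523");

        Console.WriteLine(isbnUrn.IsAbsoluteUri); // True – "urn" is a perfectly valid scheme
        Console.WriteLine(isbnUrn.Scheme);        // "urn"
        Console.WriteLine(isbnUrn.AbsolutePath);  // the namespace-specific part: "isbn:0451450523"
    }
}
```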

As an aside, if you’re in web development, don’t go using Uri for parsing and doing tricky things with query strings. It wasn’t designed for that, and you’d be using the wrong tool for the job.

To get a little more “spec-y” – a resolution request can be made for any URN (which these examples are) – and a URN resolver is responsible for taking that URN and translating it into a URL, the resolvable location of that identifier.  This is somewhat abstract – but an analogous decoupling happens every day with URLs: a DNS server takes the host name in “https://app.pluralsight.com/library/” and resolves it to a physical network address that can serve up the resource.  Subsequently, the web server accepts that URL and serves up the resource representation.  This is, in effect, a form of decoupling, which I find rather interesting.

It seems as if the deeper you get into web development, the farther back in time you end up.  Indeed, this holds true for our industry as a whole.  REST APIs are my current specialty, and seeing so much similarity in terminology (“identifiers of resources”, “actions upon those resources”, “resource representations”) fascinated me when I first saw this.  It served to reinforce my commitment to the future of REST – as opposed to something new, custom, and non-uniform.  REST is how the internet works – it makes sense to develop web services which minimize the deviation from natural HTTP constraints (ASP.Net session state is one such example – a phenomenal idea when it came out – now pretty much everyone regrets it).  I’ll be blogging about REST a lot, but if you need another hyperlink to continue on your stream of curiosity, check out Roy Fielding’s work – any REST tutorial will reference him.

ASP.Net Core & CQRS

This one is geared toward mid-level developers who have been doing MVC for a while, but find themselves in these situations:

1) They haven’t gone back to revisit their MVC code in quite some time
2) They are using a tutorial or a beginner’s project and now need to move the design to a production system
3) They want to take it to the next level because their application logic and MVC skills have grown.

My history with MVC goes back a few years, and I’ve seen a few ways to build (and not build) web apps with it.


Disclaimer: Don’t copy-paste all 3,000 lines of your web forms code into your controller.  Just don’t.  That’s called technical debt, and MVC isn’t a magic bullet for it!
I’ve been a part of everything from simple “data in, data out” pages, to giant migrations from web forms, to classification and migration of stored procedures to a web API backend (which is where my happy place is now).
I want to talk about two critical concepts that, if ignored, will still allow you to build your web apps, but will cripple your ability to scale, refactor, and keep clean logic.

Don’t let your web app turn into “that system” that everyone hates working with and has to constantly fight to get changes in.

I’m calling these out because about a year ago I found (and subsequently proved out) a critical, but easy to miss, piece of MVC, and it’s resulted in some amazing flexibility and organization of logic, even in an enterprise grade financial services backend.

Sharing of ViewModels – Don’t Do It.

This is a classic case of developers being brainwashed into thinking that all code duplication is evil and should be avoided at all costs.  Now, I agree that code duplication is a code smell, and we should seek to minimize or eliminate it, as it pollutes logic and increases the number of touch points when logic or data changes.  But here’s an observation –

MVC ViewModels are dead classes with no logic.

The ‘S’ in SOLID. Can we define their responsibility?
A number of definitions might exist out there, but let’s go with something to the effect of “To facilitate binding of HTTP route data to a strongly typed object to pass to our controller.”

Possible changes that would break or violate this responsibility:

  • If the protocol changed from HTTP to something else
  • If the dependencies of our controller relied too heavily on the ViewModel (Leaky abstraction)
  • If the routing engine bound data differently
  • If anything other than controllers consumed these ViewModels

Amount of logic – 0.
Amount of intelligent methods – 0.
In short, they are dumb property bags. Dumb property bags make very flexible data binding objects.

To this end, and to facilitate maximum refactorability, a ViewModel should not be shared across controllers or their Actions – I don’t care how simple your ‘AccountsViewModel’ is.
For any non-trivial app, requirements will change – which means your data will change.  This change may be focused, or it may be broad, and your accounts VM will have its properties refactored in some workflows (controller actions) and not others.  Do yourself a huge favor – you’re no longer in the “Intro to MVC” course – split those ViewModels up.  If you have trouble finding a name for them, then you need to narrow the logical responsibility of your controller actions.
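To make that concrete, here’s a before-and-after sketch (all class and property names here are hypothetical):

```csharp
// Before: one ViewModel shared by every Accounts action.
// Adding a field for one screen forces it onto all of them.
public class AccountsViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Balance { get; set; }
    public string ClosureReason { get; set; } // only the Close workflow cares
}

// After: one dumb property bag per controller action.
// Each can be refactored without touching the others.
public class AccountSummaryViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

public class CloseAccountViewModel
{
    public int Id { get; set; }
    public string ClosureReason { get; set; }
}
```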

You’d be surprised at how hard a time I have getting other developers to do this.  They either don’t understand the justification, or counter me with the code-duplication argument.
There’s really no legitimate counter-argument to this technique.  Just do it – and you’ll thank me when you’re refactoring six months from now.

Having split those ViewModels, you’ve allowed yourself some flexibility – but now we’re going to capitalize on it.

If you haven’t heard of CQRS (or its predecessor, CQS), go Google it.  Greg Young and Jimmy Bogard are your guys, and I wouldn’t want to cover their material less efficiently or insightfully than they could.  Despite its simplicity, CQRS is a profound concept that can be the foundation for almost any data-centric .Net app.  Honestly, even in my integration systems (whose role is solely to integrate, not to own a data store), the entire design is a facade over a vendor’s data service – and there is still plenty of opportunity for CQRS to play a role.  I do have to make some compromises in the design (some more serious than others), since the vendor system did not respect CQRS – I can only go so far when their most critical services aren’t idempotent.  Some of these legacy systems were designed by people who are only able to think in XML, and don’t understand that XML is merely the representation of an object.

A gross generalization of CQRS can be stated simply as separating your read and write models.

Well, that works nicely since we just split off our ViewModels!  So now we get to take this one step further.

The usual convention is to name these after their entity plus a suffix – so you’ll have things like CustomerViewModel, TransactionViewModel, and so forth.

This doesn’t describe whether the model is being used for a logical read or write (the Q and the C), so, following CQRS convention, our read models are named as queries, and the view models that hold data to instruct a mutation of state in the system are named as commands.

So, where before, you had to shoehorn properties and member conversions in your AccountViewModel, you’ll now have things like:
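Something like the following (GetAccountByIdQuery comes up again below; the other class names are hypothetical illustrations):

```csharp
using System;

// Queries – read models, named for the question they ask (the 'Q' in CQRS)
public class GetAccountByIdQuery
{
    public int Id { get; set; }
}

public class SearchTransactionsQuery
{
    public int AccountId { get; set; }
    public DateTime? From { get; set; }
    public DateTime? To { get; set; }
}

// Commands – write models, named for the state change they request (the 'C')
public class CloseAccountCommand
{
    public int Id { get; set; }
    public string Reason { get; set; }
}
```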

Do you now see how easy it is to refactor these if any one of these workflows change? Plus, even a new developer will have a clear understanding of why these classes exist.

Requirements Change.

Today, all the web app needs is an Id to pass in to the GetAccountByIdQuery, but – at any point in the future, the web app might also have to pass in the account type, or a new feature may be added where a limited amount of information could be shown for querying an account which is already closed (which could require special transformation of the Id).

If you’ve been working with Web APIs for even a short period of time, you should be pretty comfortable with your verbs.  At a high level –
GET – cacheable, idempotent, used to retrieve data
POST – not cacheable or idempotent, used to send data to a server for processing

See how easily these ViewModels map to their respective verbs?

I have some apps that have such a thin controller layer – I actually fully substituted the Command and Query classes as my ViewModels!  There is ZERO model binding logic in these controllers, and my CQRS handlers consume them directly.  This is descriptive of a system which worked well with that design, not prescriptive of you to do it for your web app.  In fact, I might suggest most traditional MVC apps would have to do some amount of mapping from their ViewModel to a more robust query object (which may contain some methods to keep its state consistent), if they are all in on Query and Command handlers.
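Here’s a minimal sketch of that shape – the IQueryHandler interface and the in-memory “store” are hypothetical stand-ins, not any particular framework’s API:

```csharp
using System;
using System.Collections.Generic;

// The query doubles as the ViewModel: a dumb property bag the
// routing engine can bind to, and the handler can consume directly.
public class GetAccountByIdQuery
{
    public int Id { get; set; }
}

public class AccountDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

public class GetAccountByIdHandler : IQueryHandler<GetAccountByIdQuery, AccountDto>
{
    // Stand-in for a real data store or repository.
    private readonly Dictionary<int, AccountDto> _store = new Dictionary<int, AccountDto>
    {
        [42] = new AccountDto { Id = 42, Name = "Checking" }
    };

    public AccountDto Handle(GetAccountByIdQuery query) => _store[query.Id];
}

class Program
{
    static void Main()
    {
        // A controller action would do nothing but forward the bound query:
        IQueryHandler<GetAccountByIdQuery, AccountDto> handler = new GetAccountByIdHandler();
        var account = handler.Handle(new GetAccountByIdQuery { Id = 42 });
        Console.WriteLine(account.Name); // "Checking"
    }
}
```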

When I initially started doing this, it almost felt like cheating – but it makes my massive API very easy to reason about and understand.

If you’re working on a larger, more RESTful service, you’ll appreciate these even more, as they map to the other verbs just as well (I’ve used these with PUT, DELETE, and even the obscure OPTIONS verb).

This should be an easy refactor in your app, and it will help set you up for the next level of CQRS – tailoring your services and repositories around a polyglot design – meaning your read services could defer a read action to an in-memory Redis cache, or a write service could coordinate multiple actions if you have things like pub-sub or messaging frameworks, with all sorts of logic in between.

Happy refactoring!

Copyright Andrew 2017