ASP.Net Core & CQRS

This one is geared toward mid-level developers who have been doing MVC for a while, but find themselves in one of these situations:

1) They haven't gone back to revisit their MVC code in quite some time
2) They are using a tutorial or a beginner's project and now need to move the design to a production system
3) They want to take it to the next level because their application logic and MVC skills have grown

My history with MVC goes back a few years, and I’ve seen a few ways to build (and not build) web apps with it.

 

Disclaimer: Don't copy-paste all 3,000 lines of your Web Forms code into your controller. Just don't. That's called technical debt, and MVC isn't a magic bullet for it!
I've worked on everything from simple "data in, data out" pages, to giant migrations from Web Forms, to the classification and migration of stored procedures to a web API backend (which is where my happy place is now).
I want to talk about two critical concepts that, if ignored, will still allow you to build your web apps, but will cripple your ability to scale, refactor, and keep your logic clean.

Don’t let your web app turn into “that system” that everyone hates working with and has to constantly fight to get changes in.

I'm calling these out because about a year ago I found (and subsequently proved out) a critical but easy-to-miss piece of MVC, and it's resulted in some amazing flexibility and organization of logic, even in an enterprise-grade financial services backend.

Sharing of ViewModels – Don’t Do It.

This is a classic case of developers being brainwashed into thinking that all code duplication is evil and should be avoided at all costs.
Now, I agree that code duplication is a code smell, and we should seek to minimize or eliminate it, as it pollutes logic and increases the number of touch points when logic or data changes. But here's an observation –

MVC ViewModels are dead classes with no logic.

The ‘S’ in SOLID. Can we define their responsibility?
A number of definitions might exist out there, but let's go with something to the effect of: "To facilitate binding of HTTP route data to a strongly typed object to pass to our controller."

Possible changes that would break or violate this responsibility:

  • If the protocol changed from HTTP to something else
  • If the dependencies of our controller relied too heavily on the ViewModel (Leaky abstraction)
  • If the routing engine bound data differently
  • If anything other than controllers consumed these ViewModels

Amount of logic – 0.
Amount of intelligent methods – 0.
In short, they are dumb property bags. Dumb property bags make very flexible data binding objects.

To this end, and to facilitate maximum refactorability, a ViewModel should not be shared across controllers or their actions – I don't care how simple your 'AccountsViewModel' is.
For any non-trivial app, requirements will change – which means your data will change. This change may be focused or it may be broad, and your accounts VM will have its properties refactored in some workflows (controller actions) and not others. Do yourself a huge favor – you're no longer in the "Intro to MVC" course – split those ViewModels up. If you have trouble finding a name for them, you need to narrow the logical responsibility of your controller actions.
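To make that concrete, here's a minimal sketch of the split – the class and property names are hypothetical, not taken from any real system:

```csharp
// Before: one shared ViewModel reused by every account-related action.
public class AccountsViewModel
{
    public int Id { get; set; }
    public string OwnerName { get; set; }
    public string AccountType { get; set; }
    public decimal Balance { get; set; }
}

// After: one dumb property bag per controller action. Each one can now be
// refactored independently when a single workflow's requirements change.
public class CreateAccountViewModel
{
    public string OwnerName { get; set; }
    public string AccountType { get; set; }
}

public class AccountDetailsViewModel
{
    public int Id { get; set; }
    public string OwnerName { get; set; }
    public decimal Balance { get; set; }
}
```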

You'd be surprised how hard a time I have getting other developers to do this. They either don't understand the justification, or counter with the code duplication argument.
There's really no legitimate counterargument to this technique. Just do it – and you'll thank me when you're refactoring six months from now.

Having split those ViewModels, you've allowed yourself some flexibility – and now we're going to capitalize on it.

If you haven't heard of CQRS (or its predecessor, CQS), go Google it. Greg Young and Jimmy Bogard are your guys, and I wouldn't want to cover their material less efficiently or insightfully than they could. Despite its simplicity, CQRS is a profound concept that can be the foundation for almost any data-centric .Net app. Honestly, even in my integration systems (whose role is solely to integrate and not to own a data store), where the entire design is a facade over a vendor's data service, there is still plenty of opportunity for CQRS to play a role – although I have to make some compromises in the design (some more serious than others), since the vendor system did not respect CQRS and I can only go so far when their most critical services aren't idempotent. Some of these legacy systems were designed by people who are only able to think in XML, and don't understand that XML is merely the representation of an object.

A gross generalization of CQRS can be stated simply as separating your read and write models.

Well, that works nicely since we just split off our ViewModels!  So now we get to take this one step further.

The usual convention is to name these after their entity with a 'ViewModel' suffix – so you'll have things like CustomerViewModel, TransactionViewModel, and so forth.

That name doesn't describe whether the model is being used for a logical read or write (the Q and C in CQRS). So, following CQRS convention, our read models are named as queries, and the view models that hold data to instruct a mutation of state in the system are named as commands.

So, where before you had to shoehorn properties and member conversions into your AccountsViewModel, you'll now have things like:
CreateNewAccountCommand
GetAllAccountsQuery
GetAccountByIdQuery
CloseAccountCommand
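As a sketch, those classes might look like this – the property members are my own assumptions, only the class names come from the list above:

```csharp
// Queries: describe a read, named for the question being asked.
public class GetAllAccountsQuery
{
    // Intentionally empty today – paging or filter properties can be added
    // later without touching any other workflow.
}

public class GetAccountByIdQuery
{
    public int Id { get; set; }
}

// Commands: describe an intent to mutate state.
public class CreateNewAccountCommand
{
    public string OwnerName { get; set; }
    public string AccountType { get; set; }
}

public class CloseAccountCommand
{
    public int Id { get; set; }
    public string Reason { get; set; }
}
```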

Do you now see how easy it is to refactor these if any one of these workflows changes? Plus, even a new developer will have a clear understanding of why these classes exist.


Requirements Change.

Today, all the web app needs to pass in to the GetAccountByIdQuery is an Id – but at any point in the future, it might also have to pass in the account type, or a new feature may be added where a limited amount of information is shown when querying an account that is already closed (which could require special transformation of the Id).

If you've been working with Web APIs for even a short period of time, you should be pretty comfortable with your verbs. At a high level:

  • GET – cacheable, idempotent, used to retrieve data
  • POST – not cacheable or idempotent, used to send data to a server for processing

See how easily these ViewModels map to their respective verbs?
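For example, a controller along these lines – a sketch reusing the hypothetical classes above – lets MVC model binding hydrate a query from a GET and a command from a POST:

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("api/accounts")]
public class AccountsController : Controller
{
    // GET: retrieval of data, idempotent, cache-friendly – binds to a query.
    [HttpGet("{id}")]
    public IActionResult GetById(GetAccountByIdQuery query)
    {
        // ...hand the query off to whatever performs the read...
        return Ok();
    }

    // POST: sends data to the server for processing – binds to a command.
    [HttpPost]
    public IActionResult Create([FromBody] CreateNewAccountCommand command)
    {
        // ...hand the command off to whatever performs the write...
        return Ok();
    }
}
```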

I have some apps with such a thin controller layer that I actually substituted the command and query classes as my ViewModels! There is ZERO model binding logic in these controllers, and my CQRS handlers consume them directly. That is descriptive of a system which happened to work well with that design, not a prescription for your web app. In fact, I'd suggest that most traditional MVC apps going all-in on query and command handlers will still have to do some amount of mapping from their ViewModel to a more robust query object (which may contain some methods to keep its state consistent).
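The handler side of that can be as simple as the sketch below. The IQueryHandler interface is my own hand-rolled assumption – libraries such as MediatR give you an equivalent shape – not something prescribed here:

```csharp
using System.Threading.Tasks;

// Hand-rolled handler abstraction – a stand-in for whatever dispatch
// mechanism or library you prefer.
public interface IQueryHandler<in TQuery, TResult>
{
    Task<TResult> Handle(TQuery query);
}

public class GetAccountByIdHandler
    : IQueryHandler<GetAccountByIdQuery, AccountDetailsViewModel>
{
    public Task<AccountDetailsViewModel> Handle(GetAccountByIdQuery query)
    {
        // A real handler would call a repository or data service here; the
        // point is that it consumes the bound query object directly, with no
        // intermediate mapping layer in the controller.
        var result = new AccountDetailsViewModel { Id = query.Id };
        return Task.FromResult(result);
    }
}
```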

When I initially started doing this, it almost felt like cheating – but it makes my massive API very easy to reason about and understand.

If you’re working on a larger, more RESTful service, you’ll appreciate these even more, as they map to the other verbs just as well (I’ve used these with PUT, DELETE, and
even the obscure OPTIONS verb).

This should be an easy refactor in your app, and it will help set you up for the next level of CQRS – tailoring your services and repositories around a polyglot design. That means your read services could defer a read action to an in-memory Redis cache, or a write service could coordinate multiple actions if you have things like pub-sub or messaging frameworks, with all sorts of logic in between.
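As a hedged sketch of that read side, a caching decorator could wrap the handler from earlier. The IAccountCache interface is hypothetical – its implementation could sit over Redis, ASP.NET Core's IDistributedCache, or a plain in-memory store:

```csharp
using System.Threading.Tasks;

// Hypothetical cache abstraction – the implementation could defer to Redis,
// IDistributedCache, or an in-memory dictionary.
public interface IAccountCache
{
    Task<AccountDetailsViewModel> Get(int id);
    Task Set(int id, AccountDetailsViewModel account);
}

// Decorator over the read handler: check the cache first, and fall back to
// the inner handler (and the real data store) on a miss.
public class CachedGetAccountByIdHandler
    : IQueryHandler<GetAccountByIdQuery, AccountDetailsViewModel>
{
    private readonly IQueryHandler<GetAccountByIdQuery, AccountDetailsViewModel> _inner;
    private readonly IAccountCache _cache;

    public CachedGetAccountByIdHandler(
        IQueryHandler<GetAccountByIdQuery, AccountDetailsViewModel> inner,
        IAccountCache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public async Task<AccountDetailsViewModel> Handle(GetAccountByIdQuery query)
    {
        var cached = await _cache.Get(query.Id);
        if (cached != null)
            return cached;

        var fresh = await _inner.Handle(query);
        await _cache.Set(query.Id, fresh);
        return fresh;
    }
}
```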

Happy refactoring!

No REST for the wicked.

I tend to be an obsessive person.  I’ll get really excited about a technology, automation tool, open source library, or data related “thing”, and just be consumed for weeks, or months on end.  It happens with some games I play as well.  My wife, back when we were dating, even told me that she was fearful for a time that she was just another one of my obsessions that would fade.  (Don’t worry – we’re happily married!)

Lately I’ve been in between obsessions, and seem to be fishing around for some cool things to do.

Random aside – the words “I’m bored” you will NEVER hear me say.  Life is too short to stop learning, and the amount of things I do NOT know terrifies me.

Anyway, I completed a server build last Christmas, and I've been digging into some areas of network security that have been weak points for me – specifically a Pluralsight course on Wireless Security. Sure, taking care of the router at home is a trivial task for even the most average of IT people, but it was great learning about the encryption IVs stamped into the firmware of routers for the WPS feature. I always disabled it because it "felt" insecure. I'm a web developer – it's not hard to imagine the myriad of techniques hackers use to compromise our precious data-driven websites.

My brother-in-law has been engrossed in Linux administration recently, and it's got me thinking about my weak PowerShell and Windows command prompt skills. I've always been such a strongly typed .Net thinker that command line apps are giant "magic strings" for me – they almost feel dirty. I won't tag this post under code smell, but I'd love to go over magic strings in a later post, as I find them all the time and constantly have to refactor them. I digress.

I feel like my brain has more control over me than I do over it.  (How’s that for meta-humor?)

But really.


 

So here’s my list of possible undertakings:

  • Buy a Raspberry Pi
  • Learn to Administer said Raspberry Pi via command line
  • Dig into system admin tasks for my Windows Server 2012 box so I can better understand the infrastructure side of IT
  • Hack my home WiFi with old, useless laptops, in the name of improving home security
  • Start shopping for new server components – do I want to build a new one the size of a shoebox?
  • VM a Linux box and just have fun learning that
  • Educate myself on Active Directory so I don’t appear like such a dolt to other IT admins
  • Continue my research into .Net based web scrapers, and see if I can spool up anything with .Net Core

The Pis have me really excited – I'm dying to set up a network of them so they can all talk to each other via RESTful HTTP calls. Have I mentioned how much I love REST? Using HTTP as it was originally intended really helps to model out solution spaces for complex data services and Web APIs. I'll go into that in another post, with concrete samples of how I've approached this in my own ASP.Net code. I can imagine exploring the wonders of message queues and callback URI schedulers to coordinate automated tasks between the Pis – sending me texts throughout the day when one of them has finished scraping customer reviews for a product I'm researching. I'd love to host a MongoDB node on one if it's feasible!

I’m sure I’m missing a few from that list.  And I’ll watch a few movies and TV shows until something grabs a hold of my brain again.  Let’s just hope it’s not internet spaceships this time.

Today's song shares its title with this post. It was wildly popular on the radio for a while – I would still consider it more or less radio rock. Generally, people are pretty polarized on Cage the Elephant.

Ain’t no REST for the Wicked

Thoughts on ASP.Net Core & DI

Microsoft is making big changes to the way they are building and shipping software.

Personally, I think it’s a huge win for developers.  It’s only a matter of time until .Net Core targets ARM processors and all of my homemade applications talk to each other via distributed messaging and scheduling on a Raspberry Pi network.

But the paradigm shift in project structure and style leaves some polarized in their opinion.  The developers who understand the power of Open Source are generally immediately on board.  The developers who still think the best answer to any problem is a stored procedure simply can’t comprehend why you would do anything in .Net Core when the existing .Net Framework “works”.

“Well, it works!”

Ever heard that?

That statement eventually gives birth to the infamous “Well, it works on MY machine…”

Let me tell you something about my philosophy on development.  What really fuels why I do what I do.

The art of software development, in my opinion, is being able to design an evolving solution which adapts to an evolving problem.  I might even give that as a definition to “Agile Code”, but I’ll leave that for another discussion.  I don’t mean an evolving solution as in having to touch the code every second of every day in order to meet new requirements – I mean touching it a handful of times, and the answer to every new requirement is “sure, no problem” as opposed to “Crap – that code is a nightmare to modify for even the smallest tweaks”.

.Net Core facilitates this in so many ways – between amped-up dependency management, DI as a requirement, and a middleware-styled approach. Developers have known for decades that this is how to build software, and for years they have tried to shoehorn this paradigm into the .Net Framework via OWIN and a myriad of IoC containers. Greg Young, an architect for whom I have the utmost respect, has spoken out against DI containers (specifically the proposed benefit of hot-swapping implementations at runtime), but after being confronted with some very challenging requirements, I honestly can't make an app nowadays without one. Even for simple apps I make myself – I decide to switch up implementations and benchmark them against each other, and I don't want to delete code that I've written on my own time because I might want to reuse it later (no TFS at home… yet…).

The most important aspect of .Net Core, in my opinion, is it forces you to think in terms of abstractions.

It’s disheartening when I’m working with other developers who:

A) Claim to be C# developers and can’t define “coding against an abstraction”

B) Don't understand how to properly separate the concerns of their code

C) Believe that offloading business logic to the database is a good decision in the name of performance

I have to catch myself here. It's easy to slip into a cynical view of others and begin to harshly criticize their talent as I put on my headphones for a three-hour refactoring session. That's not me. I believe anyone can code. I believe anyone can be a good coder. Good developers, and high-performing people in general, are good thinkers. They know what they don't know. They never settle for a single best solution; they pragmatically select the best tool for the job, critically assessing their problem and any potential solutions.

 

This is how I mentor the younger developers that drop their jaws when they see Startup.cs for the first time:

Ignore this entire file.  You need to know two things.  You configure services and you use middleware (thanks Daniel Roth!).
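As a minimal sketch of those two things (the commented-out registration is a placeholder for whatever abstractions your app actually needs):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // 1. Configure services: register what the app needs with the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
        // App-specific abstractions get registered here, e.g.
        // services.AddScoped<IDataProvider, WebDataProvider>();
    }

    // 2. Use middleware: compose the HTTP request pipeline, in order.
    public void Configure(IApplicationBuilder app)
    {
        app.UseStaticFiles();
        app.UseMvc();
    }
}
```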

What is it that I need this package/code to do?

Pick the best tool for the problem, and drop it specifically where it belongs.  Concretely, this means thinking about your problem space.  There’s a 99.9% chance you are not the first person to encounter this problem.  What is the most elegant and reusable solution?

This gets the developer thinking about scoping and narrowing their development focus.  Too often they jump immediately to code that actually takes input A and outputs B – it’s our nature.  Usually, as I pester them with questions, they end up verbalizing the abstraction without even realizing it, and half the time the words they use best describe the interface!

Dev: “Well, I’m really only using this code to provide data to my view model.”

Me: “Right – you didn’t even say web service in that sentence.”

Dev: "So it's a 'Data Provider', but it will go to the web. So it's a Web Data Provider."

Me: “For local debugging though, you need to be able to provide some hardcoded values, and it shouldn’t impact or modify any other code.”

(Blank stare, moderate pause)

Dev: “…Should that be a hard coded data provider?”

Boom.  My job here is done.
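A sketch of where that conversation lands – only the names come from the dialogue above; the member signatures are my own guesses:

```csharp
using System.Collections.Generic;

// The abstraction the developer verbalized: something that provides data to
// the view model, with no mention of where that data comes from.
public interface IDataProvider
{
    IEnumerable<string> GetItems();
}

// Production implementation: goes out to the web service.
public class WebDataProvider : IDataProvider
{
    public IEnumerable<string> GetItems()
    {
        // ...call the web service and map its response...
        return new List<string>();
    }
}

// Local-debugging implementation: hardcoded values, no other code touched.
public class HardCodedDataProvider : IDataProvider
{
    public IEnumerable<string> GetItems()
    {
        return new[] { "Sample item 1", "Sample item 2" };
    }
}
```

Swapping between them for local debugging is then a one-line registration change in ConfigureServices – for example, services.AddScoped<IDataProvider, HardCodedDataProvider>() instead of the web implementation – without touching any consuming code.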

For anyone used to working with repositories, DI, and OWIN/MVC, this stuff is child's play. The junior developer (and indeed, the fully mediocre developer) needs a hand grasping these concepts. I find that guiding them through a discussion which allows them to discover the solution presents the most benefit. Simply telling them what to do and how to do it trains a monkey, not a problem solver. Anyone can write 500 lines of code in Page_Load. They need to understand the 'why'. Personally, teaching on the job is one of my favorite things to do – there's simply no substitute for the happiness that hits a developer when they realize the power that this new technique has awarded them.

More on this at a later point, but for now, understand the danger that you take on by using that new() keyword.  You may be stuck with that code block for a long, long time.

 

On to today’s music.  I found some more somber sounding folk-pop stuff.  The EP by Lewis Del Mar was a really great find!  (Minor language disclaimer).

 

Does this make your blood boil?

A group of professionals converse at a table, discussing a recent project delivery.  Amidst the banter, the following statement garners the attention of the room:

“You know, without us database guys, you developers wouldn’t have anything to develop!”

Chuckles all around, and a particularly opinionated business systems analyst comes back with:

“Without analysts to make any sense of your crappy data, nobody would care about your databases!”

The developers said nothing because, well, they’re developers.

 

Ever heard this kind of discussion?  I’ve witnessed a more serious version of it on more than one occasion.  In case you don’t know the appropriate response, let me go ahead and lay it out for you.

The quarterback on a football team huddles with the offensive side after making a play, and says to them:

“You know, if you guys didn’t have me throwing such great passes, this team would be nothing!”

How do you think they would respond?  How would you respond?

Being a team means, by definition, that none of them can fully execute a meaningful play without every role and player allotted and focused on the goal. Imagine the best quarterback in the world throwing passes to middle school kids who are good at math, not football. What's the score of that game?

Naturally, the same analogy holds true for most sports – anything where specialized skill sets have to converge according to a common goal.

Of course, one of the lurking factors here is that each of these technical roles is compensated differently, but salaries are a matter of subjective compensation for a given skill set, negotiated according to geographic medians and other factors determined by payroll consultants and hiring managers.

 

“The database is king”?

“Nothing is more valuable than a company’s data”?

I honestly view these statements as a form of intellectual hubris.  

That may seem a bit harsh, so allow me to explain.  I’ve worked for companies where certain departments definitely had special treatment, but it’s one thing to receive that treatment and it’s quite another to believe in your soul that you are entitled to that treatment.

"Nothing is more valuable"? Really? What about leadership inspiring purpose and pride in the hearts of thousands of employees? What about a business product improving the lives and experiences of millions of humans across the globe? How do you put a value on those things as compared to some bits on a hard drive? (If your counter-argument is that those things all require a database, you've completely missed my point and should start re-reading this post from the beginning.)

Now, there are a couple of things to which I will concede. A company's data, if managed well and correctly, can and should stand the test of time. A poorly designed database will cripple the ability to wield it effectively, and in some cases slowly and painfully degrade application innovation over the course of many years – in some scenarios, many decades (I'm looking at you, AS/400). Speaking as an object-oriented thinker, nobody knows the pain of object-relational impedance mismatch more than I do.

I'm able to say with confidence that a company's data is not its most valuable asset. I'd stake my life on it – which raises the question: what, then, is a company's most valuable asset?

People.

I’ll say it again – let it go down nice and smooth.

People are a company’s most valuable asset.

Not your product, not your data, not your dollars.

Your employees are the product innovators, the customer advocates, the value creators. Data is one of many tools in the toolbox. And, trust me, you don't need to sell me on its importance.

While the discussion in my opener happened to be lighthearted, there are those who would fiercely debate its points and counterpoints. The essence of the discussion is a non-value-added proposition. If you catch anyone engaging in it, do yourself a favor: shut it down with logic and pragmatism. Get the group talking about something more useful and exciting, like why you should shift gears like a samurai.

Alright, that's enough of that. Today's artist/song is a callback to the title. For a while now, I've been itching for a good rock sound similar to the Black Keys – and I've definitely found it in the Icelandic rock group Kaleo. 'Hot Blood' is one of my favorite songs recently, and it aptly describes what happens to me whenever I'm drawn into another pointless, futile debate of "Which is more important, the application or the database?"

First things first.

So for my first substantive post, I want to talk about something which is near and dear to my heart.  Something which sets the tone and foundation for this entire blog.  Something that was instilled into my brain years and years ago by a man whom I would consider a valuable mentor – and someone who is truly dedicated to the measure of quality, down to their innermost core.

Let’s kick this off with a question.  One of the most contentious questions in business.

What is quality?

How you approach the answer to this question determines the starting point for how you answer this next one:

What is software quality?

I know what you’re thinking – too broad for a blog post.  You’re right.  Heck, so many books have been written on this topic I probably couldn’t keep all of them on my bookshelf.

Let’s look at some key questions that we, as software developers, should seriously be considering as we produce and improve our digital product.  Again, most of these questions arose from the fundamentals of manufacturing quality, Six Sigma, and Lean philosophy, but need to be rehashed to apply to today’s fast-paced world of application design and digital product development.

How do we know that the application we’ve built is of high quality?

Who determines this?

Can it be measured?

What if a different development team takes over the product, and they say we are wrong?  Who is right?  What is the criteria for deciding?

Can I look at a codebase and “feel” the quality of the code, or does experience and instinct have no place in a world packed with data analytics, metrics, KPI’s, and issue trackers?

By the way, does any of this matter if I get slapped with a requirements document and am told to make this feature fast and cheap because management wants to get this thing out the door?

Over the next few weeks, I'm going to give my thoughts on the topic, and I'll try to refrain from being overly academic. But for now, I want to dig into some pretty fundamental concepts.

Any Six Sigma expert worth their salt is going to throw Deming at you, and he would say “The customer’s definition of quality is the only one that matters.”

True, but who is the customer?

During my transition from Six Sigma to Agile/Scrum, I realized any time Six Sigma says ‘Customer’, it can be translated to ‘End User’.

Which means our users' definition of quality is the only one that matters?

Not quite.

You see, a fundamental part of Six Sigma methodology is identifying customers in different “domains”, if you will.  The first line drawn is between internal and external customers.  An external customer can be a wholesaler, retailer, or end consumer (this is probably closer to the end user translation).

A critical part of doing business and maintaining high quality standards is measuring, and investing in, your internal customers. This can be a supplier who makes your goods, the purchase order department, or even customer service representatives. For some of you, this might be a slight philosophy change. Yes, after your product goes live, your responsibility goes beyond profits and five-star reviews. Once it's out there, it needs to be supported, have its reputation (and by extension, your brand) maintained, and continue to "delight your customers" (following the Deming phraseology). In the tech world this is more common knowledge, but you would be surprised at the number of traditional consumer goods companies which cease caring about their product once it leaves the warehouse freight bay.

I’m going to cut myself off here for this week.  I think we touched on a few areas which deserve more attention and I’d like to dedicate individual posts for some deeper thought.

This one’s for you, J.W!

 

 

(Credit for the image goes to Tesseract – a British Progressive Metal group, if you’re a metalhead.  I rather enjoy the melodic tones, and their drummer keeps things mixed up and fresh!)
