Query Results

CQRS – Command Query Responsibility Segregation

If this is brand new to you, I would encourage reading Dino Esposito’s exposition of it in MSDN magazine – here.  A little history goes a long way!

Just wanted to provide a little commentary today on my take on query results.  When I opt into CQRS classes in my APIs, the only way I’ve done my query result statuses (thus far) is to have an enumeration representing the possible states of the query result.  Some projects might choose to have a unique status per command (using some fancy generic magic and such), but I never quite found that appealing.  As an example, here’s the typical bare minimum I would need for a controller (or any other interested class) to diagnose a query result:
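Something along these lines works; the enum name and the members beyond NotYetProcessed and NoResultData are illustrative, not prescriptive:

public enum QueryResultStatus
{
    NotYetProcessed = 0, // the default; fail fast if a handler forgets to set a status
    Succeeded,
    NoResultData,        // completed successfully, but there is nothing to serialize
    Failed
}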

If a query is particularly long-lived, or was forced to cancel or somehow return as incomplete from a database, NotYetProcessed is our first line of defense.  This is also super handy for new handlers, as I’ll typically forget to set this on my first run through new handler code, and inevitably there is a switch statement on an extension method that catches it and immediately alerts me to the mistake.  I rather enjoy the fail-fast behavior of having a default of NotYetProcessed.

More importantly, being able to write extension methods against this enum allows me massive reuse all across related projects.  Repos can map their states to it; controllers can return status codes based on it.  It’s fully standalone, matches the spirit of query objects perfectly, and is also fully encapsulated.
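As a rough sketch of what I mean (the method name and the exact mapping are illustrative):

public static class QueryResultStatusExtensions
{
    // Map the shared enum to an HTTP status code; controllers decide how to wrap it.
    public static int ToHttpStatusCode(this QueryResultStatus status)
    {
        switch (status)
        {
            case QueryResultStatus.Succeeded:
                return 200;
            case QueryResultStatus.NoResultData:
                return 204;
            case QueryResultStatus.NotYetProcessed:
            default:
                // the fail-fast catch for a handler that forgot to set its status
                throw new InvalidOperationException("Unhandled query result status: " + status);
        }
    }
}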

A comment on the NoResultData status.  I keep this because checking for null or empty isn’t particularly elegant, and in some cases it’s a performance benefit to bypass serialization altogether if we know ahead of time that there’s no concrete result (even though a given routine may prefer Enumerable.Empty<T>()).

I’ll be committing some sample usage to my ApiKickstart repo.  Have a look!


Today’s music – threw on some YouTube randomness and ended up here:

 

The Evolutionary Architect

In the midst of tackling this guy:

(Image: Sam Newman’s Building Microservices.)

I can’t even begin to express how encouraging and refreshing it is to have so many of my thoughts and concerns finally captured into written word and gaining momentum.  Things like domain boundaries, and the fact that data duplication is OK if done in the name of scalability and development team autonomy.

Not that I’m anywhere near Sam Newman’s experience and knowledge, mind you.  But my posts are fairly clear when it comes to my philosophy of design.  The database is not king; sharing a monolithic schema can and will kill your ability to scale in ALL of the following areas: product feature change, team delivery speed, reliability of the software, and uptake of new development talent.

“A good software design is easier to change than a bad software design.” – “Pragmatic” Dave Thomas.

(his blog)

One thing I truly admire about this book is Sam’s pragmatism.  He’s not trying to sell you microservices; rather, he does a thorough pro-and-con analysis.  The people who should appreciate this most are, indeed, software architects.

In chapter 2, The Evolutionary Architect, Sam does a deeper dive into what it means to be an architect, how we as a software development community have misunderstood the word over the years, and how a true definition is still up for grabs.  Honestly, I completely agree with him.  “Back in the day”, when I worked for Global Quality at a Fortune 500, I had the opportunity of a lifetime to study Six Sigma methodology with a true master of the craft.  This person not only knew the ins and outs of the methodology and the process, but was also responsible for managing a large global team.  It was under this person that I learned, by example, how an evolutionary leader can be both a master of a specific process and someone who steps back into a management role to empower their team to execute that process.  As team members (and myself as a junior member at the time), we can and will fail.  It is the architect’s (indeed, any manager’s) role to mitigate that failure and manage the complexity involved.

It is an architect’s role to manage the complexity of a software product, not to increase it.

Unfortunately, since leaving that particular company, I have yet to meet another leader anywhere close to that magnitude of employee empowerment, mentorship, and expertise in both the “product” and the “people”.

So, back to Sam’s points (now that I’ve given you my background and why I agree), he states that the architect’s role is often that of a tech lead.  Based on my experience, a lot of tech leads get less than 2 hours of coding per day, and are often caught up in meetings and company bureaucracy which prevent them from being directly or actively involved in the development. Sam states (and I agree): “More than any other role, architects can have a direct impact on quality of the systems built, on the working conditions of their colleagues, and on the organization’s ability to respond to change.”

This, then, makes them a strange hybrid of technical and leadership expertise.

Personally, I’ve seen both extremes – an architect who injects their opinion into code, without consulting the pragmatic ideas of the rest of the team (who in turn have to own the end result), and also the architect who is so hands-off that their responsibility is configuring TFS and learning how to use Git so that they can tell the other team members to go Google it.

Neither of these scenarios captures the true essence of an architect – but Sam goes on to say we’ve actually borrowed terminology and not fully understood the impact – and that the role is very well defined in other industries, like engineering, where there is a specific, measurable goal.  By contrast, software engineering is less than a century old.

Trebuchet “is a” type of catapult – right?

“Architects have a duty to ensure the system is habitable for developers, too”.  This is critical – tech turnover is still high.  Developers leave because they don’t like working with your codebase (yes, Mr. architect, you’re responsible for the overall quality of your codebase – go sit with your juniors now), or because benefits, culture, and environment are better at a different company.  In my experience, a company that invests heavily in the satisfaction of their employees retains better talent for longer.  Software is unique in the fact that you can invest in your developers with shiny tools and conferences, instead of being limited to “only” monetary compensation (like a sales team for example).

“If we are to ensure that the systems we create are habitable for our developers, then our architects need to understand the impact of their decisions.  At the very least, this means spending time with the team, and ideally it should mean that these developers actually spend time coding with the team too.”

This could be pair programming exercises, code reviews (you don’t get to complain about quality if you don’t put forth a concrete solution), or mentoring sessions.  If you’re an architect who only knows how to create stored procedures which end up creating multiple dependencies and breaking changes for more than one application, then you need to stop calling yourself an architect, and start doing your job – developers hate working in environments like this.  Stored procedures make for top-tier database performance, and the absolute lowest software agility (if misused) – since your dependencies cannot be managed from within your solution.  That “one guy” has to “remember” that “oh, when you change this sproc, these two applications will break”. Not fun.

 

Sam compares the architect to more of a town planner – they don’t get to decide which buildings go where, but they are actively involved in IT governance, and pragmatic decision making (read: data-driven) – i.e., they zone out areas where commercial and residential buildings will eventually go.

Anyone remember SimCity?

A town planner does not have the power to add and remove buildings or real estate developers from those zones.  Oftentimes, it’s developers who are on the cutting edge of new tools that can achieve various outputs, and they should be empowered to deliver on the desired quality.  If you’re dictating who is consuming which stored procedures, you’re a town planner who is calling up Wal-Mart and asking them to move in.  If your development team has assessed the risks, and has pragmatically agreed on Costco or Meijer, you need to let them do their job.
I’m also a big fan of governance through code, as this hearkens back to my Six Sigma days of mistake-proofing a process.  This can open up a whole new area of discussion, such as how Resharper, or architectural styles like DDD, REST, and CQRS can enforce best practices (as defined by you) at a code level.  Another discussion for another time!


For any fans of mainstream house, you may be interested in Deadmau5’s new album – W:/2016ALBUM/ (not a typo!)

DI, IoC, and Others

tldr;

Dependency Inversion is all about getting rid of hard dependencies on your concrete types. When you new up an object, you take responsibility for its initialization and its lifetime, and take a hard dependency on its concrete implementation.  We want to eliminate the ‘new’ keyword from all around our code. This can be achieved with IoC Containers, or with Service Locators, which are slightly older and less featured than IoC Containers.  IoC containers exist specifically to manage object lifetime and initialization – they provide you a concrete type based on registration of an interface, thus ‘inverting’ the normal control flow of new’ing an object, then calling a method.  Instead, you explicitly declare your dependency against an interface in your constructor, then go on about normal business calling methods.  The inversion takes place because the actual object is instantiated and initialized in a dedicated object elsewhere in the app, thus following closer to Single Responsibility.


A colleague recently linked an article on dependency injection vs inversion, and how Service Locators compare to constructor injection, IoC, and the like.  It was a decent article, and I’d like to clarify some points which others found confusing.  Since this is all about the ‘D’ in ‘S.O.L.I.D.’, I’d like to start us off at square one to make sure we all start on even footing, especially if you’re new to the subject.


S.O.L.I.D.

Dependency Inversion.

Before I throw the Wikipedia definition at you, let’s look at some pseudo-code you’d find in a perfectly average controller method.

 

public IHttpActionResult GetAllPeople()
{
    PeopleManager manager = new PeopleManager();

    var allPeople = manager.GetAllPeople();

    return new HttpOkObjectResult(allPeople);
}

Even though it’s pseudo-code, the point is that you’ll typically find an instantiation of a service, a call into one or two of that service’s methods, then a return.

What’s the issue?

The issue is that you are not the only developer on the team, and the code inside PeopleManager will change.  Maybe some preconfiguration object will be required; maybe code might have to run on initialization in order to cache data inside the manager; perhaps some code will need to be disposed, prompting the use of a using statement.

If implementation code inside PeopleManager changes, will it break your controller code?  If the answer here is yes, we need to revisit our Single Responsibility principle!  Controllers are not for managing logic and excessive parsing and mapping.  Controllers should be the thinnest possible layer between HTTP and your app services.  They exist only to bind HTTP and/or route data to a service request of some sort.  They should keep your services HTTP-ignorant and hand off your request, not manage the consistency of your services.

On the subject of consistency, what happens when you foreach through a new List<T>()?

Nothing!

This isn’t a technical question; it’s more of a philosophical one.  If you foreach through a new List, no Exception will be thrown.  There aren’t any elements inside, but you also don’t get a Null Reference Exception because of this.

The List, along with the overwhelming majority of modern .Net Types, initializes safely, keeping itself consistent for use.

This means that even though the backing array for List has no elements, and was not provided any information in the constructor, it did not null itself, and is rendered safe for use even in a brand new higher level object.
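Concretely, in code:

var numbers = new List<int>(); // no elements, no constructor arguments

foreach (var number in numbers)
{
    // never executes, and no NullReferenceException is thrown either
}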

Objects are also responsible for keeping themselves in a consistent state (as much as they can given constraints and reasonable control over their internals).  That is to say the List is kept safe and consistent by exposing methods which express behaviour.  Add(), Contains(), Reverse() – all of these are clear with intent, do not violate SRP, and leave the object in a consistent state.

I say “reasonable control”, because external actors might interact with the List, and Add() a null value.  This might negatively impact external  code (attempting to access a null inside the foreach), but the List itself doesn’t blow up if nulls are passed to it.  Methods expose intent and behavior.  I can’t just reach into the List and set its backing array to null.

Code which uses the .Net List takes on the responsibility of initializing it properly, in the correct scope.

That’s all well and good, because List is a .Net type, and is a Type which is freely available to all of your architectural layers almost by definition, but extend the logic:

All of my controllers are responsible for initializing app services properly, in the correct scope.

Whoa!  Go ask your product owner what’s required to create a new <business entity here>.  A new customer?  Your product owner will tell you they need to agree that they are 18 or older, not that checkbox with id ‘chkBoxOver18’.checked() == true.  Same goes for your controllers.  They receive some bound data regarding the new customer details.  Should they be concerned whether the Customer Registration service requires a separate logging connection string?  Or that it should be used as a singleton?  Or that it’s unsafe to use it as a singleton?  Or that it has an array which is initialized as null, so if they use Magical Property A, they need to new up an array in Magical Property B? (I actually observed this in production code.)  Your controller’s responsibility is, in loose talk, “I bind some data from HTTP, make sure it’s valid, and pass it off to one of our app services.  The rest is their problem.”  (A higher complexity enterprise app will generally use a request-response object type of pattern, but that’s out of scope for today.)

We’ve made one consistent definition, but the issue arises that in our second case, extending the definition violated SRP of our controllers.

Inversion of Control containers were born to alleviate the issue of instantiating complex objects.  They achieve this through a technique called Dependency Injection – which you can think of as constructor injection, though it’s not technically limited to constructors.

If your controller says, “I don’t care about telling the PeopleManager how to do its job.  My job is to let them know that I have the data they require to add a person.”

Here is how that is expressed:


public class PeopleController
{
    private readonly PeopleManager manager;

    public PeopleController(PeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

We move PeopleManager to the constructor.  By moving it there, the controller is explicitly exposing its dependencies.  Can a .Net FileStream exist without being given a file path or some sort of file handle?  No!

9 constructors – all require parameters.

Likewise, your ‘PeopleController’ cannot exist without being given a reference to a PeopleManager to work against.  So where does this magical constructor parameter come from?

IoC containers handle object lifetime, initialization, and resolution.  In .Net Core, this is handled in Startup.cs.  Various registrations are made, and the IoC container maintains an internal directory of which interfaces are registered to which implementations, so that whenever an object asks for a Type, the container knows what to hand back.

(Screenshot: service registrations in Startup.cs.)

Transient means the resolved object will be instantiated just that one time, for that specific resolution.  You can see above that IDbFileStorage only requires some startup configuration code, but then is safe to hold in memory as a singleton.
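Written out, registrations like the ones above amount to something like this – a sketch only; IPeopleManager is from the earlier example, and DbFileStorage is just a stand-in concrete type:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // a fresh instance is resolved every time one is asked for
    services.AddTransient<IPeopleManager, PeopleManager>();

    // configured once at startup, then held in memory for the lifetime of the app
    services.AddSingleton<IDbFileStorage>(provider => new DbFileStorage(/* startup configuration */));
}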

The root of the Dependency Inversion Principle lies in the fact that Types should rely on abstractions, not concrete implementations.

This means that aside from all of this IoC stuff, the really good part of this principle only requires us to add a single letter!

Huh?


public class PeopleController
{
    private readonly IPeopleManager manager;

    public PeopleController(IPeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

There!  Instead of PeopleManager, we code against the abstraction – IPeopleManager.  This has huge impact.  IoC is just one of the most common ways to achieve this (and it’s a soft requirement in .Net Core). Tomorrow, when an additional logging configuration object is required in the PeopleManager constructor, you don’t have to shotgun surgery all of your controller methods.  The change is confined to one place, and unforeseen breaks are easily fixed, without unexpected consequences in code which utilizes your manager.

Service Locators do something similar, but without constructor injection.  Conceptually, all they really do is expose a static reference which will give you the Type you ask for. I would submit that constructor injection is amazingly useful in writing transparent, expressive code, especially as workflows begin to traverse different services, or as services require other services and so on.
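For contrast, a locator-style version of the earlier action might look like this (ServiceLocator here is a hypothetical static locator, not a specific library):

public IHttpActionResult GetAllPeople()
{
    // the dependency is discovered here at call time, instead of being declared in the constructor
    var manager = ServiceLocator.Resolve<IPeopleManager>();

    return new HttpOkObjectResult(manager.GetAllPeople());
}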

In the end, we’ve reduced the amount of code in our controller, relocated code to a place which is closer to its responsibility and intent, and made our end result significantly easier to read and refactor – and not to mention, test!  All of these contribute to a more Agile codebase.


What I’m listening to right now:


Hangfire on .Net Core & Docker

This is going to be a lengthy one, but I did some setup for Hangfire to run in a Docker container (on my Ubuntu server at home) and I thought it’d be pretty exciting to share – given where we are in the .Net lifecycle/ecosystem.

What exactly are we setting up?

So, as part of my software infrastructure at home, I was in need of a job scheduler.  Not because I run a business, but because this is what I do for…um…fun.  I’m starting to have some disparate apps and APIs that are needing some long-running, durable job handling, so I selected Hangfire based on their early adoption of Core.

I also completed my Ubuntu server build/reimage this past summer, and I was looking to be able to consistently “Dockerize” my apps, so that was a key learning experience I wanted to take away from this.

So here’s the stack I used to complete this whole thing:

  • Hangfire Job Scheduler
  • Docker – you’ll need the Toolbox if you’re developing on Windows/Mac
  • Hosted on my Server running Ubuntu 16.04 (but you can run the image on your local Toolbox instance as a PoC)

The easiest place to start is getting Hangfire up and running.  I’ll  skip over my Postgres and Ubuntu setup, but that stuff is widely covered in other documentation.  I’ll have to assume you have a library for your job store that targets Core (I know MongoDB is dangerously close to finalizing theirs, and they have a Docker Image to boot!).  The one I used is shown below in my project.json.

So, spool up a brand new Asp.Net Core app; I made mine a Web Api with no security.   You can name it Hangfire.Web if you want to exactly follow along, but it really doesn’t matter, as long as you spot the areas where it would need to be changed.

In your program.cs, comment out the IIS integration code.  We’ll be running Kestrel on a Linux VM via the Asp.Net Docker Image.

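For reference, the Main method ends up looking roughly like the stock ASP.NET Core 1.0 template with the IIS line commented out (a sketch, not my exact file):

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            // IIS integration commented out; Kestrel runs directly inside the container
            //.UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}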

Add your job store connection string to your appsettings.json.
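Mine boiled down to something like this; the key name is what Startup.cs reads later in this post, and the values here are placeholders:

{
  "PostgresJobStoreConnectionString": "Host=your-postgres-host;Database=hangfire;Username=hangfire;Password=your-password",
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Information"
    }
  }
}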

Next up, tweaking your project.json.  I did a few things here, and I’ll post mine for your copy pasting pleasure.  The important parts are removing any IIS packages and pre/post publish scripts/tools.  By default, a new project will come with a couple of IIS publish scripts, and they will break your build/publish if you run only Kestrel in the container.


{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.1",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Routing": "1.0.1",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
    "Hangfire": "1.6.6",
    "Hangfire.PostgreSql.NetCore": "1.4.3",
    "Serilog.Extensions.Logging": "1.2.0",
    "Serilog.Sinks.Literate": "2.0.0",
    "AB.FileStore.Impl.Postgres": "1.0.0",
    "ConsoleApp1": "1.0.0"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },

  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },

  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },

  "publishOptions": {
    "include": [
      "wwwroot",
      "**/*.cshtml",
      "appsettings.json",
      "web.config",
      "Dockerfile"
    ]
  }
}

You could honestly get rid of most of it for a bare-bones dependency build, but I left a lot of defaults since I didn’t mind.
Next, Startup.cs:


public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddHangfire(options => options
        .UseStorage(new PostgreSqlStorage(Configuration["PostgresJobStoreConnectionString"]))
        .UseColouredConsoleLogProvider()
    );
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    loggerFactory.AddSerilog();
    // Ensure any buffered events are sent at shutdown
    appLifetime.ApplicationStopped.Register(Log.CloseAndFlush);

    app.UseMvc();

    //Hangfire:
    //GlobalConfiguration.Configuration
    //    .UseStorage(new PostgreSqlStorage(Configuration["PostgresJobStoreConnectionString"]));

    app.UseHangfireServer();
    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        Authorization = new List<IDashboardAuthorizationFilter>() { new NoAuthFilter() },
        StatsPollingInterval = 60000 // can't seem to find the UoM on GitHub - would love to know if this is seconds or ms
    });
}

And the NoAuth filter for Hangfire:


using Hangfire.Dashboard;
using Hangfire.Annotations;

public class NoAuthFilter : IDashboardAuthorizationFilter
{
    public bool Authorize([NotNull] DashboardContext context)
    {
        return true;
    }
}

 

Couple notes:

  • The call to PostgresStorage will depend on your job store.  At the time of writing I found a few Postgres packages out there, but this was the only one that built against .Net Core.
  • Serilog logging was configured, but is completely optional for you.  Feel free to remove it.
  • Why the Authorization and NoAuthFilter?  Hangfire, by default, authorizes its dashboard.  While I admire the philosophy of “secure by default”, it took me extra time to configure a workaround for deploying to a remote server that is still  in a protected environment, and I didn’t want to mess around with plugging in Authorization.  You’d only find that out after you deployed the Hangfire app.
  • Stats polling interval is totally up to you.  I used a (very) long interval since the job store wasn’t really doing anything.  To get the stats I need to consciously navigate to that web page, and when I do, real-time isn’t a critical feature for me.

At this point, you have everything you need to hit F5 and run your Hangfire instance on local.  Now would be a good time to double check your job stores work, because next we’re moving on to…


DOCKER!

The Big Idea

The idea is, we want to grab the Docker Image for Asp.Net Core, build our source code into it, and be able to run a container from it anywhere.  As you’ll see, we can actually run it locally through Docker Toolbox, and then transfer that image directly to Ubuntu and run it from there!

We’re going to prep our custom image (based on the aspnet core Docker image here).  We do that by creating a Dockerfile, which is a DSL that will instruct Docker on how to layer the images together and merge in your DLLs.

Note that the Docker for Visual Studio tooling is in preview now, and after experiencing some build issues using it, I chose to just command line my way through.  It’s easy, I promise.

First, create a new file simply called ‘Dockerfile’ (no file extension) in your src/Project folder:

Your Dockerfile:


FROM microsoft/aspnetcore:latest
MAINTAINER Andrew
ARG source=.
WORKDIR /publish
EXPOSE 1000
COPY $source .
ENTRYPOINT ["dotnet", "Hangfire.Web.dll"]

Let’s take a look at what this means.  The ‘FROM’ directive tells Docker to pull an image from the Docker hub.  MAINTAINER is fully optional, and can be left out if you’re paranoid.  ARG, COPY, and WORKDIR work together to set the current folder as a variable, then reference the publish folder from that variable, copying in its contents (which will be your DLLs in just a moment).  ENTRYPOINT is what Docker will call into once the host boots up the image.  You can call ‘dotnet Hangfire.Web.dll’ straight from your bin folders to double check.  Keep in mind the DLL name in ENTRYPOINT will be whatever you named your project.

To make life a bit harder, I decided to use a specific port via the EXPOSE directive.  I chose an arbitrary number, and wanted to be explicit in my host deployment port assignments.

See that publish folder from above?  We’re going to create that now.  I didn’t want to mess around with publish profiles and Visual Studio settings, so now is where we go into command line mode.  Go ahead and call up the Docker Quickstart terminal.  We can actually call into the .Net Core CLI from there, so we’ll do that for brevity.

(Screenshot: the Docker Quickstart terminal starting up.)

Make sure Kitematic is running your Linux VM.  Mine is going through VirtualBox.  I couldn’t tell you if the process is the same for Hyper-V driven Windows containers.  You might hang at the screenshot above if the Linux VM isn’t detected/running.

‘cd’ into that same project folder where the Dockerfile is and run a dotnet publish.  You can copy mine from here, which just says publish the Release configuration into a new folder called ‘publish’.


cd 'C:\Users\Andrew\Desktop\ProjectsToKeep\Hangfire\src\Hangfire.Web'

dotnet publish -c Release -o publish

Now, we have freshly built DLLs.  We call into the Docker CLI, which will pull the necessary image and merge in that folder we referenced.

docker build ./publish -t hangfireweb

The -t argument is a tag.  It’s highly recommended to assign a tag as you can use that name directly in the CLI. If you get errors like “error parsing reference”, then it’s probably related to the tag.  I noticed some issues related to symbols and capital letters.


Bam!  Our image is built!

I can prove it with this command:

docker images


This next command will take a look at the image, and run a new container instance off of it.

docker run -it -d -e "ASPNETCORE_URLS=http://+:1000" -p 1000:1000 --name Hangfire hangfireweb

--name assigns the container a name so we can verify it once it’s live.

-d runs it as a background daemon.

-e will pass in environment variables.  These are variables passed into Docker when it’s constructing the container, and in this case, Asp.Net defaulted to port 80 (as it should) – but you’ll remember I explicitly instructed the container to only expose port 1000, so I need to also tell Asp.Net to listen on port 1000.  You can view other environment variables for each image on the Docker Hub site or in the Toolbox.  Additionally, the -p argument maps the host port to the container port.  In this case, I opened up 1000 on the host and mapped it to 1000 in the container.

You’ll get some output, and can confirm the container is up and running with this call:

docker ps


Keep in mind, if you restart the computer or otherwise stop the container, you can view all containers via:

docker ps -a

You can navigate to the Hangfire UI to make sure everything is dandy.


That’s all!  To run the image from my Ubuntu box I just used the docker save and docker load commands (reference here).  All you’re really doing is saving the image to a file, and loading it up from another server.  Nothing to it.  You can even keep the Toolbox instance running, spool up a second, and the two will compete over the job store.
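For reference, that pair of commands looks roughly like this (the file name is arbitrary):

docker save -o hangfireweb.tar hangfireweb

docker load -i hangfireweb.tar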

Hopefully this was helpful!


I’ll sound off with a more urban, electronic/hip-hop fusion along the lines of Griz or Gramatik.  I found the album Brighter Future by Big Gigantic.  This is fun stuff, but will definitely be appreciated by a select few of you.

Same Old Situation.

One of the key features I like about Agile estimation is that the members on the deliverable side get the final say regarding the size of work.  Traditionally, I’ve been part of teams that estimate with Fibonacci numbers, though I’ve heard some excellent arguments for T-shirt sizing (no surprise – finance wants to correlate everything to man-hours for tax deductions, but this is an outside interest which shouldn’t interfere with the estimation process itself).

Has anyone been part of a project estimated by a singular entity that’s not writing code?  It can be extremely frustrating.  It’s funny – the younger developers think it’s a problem unique to them – but the older developers have “seen it all”.  I had a great conversation with an older, ex-C++ developer, and we brought up The Mythical Man-Month.  If you’re not familiar with Brooks’s law, it states something to the effect of “adding manpower to a late software project makes it later”. I say that because the book is on my list of reads, but I technically haven’t read it yet.  I notice some companies believe that any problem can be solved by throwing more manpower at the problem, without any regard to the quality of the product or developer skill/mindset.

There’s something about the “McDonald’s Developer” that irks me.  The idea that I can snatch up any Joe off the street, and in a handful of hours, teach him how to flip a burger and write code.  What’s so difficult for me to reconcile about this issue is that McDonald’s (and most fast food chains) are among the leanest businesses in the world.  They have to be – fast food is fiercely competitive.  My Lean/Six Sigma brain wants to agree – but my pragmatic developer brain can’t seem to understand how this results in a high quality, maintainable product.  Developers who tend to fit this profile think of their problems in terms of for loops and making sure they check for null.  They haven’t had enough time in the industry (or around better developers) to look at architectural concerns, proper structuring of code to follow SOLID principles, and constantly analyzing what they write to find cleaner, more readable, and more efficient patterns.  This primitive skillset is then propagated by higher-ups who don’t see the benefit of empowering these new developers, or simply don’t know how to grow the potential of a junior member.  Outsourcing has had a detrimental effect on this situation, as that mentality feeds the idea that more asses in chairs will solve your problem, when the truth is a good development team will design a system such that some obstacles will never even arise.  Boy, that makes it difficult to claim a victory for solving a problem by preventing it.  Anyone who’s ever worked in quality or been around manufacturing would understand all too well.

I think part of the way that I reconcile this McDonald’s mentality is that the architecture and framework of your application(s) should make it difficult for poorly designed or written code to enter the system.  This hearkens back to the mistake-proofing idea in Lean.  The process itself should be set up to minimize risk for any negative factors.  There are more than enough design patterns which make it easy to reason about the structure of a system without having to dive into thousands of lines of code.  It’s not that McDonald’s employees are any less capable or intelligent – it’s that the process sets them up for success and makes it difficult for them to mess up.  I’m allowed to make commentary here, having worked part-time in retail food myself.

Coincidentally (or not), I’ve noticed that places where the outsourcing mentality is prevalent have little to no code review practices – does that say anything about their value for their internal customers?

Starbucks is another example of treatment of internal customers.  It’s pretty obvious their culture attracts certain types of people (if not stereotypical).  It’s a very similar corollary – the processes they have in place make any given drink virtually identical, no matter who makes it, or where.

Does this mean their employees, who are part-time, single digit hourly wages, are any less deserving of investment?

Not at all!  In fact, I have a few friends taking advantage of the Starbucks education benefits and wrapping up their undergraduate degrees.  They may or may not be with the company in 5 years, but then again – most tech professionals won’t be either!  Costco is a good example of a company that is well known throughout Wall Street  as investing so heavily in their employees that they  have the best employee retention and customer satisfaction rates.

As I explore this idea more in the future, I’ll be sure to write my new discoveries on it.  In addition to the rush I get from refactoring badly written C#, mentoring newer developers is near the top of my list of favorite things to do.  And I have a feeling that if they feel invested in, and are growing their skills, it’ll reduce the dollar and time waste involved with maintaining the revolving door of new hires.

I’ll have to end today’s post with a song of the same name.  Sublime, whose self-titled album went five times Platinum largely due to the success of “What I Got,” has a very college reggae/party ska feel, which tends to be a very specific taste.  I always felt their music was fun, but not something I could commit to buying a physical copy of, or listen to repetitively and recommend to others.  What I didn’t know is that their frontman, Bradley Nowell, died in 1996 – and the band dissolved soon after.

Turns out, they reformed with a new frontman in the late 2000s – and it’s awesome.  The new singer/guitarist is Rome Ramirez, and they perform under the name Sublime With Rome.  This has to be some of the finest reggae to come out of the West Coast – Rome’s voice is amazing, and the music has a very timeless feel.  I actually listen to it as rarely as I can, for fear of ruining the magic.  It’s like Sublime grew up, graduated college, and found its identity – the percussion is bang on, it’s well mixed, has a deep reggae vibe, and Rome’s voice keeps things very fun, without being stereotypical reggae, as can be common in the States.

Check out Same Old Situation.

No REST for the wicked.

I tend to be an obsessive person.  I’ll get really excited about a technology, automation tool, open source library, or data related “thing”, and just be consumed for weeks, or months on end.  It happens with some games I play as well.  My wife, back when we were dating, even told me that she was fearful for a time that she was just another one of my obsessions that would fade.  (Don’t worry – we’re happily married!)

Lately I’ve been in between obsessions, and seem to be fishing around for some cool things to do.

Random aside – the words “I’m bored” you will NEVER hear me say.  Life is too short to stop learning, and the amount of things I do NOT know terrifies me.

Anyway, I completed a server build last Christmas, and I’ve been digging into some areas of network security that have been weak points for me.  Specifically, a course on Pluralsight on Wireless Security.  Sure, taking care of the router at home is a trivial task for even the most average of IT people, but it was great learning about encryption IVs stamped into the firmware of routers for the WPS feature.  I always disabled it because it “felt” insecure.  I’m a web developer; it’s not hard to imagine the myriad of techniques hackers use to compromise our precious data-driven websites.

My brother-in-law has been engrossed in Linux administration recently, and it’s got me thinking about my weak PowerShell and Windows command prompt skills.  I’ve always been such a strongly typed .Net thinker that command line apps are giant “magic strings” for me – they almost feel dirty.  I won’t tag this post under code smell, but I’d love to go over magic strings in a later post, as I find them all the time and constantly have to refactor them.  I digress.

I feel like my brain has more control over me than I do over it.  (How’s that for meta-humor?)

But really.

2a0b4d4f5ea25f2d7f60722683af3962

 

So here’s my list of possible undertakings:

  • Buy a Raspberry Pi
  • Learn to Administer said Raspberry Pi via command line
  • Dig into system admin tasks for my Windows Server 2012 box so I can better understand the infrastructure side of IT
  • Hack my home WiFi with old, useless laptops, in the name of improving home security
  • Start shopping for new server components – do I want to build a new one the size of a shoebox?
  • VM a Linux box and just have fun learning that
  • Educate myself on Active Directory so I don’t appear like such a dolt to other IT admins
  • Continue my research into .Net based web scrapers, and see if I can spool up anything with .Net Core

The Pis have me really excited – I’m dying to set up a network of them so they can all talk to each other via RESTful HTTP calls.  Have I mentioned how much I love REST?  Using HTTP as it was originally intended really helps to model out solution spaces for complex data services and Web APIs. I’ll go into that in another post, with concrete samples of how I’ve approached this in my own ASP.Net code.  I can imagine exploring the wonders of message queues and callback URI schedulers to coordinate automated tasks between the Pis – sending me texts throughout the day when one of them has finished scraping customer reviews for a product I’m researching.  I’d love to host a MongoDB node on one if it’s feasible!

I’m sure I’m missing a few from that list.  And I’ll watch a few movies and TV shows until something grabs a hold of my brain again.  Let’s just hope it’s not internet spaceships this time.

Today’s song shares its title with this post.  It was wildly popular on the radio for a while – I would still consider it more or less radio rock.  Generally people are pretty polarized on Cage the Elephant.

Ain’t no REST for the Wicked

Thoughts on ASP.Net Core & DI

Microsoft is making big changes to the way they are building and shipping software.

Personally, I think it’s a huge win for developers.  It’s only a matter of time until .Net Core targets ARM processors and all of my homemade applications talk to each other via distributed messaging and scheduling on a Raspberry Pi network.

But the paradigm shift in project structure and style leaves some polarized in their opinion.  The developers who understand the power of Open Source are generally immediately on board.  The developers who still think the best answer to any problem is a stored procedure simply can’t comprehend why you would do anything in .Net Core when the existing .Net Framework “works”.

“Well, it works!”

Ever heard that?

That statement eventually gives birth to the infamous “Well, it works on MY machine…”

Let me tell you something about my philosophy on development.  What really fuels why I do what I do.

The art of software development, in my opinion, is being able to design an evolving solution which adapts to an evolving problem.  I might even give that as a definition to “Agile Code”, but I’ll leave that for another discussion.  I don’t mean an evolving solution as in having to touch the code every second of every day in order to meet new requirements – I mean touching it a handful of times, and the answer to every new requirement is “sure, no problem” as opposed to “Crap – that code is a nightmare to modify for even the smallest tweaks”.

.Net Core facilitates this in so many ways – between amped-up dependency management, DI as a requirement, and a middleware-styled approach.  Developers have known for decades this is how to build software, and for years have tried to shoehorn this paradigm into the .Net Framework via OWIN and a myriad of IoC containers.  Greg Young, an architect whom I have the utmost respect for, has spoken out against DI containers (specifically the proposed benefit of hot-swapping implementations at runtime), but after being confronted with some very challenging requirements, I honestly can’t make an app nowadays without one.  Even for simple apps I make myself – I decide to switch up implementations and benchmark things against each other, but I don’t want to delete code that I’ve written on my own time for fear of reusing it at a later time (No TFS at home… yet…).

The most important aspect of .Net Core, in my opinion, is it forces you to think in terms of abstractions.

It’s disheartening when I’m working with other developers who:

A) Claim to be C# developers and can’t define “coding against an abstraction”

B) Don’t understand how to properly separate the concerns of code

C) Believe that offloading business logic to the database is a good decision in the name of performance

I have to catch myself here.  It’s easy to slip into a cynical view of others and begin to harshly criticize their talent as I put on my headphones for a three-hour refactoring session.  That’s not me.  I believe anyone can code.  I believe anyone can be a good coder.  Good developers, and high-performing people in general, are good thinkers.  They know what they don’t know.  They never settle for a single best solution; they pragmatically select the best tool for the job, critically assessing their problem and any potential solutions.

 

This is how I mentor the younger developers that drop their jaws when they see Startup.cs for the first time:

Ignore this entire file.  You need to know two things.  You configure services and you use middleware (thanks Daniel Roth!).

What is it that I need this package/code to do?

Pick the best tool for the problem, and drop it specifically where it belongs.  Concretely, this means thinking about your problem space.  There’s a 99.9% chance you are not the first person to encounter this problem.  What is the most elegant and reusable solution?

This gets the developer thinking about scoping and narrowing their development focus.  Too often they jump immediately to code that actually takes input A and outputs B – it’s our nature.  Usually, as I pester them with questions, they end up verbalizing the abstraction without even realizing it, and half the time the words they use best describe the interface!

Dev: “Well, I’m really only using this code to provide data to my view model.”

Me: “Right – you didn’t even say web service in that sentence.”

Dev: “So it’s a ‘Data Provider’, but it will go to the web.  So it’s a Web Data Provider.”

Me: “For local debugging though, you need to be able to provide some hardcoded values, and it shouldn’t impact or modify any other code.”

(Blank stare, moderate pause)

Dev: “…Should that be a hard coded data provider?”

Boom.  My job here is done.
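In code, the abstraction that conversation lands on looks something like this (the names come straight from the dialogue; the member shape is illustrative):

public interface IDataProvider
{
    IReadOnlyList<string> GetValues();
}

// Production implementation: goes out to the web service.
public class WebDataProvider : IDataProvider
{
    public IReadOnlyList<string> GetValues()
    {
        // web service call goes here; stubbed out for the sketch
        return new List<string>();
    }
}

// Local debugging implementation: hardcoded values, and no other code has to change.
public class HardCodedDataProvider : IDataProvider
{
    public IReadOnlyList<string> GetValues()
    {
        return new List<string> { "sample value 1", "sample value 2" };
    }
}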

For anyone used to working with repositories, DI, and OWIN/MVC, this stuff is child’s play.  The junior developer (and indeed, the fully mediocre developer) needs a hand grasping these concepts.  I find that guiding them through a discussion which allows them to discover the solution presents the most benefit.  Simply telling them what to do and how to do it trains a monkey, not a problem solver.  Anyone can write 500 lines of code in Page_Load.  They need to understand the ‘why’.  Personally, teaching on the job is one of my favorite things to do – there’s simply no substitute for the happiness that hits a developer when they realize the power that this new technique has awarded them.

More on this at a later point, but for now, understand the danger that you take on by using that new() keyword.  You may be stuck with that code block for a long, long time.

 

On to today’s music.  I found some more somber sounding folk-pop stuff.  The EP by Lewis Del Mar was a really great find!  (Minor language disclaimer).

 

Does this make your blood boil?

A group of professionals converse at a table, discussing a recent project delivery.  Amidst the banter, the following statement garners the attention of the room:

“You know, without us database guys, you developers wouldn’t have anything to develop!”

Chuckles all around, and a particularly opinionated business systems analyst comes back with:

“Without analysts to make any sense of your crappy data, nobody would care about your databases!”

The developers said nothing because, well, they’re developers.

 

Ever heard this kind of discussion?  I’ve witnessed a more serious version of it on more than one occasion.  In case you don’t know the appropriate response, let me go ahead and lay it out for you.

The quarterback on a football team huddles with the offensive side after making a play, and says to them:

“You know, if you guys didn’t have me throwing such great passes, this team would be nothing!”

How do you think they would respond?  How would you respond?

Being a team, then, means by definition that none of them can fully execute a meaningful play without all roles and players allotted and focused on the goal.  Imagine the best quarterback in the world, throwing passes to middle school kids who are good at math, not football.  What’s the score of that game?

Naturally, the same analogy holds true for most sports – anything where specialized skill sets have to converge according to a common goal.

Of course, one of the lurking factors here is that each of these technical roles is compensated differently, but salaries are a matter of subjective compensation for a given skillset, negotiated according to geographic medians and other factors determined by payroll consultants and hiring managers.

 

“The database is king”?

“Nothing is more valuable than a company’s data”?

I honestly view these statements as a form of intellectual hubris.  

That may seem a bit harsh, so allow me to explain.  I’ve worked for companies where certain departments definitely had special treatment, but it’s one thing to receive that treatment and it’s quite another to believe in your soul that you are entitled to that treatment.

“Nothing is more valuable”?  Really? What about leadership inspiring purpose and pride in the hearts of thousands of employees?  What about a business product improving the lives and experiences of millions of humans across the globe?  How do you put a value on those things as compared to some bits on a hard drive?  (Any counter-argument here saying that those statements all require a database completely missed my point and should start re-reading this post from the beginning).

Now there are a couple of things to which I will concede.  A company’s data, if managed well and correctly, can, and should, stand the test of time.  A poorly designed database will cripple the ability to wield it effectively, and in some cases, slowly and painfully degrade application innovation over the course of many years, and in some scenarios, many decades (I’m looking at you, AS/400). Speaking as an object-oriented thinker, nobody knows the pain of object-relational impedance mismatch more than myself.

I’m able to say with confidence that a company’s data is not its most valuable asset.  I’d stake my life on it – and it raises the question – what, then, is a company’s most important asset?

People.

I’ll say it again – let it go down nice and smooth.

People are a company’s most valuable asset.

Not your product, not your data, not your dollars.

Your employees are the product innovators, the customer advocates, the value creators.  Data is one of many tools in the toolbox.  And, trust me, you don’t need to sell me on its importance.

While the discussion in my opener happened to be lighthearted, there are those who would fiercely debate points and counterpoints to it.  The essence of the discussion is a non-value-added proposition.  If you catch anyone engaging in this discussion, do yourself a favor.  Shut it down with logic and pragmatism.  Get the group talking about something more useful and exciting, like why you should shift gears like a samurai.

Alright, that’s enough of that.  Today’s artist/song is a callback to the title.  For awhile now, I’ve been itching for a good Rock sound similar to the Black Keys – and I’ve definitely found it in the Icelandic rock group Kaleo.  ‘Hot Blood’ is one of my favorite songs recently, and aptly describes what happens to me whenever I’m drawn into another pointless, futile question of, “Which is more important, the application or the database?”
