The Evolutionary Architect

I'm in the midst of tackling Sam Newman's Building Microservices.

I can’t even begin to express how encouraging and refreshing it is to have so many of my thoughts and concerns finally captured in the written word and gaining momentum.  Things like domain boundaries, and the idea that data duplication is OK if done in the name of scalability and development team autonomy.

Not that I’m anywhere near Sam Newman’s experience and knowledge, mind you.  But my posts are fairly clear when it comes to my philosophy of design.  The database is not king.  Sharing a monolithic schema can and will kill your ability to scale in ALL of the following areas: product feature change, team delivery speed, reliability of the software, and uptake of new development talent.

“A good software design is easier to change than a bad software design.” – “Pragmatic” Dave Thomas.


One thing I truly admire about this book is Sam’s pragmatism.  He’s not trying to sell you microservices; rather, he does a thorough pro/con analysis.  The people who should appreciate this most are, indeed, software architects.

In chapter 2, The Evolutionary Architect, Sam does a deeper dive on what it means to be an architect, how we as a software development community have misunderstood the word over the years, and how a true definition is still up for grabs.  Honestly, I completely agree with him.  “Back in the day”, when I worked in Global Quality at a Fortune 500, I had the opportunity of a lifetime to study Six Sigma methodology with a true master of the craft.  This person not only knew the ins and outs of the methodology and the process, but was also responsible for managing a large global team.  It was under this person that I learned, by example, how an evolutionary leader can be a master of a specific process, yet also step back into a management role and empower their team to execute that process.  As team members (and I was a junior member at the time), we can and will fail.  It is the architect’s (indeed, any manager’s) role to mitigate that failure and manage the complexity involved.

It is an architect’s role to manage the complexity of a software product, not to increase it.

Unfortunately, since leaving that particular company, I have yet to meet another leader anywhere close to that magnitude of employee empowerment, mentorship, and expertise in both the “product” and the “people”.

So, back to Sam’s points (now that I’ve given you my background and why I agree): he states that the architect’s role is often that of a tech lead.  Based on my experience, a lot of tech leads get less than two hours of coding per day, and are often caught up in meetings and company bureaucracy which prevent them from being directly or actively involved in the development. Sam states (and I agree), “More than any other role, architects can have a direct impact on quality of the systems built, on the working conditions of their colleagues, and on the organization’s ability to respond to change.”

This, then, makes the architect a strange hybrid of technical and leadership expertise.

Personally, I’ve seen both extremes – an architect who injects their opinion into code without consulting the pragmatic ideas of the rest of the team (who in turn has to own the end result), and the architect who is so hands-off that their entire responsibility is configuring TFS and learning how to use Git so that they can tell the other team members to go Google it.

Neither of these scenarios captures the true essence of an architect – but Sam goes on to say that we’ve actually borrowed the terminology without fully understanding its impact, and that the role is very well defined in other industries, like engineering, where there is a specific, measurable goal.  By contrast, software engineering is less than a century old.

Trebuchet “is a” type of catapult – right?

“Architects have a duty to ensure the system is habitable for developers, too”.  This is critical – tech turnover is still high.  Developers leave because they don’t like working with your codebase (yes, Mr. Architect, you’re responsible for the overall quality of your codebase – go sit with your juniors now), or because the benefits, culture, and environment are better at a different company.  In my experience, a company that invests heavily in the satisfaction of its employees retains better talent for longer.  Software is unique in that you can invest in your developers with shiny tools and conferences, instead of being limited to “only” monetary compensation (like a sales team, for example).

“If we are to ensure that the systems we create are habitable for our developers, then our architects need to understand the impact of their decisions.  At the very least, this means spending time with the team, and ideally it should mean that these developers actually spend time coding with the team too.”

This could be pair programming exercises, code reviews (you don’t get to complain about quality if you don’t put forth a concrete solution), or mentoring sessions.  If you’re an architect who only knows how to create stored procedures which end up creating multiple dependencies and breaking changes across more than one application, then you need to stop calling yourself an architect and start doing your job – developers hate working in environments like this.  Stored procedures make for top-tier database performance and, if misused, the absolute lowest software agility – since your dependencies cannot be managed from within your solution.  That “one guy” has to “remember” that “oh, when you change this sproc, these two applications will break.”  Not fun.

 

Sam compares the architect to a town planner – they don’t get to decide which buildings go where, but they are actively involved in IT governance and pragmatic (read: data-driven) decision making – i.e., they zone out areas where commercial and residential buildings will eventually go.

Anyone remember SimCity?

A town planner does not have the power to add and remove buildings or real estate developers from those zones.  Oftentimes, it’s developers who are on the cutting edge of new tools that can achieve various outputs, and they should be empowered to deliver on the desired quality.  If you’re dictating who consumes which stored procedures, you’re a town planner calling up Wal-Mart and asking them to move in.  If your development team has assessed the risks, and has pragmatically agreed on Costco or Meier, you need to let them do their job.
I’m also a big fan of governance through code, as this hearkens back to my Six Sigma days of mistake-proofing a process.  This opens up a whole new area of discussion, such as how ReSharper, or architectural styles like DDD, REST, and CQRS, can enforce best practices (as defined by you) at the code level.  Another discussion for another time!


For any fans of mainstream house, you may be interested in Deadmau5’s new album – W:/2016ALBUM/ (not a typo!)

DI, IoC, and Others

tl;dr

Dependency Inversion is all about getting rid of hard dependencies on your concrete types. When you new up an object, you take responsibility for its initialization and its lifetime, and you take a hard dependency on its concrete implementation.  We want to eliminate the ‘new’ keyword from all around our code. This can be achieved with IoC Containers, or with Service Locators (a slightly older and less featured approach).  IoC containers exist specifically to manage object lifetime and initialization – they provide you a concrete type based on the registration of an interface, thus ‘inverting’ the normal control flow of new’ing an object and then calling a method.  Instead, you explicitly declare your dependency against an interface in your constructor, then go about normal business calling methods.  The inversion takes place because the actual object is instantiated and initialized in a dedicated object elsewhere in the app, thus following closer to Single Responsibility.


A colleague recently linked an article on dependency injection vs inversion, and how Service Locators compare to constructor injection, IoC, and the like.  It was a decent article, and I’d like to clarify some points which others found confusing.  Since this is all about the ‘D’ in ‘S.O.L.I.D.’, I’d like to start us off at square one to make sure we all start on even footing, especially if you’re new to the subject.


S.O.L.I.D.

Dependency Inversion.

Before I throw the Wikipedia definition at you, let’s look at some pseudo-code you’d find in a perfectly average controller method.

 

public IHttpActionResult GetAllPeople()
{
    PeopleManager manager = new PeopleManager();

    var allPeople = manager.GetAllPeople();

    return new HttpOkObjectResult(allPeople);
}

Even though it’s pseudo-code, the point is that you’ll typically find an instantiation of a service, a call into one or two of those service’s methods, then a return.

What’s the issue?

The issue is that you are not the only developer on the team, and the code inside PeopleManager will change.  Maybe some preconfiguration object will be required, maybe code will have to run on initialization in order to cache data inside the manager, perhaps something will need to be disposed, prompting the use of a using statement.

If implementation code inside PeopleManager changes, will it break your controller code?  If the answer here is yes, we need to revisit our Single Responsibility Principle!  Controllers are not for managing logic and excessive parsing and mapping.  Controllers should be the thinnest possible layer between HTTP and your app services.  They exist only to bind HTTP and/or route data to a service request of some sort.  They should keep your services HTTP-ignorant and hand off your request, not manage the consistency of your services.

On the subject of consistency, what happens when you foreach through a new List<T>()?

Nothing!

This isn’t a technical question, it’s more of a philosophical one.  If you foreach through a new List, no Exception will be thrown.  There aren’t any elements inside, but you also don’t get a Null Reference Exception because of this.

The List, along with the overwhelming majority of modern .Net Types, initializes safely, keeping itself consistent for use.

This means that even though the backing array for List has no elements, and was not provided any information in the constructor, it did not null itself, and is rendered safe for use even in a brand new higher level object.

Objects are also responsible for keeping themselves in a consistent state (as much as they can given constraints and reasonable control over their internals).  That is to say the List is kept safe and consistent by exposing methods which express behaviour.  Add(), Contains(), Reverse() – all of these are clear with intent, do not violate SRP, and leave the object in a consistent state.

I say “reasonable control”, because external actors might interact with the List, and Add() a null value.  This might negatively impact external code (attempting to access a null inside the foreach), but the List itself doesn’t blow up if nulls are passed to it.  Methods expose intent and behavior.  I can’t just reach into the List and set its backing array to null.
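
Here’s a quick console sketch of that behaviour:

using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        // Freshly constructed - no elements, but completely safe to enumerate.
        var names = new List<string>();

        foreach (var name in names)
        {
            Console.WriteLine(name); // never runs, and never throws
        }

        // The List accepts a null element without blowing up...
        names.Add(null);

        // ...but consuming code has to be prepared for that null.
        foreach (var name in names)
        {
            Console.WriteLine(name ?? "(null)");
        }
    }
}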

Code which uses the .Net List takes on the responsibility of initializing it properly, in the correct scope.

That’s all well and good, because List is a .Net type, and is a Type which is freely available to all of your architectural layers almost by definition, but extend the logic:

All of my controllers are responsible for initializing app services properly, in the correct scope.

Whoa!  Go ask your product owner what’s required to create a new <business entity here>.  A new customer?  Your product owner will tell you the customer needs to agree that they are 18 or older, not that the checkbox with id ‘chkBoxOver18’.checked() == true.  The same goes for your controllers.  They receive some bound data regarding the new customer details.  Should they be concerned whether the Customer Registration service requires a separate logging connection string?  Or that it should be used as a singleton?  Or that it’s unsafe to use as a singleton?  Or that it has an array which is initialized as null, so if they use Magical Property A, they need to new up an array in Magical Property B? (I actually observed this in production code.)  Your controller’s responsibility is, in loose talk, “I bind some data from HTTP, make sure it’s valid, and pass it off to one of our app services.  The rest is their problem.”  (A higher complexity enterprise app will generally use a request-response object type of pattern, but that’s out of scope for today.)

We’ve made one consistent definition, but in our second case, extending that definition violates the SRP of our controllers.

Inversion of Control containers were born to alleviate the issue of instantiating complex objects.  They achieve this through a technique called Dependency Injection – which you can think of as constructor injection, though it’s not technically limited to constructors.

If your controller says, “I don’t care about telling the PeopleManager how to do its job.  My job is to let them know that I have the data they require to add a person.”

Here is how that is expressed:


public class PeopleController
{
    private readonly PeopleManager manager;

    public PeopleController(PeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

We move PeopleManager to the constructor, so the controller now explicitly exposes its dependencies.  Can a .Net FileStream exist without being given a file path or some sort of file handle?  No!

9 constructors – all require parameters.

Likewise, your ‘PeopleController’ cannot exist without being given a reference to a PeopleManager to work against.  So where does this magical constructor parameter come from?

IoC containers handle object lifetime, initialization, and resolution.  In .Net Core, this is handled in Startup.cs.  Various registrations are made, and the IoC container maintains an internal directory of which interfaces are registered to which implementations, so whenever an object asks for a Type, the container knows what concrete type to hand back.

[Screenshot: service registrations in Startup.cs]

Transient means a fresh instance is resolved for that one specific resolution, every time the type is asked for.  You can see above that IDbFileStorage only requires some startup configuration code, but is then safe to hold in memory as a singleton.
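
To give a rough idea, a minimal ConfigureServices might look like this – PeopleManager is registered as its concrete type for now (the interface version comes next), and DbFileStorage plus its config key are placeholders standing in for whatever your own startup configuration looks like:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Transient: a fresh PeopleManager is constructed every time one is resolved
    // (e.g. each time a PeopleController is created for a request).
    services.AddTransient<PeopleManager>();

    // Singleton: configured once here, then the same instance is handed out
    // for the lifetime of the app.
    services.AddSingleton<IDbFileStorage>(new DbFileStorage(Configuration["FileStorageConnectionString"]));
}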

The root of the Dependency Inversion Principle lies in the fact that Types should rely on abstractions, not concrete implementations.

This means that aside from all of this IoC stuff, the really good part of this principle only requires us to add a single letter!

Huh?


public class PeopleController
{
    private readonly IPeopleManager manager;

    public PeopleController(IPeopleManager pm)
    {
        manager = pm;
    }

    public IHttpActionResult GetAllPeople()
    {
        var allPeople = manager.GetAllPeople();

        return new HttpOkObjectResult(allPeople);
    }
}

There!  Instead of PeopleManager, we code against the abstraction – IPeopleManager.  This has a huge impact.  IoC is just one of the most common ways to achieve this (and it’s a soft requirement in .Net Core). Tomorrow, when an additional logging configuration object is required in the PeopleManager constructor, you don’t have to shotgun-surgery all of your controller methods.  The change is confined to one place, and unforeseen breaks are easily fixed, without unexpected consequences in code which utilizes your manager.
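
To make that concrete, here’s roughly what the abstraction and its implementation might look like (Person and LoggingConfig are placeholder types for the sake of the example):

// The abstraction the controller depends on.
public interface IPeopleManager
{
    IEnumerable<Person> GetAllPeople();
}

// Tomorrow's change: the implementation now needs a logging configuration object.
// Only this class and the container registration know about it - the controller's
// constructor never changes.
public class PeopleManager : IPeopleManager
{
    private readonly LoggingConfig loggingConfig;

    public PeopleManager(LoggingConfig loggingConfig)
    {
        this.loggingConfig = loggingConfig;
    }

    public IEnumerable<Person> GetAllPeople()
    {
        // ...query the data store and return the results
        return new List<Person>();
    }
}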

Service Locators do something similar, but without constructor injection.  Conceptually, all they really do is expose a static reference which will give you the Type you ask for. I would submit that constructor injection is amazingly useful in writing transparent, expressive code, especially as workflows begin to traverse different services, or as services require other services and so on.
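
A minimal sketch of the locator idea, using a hand-rolled locator purely for illustration (not any particular library):

// A bare-bones Service Locator - for illustration only.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, Func<object>> Registrations = new Dictionary<Type, Func<object>>();

    public static void Register<T>(Func<T> factory) where T : class
    {
        Registrations[typeof(T)] = () => factory();
    }

    public static T Resolve<T>() where T : class
    {
        return (T)Registrations[typeof(T)]();
    }
}

// The controller method pulls its own dependency out of the static locator.
// It works, but the dependency is now hidden in the method body instead of being
// declared in the constructor, which hurts transparency and testability.
public IHttpActionResult GetAllPeople()
{
    var manager = ServiceLocator.Resolve<IPeopleManager>();

    return new HttpOkObjectResult(manager.GetAllPeople());
}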

In the end, we’ve reduced the amount of code in our controller, relocated code to a place which is closer to its responsibility and intent, and made our end result significantly easier to read and refactor – and not to mention, test!  All of these contribute to a more Agile codebase.



Hangfire on .Net Core & Docker

This is going to be a lengthy one, but I did some setup for Hangfire to run in a Docker container (on my Ubuntu server at home), and I thought it’d be pretty exciting to share, given where we are in the .Net lifecycle/ecosystem.

What exactly are we setting up?

So, as part of my software infrastructure at home, I was in need of a job scheduler.  Not because I run a business, but because this is what I do for…um…fun.  I’m starting to have some disparate apps and APIs that need some long-running, durable job handling, so I selected Hangfire based on their early adoption of Core.

I also completed my Ubuntu server build/reimage this past summer, and I was looking to be able to consistently “Dockerize” my apps, so that was a key learning experience I wanted to take away from this.

So here’s the stack I used to complete this whole thing:

  • Hangfire Job Scheduler
  • Docker – you’ll need the Toolbox if you’re developing on Windows/Mac
  • Hosted on my server running Ubuntu 16.04 (but you can run the image on your local Toolbox instance as a PoC).

The easiest place to start is getting Hangfire up and running.  I’ll  skip over my Postgres and Ubuntu setup, but that stuff is widely covered in other documentation.  I’ll have to assume you have a library for your job store that targets Core (I know MongoDB is dangerously close to finalizing theirs, and they have a Docker Image to boot!).  The one I used is shown below in my project.json.

So, spool up a brand new Asp.Net Core app; I made mine a Web Api with no security.   You can name it Hangfire.Web if you want to exactly follow along, but it really doesn’t matter, as long as you spot the areas where it would need to be changed.

In your program.cs, comment the IIS integration code.  We’ll be running Kestrel on a Linux VM via the Asp.Net Docker Image.

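For reference, a bare-bones 1.x Program.cs ends up looking roughly like this once the IIS integration line is commented out (your template may differ slightly):

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            //.UseIISIntegration()   // not needed - Kestrel only inside the container
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}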

Add your job store connection string to your appsettings.json.
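
Mine looks roughly like this – the key name just has to match whatever you read back in Startup.cs, and the connection string values here are placeholders:

{
  "PostgresJobStoreConnectionString": "Host=localhost;Port=5432;Database=hangfire;Username=hangfire;Password=CHANGEME",
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}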

Next up, tweaking your project.json.  I did a few things here, and I’ll post mine for your copy pasting pleasure.  The important parts are removing any IIS packages and pre/post publish scripts/tools.  By default, a new project will come with a couple of IIS publish scripts, and they will break your build/publish if you run only Kestrel in the container.


{
  "dependencies": {
    "Microsoft.NETCore.App": {
      "version": "1.0.1",
      "type": "platform"
    },
    "Microsoft.AspNetCore.Mvc": "1.0.1",
    "Microsoft.AspNetCore.Routing": "1.0.1",
    "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
    "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
    "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
    "Microsoft.Extensions.Configuration.Json": "1.0.0",
    "Microsoft.Extensions.Logging": "1.0.0",
    "Microsoft.Extensions.Logging.Console": "1.0.0",
    "Microsoft.Extensions.Logging.Debug": "1.0.0",
    "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
    "Hangfire": "1.6.6",
    "Hangfire.PostgreSql.NetCore": "1.4.3",
    "Serilog.Extensions.Logging": "1.2.0",
    "Serilog.Sinks.Literate": "2.0.0",
    "AB.FileStore.Impl.Postgres": "1.0.0",
    "ConsoleApp1": "1.0.0"
  },

  "frameworks": {
    "netcoreapp1.0": {
      "imports": [
        "dotnet5.6",
        "portable-net45+win8"
      ]
    }
  },

  "buildOptions": {
    "emitEntryPoint": true,
    "preserveCompilationContext": true
  },

  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },

  "publishOptions": {
    "include": [
      "wwwroot",
      "**/*.cshtml",
      "appsettings.json",
      "web.config",
      "Dockerfile"
    ]
  }
}

You could honestly get rid of most of it for a bare-bones dependency build, but I left a lot of defaults since I didn’t mind.
Next, Startup.cs:


public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddHangfire(options => options
        .UseStorage(new PostgreSqlStorage(Configuration["PostgresJobStoreConnectionString"]))
        .UseColouredConsoleLogProvider()
    );
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    loggerFactory.AddSerilog();
    // Ensure any buffered events are sent at shutdown
    appLifetime.ApplicationStopped.Register(Log.CloseAndFlush);

    app.UseMvc();

    //Hangfire:
    //GlobalConfiguration.Configuration
    //    .UseStorage(new PostgreSqlStorage(Configuration["PostgresJobStoreConnectionString"]));

    app.UseHangfireServer();
    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        Authorization = new List<IDashboardAuthorizationFilter>() { new NoAuthFilter() },
        StatsPollingInterval = 60000 // can't seem to find the UoM on GitHub - would love to know if this is seconds or ms
    });
}

And the NoAuth filter for Hangfire:


using Hangfire.Dashboard;
using Hangfire.Annotations;

public class NoAuthFilter : IDashboardAuthorizationFilter
{
    public bool Authorize([NotNull] DashboardContext context)
    {
        return true;
    }
}

 

Couple notes:

  • The call to PostgresStorage will depend on your job store.  At the time of writing I found a few Postgres packages out there, but this was the only one that built against .Net Core.
  • Serilog logging was configured, but is completely optional for you.  Feel free to remove it.
  • Why the Authorization and NoAuthFilter?  Hangfire, by default, authorizes its dashboard.  While I admire the philosophy of “secure by default”, it took me extra time to configure a workaround for deploying to a remote server that is still  in a protected environment, and I didn’t want to mess around with plugging in Authorization.  You’d only find that out after you deployed the Hangfire app.
  • Stats polling interval is totally up to you.  I used a (very) long interval since the job store wasn’t really doing anything.  To get the stats I need to consciously navigate to that web page, and when I do, real-time isn’t a critical feature for me.
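
One cheap way to verify the job store before moving on is a throwaway fire-and-forget job – BackgroundJob.Enqueue is standard Hangfire, and you can tack it onto the end of Configure purely as a smoke test (delete it afterwards):

// After app.UseHangfireServer() - the job should show up in the dashboard,
// and a row should land in your Postgres job store.
BackgroundJob.Enqueue(() => Console.WriteLine("Hangfire is alive!"));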

At this point, you have everything you need to hit F5 and run your Hangfire instance on local.  Now would be a good time to double check your job stores work, because next we’re moving on to…


DOCKER!

The Big Idea

The idea is, we want to grab the Docker Image for Asp.Net Core, build our source code into it, and be able to run a container from it anywhere.  As you’ll see, we can actually run it locally through Docker Toolbox, and then transfer that image directly to Ubuntu and run it from there!

We’re going to prep our custom image (based on the official aspnetcore Docker image).  We do that by creating a Dockerfile, which is a DSL that instructs Docker on how to layer the images together and merge in your DLLs.

Note that the Docker for Visual Studio tooling is in preview now, and after experiencing some build issues using it, I chose to just command line my way through.  It’s easy, I promise.

First, create a new file simply called ‘Dockerfile’ (no file extension) in your src/Project folder:

Your Dockerfile:


FROM microsoft/aspnetcore:latest
MAINTAINER Andrew
ARG source=.
WORKDIR /publish
EXPOSE 1000
COPY $source .
ENTRYPOINT ["dotnet", "Hangfire.Web.dll"]

Let’s take a look at what this means.  The FROM directive tells Docker to pull an image from the Docker Hub.  MAINTAINER is fully optional, and can be left out if you’re paranoid.  ARG, COPY, and WORKDIR work together to set the current folder as a variable, then reference the publish folder from that variable, copying in its contents (which will be your DLLs in just a moment).  ENTRYPOINT is what Docker will call into once the host boots up the image.  You can call ‘dotnet Hangfire.Web.dll’ straight from your bin folders to double check.  Keep in mind the DLL name in ENTRYPOINT will be whatever you named your project.

To make life a bit harder, I decided to use a specific port via the EXPOSE directive.  I chose an arbitrary number, and wanted to be explicit in my host deployment port assignments.

See that publish folder from above?  We’re going to create that now.  I didn’t want to mess around with publish profiles and Visual Studio settings, so now is where we go into command line mode.  Go ahead and call up the Docker Quickstart terminal.  We can actually call into the .Net Core CLI from there, so we’ll do that for brevity.

[Screenshot: Docker Quickstart terminal starting up]

Make sure Kitematic is running your Linux VM.  Mine is going through VirtualBox.  I couldn’t tell you if the process is the same for Hyper-V driven Windows containers.  You might hang at the above screenshot if the Linux VM isn’t detected/running.

‘cd’ into that same project folder where the Dockerfile is and run a dotnet publish.  You can copy mine from below, which just says publish the Release configuration into a new folder called ‘publish’.


cd 'C:\Users\Andrew\Desktop\ProjectsToKeep\Hangfire\src\Hangfire.Web'

dotnet publish -c Release -o publish

Now we have freshly built DLLs.  We call into the Docker CLI, which will pull the necessary image and merge in that folder we referenced.

docker build ./publish -t hangfireweb

The -t argument is a tag.  It’s highly recommended to assign a tag as you can use that name directly in the CLI. If you get errors like “error parsing reference”, then it’s probably related to the tag.  I noticed some issues related to symbols and capital letters.


Bam!  Our image is built!

I can prove it with this command:

docker images


This next command will take a look at the image, and run a new container instance off of it.

docker run -it -d -e "ASPNETCORE_URLS=http://+:1000" -p 1000:1000 --name Hangfire hangfireweb

--name assigns the container a name so we can verify it once it’s live.

-d runs it in a background daemon.

-e will pass in environment variables.  These are variables passed into Docker when it’s constructing the container, and in this case, Asp.Net defaulted to port 80 (as it should) – but you’ll remember I explicitly instructed the container to only expose port 1000, so I need to also tell Asp.Net to listen on port 1000.  You can view other environment variables for each image on the Docker Hub site or in the Toolbox.  Additionally, the -p argument maps the host port to the container port.  In this case, I opened up 1000 and mapped it to 1000.

You’ll get some output, and can confirm the container is up and running with this call:

docker ps


Keep in mind, if you restart the computer or otherwise stop the container, you can view all containers (running or not) via:

docker ps -a

You can navigate to the Hangfire dashboard to make sure everything is dandy.


That’s all!  To run the image from my Ubuntu box, I just used the docker save and docker load commands.  All you’re really doing is saving the image to a file, and loading it up from another server.  Nothing to it.  You can even keep the Toolbox instance running, spool up a second, and the two will compete over the job store.
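
For reference, the round trip is just a couple of commands – run the save on the machine that built the image, copy the .tar over (scp or whatever you like), then load it on the server:

docker save -o hangfireweb.tar hangfireweb

docker load -i hangfireweb.tar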

Hopefully this was helpful!


I’ll sound off with a more urban, electronic/hip-hop fusion along the lines of Griz or Gramatik.  I found the album Brighter Future by Big Gigantic.  This is fun stuff, but will definitely be appreciated by a select few of you.
