My New Project Template

As of late, I’ve found myself hitting File > New Project a lot.  It’s exciting to delve into new packages and design paradigms, but I find myself taking on many of the same dependencies over and over again.  Some of my favorites that make it in virtually every time:

  • Serilog (link) – structured logging
  • Serilog sinks, including, but not limited to: File, Rolling File, MongoDB, SQL Server, RabbitMQ, and the Async wrapper
  • MediatR (link) – in-process messaging.  Jimmy Bogard, you are a legend.
  • MediatR extensions to easily wire into ASP.NET Core
  • NSwag for API documentation
  • Nicholas Blumhardt’s smart logging middleware (which I’ve tweaked as my own NuGet package)

I may remove one or two if the specific project demands it, but it beats reworking my Startup.cs every single time.
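For the curious, the reusable core of that Startup.cs looks roughly like this. It's a sketch, not a drop-in file – the exact wiring methods (AddMediatR, UseSwagger/UseSwaggerUi) vary between package versions, and the sink choice is just my usual default:

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // MediatR: scan this assembly for request/notification handlers
        services.AddMediatR(typeof(Startup).GetTypeInfo().Assembly);
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Serilog: structured logging; rolling file is one of my usual sinks
        Log.Logger = new LoggerConfiguration()
            .WriteTo.RollingFile("logs/log-{Date}.txt")
            .CreateLogger();
        loggerFactory.AddSerilog();

        // NSwag: generated API documentation endpoint
        app.UseSwagger();
        app.UseSwaggerUi();

        app.UseMvc();
    }
}
```

Having this skeleton checked in once means a new project starts with logging, messaging, and docs already talking to each other.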

The folder structure is what threw me off and took the most time.  I tried to model it after most of the higher-quality stuff I see on GitHub – things like IdentityServer4 and the Microsoft/AspNet repos.  I’ve gotten used to the ‘src’ folder with accompanying ‘docs’ and ‘samples’.  Achieving it was a pain, but I realized that creating Visual Studio solution folders that closely mirror your physical structure helps you keep it mentally organized.  A picture is worth a thousand words:



The top level has your .sln.




The next level in has historically been all of my csproj folders.  This time, I did a logical separation.  I may add one for messaging in the future as well.  Inside the logical folders (in the screenshot, CQRS) you will add your class libraries and projects.  Be careful though, Visual Studio can trick you here.

Add New Project will drop your .csproj into a normal flat list, but the physical location is right where you put it.  You have to use Add New Solution Folder to get a logical view in the Solution Explorer that matches your physical directory structure.  Bit of a nuisance, but it’s not so bad once you understand what’s going on.

Before Solution Folder:



(at this point, it’s actually physically located in one of the directories above)








After Adding Solution Folder via Right Click Solution > Add > New Solution Folder:



Just drag and drop your csproj into the solution folder, and check in the resulting changes to your sln file.
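If you’re curious what “the resulting changes” actually look like, a solution folder is just a pseudo-project entry in the .sln plus a NestedProjects mapping.  Roughly like this – the GUIDs here are placeholders, but the `{2150E333-...}` project-type GUID is Visual Studio’s well-known solution-folder marker:

```
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "CQRS", "CQRS", "{AAAAAAAA-0000-0000-0000-000000000001}"
EndProject

Global
	GlobalSection(NestedProjects) = preSolution
		{BBBBBBBB-0000-0000-0000-000000000002} = {AAAAAAAA-0000-0000-0000-000000000001}
	EndGlobalSection
EndGlobal
```

The NestedProjects line reads as “project B lives inside solution folder A” – Visual Studio rewrites this for you when you drag and drop.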







You could even do a totally different view in the Solution Explorer if you wanted, but my aim was to mirror most of the higher end stuff I see on GitHub in the .Net space.

I’ve thrown it up on GitHub:

Hopefully it helps you save some time in your next project!  Happy coding!

1? Or…2?

I promise I’ve got some code samples coming down the pipe, but I’ve found another article which really caught my attention and I couldn’t resist the urge to provide my commentary.

(just in case you weren’t crystal clear on my development philosophy)

Thoughts on Agile Database Development

First off, I’m madly in love with Marten + Postgres.  The whole idea of a hybrid relational/object store at first prompted me to sound the alarm about one package or database being guilty of “trying to do too much”.  After working with it (albeit on personal projects), boy, was I wrong.  It’s really the best of both worlds – the frictionless object-storage aspect (reminiscent of RavenDB or MongoDB – by design), combined with enough control to satisfy the majority of DBAs I’ve worked with, makes for a truly new experience in the ORM world, where we Microsoft goons have been spoon-fed Entity Framework for many years.  I swear I spend more time configuring EF6 than I do actually writing code that matters.

The comparison of Postgres to MSSQL Server is out of scope here, but suffice it to say that if you’re willing to take a dependency (normally something I would absolutely never accept), you’ll be richly rewarded.  Not only is IDocumentSession tremendously optimized for a given query or command (see my posts on CQRS), but the surrounding tooling for doing real-life deployment tasks is tremendously useful, especially considering this thing is FOSS.  Schema comparison through the Marten command line takes so much work out of having to manage which SQL changes happened when, assuming you don’t have an expensive enterprisey tool to handle that stuff for you.  Even if you do, chances are it’s not linked up with your POCOs, which is where all the interesting action happens.
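To make the “frictionless object storage” claim concrete, here’s a minimal sketch.  The User type and connection string are hypothetical; DocumentStore and IDocumentSession are Marten’s actual entry points:

```csharp
using Marten;

var store = DocumentStore.For("host=localhost;database=app;username=me;password=secret");

// Frictionless object storage: no mapping files, no schema scripts
using (var session = store.LightweightSession())
{
    session.Store(new User { UserName = "andrew" });
    session.SaveChanges(); // Marten builds the backing table on the fly
}

// LINQ queries against Postgres jsonb
using (var query = store.QuerySession())
{
    var andrew = query.Query<User>()
        .FirstOrDefault(u => u.UserName == "andrew");
}
```

That’s the whole persistence story for a simple document – which is exactly the RavenDB/MongoDB feel it’s going for, just with Postgres underneath.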

Which brings me up to the next point – software design philosophy.

Quoted from Jeremy’s post –

“The” Database vs. Application Persistence

There are two basic development paradigms to how we think about databases as part of a software system:

  1. The database is the system and any other code is just a conduit to get data back and forth from the database and  its consumers

  2. The database is merely the state persistence subsystem of the application

Wow.  Nailed it.  I feel that there’s so much talk about DDD, stored procedures, ORMs, business logic, performance, ACID persistence, and which-team-owns-what-code that we never even stop to ask ourselves, “am I a ‘1’ or a ‘2’?”  The answer to that question shapes the direction of every system you build, and all the systems that your team builds.

A lot of the conflict I see arise in software development at my company comes from this aspect of writing code for the simple purpose of putting data in the database.  This is fine until you run into a non-trivial set of business rules and external integration points.  In my experience, one of the biggest killers of scalability is over-reliance on a monolithic transactional store.  Our obsession with saving a few bits of hard disk space has led to a colossal convolution and violation of separation of concerns.  I’m not allowed to pull a customer’s purchase history because there are 12 joins involved and the performance is too slow?

When did that become acceptable?

Now, Marten is not the golden hammer to this problem, but rather the design philosophy of the team that created it is explicitly aligned to more domain oriented thinking, versus the philosophy of “all hail the mighty column”.

His point about an Application Database hits home too.  I lost count of the number of times our dev team broke the database in our dev environment.  Being able to stand up a local copy for isolated testing (or intentional breaking) is unbelievably useful for anyone that gives a crap about proving the durability of their system.  I’m going to give Jeremy some more free linkage related to the shared database antipattern.

I added a Marten project to my Bootstrapper GitHub project to help capture some of my initial work with Marten.  It’s really for basic copy-paste reference, or a fast import of generally used CRUD methods that you’d want to hot drop into a File > New Project experience.  I still need to commit my VS 2017 migrations to master…

As an aside,

If you’re spending more time in SQL schema design meetings than you are with the domain experts, you’re doing it wrong!

But again, that statement depends on whether you’re a 1 or a 2…

I’ll leave the discussion regarding the ‘Vietnam of computer science’ for another day.  Happy coding, friends!


Hangfire on .Net Core & Docker

This is going to be a lengthy one, but I did some setup for Hangfire to run in a Docker container (on my Ubuntu server at home), and I thought it’d be pretty exciting to share, given where we are in the .Net lifecycle/ecosystem.

What exactly are we setting up?

So, as part of my software infrastructure at home, I was in need of a job scheduler.  Not because I run a business, but because this is what I do for…um…fun.  I’m starting to have some disparate apps and APIs that need long-running, durable job handling, so I selected Hangfire based on their early adoption of Core.

I also completed my Ubuntu server build/reimage this past summer, and I was looking to be able to consistently “Dockerize” my apps, so that was a key learning experience I wanted to take away from this.

So here’s the stack I used to complete this whole thing:

  • Hangfire Job Scheduler
  • Docker – you’ll need the Toolbox if you’re developing on Windows/Mac
  • Hosted on my server running Ubuntu 16.04 (but you can run the image on your local Toolbox instance as a PoC)

The easiest place to start is getting Hangfire up and running.  I’ll skip over my Postgres and Ubuntu setup, but that stuff is widely covered in other documentation.  I’ll have to assume you have a library for your job store that targets Core (I know MongoDB is dangerously close to finalizing theirs, and they have a Docker image to boot!).  The one I used is shown below in my project.json.

So, spool up a brand new ASP.NET Core app; I made mine a Web API with no security.  You can name it Hangfire.Web if you want to follow along exactly, but it really doesn’t matter, as long as you spot the areas where it would need to be changed.

In your Program.cs, comment out the IIS integration code.  We’ll be running Kestrel on a Linux VM via the ASP.NET Core Docker image.


Add your job store connection string to your appsettings.json.

Next up, tweak your project.json.  I did a few things here, and I’ll post mine for your copy-pasting pleasure.  The important parts are removing any IIS packages and pre/post-publish scripts/tools.  By default, a new project will come with a couple of IIS publish scripts, and they will break your build/publish if you run only Kestrel in the container.

"dependencies": {
  "Microsoft.NETCore.App": {
    "version": "1.0.1",
    "type": "platform"
  },
  "Microsoft.AspNetCore.Mvc": "1.0.1",
  "Microsoft.AspNetCore.Routing": "1.0.1",
  "Microsoft.AspNetCore.Server.Kestrel": "1.0.1",
  "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
  "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
  "Microsoft.Extensions.Configuration.Json": "1.0.0",
  "Microsoft.Extensions.Logging": "1.0.0",
  "Microsoft.Extensions.Logging.Console": "1.0.0",
  "Microsoft.Extensions.Logging.Debug": "1.0.0",
  "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
  "Hangfire": "1.6.6",
  "Hangfire.PostgreSql.NetCore": "1.4.3",
  "Serilog.Extensions.Logging": "1.2.0",
  "Serilog.Sinks.Literate": "2.0.0",
  "AB.FileStore.Impl.Postgres": "1.0.0",
  "ConsoleApp1": "1.0.0"
},

"frameworks": {
  "netcoreapp1.0": {
    "imports": []
  }
},

"buildOptions": {
  "emitEntryPoint": true,
  "preserveCompilationContext": true
},

"runtimeOptions": {
  "configProperties": {
    "System.GC.Server": true
  }
},

"publishOptions": {
  "include": [
    "wwwroot",
    "appsettings.json"
  ]
}

You could honestly get rid of most of it for a bare-bones dependency build, but I left a lot of the defaults since I didn’t mind.
Next, Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc();

    services.AddHangfire(options => options
        .UseStorage(new PostgreSqlStorage(Configuration["PostgresJobStoreConnectionString"])));
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory, IApplicationLifetime appLifetime)
{
    // Ensure any buffered (Serilog) events are sent at shutdown
    appLifetime.ApplicationStopped.Register(Log.CloseAndFlush);

    // Start the background job server alongside the dashboard
    app.UseHangfireServer();

    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        Authorization = new List<IDashboardAuthorizationFilter>() { new NoAuthFilter() },
        StatsPollingInterval = 60000 // milliseconds
    });
}

And the NoAuth filter for Hangfire:

using Hangfire.Dashboard;
using Hangfire.Annotations;

public class NoAuthFilter : IDashboardAuthorizationFilter
{
    public bool Authorize([NotNull] DashboardContext context)
    {
        return true;
    }
}

Couple notes:

  • The call to PostgreSqlStorage will depend on your job store.  At the time of writing, I found a few Postgres packages out there, but this was the only one that built against .Net Core.
  • Serilog logging was configured, but is completely optional for you.  Feel free to remove it.
  • Why the Authorization and NoAuthFilter?  Hangfire, by default, authorizes its dashboard.  While I admire the philosophy of “secure by default”, it took me extra time to configure a workaround for deploying to a remote server that sits in an already-protected environment, and I didn’t want to mess around with plugging in real authorization.  You’d only find that out after you deployed the Hangfire app.
  • Stats polling interval is totally up to you.  I used a (very) long interval since the job store wasn’t really doing anything.  To get the stats I need to consciously navigate to that web page, and when I do, real-time isn’t a critical feature for me.

At this point, you have everything you need to hit F5 and run your Hangfire instance locally.  Now would be a good time to double-check that your job store works, because next we’re moving on to…
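A quick way to do that double-check is to enqueue a throwaway job or two and watch them land in the dashboard.  These are Hangfire’s standard static APIs; the lambda bodies are just placeholders:

```csharp
// Fire-and-forget: runs once, persisted to the Postgres job store
BackgroundJob.Enqueue(() => Console.WriteLine("Fire-and-forget job ran!"));

// Delayed: runs once after the given wait
BackgroundJob.Schedule(() => Console.WriteLine("Delayed job ran!"), TimeSpan.FromMinutes(5));

// Recurring: cron-style schedule, survives restarts via the job store
RecurringJob.AddOrUpdate("heartbeat", () => Console.WriteLine("Still alive"), Cron.Minutely);
```

If these show up under Jobs and Recurring Jobs in the dashboard, your storage wiring is good.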



The Big Idea

The idea is, we want to grab the Docker Image for Asp.Net Core, build our source code into it, and be able to run a container from it anywhere.  As you’ll see, we can actually run it locally through Docker Toolbox, and then transfer that image directly to Ubuntu and run it from there!

We’re going to prep our custom image (based on the ASP.NET Core Docker image here).  We do that by creating a Dockerfile – a small DSL that instructs Docker how to layer the images together and merge in your DLLs.

Note that the Docker for Visual Studio tooling is in preview now, and after experiencing some build issues using it, I chose to just command line my way through.  It’s easy, I promise.

First, create a new file simply called ‘Dockerfile’ (no file extension) in your src/Project folder:

Your Dockerfile:

FROM microsoft/aspnetcore:latest
ARG source=.
WORKDIR /publish
EXPOSE 1000
COPY $source .
ENTRYPOINT ["dotnet", "Hangfire.Web.dll"]

Let’s take a look at what this means.  The ‘FROM’ directive tells Docker to pull a base image from the Docker Hub.  (A MAINTAINER line is fully optional, and can be left out if you’re paranoid.)  ARG, WORKDIR, and COPY work together to set the source folder as a variable, make /publish the working directory, and copy the source’s contents into it (which will be your DLLs in just a moment).  ENTRYPOINT is what Docker will call once the container boots.  You can run ‘dotnet Hangfire.Web.dll’ straight from your bin folders to double-check it.  Keep in mind the DLL name in ENTRYPOINT will be whatever you named your project.

To make life a bit harder, I decided to bind to a specific port via the EXPOSE directive.  I chose an arbitrary number (1000) and wanted to be explicit in my host deployment port assignments.

See that publish folder from above?  We’re going to create it now.  I didn’t want to mess around with publish profiles and Visual Studio settings, so this is where we go into command-line mode.  Go ahead and call up the Docker Quickstart Terminal.  We can actually call into the .NET Core CLI from there, so we’ll do that for brevity.


Make sure Kitematic is running your Linux VM.  Mine is going through VirtualBox.  I couldn’t tell you if the process is the same for Hyper-V-driven Windows containers.  You might hang at the above screenshot if the Linux VM isn’t detected or running.

‘cd’ into the project folder where the Dockerfile is and run a dotnet publish.  You can copy mine from here; it just says publish the Release configuration into a new folder called ‘publish’.


cd 'C:\Users\Andrew\Desktop\ProjectsToKeep\Hangfire\src\Hangfire.Web'

dotnet publish -c Release -o publish

Now we have freshly built DLLs.  We call into the Docker CLI, which will pull the necessary base image and merge in the folder we referenced.

docker build ./publish -t hangfireweb

The -t argument is a tag.  It’s highly recommended to assign one, as you can use the name directly in the CLI.  If you get errors like “error parsing reference”, it’s probably related to the tag – image tags must be lowercase and can’t contain most symbols.


Bam!  Our image is built!

I can prove it with this command:

docker images


This next command will take a look at the image, and run a new container instance off of it.

docker run -it -d -e "ASPNETCORE_URLS=http://+:1000" -p 1000:1000 --name Hangfire hangfireweb

--name assigns the container a name so we can verify it once it’s live.

-d runs the container detached, as a background daemon.

-e passes in environment variables.  These are variables handed to Docker when it’s constructing the container.  In this case, ASP.NET Core defaults to port 80 (as it should), but you’ll remember I explicitly instructed the container to expose only port 1000, so I need to also tell ASP.NET Core to listen on port 1000.  You can view the environment variables for each image on the Docker Hub site or in the Toolbox.  Additionally, the -p argument maps the host port to the container port.  In this case, I opened up 1000 and mapped it to 1000.

You’ll get some output, and can confirm the container is up and running with this call:

docker ps


Keep in mind, if you restart the computer or otherwise stop the container, you can still view all containers, including stopped ones, via:

docker ps -a

You can navigate to the Hangfire UI to make sure everything is dandy –


That’s all!  To run the image from my Ubuntu box, I just used the docker save and docker load commands (reference here).  All you’re really doing is saving the image to a file and loading it up on another server.  Nothing to it.  You can even keep the Toolbox instance running, spool up a second, and the two will compete over the job store.
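The round trip itself is just a couple of commands (the tarball name is arbitrary):

```
# On the dev box: export the image to a tarball
docker save -o hangfireweb.tar hangfireweb

# Copy the tarball to the server (scp, USB, whatever), then load and run it there:
docker load -i hangfireweb.tar
docker run -d -e "ASPNETCORE_URLS=http://+:1000" -p 1000:1000 --name Hangfire hangfireweb
```

Same image, same run command as before – the container doesn’t care whether it’s on the Toolbox VM or the Ubuntu box.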

Hopefully this was helpful!

I’ll sound off with a more urban, electronic/hip-hop fusion along the lines of Griz or Gramatik.  I found the album Brighter Future by Big Gigantic.  This is fun stuff, but will definitely be appreciated by a select few of you.

Thoughts on ASP.Net Core & DI

Microsoft is making big changes to the way they are building and shipping software.

Personally, I think it’s a huge win for developers.  It’s only a matter of time until .Net Core targets ARM processors and all of my homemade applications talk to each other via distributed messaging and scheduling on a Raspberry Pi network.

But the paradigm shift in project structure and style leaves some polarized in their opinion.  The developers who understand the power of Open Source are generally immediately on board.  The developers who still think the best answer to any problem is a stored procedure simply can’t comprehend why you would do anything in .Net Core when the existing .Net Framework “works”.

“Well, it works!”

Ever heard that?

That statement eventually gives birth to the infamous “Well, it works on MY machine…”

Let me tell you something about my philosophy on development.  What really fuels why I do what I do.

The art of software development, in my opinion, is being able to design an evolving solution which adapts to an evolving problem.  I might even give that as a definition to “Agile Code”, but I’ll leave that for another discussion.  I don’t mean an evolving solution as in having to touch the code every second of every day in order to meet new requirements – I mean touching it a handful of times, and the answer to every new requirement is “sure, no problem” as opposed to “Crap – that code is a nightmare to modify for even the smallest tweaks”.

.Net Core facilitates this in so many ways – amped-up dependency management, DI as a requirement, and a middleware-styled approach.  Developers have known for decades that this is how to build software, and for years they tried to shoehorn the paradigm into the .Net Framework via OWIN and a myriad of IoC containers.  Greg Young, an architect whom I have the utmost respect for, has spoken out against DI containers (specifically the proposed benefit of hot-swapping implementations at runtime), but after being confronted with some very challenging requirements, I honestly can’t make an app nowadays without one.  Even for simple apps I make myself – I like to switch up implementations and benchmark them against each other, and I don’t want to delete code I’ve written on my own time, in case I can reuse it later (no TFS at home… yet).

The most important aspect of .Net Core, in my opinion, is it forces you to think in terms of abstractions.

It’s disheartening when I’m working with other developers who:

A) Claim to be C# developers and can’t define “coding against an abstraction”

B) Don’t understand how to properly separate the concerns of code

C) Believe that offloading business logic to the database is a good decision in the name of performance

I have to catch myself here.  It’s easy to slip into a cynical view of others and begin to harshly criticize their talent as I put on my headphones for a three hour refactoring session.  That’s not me.  I believe anyone can code.  I believe anyone can be a good coder.  Good developers, and high performing people in general, are good thinkers.  They know what they don’t know.  They never settle for a single best solution, they pragmatically select the best tool for the job, critically assessing their problem and any potential solutions.


This is how I mentor the younger developers that drop their jaws when they see Startup.cs for the first time:

Ignore this entire file.  You need to know two things.  You configure services and you use middleware (thanks Daniel Roth!).

What is it that I need this package/code to do?

Pick the best tool for the problem, and drop it specifically where it belongs.  Concretely, this means thinking about your problem space.  There’s a 99.9% chance you are not the first person to encounter this problem.  What is the most elegant and reusable solution?

This gets the developer thinking about scoping and narrowing their development focus.  Too often they jump immediately to code that actually takes input A and outputs B – it’s our nature.  Usually, as I pester them with questions, they end up verbalizing the abstraction without even realizing it, and half the time the words they use best describe the interface!

Dev: “Well, I’m really only using this code to provide data to my view model.”

Me: “Right – you didn’t even say web service in that sentence.”

Dev: “So it’s a ‘Data Provider’, but it will go to the web.  So it’s a Web Data Provider.”

Me: “For local debugging though, you need to be able to provide some hardcoded values, and it shouldn’t impact or modify any other code.”

(Blank stare, moderate pause)

Dev: “…Should that be a hard coded data provider?”

Boom.  My job here is done.
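That conversation maps almost one-to-one onto code.  Here’s a sketch using the names from the dialogue – the interface shape, the Customer type, and the endpoint are invented for illustration:

```csharp
// The abstraction the dev just verbalized: something that provides data to the view model.
public interface IDataProvider
{
    Task<IReadOnlyList<Customer>> GetCustomersAsync();
}

// Production implementation: goes to the web service.
public class WebDataProvider : IDataProvider
{
    private readonly HttpClient _client;

    public WebDataProvider(HttpClient client)
    {
        _client = client;
    }

    public async Task<IReadOnlyList<Customer>> GetCustomersAsync()
    {
        var json = await _client.GetStringAsync("/api/customers");
        return JsonConvert.DeserializeObject<List<Customer>>(json); // Newtonsoft.Json
    }
}

// Local debugging: hardcoded values, and no other code is touched.
public class HardCodedDataProvider : IDataProvider
{
    public Task<IReadOnlyList<Customer>> GetCustomersAsync()
    {
        return Task.FromResult<IReadOnlyList<Customer>>(
            new List<Customer> { new Customer { Name = "Test Customer" } });
    }
}
```

Swapping one for the other is a single registration change in ConfigureServices – and that’s exactly what a consumer that news up WebDataProvider itself can never do.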

For anyone used to working with repositories, DI, and OWIN/MVC, this stuff is child’s play.  The junior developer (and indeed, the fully mediocre developer) needs a hand grasping these concepts.  I find that guiding them through a discussion which allows them to discover the solution presents the most benefit.  Simply telling them what to do and how to do it trains a monkey, not a problem solver.  Anyone can write 500 lines of code in Page_Load.  They need to understand the ‘why’.  Personally, teaching on the job is one of my favorite things to do – there’s simply no substitute for the happiness that hits a developer when they realize the power that a new technique has awarded them.

More on this at a later point, but for now, understand the danger that you take on by using that new() keyword.  You may be stuck with that code block for a long, long time.


On to today’s music.  I found some more somber sounding folk-pop stuff.  The EP by Lewis Del Mar was a really great find!  (Minor language disclaimer).


Copyright Hi, I'm Andrew. 2017