Category Archives: Software Quality

1? Or…2?

I promise I’ve got some code samples coming down the pipe, but I found another article which really caught my attention, and I couldn’t resist the urge to provide my commentary.

(just in case you weren’t crystal clear on my development philosophy)

Thoughts on Agile Database Development

First off, I’m madly in love with Marten + Postgres.  The whole idea of a hybrid relational/object store at first prompted me to sound the alarm about one package or database being guilty of “trying to do too much”.  After working with it (albeit, on personal projects), boy was I wrong.  It’s really the best of both worlds – the frictionless object storage aspect (reminiscent of RavenDB or MongoDB – by design), combined with enough control to satisfy the majority of DBAs I’ve worked with, makes for a truly new experience in the ORM world, where we Microsoft goons have been spoon-fed Entity Framework for many years.  I swear I spend more time configuring EF6 than I do actually writing code that matters.

The comparison of Postgres to MSSQL Server is out of scope here, but suffice it to say that if you’re willing to take a dependency (normally something I would absolutely never accept), you’ll be richly rewarded.  Not only is IDocumentSession tremendously optimized for a given query or command (see my posts on CQRS), but the surrounding tooling for doing real-life deployment tasks is tremendously useful, especially considering this thing is FOSS.  Schema comparison through the Marten command line takes so much work out of having to manage which SQL changes happened when, assuming you don’t have an expensive enterprisey tool to handle that stuff for you.  Even if you do, chances are it’s not linked up with your POCOs, which is where all the interesting action happens.
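To make that concrete, here’s a minimal sketch of the Marten flow I’m describing – the connection string and the Customer type are my own stand-ins, not anything from Marten’s docs:

using System;
using System.Linq;
using Marten;

public class Customer
{
    public Guid Id { get; set; }      // Marten assigns the Id when you Store()
    public string Name { get; set; }
}

public static class MartenDemo
{
    public static void Run()
    {
        // One document store per application; Marten manages its own schema in Postgres.
        var store = DocumentStore.For("host=localhost;database=dev;username=dev;password=dev");

        using (var session = store.LightweightSession())
        {
            session.Store(new Customer { Name = "Ada" });   // frictionless object storage
            session.SaveChanges();                          // one transactional commit
        }

        using (var session = store.QuerySession())
        {
            // Sessions expose LINQ over the underlying JSONB documents.
            var ada = session.Query<Customer>().First(c => c.Name == "Ada");
        }
    }
}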

Which brings me to the next point – software design philosophy.

Quoted from Jeremy’s post –

“The” Database vs. Application Persistence

There are two basic development paradigms to how we think about databases as part of a software system:

  1. The database is the system and any other code is just a conduit to get data back and forth from the database and its consumers

  2. The database is merely the state persistence subsystem of the application

Wow.  Nailed it.  I feel that there’s so much talk about DDD, stored procedures, ORMs, business logic, performance, ACID persistence, and which-team-owns-what-code that we never even stop to ask ourselves, “am I a ‘1’ or a ‘2’?”  The answer to that question shapes the direction of every system you build, and every system your team builds.

A lot of the conflict I see arise in software development at my company comes from writing code for the simple purpose of putting data in the database.  That’s fine until you run into a non-trivial set of business rules and external integration points.  In my experience, one of the biggest killers of scalability is over-reliance on a monolithic transactional store.  Our obsession with saving a few bits of hard disk space has led to a colossal convolution and violation of separation of concerns.  I’m not allowed to pull a customer’s purchase history because there are 12 joins involved and the performance is too slow?

When did that become acceptable?

Now, Marten is not the golden hammer for this problem – but the design philosophy of the team that created it is explicitly aligned with more domain-oriented thinking, versus the philosophy of “all hail the mighty column”.

His point about an Application Database hits home too.  I lost count of the number of times our dev team broke the database in our dev environment.  Being able to stand up a local copy for isolated testing (or intentional breaking) is unbelievably useful for anyone that gives a crap about proving the durability of their system.  I’m going to give Jeremy some more free linkage related to the shared database antipattern.
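For the curious, the Marten setting that makes a disposable local database practical is schema auto-creation.  A quick hedged sketch – the connection string is a placeholder, and this reflects the pre-v4 StoreOptions API:

using Marten;

var store = DocumentStore.For(_ =>
{
    _.Connection("host=localhost;database=sandbox;username=dev;password=dev");
    _.AutoCreateSchemaObjects = AutoCreate.All;   // Marten builds any missing tables itself
});

Point it at an empty Postgres database, break things to your heart’s content, then drop it.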

I added a Marten project to my Bootstrapper GitHub project to help capture some of my initial work with Marten.  It’s really for basic copy-paste reference, or a fast import of generally used CRUD methods that you’d want to hot drop into a File > New Project experience.  I still need to commit my VS 2017 migrations to master…

As an aside,

If you’re spending more time in SQL schema design meetings than you are with the domain experts, you’re doing it wrong!

But again, that statement depends on whether you’re a 1 or a 2…

I’ll leave the discussion regarding the ‘Vietnam of computer science’ for another day.  Happy coding, friends!


Continuous Improvement

When I first started programming, I was under the illusion that one day, once you learned enough about the language, runtime, or design patterns, you could hammer out some brilliant lines of code that would work almost magically, and you’d never need to touch them again!

Boy, was I wrong.

Turns out, there’s this word.  You all know it.

Refactoring.

We have a similar word in Lean/Six Sigma.  It’s called ‘Kaizen’.  In short, it means continuous improvement, but it has far-reaching implications.  The word has its roots in Japanese culture and business (books written in the ’80s reference it), representing a philosophy of using data to constantly improve processes.  The major focus is that it represents incremental improvement, not “big bang” improvement.  In the image below, the blue sections are slow, gradual improvements, intermingled with large jumps representing major advancements (red).

A more traditional visual of Kaizen may look something like this –

The key takeaway is that improvement happens gradually, and constantly.

A key tenet of Agile development (as argued by Dave Thomas of Agile fame) is that Agile software development is merely the process of making a small change, pausing to see whether the change has had a positive impact, and then course-correcting if necessary.  Looks like the chart above, doesn’t it?

A major component of this is refactoring.  In my opinion, every time a development team touches a codebase, they should leave the code in a better state than when they found it.  The goal here should be to improve readability, performance, organization, and overall flexibility of the code.  A Six Sigma-minded company pursues these opportunities to reduce waste, and only the worst run companies believe that waste is only financial.

Waste takes many forms – wasted time, effort, and talent, and all 3 of these are extremely relevant in software.

Wasted time results in delayed project deadlines, compressed testing, interpersonal frustrations, and a more rushed workflow in general – and rushing does not work in IT.  Salespeople and high-powered business folks may work better under tremendous pressure, but trust me, developers don’t.  Coding is a calculated and involved mental activity.

Wasted effort turns a 1-hour task into a 4-hour one.  It means you spent 3 hours copy-pasting data from a spreadsheet into a custom JSON structure used by your app, only to find that in the end, it needed to be CSV format, and you could have just done a straight export from Excel.  Additionally, developers love efficiency hacks.  If wasted effort becomes a routine occurrence, they will rapidly become frustrated with their jobs, and will have reduced output and problem-solving ability.  This makes for reduced team morale, and potentially increased turnover – something HR teams may raise an eyebrow at you for.

Wasted talent is a truly hidden waste, and can silently kill large companies who are not prepared to respond.  A good friend of mine works extensively in finding and retaining the best talent for his business teams, and we’ve discussed this at length.  Hopefully I don’t need to justify this point, but if you think that all developers are worth the exact same amount, you have much more to learn about high-quality software development.  Steve Jobs could probably explain this better than I can.

Refactoring took me many years to fully appreciate, and I must admit, I’ve really come to love it.  It can feel like its own art form sometimes.  Now, if you don’t have unit tests as insurance for all of your changes, you should probably go back to basics and get those up and running.  “Deploy with confidence” and all that.

There’s a ton of great material out there on refactoring, and I have yet to pick up my copy of the Fowler book on it.  But I’m keeping it simple.  Know your code smells, and make a series of small improvements in an effort to reduce waste.  If a design problem presents itself, maybe integrate a design pattern and assess the solution in a code review.  Trust the process, and your codebase will improve over time.  The next developer will love you for it!
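To make “small improvements” concrete, here’s a tiny example of the kind of refactor I mean – the Order type and the business rule are invented for illustration:

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class RefactorDemo
{
    // Before: the intent is buried in loop mechanics.
    public static List<string> BigSpendersBefore(IEnumerable<Order> orders)
    {
        var names = new List<string>();
        foreach (var order in orders)
        {
            if (order != null && order.Total > 100m)
                names.Add(order.CustomerName);
        }
        return names;
    }

    // After: the same rule, stated as a rule.
    public static List<string> BigSpendersAfter(IEnumerable<Order> orders) =>
        orders.Where(o => o != null && o.Total > 100m)
              .Select(o => o.CustomerName)
              .ToList();
}

Neither version is “wrong” – but the second one reads at the level of the business rule, which is exactly the kind of waste reduction I’m talking about.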


The Evolutionary Architect

In the midst of tackling this guy:

(Building Microservices, by Sam Newman)

I can’t even begin to express how encouraging and refreshing it is to have so many of my thoughts and concerns finally captured into written word and gaining momentum.  Things like domain boundaries, and the fact that data duplication is OK if done in the name of scalability and development team autonomy.

Not that I’m anywhere near Sam Newman’s experience and knowledge, mind you.  But my posts are fairly clear when it comes to my philosophy of design.  The database is not king; sharing a monolithic schema can and will kill your ability to scale in ALL of the following areas: product feature change, team delivery speed, reliability of the software, and uptake of new development talent.

“A good software design is easier to change than a bad software design.” – “Pragmatic” Dave Thomas.

(his blog)

One thing I truly admire about this book is Sam’s pragmatism.  He’s not trying to sell you microservices – rather, he does a thorough pro/con analysis.  The people who should appreciate this most are, indeed, software architects.

In chapter 2, The Evolutionary Architect, Sam does a deeper dive on what it means to be an architect, how we as a software development community have misunderstood the word over the years, and how a true definition is still up for grabs.  Honestly, I completely agree with him.  “Back in the day”, when I worked for Global Quality at a Fortune 500, I had the opportunity of a lifetime to study Six Sigma methodology with a true master of the craft.  This person not only knew the ins and outs of the methodology and the process, but was responsible for managing a large global team.  It was under this person that I learned, by example, how an evolutionary leader can be both a master of a specific process and also step back into a management role and empower their team to execute that process.  As team members (and myself as a junior member at the time), we can and will fail.  It is the architect’s (indeed, any manager’s) role to mitigate that failure, and manage the complexity involved.

It is an architect’s role to manage the complexity of a software product, not to increase it.

Unfortunately, since leaving that particular company, I have yet to meet another leader anywhere close to that magnitude of employee empowerment, mentorship, and expertise in both the “product” and the “people”.

So, back to Sam’s points (now that I’ve given you my background and why I agree): he states that the architect’s role is often that of a tech lead.  Based on my experience, a lot of tech leads get less than 2 hours of coding per day, and are often caught up in meetings and company bureaucracy which prevent them from being directly or actively involved in the development.  Sam states (and I agree): “More than any other role, architects can have a direct impact on quality of the systems built, on the working conditions of their colleagues, and on the organization’s ability to respond to change.”

This, then, makes them a strange hybrid of technical and leadership expertise.

Personally, I’ve seen both extremes – the architect who injects their opinion into code without consulting the pragmatic ideas of the rest of the team (who in turn has to own the end result), and the architect who is so hands-off that their responsibility amounts to configuring TFS and learning how to use Git so they can tell the other team members to go Google it.

Neither of these scenarios captures the true essence of an architect – but Sam goes on to say we’ve actually borrowed terminology without fully understanding the impact – and that the role is very well defined in other industries, like engineering, where there is a specific, measurable goal.  By contrast, software engineering is less than a century old.

Trebuchet “is a” type of catapult – right?

“Architects have a duty to ensure the system is habitable for developers, too”.  This is critical – tech turnover is still high.  Developers leave because they don’t like working with your codebase (yes, Mr. Architect, you’re responsible for the overall quality of your codebase – go sit with your juniors now), or because benefits, culture, and environment are better at a different company.  In my experience, a company that invests heavily in the satisfaction of its employees retains better talent for longer.  Software is unique in that you can invest in your developers with shiny tools and conferences, instead of being limited to “only” monetary compensation (like a sales team, for example).

“If we are to ensure that the systems we create are habitable for our developers, then our architects need to understand the impact of their decisions.  At the very least, this means spending time with the team, and ideally it should mean that these developers actually spend time coding with the team too.”

This could be pair programming exercises, code reviews (you don’t get to complain about quality if you don’t put forth a concrete solution), or mentoring sessions.  If you’re an architect who only knows how to create stored procedures – ones that end up creating multiple dependencies and breaking changes for more than one application – then you need to stop calling yourself an architect and start doing your job; developers hate working in environments like this.  Stored procedures make for top-tier database performance and, if misused, the absolute lowest software agility – since your dependencies cannot be managed from within your solution.  That “one guy” has to “remember” that “oh, when you change this sproc, these two applications will break”.  Not fun.


Sam compares the architect to more of a town planner – they don’t get to decide which buildings go where, but they are actively involved in IT governance and pragmatic (read: data-driven) decision making – i.e., they zone out areas where commercial and residential buildings will eventually go.

Anyone remember SimCity?

A town planner does not have the power to add and remove buildings or real estate developers from those zones.  Oftentimes, it’s developers who are on the cutting edge of new tools that can achieve various outputs, and they should be empowered to deliver on the desired quality.  If you’re dictating who consumes which stored procedures, you’re a town planner calling up Wal-Mart and asking them to move in.  If your development team has assessed the risks and pragmatically agreed on Costco or Meijer, you need to let them do their job.
I’m also a big fan of governance through code, as this hearkens back to my Six Sigma days of mistake-proofing a process.  This can open up a whole new area of discussion, such as how Resharper, or architectural styles like DDD, REST, and CQRS, can enforce best practices (as defined by you) at the code level.  Another discussion for another time – though here’s a small taste.
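A hedged sketch of what governance through code can look like – an ordinary unit test that fails the build when a layering rule is broken.  The assembly and namespace names here are hypothetical:

using System.Linq;
using Xunit;

public class ArchitectureRules
{
    [Fact]
    public void Domain_does_not_reference_infrastructure()
    {
        // MyApp.Domain.Customer is a stand-in for any type in your domain assembly.
        var domain = typeof(MyApp.Domain.Customer).Assembly;

        var referenced = domain.GetReferencedAssemblies().Select(a => a.Name);

        // If someone sneaks a data-access reference into the domain, the build goes red.
        Assert.DoesNotContain("MyApp.Infrastructure", referenced);
    }
}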


Any fans of mainstream house may be interested in Deadmau5’s new album – W:/2016ALBUM/ (not a typo!)

OSS vs. DIY

Fact: Open source software has absolutely exploded in both development and utilization this past decade, and it doesn’t seem to be slowing down.

For the subject of this post, “Open Source” means the .NET ecosystem of NuGet packages, as you would primarily find on GitHub.  Everyone already knows about the poster-child example of Linux (it runs an overwhelming majority of the world’s supercomputers), so if the Open Source revolution is news to you, welcome to the early 2000s.

Back to NuGet.

Previously, I’ve worked with some “old-school” folks who fear the fact that some mishmash of DLLs can be imported into your project and, all of a sudden, you’re liable for the code failing.  From old-school developers who aren’t familiar with this world, I hear things like:

Why should we import this package?

What is the benefit?

Who is going to support it?

Why can’t we do it ourselves?

All of these questions are rooted in the fact that the questioner has not delved into the code of the major packages out there.

By major, I mean any package within the top 20th percentile of commits and/or downloads.  These show extraordinary health, resilience, quality, and durability.

If a brilliant engineer at Google walked into your office, offered to show your dev team some of the best development tricks and secrets, left you with code samples, and charged nothing, would you honestly say no?  If the SQL mastermind behind Stack Overflow offered to visit and show you how their backend works with incredible performance and ease of coding, and charged nothing, would you honestly turn them away in the name of wanting your own custom ADO.NET code?

If you would honestly rather do it yourself than take in the code of experts in your field – then your software is prone to bugs and poor performance, and will create massive waste in time and resources by being rigid and difficult to change and manage.

So, your status quo has been to “Code It Yourself”.  Bringing the logic in house is the way to go, right?  We write it, we own it, we maintain it.

Couple of logical flaws, here.  Let’s take a look.

Take a package such as RestSharp.  Now, being the type of developer who likes to stick fairly close to my own HTTP-wrapping backend, I typically recommend against taking a dependency on something like RestSharp in enterprise projects – so there’s no bias here.  Take a look at their unit test suite:

(screenshot: RestSharp’s unit test suite)

Now, if you think that developer hours are better spent “rolling your own solution”, instead of identifying potential open source libraries for a particular problem, I have some provocative questions for you.

Does this single package library have more unit tests than an entire application your team has developed?

Have you hired a Microsoft MVP to give your team code samples?

Do you believe that your development team is superior to the collective wisdom of thousands of developers across the world?

Are you building out and sharing native DLL’s (either internally or externally) for 12+ platforms?  How easy is it to integrate with your system? (SSIS doesn’t count – we’re talking business logic here).  How easily can you share code with your mobile developers for something like C# extensions?

Based on my conversations with folks who keep GitHub at a distance and hold their own monolithic codebase close to their chest, they fall short in each of these areas.  Microsoft MVPs are required to contribute to the community by definition.  I didn’t link to Microsoft’s site because their CSS was screwed up at the time of writing and the MVP page was broken (lol).

Counterarguments to OSS come from a place of comfort and familiarity.  The sort of thing that drives “we’ve always done it this way”.

And I understand – continuous integration terrifies you.  Pushing to production terrifies you.  With good reason.  Every time you change the code something breaks.  Don’t you wish you could remove that fear?

Fact: The fewer lines of code that exist in your application, the less opportunity there is for bugs, breaks, and showstopping errors.  When’s the last time a Hello World app broke?  Finding open source packages which have a narrow, specific purpose removes lines of code from your app and outsources common functionality to a peer-reviewed, proven, and reliable codebase.  Fewer lines of code mean less maintenance and cleaner readability (in the vast majority of cases).

But I don’t want to train developers on new language semantics for all these packages!

Another fallacy.  We used to parse like this:



using System;
using System.IO;
using System.Xml.Serialization;

public static class XmlExtensions
{
    public static T XmlDeserializeFromString<T>(this string objectData)
    {
        return (T)XmlDeserializeFromString(objectData, typeof(T));
    }

    public static object XmlDeserializeFromString(this string objectData, Type type)
    {
        var serializer = new XmlSerializer(type);
        object result;

        using (TextReader reader = new StringReader(objectData))
        {
            result = serializer.Deserialize(reader);
        }

        return result;
    }
}

Now we parse like this:


JsonConvert.DeserializeObject<MyType>(input);

To prove my point about fewer lines of code – say you wrote that custom XML code.  Your junior developers now have to:

  1. Remember to close the stream, or understand what the compiler does with using { } statements.
  2. Pass a Type as a parameter to a method, which is rarely done and is considered bad practice (the industry heavily favors generics).
  3. Address exceptions thrown if other XML conditions are not met (namespaces, prefixes, etc.).
  4. Get in the habit of seeing a (T) hard type cast, another bad practice.

Developers import NuGet packages to make their lives easier, and this means lessening the coding they have to do.  Open source packages seek out elegant code APIs by capturing the natural-language intent of the logic directly in the code.  This is a concept known as Literate Programming.  LINQ is the perfect example of this, but another open source example would be Shouldly.  If the Shouldly package doesn’t make sense to you right away, then shame on you for not unit testing!
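If you haven’t seen Shouldly, here’s a hedged before/after of what I mean – the cart math is a stand-in for real logic:

using System.Linq;
using Shouldly;
using Xunit;

public class CartTests
{
    [Fact]
    public void Total_sums_the_line_items()
    {
        var total = new[] { 3m, 7m }.Sum();   // stand-in for real cart logic

        Assert.Equal(10m, total);             // classic xUnit: which argument was expected, again?
        total.ShouldBe(10m);                  // Shouldly: reads like the sentence you'd say out loud
    }
}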


Jokes aside, fewer for loops and null checks mean clearer logical intent and higher code quality!

Now, I’m sure a creative developer can find a way to break the call to JSON.Net, but I imagine it would be rather difficult, and you would most likely get a neat, descriptive little exception message telling you what’s wrong, versus some ambiguous InvalidCastException.

Technology moves, people.  If your development team isn’t moving at a pace comparable to the technology curve as a whole, you will get left behind and create excessive waste in time and resources when the software breaks.  It’s remarkably hard to measure waste in these areas, but a big one is production support.  Having people up at all hours troubleshooting 4 nested for loops because one of your 15 variables happened to be null is not only a total waste of time and talent, but steadily drops morale and kills the potential of your developers to do what you hired them to do:

Solve problems and innovate solutions for the business.

Let’s stop reinventing the wheel, and start adding more value.


I discovered a great new song by Calexico, a group that does a hybrid of Americana/Mexican music.  Check out Coyoacán.

There is bad code, and then there is…

…this gem:

public static void Do(ICollection<Person> people, string evaluationLogic)
{
    for (int i = 0; i < people.ToList().Count; i++)
    {
        if (evaluationLogic == null) break;
        if (evaluationLogic == string.Empty) people.ToList()[i].Name = "Bob";
        if (evaluationLogic.Length >= 0) people.ToList()[i].Name = "Jim";
    }
}

(mind the tab spacing – I’m still working out the kinks in some code formatters)
Pseudo-coded from actual production code.  In a site that was written this side of the 2000s.  I will follow up this post with my commentary on hiring practices and some ways to get good coding talent on your team, but suffice it to say – whoever wrote this should not be writing production code.

Let me reiterate this – I actually found this live in production.

Where to even begin?  The Do() method is obviously a shell – but the point is you are given an ICollection – which has no indexers.  Some developers carry a preference for ToList() conversion, while some (like me) try to stick as closely as possible to the least specific collection interface – which ends up being IEnumerable the vast majority of the time.  The advantage of coding against IEnumerable is that it forces you to think of the target collection as an abstraction, allowing you to use the most functional and/or expressive LINQ statements to encapsulate your intent, without getting buried in nested for-loops.  Let’s call out the specifics:

  1. Repeated ToList() conversion.

Calling:

new List<Person>();

is valid C#.  It builds and runs.  It also does nothing for you as a developer.

Calling:

var content = new List<Person>();

instructs the runtime to allocate a new collection object on the heap, and store a reference to it in your given variable – here, ‘content’.  The first snippet simply allocates an object without capturing a reference to it – so the GC will pick it up whenever it feels like.

When you feed a ToList() conversion into that for-loop’s ‘i’ condition, you’re doing the exact same thing – allocating a whole new list onto the heap, reading an int off it for a brief instant, then instantly releasing the reference – meaning every iteration’s conversion is wasteful, and is needlessly stressing out the GC.

2. For vs ForEach.

The posted code seemed to use the poor ToList() conversion because it needed the ‘i’ indexer – but when working with the collection interfaces, a foreach more than suffices in most cases.  The only reason not to use one is if you need to insert a new object or change the reference of the object itself – you can’t do that through the foreach placeholder variable.

3. Multiple redundant evaluations.

Notice the three if() checks?  Redundant evaluations.  The code I assessed had mutually exclusive conditions, but was constantly rechecking them as part of normal loop flow.  An else or a switch is everyday practice.

4. Break.

Code smell.  Outside of something algorithmic in nature (say, a performance-optimized extension method, or just a highly performant loop), I find break awkward in everyday code.  If you’re breaking out of a for loop under a condition, then most of the time you can easily migrate to a LINQ statement that neatly encapsulates your intent.  Please don’t make me figure out your 10 lines of micro-optimized looping code (read: premature optimization); I have other things to do with my time.
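Putting all four together, here’s a hedged rewrite of Do() – assuming, as the real code implied, that the string conditions were meant to be mutually exclusive (Person is a stand-in type):

public static void Do(ICollection<Person> people, string evaluationLogic)
{
    if (evaluationLogic == null) return;                        // the old break, moved up front

    var name = evaluationLogic.Length == 0 ? "Bob" : "Jim";     // one decision, made once

    foreach (var person in people)                              // no indexer, no ToList() churn
        person.Name = name;
}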



Same Old Situation.

One of the key features I like about Agile estimation is that the members on the deliverable side get the final say regarding the size of work.  Traditionally, I’ve used Fibonacci numbers, though I’ve heard some excellent arguments for T-shirt sizing (no surprise – finance wants to correlate everything to man-hours for tax deductions, but this is an outside interest which shouldn’t interfere with the estimation process itself).

Has anyone been part of a project estimated by a single entity that’s not writing code?  It can be extremely frustrating.  It’s funny – the younger developers think it’s a problem unique to them – but the older developers have “seen it all”.  I had a great conversation with an older, ex-C++ developer, and we brought up The Mythical Man-Month.  If you’re not familiar with Brooks’s law, it states something to the effect of “adding manpower to a late software project makes it later”.  I say “something to the effect” because the book is on my reading list, but I technically haven’t read it yet.  I notice some companies believe that any problem can be solved by throwing more manpower at it, without any regard to the quality of the product or developer skill/mindset.

There’s something about the “McDonald’s Developer” that irks me.  The idea that I can snatch up any Joe off the street and, in a handful of hours, teach him how to flip a burger and write code.  What’s so difficult for me to reconcile about this is that McDonald’s (and most fast food chains) are among the leanest businesses in the world.  They have to be – fast food is fiercely competitive.  My Lean/Six Sigma brain wants to agree – but my pragmatic developer brain can’t seem to understand how this results in a high-quality, maintainable product.  Developers who fit this profile think of their problems in terms of for loops and making sure they check for null.  They haven’t had enough time in the industry (or around better developers) to look at architectural concerns, properly structure code to follow SOLID principles, and constantly analyze what they write to find cleaner, more readable, and more efficient patterns.  This primitive skillset is then propagated by higher-ups who don’t see the benefit of empowering these new developers, or simply don’t know how to grow the potential of a junior member.  Outsourcing has had a detrimental effect here, as that mentality feeds the idea that more asses in chairs will solve your problem, when the truth is a good development team will design a system such that some obstacles never even arise.  Boy, does that make it difficult to claim a victory for solving a problem by preventing it.  Anyone who’s ever worked in quality or been around manufacturing understands all too well.

I think part of the way that I reconcile this McDonald’s mentality is that the architecture and framework of your application(s) should make it difficult for poorly designed or written code to enter the system.  This hearkens back to the mistake-proofing idea in Lean.  The process itself should be set up to minimize risk for any negative factors.  There are more than enough design patterns which make it easy to reason about the structure of a system without having to dive into thousands of lines of code.  It’s not that McDonald’s employees are any less capable or intelligent – it’s that the process sets them up for success and makes it difficult for them to mess up.  I’m allowed to make commentary here, having worked part-time in retail food myself.

Coincidentally (or not), I’ve noticed that places where the outsourcing mentality is prevalent have little to no code review practices – does that say anything about how they value their internal customers?

Starbucks is another example of the treatment of internal customers.  It’s pretty obvious their culture attracts certain types of people (if not stereotypical ones).  It’s a very similar corollary – the processes they have in place make any given drink virtually identical, no matter who makes it, or where.

Does this mean their employees, who are part-time, single digit hourly wages, are any less deserving of investment?

Not at all!  In fact, I have a few friends taking advantage of the Starbucks education benefits and wrapping up their undergraduate degrees.  They may or may not be with the company in 5 years, but then again – most tech professionals won’t be either!  Costco is a good example of a company well known throughout Wall Street for investing so heavily in its employees that it has some of the best employee retention and customer satisfaction rates.

As I explore this idea more in the future, I’ll be sure to write up my new discoveries.  In addition to the rush I get from refactoring badly written C#, mentoring newer developers is near the top of my list of favorite things to do.  And I have a feeling that if they feel invested in, and are growing their skills, it’ll reduce the dollar and time waste involved with maintaining the revolving door of new hires.

I’ll have to end today’s post with a song of the same name.  Sublime, whose self-titled album went five times Platinum largely due to the success of “What I Got,” has a very college-reggae/party-ska feel, which tends to be a very specific taste.  I always felt their music was fun, but not something I could commit to buying a physical copy of, or listen to repetitively and recommend to others.  What I didn’t know is that their frontman, Bradley Nowell, died in 1996 – and the band dissolved soon after.

Turns out, they reformed with a new frontman in the late 2000s – and it’s awesome.  The new singer/guitarist is Rome Ramirez, and they perform under the name Sublime With Rome.  This has to be some of the finest reggae to come out of the West Coast – Rome’s voice is amazing, and the music has a very timeless feel.  I actually listen to it as rarely as I can, for fear of ruining the magic.  It’s like Sublime grew up, graduated college, and found its identity – the percussion is bang on, it’s well mixed, has a deep reggae vibe, and Rome’s voice keeps things very fun, without being stereotypically reggae, as can be common in the States.

Check out Same Old Situation.

First things first.

So for my first substantive post, I want to talk about something which is near and dear to my heart.  Something which sets the tone and foundation for this entire blog.  Something that was instilled into my brain years and years ago by a man whom I would consider a valuable mentor – and someone who is truly dedicated to the measure of quality, down to their innermost core.

Let’s kick this off with a question.  One of the most contentious questions in business.

What is quality?

How you approach the answer to this question determines the starting point for how you answer this next one:

What is software quality?

I know what you’re thinking – too broad for a blog post.  You’re right.  Heck, so many books have been written on this topic that I probably couldn’t keep all of them on my bookshelf.

Let’s look at some key questions that we, as software developers, should seriously be considering as we produce and improve our digital product.  Again, most of these questions arose from the fundamentals of manufacturing quality, Six Sigma, and Lean philosophy, but need to be rehashed to apply to today’s fast-paced world of application design and digital product development.

How do we know that the application we’ve built is of high quality?

Who determines this?

Can it be measured?

What if a different development team takes over the product, and they say we are wrong?  Who is right?  What is the criteria for deciding?

Can I look at a codebase and “feel” the quality of the code, or do experience and instinct have no place in a world packed with data analytics, metrics, KPIs, and issue trackers?

By the way, does any of this matter if I get slapped with a requirements document and am told to make this feature fast and cheap because management wants to get this thing out the door?

Over the next few weeks, I’m going to give my thoughts on the topic, and I’ll try to refrain from being overly academic.  But for now, I want to dig into some pretty fundamental concepts.

Any Six Sigma expert worth their salt is going to throw Deming at you, and he would say “The customer’s definition of quality is the only one that matters.”

True, but who is the customer?

During my transition from Six Sigma to Agile/Scrum, I realized any time Six Sigma says ‘Customer’, it can be translated to ‘End User’.

Which means our users’ definition of quality is the only one that matters?

Not quite.

You see, a fundamental part of Six Sigma methodology is identifying customers in different “domains”, if you will.  The first line drawn is between internal and external customers.  An external customer can be a wholesaler, retailer, or end consumer (this is probably closer to the end user translation).

A critical part of doing business and maintaining high quality standards is measuring, and investing in, your internal customers.  This can be a supplier who makes your goods, the purchase order department, or even customer service representatives.  For some of you, this might be a slight philosophy change.  Yes, after your product goes live, your responsibility goes beyond profits and five-star reviews.  Once it’s out there, it needs to be supported, have its reputation (and by extension, your brand) maintained, and continue to “delight your customers” (following the Deming phraseology).  In the tech world, this is more common knowledge, but you would be surprised at the number of traditional consumer goods companies which cease caring about their product once it leaves the warehouse freight bay.

I’m going to cut myself off here for this week.  I think we touched on a few areas which deserve more attention, and I’d like to dedicate individual posts to some deeper thought.

This one’s for you, J.W!



(Credit for the image goes to Tesseract – a British Progressive Metal group, if you’re a metalhead.  I rather enjoy the melodic tones, and their drummer keeps things mixed up and fresh!)