Constraint Driven Development

Nathan Kramer · January 24, 2020

No matter what language we are using, the software we write is subject to some set of real-world constraints.

These come from a number of different places:

  • Business constraints
  • UX design constraints
  • Engineering constraints

These constraints define what is possible and whether our software is working.

Identify and Automate

Quite often, we like to encode some of these real-world constraints into something that can be verified by a computer.

This affords us a few nice benefits:

  • Our code can be refined and guided by the constraints incrementally
  • Our code can be tested against the expanding set of constraints over time, helping to keep the software working as it changes.

Generally speaking, there are different times and places during software development that the software we write can be checked against these constraints.
Some of these checks are automated, and some are very human.
Some happen before we run our code, and some happen while the code is running.

To home in further, I think it's worth asking questions like the following about our tools and techniques when developing software:

  • How are we translating our real-world constraints into constraints we can check automatically?
  • Where and how do we verify our work against these constraints?

From an engineering perspective, we might ask:

  • What constraints are checked at the boundary of our apps, our APIs?
  • What constraints are checked when we hit CMD+S?
  • What constraints are checked when we type git commit?
  • What constraints are checked when we type git push?
  • What constraints are checked when we click merge?
  • What constraints are checked when we drag a ticket through our Jira columns?
  • What constraints are checked when we deploy to prod?

The answers to these questions don't strictly hinge on what programming language you use.
This blog post isn't about paradigm X vs paradigm Y.
No matter what tools we use, we can leverage these and other points in our process to check our software against carefully designed constraints.

However, I do want to break things down a bit further, making a distinction between compile time and run time constraint checking:

Compile time

  • Compilation errors
  • Type-safety
  • Linters and static analysis

Run time

  • Test suites, performance tests, etc.
  • Run locally during development, in CI, in a build pipeline, or against a real environment

In fact, here we might prefer to call “compile time” “pre-run time”, since this stage can and does exist for interpreted languages too (consider rubocop, for example).

___________________________________________________________________

“Constraint Driven Development” as a Superset of TDD

When the software industry moved towards dynamic and interpreted languages, it was also forced to develop very sophisticated tools for describing and checking constraints at run-time.


As someone who enjoys the feedback loop offered by compile and type checks, I found the lack of immediate feedback about the code I was writing disconcerting when I first switched to Ruby.

It felt like I was talking to a person who just believed everything I said and told me everything I did was great. 

For whatever reason, this feeling was even more prevalent than my prior experiences with languages like Python, Lua, and JavaScript. It felt paralysing.

What hadn't sunk in was that I needed to treat RSpec like my “compiler”.

In a language like Ruby, the developer writes or configures all of the constraint checks themselves, in the form of tests, linters, and static analysis tools.
None are provided out of the box (apart from runtime errors :D).

Understanding this, I was able to reconnect with the feeling that I was collaborating with the computer, only I felt more involved with the other side of this collaboration:
I needed to write tests early and often.

At this point, I should clarify that I'm not constructing an argument against Ruby here.

I happen to think there is nothing intrinsically wrong with embracing the responsibility of implementing all of the run-time constraint checks yourself, and Ruby has some incredibly powerful tools for doing this.
Indeed, much of that power comes from its dynamic nature.

I also happen to find this approach quite charming and thought-provoking.
It's the foundation for writing your own little Darwinian software-selection pressures, where whichever code survives your tests is deemed "good code".

  • 100% test coverage for code written in a Red -> Green -> Refactor loop is code that has been crafted by your specs.
  • If your specs accurately interpret your user stories,
  • And your user stories accurately represent the vision of the future provided by the product owner,
  • And that vision accurately represents the business's strategy,
  • And the business's strategy is sound,
  • Then you've probably got yourself some pretty cool code!

This is why, in my opinion, TDD, or something very close to it, is non-optional in a professional Ruby context.

___________________________________________________________________

A different way to define constraints

It's interesting to consider the implications of the above for compiled languages.

With compiled languages, we're dealing with a layer of free, out-of-the-box constraint checks. These checks may be aligned with our goals, or they may be orthogonal to them, but either way, they must be satisfied before we can even run the code.


Therefore, the tests that we write (to test the code's run-time behaviour) can, in principle, be liberated from the need to check for syntax errors, type errors, and anything else that can be covered in this pre-run stage (static analysis and so on).

In fact, some languages push all run-time errors back into the compile step.

Types as Free Tests

When we add a type annotation to a piece of code, we teach the computer something about our intentions.

This, in turn, allows the computer to teach us about our code and its many implications, some of which might not be immediately apparent to us.
This can ultimately help us fulfill our intentions.

In other words, type checks are like a set of free tests that you don't need to write - and, with a well-featured type system, these “tests” can be composed like Lego and inexpensively rewritten to describe arbitrarily complex parts of your domain.
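As a tiny sketch of this idea in TypeScript (one of the tools listed at the end of this post) - the Email type and sendWelcome function here are invented purely for illustration:

```typescript
// Hypothetical example: a single type annotation acting as a "free test".
type Email = { address: string; verified: boolean };

// Because the parameter is annotated, every call site is checked for us:
// passing a raw string (or forgetting a field) is a compile error,
// caught before the code ever runs.
function sendWelcome(to: Email): string {
  return `Welcome! (sent to ${to.address})`;
}

// sendWelcome("alice@example.com");  // rejected at compile time
const sent = sendWelcome({ address: "alice@example.com", verified: true });
```

Renaming or adding a field on Email instantly "re-runs" this free test at every call site, which is the inexpensive rewriting described above.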

Consider for a moment, the following (absolutely beautiful) UI mockup:

[Image: UI mockup showing the five states of the Profile page]

It shows 5 possible states of a Profile page for a social media website:


  • friends list has not been requested by the user
  • friends list is loading
  • friends list has finished loading and is empty
  • friends list has finished loading and is non-empty
  • an error occurred

This is a fairly trivial thing to build. Let’s consider a scenario where two teams build it using different methods of constraint checking.

 
Team A

Team A builds the app in JavaScript.

They’re quickly able to build the app, and it works (mostly). But testing all the different cases becomes tedious. The team feels a bit bogged down by upcoming requirements, and their time to build the website is running out. Plus, they want to get to market quickly so they can verify their assumptions. The team decides to take a few short-cuts, conforming their UI to their data-model:
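The post's original snippet isn't shown; a hypothetical sketch of the shape Team A ended up with (written in TypeScript notation here for clarity, though the team used JavaScript) might be:

```typescript
// Hypothetical reconstruction of Team A's shortcut: the UI state is just
// the data model. There is no way to represent "not requested",
// "loading", or "error" - only the list itself.
type Profile = { friends: { firstName: string; lastName: string }[] };

function view(profile: Profile): string {
  // Only two screens remain: empty and non-empty.
  return profile.friends.length === 0
    ? "No friends yet :("
    : `${profile.friends.length} friend(s)`;
}
```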

This simplifies things a lot.

Now they just need the empty-list state and the non-empty-list state, which saves 3 screens! As you can imagine, this will result in some very sad and lonely users when the API is slow or goes down.

[Image: simplified mockup showing only the empty and non-empty list states]

The inertia the team faced in trying to test their work impacted what they felt was possible, and they simplified their app in the face of this.

_________________________________________________________________

Team B

A different team is working on a similar app. As it turns out, the language they use is statically typed and has a very expressive type system. They model this scenario like this:
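The original snippet isn't reproduced here, and the post doesn't name Team B's language; as a hypothetical stand-in, a TypeScript discriminated union can model the same thing:

```typescript
// A friend is a record with a first name and a last name.
type Friend = { firstName: string; lastName: string };

// The four possible states of the screen, as one closed union:
type FriendsList =
  | { tag: "NotRequested" }
  | { tag: "Loading" }
  | { tag: "Loaded"; friends: Friend[] } // ...and here's the data
  | { tag: "Error"; message: string };   // ...and here's the error

// The screen starts in the NotRequested state.
const initial: FriendsList = { tag: "NotRequested" };
```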

This code declares a Friend type, which is a record with a first name and last name.

It then models 4 possible states the screen could be in:

  • NotRequested
  • Loading
  • Loaded (and here's the data)
  • Error (and here's the error)

Now, in the view function, the team's code won't actually compile until each of these 4 possibilities is accounted for.

For example, the following code doesn't currently handle the Error case:

And it fails to compile: 
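The post's screenshots of the offending code and the compiler output aren't reproduced here; as a hypothetical TypeScript sketch of the same mechanism, an exhaustive switch over the union gives an equivalent compile-time failure - delete any branch (say, the Error one) and the never assignment stops type-checking:

```typescript
// Hypothetical sketch (the types and view function are invented here,
// mirroring the model described above).
type Friend = { firstName: string; lastName: string };
type FriendsList =
  | { tag: "NotRequested" }
  | { tag: "Loading" }
  | { tag: "Loaded"; friends: Friend[] }
  | { tag: "Error"; message: string };

function view(state: FriendsList): string {
  switch (state.tag) {
    case "NotRequested":
      return "Find your friends!";
    case "Loading":
      return "Loading...";
    case "Loaded":
      return state.friends.length === 0
        ? "No friends yet"
        : `${state.friends.length} friend(s)`;
    case "Error":
      return `Something went wrong: ${state.message}`;
    default: {
      // If a case above were deleted, `state` would no longer be `never`
      // here, and this assignment would fail to compile.
      const unhandled: never = state;
      return unhandled;
    }
  }
}
```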

To me, this looks like a test failure. We have the test suite the other team couldn’t be bothered writing.

By shifting the constraint to a type check, we've just saved an enormous amount of time for a constraint check that is arguably just as useful. We didn't have to overcome any inertia to unlock this benefit; we just modeled the data well and reaped the rewards.

We're on the path of least resistance - we've made it effortless to do the right thing.

Now, I'm not suggesting that types should replace tests. Neither am I undermining the value of tests or TDD. In fact, I think the imaginary development team above should write automated tests for their front-end, as generously as needed, especially as their feature-set grows.

What I would be more tempted to argue is that some tools have better leverage than others at different points in the software development process.

Types are a different way to define constraints. They have a lot of leverage and are sometimes worth considering. In some cases, it might be preferable to test at a different level of abstraction and get all the same value.

_________________________________________________________________

The only test that matters, in the end

With all of this in mind, it's worth remembering that the only time the software is tested against the actual real-world constraints is in production.
The checking that took place above was all done via the constraints that we designed based on our interpretation of the real-world constraints.

In production, the constraints are checked using these tools:

  • Crashes and bugs
  • Poor user feedback
  • Not being market viable / not making money

I think you’ll agree that it would be desirable to avoid using crashes, bugs, and unhappy users as our only litmus test for whether our code works. I think we’d prefer to have solved for those cases before production - and ideally we can still get to prod fast enough to test the real-world market assumptions while we’re at it.


In closing, I've decided to nickname this idea "Constraint Driven Development" as a superset of TDD.

In Constraint Driven Development, we:

  • Identify the real-world constraints for our software
  • Translate these real-world constraints into constraints we can check automatically, e.g. domain modeling and type checking, test suites, performance tests, etc.

And we do this so that we can make it effortless to do the right thing.

Tools, Links, Further Reading

  • JSON Schema - useful for describing schemas, validating them, and generating code or other declarative documents
  • OpenAPI - Like JSON schema, but for web APIs.
  • TypeScript - JavaScript with types!
  • Elm - A delightful language for reliable webapps.
  • Sorbet - a fast, powerful type checker designed for Ruby.
  • Notes on the Synthesis of Form - for further galaxy braining.

 

Author: Nathan Kramer

Nathan is a Senior Bespoke Engineer who loves music and ramen
