There’s a funny phenomenon in a lot of companies that hire software engineers, where people draw a dividing line between “engineering” and “the business”. This is even reinforced in a lot of software engineering books and other media, which talk about “business people” and so on. I always think of engineering as very much a part of this apparently separate “business”, but that’s at least a whole other blog post.

One aspect of this is that engineering workflows get treated almost entirely differently from other processes in the business. Obviously there is the multiverse of gigantic workflow systems known as “Agile”, but there are also more trivial effects, one of which is the lack of care put into development workflows.

The “rest” of the business often gets quite a productised, full-package system, where attention is paid to foot-guns and other drains on productivity. The workflows of individual software engineers, in comparison, are often left as more of a Wild West experience which involves a lot of log reading, web searching, trawling through different knowledge silos and pinging colleagues at random for advice.

Quite often you’re given a README.md from eighteen months ago which contains a few lists of commands, and have to figure it out from there. This might stem from an attitude that as an engineer part of your skillset is figuring out this kind of thing, so it’s a natural part of the role.

Most companies could save a lot of engineering time and thus a lot of money by improving on this just a little bit.

Many such improvements have quite a large payoff relative to the effort involved. They all stem from the basic idea that development time is also a workflow, and we can improve the user experience. Some companies do refer to Engineering Experience or EX, which gets at this.

## Some ideas to improve dev workflows

These apply across the development workflow, e.g. testing and project setup.

Acknowledge the context. All the points here stem from this central one. When talking about the process in question, we don’t try to pretend that it is perfect, so why should the code pretend that it is perfect? Acknowledge the foot-guns and common pitfalls, and go from there. If we could completely fix them, we would have, so their continued presence means they are tricky. What can the process do to improve on them, even if it can’t eliminate them entirely? The user is a fellow engineer, and might be yourself later; acknowledge that and think how you can help this user in this situation. See also: Don’t Make Me Think (unless there’s no other choice).

Early, explicit errors. If you can detect that something is wrong, then detect it and crash as early as possible, with an explicit message explaining the issue. So many annoyances and pitfalls can be ameliorated with this, as a crash at the earliest point the problem can be detected is much easier to debug than a vague problem ten links down the chain. As with many of these points, this is not a novel idea; it appears in the list of tips in The Pragmatic Programmer.
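As a sketch of what this looks like in practice (the variable name `DATABASE_URL` is just an illustrative example, not anything from a real project):

```python
import os
import sys


def require_env(name: str) -> str:
    """Crash at startup, with a message naming exactly what is missing."""
    value = os.environ.get(name)
    if value is None:
        # sys.exit with a string prints it to stderr and exits non-zero
        sys.exit(f"error: environment variable {name} is not set; "
                 f"set it before running this script")
    return value
```

Call `require_env("DATABASE_URL")` as the very first thing the script does, and a missing setting produces one clear line at the top of the run instead of a confusing failure much later.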

Explicit assertion messages in tests. Make it as clear as possible exactly what has failed, or at least make the expectation meaningful. The worst kind of test failure is “expected true, got false”. It’s usually trivial to add a message to the test assertion that says “expected the Foo component to have sent a message to the Bar component, but it didn't”. One step further is to have that and then “These messages were sent: ...”. Diffing the expectation from the result is also basic and helpful.
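A minimal sketch of such a helper, using the Foo/Bar example from above (the component names are purely illustrative):

```python
def assert_sent(sent_messages: list, expected) -> None:
    """Test helper whose failure message names the expectation AND what actually happened."""
    assert expected in sent_messages, (
        f"expected the Foo component to have sent {expected!r} to the Bar "
        f"component, but it didn't. These messages were sent: {sent_messages!r}"
    )
```

A failing run now tells the reader which interaction was expected and lists the messages that did occur, instead of just “expected true, got false”.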

Automate your manual checks. If you find yourself checking for certain config or doing some other repetitive debugging whenever a particular issue comes up for you or a colleague, try to automate that check. It can be as trivial as an assertion early in the process that crashes out with a clear message. “Your blah should be configured to 123, but is configured to ABC”. “Foobar setting looks misconfigured, it should look like this: 123.ABC”. If the issue can break the process, then you can break it early with a clear message. Again, this is not a novel idea; when we do it for implementation code, we call it “testing”. Just apply it to the dev workflow, too.
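Using the hypothetical “blah” setting from the example messages above, the automated check can be as small as this:

```python
def check_blah(config: dict) -> None:
    """Automated version of the manual 'go look at the config' step."""
    actual = config.get("blah")
    if actual != "123":
        raise RuntimeError(
            f"Your blah should be configured to 123, but is configured to {actual}"
        )
```

Run it at the start of the process, and the debugging session you used to walk colleagues through becomes one readable error message.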

Automate your manual fixes. Even better than automating the check is automating the fix once the problem is identified. If X config is wrong and we can figure out what it should be, just have the process automatically change it. Sometimes people don’t do this because it seems hacky. Please just do it; that way no-one even needs to know about this particular config and they can get on with their day.
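Continuing the same hypothetical “blah” setting, the fix version replaces the raise with an assignment:

```python
def fix_blah(config: dict) -> dict:
    """If we can compute the correct value, just set it and move on."""
    if config.get("blah") != "123":
        config["blah"] = "123"  # no error, no message: the user never needs to know
    return config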

Links and further reading. When reporting some problem to the user (i.e. the engineer, which could be yourself), chuck in some URLs to documentation, or a web page where they can go and fix it. Help them to help themselves.
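Combined with the early-error idea above, this is one small helper (the URL here is a placeholder, not a real docs page):

```python
import sys


def fail_with_link(message: str, docs_url: str) -> None:
    """Report the problem and point at where to read more or fix it."""
    sys.exit(f"{message}\nSee: {docs_url}")
```

Now every explicit error can carry its own further reading, e.g. `fail_with_link("Foobar setting looks misconfigured", "https://example.com/docs/foobar")`.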

Semi-automation. Some things can’t be fixed automatically as they require a human step, or need someone to log in to some system. You can still automate around the human step. Set up what you can for the user, and then e.g. prompt them to go to a particular web page and do something, or paste some required value into the terminal, where the process can continue with an automated fix for the rest of it.
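A sketch of the pattern for an API token, where only a logged-in human can generate the value (the variable name and instructions are illustrative assumptions):

```python
import os


def get_api_token() -> str:
    """Automate around the human step: do everything except the part only a person can do."""
    token = os.environ.get("API_TOKEN")
    if token:
        return token  # already set up; nothing for the human to do
    print("No API token found. To create one:")
    print("  1. Log in to your account settings page")
    print("  2. Generate a token and paste it below")
    token = input("token: ").strip()
    os.environ["API_TOKEN"] = token  # the rest of the setup can continue automatically
    return token
```

The human contributes one paste; everything before and after that step stays automated.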

Guide the user through the problem. If you’d have to verbally explain how to investigate some common issue, and you can’t automate the check or the fix, at least automate the explanation. Print a checklist to the terminal, or prompt the user to confirm various manual checks one by one. You can mix this in with automated checks, too. Increase the odds that the process can fix itself, or that the user can fix it themselves.
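One way to sketch this mixed checklist: each item is either an automated check or a manual prompt, run in order (the check descriptions are made up for illustration):

```python
def run_checklist(checks: list) -> list:
    """Each item is (description, check); check is a callable, or None for a manual step."""
    results = []
    for description, check in checks:
        if check is None:
            # can't automate this one, so at least automate the explanation
            input(f"Manually verify: {description}, then press Enter ")
            results.append((description, None))
        else:
            ok = check()
            print(f"{'OK' if ok else 'FAIL'}: {description}")
            results.append((description, ok))
    return results
```

A call might mix both kinds: `run_checklist([("local database is reachable", check_db), ("VPN is connected", None)])`, so the process walks the user through exactly the explanation you would otherwise give verbally.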