Better alive than right

“Better alive than right” is a proverb of sorts that I try to remind myself of when cycling in London each day. A sizeable minority of drivers in London disregard people on bikes and break traffic rules around them in various ways.

A common one is a driver pulling out from a junction in front of someone on a bike, forcing the person cycling to stop to avoid a collision even though the driver is at fault. The idea of “better alive than right” is to focus on keeping yourself safe above all, even when it’s frustrating because the driver is the one in the wrong.

This is largely the same as “better safe than sorry”, but made a bit more vivid. It’s also similar to the idea in the story of The Fish That Were Too Clever in The Panchatantra.

It also reminds me of Eliezer Yudkowsky’s “Rationality is Systematized Winning” post on Less Wrong:

“I have behaved virtuously, I have been so reasonable, it’s just this awful unfair universe that doesn’t give me what I deserve. The others are cheating by not doing it the rational way, that’s how they got ahead of me.”

Sometimes it is tempting to believe that following rules, seeking certainty or being “right” is going to bring the best results. That is true in a tautological sense – the right way is the one that is right, after all.

But real situations are rarely that simple or clear. In the cycling example, the person on the bike is “in the right”, and should be able to carry on without a car driver pulling out in front of them. But if they take that approach, they might get hit by a car, which is a far worse outcome, regardless of what the “rules” actually are.

Similarly, we can spend a lot of time and energy trying to exhaustively prove or guarantee that something is right. This might be worthwhile in some situations. A lot of the time, though, there is a cheap and effective check or fix that would prevent the hypothetical problem, without us needing to establish whether that problem can really occur at all.

If we focus on proving that the problem can never happen, there is a very high cost (energy spent on deduction and proof), and a very small reward – the system is perhaps slightly less complex. If we just assume we can never be too careful and put the seemingly unnecessary safety catch in anyway, there is a small cost of a little extra complexity, but a potentially large reward if the problem ever does occur.

One example is the Therac-25 radiation therapy machine. This machine had software controls to prevent accidentally activating the radiation without a protective target in place, which could be lethal. Because of the presence of the software controls, no physical safety catches were put in place to prevent the potential accident. Infamously, it turned out that the software safeties could actually fail, and several patients were killed in accidents.

I don’t know what the design and planning process was like for the Therac-25, but we can imagine the argument being made that the “extra” hardware safeties were provably unnecessary, because it could be shown that the software guaranteed such accidents were impossible. If the seemingly less logical “might as well” approach had been taken instead, the apparently superfluous hardware safeties would have been added anyway and the accidents would have been avoided.

Let’s make up some numbers. The “might as well” approach is cheap, as it requires very little thought or rationalisation: say a cost of 0 to 5 whatevers. The potential reward ranges from 0 to 100 whatevers if it prevents a problem. The “this is provably unnecessary” approach has an upfront cost of 10 to 20, as it requires more planning and design effort to make the proof. Its potential reward is, say, 10 to 20 if it prevents extra complexity. It looks like it’s generally a better payoff to put the extra safety catches in. Looked at this way, it’s clearer that we’re gambling, and in most systems I would rather gamble towards safety and low cost than away from them.
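
To make the gamble a little more concrete, here is a throwaway sketch comparing the two approaches using the midpoints of the made-up ranges above. The numbers are just as arbitrary as they were in the previous paragraph; it only shows the shape of the bet.

const midpoint = ([low, high]) => (low + high) / 2;

const mightAsWell = {
	cost: midpoint([0, 5]),     // barely any thought or rationalisation
	reward: midpoint([0, 100]), // the hypothetical problem is prevented
};

const provablyUnnecessary = {
	cost: midpoint([10, 20]),   // planning and design effort to make the proof
	reward: midpoint([10, 20]), // a slightly less complex system
};

const payoff = ({ reward, cost }) => reward - cost;

console.log(payoff(mightAsWell));         // 47.5
console.log(payoff(provablyUnnecessary)); // 0

Crude as it is, the midpoint comparison points the same way as the gut feeling: the cheap safety catch wins.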

I get the impression that redundancy is a more respected concept in physical engineering than in software engineering, and I’m thankful that it is. In software, it’s not uncommon to see objections to extra checks and safeties for things that “cannot happen”.

For example:

function fooBar(whateverInput) {
	if (someSafetyCheck(whateverInput)) {
		// The check fired: prevent the problem, e.g. by bailing out early.
		return;
	}
	// Carry on as normal.
}

This will attract objections if the potential problem “should never happen”. But we should be thinking about balancing risks, costs and rewards, not what seems to be absolutely correct. What is the cost of the check? It looks pretty low – a negligible amount of extra complexity. What is the reward? Potentially preventing a bigger problem.
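
As a purely hypothetical illustration, imagine a lookup where the surrounding code supposedly guarantees the value is always present. The names here are made up, but the shape of the objection is familiar:

function getOrderTotal(orders, orderId) {
	const order = orders.find((o) => o.id === orderId);
	if (order === undefined) {
		// "Cannot happen" according to the callers, but if it ever does,
		// fail loudly here rather than somewhere confusing downstream.
		throw new Error(`Unknown order id: ${orderId}`);
	}
	return order.total;
}

The guard costs a couple of lines; leaving it out saves almost nothing and bets that the guarantee holds forever.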

Looked at this way, we don’t need to care if the potential problem is actually permitted in the rules of the situation or not. We’re just balancing risk and reward. It’s better to be alive than right.

