Refactoring Hardware vs Software

Because software is so "soft", making changes to its source code representation (while preserving functionality) is much more common than making similar changes in hardware. This process even has its own jargon - refactoring - which, like a mathematical factorization, implies a change of form rather than of substance, made so that human readers can understand the code more easily. Martin Fowler called it re-factoring probably because it's such a common activity when writing software.
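
To make the jargon concrete, here's a minimal sketch of a refactor in Python (a hypothetical example, not from anywhere in particular): both functions compute exactly the same result, but the second is far easier for a human to follow.

    def price_before(quantity):
        # 0.07 is a magic number; the intent is opaque
        return quantity * 9.99 + quantity * 9.99 * 0.07

    # After refactoring: same inputs, same outputs, but the structure
    # now explains itself.
    UNIT_PRICE = 9.99
    SALES_TAX_RATE = 0.07

    def price_after(quantity):
        subtotal = quantity * UNIT_PRICE
        return subtotal + subtotal * SALES_TAX_RATE

    assert price_before(3) == price_after(3)  # functionality preserved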

In the hardware world, refactoring is extremely painful, and often goes by different jargon: a recall. Because a recall is so expensive and time-consuming (and can hurt a company's image), the tools for finding sources of weakness in hardware before shipping are very advanced, and much more commonly used than in the software industry. Good CAD/CAM programs have built-in failure analysis features which can be used for things like:

  • simulating impacts at high G-forces
  • modeling thermal gradients
  • predicting what happens when all or part of a circuit board is underwater
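
The closest software analogue I can think of is deliberately simulating hostile conditions before release. Here's a hypothetical Python sketch (the function and the input ranges are invented for illustration): a "stress test" that hammers a routine with extreme and degenerate inputs, the way a CAD tool hammers a design with G-forces and heat.

    import math

    def safe_average(readings):
        # Hypothetical example: average sensor readings while tolerating
        # an empty list and corrupted (non-finite) values.
        finite = [r for r in readings if math.isfinite(r)]
        if not finite:
            return 0.0
        return sum(finite) / len(finite)

    # "Environmental" stress cases, by loose analogy with impact, thermal,
    # and immersion simulation: extremes, emptiness, corrupted data.
    stress_cases = [
        [],                                  # no sensors connected
        [1e154, 1e154],                      # very large magnitudes
        [float("nan"), float("inf"), 5.0],   # corrupted readings
        [-1e-308] * 10_000,                  # many tiny values
    ]

    for case in stress_cases:
        result = safe_average(case)
        assert math.isfinite(result), f"non-finite result for {case!r}"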

These tools are essential for hardware development because, unlike in software, it's basically impossible to fix anything after you ship. The decision to make a change after the product is in the customer's hands comes down to a very serious cost/benefit analysis. If you're lucky, the system is modular and you can fix one bad component without having to junk everything, but even that is a very invasive operation compared to pushing a software update to users.

Sometimes I look at aspects of the software industry - static type checking, test-driven development - as analogous to the kinds of failure prevention tools and techniques that the hardware industry takes for granted. Certain parts of the software industry, e.g. aerospace and medical, (hopefully) already employ these practices. But I think it would take a great cultural shift for the software industry as a whole to adopt these tools and practices to the degree that the hardware industry has. After all, it's often a huge advantage to be able to keep designing something after the customers get hold of it.
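
To make that analogy concrete, here's a small, hypothetical Python sketch (the Dimensions class and area_mm2 are invented for illustration): type annotations let a checker such as mypy reject bad designs before the code ever runs, and a test written up front plays the role of a physical stress test.

    from dataclasses import dataclass

    @dataclass
    class Dimensions:
        width_mm: float
        height_mm: float

    def area_mm2(d: Dimensions) -> float:
        # A type checker rejects area_mm2("10x20") at "design time",
        # before the code ever runs: the static analogue of a CAD tool
        # flagging a part that can't be manufactured.
        return d.width_mm * d.height_mm

    # The TDD analogue of a physical stress test: written before (or
    # alongside) the implementation, and run on every change.
    def test_area():
        assert area_mm2(Dimensions(10.0, 20.0)) == 200.0

    test_area()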

What prevents more widespread adoption of these kinds of design validation and verification tools? Are they hard to implement in a general-purpose way? Do they overly constrain design choices? Is it too much overhead for most projects? Probably all of the above to some extent.