“Move fast and break things” is most associated with software and its continuous updates. With negligible costs to test and iterate changes, software engineers are uniquely positioned to experiment with different designs. However, new manufacturing and analysis software has enabled hardware engineers to move just as quickly and fundamentally shift how hardware is designed. Iterating hardware will always be more expensive than iterating software, but with the right investment of time and money, hardware engineers can innovate new designs at a software pace.
Virtual simulations allow hardware design iteration to approach the pace of software. Advances in artificial intelligence and machine learning automate an ever-increasing share of design and optimization work, freeing engineers to focus on harder problems. These tools often reach solutions beyond what a human designer would find unaided. Virtualization also expands the scope of solvable problems by compressing years into hours, allowing entire lifecycles of countless design variants and manufacturing methods to be analyzed before a single prototype is made. While it comes with a significant setup cost, an accurate virtual model quickly justifies itself by replacing costly physical prototypes with equally valid virtual analyses.
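As a toy illustration of screening design variants virtually before any prototype exists, the Python sketch below sweeps a small cantilever-beam design grid against a stiffness requirement. Every name and number here (the load, the 2 mm deflection limit, the candidate dimensions) is an illustrative assumption, not taken from any real product, and the closed-form beam formula stands in for what would normally be a full simulation run:

```python
import itertools

# Hypothetical design sweep: cantilever beam under an end load.
# Closed-form deflection: delta = F * L^3 / (3 * E * I),
# with I = w * h^3 / 12 for a rectangular cross-section.

E = 69e9                # Young's modulus of aluminum, Pa (assumed material)
F = 50.0                # end load, N (illustrative)
MAX_DEFLECTION = 0.002  # requirement: deflect less than 2 mm (illustrative)

lengths = [0.3, 0.4, 0.5]        # m
widths = [0.02, 0.03]            # m
heights = [0.005, 0.010, 0.015]  # m

def deflection(length, width, height):
    inertia = width * height**3 / 12
    return F * length**3 / (3 * E * inertia)

# Evaluate every variant in milliseconds -- no prototypes required.
passing = [
    (L, w, h)
    for L, w, h in itertools.product(lengths, widths, heights)
    if deflection(L, w, h) < MAX_DEFLECTION
]
total = len(lengths) * len(widths) * len(heights)
print(f"{len(passing)} of {total} variants meet the requirement")
```

A real workflow would replace the one-line formula with a finite-element or multiphysics solve, but the structure is the same: enumerate the design space, evaluate each variant against the requirement, and only prototype the survivors.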
An engineering discipline particularly resistant to moving faster is the field of reliability engineering. This resistance is largely due to the common association of quickly designed with poorly designed. That association isn’t unjustified (software is rife with careless “testing in production”), but it is built upon how things have been, not how they must always be. Moving fast and breaking things may sound like trading reliability for speed, but breaking things is an essential part of creating robust designs. Making a reliable product necessarily involves identifying and redesigning every insufficient option, which is made significantly easier with a mindset that encourages breaking things before the customer does.
The most realistic way to check for failures is, by definition, operating a product in the real world. However, simulated testing, both virtual and physical, is necessary to test designs efficiently. Translating real-world use cases into the underlying engineering requirements and directly evaluating designs against them enables repeatable comparison of different concepts. Importantly, simulations can concentrate on the most extreme use cases, speeding up the elimination of poor design options. This reduces the time spent on dead ends while freeing non-simulation testing to move beyond verifying basic requirements.
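One way to picture "concentrate on the most extreme use cases" is to screen candidate designs against requirement corners ordered from harshest to mildest, so a bad option fails on the first check rather than the last. The sketch below is a hypothetical example with made-up corner conditions and a toy surrogate model in place of a real simulation:

```python
# Hypothetical requirement corners, listed extreme-first so poor designs
# are eliminated as early as possible. All values are illustrative.
CORNERS = [
    {"name": "cold start", "temp_c": -20, "voltage": 10.8},
    {"name": "hot idle",   "temp_c": 60,  "voltage": 12.0},
    {"name": "nominal",    "temp_c": 25,  "voltage": 12.0},
]

def simulate_current_draw(design, temp_c, voltage):
    # Toy surrogate model: current draw rises at temperature extremes
    # and under low supply voltage. A real flow would call a simulator here.
    thermal_penalty = abs(temp_c - 25) * design["thermal_coeff"]
    voltage_penalty = max(0.0, 12.0 - voltage) * 0.5
    return design["base_current_a"] + thermal_penalty + voltage_penalty

def first_failure(design, max_current_a=3.0):
    # Return the name of the first corner the design fails, or None.
    for corner in CORNERS:
        draw = simulate_current_draw(design, corner["temp_c"], corner["voltage"])
        if draw > max_current_a:
            return corner["name"]
    return None

designs = {
    "A": {"base_current_a": 2.0, "thermal_coeff": 0.05},
    "B": {"base_current_a": 1.5, "thermal_coeff": 0.01},
}
for name, d in designs.items():
    fail = first_failure(d)
    print(name, f"fails at {fail}" if fail else "passes all corners")
```

Because the worst case runs first, a design that cannot survive the cold-start corner is rejected immediately, and physical test time is reserved for designs that already clear the known extremes.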
Ultimately, even with the most comprehensive simulations covering all known requirements, real-world testing will still be necessary to identify unknown requirements. Reality is uniquely capable of finding new edge cases that break a design, and even the best models and experiments will struggle to find every failure condition because they cannot fully reproduce real-world unpredictability. When the chaos of field testing finds a new problem, engineering requirements must be updated to prevent repeat failures, allowing new designs to expand capability rather than only reinforce it.
Throughout the product design process, it is important to avoid emphasizing rapid iteration over rapid innovation. The pursuit of constant iteration can lead to functional fixation on a bad design direction, with time wasted forcing a familiar design to solve a new problem when a radically different approach would be more efficient. Truly innovative designs take time to create, so it is vital to emphasize that moving fast means quickly testing ideas, not quickly generating ideas.