At 51:50 he talks more about how pushing for understandability alters design choices. For example, Raft has four message types where a competing algorithm has ten. Additionally, every part of the algorithm must be motivated by something; there is less extraneous material.
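To make "four message types" concrete: Raft's core protocol consists of two RPCs, RequestVote and AppendEntries, each with a request and a response. A minimal sketch in Go, with field names taken from the Raft paper (the struct layout is illustrative, not a complete implementation):

```go
package raft

// Raft's core protocol needs only two RPCs, each with a request and a
// response: four message types in total.

// LogEntry is one replicated command, tagged with the term in which the
// leader received it.
type LogEntry struct {
	Term    int
	Command interface{}
}

// RequestVoteArgs is sent by candidates to gather votes in an election.
type RequestVoteArgs struct {
	Term         int // candidate's term
	CandidateID  int // candidate requesting the vote
	LastLogIndex int // index of candidate's last log entry
	LastLogTerm  int // term of candidate's last log entry
}

// RequestVoteReply tells the candidate whether it received this vote.
type RequestVoteReply struct {
	Term        int // current term, so the candidate can update itself
	VoteGranted bool
}

// AppendEntriesArgs replicates log entries; with Entries empty it
// doubles as the heartbeat, so no separate message type is needed.
type AppendEntriesArgs struct {
	Term         int        // leader's term
	LeaderID     int        // so followers can redirect clients
	PrevLogIndex int        // index of the entry preceding the new ones
	PrevLogTerm  int        // term of that entry
	Entries      []LogEntry // empty for heartbeat
	LeaderCommit int        // leader's commit index
}

// AppendEntriesReply reports whether the follower's log matched
// PrevLogIndex/PrevLogTerm and the entries were appended.
type AppendEntriesReply struct {
	Term    int
	Success bool
}
```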
"As a first step, you might consider autoscaling based on multiple custom metrics. This is possible to do, but I don't advise it for two reasons. Most important, I think, is that a multi-metric autoscale policy makes communication about its behavior difficult to reason about. "Why did the group scale?" is a very important question, one which should be answerable without elaborate deduction."
Hard to reason about because being able to reason about it was a distant goal behind "making it work well enough to ship".
One way to make things understandable is to build tools whose explicit aim is to help people understand them (see the sketch after these quotes):
"These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user."
"People must be able to correct and understand curation decisions"
"When you've got algorithms weighing hundreds of factors over a huge data set, you can't really know why they come to a particular decision or whether it really makes sense"