Those lamenting the change from the old Kutaragi-era approach to PlayStation hardware design to the more "PC"-like designs Sony ships today simply don't understand the fundamental balance that a proficient hardware design must strike.
Consider three case studies: the Emotion Engine, Cell, and the Graphics Synthesizer. Each was essentially the result of pushing a poor design to the next level with a relatively huge silicon budget. They might have been interesting from a purely academic standpoint, as a look at what those inefficient approaches would do at a larger scale, but that scale did nothing to correct the underlying flaws.
The Graphics Synthesizer relied on multipass rendering to create effects rather than multi-texturing. Somehow, Sony didn't figure out what graphics engineers at every other company in the industry understood post-Voodoo1: that multi-texturing was a more efficient way to apply effects, and that any loss in flexibility compared to multipass was more than made up for by the extra performance and efficiency gained. The GS's approach died with it, and the industry has successfully marched on down the smarter path.
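To make the difference concrete, here is a minimal sketch in plain C, not actual GS or Voodoo code, assuming the common case of modulating a base texture by a lightmap. The Color type, the sample() helper, and the pass functions are all hypothetical stand-ins for what the hardware pipeline does.

```c
/* Hypothetical per-fragment sketch: base texture modulated by a lightmap. */
typedef struct { float r, g, b; } Color;

/* Stand-in for a hardware texture sampler. */
Color sample(const Color *tex, int width, int u, int v) { return tex[v * width + u]; }

/* Multi-texturing: the triangle is transformed and rasterized ONCE, and both
 * textures are sampled and combined inside the pipeline for each fragment. */
Color shade_multitexture(const Color *base, const Color *light, int w, int u, int v) {
    Color a = sample(base, w, u, v);
    Color b = sample(light, w, u, v);
    return (Color){ a.r * b.r, a.g * b.g, a.b * b.b };
}

/* Multipass: the SAME triangle is transformed and rasterized twice.
 * Pass 1 writes the base color; pass 2 reads the framebuffer back and blends
 * the lightmap over it -- double the vertex/raster work plus an extra
 * framebuffer read-modify-write for every covered pixel. */
void pass1_base(Color *framebuffer, const Color *base, int w, int u, int v, int px) {
    framebuffer[px] = sample(base, w, u, v);
}
void pass2_modulate(Color *framebuffer, const Color *light, int w, int u, int v, int px) {
    Color dst = framebuffer[px];            /* read back what pass 1 wrote */
    Color b   = sample(light, w, u, v);
    framebuffer[px] = (Color){ dst.r * b.r, dst.g * b.g, dst.b * b.b };
}
```

The same final pixel comes out of both paths; the multipass route just pays for the geometry and framebuffer traffic twice to get there.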
The Emotion Engine and Cell both fail for the same reason. Their job within a game system was to serve as a strong CPU, and the job of a CPU, not just for spreadsheets but for games too, is to handle serial workloads full of dependent data, conditional operations, and branches.
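A toy example of what that kind of workload looks like, assuming a made-up Entity type and game-logic loop purely for illustration:

```c
#include <stddef.h>

/* Serial, branchy work typical of game logic: walking a list of entities
 * where each step depends on the previous one. Every iteration chases a
 * pointer (a dependent load), tests a condition, and branches -- exactly
 * the pattern that wide vector units do nothing to accelerate. */
typedef struct Entity {
    struct Entity *next;
    int hp;
    int state;
} Entity;

int count_entities_to_wake(const Entity *e) {
    int woken = 0;
    while (e != NULL) {                  /* data-dependent loop bound */
        if (e->hp > 0 && e->state == 0)  /* per-element conditional   */
            woken++;
        e = e->next;                     /* serial pointer chase      */
    }
    return woken;
}
```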
In a well-balanced system, cores/chips are specialized to handle a specific type of common workload, and these different specialists (CPU for serial work, GPU for parallel work, video cores/DSPs for certain specific algorithms, etc.) should ideally master their own work independently while working in harmony with the other specialists in the system. This is the heterogeneous processing model that's finally being embraced today.
The little MIPS and PowerPC cores on the massive EE and Cell dies, respectively, were nowhere near enough to serve as strong CPUs. The extra area on each die was instead filled with a pile of math/vector units, which made them better suited to assisting with graphics. The problem is that bolting what is essentially GPU silicon onto a CPU is simply imbalanced and less efficient.
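For contrast with the branchy loop above, here is a sketch of the shape of work those vector units were built for, using a hypothetical vertex-transform routine:

```c
#include <stddef.h>

/* Data-parallel work: the same independent math applied to thousands of
 * elements (here, transforming vertices by a column-major 4x4 matrix).
 * No branches, no dependencies between elements -- exactly the shape of
 * work a GPU's shader hardware delivers more efficiently per transistor. */
typedef struct { float x, y, z, w; } Vec4;

void transform_vertices(Vec4 *out, const Vec4 *in, const float m[16], size_t n) {
    for (size_t i = 0; i < n; i++) {
        Vec4 v = in[i];
        out[i].x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
        out[i].y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
        out[i].z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
        out[i].w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    }
}
```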
Those who argue that the Emotion Engine's and Cell's ALUs were more flexible than even a DirectX 11+ GPU miss the point of why GPU evolution follows ever-advancing models of restricted functionality, like DirectX feature sets, in the first place. From the beginning, GPU designers could have built graphics processors as flexible as CPUs, but the limited die-area budgets of the early days meant they would have had almost no performance left to power those flexible pipelines.
Instead, they realized they could accelerate a set of fixed functions and get a better return on the silicon they were spending. As new fabrication processes afforded more and more transistors, they had a choice: speed up those fixed graphics functions even further, or spend the silicon on supporting a wider set of graphics functions. Once the visual return from enabling new effects, or at least doing similar effects more efficiently, started to outweigh simply doing more of the same old functions, the GPU feature set would expand to a new level, characterized by an evolving API such as DirectX.
So designing CPUs like the Emotion Engine and Cell around a ton of FLOPs missed the point: the silicon wasn't used to strengthen the work a CPU is actually supposed to do, and it was spent on work a GPU would have done more efficiently, with a better return on investment for the transistors used.