family_guy
As a developer [not a console one, unless PS2 Linux counts hehe], if you told me that I can target a set of standardised HW requirements and my software can make deterministic assumptions about what my customers will be running it on (think of the backlash over old OpenGL/DX versus Vulkan/DX12, which was all about determinism and control), I am not sure why I would want to give that up unless forced to by the market or the manufacturers. There is actual beauty in the current console model, both as a developer and as a gamer. More frequent generations mean less incentive to really dig in and code for that HW... It is not like you will convince publishers to spend even more time on each game.
But what are we really talking about here? Maybe having to support one or two additional specs? Years ago, developers were making the same games for the original Xbox, GameCube, and PS2, each with radically different specs and development tools. Now we're talking about supporting an additional HW configuration within the same ecosystem, using the same SDK. If you don't feel like optimizing, fine. You can do the bare minimum: make sure it's as stable as on the older model and runs at a minimum of full HD resolution. Is that too much to ask? If it is, then you need to modify your development process, because if everyone goes this route you'll be caught unprepared. It's like how Japanese developers were largely unprepared for HD development last gen until they started adopting the processes Western developers were already using, many of whom had a PC dev background. Build applications that can scale.
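
To show what I mean by "scale", here's a rough C++ sketch. Every name in it (HardwareTier, DetectHardwareTier, the resolutions picked) is made up by me for illustration, not any real console SDK; the point is just that one build queries the tier it's running on and picks its render settings from that, instead of hard-coding a single spec.

// Minimal sketch with made-up names, not any real console SDK:
// the same build detects which model it's on and scales its
// render settings instead of assuming one fixed spec.
#include <cstdio>

enum class HardwareTier { Base, Upgraded };   // hypothetical hardware tiers

struct RenderSettings {
    int  width;
    int  height;
    bool extraEffects;   // e.g. costlier shadows/AO on the stronger model
};

// Stand-in for whatever the platform SDK would actually expose; stubbed here.
HardwareTier DetectHardwareTier() {
    return HardwareTier::Base;
}

RenderSettings SettingsForTier(HardwareTier tier) {
    switch (tier) {
        case HardwareTier::Upgraded:
            return {3840, 2160, true};    // push resolution/effects on the new spec
        case HardwareTier::Base:
        default:
            return {1920, 1080, false};   // the bare minimum: stable, full HD
    }
}

int main() {
    const RenderSettings s = SettingsForTier(DetectHardwareTier());
    std::printf("Rendering at %dx%d, extra effects: %s\n",
                s.width, s.height, s.extraEffects ? "on" : "off");
    return 0;
}

That's all the "extra work" really amounts to at the low end: one more branch in code you should already have if your renderer was built to scale in the first place.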