
Was the Dreamcast actually powerful at launch? Or the beneficiary of no competition?

Was the Dreamcast a powerhouse at launch?

  • No: 117 votes (11.2%)
  • Yes: 930 votes (88.8%)

  Total voters: 1,047

AMCC

Neo Member
Grand Prix Challenge is another game I've always said looks amazing on PS2, and about the framerate I always thought there was something "odd" going on, like one of those interlaced 30 FPS videos that look like 60 FPS depending on the deinterlacing method. You guys are super talented.
There is 🤣 GPC runs at 640x960 with a sustained 60fps no matter what! It has to, as any slowdown results in 15fps for both CPU and GPU due to the complex optimized rendering method we developed!
 

SomeGit

Member
There is 🤣 GPC runs at 640x960 with a sustained 60fps no matter what! It has to, as any slowdown results in 15fps for both CPU and GPU due to the complex optimized rendering method we developed!

If you don’t mind me asking, a bit unrelated to the topic, but how was the mood in the studio when the F1 exclusivity deal was announced, and was GPC always meant as a one-off title? It always bothered me how many good F1 games there were at the time, and then Sony/FOM ruined that with an exclusive deal.
 

AMCC

Neo Member
If you don’t mind me asking, a bit unrelated to the topic, but how was the mood in the studio when the F1 exclusivity deal was announced, and was GPC always meant as a one-off title? It always bothered me how many good F1 games there were at the time, and then Sony/FOM ruined that with an exclusive deal.
Not at all! We knew it might be a one-time deal, yet had hoped that if the game was good enough it would continue. In any case it gave us the chance to make something like Le Mans equally optimized for PS2!
 
Hey Xaerox,


Thanks for the questions!

1. We built Le Mans from the ground up specifically and only for the Dreamcast: the rendering engine, physics simulation, assets, all of it. A bit of a passion project really, and I think the approach was much like how Sega itself might build a first-party game. I actually brought the game to Sega Japan and they were extremely surprised by some of the things the game was doing! They took me around all of the Sega AM groups to get their feedback and actually wanted to publish it worldwide as a 1st party Sega title ;-)
2. It's perhaps a myth for the final shipping version of the game! The game pushes a sustained 50,000 polygons per frame plus effects at 30FPS, load-balanced at a pretty constant 25,000 for cars and 25,000 for the circuit. So that's really 1.5M polygons per second, call it close to 2 with all effects lol. However, the graphics engine is very optimized and can do 4 million, perhaps 5 million polygons per second! Le Mans had the unique problem of having 25 cars on track at the same time. Every car has the same sophisticated physics simulation as the player car, as well as an AI driver and associated audio - the races are authentic, and all cars behave with the same physics characteristics as the player has. This uses a lot of CPU resources, compromising how much of the CPU can be dedicated to 3D transforms and feeding the GPU with vertices! There is an unreleased early version of the game that we showed at E3, with one finished track and 8 finished cars on track, that runs the same polygon count at a sustained 60fps! In that version with 8 cars on track it does 50,000 polygons/frame at 60FPS = 3M polygons/second.
3. We would love to have made Grand Prix Challenge for Dreamcast as well. It uses a similar in-house engine, but this time optimized from the ground up for PS2. The Dreamcast could run it at 30FPS with some changes. GPC uses 2-3 times the polygon count on PS2 at what looks like double the frame rate. However, it is an illusion, and it is cheating! For the longest time GPC was stuck at 30FPS on PS2, but late in development we discovered a secret that literally doubles the apparent frame rate! Essentially, we were doing something similar to DLSS3 on PS2, and I'm amazed it's taken this long for something like DLSS3 to appear! GPC runs at 30FPS, however it generates an in-between interpolated frame from the prior frame to deliver 60FPS in a 30FPS game :D (see the sketch after this post). Transformers Armada uses the same trick, and in hindsight it might have been possible to do that on a Dreamcast as well! In any case, if GPC were ever made for Dreamcast it would have fewer polygons and better image quality than the PS2 game.
4. It's awesome to see people still trying to do things like this on Dreamcast! If the graphics engine and assets were rebuilt from the ground up for Dreamcast, there is no reason you couldn't do a pretty cool version of GTA III for it. However, given the original game uses RenderWare, which ran quite badly on PS2 and is very slow in the first place, it'll be a challenge to make it run beautifully on Dreamcast. Good luck with it though :)

Cheers to you in Colombia!
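To make the interpolation trick above concrete, here is a minimal sketch in C of how an in-between frame can be synthesized by blending two cached vertex poses. All names are illustrative - this is not the actual GPC or Transformers Armada code, just the general shape of the technique: the simulation produces a new pose every 1/30 s, and the halfway blend gives the GPU a second full frame to draw, so the display updates at 60 Hz while physics and AI run at 30 Hz.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct { float x, y, z; } Vec3;

    /* Blend two cached vertex poses; t = 0.5f yields the halfway pose
       used for the inserted in-between frame. */
    static void interpolate_pose(const Vec3 *prev, const Vec3 *curr,
                                 Vec3 *out, size_t count, float t)
    {
        for (size_t i = 0; i < count; ++i) {
            out[i].x = prev[i].x + (curr[i].x - prev[i].x) * t;
            out[i].y = prev[i].y + (curr[i].y - prev[i].y) * t;
            out[i].z = prev[i].z + (curr[i].z - prev[i].z) * t;
        }
    }

    int main(void)
    {
        Vec3 prev[3] = {{0,0,0}, {1,0,0}, {0,1,0}};   /* pose at sim tick N-1 */
        Vec3 curr[3] = {{2,0,0}, {3,0,0}, {2,1,0}};   /* pose at sim tick N   */
        Vec3 mid[3];

        interpolate_pose(prev, curr, mid, 3, 0.5f);
        /* A real renderer would draw `mid`, wait for vsync, then draw
           `curr` - two displayed frames per 30 Hz simulation tick. */
        for (int i = 0; i < 3; ++i)
            printf("v%d: %.1f %.1f %.1f\n", i, mid[i].x, mid[i].y, mid[i].z);
        return 0;
    }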
Thanks for your answers!!

Just a few quick questions/thoughts regarding them.

1. It would have been interesting to see the game as first party! But it probably wouldn't have been a good deal for you guys? The only thing I can say is that the graphics engine you created for Le Mans was fire!! In fact, according to Digital Foundry, and basically to everyone with a little bit of knowledge, Test Drive Le Mans was basically the most advanced racing game until GT3 launched. That engine in more games on DC would have been a game changer and probably would have changed, at least for a while, the fate of the console.
Not at all! We knew it might be a one-time deal, yet had hoped that if the game was good enough it would continue. In any case it gave us the chance to make something like Le Mans equally optimized for PS2!
2. If the DC had lived longer, could something similar have happened on the console, and could you guys have worked on an improved version? Or did Le Mans reach the very limits of the Dreamcast?

3. Which is your favorite Dreamcast game?

4. The GTA 3 port to DC is being developed using the PC version. Any advice you can give them? This is sort of the most recent video showing their progress



5. On Beyond3D we also have a thread dedicated to DC with lots of great info and achievements by the community itself (such as exporting Soul Calibur 2 models and stages to DC), if you want to check it out:
https://forum.beyond3d.com/threads/...ame-effect-dc-tech-retrospective-spawn.56017/

6. I still keep this! Cheers!!
https://i.postimg.cc/zXNCGPBd/Whats-App-Image-2024-08-29-at-8-51-26-AM.jpg
 

SomeGit

Member
Not at all! We knew it might be a one-time deal, yet had hoped that if the game was good enough it would continue. In any case it gave us the chance to make something like Le Mans equally optimized for PS2!

Well, at least that one-off is still talked about a lot today. It also had a great rendition of Albert Park; I'm gonna guess you were a little biased on that one.


Did you explore any other platforms for these two titles or were they too PS2 specific to be ported?
 

Esppiral

Member
Hey Xaerox, thanks for the questions! […] There is an unreleased early version of the game that we showed at E3, with one finished track and 8 finished cars on track, that runs the same polygon count at a sustained 60fps! […]

It would be cool if that E3 demo ever leaked, cough cough. In the meantime we can always hack the game to run at 60 fps :p , it looks truly amazing.



 

kevboard

Member
I feel like with the recent GTA3 port, the question put forward by this thread title has been thoroughly answered 🙃

yes... yes it was.

you could never make a faithful port of this to PS1 or N64. the N64 could possibly support a highly downgraded version, but for PS1 you'd probably need to rewrite many parts of the game to be performant enough to run.
 

Geometric-Crusher

"Nintendo games are like indies, and worth at most $19" 🤡
the N64 could possibly support a highly downgraded version, but for PS1 you'd probably need to rewrite many parts of the game to be performant enough to run.
Meanwhile, in the real world, the PS1 has Driver 2 and the N64 has, at best, a rough game that vaguely resembles free roaming
 

kevboard

Member
Meanwhile, in the real world, the PS1 has Driver 2 and the N64 has, at best, a rough game that vaguely resembles free roaming

sure, but that's not really due to the hardware and more so due to bad support from third-party devs.

the N64 is dramatically more powerful than the PS1. it was just sadly underutilized at the time due to really bad third-party support.

due to the staggered release of systems back in the mid-to-late 90s, these systems (PS1, N64, Dreamcast) are almost like small evolutionary steps towards gen6.

they went from not even being able to do floating point calculations (PS1), to a semi-modern GPU (N64), to finally a competent 3D console (Dreamcast)
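for context, here's roughly what "no floating point" meant in practice: all the 3D math was done in fixed point. a minimal sketch using a 20.12 format (12 fractional bits), in the spirit of - but not copied from - the PS1 GTE's conventions:

    #include <stdio.h>
    #include <stdint.h>

    typedef int32_t fx;                 /* 20.12 fixed point */
    #define FX_SHIFT 12
    #define FX_ONE   (1 << FX_SHIFT)    /* 1.0 is stored as 4096 */

    static fx    fx_from_float(float f) { return (fx)(f * FX_ONE); }
    static float fx_to_float(fx a)      { return (float)a / FX_ONE; }

    /* multiply in 64-bit so the intermediate doesn't overflow,
       then shift back down to 12 fractional bits */
    static fx fx_mul(fx a, fx b) { return (fx)(((int64_t)a * b) >> FX_SHIFT); }

    int main(void)
    {
        fx half    = fx_from_float(0.5f);
        fx quarter = fx_from_float(0.25f);
        printf("0.5 * 0.25 = %f\n", fx_to_float(fx_mul(half, quarter)));
        return 0;
    }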
 

Fafalada

Fafracer forever
PS2 doing frame generation to reach locked 60 FPS at 640×960 in Grand Prix Challenge in 2003 is pretty wild info. Thanks for the input AMCC.
Not to play the terminology nazi here - but we're not talking 'frame generation' in context of what modern image-synthesis does.
This was inserting additional (fully rendered) frames where positional inputs are interpolated from a lower frequency simulated input - if you want a recent analogy - that N64 static-recompiler framework uses the same method to increase framerate (games continue running at their native framerate, but rendering does not).

Also yes - this is something you could do on other consoles - though on some it would be highly impractical to do (like XBox), and there are also numerous trade-offs that go beyond just increased latency - a big one being significant memory overhead - which was quite painful given how memory-starved the consoles of that era were.
Still - it is something that was used in a number of shipped XBox360/PS3 games as well - and unlike modern framegen there are no visual tradeoffs, it is indistinguishable from running at native framerate.
 

kevboard

Member
sure, but that's not really due to the hardware and more so due to bad support from third-party devs. […]

btw if anyone wants to see what a fully optimised N64 game can look like then watching Kaze Emanuar's videos on his Mario 64 hack (which honestly at this point is an entirely new game) is the way to go



he rewrote the entire engine of Mario 64 and fully optimised it for the N64. with his engine rewrite, it's most likely possible to run the original Mario 64 at 60fps on real hardware.

he, on the other hand, uses the improved performance from his optimisations to target 30fps, but with graphics that are way closer to a gen6 game and could be mistaken for Dreamcast graphics at times.
 

Alexios

Cores, shaders and BIOS oh my!
Hey Xaerox, thanks for the questions! […] Cheers to you in Colombia!
Kinda mind-boggling stuff, thanks! I was looking for DC footage, and this has some nice highlights from Le Mans. I don't know the channel, but I guess they'll do a longplay of the game and that's a trailer for it (edit: yep, here's the 1st part). Either way, nice footage of an ace game!
 

AMCC

Neo Member
Not to play the terminology nazi here - but we're not talking 'frame generation' in context of what modern image-synthesis does.
This was inserting additional (fully rendered) frames where positional inputs are interpolated from a lower frequency simulated input - if you want a recent analogy - that N64 static-recompiler framework uses the same method to increase framerate (games continue running at their native framerate, but rendering does not). […]
Actually Faf, this used the unique dual-context rendering of the PS2 to render two frames every 30th of a second via vertex interpolation: a 'fast' fully rendered in-between frame between fully calculated ones. As you correctly say this is not pixel-based like DLSS3, however if you have fill rate in excess of raw transform speed it is an even better solution than DLSS3, as the result is perfect, unlike DLSS3's. Complex shaders on modern hardware on the surface make such an approach less worthwhile, yet there is a trick with those for this approach as well! Again I am extremely surprised this has not been frequently thought of or used.
 

Fafalada

Fafracer forever
Actually Faf, this used the unique dual-context rendering of the PS2 to render two frames every 30th of a second via vertex interpolation: a 'fast' fully rendered in-between frame between fully calculated ones.
Yes - which is exactly what I described. Simulation runs at half the framerate, and rendering output is doubled from interpolated sim outputs (ie. the game renders at 60, but plays at 30).

Complex shaders on modern hardware on the surface make such an approach less worthwhile, yet there is a trick with those for this approach as well!
Shader complexity doesn't really change anything - from rendering perspective you're doing roughly the same amount of work as a normally rendered 60fps game anyway - that's the whole point.
And while far from 'popular', the technique has been used in a few PS360-era releases to double the framerate to 60 (most notably in certain EA Sports titles).
On PS4 - the more common usage of the same pattern was in VR where some titles used it to quadruple the number of rendered 'frames' (2 for each eye * 2 for near/far field for resolution improvement) without paying any draw-call/CPU costs - but also without any latency trade-offs as it physically changed inputs for each frame instead of interpolating them.
There are also 'analogues' of the method out there - eg. PS2 emulator on PS4/5 specifically uses something similar to generate 4 samples for each frame (to get 4x resolution output) - instead of resolution-hacks which always cause compatibility issues - they render 4 full frames for each 1, jittered with sub-pixel offsets, and on modern GPU that's done via MRT so - analogous to your example of exploiting PS2 GS contexts.

Now where things differ is the specifics of 'how' it's done (especially with considerations of what you could do on specific hw - there was more than one way to do this on PS2) - but that's implementation details (and they each come with different trade-offs on different hw - some were entirely impractical for the approach, like the og. XBox).
The N64 recompilation framework I mentioned does 'multiply' the framerate the same way, by simply rerunning the render thread multiple times - it's not done for performance reasons (rendering those N64 frames is trivial anyway) but to have a robust fps-increase method that is game-agnostic (akin to the PS2 emulator approach to increasing resolution).

Again I am extremely surprised this has not been frequently thought of or used.
There are myriad reasons for it - but then I'm similarly unimpressed with many other things the game industry failed to adopt.
IMO - the main issue comes down to the industry ultimately sticking to the path of least resistance at scale. Eg. even with modern image-gen - it took over a decade and 2 generations from being practical (and demonstrated) on console hardware - to actually being adopted by the industry. Mainly on the back of hw-vendors doing all the implementation work - and integration becoming nearly trivial now.
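To illustrate the jittered multi-frame idea mentioned above (the PS2-emulator-style resolution multiply) with a toy example: render the same frame four times with half-pixel offsets, then weave the four low-res images into one double-resolution output. Everything here is illustrative CPU-side C - the real emulator does the equivalent on the GPU via MRT:

    #include <stdio.h>

    #define W 4
    #define H 4

    /* Stand-in for "render the scene with the projection jittered by
       (dx, dy) pixels" - here just a gradient shifted by the offset. */
    static void render_jittered(float img[H][W], float dx, float dy)
    {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                img[y][x] = (x + dx) + 10.0f * (y + dy);
    }

    int main(void)
    {
        /* one render per position in the 2x2 sub-pixel grid */
        const float jit[4][2] = {{0.0f, 0.0f}, {0.5f, 0.0f},
                                 {0.0f, 0.5f}, {0.5f, 0.5f}};
        float pass[4][H][W], hi[2 * H][2 * W];

        for (int p = 0; p < 4; ++p)
            render_jittered(pass[p], jit[p][0], jit[p][1]);

        /* weave: each high-res pixel comes from the pass whose jitter
           matches that pixel's position inside its low-res pixel */
        for (int y = 0; y < 2 * H; ++y)
            for (int x = 0; x < 2 * W; ++x)
                hi[y][x] = pass[(y & 1) * 2 + (x & 1)][y / 2][x / 2];

        printf("hi[3][5] = %.2f\n", hi[3][5]);
        return 0;
    }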
 
It's a shame D2 or Virtual On 2 don't get talked about more as amazing-looking DC games. I loved D2, and games like Code Veronica started to have proper finger movement and polygons used for the insides of characters' mouths and their cheekbones as individual parts, giving us the start of proper facial animation for characters.

Virtual On 2 looks simply amazing to this day, with some of the best and most vibrant textures ever seen in a DC game and no slowdown whatsoever, no matter what was happening on screen.
 
I always thought Virtual On looked like shit, honestly

The first one on the Saturn looks a bag of shit now (again, a lack of lighting really hurt the visuals on that system), not to mention the mesh transparencies during the often screen-filling explosions.

Oratorio Tangram on Dreamcast, on the other hand, still looks brilliant. One of the best Model 3 conversions.

Shame we never got a PAL version for Dreamcast.
 

cireza

Member
The first one on the Saturn looks a bag of shit now (again, a lack of lighting really hurt the visuals on that system)
It's a 1996 game. Lighting can be found in a ton of Saturn games; it is not "lacking". Developers were only starting to get around to putting dynamic lighting in console games...
 

AMCC

Neo Member
Yes - which is exactly what I described. Simulation runs at half the framerate, and rendering output is doubled from interpolated sim outputs (ie. the game renders at 60, but plays at 30). […]
Nice post Faf, with some interesting thoughts. I think we were one of the first, if not the first, to exploit that? I wasn't aware other games on newer platforms were using a similar approach, as it was really only relevant on a platform with fill rate well in excess of CPU or T&L performance. The PS2 VUs were epic; however, as I'm sure you know, a lot of heavy lifting was needed to render 'quality' polygons and a decent image, which diluted things significantly from the raw performance!

On the surface, other than the Dreamcast, PS2 and GameCube with their high-speed embedded GPU VRAM (and tiles on the Dreamcast), everything - even very high-end modern hardware like the RTX 4090 - has suffered from a lack of GPU memory bandwidth relative to expected resolution and shader complexity, as far as rendering goes.

So I'm quite surprised if machines like the 360 used such an approach, though I suppose in gameplay with a Madden or FIFA the GPU isn't very stretched for fillrate?!

Oddly enough, my thought with modern GPUs flips that around. Shaders, raytracing etc. can be extremely expensive! DLSS3 is great as a per-pixel predictor and interpolator, however I think there is another more performant, potentially more accurate method given the raw shader and raytracing cost.

Cheers!
 

RaduN

Member
I still remember seeing this running on a big screen in the shop window of GAME and deciding there and then that I simply must have a Dreamcast. Nothing has impressed me that much since...


The motion capture was simply mind blowing.

The next game to make my jaw drop in terms of animation would be MGS2. It still is top tier even today, more than 20 years on.
 

Fafalada

Fafracer forever
Nice post Faf, with some interesting thoughts. I think we were one of the first, if not the first, to exploit that? I wasn't aware other games on newer platforms were using a similar approach, as it was really only relevant on a platform with fill rate well in excess of CPU or T&L performance.
The concessions vary on different platforms.
The PS4 example was specifically using display-list recording to render multiple frames 'for free' from the CPU's perspective; there was no good way to do that using traditional methods without tanking the framerate. Essentially it was a bit like having a 'software tiler' that could max out GPU utilization without being held back by the weak CPU.
The interesting bit was that vertex throughput was computationally trivial on AMD GPUs of that era - but there was a fixed-function limit on vertices/clock that could be a limiter with light shading workloads (eg. during Z-prepass). We worked around that by interleaving it with async post-process shaders (which was, in effect, another way of leveraging excess 'fillrate').

So I'm quite surprised if machines like the 360 used such an approach, though I suppose in gameplay with a Madden or FIFA the GPU isn't very stretched for fillrate?!
It depended on the target resolution (that was the first gen with proper variable resolution targets the user could select). But moreover, CPUs had some absolutely dreadful performance bottlenecks. Early versions of Madden were 30fps on both 360 and PS3 because the UX framework couldn't go any faster on the CPU side. So yes - there were a number of games in that era that underutilized the GPU in various ways until people got creative.

360 also had a small eDRAM that couldn't fit all target resolutions and had to resort to tiling - most games ignored it (lowered the resolution instead) - but there were exceptions, and brute-force CPU frame submission wouldn't cut it. IIRC Halo 3 also did something with outputting two lower-res frames every frame to enable its HDR.

DLSS3 is great as a per-pixel predictor and interpolator, however I think there is another more performant, potentially more accurate method given the raw shader and raytracing cost.
The thing with Frame-Gen gaining popularity now is that it inflates framerate for everything. Especially on PC - the ratio of CPU vs. GPU limitation is completely random (and even user controlled) so optimisation for either/or is a bit of a shitshow. Frame-gen is an 'elegant' solution to that problem as it 'multiplies' output for the entire system. I can see cross-platform developers in particular would find that appealing.
Back in the 2010s I was thinking a lot about context-dependent temporal treatment that can achieve similar things (with better quality and responsiveness), and during the VR days that became even more appealing - but none of it would be as 'plug&play' as slotting in a motion-vector based post-process.
Still - even with pixel-based framegen, I find the interpolation-based approach a bit naff. We could definitely do better - but again, alternatives will be more invasive in terms of the rendering pipeline. Going back to the whole 'path of least resistance' - in terms of adoption, I fear that is always going to be the clear winner.
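As a rough analogue of the display-list recording pattern mentioned earlier in this post, legacy OpenGL display lists show the principle: record the command stream once, then replay it several times with different per-frame state at almost no CPU cost. The PS4 mechanism was different; this is only an illustrative sketch (build with, e.g., cc dlist.c -lGL -lglut):

    #include <GL/glut.h>

    static GLuint scene;

    static void build_scene(void)
    {
        scene = glGenLists(1);
        glNewList(scene, GL_COMPILE);      /* record the commands once */
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();
        glEndList();
    }

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        /* replay four times: only the tiny matrix change touches the
           CPU; the recorded geometry is resubmitted by the driver */
        for (int i = 0; i < 4; ++i) {
            glLoadIdentity();
            glRotatef(90.0f * i, 0.0f, 0.0f, 1.0f);
            glCallList(scene);
        }
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutCreateWindow("display-list replay");
        build_scene();
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }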
 

RaduN

Member
I want to see the CAVE CV1000 arcade shmups ported to the Dreamcast.
They were coded for the SH-3 CPU, which is closely related to the DC's SH-4.

That would be mind-blowing.
 

Fat Frog

I advertised for Google Stadia
In my experience, nearly every indie game and probably most commercial games are just bottlenecking the graphics pipeline in stage 1

Soon...😎
 

Wolzard

Member
It seems this is exciting developers, much like when SGDK launched for the Sega Genesis: several new ports and games appeared, and the console was pushed further than anyone thought possible. Nowadays that console looks like a Neo Geo Lite.

 

Connxtion

Member
The real allure of the Dreamcast was those beautiful cases. They had that next-gen feel about them. Even the PlayStation fanboy inside of me got a boner. You'd see these games lined up on the rack next to the demo kiosk and you're effing sold. Still beautiful to this day.

Only downside to the PAL cases was they broke easily at the legs 😭

I liked the NTSC cases, but I'm from the UK so I was used to the broken legs on the PAL cases.

Oh, I miss the good old days of going to gforce and getting me some DC imports.
 