
What we know so far about the Nintendo NX with sources

Status
Not open for further replies.

Rösti

Unconfirmed Member
It's the opening of a store.

The best we'll get is: "No comment on NX."
That's what I expect at most, yeah. But still, it would be something. I think this is the first media event hosted by Nintendo in quite a while.

Some journalist will ask him about the NX and he'll say "we will start talking about it next fiscal year"
I'd be happy with that.
 


If I'm not mistaken, the latest PowerVR GPUs have a chip dedicated to that.

That would be spectacular, if true. I haven't paid much attention to PowerVR GPUs as of late, but I might have to!
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Speaking of ray-tracing, I really feel that the focus needs to shift towards hardware-based solutions. Regardless of how powerful a chip is, using it up for ray-tracing is just ridiculously inefficient when there are other things that a GPU needs to do. If you specialize hardware for it, not only could you get more accurate results (and much more quickly), you would free up lots of resources on the GPU.
There are a couple of issues with raytracing that neither of the modern processing paradigms handle well:
1. CPUs don't have enough parallelism - we need hundreds, nay, thousands of threads for good raytracing.
2. GPUs have the parallelism, but suffer badly from thread divergence, and that kills their efficiency at the task.

Basically, I want a couple-of-hundred SIMD cores with their own instruction stream decoders and schedulers, and TCM (cache coherence not required). Also, I want a chip that implements SIMD n-way sort in hw (say, sort8 in 8-way SIMD), via a sorting network (read: does the sort in a few ticks). That could significantly speed up several tree traversal algorithms (eg BVH, octrees).
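To make the sorting-network idea concrete, here's a toy Python simulation of a fixed 8-input network (Batcher's odd-even mergesort: 19 comparators in 6 stages). In hardware each stage would be a bank of parallel comparators, so the whole sort completes in about 6 ticks; the sketch below just models the stages sequentially.

```python
# Batcher's odd-even mergesort network for 8 inputs: 19 compare-exchange
# operations arranged in 6 data-independent stages. All pairs within a
# stage touch disjoint lanes, so in hardware they run in parallel.
SORT8_STAGES = [
    [(0, 1), (2, 3), (4, 5), (6, 7)],
    [(0, 2), (1, 3), (4, 6), (5, 7)],
    [(1, 2), (5, 6)],
    [(0, 4), (1, 5), (2, 6), (3, 7)],
    [(2, 4), (3, 5)],
    [(1, 2), (3, 4), (5, 6)],
]

def sort8(values):
    """Sort exactly 8 values with a fixed comparator network."""
    v = list(values)
    assert len(v) == 8
    for stage in SORT8_STAGES:
        for i, j in stage:          # one "tick" per stage in hardware
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
    return v
```

Because the comparator sequence is fixed and data-independent, it maps directly to silicon, which is exactly why a sort8 unit could finish in a handful of cycles regardless of input order.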
 

Thraktor

Member
I'm not sure I get this.

The Wii U has slow/horrible load times even when fully powered on.
The system is fine as long as you stay in the dash, but anything else is terrible.

Seems like a modular OS design issue or whatever, where they don't even cache anything in RAM, or if they do, it's quite a minor part of the "app", with the configuration being a major offender. Which in itself is quite surprising coming from the snappy (albeit obviously far less complex/feature-rich) UI of the Wii. :-\

Are the complaints about the slow Wii U OS really about the bootup speed?
I always boot from the quick start menu and it's not an issue.
(I always turn off the PS4 properly as well and never use sleep mode other than when I'm downloading something.)

Switching games, going into system settings or into the eShop is slow as hell though, and has nothing to do with standby power draw.

The PS4's app loading times benefit heavily from keeping them in RAM while the system's asleep (which is how the majority of users experience the console). As an example, the Playstation store takes about 7 seconds to load for me after a cold boot, which is still a bit quicker than the eShop, but certainly isn't instant. For typical users, though, it's pretty much instant, as it's already sitting in RAM. Ditto with apps like Netflix. That's what a lot of people are going to expect from the NX.

Again, though, my point is that you can achieve this while still maintaining a very low-power standby mode, so long as you avoid power-hungry desktop RAM like DDR3/4 or GDDR5. As an example, the iPhone 6S uses a single 2GB module of LPDDR4 and manages to achieve 10 days of standby time on a 6.55WHr battery. That comes to about 27.3mW of power draw during standby, and even if that was entirely the LPDDR4 you'd be looking at just over 100mW for 8GB, which could potentially be accommodated in a sub-500mW standby state. In fact, combine it with the jump down to a 28nm SoC and more aggressive power-gating of the CPU (which ARM reference designs are tailored for) and you could probably introduce instant-on functionality while actually dropping standby power consumption relative to the Wii U (although at that point PSU efficiency is probably the main limitation).
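For anyone who wants to check the arithmetic, a quick back-of-the-envelope script (all inputs are the post's assumptions, not measured values):

```python
# Rough standby-power estimate from the figures quoted above.
BATTERY_WH = 6.55        # iPhone 6S battery capacity (Wh)
STANDBY_DAYS = 10        # rated standby time
LPDDR4_GB = 2            # RAM in that phone
TARGET_GB = 8            # hypothetical console RAM

standby_w = BATTERY_WH / (STANDBY_DAYS * 24)   # average whole-phone draw
print(f"whole-phone standby: {standby_w * 1000:.1f} mW")   # ~27.3 mW

# Worst case: attribute the entire draw to the 2GB of LPDDR4, then scale.
ram_8gb_mw = standby_w * 1000 * (TARGET_GB / LPDDR4_GB)
print(f"upper bound for 8GB: {ram_8gb_mw:.0f} mW")         # ~109 mW
```

Which lines up with the "just over 100mW for 8GB" figure, well inside a sub-500mW standby budget.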

I actually do have one other quick question for you in relation to the previous discussion on 3DS BC; do 3DS games have access to the ARM9 and/or ARM7 and their dedicated memory pools?
 

thefro

Member
Reggie will be talking about the Wii U's fantastic lineup coming soon including The Legend of Zelda Twilight Princess HD, Pokken Tournament, and Star Fox Zero. And what about Fire Emblem Fates for 3DS?

"Look, we've got great games coming out right now and the best first party lineup of any platform holder over the next two months".
 

Pif

Banned
Reggie will be talking about the Wii U's fantastic lineup coming soon including The Legend of Zelda Twilight Princess HD, Pokken Tournament, and Star Fox Zero. And what about Fire Emblem Fates for 3DS?

"Look, we've got great games coming out right now and the best first party lineup of any platform holder over the next two months".
I'm actually expecting all those games to flop review- and sales-wise.

Predictions:

Pokken - 7/10
Zelda - 7/10
Starfox - 5/10

All combined, won't even reach 1 million sales.
 

Thanks for this! It led me to another resource that went into more depth about how the ray-tracing unit works. After reading about it, I'm very, VERY impressed with what they've been able to pull off!

There are a couple of issues with raytracing that neither of the modern processing paradigms handle well:
1. CPUs don't have enough parallelism - we need hundreds, nay, thousands of threads for good raytracing.
2. GPUs have the parallelism, but suffer badly from thread divergence, and that kills their efficiency at the task.

Basically, I want a couple-of-hundred SIMD cores with their own instruction stream decoders and schedulers, and TCM (cache coherence not required). Also, I want a chip that implements SIMD n-way sort in hw (say, sort8 in 8-way SIMD), via a sorting network (read: does the sort in a few ticks). That could significantly speed up several tree traversal algorithms (eg BVH, octrees).

That would definitely require its own dedicated hardware, but I'm liking it already!

Blu, what do you think about Imagination Technologies' ray-tracing chip?

Imagination Technologies said:
In the PowerVR architecture (Figure 3), the arrays are grouped into Unified Shading Clusters (USCs). The process of scanline rendering naturally results in a high degree of this type of coherence, so the arrays can be kept busy, with ALU latencies and the inevitable memory access latencies masked to a certain degree by task switching.

There are a number of data masters feeding into the schedulers to handle vertex-, pixel-, and compute-related tasks. Once the shading operation is done, the result is output into a data sink for further processing, depending on what part of the rendering pipeline is being handled.


The ray tracing unit (RTU) can be added to this list as both a data sink and a data master so that it can both receive (sink) new ray queries from the shaders and dispatch (master) ray/triangle intersection results back for shading. It contains registers for a large number of complete ray queries (with user data) attached to a SIMD array of fixed-function "Axis Aligned Bounding Box vs. Ray" testers and "Triangle vs. Ray" testers.

Importantly, there is a coherence gathering unit which assembles memory access requests into one of two types of coherency queues: intersection queues and shading queues, then schedules them for processing. Intersection queues are scheduled on to the SIMD AABB or triangle testers; shading queues are mastered out to the USCs.

Intersection queues are created and destroyed on the fly and represent a list of sibling Bounding Volume Hierarchy (BVH) nodes or triangles to be streamed in from off-chip memory. Initially the queues are typically full naturally because the root BVH nodes span a large volume in the scene and therefore most rays hit them consistently. When a full queue of rays is to be tested against the root of the hierarchy, the root nodes are read from memory and the hardware can intersect rays against nodes and/or triangles as appropriate.

For each node that hits, a new intersection queue is dynamically created and rays that hit that node are placed into the new child queue. If the child queue is completely full (which is common at the top of the BVH), it is pushed onto a ready stack and processed immediately.

If the queue is not full (which occurs a little deeper in the tree, especially with scattered input rays from the USC), it is retained in a queue cache until more hits occur against that same BVH node at a later time. In this mode, the queues effectively represent an address in DRAM to start reading in the future. This has the effect of coherence gathering rays into regions of 3D space and will dynamically spend the queues on areas of the scene which are more challenging to collect coherence against.

This process continues in a streaming fashion until the ray traverses to the triangle leaf nodes; when a ray is no longer a member of any intersection queue, the closest triangle has been found.

At this point, a new shading queue is created, but this time it is coherence gathering on the shading state that is associated with that triangle. Once a shading queue is full, this becomes a task which is then scheduled for shader execution. Uniforms and texturing state are loaded into the common store and parallel execution of the shading task begins: each ray hit result represents a shading instance within that task.

The behavior is then identical to that of a rasterization fragment shader with the added feature that shaders can create new rays using a new instruction added to the PowerVR shader instruction set, and send them as new ray queries to the RTU.

The RTU returns ray/triangle intersection results to the shaders in a different order than that in which they entered due to the coherence gathering. A ray that enters the RTU early in the rendering of a frame may be the last to leave depending on coherence conditions.

This approach to dynamic coherence gathering has the effect of parallelizing on rays instead of pixels, which means that even rays that originate from totally different ray trees from other pixels can be collected together to maximize all available coherence that exists in the scene. This then decouples the pipelines, creating a highly latency-tolerant system and enabling an extensive set of reordering possibilities.

You can read more about it here:

http://www.embedded.com/design/real-world-applications/4430971/Ray-tracing--the-future-is-now



It works in conjunction with the SIMDs already on the GPU instead of having its own cores, so this wouldn't be my ideal solution, but they've done an INCREDIBLE job at maximizing the efficiency of those cores for the purposes of ray-tracing!
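As a rough illustration of the coherence-gathering scheme the article describes, here's a toy Python model. The QUEUE_SIZE of 4 and the node_of function are made-up stand-ins for the real hardware queues and BVH traversal; the point is just the binning behaviour: rays are grouped by the node they hit, full queues are dispatched immediately, and partially full queues are retained in the hope of gathering more coherence later.

```python
from collections import defaultdict

QUEUE_SIZE = 4  # a real hardware queue would hold far more rays

def gather(rays, node_of):
    """Toy model of the RTU's intersection queues: bin rays by the BVH
    node they hit next (node_of), dispatch a batch to the testers only
    when a queue fills, and retain partially full queues."""
    queues = defaultdict(list)
    batches = []
    for ray in rays:
        node = node_of(ray)
        q = queues[node]
        q.append(ray)
        if len(q) == QUEUE_SIZE:          # full queue -> process now
            batches.append((node, q[:]))
            q.clear()
    retained = {n: q for n, q in queues.items() if q}
    return batches, retained
```

Every dispatched batch reads one region of the BVH from memory for a whole queue of rays at once, which is the memory-coherence win the design is after; the retained queues are blu's pathological case, rays still waiting for enough siblings to justify the fetch.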
 
They have said multiple times that they're building an OS so they can develop games for both units.



http://mynintendonews.com/2014/02/0...helds-will-no-longer-be-completely-different/

(this isn't the only place where they refer to that kind of info)
Yeah, that quote says enough, really.
They want to be like iOS/Android in the sense that it'll prevent droughts. How can they do that?
A (mostly) shared library.
And if AAA western 3rd parties are really a lost cause, Nintendo should go "F#ck it!" and make a device catering to their own games.
Make something like the Wii again, which wasn't meant to live off of 3rd parties like the Wii U was.
Make it cheap enough to be out of the same price ballpark as the X1.
Of course, they'd need to make some changes, like hiring more support studios and expanding their current dev teams to handle HD development on the handheld and console better, but that seems like the right direction to go.
Work with Capcom on Monster Hunter and SE on DQ and Kingdom Hearts while partnering with PG, Namco, and Tecmo to make more 2nd-party games.
If they can make it powerful enough to get 3rd parties, that would be great as well, but marketing it as a Nintendo system where they're depending on 3rd parties to fill in the gaps, and then having the worst version of all those games, is probably not a good idea.
 

Clefargle

Member
Rösti;195878648 said:
Hopefully the media tour at Nintendo NY store today can bring something on NX. I assume Reggie will be there, maybe Scott Moffitt too. And press will be there, so someone asking a question about the system is more or less guaranteed. Or Reggie will Kimishima the media by stating right off the bat they do not plan on revealing any information today regarding NX. We'll see what happens.


NX is only one letter away from NY

Just sayin.....
 

Fawk Nin

Member
I'm actually expecting all those games to flop review- and sales-wise.

Predictions:

Pokken - 7/10
Zelda - 7/10
Starfox - 5/10

All combined, won't even reach 1 million sales.

Whatever about reviews, Pokken and Zelda together will easily hit 1 mil in my opinion.
Star Fox...I agree with you.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Thanks for this! It led me to another resource that went into more depth about how the ray-tracing unit works. After reading about it, I'm very, VERY impressed with what they've been able to pull off!



That would definitely require its own dedicated hardware, but I'm liking it already!

Blu, what do you think about Imagination Technologies' ray-tracing chip?
I think they've attacked the divergence / decoherentisation problem smartly, but they too can become victims to pathological cases where a few rogue rays (no pun whatsoever) kill the efficiency of the entire system. E.g. what happens to their coherence gathering when there's no coherence?
 
It's a banana.

In case you are serious: it's neither, it's both. Nobody knows, because NOTHING about the NX is known.
Iwata referred to the new systems as brothers; you can't have brothers if you're an only child.
Whether NX refers only to the console or only to the handheld is a different story, but there are two devices on the way.
And we know a bit thanks to Iwata quotes and mentions from developers, just no specs.
 

Thraktor

Member
Thanks for this! It led me to another resource that went into more depth about how the ray-tracing unit works. After reading about it, I'm very, VERY impressed with what they've been able to pull off!



That would definitely require dedicated hardware, but I'm liking it already!

Blu, what do you think about Imagination Technologies' ray-tracing chip?



You can read more about it here:

http://www.embedded.com/design/real-world-applications/4430971/Ray-tracing--the-future-is-now



It works in conjunction with the SIMDs already on the GPU instead of having its own cores, so this wouldn't be my ideal solution, but they've done an INCREDIBLE job at maximizing the efficiency of those cores for the purposes of ray-tracing!

Their solution still depends on heavily coherent rays, which is an inherent limitation of attempting to make a ray-tracing GPU that also performs well with rasterisation (i.e. consists primarily of SIMT ALUs). The issue with these kinds of solutions is that the more complex the scene gets, the less efficient the hardware becomes. That is, if a scene becomes two times as complicated, then performance will drop by more than 50%. By the time you get to the "holy grail" of ray-tracing, with lifelike scenes, the rays are so incoherent that the SIMT paradigm becomes cripplingly inefficient.

Of course, Imagination Technologies aren't going after the holy grail of raytracing just yet; they're producing a GPU which can do basic ray-tracing and also good rasterised graphics, and they seem to be the most successful of anyone thus far in achieving that. But when we get to an actual ray-tracing console a few years down the line, it's unlikely to look much like IT's Wizard chips (or any GPUs we'd recognise today).

Personally I'm of the opinion that hardware-based Metropolis light transport is what's going to provide the breakthrough for real-time raytracing of highly complex scenes. Path-tracing, which is what all current ray-tracing hardware implements (and what Blu is advocating in his hypothetical ray-tracer) has a limitation in that the memory access patterns for traversal (which constitutes the bulk of the computational demand) are inherently unpredictable (if you can predict the memory a ray traversal will need then you've solved that ray traversal already). You need to keep almost all of the necessary parts of the acceleration structure in local memory in order to prevent the latency hits of constantly having to wait for main memory. (Hardware path-tracers already have big issues with cache misses with pretty simple geometry*) To do this in highly detailed games could require an unreasonably large amount of die space being dedicated to memory, limiting the number of traversal units you can use.

Metropolis light transport, by comparison, has the benefit that in implementing the Metropolis-Hastings algorithm, the designer of a hardware MLT unit can choose pretty much whatever mutation function they want. The mutation function defines what data the MLT unit will require when operating, and a designer can hence choose a mutation function that maximises the probability (or even guarantees) that the necessary data will already be in a local data store/cache. The cache itself could also be designed to ensure a maximal width of the mutation function, to minimise time to convergence. This can be done regardless of the complexity of the scene, and can ensure that the MLT unit runs at close to 100% efficiency and rarely stalls to wait for data. MLT also has some advantages in terms of rendering scenes which are primarily indirectly lit, and things like caustics, as it explores local maxima more thoroughly.

*SGRT: A Mobile GPU Architecture for Real-Time Ray Tracing, Lee, et al, ACM HPG 2013
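For readers unfamiliar with Metropolis-Hastings, here's a minimal 1D sampler sketch in Python. The point it illustrates is the one above: the mutation (proposal) function is a free design parameter, so a hardware designer can pick one with predictable, local data access. The target density and step size below are arbitrary toy choices, not anything from a real renderer.

```python
import random, math

def metropolis_hastings(log_density, mutate, x0, n_samples, seed=0):
    """Minimal Metropolis-Hastings sampler. `mutate` is the proposal
    (mutation) function; as long as it's symmetric, any choice leaves
    the target distribution invariant -- the freedom a hardware MLT
    unit would exploit to keep its memory accesses local."""
    rng = random.Random(seed)
    x, lp = x0, log_density(x0)
    samples, accepted = [], 0
    for _ in range(n_samples):
        y = mutate(x, rng)                    # propose a mutation of x
        lq = log_density(y)
        # accept with probability min(1, density(y)/density(x))
        if math.log(rng.random() + 1e-300) < lq - lp:
            x, lp = y, lq
            accepted += 1
        samples.append(x)
    return samples, accepted / n_samples

# Toy target: unnormalised standard normal; mutation: small local step.
samples, rate = metropolis_hastings(
    log_density=lambda x: -0.5 * x * x,
    mutate=lambda x, rng: x + rng.gauss(0.0, 0.8),
    x0=0.0, n_samples=5000)
```

In a light-transport setting, x would be a whole path rather than a number, and "small local step" would be a path mutation chosen so the BVH data it touches is already on-chip.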
 

10k

Banned
It's a smart TV with a disc slot that comes with a remote and is compatible with all Wii Remotes and accessories, including the GamePad.

Controller comes bundled with it.
 

thefro

Member
Rösti;195878648 said:
Hopefully the media tour at Nintendo NY store today can bring something on NX. I assume Reggie will be there, maybe Scott Moffitt too. And press will be there, so someone asking a question about the system is more or less guaranteed. Or Reggie will Kimishima the media by stating right off the bat they do not plan on revealing any information today regarding NX. We'll see what happens.


Looks like no Reggie (probably at DICE), just Scott Moffitt

https://twitter.com/WootiniGG/status/700683972318248961
https://twitter.com/DualShockers/status/700684938618781696
https://twitter.com/RayStrazdas/status/700687493260320768

They are using the red & silver/white branding that they use for most of their other US retail spaces.
 

Luigiv

Member
I think they've attacked the divergence / decoherentisation problem smartly, but they too can become victims to pathological cases where a few rogue rays (no pun whatsoever) kill the efficiency of the entire system. E.g. what happens to their coherence gathering when there's no coherence?

Their solution still depends on heavily coherent rays, which is an inherent limitation of attempting to make a ray-tracing GPU that also performs well with rasterisation (i.e. consists primarily of SIMT ALUs). The issue with these kinds of solutions is that the more complex the scene gets, the less efficient the hardware becomes. That is, if a scene becomes two times as complicated, then performance will drop by more than 50%. By the time you get to the "holy grail" of ray-tracing, with lifelike scenes, the rays are so incoherent that the SIMT paradigm becomes cripplingly inefficient.

Of course, Imagination Technologies aren't going after the holy grail of raytracing just yet, they're producing a GPU which can do basic ray-tracing and also good rasterised graphics, and they seem to be the most successful of anyone thus far in achieving that. But when we get to an actual ray-tracing console in a few years down the line it's unlikely to look much like IT's Wizard chips (or any GPU's we'd recognise today).

Personally I'm of the opinion that hardware-based Metropolis light transport is what's going to provide the breakthrough for real-time raytracing of highly complex scenes. Path-tracing, which is what all current ray-tracing hardware implements (and what Blu is advocating in his hypothetical ray-tracer) has a limitation in that the memory access patterns for traversal (which constitutes the bulk of the computational demand) are inherently unpredictable (if you can predict the memory a ray traversal will need then you've solved that ray traversal already). You need to keep almost all of the necessary parts of the acceleration structure in local memory in order to prevent the latency hits of constantly having to wait for main memory. (Hardware path-tracers already have big issues with cache misses with pretty simple geometry*) To do this in highly detailed games could require an unreasonably large amount of die space being dedicated to memory, limiting the number of traversal units you can use.

Metropolis light transport, by comparison, has the benefit that in implementing the Metropolis-Hastings algorithm, the designer of a hardware MLT unit can choose pretty much whatever mutation function they want. The mutation function defines what data the MLT unit will require when operating, and a designer can hence choose a mutation function that maximises the probability (or even guarantees) that the necessary data will already be in a local data store/cache. The cache itself could also be designed to ensure a maximal width of the mutation function, to minimise time to convergence. This can be done regardless of the complexity of the scene, and can ensure that the MLT unit runs at close to 100% efficiency and rarely stalls to wait for data. MLT also has some advantages in terms of rendering scenes which are primarily indirectly lit, and things like caustics, as it explores local maxima more thoroughly.

*SGRT: A Mobile GPU Architecture for Real-Time Ray Tracing, Lee, et al, ACM HPG 2013

Hey, I know a very tiny bit of VHDL, and you two seem to know what you're talking about. Let's band together, grab a couple of FPGAs and develop our own ray-tracing processor design, with blackjack and hookers... in fact, forget the ray-tracing processor.
 

Thraktor

Member

Don't worry, just read this 1,200-page book on physically based rendering (there's also a third edition coming out this summer if you're willing to wait). Then read this book for a comprehensive analysis of the Metropolis-Hastings algorithm (you'll probably want a maths or statistics background first, but you pretty much only need to read the first six chapters). And then find a good book on high-level ASIC design and you're set.

More seriously, though, the design of ray-tracing hardware is a very young field which requires a lot of understanding of maths and statistics as well as computer graphics and hardware design. While I could go into a considerably greater degree of detail, it would still be very difficult to understand without the necessary background, and would simply derail the thread even further (as NX is quite obviously not going to have a ray-tracing GPU).
 

Roe

Member


So what I'm getting out of this is: Nintendo NX will basically bring the PC platform to everything Nintendo.

I wouldn't be surprised if we see crossover play, like PC games on your Wii U and vice versa.
 

10k

Banned


So what I'm getting out of this is: Nintendo NX will basically bring the PC platform to everything Nintendo,

I wouldn't be surprised if we see crossover play, like PC games on your Wii U and vice versa.
What you're getting out of this is wrong.

All that picture means is that your Nintendo account can be logged in, registered, and/or used from those devices. But for actual Nintendo games you're gonna need a Nintendo device, or a device that plays their mobile games.
 

Sadist

Member


So what I'm getting out of this is: Nintendo NX will basically bring the PC platform to everything Nintendo,

I wouldn't be surprised if we see crossover play, like PC games on your Wii U and vice versa.
Nah. It means that you use your Nintendo account on PCs, smart devices and their dedicated hardware.
 
I think they've attacked the divergence / decoherentisation problem smartly, but they too can become victims to pathological cases where a few rogue rays (no pun whatsoever) kill the efficiency of the entire system. E.g. what happens to their coherence gathering when there's no coherence?

Their solution still depends on heavily coherent rays, which is an inherent limitation of attempting to make a ray-tracing GPU that also performs well with rasterisation (i.e. consists primarily of SIMT ALUs). The issue with these kinds of solutions is that the more complex the scene gets, the less efficient the hardware becomes. That is, if a scene becomes two times as complicated, then performance will drop by more than 50%. By the time you get to the "holy grail" of ray-tracing, with lifelike scenes, the rays are so incoherent that the SIMT paradigm becomes cripplingly inefficient.

Of course, Imagination Technologies aren't going after the holy grail of raytracing just yet, they're producing a GPU which can do basic ray-tracing and also good rasterised graphics, and they seem to be the most successful of anyone thus far in achieving that. But when we get to an actual ray-tracing console in a few years down the line it's unlikely to look much like IT's Wizard chips (or any GPU's we'd recognise today).

Personally I'm of the opinion that hardware-based Metropolis light transport is what's going to provide the breakthrough for real-time raytracing of highly complex scenes. Path-tracing, which is what all current ray-tracing hardware implements (and what Blu is advocating in his hypothetical ray-tracer) has a limitation in that the memory access patterns for traversal (which constitutes the bulk of the computational demand) are inherently unpredictable (if you can predict the memory a ray traversal will need then you've solved that ray traversal already). You need to keep almost all of the necessary parts of the acceleration structure in local memory in order to prevent the latency hits of constantly having to wait for main memory. (Hardware path-tracers already have big issues with cache misses with pretty simple geometry*) To do this in highly detailed games could require an unreasonably large amount of die space being dedicated to memory, limiting the number of traversal units you can use.

Metropolis light transport, by comparison, has the benefit that in implementing the Metropolis-Hastings algorithm, the designer of a hardware MLT unit can choose pretty much whatever mutation function they want. The mutation function defines what data the MLT unit will require when operating, and a designer can hence choose a mutation function that maximises the probability (or even guarantees) that the necessary data will already be in a local data store/cache. The cache itself could also be designed to ensure a maximal width of the mutation function, to minimise time to convergence. This can be done regardless of the complexity of the scene, and can ensure that the MLT unit runs at close to 100% efficiency and rarely stalls to wait for data. MLT also has some advantages in terms of rendering scenes which are primarily indirectly lit, and things like caustics, as it explores local maxima more thoroughly.

*SGRT: A Mobile GPU Architecture for Real-Time Ray Tracing, Lee, et al, ACM HPG 2013

Yeah, the coherency dependency (and the fact that it's a hybridization of the rasterization process) ensures that this won't be an all-encompassing solution, but I'm just glad that significant progress has been made in the field of ray-tracing, because only a few years ago it seemed like you wouldn't be able to use ray-tracing (for full-scene light transport, not just shadows or reflections) in real time for at least a few decades. Now it seems that we may not be that far off from achieving the holy grail of real-time 3D graphics.

I agree with Thraktor on investing in hardware-based MLT (or even photon-mapped) solutions, though I'd also like to see some hardware solutions to tackle radiosity as well.
 

TheMoon

Member
Iwata referred to the new systems as brothers; you can't have brothers if you're an only child.
Whether NX refers only to the console or only to the handheld is a different story, but there are two devices on the way.
And we know a bit thanks to Iwata quotes and mentions from developers, just no specs.

I assume people read OPs (even though I know better). That's covered in the OP so I don't feel the need to repeat the basics again^^
 

beril

Member
Rösti;195878648 said:
Hopefully the media tour at Nintendo NY store today can bring something on NX. I assume Reggie will be there, maybe Scott Moffitt too. And press will be there, so someone asking a question about the system is more or less guaranteed. Or Reggie will Kimishima the media by stating right off the bat they do not plan on revealing any information today regarding NX. We'll see what happens.


Surprise NX launch in the Nintendo World Store today!!!
 

Thraktor

Member
Yeah, the coherency dependency (and the fact that it's a hybridization of the rasterization process) ensures that this won't be an all-encompassing solution, but I'm just glad that significant progress has been made in the field of ray-tracing because it was only a few years ago that it seemed like you wouldn't be able to use ray-tracing (for full scene light transport, not just shadows or reflections) in real time for at least a few decades. Now it seems that we may not be that far off in achieving the holy grail of real-time 3D graphics accomplishments.

I agree with Thraktor on investing in hardware-based MLT (or even photon-mapped) solutions, though I'd also like to see some hardware solutions to tackle radiosity as well.

Photon mapping is actually a pretty good application for hardware like Imagination Technologies', as you can use it as part of a hybrid renderer to incorporate GI into a rasterised game with far less computational cost than full path-tracing/MLT/etc. For use cases that require backwards compatibility with fully rasterised software (e.g. PC, mobile), I can see it getting quite a bit of use.
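To illustrate the photon-map lookup such a hybrid renderer would run at each shading point, here's a toy 2D radiance estimate in Python. The brute-force nearest-neighbour search stands in for the kd-tree a real photon mapper would use, and the photon positions/powers are made-up inputs.

```python
import math

def radiance_estimate(photons, point, k):
    """Core of a photon-map lookup: gather the k nearest stored photons
    around a shading point and divide their total power by the area of
    the disc they cover. `photons` is a list of ((x, y), power) pairs.
    (Toy version: brute-force search over 2D points.)"""
    by_dist = sorted(photons, key=lambda p: math.dist(p[0], point))
    nearest = by_dist[:k]
    r = math.dist(nearest[-1][0], point)   # radius of the gathered disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r * r)
```

The appeal for hardware is that this is a biased but bounded, cache-friendly query, unlike the unpredictable traversal of full path tracing.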
 
The cold realization that this topic, and NX info in general, is in limbo until after April 2016.

Even then, there are no guarantees that they will say shit till E3.
 
Photon mapping is actually a pretty good application for hardware like Imagination Technologies', as you can use it as part of a hybrid renderer to incorporate GI into a rasterised game with far less computational cost than full path-tracing/MLT/etc. For use cases that require backwards compatibility with fully rasterised software (e.g. PC, mobile), I can see it getting quite a bit of use.

Great, now I wanna see what IT can do with photon-mapping. You're right, a biased algorithm like photon mapping would work pretty well with their setup. Though looking at their history, it would seem that they have their feet firmly planted in the field of ray-tracing.
 