Epic sheds light on the data streaming requirements of the Unreal Engine 5 demo

Four years ago I was in between jobs. Realtime graphics isn't rocket science. It doesn't deserve the hype you guys give it to discredit my skills. I was quite surprised at how easy it was when I started working with it.

It's not rocket science, but it's not the same as offline rendering.
 
It's not rocket science, but it's not the same as offline rendering.
No, it's not the same. But the differences definitely don't mean someone who has worked in offline can't grasp it well enough to give insightful information. I've worked through basically the entire graphics pipeline and it's surprisingly easy. I don't see anyone from an offline background having any problem learning RT graphics fairly quickly.

The point is to stop thinking that I don't have a background in it and therefore can't give meaningful information about RT graphics. I've worked in both, so I can speak from experience in both. A lot of graphics programmers in the gaming industry have NOT worked in film, but I have no doubt they would grasp it fairly quickly.
 
Do they say how much bandwidth it costs to keep that memory pool well fed? 768MB is awfully small, but that's the in-view data, not what's needed to keep it full: for example, when the camera moves around fast at the end of the demo compared to when it moves slowly while the lady is walking around.
 
No, it's not the same. But the differences definitely don't mean someone who has worked in offline can't grasp it well enough to give insightful information. I've worked through basically the entire graphics pipeline and it's surprisingly easy. I don't see anyone from an offline background having any problem learning RT graphics fairly quickly.

The point is to stop thinking that I don't have a background in it and therefore can't give meaningful information about RT graphics. I've worked in both, so I can speak from experience in both. A lot of graphics programmers in the gaming industry have NOT worked in film, but I have no doubt they would grasp it fairly quickly.

Okay, so please enlighten us on how having a 768MB streaming pool proves that the SSDs of the next-gen consoles are overkill and not being pushed in this demo. All I'm asking is that you back up the claims you make and endorse.

What was shown in the OP that you would classify as "reality surfacing" and that would help end the arguments with "Sony bozos"?
 
Facts redeeming logic over marketing, no surprise.
The I/O bottleneck has always been mixed/random read capability (IOPS) when streaming data, not sequential reads. That number is a lot lower than the raw speed of most SSDs.
768MB/s is already a huge number in this case, and with optimisation and different scenery, higher can probably be hit.
We really need to know this number, or the IOPS of both, in this case. It's the important number here.

To elaborate on this: on UserBenchmark, the top PC SSDs average around 100MB/s for random reads, while they hit close to 2000MB/s for sequential.
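
To put that in concrete terms, here is a quick conversion sketch (C++; the 4KB block size is an assumption, real streaming reads are often larger):

```cpp
#include <cstdio>

int main() {
    // Illustrative figures echoing the post above; block size is assumed.
    const double random_mb_s = 100.0;      // quoted random-read throughput
    const double sequential_mb_s = 2000.0; // quoted sequential throughput
    const double block_kb = 4.0;           // assumed 4KB random-read blocks

    // MB/s -> reads per second at the assumed block size.
    double iops = random_mb_s * 1024.0 / block_kb;
    std::printf("~%.0f IOPS at 4KB blocks; sequential is %.0fx faster\n",
                iops, sequential_mb_s / random_mb_s);
    return 0;
}
```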

Don't let marketing and PR get the best of you, people.
 
Okay, so please enlighten us on how having a 768MB streaming pool proves that the SSDs of the next-gen consoles are overkill and not being pushed in this demo. All I'm asking is that you back up the claims you make and endorse.

What was shown in the OP that you would classify as "reality surfacing" and that would help end the arguments with "Sony bozos"?

I have not watched that video. The SSDs on next-gen could very well be overkill. Why? Because I have ALWAYS maintained that no matter what data you pass down the pipe, it has to be processed by the GPU and written to a pixel on the screen. Something has to be the bottleneck of any pipeline, no matter the hardware. Do I know if 5GB/s of data is easy to render at 4K/60FPS or 4K/30FPS? No one knows that yet. There are too many factors that make up that data to make a straightforward call.

Next year when we get to play with that demo on PCs, I'm sure you'll get a much more thorough dissection of this demo from all kinds of developers with various hardware.
 
It's going to be a shocker for some, and for others like me not so much, because I've been saying this for months now. I hate to break it to some of you, but that demo's data streaming could be handled by a five-year-old SATA SSD.

[image: 8wl1rua.png]


768MB is the in-view streaming requirement for the hardware to handle that demo. 768 MEGABYTES... COMPRESSED. And what was the cost of this on the rendering end?

Well, this is the result...

[image: dQOnqne.png]


This confirms everything I've said. Not that these SSDs are useless, because they're 100% not; that data streaming would be impossible with mechanical drives. However, and this is a big however: that amount of visual data and asset streaming is already bottlenecking the renderer, bringing the GPU to its knees. There's very little cost to the CPU, as you'll see below, but as noted about 100 different times on this website and scoffed at constantly by detractors, the GPU will always be the limiting factor.

[image: lNv2lKl.png]


I've maintained this since square one: Microsoft and Sony both went overkill on their SSDs. That amount of I/O increase can't be matched by the rendering pipeline in terms of the on-demand volume of data streaming these SSDs allow.

So what's the point here? You've got two systems with SSDs far more capable than they need to be, but one came at a particularly high cost everywhere else in the system. I'll let you figure out which one that is and where.

[image: deadest.png]
Ummmm, I'm a 40-year-old guy who stays high and is a bit on the slow side. Can you put this in layman's terms?
 
I have not watched that video. The SSDs on next-gen could very well be overkill. Why? Because I have ALWAYS maintained that no matter what data you pass down the pipe, it has to be processed by the GPU and written to a pixel on the screen. Something has to be the bottleneck of any pipeline, no matter the hardware. Do I know if 5GB/s of data is easy to render at 4K/60FPS or 4K/30FPS? No one knows that yet. There are too many factors that make up that data to make a straightforward call.

Next year when we get to play with that demo on PCs, I'm sure you'll get a much more thorough dissection of this demo from all kinds of developers with various hardware.
You don't know if the PS5 GPU can consume 5GB/s of data?? No one knows? 😂
 
I have not watched that video. The SSDs on next-gen could very well be overkill. Why? Because I have ALWAYS maintained that no matter what data you pass down the pipe, it has to be processed by the GPU and written to a pixel on the screen. Something has to be the bottleneck of any pipeline, no matter the hardware. Do I know if 5GB/s of data is easy to render at 4K/60FPS or 4K/30FPS? No one knows that yet. There are too many factors that make up that data to make a straightforward call.

Next year when we get to play with that demo on PCs, I'm sure you'll get a much more thorough dissection of this demo from all kinds of developers with various hardware.

Well, this is what I was looking for. Anyone claiming they found proof the PS5 isn't good enough: you didn't, because according to VFXVeteran himself, NO ONE KNOWS YET. Is anyone else posting here more experienced with this stuff? Can we get a more professional viewpoint than this on NeoGAF? If the answer is no, then anyone telling us they've found the key to the PS5 being a poorly made system that can't handle what it's claimed to, or whatever else, is just spreading FUD and needs to wait and see.
 
You don't know if the PS5 GPU can consume 5GB/s of data?? No one knows? 😂
It's amazing to me that nobody knows after all the SSD posts. At this point I thought the PS5 SSD could heal lepers, walk on water, and turn water into wine. Are you telling me that's not true?
 
Well, this is what I was looking for. Anyone claiming they found proof the PS5 isn't good enough: you didn't, because according to VFXVeteran himself, NO ONE KNOWS YET. Is anyone else posting here more experienced with this stuff? Can we get a more professional viewpoint than this on NeoGAF? If the answer is no, then anyone telling us they've found the key to the PS5 being a poorly made system that can't handle what it's claimed to, or whatever else, is just spreading FUD and needs to wait and see.
You could say the same about claims of how capable it is. It's all PR speak at this point.
 
I have not watched that video. The SSDs on next-gen could very well be overkill. Why? Because I have ALWAYS maintained that no matter what data you pass down the pipe, it has to be processed by the GPU and written to a pixel on the screen. Something has to be the bottleneck of any pipeline, no matter the hardware. Do I know if 5GB/s of data is easy to render at 4K/60FPS or 4K/30FPS? No one knows that yet. There are too many factors that make up that data to make a straightforward call.

Next year when we get to play with that demo on PCs, I'm sure you'll get a much more thorough dissection of this demo from all kinds of developers with various hardware.
So you come in here not having watched the video, doubling down on the OP's false claims, which multiple people have already corrected, and insulting "PS zealots" out of the blue, yet you play the victim because people aren't responding nicely enough for your snowflakey taste?
 
4yrs ago was when I was in between jobs. Realtime graphics isn't rocket science. It doesn't deserve the hype that you guys try to give it to discredit my skills. I was quite surprised how easy it is when I started working with it.
No matter how hard you try, some people won't let you win, even when you're right. My advice: let them, and use your energy elsewhere.
 
You don't know if the PS5 GPU can consume 5GB/s of data?? No one knows? 😂

Consume means render to a pixel. What data are we talking about? Just points with nothing but transforms, or gigs of geometry with material parameters and shaders on them, rendering to a 4K backbuffer at 60FPS? See how no one knows that yet?
 
Consume means render to a pixel. What data are we talking about? Just points with nothing but transforms, or gigs of geometry with material parameters and shaders on them, rendering to a 4K backbuffer at 60FPS? See how no one knows that yet?
Well, one thing we do know: an SSD, even at the fancy PS5 speeds, will not be GPU-bottlenecked. Lol, seriously.
 
Consume means render to a pixel. What data are we talking about? Just points with nothing but transforms, or gigs of geometry with material parameters and shaders on them, rendering to a 4K backbuffer at 60FPS? See how no one knows that yet?

WRONG

The Cerny undergrads in the spec thread know everything. You should just accept it; all your experience means nothing here 🤣
 
So you come in here not having watched the video, doubling down on the OP's false claims, which multiple people have already corrected, and insulting "PS zealots" out of the blue, yet you play the victim because people aren't responding nicely enough for your snowflakey taste?

So let me get this straight. I tone down the hyperbole and explain like a professional, and that still warrants a snide remark? So basically all you understand is rudeness and insults. Check!
 
The OP isn't making a false claim. He is assuming based on some logic and some history. It could be far-fetched, but it's most likely not.

If we go strictly by the UE5 demo, the bottleneck is indeed the resolution and framerate at which it was running. If the SSD was streaming 768MB of data and the PS5 could only handle 1440p/30FPS, then that would be an upper bound on the SSD bandwidth for THAT particular demo. That doesn't mean another developer couldn't get more than 768MB at the same res/FPS with more detail. His assumption is that if the GPU could handle more, the demo would have been shown at 4K/60FPS. That would indicate you could increase that SSD stream pool a lot more to see where the GPU could no longer keep up with its 4K/60FPS target.

Is it an assumption? Sure. Is it a GROSS assumption based on unfounded logic? No.
 
His assumption is that if the GPU could handle more, the demo would have been shown at 4K/60FPS. That would indicate you could increase that SSD stream pool a lot more to see where the GPU could no longer keep up with its 4K/60FPS target.

Is it an assumption? Sure. Is it a GROSS assumption based on unfounded logic? No.

No, it's straight-up fallacious.

Cutting to the heart of the matter, it's all about data dependency: whether a thing requires more computation or more source data.

Basically, there's no hard-and-fast single rule, because depending on the scenario either could be the limiting factor.

Faster I/O is not likely to affect some things much, like resolution, because that can be independent of image composition. On the other hand, just being able to access more information per frame/game-cycle is going to give results that could be extremely difficult to recreate computationally.

The demo, in my view, shows a mix of these things, so it's likely that having more of either resource, be it I/O or computation, is of benefit. I'd suspect a loss of computation scales down more gracefully than an I/O shortfall, though.
 
No, it's straight-up fallacious.

Cutting to the heart of the matter, it's all about data dependency: whether a thing requires more computation or more source data.

Basically, there's no hard-and-fast single rule, because depending on the scenario either could be the limiting factor.

Faster I/O is not likely to affect some things much, like resolution, because that can be independent of image composition. On the other hand, just being able to access more information per frame/game-cycle is going to give results that could be extremely difficult to recreate computationally.

The demo, in my view, shows a mix of these things, so it's likely that having more of either resource, be it I/O or computation, is of benefit. I'd suspect a loss of computation scales down more gracefully than an I/O shortfall, though.


It's not fallacious. You just verified that it's impossible to gauge. How then can you state with 100% conviction that his assumption is completely wrong? He's assuming the computation limit. I said that it all depends on the data and the scenario. But that doesn't seem to be accepted here. People want an explanation NOW. What's funny is that even if Epic described it in complete technical detail, it would go over most people's heads.
 
Because the graphics card isn't just a wire feeding data directly to the screen. The GPU has to compute all that data into pixels, and that's where all the work is happening.
So you mean the GPU is the bottleneck, not the resolution. You mean the target render was too much for the process to hit, meaning the process was the bottleneck.

Anyway, the OP was talking shit, and you aren't far behind. I doubt you have any programming experience, to be honest. Feel free to test that notion.
 
So you mean the GPU is the bottleneck, not the resolution. You mean the target render was too much for the process to hit, meaning the process was the bottleneck.

Correct.

Anyway, the OP was talking shit, and you aren't far behind. I doubt you have any programming experience, to be honest. Feel free to test that notion.

LOL! Where is this coming from? WTF?
 
Correct.



LOL! Where is this coming from? WTF?
When someone beats their chest, sometimes you want to see if they can fight. I'd love to see you quickly code up, say: given a point { 0, 0, 0 } and a sphere with origin { 100, 100, 100 } and a radius of 50, calculate the position vector from the point to the centre of the sphere, then the tangent plane of said sphere (the one that intersects said position vector, basically slicing the view in half).

I'll make it easy: you only really need the sphere for the tangent plane. You could even explain why it would be useful!
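
For what it's worth, the exercise really is only a few lines. A minimal sketch in plain C++ (no libraries; the half-space culling remark at the end is one common answer to the "why useful" part, not the only one):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

int main() {
    Vec3 point  = { 0, 0, 0 };
    Vec3 center = { 100, 100, 100 };
    double radius = 50.0;

    // Position vector from the point to the sphere's centre.
    Vec3 v = sub(center, point);
    double dist = length(v);                          // 100*sqrt(3) ~ 173.205
    Vec3 n = { v.x / dist, v.y / dist, v.z / dist };  // unit plane normal

    // Tangent plane where that vector pierces the near side of the sphere:
    // normal n, offset d, i.e. every x with dot(n, x) == d lies on the plane.
    double d = dist - radius;                         // ~ 123.205

    std::printf("normal (%.4f, %.4f, %.4f), offset %.3f\n", n.x, n.y, n.z, d);
    // Why useful: dot(n, p) < d is the classic point-vs-half-space test,
    // e.g. for culling geometry on the far side of a bounding plane.
    return 0;
}
```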
 
The OP isn't making a false claim. He is assuming based on some logic and some history. It could be far-fetched, but it's most likely not.

If we go strictly by the UE5 demo, the bottleneck is indeed the resolution and framerate at which it was running. If the SSD was streaming 768MB of data and the PS5 could only handle 1440p/30FPS, then that would be an upper bound on the SSD bandwidth for THAT particular demo. That doesn't mean another developer couldn't get more than 768MB at the same res/FPS with more detail. His assumption is that if the GPU could handle more, the demo would have been shown at 4K/60FPS. That would indicate you could increase that SSD stream pool a lot more to see where the GPU could no longer keep up with its 4K/60FPS target.

Is it an assumption? Sure. Is it a GROSS assumption based on unfounded logic? No.


[image: ImAAOYk.jpg]
 
The OP isn't making a false claim. He is assuming based on some logic and some history. It could be far-fetched, but it's most likely not.

If we go strictly by the UE5 demo, the bottleneck is indeed the resolution and framerate at which it was running. If the SSD was streaming 768MB of data and the PS5 could only handle 1440p/30FPS, then that would be an upper bound on the SSD bandwidth for THAT particular demo. That doesn't mean another developer couldn't get more than 768MB at the same res/FPS with more detail. His assumption is that if the GPU could handle more, the demo would have been shown at 4K/60FPS. That would indicate you could increase that SSD stream pool a lot more to see where the GPU could no longer keep up with its 4K/60FPS target.

Is it an assumption? Sure. Is it a GROSS assumption based on unfounded logic? No.

Amusing. So 768MB is the streaming pool; you realise that this is not MB/s? It's not that it needs 768MB every second.

The target is 60 FPS and the time to G-buffer is 4.25ms.

So, how much data, at most, do you think is streamed, and in what time?

I don't expect an answer; I'm just pointing out that you don't know, do you?
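
Since the question is asked without running any numbers, here is a hedged back-of-envelope sketch (C++; the per-frame turnover fractions are pure assumptions, which is exactly the unknown being pointed at):

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double pool_mb = 768.0; // streaming pool size quoted from the demo
    const double fps = 60.0;      // frame rate target quoted in the post

    // The fraction of the pool replaced per frame is unpublished; these
    // values are assumptions to show how strongly the answer depends on it.
    for (double turnover : { 0.01, 0.05, 0.10 }) {
        double mb_per_s = pool_mb * turnover * fps;
        std::printf("%3.0f%% of pool per frame -> %6.1f MB/s sustained\n",
                    turnover * 100.0, mb_per_s);
    }
    return 0;
}
```

The required sustained bandwidth swings by an order of magnitude depending on a number nobody outside Epic has, which is the point being made.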
 
When someone beats their chest, sometimes you want to see if they can fight. I'd love to see you quickly code up, say: given a point { 0, 0, 0 } and a sphere with origin { 100, 100, 100 } and a radius of 50, calculate the position vector from the point to the centre of the sphere, then the tangent plane of said sphere (the one that intersects said position vector, basically slicing the view in half).

I'll make it easy: you only really need the sphere for the tangent plane. You could even explain why it would be useful!

LOL! So now I have to prove that I'm a graphics programmer every time I speak. HAHAHA. Sorry, I don't need to prove ANYTHING to any of you. You don't give me a paycheck. I can't believe this shit. :messenger_tears_of_joy:
 
When someone beats their chest, sometimes you want to see if they can fight. I'd love to see you quickly code up, say: given a point { 0, 0, 0 } and a sphere with origin { 100, 100, 100 } and a radius of 50, calculate the position vector from the point to the centre of the sphere, then the tangent plane of said sphere (the one that intersects said position vector, basically slicing the view in half).

I'll make it easy: you only really need the sphere for the tangent plane. You could even explain why it would be useful!

Umm, whether you have questions about his credibility or not, I don't think giving him a homework assignment is the way to go about it.
 
LOL! So now I have to prove that I'm a graphics programmer every time I speak. HAHAHA. Sorry, I don't need to prove ANYTHING to any of you. You don't give me a paycheck. I can't believe this shit. :messenger_tears_of_joy:

Honestly, is expertise in graphics programming enough to make a judgement when talking about the function of a game engine which needs to handle more than rasterization? Especially when the chief thing being demonstrated is the ability to handle datasets too large for more conventional techniques to handle comfortably.

I just don't see any upside to arguing this topic aside from fanboy e-peening. More data is good, more compute is good; trying to litigate which is better makes no sense when one could easily construct a plausible hypothetical case to demonstrate either outcome.

What can be pointed out with confidence, though, is this: if compute performance were mission-critical, why is that the part of the system being cut down for Lockhart?
 
In all fairness, if 768MB is the max right now... they both went a little nuts. They're sporting 4/5-8/9GB/s compressed. It's gotta be for something. To quote a great character: "It can't be for nothing" - Ellie lol

Quicker load times will be very nice for both. Either way, this is great news for multi-platform games and game development in general.
 
LOL! So now I have to prove that I'm a graphics programmer every time I speak. HAHAHA. Sorry, I don't need to prove ANYTHING to any of you. You don't give me a paycheck. I can't believe this shit. :messenger_tears_of_joy:

Most people here who you're debating know nothing about system architecture and haven't ever programmed anything, let alone understand anything about graphics architecture.

Lower your expectations, or just try to enjoy waiting it out and being proved right in the end.

One hilariously obvious point that I made months ago, and that people are finally starting to realise, is...

if you're streaming data at 10GB a second... your 100GB game will be 10 seconds long before you need to buy a new game.

The obvious bottleneck to a fast SSD is going to be game size...

Reduce the asset size and you reduce the streaming requirements... or you could just be more efficient about your reads, minimising throughput needs and maximising memory, by using SFS.
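
As a rough illustration of that last point (a sketch; the baseline figure is an assumption, and the 2.5x value is Microsoft's own marketing claim of a roughly 2-3x effective I/O multiplier for Sampler Feedback Streaming):

```cpp
#include <cstdio>

int main() {
    // Assumed baseline texture streaming need; not a measured figure.
    const double naive_mb_per_s = 1000.0;
    // Middle value of Microsoft's quoted 2-3x effective multiplier for SFS.
    const double sfs_multiplier = 2.5;

    std::printf("naive: %.0f MB/s, with SFS: ~%.0f MB/s for the same frame\n",
                naive_mb_per_s, naive_mb_per_s / sfs_multiplier);
    return 0;
}
```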
 
Most people here who you're debating know nothing about system architecture and haven't ever programmed anything, let alone understand anything about graphics architecture.

Lower your expectations, or just try to enjoy waiting it out and being proved right in the end.

One hilariously obvious point that I made months ago, and that people are finally starting to realise, is...

if you're streaming data at 10GB a second... your 100GB game will be 10 seconds long before you need to buy a new game.

The obvious bottleneck to a fast SSD is going to be game size...

Reduce the asset size and you reduce the streaming requirements... or you could just be more efficient about your reads, minimising throughput needs and maximising memory, by using SFS.
It's not a video that starts and ends, lol. It's effectively a RAM multiplier, as you no longer need to reserve large RAM pools for unseen assets when you can bring them in on demand. Do people not like RAM now??

Come on, this is getting silly now.
 
It's not a video that starts and ends, lol. It's effectively a RAM multiplier, as you no longer need to reserve large RAM pools for unseen assets when you can bring them in on demand. Do people not like RAM now??

Come on, this is getting silly now.

So you can go 10 seconds in any direction? Actually, that would be 3.3 seconds in any direction until you're repeating assets.

Sure, I've oversimplified to help the masses understand... but think about it and you'll see it's a massive issue.
 
LOL! So now I have to prove that I'm a graphics programmer every time I speak. HAHAHA. Sorry, I don't need to prove ANYTHING to any of you. You don't give me a paycheck. I can't believe this shit. :messenger_tears_of_joy:

I'll ask what nobody is asking, which is why this is the 17th thread on the subject.

Can you run games with bigger levels, crazy levels of geometry, little to no loading times, and 8K textures, on top of high framerate and resolution, because of the PS5 SSD architecture?
Can they just forget about GPU limitations because their architecture works around them, and is the way current GPUs are conceived dead because of what Sony has created?

Especially the red part; that's important, because that's what literally everybody who believes Cerny believes, and let's stop pretending otherwise.
 
I'll ask what nobody is asking, which is why this is the 17th thread on the subject.

Can you run games with bigger levels, crazy levels of geometry, little to no loading times, and 8K textures, on top of high framerate and resolution, because of the PS5 SSD architecture?
Can they just forget about GPU limitations because their architecture works around them, and is the way current GPUs are conceived dead because of what Sony has created?

Especially the red part; that's important, because that's what literally everybody who believes Cerny believes, and let's stop pretending otherwise.

This has been asked and answered a hundred times.

The limitations of the GPU are not being forgotten. There have just been a lot of cases where the thing limiting a game from having better textures or bigger worlds was the storage not being up to snuff. Or rather, faster storage could have been a simple and obvious remedy in a lot of cases.

The PS4 GPU is capable of rendering a flying sequence in Horizon, but the storage can't keep up with it.

No, the whole world is not suddenly exclusively about storage, but it's silly to think that storage is such a minor component of the system that suddenly having it be over 50x faster won't have many positive effects.

So yeah, a lot of the aspects you mentioned will be improved because of the PS5 SSD architecture. You will still be limited by the GPU, just like the GPU has been limited by the RAM and CPU.

This is the 17th thread about this topic because big leaps in console capabilities spark discussion, and because there are people who refuse to believe that better storage can lead to better visuals.
 
So let me get this straight. I tone down the hyperbole and explain like a professional, and that still warrants a snide remark? So basically all you understand is rudeness and insults. Check!

You toned down the hyperbole to admit that you came in to insult people and to thank someone who made a claim they couldn't back up in the slightest. And now you go,

"I calmed down and admitted I was talking garbage! Why are you being mean?"

Yes, talking out of your ass warrants a snide remark. If that's a problem, maybe you could try not being an idiot?
 
This has been asked and answered a hundred times.

The limitations of the GPU are not being forgotten. There have just been a lot of cases where the thing limiting a game from having better textures or bigger worlds was the storage not being up to snuff. Or rather, faster storage could have been a simple and obvious remedy in a lot of cases.

The PS4 GPU is capable of rendering a flying sequence in Horizon, but the storage can't keep up with it.

No, the whole world is not suddenly exclusively about storage, but it's silly to think that storage is such a minor component of the system that suddenly having it be over 50x faster won't have many positive effects.

So yeah, a lot of the aspects you mentioned will be improved because of the PS5 SSD architecture. You will still be limited by the GPU, just like the GPU has been limited by the RAM and CPU.

This is the 17th thread about this topic because big leaps in console capabilities spark discussion, and because there are people who refuse to believe that better storage can lead to better visuals.

Thank you. However, the bold is a euphemism; some people do legitimately think you need an RTX 3090 and possibly 12c/24t to make up for the SSD and decompression.

I think that is due to people explaining some of the things the architecture can do, but not what it cannot do.
 
After 10 seconds of streaming at 10GB per second...
Internet buffering for hours for the next 100-gig dataset.

Or we could just continue to reuse the same assets for the whole game?

Or games will use smaller assets... and won't require such a fast SSD?

Which is it?
 
After 10 seconds of streaming at 10GB per second...
Internet buffering for hours for the next 100-gig dataset.

Or we could just continue to reuse the same assets for the whole game?

Or games will use smaller assets... and won't require such a fast SSD?

Which is it?
What does the internet have to do with it?
 
One hilariously obvious point that I made months ago, and that people are finally starting to realise, is...

if you're streaming data at 10GB a second... your 100GB game will be 10 seconds long before you need to buy a new game.

You don't understand how it works.
1. There is no GB/sec at the asset level. There's only: a request to get something, a delay, and a response that that something is in RAM.
2. You should think of it as a RAM extension, i.e. you have 100GB of RAM, but some requests are fast (they go to main RAM) and some are slow (they go to the SSD). The 768MB streaming buffer is just a technical detail of the implementation of the above.
3. Now you can say "we could do the same with an HDD", and that's correct. But with an HDD the delay difference was so big that the in-RAM pool had to be a hefty portion of main RAM. And the whole system was finicky: you need a larger buffer, but you don't have enough RAM overall. It's not straightforward; corners were cut.
4. Now you should see that you always load the same data. The thing is, you cannot know in advance which data will really be the same. Even for a straight linear game you may load a lot of the same assets over and over again. You can spend a lot of dev time optimizing that, or you can just use a next-gen SSD and never think about it again.
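
Point 2 is the key mental model. A toy sketch of what "RAM extension" looks like in code (C++; every name here is hypothetical, illustrating only the request/delay/response idea, not any engine's actual API):

```cpp
#include <cstdio>
#include <future>
#include <string>
#include <unordered_map>
#include <vector>

// Toy model: assets resolve either from the resident in-RAM pool (fast)
// or from the SSD (slow, asynchronous). All names are made up for
// illustration; this is a sketch of the idea, not a real engine.
struct AssetCache {
    std::unordered_map<std::string, std::vector<char>> pool; // resident set

    std::future<std::vector<char>> request(const std::string& id) {
        auto it = pool.find(id);
        if (it != pool.end()) {
            // Fast path: already resident, resolves immediately.
            std::promise<std::vector<char>> p;
            p.set_value(it->second);
            return p.get_future();
        }
        // Slow path: kick off an async "SSD read"; the renderer keeps
        // going with lower-detail data until the future is ready.
        return std::async(std::launch::async, [] {
            return std::vector<char>(1024, 0); // stand-in for real asset data
        });
    }
};

int main() {
    AssetCache cache;
    auto f = cache.request("rock_mesh_lod0"); // hypothetical asset id
    std::printf("streamed %zu bytes\n", f.get().size());
    return 0;
}
```

The caller never sees GB/s; it sees a request that resolves either immediately or after a delay, which is the whole point of treating the SSD as slow RAM.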
 
Umm, whether you have questions about his credibility or not, I don't think giving him a homework assignment is the way to go about it.

It's a trick question, in that if you actually know anything (he doesn't, he's a bluffer), you could write it out in about the length of this post from your head, maybe looking up a well-known value for confirmation.

I wouldn't even give this as homework on the first day.

I even did it in a shitty online tool in a couple of seconds.

[image: isgMNXW.jpg]
 
It's a trick question, in that if you actually know anything (he doesn't, he's a bluffer), you could write it out in about the length of this post from your head, maybe looking up a well-known value for confirmation.

I wouldn't even give this as homework on the first day.

I even did it in a shitty online tool in a couple of seconds.

[image: isgMNXW.jpg]
Melbourne House "Xenon" remake confirmed.

[image: youre-old-gif-12.gif]
 
It's a trick question, in that if you actually know anything (he doesn't, he's a bluffer), you could write it out in about the length of this post from your head, maybe looking up a well-known value for confirmation.

I wouldn't even give this as homework on the first day.

I even did it in a shitty online tool in a couple of seconds.

[image: isgMNXW.jpg]

I'm sure he could handle it just fine if he wanted to.
But why would he?

I don't think he needs to prove his qualifications; I just think his qualifications don't matter when he's so often demonstrably wrong.
 