
NVIDIA Transformative Moment in AI - Live

Quantum253

Gold Member
Nvidia and the future of AI



Blackwell Innovations to Fuel Accelerated Computing and Generative AI
Blackwell’s six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:

  • World’s Most Powerful Chip — Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process with two-reticle limit GPU dies connected by 10 TB/second chip-to-chip link into a single, unified GPU.
  • Second-Generation Transformer Engine — Fueled by new micro-tensor scaling support and NVIDIA’s advanced dynamic range management algorithms integrated into NVIDIA TensorRT™-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities.
  • Fifth-Generation NVLink — To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink® delivers groundbreaking 1.8TB/s bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
  • RAS Engine — Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. Additionally, the Blackwell architecture adds capabilities at the chip level to utilize AI-based preventative maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs.
  • Secure AI — Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.
  • Decompression Engine — A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.
https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing
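To put the press release's numbers in perspective, here is a rough back-of-envelope sketch (my own arithmetic, not from Nvidia) of why the new 4-bit inference capability matters for a 10-trillion-parameter model. The 192 GB per-GPU memory figure is an assumption for illustration:

```python
# Back-of-envelope: weight memory for a 10-trillion-parameter model
# at different precisions, and roughly how many GPUs would be needed
# just to hold the weights (assuming a hypothetical 192 GB per GPU;
# activations, KV cache, and overhead are ignored).
PARAMS = 10e12  # 10 trillion parameters, per the press release

for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("FP4", 0.5)]:
    total_gb = PARAMS * bytes_per_param / 1e9
    gpus = total_gb / 192
    print(f"{name}: {total_gb:,.0f} GB of weights, roughly {gpus:,.0f} GPUs")
```

Halving the bytes per parameter halves the GPU count needed just to store weights, which is why FP4 inference and a 576-GPU NVLink domain are pitched together for multitrillion-parameter models.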
 

OverHeat

« generous god »
 

StreetsofBeige

Gold Member
I'm watching it now on the side. Even though I don't understand what he's saying, at least you can tell from his background that he understands the tech. So he's a grassroots techie who became CEO and can talk about this shit 24/7.

On the other hand, look how many numbnut execs in the world get hired, and you can tell when they speak that they don't even know their own product lines.
 

StreetsofBeige

Gold Member
I thought he was having a stroke on stage just then, but it was just a series of bad jokes not landing.
I don't know how legit or BS all his gabbing and slides are (as John Sawyer said above, a lot of buzzwords), but I'm enjoying his stage show. He's pretty engaging, but so far his jokes are lousy. Great tech storyteller. Not a great comedy storyteller.
 

Quantum253

Gold Member
It's going to be interesting to see GPUs with these cores, and how game streaming starts to change once AI-led backend computing brings transfer rates down to manageable levels.
 

Quantum253

Gold Member
The cat theory reminds me of generating images (however dense/big you want) from limited data. The same goes for fully rendering environments, worlds, etc. — the scale could be increased exponentially.
 

Quantum253

Gold Member
It's going to be interesting to see how Blackwell is integrated into chipsets/hardware and into software development cycles.
 

A.Romero

Member
What he showed today is pretty crazy. Really. The part that impressed me the most was Blackwell's spine having an equivalent of all of the Internet's aggregate bandwidth. It's crazy.
 

Quantum253

Gold Member
What he showed today is pretty crazy. Really. The part that impressed me the most was Blackwell's spine having an equivalent of all of the Internet's aggregate bandwidth. It's crazy.
Absolutely. From a business perspective, this is going to be massive. Everything will be run through models and have some type of AI-driven aspect. From the outside, there didn't seem to be much there, but everything we love in the games industry will be transformed by this tech, and it will govern the next consoles/GPUs/game development/etc.
 

Bernoulli

M2 slut
AMD's next-gen Instinct MI400X will take the AI GPU battle directly to NVIDIA and its next-gen Blackwell B100 AI GPU, where we should expect major upgrades in AI performance from both new AI accelerators. HBM3e memory offers a 50% increase in speeds over HBM3, with up to 10TB/sec of memory bandwidth per system and 5TB/sec of memory bandwidth per chip, with memory capacities of up to 141GB HBM3e memory per GPU.

However, AMD's upcoming Instinct MI300 refresh will be a refreshed fighter against H200 and B100 from NVIDIA. Kepler says: "there will (be) an MI300 refresh with HBM3e. Also B100 is expensive as, so MI300 still has an advantage in cost".

Read more: https://www.tweaktown.com/news/9642...ed-in-2025-mi300-refresh-the-works/index.html
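A quick sanity check on the quoted HBM3e figures (my own arithmetic on the article's numbers, not from AMD or TweakTown): a 50% speed increase over HBM3 implies the baseline per-chip bandwidth below, and the per-system vs. per-chip figures imply a chip count:

```python
# Arithmetic on the quoted HBM3e figures: 5 TB/s per chip is said to be
# a 50% uplift over HBM3, and 10 TB/s per system is the quoted total.
hbm3e_per_chip_tb = 5.0                      # TB/s per chip (quoted)
hbm3e_per_system_tb = 10.0                   # TB/s per system (quoted)

implied_hbm3_tb = hbm3e_per_chip_tb / 1.5    # what HBM3 would deliver
chips_per_system = hbm3e_per_system_tb / hbm3e_per_chip_tb

print(f"Implied HBM3 per-chip bandwidth: {implied_hbm3_tb:.2f} TB/s")
print(f"Chips implied by the system figure: {chips_per_system:.0f}")
```

So the quoted system number works out to two such chips' worth of bandwidth, consistent with a dual-chip accelerator design.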
 

Quantum253

Gold Member
AMD's next-gen Instinct MI400X will take the AI GPU battle directly to NVIDIA and its next-gen Blackwell B100 AI GPU, where we should expect major upgrades in AI performance from both new AI accelerators. HBM3e memory offers a 50% increase in speeds over HBM3, with up to 10TB/sec of memory bandwidth per system and 5TB/sec of memory bandwidth per chip, with memory capacities of up to 141GB HBM3e memory per GPU.

However, AMD's upcoming Instinct MI300 refresh will be a refreshed fighter against H200 and B100 from NVIDIA. Kepler says: "there will (be) an MI300 refresh with HBM3e. Also B100 is expensive as, so MI300 still has an advantage in cost".

Read more: https://www.tweaktown.com/news/9642...ed-in-2025-mi300-refresh-the-works/index.html
That's pretty impressive as well. The next 5-10 years are going to be crazy. Nvidia is about to do for AI what Microsoft did for the operating system. AMD is running like Apple, so we're about to have the Apple/Microsoft showdown again, but this time with Nvidia and AMD. The fact that Blackwell's 10 TB/second chip-to-chip link joins two dies into a single, unified GPU is crazy to me.
 