
A Potential Solution To RAMpocalypse? Universal, Standardized Modular RAM Format (Rough Concept)

Is a universal, modular, P&P RAM standard needed in 10 years to ensure an affordable gaming future?


  • Total voters
    14
This isn't an announcement or scoop or anything like that; it's just me having a really quick idea WRT solutions for RAM shortages. Truth is, this type of shit is probably going to keep happening as time goes on, and with greater degrees of severity. For mass-market consumer electronics in particular, the lack of a real long-term solution will kill the industry off sooner or later, because things like RAM and NAND are always going to be in demand from more & more industries, naturally driving up costs.

So a thought that came to me is: why not create a universal, modular RAM format in tandem with a JEDEC specification? I'm thinking of something like a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, and from bubble memory to ROM to NOR to NAND; you get the idea.

The idea is simple: a memory interface standard (defined at the logical, physical, port-connect & device-dimension level) that scales by using "memory units" of a commonly agreed-upon minimum capacity (e.g. 1 Gbit), each exposing an 8-bit interface along each of four data interconnect points (to interface with neighboring memory units, and in some cases the controller & bus interface), integrated on a "card layer" that both increases capacity and ensures connectivity between the memory units via a multi-point mesh data interconnect. You still have a row/column setup: the column count defines the bus width and the row count defines the capacity. Sticking with microSD card dimensions, a practical range per card is probably around 1 GB (32-bit interface, 4x2 arrangement of 1 Gbit units) to 8 GB (64-bit interface, 8x8 arrangement).
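To make the column/row math concrete, here's a rough sketch of the idea in Python. All names and the exact unit parameters (1 Gbit units with 8-bit interfaces) are illustrative assumptions based on the description above, not a real spec:

```python
# Hypothetical card math: a grid of 1 Gbit "memory units", each
# contributing 8 bits of interface width per column, where the column
# count sets the card's bus width and the total unit count sets capacity.

UNIT_CAPACITY_GBIT = 1   # agreed-upon minimum capacity per unit
UNIT_WIDTH_BITS = 8      # interface width contributed by each column

def card_specs(columns: int, rows: int) -> tuple[int, float]:
    """Return (bus_width_bits, capacity_gbytes) for a columns x rows card."""
    bus_width = columns * UNIT_WIDTH_BITS
    capacity_gbytes = (columns * rows * UNIT_CAPACITY_GBIT) / 8  # Gbit -> GB
    return bus_width, capacity_gbytes

print(card_specs(4, 2))  # 4x2 arrangement -> (32, 1.0): 32-bit bus, 1 GB
print(card_specs(8, 8))  # 8x8 arrangement -> (64, 8.0): 64-bit bus, 8 GB
```

Note that with 1 Gbit units a 4x2 card works out to 1 GB; hitting 2 GB at that footprint would need 2 Gbit units instead.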

The thing is, you get this memory on cards the size of microSD cards, and devices might have slots for one, two, four, eight, etc. Devices would stipulate pairing requirements, e.g. 2x 32-bit 2 GB cards for a 64-bit interface/4 GB RAM capacity limit. There's no theoretical limit to the number of slots that can be interfaced, just a practical one based on the type of device and its market, physical footprint, power target, etc. I figure you could design this type of memory with replication of different standards in mind, e.g. GDDR for GPUs, HBM for data centers, DDR for CPUs, NAND for storage. Mind, we're not talking about the same physical design as those memories; the actual memory logic here would be more universal and scalable, but particular features of the memory controller could be adapted to replicate those pre-existing types.
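A toy validator for the pairing-requirements idea: a device declares the bus width it expects and how many slots it has, then checks whether the inserted cards satisfy it. Everything here (the `Card` type, the matched-pair rule) is made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Card:
    bus_width_bits: int
    capacity_gb: int

def validate(device_bus_bits: int, slots: int, cards: list[Card]) -> bool:
    """True if the inserted cards fill the device's bus width with matched parts."""
    if len(cards) == 0 or len(cards) > slots:
        return False
    # Assume all cards in a pairing must be identical (width and capacity).
    if any(c != cards[0] for c in cards):
        return False
    return sum(c.bus_width_bits for c in cards) == device_bus_bits

# 2x 32-bit 2 GB cards on a 64-bit device: valid, 4 GB total.
print(validate(64, 2, [Card(32, 2), Card(32, 2)]))   # True
# A lone 32-bit card can't fill a 64-bit interface.
print(validate(64, 2, [Card(32, 2)]))                # False
```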

In the same way you can, today, take a microSD card out of your phone and pop it into your laptop, that's the kind of future I see for volatile memory, especially in consumer electronics. A person just buys the amount they foresee themselves using, doesn't have to worry whether it's "compatible" with their devices (they can just assume it is), and can swap it between devices as they see fit. It'd need to be very plug-and-play, and I still expect supporting devices would need a small block (32 MB - 512 MB) of soldered RAM (or maybe some NOR with XIP support) for when the user pulls the cards out: fall back automatically to a low-power, internal RAM/NOR-backed UEFI/BIOS environment while cards are swapped, then auto-prompt the user to launch the "real" UI once the minimum RAM amount is inserted & detected.
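The hot-swap fallback described above is basically a small state machine. A rough sketch, with states and the minimum-RAM threshold invented purely for illustration:

```python
# States: "fallback" = low-power soldered-RAM/NOR UEFI environment,
# "waiting" = cards detected but below the device minimum,
# "prompt-full-ui" = enough RAM inserted, prompt the user to boot the real UI.

MIN_RAM_GB = 4  # hypothetical minimum this device needs for its full UI

def system_state(inserted_cards_gb: list[int]) -> str:
    total = sum(inserted_cards_gb)
    if total == 0:
        return "fallback"
    if total < MIN_RAM_GB:
        return "waiting"
    return "prompt-full-ui"

print(system_state([]))        # fallback
print(system_state([2]))       # waiting
print(system_state([2, 2]))    # prompt-full-ui
```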

But with modern advances in the memory field (including CXL 3.0 and beyond), I don't feel this type of development is far-fetched or even far off, and it's going to become an absolute necessity: the resources needed to keep making more and more soldered, non-transferable RAM & NAND are going up in price while depleting in supply. We're going to have to start thinking of volatile memory as a reusable resource (paired with smarter logic for upscaling, compute (PNM, PIM), etc.), especially if we want consumer electronics spaces like gaming to exist in a non-niche capacity beyond the next 10-20 years, IMHO.

Anyway, just a brief concept for a potential memory standard & solution. Game consoles would clearly benefit from this, considering the frankly stupid price increases we just saw with the PS5 today, and will very likely see with Xbox and Nintendo in the near future, let alone what this means for things like the PS6. It'd be cool if Valve were already developing something like this, considering the Steam Machine's specs are somewhat more modest, but I doubt they are. And yes, there are aspects of latency that'd have to be ironed out with this type of memory concept, but those could be solved over time. Plus, there are always workarounds, e.g. arranging your data as SoA (structure of arrays) so you only pay the latency penalty on initial access, though granted that requires more effort on the part of developers.
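For anyone unfamiliar, the SoA workaround mentioned above looks like this in plain Python (toy data, purely illustrative): with an array-of-structs, reading one field means touching every record; with a struct-of-arrays, each field is stored contiguously, so a streaming pass over one field pays mostly the initial access cost.

```python
# Array of structs: each field is interleaved with everything else,
# so scanning "x" drags the rest of each record along with it.
aos = [{"x": i, "y": 2 * i, "health": 100} for i in range(4)]
xs_from_aos = [e["x"] for e in aos]

# Struct of arrays: each field stored contiguously on its own.
soa = {
    "x": [i for i in range(4)],
    "y": [2 * i for i in range(4)],
    "health": [100] * 4,
}
xs_from_soa = soa["x"]  # one contiguous read of just the field we need

print(xs_from_aos)  # [0, 1, 2, 3]
print(xs_from_soa)  # [0, 1, 2, 3]
```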

Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)? Any ideas on how to improve it? Alternative memory solutions (that are realistic and relatively practical)?

(*Bolded certain parts for emphasis to act as a kind of TL;DR)
 
Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.
 
So a thought that came to me is: why not create a universal, modular RAM format in tandem with a JEDEC specification? I'm thinking of something like a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, and from bubble memory to ROM to NOR to NAND; you get the idea.
(image)


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.
 
Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

What I'm suggesting here doesn't remove the integral functionality of RAM from the picture, nor does it have OEMs leave everything up to consumers; it's meant to act as a way for them to actually retain customers by having a modular, universal RAM spec and format that consumers can use between devices, with the monetary savings for consumers coming from that reusability.

Meanwhile, OEMs save money on production costs, and can still build products running the gamut of configurations when it comes to capacity, bandwidth, speed, and related functionality.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.

I'd argue given recent developments, more consumers are definitely starting to care, and will want solutions where they maintain a strong degree of autonomy.

If what we're seeing now continues for a few more years and gets more severe, more people will become educated on it and want a solution, even if they don't exactly know what that solution should look like. Hence the opportunity for something like what I'm describing here.

Just saying, there's (hopefully) a chance something develops along these lines. There is 100% a viable alternative aside from cloud streaming (which won't be nearly as much a cost-saver as some are thinking), and it doesn't have to replace cloud streaming, either. More options ultimately means more customers which means more dollars for companies.

Ilb5GXr8p9fh3mg8.png


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.

This is why I mentioned JEDEC at the beginning; they actually set a lot of standards in the computing industry, and are something of a consortium.

If it's a JEDEC standard (like DDR, HBM etc. are), then the various manufacturers will build around it. Without that component, then yes, it'd be impossible and leave too much power to a given manufacturer who could monopolize it.
 
What I'm suggesting here doesn't remove the integral functionality of RAM from the picture, nor does it have OEMs leave everything up to consumers; it's meant to act as a way for them to actually retain customers by having a modular, universal RAM spec and format that consumers can use between devices, with the monetary savings for consumers coming from that reusability.
Ok... technically, what you're talking about, as a system, already exists. People can go and buy RAM and upgrade or transfer it between systems. You don't see that happening with GDDR because it requires a very specific kind of use case, and it would make no sense forcing that kind of expense on people who just want something to browse the internet.
Meanwhile, OEMs save money on production costs, and can still build products running a gamut of configurations when it comes to capacity, bandwidth, speed, and related functionality.
So basically, you're proposing a one-size-fits-all type of RAM, the only differences being capacity and bandwidth, which in themselves are tied to how much of said RAM is "plugged in". Sounds good... but I feel it's trying to solve a problem that doesn't need solving. It would be nice to have, but I don't see it happening, because whenever you try to build in this kind of versatility, it always comes at the cost of simplicity of design. And people will take design over function more times than not.
I'd argue given recent developments, more consumers are definitely starting to care, and will want solutions where they maintain a strong degree of autonomy.
No... most consumers do not care. Most people who buy a PS5 don't even know how much RAM it has. And a lot of people who buy a laptop don't even know what RAM is.
If what we're seeing now continues for a few more years and gets more severe, more people will become educated on it and want a solution, even if they don't exactly know what that solution should look like. Hence the opportunity for something like what I'm describing here.
Nope, they will not care. What we need is to increase supply. Not a new standard.
Just saying, there's (hopefully) a chance something develops along these lines. There is 100% a viable alternative aside from cloud streaming (which won't be nearly as much a cost-saver as some are thinking), and it doesn't have to replace cloud streaming, either. More options ultimately means more customers which means more dollars for companies.
I don't think it's that serious... as always, the industry will adjust. And we're already seeing that adjustment: it's why we have low-power and high-power everything, and that's now becoming a thing with consoles. Sony (for example) is fully aware that the kind of hardware that could truly be considered a next-gen console would probably cost over $800 to make. So what they do is, in addition to making that, they make something that costs $400.

And that's just how things will be from now on. If the cost of entry goes up, it simply means the floor for entry has to be lower to accommodate as many people as possible, and let those who want to or can afford the higher-end devices go for them.

Sony isn't going to force everyone to get an $800+ console, they will ensure there is a $500ish option too.
 
While we get used to "RAM not included" disclaimers, we'll also have to get used to the newly de-RAM'd product's price not reflecting the significant decrease in manufacturing costs.
 
It's a cool concept, but I can see several problems with it. Firstly, consoles operate on fixed specifications to optimise games. If they had to put up huge signs and run a campaign for things like the N64 Expansion Pak to make people aware that they needed the extra cartridge for certain games, imagine what it would be like with a modular memory system.

Secondly, unlike permanent storage, RAM cannot be placed just anywhere; it must be located as close as possible to the CPU. This is problematic for devices with super-compact designs, such as smartphones or laptops (in fact, it already is for current laptops with soldered-on RAM). This means you can't use an SD-style side slot, for example.
 
(image)


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.
Or a product is made that is so head and shoulders above everything else, and whose benefits are so glaringly obvious, that everyone just has to use it, e.g. NVMe SSDs.

It's a cool concept, but I can see several problems with it. Firstly, consoles operate on fixed specifications to optimise games. If they had to put up huge signs and run a campaign for things like the N64 Expansion Pak to make people aware that they needed the extra cartridge for certain games, imagine what it would be like with a modular memory system.
Exactly, it's just too complicated and would mean educating, or hoping that, millions of casuals out there can figure this out. Then what ends up happening is that OEMs take the initiative for those casuals and start making devices with this new RAM pre-packaged so it's more convenient for the consumer, then they start making devices with the RAM soldered onto the PCB because they realize consumers aren't interested in changing the RAM and also that it makes manufacturing cheaper for them, and then th.... oh wait; we're right back to here.
 
CAMM2 is that standard with the smallest footprint. The problem is that there's no specification for things like GDDR.

Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.
They tried this with the N64 Expansion Pak too, and adoption wasn't that high as far as I know.
 
Sony (for example) is fully aware that the kind of hardware that could truly be considered a next-gen console would probably cost over $800 to make. So what they do is, in addition to making that, they make something that costs $400.

And that's just how things will be from now on. If the cost of entry goes up, it simply means the floor for entry has to be lower to accommodate as many people as possible, and let those who want to or can afford the higher-end devices go for them.

Sony isn't going to force everyone to get an $800+ console, they will ensure there is a $500ish option too.

As far as we know, Sony is only working on the flagship console and the handheld, which certainly will not cost less than the Switch 2.

And yes, there will be an option at $400 - $500. It's called the PS5.
 
CAMM2 is that standard with the smallest footprint. The problem is that there's no specification for things like GDDR.


They tried this with the N64 Expansion Pak too, and adoption wasn't that high as far as I know.
And even then, the N64 still had to ship with built-in RAM.
 
Too complex a solution for too specific a problem. We can more or less do this with storage (even that has its own set of complications) because moving data between devices makes sense from a practical perspective. Your proposal serves no purpose other than saving what in practice can range from $50 to $150 (possibly less). That's good money, yes, but not enough to warrant changing and developing entire standards, especially when there are much simpler solutions, like using less memory, or cheaper types of it, for devices that don't require too much.

Basically, you're overengineering things.
 
The problem with the current DRAM market isn't standards; it's scarcity induced by the AI buildup.
There's a second problem that has been recurring in the DRAM market: price fixing. And the AI buildup just made it much worse.
 
Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)?
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog on Twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
(Apple, AMD, Intel, Nvidia)"

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
 
The problem isn't the architecture; it's that there aren't enough production lines to satisfy demand, fool.

We are hungry! How can we sort this out!? I know! How about we pack our sandwiches in paper rather than aluminium foil!
Someone: How about we make more food?
 
"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."
RTX GPUs with configurable VRAM sound way too good for Nvidia to even consider.
 
- I ran CoD at 400fps with everything on Ultra
- how much better is it really compared to the console version that costs less than half of this machine?
- peasant

...I guess devs already have a standard with RAM. I mean, consoles have existed basically since PCs became affordable, so the thinking was something like: every six years or so we see a new generation. The problem is that some people keep pushing and pushing, but not for good evolution, mostly for the sake of sales or e-penis. There are a lot of people out there with the top GPU every year, the biggest amount of RAM... So some devs have a small demon on their shoulder saying "don't care about cutting corners on PC", and then we get examples like Flight Simulator asking for 32 GB of RAM in fucking 2024 while it runs fine on Xbox.
 
This isn't an announcement or scoop or anything like that; it's just me having a really quick idea WRT solutions for RAM shortages. Truth is, this type of shit is probably going to keep happening as time goes on, and with greater degrees of severity. For mass-market consumer electronics in particular, the lack of a real long-term solution will kill the industry off sooner or later, because things like RAM and NAND are always going to be in demand from more & more industries, naturally driving up costs.

So a thought that came to me is: why not create a universal, modular RAM format in tandem with a JEDEC specification? I'm thinking of something like a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, and from bubble memory to ROM to NOR to NAND; you get the idea.

The idea is simple: a memory interface standard (defined at the logical, physical, port-connect & device-dimension level) that scales by using "memory units" of a commonly agreed-upon minimum capacity (e.g. 1 Gbit), each exposing an 8-bit interface along each of four data interconnect points (to interface with neighboring memory units, and in some cases the controller & bus interface), integrated on a "card layer" that both increases capacity and ensures connectivity between the memory units via a multi-point mesh data interconnect. You still have a row/column setup: the column count defines the bus width and the row count defines the capacity. Sticking with microSD card dimensions, a practical range per card is probably around 1 GB (32-bit interface, 4x2 arrangement of 1 Gbit units) to 8 GB (64-bit interface, 8x8 arrangement).

The thing is, you get this memory on cards the size of microSD cards, and devices might have slots for one, two, four, eight, etc. Devices would stipulate pairing requirements, e.g. 2x 32-bit 2 GB cards for a 64-bit interface/4 GB RAM capacity limit. There's no theoretical limit to the number of slots that can be interfaced, just a practical one based on the type of device and its market, physical footprint, power target, etc. I figure you could design this type of memory with replication of different standards in mind, e.g. GDDR for GPUs, HBM for data centers, DDR for CPUs, NAND for storage. Mind, we're not talking about the same physical design as those memories; the actual memory logic here would be more universal and scalable, but particular features of the memory controller could be adapted to replicate those pre-existing types.

In the same way you can, today, take a microSD card out of your phone and pop it into your laptop, that's the kind of future I see for volatile memory, especially in consumer electronics. A person just buys the amount they foresee themselves using, doesn't have to worry whether it's "compatible" with their devices (they can just assume it is), and can swap it between devices as they see fit. It'd need to be very plug-and-play, and I still expect supporting devices would need a small block (32 MB - 512 MB) of soldered RAM (or maybe some NOR with XIP support) for when the user pulls the cards out: fall back automatically to a low-power, internal RAM/NOR-backed UEFI/BIOS environment while cards are swapped, then auto-prompt the user to launch the "real" UI once the minimum RAM amount is inserted & detected.

But with modern advances in the memory field (including CXL 3.0 and beyond), I don't feel this type of development is far-fetched or even far off, and it's going to become an absolute necessity: the resources needed to keep making more and more soldered, non-transferable RAM & NAND are going up in price while depleting in supply. We're going to have to start thinking of volatile memory as a reusable resource (paired with smarter logic for upscaling, compute (PNM, PIM), etc.), especially if we want consumer electronics spaces like gaming to exist in a non-niche capacity beyond the next 10-20 years, IMHO.

Anyway, just a brief concept for a potential memory standard & solution. Game consoles would clearly benefit from this, considering the frankly stupid price increases we just saw with the PS5 today, and will very likely see with Xbox and Nintendo in the near future, let alone what this means for things like the PS6. It'd be cool if Valve were already developing something like this, considering the Steam Machine's specs are somewhat more modest, but I doubt they are. And yes, there are aspects of latency that'd have to be ironed out with this type of memory concept, but those could be solved over time. Plus, there are always workarounds, e.g. arranging your data as SoA (structure of arrays) so you only pay the latency penalty on initial access, though granted that requires more effort on the part of developers.

Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)? Any ideas on how to improve it? Alternative memory solutions (that are realistic and relatively practical)?

(*Bolded certain parts for emphasis to act as a kind of TL;DR)
Look at CAMM memory. It basically went nowhere except in some Dell hardware. Supposedly it's a standard, but nobody is adopting it.

Unless manufacturers are forced, I just don't see this happening. That said, it's a good idea and could potentially help the ecosystem long-term.
 
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog on Twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
(Apple, AMD, Intel, Nvidia)"

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
Your evil knows no bounds 🤣 😈 🤣!!!
 
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog on Twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
(Apple, AMD, Intel, Nvidia)"

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
I like how Nvidia is banned.
 