
A Potential Solution To RAMPocolypse? Universal, Standardized Modular RAM Format (Rough Concept)

Is a universal, modular, P&P RAM standard needed in 10 years to ensure an affordable gaming future?


  • Total voters: 21
  • Poll closed.
This isn't an announcement or scoop or anything at all; it's just me having a really quick idea WRT solutions for RAM shortages. Truth is, this type of shit is probably going to keep happening as time goes on, and with greater degrees of severity. For mass-market consumer electronics in particular, the lack of a real long-term solution will kill the industry off sooner or later, because things like RAM and NAND are always going to be in demand from more & more industries, naturally driving up costs.

So a thought that came to me is: why doesn't a company develop a standardized modular RAM format in tandem with a JEDEC specification? I'm thinking of something as a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, from bubble memory to ROM to NOR to NAND, you get the idea.

The idea is simple: a memory interface standard (at the logical, physical, port connect & memory device physical dimension level) that defines a way of being scalable by utilizing "memory units" of a commonly agreed-upon minimum capacity (e.g. 1 Gbit), each 8 bits wide along each of four data interconnect points (to interface with neighboring memory units, and in some cases the controller & bus interface), integrated on a "card layer" that can both increase capacity and ensure connectivity between the memory units via a multi-point mesh data interconnect. You still have a row/column setup: the "column count" defines bus width and the "row count" defines capacity with this type of standard, etc. Sticking with microSD card dimension sizing, a practical capacity limit is probably around 2 GB (32-bit interface per card, 4x2 arrangement) to 8 GB (64-bit interface per card, 8x8 arrangement).
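To make the row/column scaling concrete, here's a quick illustrative sketch in Python. The 1 Gbit unit size and 8-bits-per-column width are just the assumptions from the paragraph above, not any real spec:

```python
# Illustrative math for the proposed "memory unit" grid.
# Assumptions (from the concept above, not a real standard):
#   - each unit holds 1 Gbit
#   - each column contributes 8 data bits to the card's bus
UNIT_GBIT = 1
BITS_PER_COLUMN = 8

def bus_width_bits(columns: int) -> int:
    """Column count defines the card's external bus width."""
    return columns * BITS_PER_COLUMN

def capacity_gb(rows: int, columns: int, unit_gbit: int = UNIT_GBIT) -> float:
    """Row x column grid of units defines total capacity (8 Gbit = 1 GB)."""
    return rows * columns * unit_gbit / 8

# 8x8 arrangement: 64-bit interface, 8 GB per card
print(bus_width_bits(8), capacity_gb(8, 8))   # -> 64 8.0

# 4x2 arrangement (4 columns x 2 rows): 32-bit interface. Note that with
# 1 Gbit units this yields 1 GB, so the 2 GB figure above would imply
# 2 Gbit units instead.
print(bus_width_bits(4), capacity_gb(2, 4))   # -> 32 1.0
```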

The thing is, you get this memory on devices the size of microSD cards, and certain devices might have slots for just one, or up to two, or four, or eight, etc. Devices would stipulate pairing requirements, e.g. 2x 32-bit 2 GB cards for a 64-bit interface/4 GB RAM capacity limit, etc. There's no theoretical limit to the number of slots that can be interfaced, just a practical one based on the type of device and its market, physical footprint, power target (both performance & TDP), etc. I figure you could design this type of memory with replication of different standards in mind, e.g. GDDR for GPUs, HBM for data centers, DDR for CPUs, NAND for storage, etc. Mind, we are not talking about the same physical design as those memories; the actual memory logic for this would be more universal and scalable, but particular features of the memory controller could be adapted to replicate these pre-existing types.
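As a toy illustration of the "pairing requirements" idea, a device could validate whatever cards are inserted against its declared bus width and minimum capacity. Everything here (the function name, the single-width rule) is my own invention for the sketch, not part of any spec:

```python
# Hypothetical pairing check: a device declares a required total bus width
# and a minimum capacity, then validates the inserted set of cards.
def config_ok(cards, required_bus_bits, min_capacity_gb):
    """cards: list of (bus_bits, capacity_gb) tuples, one per inserted card.
    All cards must share one width (to keep interleaving simple), and
    together they must fill the device's bus and meet its capacity floor."""
    if not cards:
        return False
    widths = {bus for bus, _ in cards}
    if len(widths) != 1:
        return False  # mixed widths would complicate the interleave
    total_bus = sum(bus for bus, _ in cards)
    total_cap = sum(cap for _, cap in cards)
    return total_bus == required_bus_bits and total_cap >= min_capacity_gb

# The example from the post: 2x 32-bit 2 GB cards for a 64-bit / 4 GB device
print(config_ok([(32, 2), (32, 2)], 64, 4))   # -> True
print(config_ok([(32, 2)], 64, 4))            # -> False (bus only half-filled)
```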

In the same way you can, today, take a microSD card from your phone and pop it in your laptop, that's the kind of future I see for volatile memory, especially in the consumer electronics space. This way, a person just buys the amount they foresee themselves using, they don't have to worry whether it's "compatible" with their devices (they just assume it automatically is), and can swap it between devices as they see fit. It'd need to be very plug-and-play, and I still expect devices supporting it would need a small block (32 MB - 512 MB) of soldered RAM installed (or maybe some NOR with XIP support) for when the user pulls the cards out (fall back automatically to a low-power, internal RAM/NOR-backed UEFI/BIOS environment while cards are swapped in or out, then auto-prompt the user to initiate the "real" UI space once the minimum RAM amount is inserted & detected).

But with modern advances in memory fields (including CXL 3.0 and beyond), I don't feel this type of future development is far-fetched or even far off, and it's going to become an absolute necessity, because the resources to keep making more and more soldered, non-transferable RAM & NAND are going up in price while depleting in supply. We're going to have to start thinking of volatile memory in a renewable-resource type of way (paired with smarter logic for upscaling, compute (PNM, PIM), etc.), especially if we want consumer electronics spaces like gaming to exist in a non-niche capacity beyond the next 10-20 years, IMHO.

Anyway, just a brief concept for a potential memory standard & solution. Game consoles would really benefit from this, clearly, considering the frankly stupid increases we just saw with PS5 today, and will very likely see with Xbox and Nintendo in the near future. Let alone what this means for things like the PS6. It'd be cool if Valve were already developing something like this, considering the Steam Machine's specs are somewhat more modest, but I doubt they are. And yes, there are aspects of latency that'd have to be ironed out with this type of memory concept, clearly, but that could be solved over time. Plus, there are always workarounds (e.g. arranging your data as SoA so you only pay the penalty on initial access latency), though granted that requires more effort on the part of developers.
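On the SoA workaround mentioned above: the idea is to lay data out as one contiguous array per field, so a hot loop streams sequentially through just the fields it needs instead of hopping record-to-record. A minimal Python sketch (a systems language would make the cache/latency behavior explicit, but the transform is the same):

```python
from array import array

# AoS (array of structures): one record per object. A loop that only needs
# 'x' still drags every field of every record through the slower memory.
particles_aos = [{"x": float(i), "y": 0.0, "mass": 1.0} for i in range(4)]

# SoA (structure of arrays): one contiguous array per field. After paying
# the initial access latency, reads of 'x' are purely sequential.
xs = array("d", (p["x"] for p in particles_aos))
ys = array("d", (p["y"] for p in particles_aos))

def advance(xs, ys, dt):
    # The hot loop touches only the two arrays it actually needs.
    for i in range(len(xs)):
        ys[i] += xs[i] * dt

advance(xs, ys, 0.5)
print(list(ys))   # -> [0.0, 0.5, 1.0, 1.5]
```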

Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)? Any ideas on how to improve it? Alternative memory solutions (that are realistic and relatively practical)?

(*Bolded certain parts for emphasis to act as a kind of TL;DR)
 
Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.
 
So a thought that came to me is: why doesn't a company develop a standardized modular RAM format in tandem with a JEDEC specification? I'm thinking of something as a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, from bubble memory to ROM to NOR to NAND, you get the idea.
[image]


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.
 
Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

What I'm suggesting here doesn't remove the integral functionality of RAM from the picture or have OEMs leave everything up to consumers; it's meant to act as a way for them to actually retain customers by having a modular, universal RAM spec and format that consumers can use between devices, with the monetary savings for consumers coming from that reusability.

Meanwhile, OEMs save money on production costs, and can still build products running a gamut of configurations when it comes to capacity, bandwidth, speed, and related functionality.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.

I'd argue given recent developments, more consumers are definitely starting to care, and will want solutions where they maintain a strong degree of autonomy.

If what we're seeing now continues for a few more years, and gets more severe, more people will become educated on it and want a solution, even if they don't know exactly what that solution should look like. Hence the opportunity for something like what I'm describing here.

Just saying, there's (hopefully) a chance something develops along these lines. There is 100% a viable alternative aside from cloud streaming (which won't be nearly as much a cost-saver as some are thinking), and it doesn't have to replace cloud streaming, either. More options ultimately means more customers which means more dollars for companies.

[image]


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.

This is why I mentioned JEDEC at the beginning; they actually set a lot of standards in the computing industry, and are something of a consortium.

If it's a JEDEC standard (like DDR, HBM etc. are), then the various manufacturers will build around it. Without that component, then yes, it'd be impossible and leave too much power to a given manufacturer who could monopolize it.
 
What I'm suggesting here doesn't remove the integral functionality of RAM from the picture or have OEMs leave everything up to consumers; it's meant to act as a way for them to actually retain customers by having a modular, universal RAM spec and format that consumers can use between devices, with the monetary savings for consumers coming from that reusability.
Ok... technically, what you are talking about, as a system, already exists. People can go and buy RAM and upgrade or transfer it between systems. You don't see that happening with GDDR because it requires a very specific kind of use case, and it would make no sense forcing that kind of expense on people who just want something to browse the internet.
Meanwhile, OEMs save money on production costs, and can still build products running a gamut of configurations when it comes to capacity, bandwidth, speed, and related functionality.
So basically, you're proposing a one-size-fits-all type of RAM, the only differences being capacity and bandwidth, which in themselves are tied to how much of said RAM is "plugged in". Sounds good... but I feel it's trying to solve a problem that doesn't need solving. It would be nice to have, though, but I don't see it happening, because whenever you try to build in this kind of versatility, it always comes at the cost of simplicity of design. And people will take design over function more times than not.
I'd argue given recent developments, more consumers are definitely starting to care, and will want solutions where they maintain a strong degree of autonomy.
No... most consumers do not care. Most people who buy a PS5 do not even know how much RAM it has. And a lot of people who buy a laptop do not even know what RAM is.
If what we're seeing now continues for a few more years, and gets more severe, more people will become educated on it and want a solution, even if they don't know exactly what that solution should look like. Hence the opportunity for something like what I'm describing here.
Nope, they will not care. What we need is to increase supply. Not a new standard.
Just saying, there's (hopefully) a chance something develops along these lines. There is 100% a viable alternative aside from cloud streaming (which won't be nearly as much a cost-saver as some are thinking), and it doesn't have to replace cloud streaming, either. More options ultimately means more customers which means more dollars for companies.
I don't think it's that serious... as always, the industry will adjust. And we are already seeing that adjustment. It's why we have low-power and high-power everything, and that's now becoming a thing with consoles. Sony (for example) is fully aware that the kinda hardware that can truly be considered a next-gen console would probably cost over $800 to make. So what they do is that in addition to making that, they make something that costs $400.

And that's just how things would be now. If the cost of entry goes up, it simply means the floor for entry has to be lower to accommodate as many people as possible, and let those who want to or can afford the higher-end devices do so.

Sony isn't going to force everyone to get an $800+ console, they will ensure there is a $500ish option too.
 
While we get used to "RAM not included" disclaimers, we'll also have to get used to the newly de-RAM'd product's price not reflecting the significant decrease in manufacturing costs.
 
It's a cool concept, but I can see several problems with it. Firstly, consoles operate on fixed specifications to optimise games. If they had to put up huge signs and run a campaign for things like the N64 Expansion Pak to make people aware that they needed the extra cartridge for certain games, imagine what it would be like with a modular memory system.

Secondly, unlike permanent storage memory, RAM cannot be placed anywhere, it must be located as close as possible to the CPU. This is problematic for devices with super-compact designs, such as smartphones or laptops (in fact, it already is for current laptops with soldered-on RAM). This means that you can't use an SD-style side slot, for example.
 
[image]


Joke aside, the only way to get a true standard is having one of the manufacturers buy out everyone else to set up a monopoly.
Or a product is made that is head and shoulders better than everything else, and whose benefits are glaringly obvious, so that everyone just has to use it. E.g., NVMe SSDs.

It's a cool concept, but I can see several problems with it. Firstly, consoles operate on fixed specifications to optimise games. If they had to put up huge signs and run a campaign for things like the N64 Expansion Pak to make people aware that they needed the extra cartridge for certain games, imagine what it would be like with a modular memory system.
Exactly, it's just too complicated and would mean you have to educate, or hope that, millions of casuals out there can figure this out. Then what ends up happening is that OEMs take the initiative for those casuals and start making devices with this new RAM pre-packaged so it's more convenient for the consumer, then they start making devices with the RAM soldered onto the PCB because they realize consumers aren't interested in changing the RAM, and also that it makes manufacturing cheaper for them, and then th.... oh wait; we are right back to here.
 
CAMM2 is that standard with the smallest footprint. The problem is that there's no equivalent specification for things like GDDR.

Too complicated. The problem with stuff like that is that RAM is such an integral part of any computer's makeup that it's not something OEMs (especially with consoles) could leave up to the consumer.

But more importantly, the average consumer just doesn't care about stuff like that. It's hard enough getting them to understand external hard drives, much less RAM.
They tried this with the N64 Expansion Pak too, and adoption wasn't that high as far as I know.
 
Sony (for example) is fully aware that the kinda hardware that can truly be considered a next-gen console would probably cost over $800 to make. So what they do is that in addition to making that, they make something that costs $400.

And that's just how things would be now. If the cost of entry goes up, it simply means the floor for entry has to be lower to accommodate as many people as possible, and let those who want to or can afford the higher-end devices do so.

Sony isn't going to force everyone to get an $800+ console, they will ensure there is a $500ish option too.

As far as we know, Sony is only working on the flagship console and the handheld, which certainly will not cost less than the Switch 2.

And yes, there will be an option at $400 - $500. It's called the PS5.
 
CAMM2 is that standard with the smallest footprint. The problem is that there's no equivalent specification for things like GDDR.


They tried this with the N64 Expansion Pak too, and adoption wasn't that high as far as I know.
And even then, the N64 still had to ship with built-in RAM.
 
Too complex a solution for too specific a problem. We can more or less do this with storage (even that has its set of complications) because moving data between devices makes sense from a practical perspective. Your proposal serves no purpose other than saving what in practice can range from $50 to $150 (possibly less). That is good money, yes, but not enough to warrant changing and developing entire standards, especially when there are much simpler solutions, like using less memory, or cheaper types of memory, for devices that don't require too much.

Basically, you're overengineering things.
 
The problem with the current DRAM market isn't standards; it's scarcity induced by the AI buildup.
There is a second problem, that has been recurring in the DRAM market, and that is price fixing. And the AI buildup just made this much worse.
 
Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)?
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
A apple A amd Intel Intel N Nvidia "

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
 
The problem isn't the architecture; it's that there aren't enough production lines to satisfy demand, fool.

We are hungry! How can we sort this out!? I know! How about we pack our sandwiches in paper rather than aluminium foil!
Someone: How about we make more food?
 
"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."
RTX GPUs with configurable VRAM sound way too good for Nvidia to even consider
 
- I ran CoD at 400fps with everything on Ultra
- How much better is it really, compared to the console version that costs less than half of this machine?
- Peasant

...I guess devs already have a standard with RAM. I mean, consoles have existed basically since PCs became affordable, so the thinking was we'd see a new generation every six years or so. The problem is that some people keep pushing and pushing, but not for good evolution; it's mostly about sales for the company or for the sake of e-peen. There are a lot of people with the top GPU every year, the most RAM... So some devs just have a small demon on their shoulder telling them "don't care about cutting corners on PC", and then we get examples like Flight Simulator asking for 32 GB of RAM in fucking 2024 while it runs fine on Xbox.
 
This isn't an announcement or scoop or anything at all; it's just me having a really quick idea WRT solutions for RAM shortages. Truth is, this type of shit is probably going to keep happening as time goes on, and with greater degrees of severity. For mass-market consumer electronics in particular, the lack of a real long-term solution will kill the industry off sooner or later, because things like RAM and NAND are always going to be in demand from more & more industries, naturally driving up costs.

So a thought that came to me is: why doesn't a company develop a standardized modular RAM format in tandem with a JEDEC specification? I'm thinking of something as a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, from bubble memory to ROM to NOR to NAND, you get the idea.

The idea is simple: a memory interface standard (at the logical, physical, port connect & memory device physical dimension level) that defines a way of being scalable by utilizing "memory units" of a commonly agreed-upon minimum capacity (e.g. 1 Gbit), each 8 bits wide along each of four data interconnect points (to interface with neighboring memory units, and in some cases the controller & bus interface), integrated on a "card layer" that can both increase capacity and ensure connectivity between the memory units via a multi-point mesh data interconnect. You still have a row/column setup: the "column count" defines bus width and the "row count" defines capacity with this type of standard, etc. Sticking with microSD card dimension sizing, a practical capacity limit is probably around 2 GB (32-bit interface per card, 4x2 arrangement) to 8 GB (64-bit interface per card, 8x8 arrangement).

The thing is, you get this memory on devices the size of microSD cards, and certain devices might have slots for just one, or up to two, or four, or eight, etc. Devices would stipulate pairing requirements, e.g. 2x 32-bit 2 GB cards for a 64-bit interface/4 GB RAM capacity limit, etc. There's no theoretical limit to the number of slots that can be interfaced, just a practical one based on the type of device and its market, physical footprint, power target (both performance & TDP), etc. I figure you could design this type of memory with replication of different standards in mind, e.g. GDDR for GPUs, HBM for data centers, DDR for CPUs, NAND for storage, etc. Mind, we are not talking about the same physical design as those memories; the actual memory logic for this would be more universal and scalable, but particular features of the memory controller could be adapted to replicate these pre-existing types.

In the same way you can, today, take a microSD card from your phone and pop it in your laptop, that's the kind of future I see for volatile memory, especially in the consumer electronics space. This way, a person just buys the amount they foresee themselves using, they don't have to worry whether it's "compatible" with their devices (they just assume it automatically is), and can swap it between devices as they see fit. It'd need to be very plug-and-play, and I still expect devices supporting it would need a small block (32 MB - 512 MB) of soldered RAM installed (or maybe some NOR with XIP support) for when the user pulls the cards out (fall back automatically to a low-power, internal RAM/NOR-backed UEFI/BIOS environment while cards are swapped in or out, then auto-prompt the user to initiate the "real" UI space once the minimum RAM amount is inserted & detected).

But with modern advances in memory fields (including CXL 3.0 and beyond), I don't feel this type of future development is far-fetched or even far off, and it's going to become an absolute necessity, because the resources to keep making more and more soldered, non-transferable RAM & NAND are going up in price while depleting in supply. We're going to have to start thinking of volatile memory in a renewable-resource type of way (paired with smarter logic for upscaling, compute (PNM, PIM), etc.), especially if we want consumer electronics spaces like gaming to exist in a non-niche capacity beyond the next 10-20 years, IMHO.

Anyway, just a brief concept for a potential memory standard & solution. Game consoles would really benefit from this, clearly, considering the frankly stupid increases we just saw with PS5 today, and will very likely see with Xbox and Nintendo in the near future. Let alone what this means for things like the PS6. It'd be cool if Valve were already developing something like this, considering the Steam Machine's specs are somewhat more modest, but I doubt they are. And yes, there are aspects of latency that'd have to be ironed out with this type of memory concept, clearly, but that could be solved over time. Plus, there are always workarounds (e.g. arranging your data as SoA so you only pay the penalty on initial access latency), though granted that requires more effort on the part of developers.

Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)? Any ideas on how to improve it? Alternative memory solutions (that are realistic and relatively practical)?

(*Bolded certain parts for emphasis to act as a kind of TL;DR)
Look at CAMM memory. It basically went nowhere except in some Dell hardware. Supposedly it's a standard, but nobody is adopting it.

Unless manufacturers are forced, I just don't see this happening. That said, it's a good idea and could potentially help the ecosystem long-term.
 
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
A apple A amd Intel Intel N Nvidia "

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
Your evil knows no bounds 🤣 😈 🤣!!!
 
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
A apple A amd Intel Intel N Nvidia "

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?
I like how Nvidia is banned.
 
Ok... technically, what you are talking about, as a system, already exists. People can go and buy RAM and upgrade or transfer it between systems. You don't see that happening with GDDR because it requires a very specific kind of use case, and it would make no sense forcing that kind of expense on people who just want something to browse the internet.

Yes, people can in theory do that with RAM today, but it's clunky, not plug-and-play simple, not necessarily user-intuitive, non-standardized in terms of form factor or slot mechanism (or bus interface, for that matter), etc. That's why I made the comparison to microSD cards; ideally what's being proposed would be more along the lines of that when it comes to ease of use, mobility, modularity, physical profile, (somewhat) power consumption targets, universality, etc., just with a form of RAM instead.

So basically, you're proposing a one-size-fits-all type of RAM, the only differences being capacity and bandwidth, which in themselves are tied to how much of said RAM is "plugged in". Sounds good... but I feel it's trying to solve a problem that doesn't need solving. It would be nice to have, though, but I don't see it happening, because whenever you try to build in this kind of versatility, it always comes at the cost of simplicity of design. And people will take design over function more times than not.

Again, I'd say there is definitely a problem something like this could contribute to solving. Just look at what's happening currently: device manufacturing costs are going through the roof because of memory shortages (and other things). Most of the devices affected use memory soldered to the board, and it's non-removable. Where it is removable, it's usually a specific memory type that won't suit a large range of devices, and even in cases where it could, specific hardware devices may only function if that memory has very specific bus timings, CAS/RAS, read/write functionality, etc.

Would this hypothetical memory be perfect for all device types, as a main memory? No. Could it at least be beneficial as a supplemental pool? Yes. And I think this is naturally what ends up developing anyhow, given advances with things like CXL 3.0; we'll probably start seeing SSDs with RAM on them as the main memory instead of NAND, probably with some battery-backed setup to spoof non-volatility. There were concept SSD designs going back to the '00s in the consumer space that tried doing this, but many standards we take for granted today in terms of memory access and data storage didn't exist yet, so those concepts never got widespread support; most of the market only viewed solid-state as a way of replacing magnetic-based cold storage, so that's where the focus was.

Those conceptual SSD designs using RAM rather than NAND (and in a way, consumer-wise this goes even further back, into the '90s, with certain game consoles using battery-backed SRAM to save data) were good ideas, they just came too soon. Intel's 3D XPoint (Optane) was also a bit too early and somewhat undercooked. But the concept itself definitely seems like it could be the future and fit into what I'm suggesting here. Now, what type of RAM is more the question; you could get the best of both worlds with stuff like FeRAM or MRAM, but they're costly per MB compared to the mainstream RAM we use today, or even vs. HBM.

No... most consumers do not care. Most people who buy a PS5 do not even know how much RAM it has. And a lot of people who buy a laptop do not even know what RAM is.

Most consumers are definitely gonna care about that $150 - $200 price increase though, even if they don't know the factors that led to it. And in addressing the main bottlenecks behind the price increases that are within a company's control, a plug-and-play, modular, universally accepted, JEDEC-approved volatile memory standard that lets them move the cost of memory out of the hardware directly (saving on production costs for a product that would otherwise use soldered memory) would probably be a godsend.

Nope, they will not care. What we need is to increase supply. Not a new standard.

And how do you increase supply when the materials that provide the supply are finite and running scarce? When the pipelines that produce the products made from the supply are limited in number and at full capacity, yet demand never ends?

You can't just "increase supply" out of thin air. Having more and better ways for end-users (and corporations) to be flexible with that supply (memory) is a more realistic and achievable mid-term goal IMO, until supply can itself actually sustainably increase.

And, even if/when it does, best to keep that mid-term goal solution around, as it acts as a net benefit to supply ecosystems.

I don't think it's that serious... as always, the industry will adjust. And we are already seeing that adjustment. It's why we have low-power and high-power everything, and that's now becoming a thing with consoles. Sony (for example) is fully aware that the kinda hardware that can truly be considered a next-gen console would probably cost over $800 to make. So what they do is that in addition to making that, they make something that costs $400.

But that splits their production capacity & facilities (for them and their partners). It can become a logistics nightmare if demand for the $400 product is weak and demand for the $800 product is strong, but Sony only produces enough supply to meet projected demand for the $400 product. Now that's wasted manufacturing, wasted QA, wasted S&H, wasted storage, wasted distribution, wasted marketing costs, AND who's to say they still don't have thin or negative margins on that $400 product?

And that's just how things would be now. If the cost of entry goes up, it simply means the floor for entry has to be lower to accommodate as many people as possible, and let those who want to or can afford the higher-end devices do so.

Realistically though, if the market solutions (those that companies typically provide) remain the same, that floor is only going to rise, and these things don't happen linearly. I'd expect the floor to rise faster than the ceiling does over time, because that already looks like the trend.

Sony isn't going to force everyone to get an $800+ console, they will ensure there is a $500ish option too.

And that $500 option is still $200 - $300 more than the similar low-cost option they could offer only 1-2 generations ago. Hence, the floor is still rising.
 
Yes people can in theory do that with RAM today, but it's clunky, not plug-and-play simple, not necessarily user-intuitive, non-standardized in terms of form factor or slot mechanism (or bus interface for that matter) etc. That's why I made the comparison to microSD card; ideally what's being proposed would be more along the lines of that when it comes to ease-of-use, mobility, modularity, physical profile, (somewhat) power consumption targets, universality etc., just with a form of RAM instead.



Again, I'd say there is definitely a problem something like this could contribute to solving. Just look at what's happening currently; device manufacturing costs are going through the roof because of memory shortages (and other things). Most of the devices affected use memory that's soldered to the board and non-removable. If it's removable, it's usually a specific memory type that won't suit a large range of devices, and even in cases where it could, specific hardware devices may only function if that memory has very specific bus timings, CAS/RAS latencies, read/write functionality etc.

Would this hypothetical memory be perfect for all device types as a main memory? No. Could it at least be beneficial as a supplemental pool? Yes. And, as well, I think this is naturally what ends up developing anyhow given advances with things like CXL 3.0; we'll probably start seeing SSDs with RAM on them as the main memory instead of NAND, probably with some battery-backed setup to spoof non-volatility. There were concept SSD designs going back to the '00s in the consumer space that tried doing this, but many standards we take for granted today in terms of memory access and data storage didn't exist yet, so those concepts never got widespread support, and most of the market only viewed solid-state as a way of replacing magnetic-based cold storage; that's where the focus was.

Those conceptual SSD designs using RAM rather than NAND (and in a way, consumer-wise this goes even further back into the '90s, with certain game consoles having battery-backed SRAM as a data-save method) were good ideas; they just came too soon. Intel's 3D XPoint (Optane) was also a bit too early and somewhat undercooked. But the concept itself definitely seems like it could be the future and fit into what I'm suggesting here. Now what type of RAM is more the question; you could get the best of both worlds with stuff like FeRAM or MRAM, but they're costly per MB compared to mainstream RAM we use today, or even vs. HBM.



Ok.. a few things...

There can never be a standardized one-size-fits-all type of RAM, simply because you have different kinds of devices that all need to do different things. And if you try and centralize everything around a specific component, then all you have accomplished is shifting the problem somewhere else.

Eg.

Let's say you make this standard SD-sized RAM. And let's say each of those 3 GB RAM "cards" has a speed of 48 GB/s. To have a chip support that speed, you need a 32-bit PHY controller, and that thing will draw a certain amount of power. For every extra card you add, you also need to add another 32-bit controller. Now, while that may be great for GPUs, what about CPUs? Laptop APUs? Phones?

There are simply some devices that you would need to use a different kind of RAM for. Hell, the only things that need RAM that fast are GPUs.

All you would have accomplished is making a new standard that doesn't solve anything. And even worse... when RAM prices increase, your RAM price increases too.
 
I can pretty much guarantee you the industry as a whole is salivating at the idea.
I mean the headlines and PR practically write themselves:

Every tech blog on Twitter:
"With memory prices exploding, it's time to rethink device architecture.
Why is RAM still bundled instead of user-upgradable across all devices?
Apple · AMD · Intel · Nvidia"

Asha Sharma Socials:
"Introducing Xbox Series Helix:
– True next-gen flexibility
Memory sold separately"

"RTX 6090 announced
– VRAM fully configurable (0GB bundled)
– Supports up to 128GB external memory modules
– Subscription unlock for >16GB"
Jensen Huang
"We believe memory should scale with user ambition."

I mean with headlines like these - who would say no?

Aw man 🤣. Welp, they'd never let an opportunity go where they can remove stuff and somehow charge people MORE for doing so. Just one of many reasons I hate most of these corporations, particularly in mature markets where no need to "prove themselves" removes any pretense of honesty or of taking hits to grow a customer base (since that base is already there).

Supposing something like what's being suggested would happen, I'd prefer it from a smaller company or upstart, who could actually grow respectably in the market long enough to stabilize, and they go through standardizing it with JEDEC or something similar. That's the company I'd likely support, until they too succumb to the virus of Final Boss Corporate Greed.

The compression algorithms are more likely to save the day

Those will help too, good point. And probably before anything being suggested in the OP. But the way I see it, everything can stack one on top of the other over time, multiplying the inherent benefits.

Look at CAMM memory. It basically went nowhere except some Dell hardware. Supposedly it's a standard, but nobody is adopting it.

Unless manufacturers are forced, I just don't see this happening. That said, it's a good idea and could potentially help the long-term ecosystem.

Yeah, the challenges such a format would have are understandable. History's littered with failed formats for many reasons, mainly because each company has "their" standard they want to be THE standard.

I think having JEDEC approval for standardization here, like with other memory standards (SDRAM, DDR, HBM, GDDR etc.), would help immensely, as you are defining a standard while still letting companies do their own thing when it comes to capacities, frequencies, widths etc. But I'm not that well-versed in how JEDEC functions (i.e. what type of consortium is it? Is it similar to the DVD and Blu-ray consortiums of the past?), and from what little I've looked into it, it would seem even these universal memory standards tend to come from competing corporations (though in fairness, many also seem to rise from academic research fields, and I guess those might partner with corporations to develop proofs of concept).

RTX GPUs with configurable VRAM sound way too good for Nvidia to even consider

Sadly you're right, but this could be what helps spark up a competitor. Or, we could see such a concept take shape in the mobile space, and as mobile space encroaches on x86 desktop, that naturally puts pressure on companies like Nvidia to adapt or suffer.

The problem isn't the architecture; it's that there aren't enough production lines to satisfy demand, fool.

We are hungry! How can we sort this out!? I know! How about we pack our sandwiches in paper rather than aluminium foil!
Someone: How about we make more food?

Like I said to Mr.Phoenix earlier, how are you going to magically increase the supply of the materials that contribute to the production of the supply (product)? If it's a finite resource and running scarce, it doesn't matter how many production lines you set up!

All of those production lines are going to have to split a depleting amount of supply to make products with, so the cost analysis would make setting up those additional production facilities (at scale) a waste.

The problem with the current DRAM market is not due to standards, it's due to scarcity induced by AI buildup.
There is a second problem, that has been recurring in the DRAM market, and that is price fixing. And the AI buildup just made this much worse.

Price-fixing is a major problem, indeed. Tho FWIW, the industry's been punished for that in the past (or at least, regulatory bodies identified they were in fact price-fixing, and fined them, I would imagine). Back in the day tho (talking '80s, '90s, early to mid '00s) it wasn't AS big an issue, because the volume of mass-market RAM was spread out among a larger field of major manufacturers.

It wasn't like today where the vast majority is concentrated to SK Hynix, Samsung and Micron (and you get maybe a few MUCH lower-volume producers as second and third-sources).

Too complex a solution for too specific a problem. We can more or less do this with storage (even that has its set of complications) because moving data between devices makes sense from a practical perspective. Your proposal serves no purpose other than saving what in practice can range from $50 to $150 (possibly less). That is good money, yes, but not enough to warrant changing and developing entire standards, especially when there are much simpler solutions, like using less memory or cheaper types of memory for devices that don't require too much.

Basically, you're overengineering things.

It's not to, say, change existing RAM standards or phase them out for something new. I probably should've used different wording in the OP :/

Rather, it's more to act as a supplement, and for specific products where using it as a main memory makes sense, those are the ones which do so. For others, it's a value-add.

It's a cool concept, but I can see several problems with it. Firstly, consoles operate on fixed specifications to optimise games. If they had to put up huge signs and run a campaign for things like the N64 Expansion Pak to make people aware that they needed the extra cartridge for certain games, imagine what it would be like with a modular memory system.

Secondly, unlike permanent storage memory, RAM cannot be placed just anywhere; it must be located as close as possible to the CPU. This is problematic for devices with super-compact designs, such as smartphones or laptops (in fact, it already is for current laptops with soldered-on RAM). This means you can't use an SD-style side slot, for example.

Well okay, for consoles let's say this hypothetical memory wouldn't become the main memory. It could, in theory, serve a purpose similar to the N64's Expansion Pak (which I saw someone else mention as not doing well, but I think that's more due to release timing and design flaws with the N64 impacting usability) or the Saturn's 1 MB/4 MB RAM carts. It'd be there as a value-add, an expansion-style option. Not the main memory supply (though I think a company like Nintendo could find means of utilizing it as a main memory solution).

But game consoles aren't the only devices out there. Granted, we're here talking games, and I did have consoles in mind, but there's nothing stopping this hypothetical memory type from proliferating in other sectors until "eventually" a game console decided to take up usage. Heck, we've seen that play out in practice for multiple generations, with consoles waiting until some market maturity before adopting certain technologies, including certain types of RAM.

I see a thicc_girls_are_teh_best tech thread. I leave a positive reaction in the OP.

It's that simple.

Hell yeah bro, much respect 👨🤜🤛👲 (this is supposed to be a fist bump 🤣)
 
I am losing you here... what's the depleting amount?

The RAM issue we have now isn't because silicon is in short supply. Silicon is the second most abundant element in the Earth's crust, after all.

The problem is simple. There are not enough companies that actually make RAM. That, or the quantity of RAM they make is not enough to meet demand. And why this tends to happen is because things like RAM or NAND flash chips are reactive products. If you want to increase supply, you basically need to build out more fabs that literally cost tens of billions of dollars. But what happens when you can now meet that demand and the demand for the chips drops... You now have tens of billions worth of hardware with nothing to do. RAM prices plummet.

Oh, and get this... that thing you were proposing: even if something like that was adopted, and all these AI clients buy up all the RAM as they are doing now, we would still be in this exact same scenario.

What we need is simply more RAM being made... not a new type of RAM. Because unless that new type of RAM is made differently from today's RAM, it would be scarce too.
 
That's still enormous effort and planning. Just using the standards we have now doesn't cut it either. RAM slots in desktops and laptops weren't designed with constant memory swapping and removal in mind. Established input standards like USB or microSD slots also won't cut it as a replacement for proper RAM.

Like I said, you're overengineering a solution for a problem that could be solved as easily as "let's use cheap Chinese-manufactured RAM instead". I know there is a lot of chaos, fear and doom being spread right now over this, but in the grand scheme of things it's nothing more than an annoyance.
 
A guaranteed solution is pivoting back to legacy platforms that simply don't require much RAM.
Gaming is well beyond the point of diminishing returns, and the RAM jump from the PS3's 512 MB to the PS4's 8 GB wasn't worth it.
There are PS3 games that wouldn't be possible on PS2 (TLOU, etc.) but the reality is that almost everything on PS4 and PS5 could be done on a PS3 at a slightly lower resolution.
 
What we need is more fabs.
Nah, only true if we are certain that demand isn't heading for a cliff and a sudden drop. Those fabs cost so much, you only build them if they'll be used long term. Even with the increased prices and exploded margins, seemingly no one is making that investment and commitment.
The orange retard might even force tariffs on RAM too, so much that it becomes sensible to build a fab in Murica, but unless that decision is never backtracked, nobody would want to do that unless actually forced by that clown.
That's imho the only individual that could force it; the market itself apparently seems not to budge.
 
This isn't an announcement or scoop or anything at all; instead, it's just me having a really quick idea WRT solutions for RAM shortages. Truth is, this type of shit is probably going to keep happening as time goes on, and in greater degrees of severity. For mass-market consumer electronics in particular, the lack of a real long-term solution will kill the industry off sooner or later, because things like RAM and NAND are always going to be in demand from more & more industries, naturally driving up costs.

So a thought that came to me is, why doesn't a company create a universal, modular RAM standard in tandem with a JEDEC specification? I'm thinking of something as a cross between Dell's CAMM memory, the M.2 NVMe storage standard, and microSD cards. It would build off the decades of memory standardization we've seen going from FPM to EDO to SDRAM to DDR to GDDR to HBM, and from bubble memory to ROM to NOR to NAND, you get the idea.

The idea is simple: a memory interface standard (at the logical, physical, port-connect & memory device physical dimension level) that scales by utilizing commonly agreed-upon minimum-capacity (i.e. 1 Gbit) "memory units", each with an 8-bit interface along each of four data interconnect points (to interface with neighboring memory units, and in some cases the controller & bus interface), integrated on a "card layer" that can both increase capacity and ensure connectivity between the memory units via a multi-point mesh data interconnect. You still have a row/column setup; you still use the "column count" to define bus width and the "row count" to define capacity with this type of standard, etc. Sticking with the microSD card dimensions, a practical capacity limit is probably around 2 GB (32-bit interface per card, 4x2 arrangement) to 8 GB (64-bit interface per card, 8x8 arrangement).

The thing is, you get this memory on devices the size of microSD cards, and certain devices might have slots for just one, or up to two, or four, or eight etc. Devices would stipulate pairing requirements, i.e. 2x 32-bit 2 GB cards for a 64-bit interface/4 GB RAM capacity limit, etc. There's no theoretical limit to the number of slots that can be interfaced, just a practical one based on the type of device and its market, physical footprint, power target (both performance & TDP consumption) etc. I figure you could design this type of memory with replication of different standards in mind, i.e. GDDR for GPUs, HBM for data centers, DDR for CPUs, NAND for storage etc. Mind, we are not talking about the same physical design as those memories; the actual memory logic for this would be more universal and scalable, but particular features of the memory controller could be adapted to replicate these pre-existing types.
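For what it's worth, the row/column arithmetic above can be sketched as a toy model. Nothing here is a real JEDEC spec; the function names and the `unit_gbit` parameter are my own, and note that the 2 GB / 4x2 figure only works out if each unit holds 2 Gbit rather than 1 Gbit:

```python
def bus_width_bits(columns: int, unit_width_bits: int = 8) -> int:
    # Each column contributes one memory unit's 8-bit interface to the card bus.
    return columns * unit_width_bits

def capacity_gb(rows: int, columns: int, unit_gbit: float = 1.0) -> float:
    # Every unit in the rows x columns grid contributes its capacity.
    return rows * columns * unit_gbit / 8  # 8 Gbit per GB

# The OP's 8x8 / 64-bit / 8 GB figure checks out with 1 Gbit units:
assert bus_width_bits(8) == 64
assert capacity_gb(8, 8) == 8.0

# The 4x2 / 32-bit / 2 GB figure implies 2 Gbit units (1 Gbit units give 1 GB):
assert bus_width_bits(4) == 32
assert capacity_gb(2, 4, unit_gbit=2.0) == 2.0
```

Treat the numbers as illustrative; a real spec would also have to account for ECC, redundancy, and interconnect overhead.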

In the same way you can, today, take a microSD card from your phone and pop it in your laptop, that's what kind of future I see existing for volatile memory, especially in the consumer electronics space. This way, a person just buys the amount they foresee themselves using, they don't have to worry if it's "compatible" with their devices (they just assume it automatically is), and can swap it along devices as they see fit. It'd need to be very plug-and-play, and I still expect devices supporting it would need a small block (32 MB - 512 MB) of soldered RAM installed (or maybe some NOR with XIP support) for when the user pulls the cards out (fall back automatically to a low-power, internal RAM/NOR-backed UEFI/BIOS environment while cards are swapped in or out before auto-prompting the user to initiate the "real" UI space when minimum RAM amount is inserted & detected).
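A minimal sketch of that fallback behaviour, assuming invented names (`Device`, the mode strings) and an arbitrary 4 GB minimum; no real standard defines any of this:

```python
# Hypothetical hot-swap fallback: if installed card capacity drops below the
# device minimum, fall back to the small soldered RAM/NOR environment, and
# return to the "real" UI once enough card memory is present again.

FALLBACK, NORMAL = "fallback-env", "normal-env"

class Device:
    def __init__(self, min_required_gb, soldered_mb=256):
        self.min_required_gb = min_required_gb  # minimum card capacity for the real UI
        self.soldered_mb = soldered_mb          # always-present soldered RAM/NOR block
        self.cards_gb = []
        self.mode = FALLBACK

    def _update_mode(self):
        # Run the "real" environment only while enough card memory is present.
        self.mode = NORMAL if sum(self.cards_gb) >= self.min_required_gb else FALLBACK

    def insert_card(self, gb):
        self.cards_gb.append(gb)
        self._update_mode()
        return self.mode

    def remove_card(self, gb):
        self.cards_gb.remove(gb)
        self._update_mode()
        return self.mode

dev = Device(min_required_gb=4)
assert dev.insert_card(2) == FALLBACK   # below minimum: stay in the soldered-RAM env
assert dev.insert_card(2) == NORMAL     # minimum met: real UI can start
assert dev.remove_card(2) == FALLBACK   # card pulled: drop back gracefully
```

The real complication, of course, is that pulling live RAM means migrating or discarding its contents, which this sketch glosses over entirely.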

But with modern advances in memory fields (including CXL 3.0 and beyond), I don't feel this type of development is far-fetched or even far off, and it's going to become an absolute necessity, because the resources to keep making more and more RAM & NAND that's soldered and non-transferrable are going up in price while depleting in supply. We're going to have to start thinking of volatile memory in a renewable-resource type of way (paired with smarter logic for upscaling, compute (PNM, PIM), etc.), especially if we want consumer electronics spaces like gaming to exist in a non-niche capacity beyond the next 10-20 years, IMHO.

Anyway, just a brief concept for a potential memory standard & solution. Game consoles would really benefit from this, clearly, considering the frankly stupid increases we just saw with PS5 today, and will very likely see with Xbox and Nintendo in the near future. Let alone what this means for things like the PS6. It'd be cool if Valve were already developing something like this, considering the Steam Machine's specs are somewhat more modest, but I doubt they are. And yes, there are aspects to latency that'd have to be ironed out with this type of memory concept, clearly, but that could be solved over time. Plus, there are always workarounds (i.e. arranging your data as SoA to only pay the penalty on initial access latency), tho granted that requires more effort on the part of developers.
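To illustrate the SoA workaround mentioned in passing, here's a toy Python sketch (real implementations would use packed native arrays, e.g. numpy or C structs, where the layout difference actually matters):

```python
# Array-of-structs vs struct-of-arrays. With AoS, scanning one field hops
# across interleaved records; with SoA each field is one contiguous run, so
# on a higher-latency pool you'd pay the initial access penalty once and
# then stream the rest sequentially.

N = 1000

# AoS: each record carries all of its fields together.
aos = [{"x": i, "y": 2 * i, "z": 3 * i} for i in range(N)]
sum_x_aos = sum(rec["x"] for rec in aos)   # touches every whole record

# SoA: one contiguous array per field.
soa = {
    "x": list(range(N)),
    "y": [2 * i for i in range(N)],
    "z": [3 * i for i in range(N)],
}
sum_x_soa = sum(soa["x"])                  # touches only the x array

assert sum_x_aos == sum_x_soa == N * (N - 1) // 2
```

Both layouts compute the same result; the difference is purely in access pattern, which is what high-latency external memory would punish or reward.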

Does anyone else around see potential in this type of memory concept, especially as a solution to what we're seeing today (or part of the solution, anyway)? Any ideas on how to improve it? Alternative memory solutions (that are realistic and relatively practical)?

(*Bolded certain parts for emphasis to act as a kind of TL;DR)

This doesn't solve any problems and introduces a ton of new ones.

1) RAM is already modular!

2 / 4 / 8 / 16 / 32 GB sticks are suitable for scaling in most applications. There's fuck-all benefit to being able to slot 26 GB of RAM instead of 32 GB in most practical scenarios. There's also nothing stopping Sony, Nintendo, or smart-device manufacturers from making the memory in the PS5 swappable other than cost. It's simply cheaper for them to solder that shit directly to the board, and more space-efficient too. They could have the memory on 2 or 4 slottable modules if they wanted, but customers want smaller, cheaper technology, not modular technology.

2) Memory type, bus speeds, clock timings, voltage, etc are not a fixed thing, you can't just have a universal ram chip.

We have modular RAM and it's already not universally swappable. This isn't because everyone in the industry is a big dumb dumb or just wants to sell you a new thing. Newer-generation DDR specs have implementations of different voltage regulation technologies. Sometimes they have worse performance in some areas but better overall performance. CAS/CL timings are higher on DDR5 modules than DDR4, but DDR5 modules run at a much higher frequency that makes up for it. The processor has to be able to support and operate at those frequencies too. God, just imagine the fucking nightmare when customers wonder why the chips they bought from 4 different companies, years apart, with wildly different frequencies and clock specs, don't just play nicely together when smashed into the same device. Can't the processor just communicate with all of them on different timings/frequencies? You end up having to rebuy sets of compatible RAM anyway... like we currently do.
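The CL-vs-frequency trade is easy to put numbers on. Absolute first-word latency is CL divided by the I/O clock (which is half the transfer rate, since data moves on both clock edges), and typical DDR4-3200 CL16 and DDR5-6000 CL30 kits land at the same ~10 ns (illustrative retail-kit figures, not spec minimums):

```python
# First-word latency for DDR-style memory: CAS latency (CL) is counted in
# I/O clock cycles, and the I/O clock is half the transfer rate (double
# data rate), so latency in ns = CL * 2000 / (MT/s).

def cas_latency_ns(cl, transfer_mt_s):
    return cl * 2000 / transfer_mt_s

# Higher CL on DDR5 is offset by the higher clock: both land at 10 ns.
assert cas_latency_ns(16, 3200) == 10.0   # typical DDR4-3200 CL16 kit
assert cas_latency_ns(30, 6000) == 10.0   # typical DDR5-6000 CL30 kit
```

Which is exactly why raw CL numbers across generations aren't comparable on their own.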

3) The more sockets you have the more problems you introduce with syncing and bus design.

You ever wonder why the RAM sockets are inches away from the CPU? That's because any interference basically fucks your high-frequency communication bus. Even traces back to the CPU have to be adjusted and length-matched in many custom designs, to avoid one chip having a shorter route than another back to the controller and causing race conditions. Trying to design all your RAM to be slottable onto a series of small ports is asking for trouble, especially externally via a microSD-type slot.

4) The socket type isn't what makes ram expensive. The chips themselves are the majority of the cost.

4 x 16, 2 x 32, and 1 x 64 GB kits are basically the same price, and making more PCBs and soldering the RAM is cheap. That is not the problem the industry is facing with pricing. The silicon RAM chips themselves are basically made by 3 major companies: SK Hynix, Micron, and Samsung. These guys are at capacity, and short of building new facilities that would take years to get up and running, they can't just double their production overnight. Even those facilities aside, there are probably further issues with the supply of raw natural resources, meaning new mines, new roads, power infrastructure, etc. The issue in the market is primarily a sudden demand spike because of AI. No socket type would have prepared us for this. Things will normalize over time, but this is not a technology problem. It's an economics/production/logistics problem.

5) The industry is already taking steps toward modular socket designs for smaller devices.

Check out Compression Attached Memory Modules:
https://arstechnica.com/gadgets/202...es-may-make-upgradable-laptops-a-thing-again/

6) Modular RAM probably introduces bandwidth limitations.

Swappable RAM on your GPU probably sounds cool in theory, and I'd love to upgrade my older 3070 Ti past its 8 GB limitation for a bit of cash. Problem is, high-end GDDR7 cards are probably already pushing the bus as designed to its limits. I'm not sure you can get away with introducing swappable memory chips on these types of super-high-end devices without running into bus and signalling issues. There are probably a bunch of cases where modular memory means performance limitations. You see similar things with many other electrical cables, where splices in the line introduce loss and noise. I would imagine this is an idea the industry has thought of but hasn't been able to practically design around, or it hasn't been seen as a desirable enough feature in most cases to be worth the downsides. It's really only been the past few generations of cards where VRAM has been a huge bottleneck. If anything will make progress on this issue it might actually be AI demand for huge VRAM pools, though they seem to be going in the multi-GPU direction atm.


I really don't see what problem you're solving with anything pitched above.
 