DRAM prices have increased 170% due to AI demand

 
But think of all the good things AI will bring into your life! You'll get CEO-approved movies made with AI so they don't have to pay anyone, CEO-approved TV streaming shows so they don't have to pay anyone, and time on the internet with social media flooded with AI that's there to keep you in your bubble. Why even play games? Let the AI play them for you and hand you a brief bullet-point summary of the best and worst moments to praise and be outraged by. Then you can video call your family for Christmas and hear valuable tales from your dead grandparents, thanks to the gift of AI. Then you can focus on your work in the factory: go to work, make sure to watch AI-generated content during your break, then get back to work. Enjoy your new life enhanced by AI! You won't ever need to talk to another human being again; AI will chat with other AIs on your behalf, leaving you with even more time to consume content generated just for you!
Maybe a bit of accelerationism is exactly what we need in order to get back to God, family and country.
 
Good thing I decided to pull the trigger and bought my 9950X3D, 128 GB RAM and 5090 in September. Now I see the price has gone from under 500 to almost 1500 euros: it's basically tripled.
 
I've posted this in another thread, but due to all this AI race bullshit that sent DRAM prices (especially DDR5) skyrocketing over the past few weeks, I've decided to grab a new PC now instead of waiting until spring (I was banking on Nvidia supposedly refreshing the 5000 series with Super variants).

Got a nice BF discount on the system I bought yesterday, so I'm very content with this decision of getting a new PC sooner rather than later. Specs are 9800X3D, 32GB DDR5 and 5070ti.
Congrats on your new PC.
 
Congrats on your new PC.
Thanks, my bro! Really happy with it since it's a big upgrade over what I had before (old i7‑6700K, 16GB DDR4, and a 1080ti). It's also been stable in everything I've tested since yesterday: WuWa with everything maxed out including RT, DNA, ZZZ, 3DMark Time Spy (ran a bunch of loops), etc. The only thing I still don't like is Windows 11, but it is what it is. :messenger_grinning_sweat:
 
I hate how all these trends and "new advancements" are completely screwing up the PC ecosystem. Crypto mining and GPUs, and now AI and DRAM. The absolute worst.
 
Thanks, my bro! Really happy with it since it's a big upgrade over what I had before (old i7‑6700K, 16GB DDR4, and a 1080ti). It's also been stable in everything I've tested since yesterday: WuWa with everything maxed out including RT, DNA, ZZZ, 3DMark Time Spy (ran a bunch of loops), etc. The only thing I still don't like is Windows 11, but it is what it is. :messenger_grinning_sweat:
Oh wow, that's a huge upgrade. You won't need any upgrades for several years now. I also recently switched to 11, and I'm not really warming up to it either.
 
The models get better within their limitations, but the limitations are still there. Like it or not, these AIs are just statistical models at the end of the day, and statistics can only achieve so much.

Can you tell the class all about the limitations?

There are some AI researchers on X that I'd like to share your findings with.
 
Can you tell the class all about the limitations?

There are some AI researchers on X that I'd like to share your findings with.
>AIs rely mainly on statistical models, which makes them really bad at continuity and long-term thinking. This is evident in things like video generation, where they often can't produce clips longer than five seconds without beginning to hallucinate. Even generating something like a comic strip often comes with shifting backgrounds and other inconsistent elements. It's a very fundamental flaw in the way these AIs operate. It's possible to improve on, at the cost of larger data pools for training, but...

>The available data pool these models need to train on is limited. As much data as we have now, there's a growing, exponential need for more in order to improve the models, and it just might not be available. Worse yet, these pools are starting to get contaminated with AI-generated content itself, causing inbreeding.

>These models' learning capabilities are very limited outside of the training phase. This makes goals like AGI practically impossible to achieve, among other limitations.

Of course, that's all without getting into the economics of the thing.
 
>AIs rely mainly on statistical models, which makes them really bad at continuity and long-term thinking. This is evident in things like video generation, where they often can't produce clips longer than five seconds without beginning to hallucinate. Even generating something like a comic strip often comes with shifting backgrounds and other inconsistent elements. It's a very fundamental flaw in the way these AIs operate. It's possible to improve on, at the cost of larger data pools for training, but...

>The available data pool these models need to train on is limited. As much data as we have now, there's a growing, exponential need for more in order to improve the models, and it just might not be available. Worse yet, these pools are starting to get contaminated with AI-generated content itself, causing inbreeding.

>These models' learning capabilities are very limited outside of the training phase. This makes goals like AGI practically impossible to achieve, among other limitations.

Of course, that's all without getting into the economics of the thing.

LOL ok. Well, suffice to say I don't have any new research insights into the fundamental limitations of LLMs to share with said AI researchers on X.
 
Well, suffice to say I don't have any new research insights into the fundamental limitations of LLMs
The vast majority of people barely understand the basics, which is why they treat AI like some code wizardry that can do anything. Kind of like how people in the '50s and '60s thought we'd have fully human-like robots by the early 2000s, because "electronics" was essentially magic to them.
 
>AIs rely mainly on statistical models, which makes them really bad at continuity and long-term thinking. This is evident in things like video generation, where they often can't produce clips longer than five seconds without beginning to hallucinate. Even generating something like a comic strip often comes with shifting backgrounds and other inconsistent elements. It's a very fundamental flaw in the way these AIs operate. It's possible to improve on, at the cost of larger data pools for training, but...

>The available data pool these models need to train on is limited. As much data as we have now, there's a growing, exponential need for more in order to improve the models, and it just might not be available. Worse yet, these pools are starting to get contaminated with AI-generated content itself, causing inbreeding.

>These models' learning capabilities are very limited outside of the training phase. This makes goals like AGI practically impossible to achieve, among other limitations.

Of course, that's all without getting into the economics of the thing.

The coherency issue over time you mentioned just doesn't seem to be a fundamental problem at all. Each new model improves on that. Google's Genie 2 world model had an 'interaction horizon' of 10-20 seconds. Their Genie 3 model got that up to several minutes.

Insofar as there is a data limit in the form of readily accessible basic text data, there are already multiple solutions and mitigations, like better curation, multimodal data and synthetic data.

The models are already using a significant amount of synthetic data and, no, it isn't harming the models or their progress.

Continual learning is being worked on and there has been some progress. Nobody doubts it is a hard problem to crack but there isn't any evidence it's an impossible challenge to overcome.
 
The coherency issue over time you mentioned just doesn't seem to be a fundamental problem at all. Each new model improves on that. Google's Genie 2 world model had an 'interaction horizon' of 10-20 seconds. Their Genie 3 model got that up to several minutes.
Like I said, it's possible to improve on that, but it gets exponentially harder. Also, on the Genie example you gave: remembering an artificial environment (as in saving to memory what it previously generated) is different from coherence and continuity. What Genie is doing there is different from creating a video.

Insofar as there is a data limit in the form of readily accessible basic text data, there are already multiple solutions and mitigations, like better curation, multimodal data and synthetic data.
Those aren't solutions, they're mitigations. Synthetic data, for example, goes back to the inbreeding issue I mentioned.

The models are already using a significant amount of synthetic data and, no, it isn't harming the models or their progress.
Over time, repeated use of synthetic data will cause assumption loops and biases that'll need explicit correction. There's even a term for its overuse (Model Autophagy Disorder, or MAD). It can at best complement real-world data, but cannot replace it.
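The "inbreeding" dynamic is easy to demonstrate in miniature. Here's a toy sketch (hypothetical code, names made up, not taken from any paper): repeatedly fit a Gaussian to samples drawn from the previous generation's fitted Gaussian, i.e., each "model" trains only on the previous model's output. The estimated spread drifts toward zero, so later generations lose the diversity of the original data, which is the autophagy loop in a nutshell.

```python
import random
import statistics

def collapse_demo(generations=500, n_samples=20, seed=0):
    """Toy model-autophagy sketch: each generation fits a Gaussian
    to samples drawn from the previous generation's fitted Gaussian."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        # "Synthetic data": samples from the current fitted model.
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # Refit on that synthetic data only; finite-sample error compounds.
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"spread at gen 0: {hist[0]:.3f}, at gen 500: {hist[-1]:.3g}")
```

The spread shrinks because each refit introduces finite-sample estimation error with no anchor back to the real distribution; mixing in even a fraction of real data each generation damps the collapse, which is why synthetic data can complement but not replace the real pool.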

Continual learning is being worked on and there has been some progress. Nobody doubts it is a hard problem to crack but there isn't any evidence it's an impossible challenge to overcome.
There are bags of tricks for specific situations that can help mitigate things or move the frontier in certain cases, like the one I mentioned above with saving the details of the artificial environment to memory, so that turning away from a rock and back doesn't make the rock change or disappear. But again, these don't solve the fundamental problems with these models.
 