Samsung raises DDR5 contract price by over 100% - "No stock"

So many people here are so confident the AI bubble is going to burst. Why not put your money where your mouth is, take short positions, and get rich?

Right? They're in the mirror practicing saying "I told ya so!" while doing nothing about it. Just like my poor friends still telling me I should've invested in Amazon 20 years ago.
 

 
This is one of the few times I hope China just fucking steamrolls the market with Chinese RAM so that Samsung et al. get butt fucked. Same with GPUs. I hope Nvidia gets butt fucked as well.
 
Yes. This will cause more people to buy used, therefore killing the planet a bit slower and possibly teaching people important lessons about not needing the best and latest. A MacBook M1 Max is half the price of a new regular M5, while often outperforming it significantly in benchmarks.

Amen. All the people whining and bitching like children whose parents just took their favorite toy away.

This was needed.
 
This is one of the few times I hope China just fucking steamrolls the market with Chinese RAM so that Samsung et al. get butt fucked. Same with GPUs. I hope Nvidia gets butt fucked as well.
Where do you think "Chinese RAM" is going? Hint: China also needs data centers.
 
We've hit a wall and the future is about efficiency and workarounds, not throwing more resources and power at it. It was gonna happen eventually but gaming is now getting forced into this situation.
Them "tools" make it inefficient and bloat how much ram we need. The lower level programming you do the less ram that is needed.

RAM usage is not getting smaller. Today's programmers don't understand efficiency.
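For what it's worth, you can see the overhead gap inside a single language. A quick Python sketch (the exact byte counts are CPython-specific and approximate): a plain list of ints carries per-object overhead, while a typed array packs raw 8-byte values.

```python
import sys
from array import array

n = 1_000_000
as_list = list(range(n))         # every int is a full Python object
as_array = array("q", range(n))  # packed signed 64-bit integers

# getsizeof(list) counts only the pointer array, so add the int objects too.
# (This double-counts the few cached small ints; close enough for a rough view.)
list_bytes = sys.getsizeof(as_list) + sum(sys.getsizeof(x) for x in as_list)
array_bytes = sys.getsizeof(as_array)

print(f"list : {list_bytes / 1e6:.1f} MB")  # roughly 35-40 MB on CPython
print(f"array: {array_bytes / 1e6:.1f} MB")  # roughly 8 MB
```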
 
And we had people in this thread telling us the companies would just eat the costs. 😂
 
Micron does, but they just announced a focus on enterprise customers (really AI and hyperscalers).
That isn't what they've announced. They've decided to bail from their own DIY business and shut down the Crucial brand. That's all they've said.
I'm still quite certain that this is press-induced panic more than anything substantial.
Those who like to talk about an "AI bubble" should try looking at DRAM pricing changes and applying the same logic there.
 
Glad I went 96GB in my Intel build, but I'm kicking myself for only going 48GB in my Ryzen build.
48GB is fine for a long time. I have 64GB in both my builds, and another 64GB spare in case a module stops working for some reason.
Should be more than enough until I upgrade to DDR6 in five years' time.
 
No stock = not wanting to produce more; instead they want to charge more money for the same output.

Why don't they build factories for the extra demand they've had? Why don't other companies build their own RAM factories?
 
So many people here are so confident the AI bubble is going to burst. Why not put your money where your mouth is, take short positions, and get rich?
It will burst because there is no viable business model behind it. It's the same as the dot-com boom, where you just mentioned a website and got money. Everyone is putting money into AI because nobody wants to go against the common sentiment. Also, most of the people in charge are filthy rich and have pay packages structured in a way where they won't lose that much if it doesn't pan out, and can potentially gain a lot. So most of the time top management has very little reason not to do it:

1 - you win? You get filthy rich
2 - you lose? Not your fault, everybody else lost as well

People have difficulty understanding that in business you don't need to go for perfect; the upside just has to be larger than the downside.
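To put numbers on that asymmetry, here's a toy expected-value sketch (every figure below is invented purely for illustration): even a bet that probably fails is rational for the decision-maker when the personal downside is capped and the upside is huge.

```python
# Toy expected-value sketch of the executive's AI bet.
# All numbers are made up for illustration, not real figures.

p_win = 0.2               # assume only a 20% chance the bet pays off
payoff_win = 100_000_000  # hypothetical personal upside (equity, bonuses)
payoff_lose = -5_000_000  # hypothetical capped downside ("everybody lost as well")

ev = p_win * payoff_win + (1 - p_win) * payoff_lose
print(f"Expected personal value: ${ev:,.0f}")  # $16,000,000 -> take the bet
```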

Where are the truly transformative AI features? I work with startups and see hundreds of SaaS platforms on a weekly basis; all the AI in them is a fancy LLM with a lot of if/then trigger conditions. Almost none of this is adding any value, and it creates a ton of information overload (you have no idea how much productivity in tech was devastated by the adoption of Slack).
 
An LLM will never answer this unless explicitly told (i.e., it found the answer somewhere), as they don't physically read. For simplicity and speed, that step is omitted and they go straight to meanings, and "garlic" as a meaning has no R.
The honest answer would be "I don't understand/know", but LLMs are tailored to give an answer even if it's a shitty one.

No stock = not wanting to produce more; instead they want to charge more money for the same output.

Why don't they build factories for the extra demand they've had? Why don't other companies build their own RAM factories?
They think it's a temporary uptick in demand and part of the normal super-cycle. Demand moves roughly like a sine wave, with above-average and below-average periods taking turns, and it's unwise to overreact to the extremes.
For example, Hynix only plans to put in extra effort if this part of the super-cycle lasts longer than expected, and that will probably be no earlier than 2027. Before then, they are sticking with gradual expansion according to the long-term trends of the market.
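A toy model of that "long-term trend plus cycle" picture (growth rate, amplitude, and period are all invented to show the shape, not real DRAM figures):

```python
import math

# Hypothetical demand: steady secular growth times a sinusoidal cycle.
# Every constant here is made up purely to illustrate the shape.
def demand(year: float) -> float:
    trend = 100.0 * 1.05 ** year                         # 5% long-term growth
    cycle = 1 + 0.15 * math.sin(2 * math.pi * year / 4)  # 4-year swing
    return trend * cycle

for year in range(9):
    d = demand(year)
    print(f"year {year}: {d:6.1f} {'#' * int(d / 5)}")
```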
 
An LLM will never answer this unless explicitly told (i.e., it found the answer somewhere), as they don't physically read. For simplicity and speed, that step is omitted and they go straight to meanings, and "garlic" as a meaning has no R.
The honest answer would be "I don't understand/know", but LLMs are tailored to give an answer even if it's a shitty one.
I know. It's still funny, tho.
 
I'm fine with 32GB as of now and had the wife buy a new mini PC with 32GB for the workplace. Sincerely thinking about whether it's worth upgrading to a PS5 Pro now and reselling the gaming PC next year 😅😁
 
They can do whatever they want as long as they all do the same price increase. There is no stock because they don't want to increase production.

This is why we really need China (or others) to start producing more DDR5 to really compete against SK Hynix, Micron and Samsung.
 

The thing about this test and tests like it is that it exploits a loophole in how LLMs work. LLMs operate on tokens, i.e. bits of words, not single letters. So a word like "strawberry" might be divided into stra - wbe - rry. Moreover, an LLM doesn't see tokens as text like "stra" but as numbers, so it would be something like 24522 - 232111 - 43332. The set of tokens is fixed in advance and is called the vocabulary.
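You can poke at this directly with a tokenizer library. A minimal sketch using the tiktoken package (assuming it's installed; the actual splits and IDs depend on which vocabulary you load, so they won't match the made-up examples above):

```python
import tiktoken

# Load one of OpenAI's BPE vocabularies (an arbitrary choice for illustration).
enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

# The model only ever sees the integer IDs, never the individual letters.
print(token_ids)  # a short list of integers
print(pieces)     # the word chopped into sub-word pieces
```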

The point is that it's an issue with the underlying architecture, not with LLM intelligence itself.
Models with per-letter tokens should solve it without an issue, or just higher-intelligence LLMs that become aware of their own underlying architecture.

The reason we use tokens instead of letters is that tokens are much cheaper to run performance-wise, for the same reason that, for example, Japanese kanji characters can represent whole concepts in one sign rather than needing multiple words or sentences to describe them.
For sure we will have per-letter LLMs; doing it now would just trade away too much performance and size.
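To put a rough number on that trade-off (again using tiktoken as an illustrative tokenizer): self-attention cost grows roughly with the square of sequence length, so going per-letter multiplies compute by roughly the squared length ratio.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "The quick brown fox jumps over the lazy dog."

n_tokens = len(enc.encode(text))  # sub-word units the model actually sees
n_chars = len(text)               # what a per-letter model would see

ratio = n_chars / n_tokens
print(f"{n_tokens} tokens vs {n_chars} characters ({ratio:.1f}x longer)")
# Attention scales ~quadratically with length, so a per-letter model would
# pay roughly ratio**2 times the attention compute on this sentence.
print(f"approx. attention cost multiplier: {ratio ** 2:.0f}x")
```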
 