Microsoft AI ambitions struggle to meet goals

These companies are going to find out eventually that outsourcing all of your work overseas and importing millions of visa holders to save money causes your product to go to crap, including AI.
Who do they think is even going to buy these products when nobody has a job and thus no money? They're putting the cart before the horse here.
 
Some AI stuff is really great - speech to text, summarization, image recognition, translation, etc.
Some AI stuff is reasonable - a better search engine, casual image generation, etc.
Some AI stuff is just bad - complex work, anything requiring memory or rare knowledge, etc.

The problem for AI companies is that the good/reasonable stuff isn't worth even tens of billions while they're aiming for trillions, so they try to upsell the stuff that is obviously bad. And not everyone is buying this shit.


Even translation is bad. I mean proper translation, not superficial-level stuff. I ran a few tests on my books (from Spanish to English) and the results were appalling. Each page needed dozens of corrections.

The problem is that execs and AI preachers make the wrong assumption that the easy tasks AI is good at lead naturally to the next level of complexity. Applied to videogames, generating random landscape images is not "a previous step" toward getting a full world map by typing in a few prompts. The gap between those two tasks is huge. Same as believing that a chatbot can write a (good) novel or script. It's nonsense.
 
Microsoft's approach to AI feels underwhelming, whereas Google appears to have a more compelling strategy in this area: its integration seems to make sense and is actually useful. That's a shame, since Office and OneDrive are quite good tools.
 
Even translation is bad. I mean proper translation, not superficial-level stuff. I ran a few tests on my books (from Spanish to English) and the results were appalling. Each page needed dozens of corrections.
It's one of the uses I actually have for AI, and I find it reasonable. Much better than what MTL used to be.
For common stuff it's good; for specific/rare stuff it might trip up, but humans tend to as well.
So basically it's at the level of an average translator, not a highly skilled pro or one with extensive knowledge of your field. But generally that's more than enough.
 
Even translation is bad. I mean proper translation, not superficial-level stuff. I ran a few tests on my books (from Spanish to English) and the results were appalling. Each page needed dozens of corrections.

The problem is that execs and AI preachers make the wrong assumption that the easy tasks AI is good at lead naturally to the next level of complexity. Applied to videogames, generating random landscape images is not "a previous step" toward getting a full world map by typing in a few prompts. The gap between those two tasks is huge. Same as believing that a chatbot can write a (good) novel or script. It's nonsense.
This is the very reason why prices will not drop anytime soon: chatbots still have room to improve.

Even if the AI bubble bursts, the largest players are still committed to multi-year AI/AGI roadmaps requiring massive compute (GPUs, CPUs, HBM, advanced packaging such as CoWoS and 3D stacking, and networking).

This demand isn't driven by hype but by long-term strategy.

Companies like X (xAI), OpenAI, Meta, Google, Microsoft, and Amazon plan to deploy tens of millions of GPUs over the next decade to pursue self-improving models (AGI research) and operate global inference networks.

This creates a baseline demand that stays high regardless of "bubble" conditions.
 
This is the very reason why prices will not drop anytime soon: chatbots still have room to improve.

Even if the AI bubble bursts, the largest players are still committed to multi-year AI/AGI roadmaps requiring massive compute (GPUs, CPUs, HBM, advanced packaging such as CoWoS and 3D stacking, and networking).

This demand isn't driven by hype but by long-term strategy.

Companies like X (xAI), OpenAI, Meta, Google, Microsoft, and Amazon plan to deploy tens of millions of GPUs over the next decade to pursue self-improving models (AGI research) and operate global inference networks.

This creates a baseline demand that stays high regardless of "bubble" conditions.
They plan for as long as investors are happy with the plans. And investors are happy if there is a profit in sight. If there is none, the plans can change.
There is already pushback, as we see in the Oracle and MS cases.

Companies report to boards, and boards aren't there for charity "better humanity" projects.
 
They plan for as long as investors are happy with the plans. And investors are happy if there is a profit in sight. If there is none, the plans can change.
There is already pushback, as we see in the Oracle and MS cases.

Companies report to boards, and boards aren't there for charity "better humanity" projects.
Well, Elon, for example, isn't going to stop until he reaches his goal.

xAI's Colossus 2 - First Gigawatt Datacenter In The World, Unique RL Methodology, Capital Raise
At the end of the day, Elon can get Tesla to invest more or take loans on more of his Tesla and SpaceX stocks to invest tens of billions into xAI. This will allow them to build Colossus 2. No one truly knows how levered Elon is already, but it is widely understood he can always sell and unlock a lot more of his dry powder into xAI. Elon will do everything he can to not lose to Sam Altman.

The Colossus 1+2 goal is a million GPUs, which sucks for us because, based on this HBM roadmap, these chips are increasing in memory capacity by a lot each generation.
[image: HBM memory-capacity roadmap]


Then there's HBF that's gonna make shit worse in the future.
[images: HBF roadmap slides]
 
Some AI stuff is really great - speech to text, summarization, image recognition, translation etc
I deal with the result of AI summarization and translation every day, and it's absolutely not great at either of them, unless the task is basic af.

People get blinded by the fact that what AIs spit out now reads fine essentially 100% of the time. The grammar is on point, the tone is appropriate and the word choices make sense, so they assume the content must be accurate as well. But a lot of the time it isn't, and it's not until someone else has to rely on that AI summary or translation that the cracks begin to show, and then they're the ones who have to spend their time and effort dealing with the consequences while the guy responsible gets to brag about how much easier his job is now thanks to AI.
 
It's one of the uses I actually have for AI, and I find it reasonable. Much better than what MTL used to be.
For common stuff it's good; for specific/rare stuff it might trip up, but humans tend to as well.
So basically it's at the level of an average translator, not a highly skilled pro or one with extensive knowledge of your field. But generally that's more than enough.

You're delusional if you think that so-called AI is at the level of an average translator.
 
AI is fucking dumb, and investors should try to understand this. It works on statistics; it's basically the three suggested words at the top of your smartphone keyboard.
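The keyboard analogy can be made concrete. Here's a toy sketch of my own (not how production LLMs actually work; they use neural networks rather than raw word counts) that suggests next words purely from bigram frequencies:

```python
from collections import Counter, defaultdict

def build_model(text):
    # Count which word follows each word in the corpus.
    model = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word, k=3):
    # Up to k most frequent followers, like a keyboard suggestion bar.
    return [w for w, _ in model[word].most_common(k)]

corpus = "the cat sat on the mat the cat ate the fish"
model = build_model(corpus)
print(suggest(model, "the"))  # ['cat', 'mat', 'fish']
```

Scale the corpus up to the whole internet and the counts up to a trillion neural-network parameters and you get something far more fluent, but the underlying job is still "predict the next token."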

Of course it has some uses, like image generation, summarizing code or research, or little simple stuff like that. But corporations want to make us believe that we are already in the future, with automatic taxis and assistant robots at home. The truth is that all this stuff still has to be done remotely by humans, because AI is dumb.

Neural networks have existed since the dawn of computer science. We can do better stuff now thanks to more processing power, but using so much power for so little advancement isn't worth it. And unless a genius finds a fundamentally new way to build AI, we won't see that robot future. Sorry, investors; I hope you fail hard now.

By the way, is there a definitive way to remove Gemini from my Google searches? I've been adding "-AI", but if there's an option to shut it off forever, I'd like to use it.
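One widely shared workaround (not an official Google setting, so it may stop working at any time) is the `udm=14` URL parameter, which jumps straight to the plain "Web" results tab with no AI Overview. A small sketch of building such a URL:

```python
from urllib.parse import urlencode

def web_only_search_url(query):
    # udm=14 selects Google's plain "Web" results tab, which
    # currently shows no AI Overview. Unofficial; may change.
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("hbm roadmap"))
# https://www.google.com/search?q=hbm+roadmap&udm=14
```

Most browsers also let you register that URL pattern as a custom search engine, so every address-bar search uses it automatically.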
 
I work for a development company that's heavily investing in AI for its staff of programmers, including myself. While there's some use here in terms of assisting with boilerplate code generation, you'd be a fool to integrate this technology into your core business to the degree required for Microsoft to see an ROI on Nadella's AI over-extension. The technology is simply not good enough for the kinds of uses they're trying to force on people. It can't design worth a damn, it follows trends to its own ruin, and it's pathetic at applying basic human common sense. It's a monkey, and should be used for monkey work.

Unfortunately for the tech industry, that directly limits how deep its uses can be, and thus how much people are prepared to spend on it. Running billion-dollar data centres, racking up million-dollar energy bills, and burning a trillion dollars of investment, all so I don't have to write out SQL scripts by hand, isn't smart business. Microsoft, Meta, Google: this industry expected AI to drive legitimately hundreds of billions in new profits annually that would never go away, because they want people to literally build their businesses around it. It seems the market was smarter than they were.
 
People get blinded by the fact that what AIs spit out now reads fine essentially 100% of the time. The grammar is on point, the tone is appropriate and the word choices make sense, so they assume the content must be accurate as well. But a lot of the time it isn't, and it's not until someone else has to rely on that AI summary or translation that the cracks begin to show, and then they're the ones who have to spend their time and effort dealing with the consequences while the guy responsible gets to brag about how much easier his job is now thanks to AI.
I can translate it myself, and on average it'll be no better. I'm no translator, but I'm still quite confident in my language skills.
And I fucking know that a lot, a LOT of people would translate it even worse, even those who are supposed to do it better.
I work in a controlled environment, so I'm in the habit of quickly checking results, and the number of times AI has failed really miserably at those tasks is negligible. It's not ideal, but it's passable.

You're delusional if you thing that so called AI is on a level of an average translator.
You know that "average" means shit by the "high standards" of the internet?
 