Windows president says platform is "evolving into an agentic OS," gets cooked in the replies — "Straight up, nobody wants this"

Is AI the Future of OS


  • Total voters
    321
What do you mean it turned your consumption of content into a more efficient experience? I need explanations. Isn't consumption a passive thing, like you are watching a movie? What is the goal here and what is your workflow?
I think he means, as an example of consumption: instead of sitting there watching a 1 hour youtube video getting a summary via Harpa AI of the important part of the information he was looking for. He doesn't mean spoiling a movie instead of watching it.
 
Last edited:
I unapologetically love it. I already use agentic tools and it has turned my consumption of content into a deeper more efficient experience like no other. An agentic OS would further complement my overarching experience.

With that said, I do have a serious issue with Microsoft in the future of an agentic experience. CoPilot is sometimes like the worst nanny ever due to its role and place in institutions, thus being overprotective against some content (which could lead to legal actions), often blocking processing legit content.

..Which is why I have removed CoPilot and moved onto testing agentic tools like Harpa AI / Sider AI.

You really do have to expand on what you actually mean by "my consumption of content into a deeper more efficient experience like no other". Because currently, without context. You come across as an incredibly stupid person.
 
Last edited:
Maybe they're talking about like in the late 90s with "internet wil be the main thing", and launched Me/2000

...that didn't turn out good thou
 
Idk, AI everything seems almost inevitable at this point. Not only are all the big corporations betting everything on it, the govt is too.

Almost all new investment money is going into ai, hardware, data centers, and everything that goes with it.

They aren't gonna let it fail and its gonna go EVERYWHERE. The economy depends on it succeeding in a big way.
 
I think he means, as an example of consumption: instead of sitting there watching a 1 hour youtube video getting a summary via Harpa AI of the important part of the information he was looking for. He doesn't mean spoiling a movie instead of watching it.
I dunno. For learning, I'll watch and make notes and do. Having an AI summary doesn't help me.
 
Dave's Garage actually laid out exactly what would bring me back to Windows. It's obvious he's still invested, he helped write the kernel after all. His video was very insightful and almost made me install W11 just because I admire Dave so much. I didn't end up doing it, but man I hope someone listens to him. Sounds awesome.
Who's Dave? Does How would anyone like him get you excited about windows 11 and what does he have to do with it?
 
But seriously, I'm genuinely interested in what you use it for and how! Please explain here.
He won't because he probably had gpt write his initial message and even he doesn't know what the fuck he is talking about.

"Efficiently consuming content with the use of ai" is one of the most naueseating things I've read in a while.





It all gets turned off by me because I assume it will impact my fps.
 
Last edited:
I unapologetically love it. I already use agentic tools and it has turned my consumption of content into a deeper more efficient experience like no other. An agentic OS would further complement my overarching experience.

With that said, I do have a serious issue with Microsoft in the future of an agentic experience. CoPilot is sometimes like the worst nanny ever due to its role and place in institutions, thus being overprotective against some content (which could lead to legal actions), often blocking processing legit content.

..Which is why I have removed CoPilot and moved onto testing agentic tools like Harpa AI / Sider AI.
Did AI write this, lol. Sounds like NeoGAF is one of those sources of efficient content consumption.

Is ChatGPT in the room with us? 😉
 
Last edited:
I dunno. For learning, I'll watch and make notes and do. Having an AI summary doesn't help me.
Try NotebookLM. It's the best learning resource I have seen and tried and it's (mostly) free.

It lets you compile your sources (and find more) and then build your queries against those sources only. That can include long papers, YouTube, web pages, etc.

Then you can build flash cards, create podcasts, do mind maps, study guides and a lot more. It's awesome and possibly my favorite Google product of all times.




To be fair this is mostly Enterprise targeted. Since Corpos are pushing for AI everywhere all the time in their employees and MS obliges.

Thing is what MS is also pushing on Enterprise side is governance, discoverability and security. They had a whole lot of that just announced at Ignite.

Anyways, life personal PC there should be a way to disable all of this crud.

I do admit some of the MS new support for local offline models is pretty cool though.
 
The problem is that AGI/ASI would defeat the purpose of an OS and even the person operating that OS hence the pushback. There would be no real need for "Windows" anymore. The people who use an OS want an OS that doesn't spy on them teaching these models to replace them. The old fashioned people just want a machine they operate to perform tasks themselves.
I disagree.

Even with AGI/ASI around people would still use tech resources and therefore you would need an interface to interact with them that would use both voice and graphics. All this shit is transitional but beyond this stage lies its true potential. We've seen glimpses of how this could be in movies, games and books.

I understand why a lot of people reject the idea of training AI models that will eventually replace them and in a lot of ways it is just resistance to change, just like any other revolutionary tech that has been part of humanity in the past and changes things radically... Only this time the potential it has goes beyond anything we have seen before.
 
I disagree.

Even with AGI/ASI around people would still use tech resources and therefore you would need an interface to interact with them that would use both voice and graphics. All this shit is transitional but beyond this stage lies its true potential. We've seen glimpses of how this could be in movies, games and books.
Visual and voice communication would be needed but I think local hardware, an OS in particular "windows" and most productivity applications would become pointless if AGI and especially ASI take over. I mean why would you need a window to launch different applications like showing you excel if you can just ask your assistant (who is more clever than you if ASI) to perform the tasks and just show you the relevant info through a display? You wouldnt need to navigate an OS to perform tasks at all because it would be doing it and just displaying information that you might not even need anymore. The single display app or browser could show you things from the cloud and be just as competent as the local "OS". Then you've got to ask what is an OS anymore, I wouldn't need file management and folders, I wouldn't need to be presented a list of applications to launch, I wouldn't click on a gmail or outlook icon to show me a list of emails, I wouldn't need a calendar if the ASI is my personal assistant I simply talk to. What tasks would I be performing in the OS UI myself? That's the whole point I believe, I would be giving up agency to the "agent" and not need to perform tasks myself. Some even tried AI interfaces without a screen in the past with the Humane Ai Pin but this is before AGI/ASI. Wear some Google Glass like device and just ask it to do everything/show things without you needing to actually perform anything in what you would consider an "OS" UI today. You wouldn't need to operate anything if it's agentic. It would all just be data accessed and consumed on the cloud and no actual tasks to perform in a UI.
 
Visual and voice communication would be needed but I think local hardware, an OS in particular "windows" and most productivity applications would become pointless if AGI and especially ASI take over. I mean why would you need a window to launch different applications like showing you excel if you can just ask your assistant (who is more clever than you if ASI) to perform the tasks and just show you the relevant info through a display? You wouldnt need to navigate an OS to perform tasks at all because it would be doing it and just displaying information that you might not even need anymore. The single display app or browser could show you things from the cloud and be just as competent as the local "OS". Then you've got to ask what is an OS anymore, I wouldn't need file management and folders, I wouldn't need to be presented a list of applications to launch, I wouldn't click on a gmail or outlook icon to show me a list of emails, I wouldn't need a calendar if the ASI is my personal assistant I simply talk to. What tasks would I be performing in the OS UI myself? That's the whole point I believe, I would be giving up agency to the "agent" and not need to perform tasks myself. Some even tried AI interfaces without a screen in the past with the Humane Ai Pin but this is before AGI/ASI. Wear some Google Glass like device and just ask it to do everything/show things without you needing to actually perform anything in what you would consider an "OS" UI today. You wouldn't need to operate anything if it's agentic. It would all just be data accessed and consumed on the cloud and no actual tasks to perform in a UI.
You are right in the sense that a list of applications and such wouldn't be needed but even if every single compute task is done in the cloud, you still need a way to interface with it locally. It is still an OS because by definition an OS is the interface humans use to interact with computers and digital devices, regardless of where that processing is done.

Yes, you can ask "tell me what NASDAQ is today" but if you want to see a trend you won't want it to just tell you "On the first quarter of 2023 it was, on the second quarter was, then went up to during the third quarter, then it took a dip". For this, you might actually want to look at a chart. There are several scenarios (not just Q&A) that require information to be displayed for human consumption because that's the way we as humans work. What a very capable AI would save you from is having to open a browser, typing "www.google.com" or whatever and then type the information you want to see. Even more so if the kind of data you are looking for might reside in different systems. So for example, you could ask it "give me a comparison of how my investments have performend year to date compared to NASDAQ". This would require you to pull information from your portfolio, put it in Excel or whatever and do the same with NASDAQ from another source (as right now you wouldn't normally find that precise comparison by default in your own investment porffolio at a glance).

It wouldn't be Windows in it's current form for sure but what Microsoft might be doing is striving to be that universal interface and so far they are the closest to getting there thanks to their market penetration and because they have been investing heavily in AI for a while. You are right about different ways to display this information (the end game would be to have it injected directly into your brain through something like neural link) but even just interface to interact with the AI layer (which would orchrestate many different underlying systems) would be an OS.

Now, while that stuff is a reality there are many steps in between that require Windows as it is right now and AI in the presentation we have today (LLM's/Agentic AI). Copilot right now is pretty crappy but it's slowly getting there and at some point not too far into the future it will make having to interact with OS in the way we currently do obsolete.

It's not for everyone, that's for sure. For example I prefer reading than listening because I can read faster so I would still want to see my E-Mail list but there are people that probably prefer to listen to their E-Mails.
 
Who's Dave? Does How would anyone like him get you excited about windows 11 and what does he have to do with it?
Microsoft Windows kernel OG. He's passionate and helped create an OS durable enough to withstand two decades of "international" software "developers" spaghetti slop being heaped on top of it. And somehow it still mostly works. Incredible.
 
Last edited:
I use agentic AI for tons of stuff now. I performed about a month's worth of system architecture design in about 3 days with validated and running code samples. I'd love to have that kind of productivity tooling built in to my operating system. I don't see where agentic AI has implications for gaming unless you want Windows to play the game for you, but for actual productivity tasks it would be pretty rad for me.
 
You are right in the sense that a list of applications and such wouldn't be needed but even if every single compute task is done in the cloud, you still need a way to interface with it locally. It is still an OS because by definition an OS is the interface humans use to interact with computers and digital devices, regardless of where that processing is done.
It's a barebones OS (as in kernel, device drivers etc) in that it doesn't need to be "windows" anymore. A Chromebook, a browser, a single app or any other barebones thinclient OS that can display text, images and video would do the trick. I guess what I'm trying to get at is that the OS advantage actually disappears when you no longer need applications, folders, icons, a windowing system, etc.
Yes, you can ask "tell me what NASDAQ is today" but if you want to see a trend you won't want it to just tell you "On the first quarter of 2023 it was, on the second quarter was, then went up to during the third quarter, then it took a dip". For this, you might actually want to look at a chart.
This is not really agentic though with or without a display. Even pretty stupid Alexa combined with an Echo Show can display this sort of thing. Agentic would not require you to even ask this question of it. Ask yourself why do you want to look at the NASDAQ? Agentic would do that task for you. So "invest for me" would be the task that AGI and especially ASI would perform for you as an agent and likely outperform you on. As AI becomes more and more advanced the need for those steps where you have agency in a task disappear.
There are several scenarios (not just Q&A) that require information to be displayed for human consumption because that's the way we as humans work. What a very capable AI would save you from is having to open a browser, typing "www.google.com" or whatever and then type the information you want to see. Even more so if the kind of data you are looking for might reside in different systems. So for example, you could ask it "give me a comparison of how my investments have performend year to date compared to NASDAQ". This would require you to pull information from your portfolio, put it in Excel or whatever and do the same with NASDAQ from another source (as right now you wouldn't normally find that precise comparison by default in your own investment porffolio at a glance).
I get that visual information is good in making the human understand but I'm not seeing how this would put Windows ahead. I'm saying this would be the end of local applications/programs that windows relies on to keep people needing windows, especially productivity applications. AI would only require a capability to display information to the human. any local "application/program/windowing system" we have currently is only there as a user interface for the human to perform a task, who might not even be required to do the productivity tasks anymore. So for example why are you using excel to display this? Just ask for that "report" of whatever it is you're using excel to achieve. Hell you can even skip the report and let it be the boss of whatever that report was going to influence.
Now, while that stuff is a reality there are many steps in between that require Windows as it is right now and AI in the presentation we have today (LLM's/Agentic AI).
This is true, for now it is useful for tasks that AI is not smart enough to be completely agentic on but when AGI/ASI becomes a reality windows could very well disappear. There would be no reason for humans to operate an "operating system" UI or applications, folders, settings, etc at all.
It's not for everyone, that's for sure. For example I prefer reading than listening because I can read faster so I would still want to see my E-Mail list but there are people that probably prefer to listen to their E-Mails.
I hate dealing with emails in general. If you have an agent who does that for you you might not need to read or listen to them at all.
 
Last edited:
It's a barebones OS (as in kernel, device drivers etc) in that it doesn't need to be "windows" anymore. A Chromebook, a browser, a single app or any other barebones thinclient OS that can display text, images and video would do the trick. I guess what I'm trying to get at is that the OS advantage actually disappears when you no longer need applications, folders, icons, a windowing system, etc.

This is not really agentic though with or without a display. Even pretty stupid Alexa combined with an Echo Show can display this sort of thing. Agentic would not require you to even ask this question of it. Ask yourself why do you want to look at the NASDAQ? Agentic would do that task for you. So "invest for me" would be the task that AGI and especially ASI would perform for you as an agent and likely outperform you on. As AI becomes more and more advanced the need for those steps where you have agency in a task disappear.

I get that visual information is good in making the human understand but I'm not seeing how this would put Windows ahead. I'm saying this would be the end of local applications/programs that windows relies on to keep people needing windows, especially productivity applications. AI would only require a capability to display information to the human. any local "application/program/windowing system" we have currently is only there as a user interface for the human to perform a task, who might not even be required to do the productivity tasks anymore. So for example why are you using excel to display this? Just ask for that "report" of whatever it is you're using excel to achieve. Hell you can even skip the report and let it be the boss of whatever that report was going to influence.

This is true, for now it is useful for tasks that AI is not smart enough to be completely agentic on but when AGI/ASI becomes a reality windows could very well disappear. There would be no reason for humans to operate an "operating system" UI or applications, folders, settings, etc at all.

I hate dealing with emails in general. If you have an agent who does that for you you might not need to read or listen to them at all.
I agree with most of that. It is a barebones OS but still an OS. Does it need to be Windows? No. Does Microsoft want it to be Windows? Yes.

It is agentic AI but different use cases. It's true that you might want to delegate your whole decision making to an ASI/AGI (which in itself involves much more radical change) but there will still be a lot of users that want some kind of involvement. Many people will still want to use charts and display all kinds of information for many things like decision making, communicating and even teaching.

I understand hating dealing with emails or many other things but again, humans will still need to communicate and might prefer different types of interfaces. I don't like text to speech not because of how it sounds or whatever (it's very competent nowadays) but because I prefer reading than listening or watching videos (unless I'm doing something else simultaneously). At the end, all of that could be possible so we can pick and choose how to approach it and that's a big part of what the OS does.
 
You just know Microsoft will make DirectX 13 packed with exclusive features or stuff that's hard to translate to other APIs like Vulkan or translation layers like Proton, just to keep people on Windows for gaming related purposes.
 
Last edited:



Sad Jim Carrey GIF
 

A few days ago, Microsoft announced that Windows 11 is undergoing an agentic overhaul. The company indirectly warned that security vulnerabilities may be exposed, and today they issued an updated notice. "As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs." After installing Windows 11 Build 26220.7262, you'll find a new toggle for "Experimental agentic features" in Settings > System under "AI Components." Fortunately, this is an optional feature and must be enabled manually.

Upon turning the feature on, Windows will show a warning that these capabilities are experimental and might affect your device. In practice the security risk is the bigger concern. New attack techniques tied to autonomous agents are already appearing, with cross-prompt injection standing out. In those attacks, malicious instructions are concealed inside ordinary documents or interface elements so an agent follows them instead of its original task. That could allow an agent to install malware, leak payment details, or carry out other harmful actions.
Microsoft says these agents run inside an "Agentic Workspace," where each agent gets a scoped, auditable account and its actions are recorded for later review. The company compares this to Windows Sandbox, but agents are designed to persist and can continue to act on files across sessions, which expands the possible attack surface. By default an agent may be granted read and write access to common folders such as Downloads, Desktop, Documents, Pictures, Music and Videos. Even with limited action sets and separate execution accounts, those defaults leave gaps until Microsoft implements stronger protections like finer-grained permissions and robust defenses against prompt injection.

What a fucking piece of crap...
 




What a fucking piece of crap...
This is hilarious
 
Even after reading the warning Microsoft gave about huge performance drops and security issues, I mean, you need to be a brainrot fanboy, the most idiot kinda of fanboy, when even the company you lick balls are telling you about the danger of turning ON the agent.
 
Idk, AI everything seems almost inevitable at this point. Not only are all the big corporations betting everything on it, the govt is too.

Almost all new investment money is going into ai, hardware, data centers, and everything that goes with it.

They aren't gonna let it fail and its gonna go EVERYWHERE. The economy depends on it succeeding in a big way.
This is why central planning never works, improper allocation of resources.
 




What a fucking piece of crap...

Chris Titus to the rescue
 
The day I need to talk to my computer to do basic tasks is the day I stop using computers. This is insanity, and all these huge tech companies have lost the plot.
 
Wait until this product goes on sale that you can choose to purchase or not.

Absolutely terrifying. It's over.
And if it catches on and the other old way becomes the nokia 3310 of the computing world?

Nobody is scared btw. Just saying what the future might look like and it's not something that somebody who doesn't want to talk to their computer might like.

But I get it, MSFT AI goals! Got to push them.
 
Last edited:
People can't be that dumb to turn on this agent.
You'd be surprised how many people voluntarily offload their thought process without an ounce of critical assessment.

There's already cases of people who've grown so dependent on resorting to AI cough up a seemingly "confident" looking answer to them instead of contemplating it for themselves. Its really encouraging people to not work for knowledge and disable their independent thinking. This is what unrestricted unethical technological pushes look like.
 
Last edited:
And if it catches on and the other old way becomes the nokia 3310 of the computing world?

Nobody is scared btw. Just saying what the future might look like and it's not something that somebody who doesn't want to talk to their computer might like.

But I get it, MSFT AI goals! Got to push them.

Nothing about this notion makes sense.

Literally nobody is talking about removing the ability to control computers or OS using keyboards/mice and other physical inputs.

You can still buy old Nokia-style phones with keypads. You can still buy record players and CD players.

What are you even trying to describe? A world where a voice only interface is so incredibly useful, powerful and clearly superior that companies and consumers decide to stop using because it has been rendered completely obsolete?

Again, wow, absolutely terrifying beyond words. We'll leave aside that it sounds absurd.

The level of incoherent hysteria here about literally every aspect of this technology is fascinating.
 
Last edited:
Wait until this product goes on sale that you can choose to purchase or not.

Absolutely terrifying. It's over.
And what happens when Apple inevitably follows suit? What happens when there is no viable alternative to a traditional computing experience, where you interface with a computer like you do now?
Sure, you'll likely say Linux or something else--but that is just not doable for a large group of people for a lot of reasons.

We're seeing an attempt at eroding traditional computing experiences, all for the sake of making the computing experience "easier" or "more efficient", or whatever else corporate bullshit speak these clowns will spit out.

I'm not denying the advantages of AI, and the help it can bring a lot of people, especially in a professional environment. But if I want to hop on my computer on a Sunday morning while drinking my coffee, I don't want to have to talk to the fucking thing just so I can read the news, check my portfolio, or just read some sports updates. Especially if that change comes with a massive cost of performance and security.

If Microsoft and others want to give users more "agency" in the computing process, give us actual choices and let us choose how we want to use our devices, not ramrodding this AI bullshit into everything without our consent.
 
Nothing about this notion makes sense.

Literally nobody is talking about removing the ability to control computers or OS using keyboards/mice and other physical inputs.

You can still buy old Nokia-style phones with keypads. You can still buy record players and CD players.

What are you even trying to describe? A world where a voice only interface is so incredibly useful, powerful and clearly superior that companies and consumers decide to stop using because it has been rendered completely obsolete?

Again, wow, absolutely terrifying beyond words. We'll leave aside that it sounds absurd.

The level of incoherent hysteria here about literally every aspect of this technology is fascinating.
You seem a bit hysterical there. I already said I'm not saying this is "terrifying beyond words", you need to calm down. I just replied to somebody who said they hate the idea of talking to their computer and I said it might be unavoidable in the future if these type of devices catch on. The idea of "Oh but you can still buy Nokias" wasn't in dispute but those things have been marginalised. It wouldn't even connect to a network because 3G is being shut down this year so hooray you have a useless device now!
 




What a fucking piece of crap...
FYI, I recently broke an AI/LLM with excessive agency (in a lab environment). The news of these kinds of vulnerabilities shouldn't be very surprising. The tech is still in an infantile stage and maturing, albeit not under good circumstances imo. There's probably more waiting to be uncovered.

There's all sorts of logic attacks and tricks you can perform to fool an AI and make it do some dirty work.
 
Last edited:
FYI, I recently broke an AI/LLM with excessive agency (in a lab environment). The news of these kinds of vulnerabilities shouldn't be very surprising. The tech is still in an infantile stage and maturing, albeit not under good circumstances imo. There's probably more waiting to be uncovered.

There's all sorts of logic attacks and tricks you can perform to fool an AI and make it do some dirty work.

It isn't news and isn't surprising.

It's been known and widely understood that LLMs hallucinate and present (various levels of, as they advance) security vulnerabilities from the start.

Before anybody built anything that could be called an agent or agentic it was known that they would hallucinate and would present security vulnerabilities, because they are built on LLMs.

When the first agentic products or features were show and it was clear that they hallucinated and had security vulnerabilities nobody who understood anything about LLMs was surprised because of course they do. You can prompt-inject a browser agent because you can prompt-inject the LLMs they are built on.

What would be news at this point is that hallucinations are being reduced and security vulnerabilities eliminated.

Why does Microsoft warn that this new and experimental first version of this feature can hallucinate and currently has various security vulnerabilities? Because that is known.

Why do you think the feature is marked experimental?

Are you guys under the impression that everyone in the industry doesn't understand either of these basic known facts about LLMs and that the agents being direct products of LLMs contain the same vulnerabilities plus new ones from agents interacting with the open web and OSs?

The questions are when will the models improve enough to reduce hallucinations to something that is either close to zero or zero and will the security vulnerabilities currently presented by extremely early versions of agents be reduced and eliminated over time?

The conversations on here about agents is currently completely indistinguishable from past conversations about LLMs and video/image/sound/music models whereby for some reason people were sceptical that they would just get better and better.
 
Last edited:




What a fucking piece of crap...

This is unbelievable. This fucking shit should be illegal. They are actively compromising your safety and security on Windows. What the fuck is Microsoft doing? Trash company.
 
It isn't news and isn't surprising.

It's been known and widely understood that LLMs hallucinate and present (various levels of, as they advance) security vulnerabilities from the start.

Before anybody built anything that could be called an agent or agentic it was known that they would hallucinate and would present security vulnerabilities, because they are built on LLMs.

When the first agentic products or features were show and it was clear that they hallucinated and had security vulnerabilities nobody who understood anything about LLMs was surprised because of course they do. You can prompt-inject a browser agent because you can prompt-inject the LLMs they are built on.

What would be news at this point is that hallucinations are being reduced and security vulnerabilities eliminated.

Why does Microsoft warn that this new and experimental first version of this feature can hallucinate and currently has various security vulnerabilities? Because that is known.

Why do you think the feature is marked experimental?

Are you guys under the impression that everyone in the industry doesn't understand either of these basic known facts about LLMs and that the agents being direct products of LLMs contain the same vulnerabilities plus new ones from agents interacting with the open web and OSs?

The questions are when will the models improve enough to reduce hallucinations to something that is either close to zero or zero and will the security vulnerabilities currently presented by extremely early versions of agents be reduced and eliminated over time?

The conversations on here about agents is currently completely indistinguishable from past conversations about LLMs and video/image/sound/music models whereby for some reason people were sceptical that they would just get better and better.
Yeah and no one gives a fuck. Windows used to work fine without all this AI crap inserted into it. It isn't needed. So fuck all this hallucinations and other such bullshit. Just get it out of there (along with all of the other bloatware shit Microsoft shoves into Windows) and just leave it as bare bones as possible like it once was.
 
Yeah and no one gives a fuck. Windows used to work fine without all this AI crap inserted into it. It isn't needed. So fuck all this hallucinations and other such bullshit. Just get it out of there (along with all of the other bloatware shit Microsoft shoves into Windows) and just leave it as bare bones as possible like it once was.

Ok, I'll pass this onto Satya.

He's actually still in hospital recovering from being cooked in the replies as per the OP.
 
Last edited:
Ok, I'll pass this onto Satya.

He's actually still in hospital recovering from being cooked in the replies as per the OP.
Well lets hope there is enough of a revolt to this shit and good old Satya understands that people don't want this shit invading their operating systems. But we all know that once your mate Satya gets his mind onto something, he spends billions to make sure it's what people want.

He has fucked up Xbox. He will fuck up Windows.
 
It isn't news. It's been known and widely understood that LLMs hallucinate and present (various levels of, as they advance) security vulnerabilities.

Before anybody built anything that could be called an agent or agentic it was known that they would hallucinate and would present security vulnerabilities, because they are built on LLMs.

When the first agentic products or features were show and it was clear that they hallucinated and had security vulnerabilities nobody who understood anything about LLMs was surprised because of course they do.

What would be news at this point is that hallucinations are being reduced and security vulnerabilities eliminated.

Why does Microsoft warn that this new and experimental first version of this feature can hallucinate and currently has various security vulnerabilities? Because that is known.

Are you guys under the impression that everyone in the industry doesn't understand either of these basic known facts about LLMs and that the agents being direct products of LLMs contain the same vulnerabilities plus new ones from agents interacting with the open web and OSs?

The questions are when will the models improve enough to reduce hallucinations to something that is either close to zero or zero and will the security vulnerabilities currently presented by extremely early versions of agents be reduced and eliminated over time?

The conversations on here about agents is currently completely indistinguishable from past conversations about LLMs and video/image/sound/music models whereby for some reason people were sceptical that they would just get better and better.
The fact, at least according to you, they knowingly shipped a vulnerable hallucinating AI agent, with supposedly known security issues, should've caused major concern. That should've made it disqualified for market release from the very beginning. It would be a ridiculously huge admission of safety risks. Experimental feature or not.

This is the kind of shit that should've been subjected to thorough safety scrutiny before deployment. I'd be utterly livid if this got inserted into a working business/enterprise environment un-tested.

They're exposing users to attack surfaces which they, Microsoft, apparently were fully aware of existed. How the hell should that give them any relief? If they really knew about it beforehand, then that's just unbelievably reckless. We're living in a digital age where the internet and technology has become more fused and integrated than ever into our daily lives, for better or worse. This isn't fun and games anymore. Its serious shit that has tangible consequences.

Would you be okay with someone selling you a car without door glass and to encourage hot-wiring, too?

P.S: Btw, I wasn't addressing the hallucination part specifically in my original post. I was strictly referring to the vulnerability issue. I'm fully aware of AI models have a risk to throw up some "hallucinating" garble which unfortunately some people intake without caution.
 
Last edited:
Top Bottom