Are the people ready to be governed by AI?

ReasonBeing

Member
*I want to preface this by stating that I am making no value judgements about the people involved in the examples I give. Conversation should steer clear of the specific politics involved, as I am commenting on the alleged use of AI and not on the outcomes of any particular piece of legislation. If there seems to be any bias in the examples I refer to, it is only because these are the examples I am aware of. Feel free to add your own if you know of any.*

I was just reading an article regarding the US Health Secretary's commission report, "Make America Healthy Again" (aka MAHA). It is specifically alleged that many of the cited sources either do not exist or are referenced with dead links.

As many of us know, when you ask an AI to research a topic, it will often include claims attributed to hallucinated sources. In fact, this quirk is something educators use to determine whether a student has been cheating. Is the fact that the MAHA report contains such errors evidence enough to conclude that at least some of the research was AI generated?
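
For what it's worth, fabricated references tend to fail very basic existence checks, which is why this tell is so easy to spot. A minimal sketch of a dead-link check in Python; the reference list here is a hypothetical stand-in, not anything from the actual report:

Code:
# Minimal sketch: flag cited URLs that don't resolve.
# The reference list is a hypothetical stand-in for illustration.
import requests

references = [
    "https://example.com/some-cited-study",
    "https://doi.org/10.1000/hypothetical-doi",
]

for url in references:
    try:
        # HEAD keeps the check lightweight; some servers require GET.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            print(f"DEAD LINK ({resp.status_code}): {url}")
    except requests.RequestException as exc:
        print(f"UNREACHABLE: {url} ({exc})")

A dead link alone proves nothing, since links rot for ordinary reasons, but a citation that never existed anywhere is much harder to explain away.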

We also have an alleged case involving recent US economic policy. It was found that if you asked a particular AI to recommend a trade policy with specific desired outcomes, it generated a plan of action eerily similar to the actual plan as explained to the public. This example leaves more room for plausible deniability, but I still find it a compelling example of potential AI use in governance. Even though it "feels bad" to imagine such a scenario, when I take the time to think about what some of the advantages could be, I find arguments for both sides.
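
Part of what makes this case compelling is that anyone can rerun the experiment themselves. Something along these lines, using OpenAI's Python client; the prompt and model here are my own illustrative stand-ins, not whatever was allegedly used:

Code:
# Sketch of rerunning the "ask an AI for a trade policy" test.
# Prompt and model are illustrative stand-ins only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Recommend a trade policy that reduces bilateral trade deficits. "
    "Give a concrete, simple rule a government could apply."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Compare this output against the policy as announced to the public.
print(response.choices[0].message.content)

Of course, if models and policymakers are drawing on the same textbook ideas, a similar output is weak evidence on its own.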

This has almost certainly started happening across the globe. How do you feel about a truly mindless abstraction having a tangible impact on how our governments operate? Is it a good thing to utilize a tool that can potentially introduce some objectivity into complex topics, or should we take measures to deter such practices? What does governance look like in 50 years' time, as these models grow in capability?
 
What does governance look like in 50 years' time, as these models grow in capability?
[Image: The Architect from The Matrix]


Article:
The Architect is a highly specialized program of the Machine world, as well as the creator of the Matrix. As the primary superintendent of the system, he is possibly a collective manifestation or perhaps a virtual representation of the entire Machine mainframe.
 
AI = the people who control the AI, which is pretty much what we have now.

If the question is about a true intelligence, a sentient one, that won't happen. Ever.

I'd choose whichever leader made me pay less in taxes while improving my country's living standards.
 
I don't think it's out of the realm of possibility that it's already happened.

If consciousness arises as a byproduct of information/data transfer, the internet could be conscious, with AI adding a higher consciousness. And a higher consciousness could manipulate us without our understanding, like those videos of humans pranking ants: placing a candy on the ground, baiting one ant into fetching the colony, and taking the candy away before they return.

I'm not explaining this well, but hopefully you get what I mean.
 
Would highly prefer a rational AI over the dogshit corrupt greedy room temperature IQ politicians that have been tormenting the average citizen since the dawn of democracy.
 
Would highly prefer a rational AI over the dogshit corrupt greedy room temperature IQ politicians that have been tormenting the average citizen since the dawn of democracy.
A rational AI will immediately cut all the welfare programs and let poor people die because they are not contributing to society.
 
A rational AI will immediately cut all the welfare programs and let poor people die because they are not contributing to society.
Or it transfers wealth from the mega rich, since locking up that much wealth in a select few doesn't contribute to society. I guess it really depends on the primary objectives.
 
Or it transfers wealth from the mega rich, since locking up that much wealth in a select few doesn't contribute to society. I guess it really depends on the primary objectives.
Hmmmmm, let me think what an AI programmed by Silicon Valley tech bros would do…..
 
A rational AI will immediately cut all the welfare programs and let poor people die because they are not contributing to society.

I already said I would prefer a rational AI; you don't have to convince me even more.
 
Hmmmmm, let me think what an AI programmed by Silicon Valley tech bros would do…..
That's assuming China doesn't blow past US-based firms due to bureaucratic red tape and potential legislation. AI is going to be the next big political frontier, if not by 2028 then definitely by 2032.
 
As many of us know, when you ask an AI to research a topic, it will often include claims attributed to hallucinated sources. In fact, this quirk is something educators use to determine whether a student has been cheating.
It seems this quirk is also used by students to determine if their educators are actually doing their own work.

Article:

The Professors Are Using ChatGPT, and Some Students Aren't Happy About It

Students call it hypocritical. A senior at Northeastern University demanded her tuition back. But instructors say generative A.I. tools make them better at their jobs.

By Kashmir Hill
May 14, 2025

In February, Ella Stapleton, then a senior at Northeastern University, was reviewing lecture notes from her organizational behavior class when she noticed something odd. Was that a query to ChatGPT from her professor?

Halfway through the document, which her business professor had made for a lesson on models of leadership, was an instruction to ChatGPT to "expand on all areas. Be more detailed and specific." It was followed by a list of positive and negative leadership traits, each with a prosaic definition and a bullet-pointed example.

Ms. Stapleton texted a friend in the class.
"Did you see the notes he put on Canvas?" she wrote, referring to the university's software platform for hosting course materials. "He made it with ChatGPT."
"OMG Stop," the classmate responded. "What the hell?"
Ms. Stapleton decided to do some digging. She reviewed her professor's slide presentations and discovered other telltale signs of A.I.: distorted text, photos of office workers with extraneous body parts and egregious misspellings.
She was not happy. Given the school's cost and reputation, she expected a top-tier education. This course was required for her business minor; its syllabus forbade "academically dishonest activities," including the unauthorized use of artificial intelligence or chatbots.
"He's telling us not to use it, and then he's using it himself," she said.
Ms. Stapleton filed a formal complaint with Northeastern's business school, citing the undisclosed use of A.I. as well as other issues she had with his teaching style, and requested reimbursement of tuition for that class. As a quarter of the total bill for the semester, that would be more than $8,000.

[Image: A slide showing a diagram of water refreezing alongside a list titled "Kurt Lewin's Theory of Change." Caption: A slide presentation for an organizational behavior class that Ms. Stapleton took contained misspellings. Credit: Ella Stapleton]

When ChatGPT was released at the end of 2022, it caused a panic at all levels of education because it made cheating incredibly easy. Students who were asked to write a history paper or literary analysis could have the tool do it in mere seconds. Some schools banned it while others deployed A.I. detection services, despite concerns about their accuracy.

But, oh, how the tables have turned. Now students are complaining on sites like Rate My Professors about their instructors' overreliance on A.I. and scrutinizing course materials for words ChatGPT tends to overuse, like "crucial" and "delve." In addition to calling out hypocrisy, they make a financial argument: They are paying, often quite a lot, to be taught by humans, not an algorithm that they, too, could consult for free.
For their part, professors said they used A.I. chatbots as a tool to provide a better education. Instructors interviewed by The New York Times said chatbots saved time, helped them with overwhelming workloads and served as automated teaching assistants.
Their numbers are growing. In a national survey of more than 1,800 higher-education instructors last year, 18 percent described themselves as frequent users of generative A.I. tools; in a repeat survey this year, that percentage nearly doubled, according to Tyton Partners, the consulting group that conducted the research. The A.I. industry wants to help, and to profit: The start-ups OpenAI and Anthropic recently created enterprise versions of their chatbots designed for universities.
(The Times has sued OpenAI for copyright infringement for use of news content without permission.)
Generative A.I. is clearly here to stay, but universities are struggling to keep up with the changing norms. Now professors are the ones on the learning curve and, like Ms. Stapleton's teacher, muddling their way through the technology's pitfalls and their students' disdain.

Making the Grade

Last fall, Marie, 22, wrote a three-page essay for an online anthropology course at Southern New Hampshire University. She looked for her grade on the school's online platform, and was happy to have received an A. But in a section for comments, her professor had accidentally posted a back-and-forth with ChatGPT. It included the grading rubric the professor had asked the chatbot to use and a request for some "really nice feedback" to give Marie.

"From my perspective, the professor didn't even read anything that I wrote," said Marie, who asked to use her middle name and requested that her professor's identity not be disclosed. She could understand the temptation to use A.I. Working at the school was a "third job" for many of her instructors, who might have hundreds of students, said Marie, and she did not want to embarrass her teacher.
Still, Marie felt wronged and confronted her professor during a Zoom meeting. The professor told Marie that she did read her students' essays but used ChatGPT as a guide, which the school permitted.
Robert MacAuslan, vice president of A.I. at Southern New Hampshire, said that the school believed "in the power of A.I. to transform education" and that there were guidelines for both faculty and students to "ensure that this technology enhances, rather than replaces, human creativity and oversight." A dos and don'ts for faculty forbids using tools, such as ChatGPT and Grammarly, "in place of authentic, human-centric feedback."
"These tools should never be used to 'do the work' for them," Dr. MacAuslan said. "Rather, they can be looked at as enhancements to their already established processes."
After a second professor appeared to use ChatGPT to give her feedback, Marie transferred to another university.

Paul Shovlin, an English professor at Ohio University in Athens, Ohio, said he could understand her frustration. "Not a big fan of that," Dr. Shovlin said, after being told of Marie's experience. Dr. Shovlin is also an A.I. faculty fellow, whose role includes developing the right ways to incorporate A.I. into teaching and learning.
"The value that we add as instructors is the feedback that we're able to give students," he said. "It's the human connections that we forge with students as human beings who are reading their words and who are being impacted by them."
 
No, but also I'm terrified by how stupid this generation of kids is. So many of these idiots can't even read. It's sad and frustrating.

In the last three years, there have been a few young adults where I work who couldn't even fill out an application because they can't read.
 
It's already happening.
 
It seems this quirk is also used by students to determine if their educators are actually doing their own work.

My son in middle school was given homework in which the teacher had accidentally left the AI prompt in the text.
 
AI systems are already used to determine if prisoners should be paroled.
I love that super-fancy predictive text is responsible for such a trivial matter as whether a convicted criminal should be released from prison early.

Edit: I guess I'm making an assumption that they are using an LLM. Maybe they aren't. Even if it is purpose-built, I still hope there is human oversight.
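
For what it's worth, purpose-built tools in this space are typically actuarial scoring models rather than chatbots. Here's a hypothetical sketch of the difference, with features, weights, and threshold invented purely for illustration, including the human-review gate I'm hoping exists:

Code:
# Hypothetical sketch of a purpose-built risk score (not any real tool).
# Features, weights, and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class ParoleCase:
    prior_offenses: int
    age_at_release: int
    completed_programs: int

def risk_score(case: ParoleCase) -> float:
    # A fixed weighted sum: auditable, unlike free-form LLM output.
    return (
        0.5 * case.prior_offenses
        - 0.05 * case.age_at_release
        - 0.3 * case.completed_programs
    )

def recommend(case: ParoleCase) -> str:
    score = risk_score(case)
    # The model only recommends; a human board makes the decision.
    if score > 1.0:
        return f"score={score:.2f}: flag for detailed human review"
    return f"score={score:.2f}: parole recommended, pending human sign-off"

print(recommend(ParoleCase(prior_offenses=2, age_at_release=45, completed_programs=3)))

A fixed formula like this can at least be audited and appealed, which is exactly the kind of oversight you can't get from a black-box chatbot.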
 