ReasonBeing
*I want to preface this by stating that I am making no value judgements about the people involved in the examples I give. Conversation should steer clear of the specific politics involved, as I am commenting on the alleged use of AI, not on the outcomes of any particular piece of legislation. If there seems to be any bias in the examples I refer to, it is only because these are the examples I am aware of. Feel free to add your own if you know of any.*
I was just reading an article about the US Health Secretary's commission report, "Make America Healthy Again" (MAHA). It is specifically alleged that many of the cited sources either do not exist or are referenced with dead links.
As many of us know, it has been observed that when you ask an AI to research a topic, it will often include claims backed by hallucinated sources. In fact, this quirk is something educators use to determine whether a student has been cheating. Is the fact that the MAHA report contains such errors enough evidence to conclude that at least some of the research was AI generated?
We also have an alleged case involving recent US economic policy. It was found that if you asked a particular AI to recommend a trade policy with specific desired outcomes, it generated a plan of action eerily similar to the actual plan as explained to the public. This example leaves more room for plausible deniability, but I still find it a compelling example of potential AI use in governance. Even though it "feels bad" to imagine such a scenario, when I take the time to think about what some of the advantages could be, I find myself finding arguments for both sides.
This has almost certainly started happening across the globe. How do you feel about a truly mindless abstraction having a tangible impact on how our governments operate? Is it a good thing to utilize a tool that can potentially introduce some objectivity to complex topics, or should we take measures to deter such practices? What does governance look like in 50 years' time, as these models grow in capability?