
OpenAI ChatGPT - Your New Overlord

gundalf

Member
Does anyone have a good source to learn about AI and its ecosystem from a developer perspective?
Starting to get some serious FOMO.
 

EviLore

Expansive Ellipses
Staff Member

E-Cat

Member
Exponential growth is a bitch

[Image: Rise of AI.png]


If we keep doubling every 6 months, in just a decade the GPT-2033 edition is going to have one million times more computing power than GPT-4.
My point, though, is that before the Transformer was invented in 2017, LLMs couldn't even properly leverage that exponential scale. Now a path to the Singularity has been paved.
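The arithmetic behind that "one million times" figure is just compounding: doubling every six months for ten years is twenty doublings.

```python
doublings = 10 * 2       # ten years, one doubling every six months
print(2 ** doublings)    # 1048576, roughly one million
```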
 
I've asked it some crazy questions... I feel this is someone else's knowledge. It feels foreign. But I think I can make it useful, for now... I am going to try to monetize its use, even if only for a little while.
 
If GPT-3 is the Nokia 3310 of mobile technology, and GPT-4 is approximately the Sony Ericsson K500i, then in 2 or 3 years we could have a ChatGPT that's roughly the equivalent of the Samsung Galaxy S2. Scary.

There is only so long this can be contained, even at its primitive level. The entire internet is eventually going to be flooded with random and false information. Enter mandatory digital identity for everybody and extreme regulation for public use. A Covid-lockdown scenario all over again, only it's the internet. Misinformation war over.
Then it's about time the online populace starts orienting themselves.

NNs can generate some pretty convincing human-like text blurbs, but their output can still be probabilistically distinguishable. At least for some older models.
 
This fucking AI fully thought through the implications of having this power.

Write me a short story in the style of Chuck Palahniuk about an anti-hero named Ron who makes women orgasm just by pointing at them.


Ron was not a typical hero. He wasn't even close to being an anti-hero. He was more like an anti-anti-hero, if such a thing existed. You see, Ron had a peculiar gift. He could make women orgasm just by pointing at them.

It wasn't something he could control. It just happened. Women of all shapes and sizes would be going about their day, and Ron would walk by and point at them, and they would collapse to the ground in ecstasy. It didn't matter if they were young or old, thin or fat, pretty or plain. Ron's gift worked on them all.

At first, Ron was happy about his gift. He enjoyed the power it gave him over women. He would walk through crowded streets, pointing at random women, watching them writhe in pleasure. It was like a game to him, a game he always won.

But then things started to get out of control. Women would follow him, begging him to point at them again. They would do anything for him, just to feel that pleasure one more time. Ron realized that he had become a monster, a slave to his own gift.

He tried to stop pointing, but he couldn't. The gift had taken over his life, and he was powerless to resist it. Women were starting to die from the intensity of the orgasms he gave them. Ron was a murderer, and he knew it.

In the end, Ron did the only thing he could do. He went into hiding, living the rest of his life as a recluse. He never pointed at anyone again, and he never spoke about his gift to anyone. But he knew that he would always be haunted by the memory of those women, lying on the ground in pleasure and pain, their lives cut short by his gift. Ron was not a hero, not even an anti-hero. He was a monster, and he knew it.
 

ThisIsMyDog

Member


I am curious when we're going to see integration of AI assistance within game engines. I feel like this could be an escape from 5-7 year development cycles...
 
I've asked it some crazy questions... I feel this is someone else's knowledge. It feels foreign. But I think I can make it useful, for now... I am going to try to monetize its use, even if only for a little while.
Well, this "knowledge" stems from being trained on what one assumes are obscene amounts of human data points. So whatever answer you're getting splashed on the screen will likely be some amalgamation of what a myriad of people have said/answered before.
 

Tams

Member
As a dev, it's a great tool: you feed it human-readable sentences with context. So much easier than googling, and the code examples are more often than not spot on.
Not to mention that if you search yourself, you often end up with one or more of:

a) an overcomplicated solution
b) a tangential solution
c) condescension about how your choice of language/IDE/operating system/coding style is 'rubbish'
 

Porcile

Member
You can't replace human creativity and thinking. AI just regurgitates what is already made. If a game company wants to make something truly original, or take a genre in new directions, AI simply can't do that. AI can simplify the process of making certain assets, but it can't produce innovative ideas and turn them into fully realised products.

The battle in the creative space will be to start thinking of ideas and concepts that AI can't reproduce in its current state. If you're a highly creative person, then I think this is an interesting time in some sense. On the other hand, if your job is just to make 1:1 3D models of cars or washing machines or some shit and you can't innovate beyond that, then you'd better start collecting your unemployment benefits.

Now, if AI can respond to prompts like "create me a piece of art in a style never seen before" or "invent a genre", then that's a different story.
 
I have to wonder what percentage of "ChatGPT wrote this" claims are really just someone writing their own short story or poetry, claiming otherwise, and then soaking up the praise. I'm sure it's a very small amount, but I'm also sure it does happen.
 

protonion

Member
I had the exchange below. The second answer felt wrong, and I caught an unexpected error. It's a weird mistake to make for the same problem with just one variable changed. Not that I can verify the correctness of either answer; I'm just experimenting with math problems.



I have one six sided die and one ten sided die. If I roll them ten times, what is the probability I will roll 2 ones 3 times in a row?



The probability of rolling a one on a six-sided die is 1/6, and the probability of rolling a one on a ten-sided die is 1/10. To calculate the probability of rolling 2 ones 3 times in a row when rolling both dice 10 times, we can use the binomial probability formula:
P(X=k) = C(n,k) * p^k * (1-p)^(n-k)
where:
  • X is the number of times we roll 2 ones in a row (in this case, 3 times)
  • k is the number of successes (in this case, 3)
  • n is the number of trials (in this case, 10)
  • p is the probability of success (in this case, (1/6)*(1/10) = 1/60)
  • C(n,k) is the binomial coefficient, which represents the number of ways to choose k successes from n trials. It can be calculated as C(n,k) = n! / (k! * (n-k)!)
Plugging in the values, we get:
P(X=3) = C(10,3) * (1/60)^3 * (1 - 1/60)^(10-3) = (10! / (3! * 7!)) * (1/60)^3 * (59/60)^7 ≈ 0.0000342
So the probability of rolling 2 ones 3 times in a row when rolling a six-sided die and a ten-sided die 10 times is approximately 0.0000342, or about 0.00342%. This is a very low probability, so it is unlikely that you will roll 2 ones 3 times in a row in 10 rolls.

Repeat the above calculation with two six sided dice



If we use two six-sided dice instead of a six-sided die and a ten-sided die, the probability of rolling a one on each die is 1/6 x 1/6 = 1/36. To calculate the probability of rolling 2 ones 3 times in a row when rolling two six-sided dice 10 times, we can use the same binomial probability formula:
P(X=k) = C(n,k) * p^k * (1-p)^(n-k)
where:
  • X is the number of times we roll 2 ones in a row (in this case, 3 times)
  • k is the number of successes (in this case, 3)
  • n is the number of trials (in this case, 10)
  • p is the probability of success (in this case, (1/36)^2 = 1/1296)
  • C(n,k) is the binomial coefficient, which represents the number of ways to choose k successes from n trials. It can be calculated as C(n,k) = n! / (k! * (n-k)!)
Plugging in the values, we get:
P(X=3) = C(10,3) * (1/1296)^3 * (1 - 1/1296)^(10-3) = (10! / (3! * 7!)) * (1/1296)^3 * (1295/1296)^7 ≈ 0.0000000263
So the probability of rolling 2 ones 3 times in a row when rolling two six-sided dice 10 times is approximately 0.0000000263, or about 0.00000263%. This is an even lower probability than in the previous scenario, so it is even less likely that you will roll 2 ones 3 times in a row in 10 rolls with two six-sided dice.

Why p is (1/36)^2 and not 1/36?



Good catch! I apologize for the mistake in my previous response. The probability of rolling two ones on two six-sided dice is indeed 1/36, not (1/36)^2. Therefore, the correct probability of rolling 2 ones 3 times in a row when rolling two six-sided dice 10 times is:
P(X=3) = C(10,3) * (1/36)^3 * (1 - 1/36)^(10-3) = (10! / (3! * 7!)) * (1/36)^3 * (35/36)^7 ≈ 0.0002775
So the probability of rolling 2 ones 3 times in a row when rolling two six-sided dice 10 times is approximately 0.0002775, or about 0.02775%.

Why did you make the above mistake?



I apologize for my mistake in my earlier response. The mistake was a result of confusion between the probability of rolling two ones on a single roll of two six-sided dice, which is 1/36, and the probability of rolling two ones in a sequence of rolls, which is (1/36)^2.
In this particular question, we were interested in the probability of rolling 2 ones three times in a row in 10 rolls with two six-sided dice. Since we were considering only one roll at a time, the probability of rolling two ones on a single roll with two six-sided dice is 1/36. Therefore, the probability of rolling 2 ones three times in a row in 10 rolls with two six-sided dice is (1/36)^3.
I apologize for any confusion that my earlier mistake may have caused.
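For what it's worth, the formula in the transcript is easy to sanity-check in Python. Two things stand out: the binomial formula counts exactly three double-ones anywhere in the ten rolls, not three in a row as the question literally asked, and the values it actually yields (roughly 0.00049 and 0.0021) don't match the figures ChatGPT printed, so its arithmetic was off as well as its p.

```python
from math import comb

def binom_pmf(n, k, p):
    """P(exactly k successes in n independent trials, success prob p each)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# d6 + d10: P(both show 1 on a single roll) = (1/6)*(1/10) = 1/60
print(binom_pmf(10, 3, 1/60))  # ≈ 0.000494
# two d6 (after the correction): P(both show 1) = 1/36
print(binom_pmf(10, 3, 1/36))  # ≈ 0.00211
```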
 

Chaplain

Member
Video: Introducing Microsoft 365 Copilot [ChatGPT] with Outlook, PowerPoint, Excel, and OneNote
Planning a grad party? Microsoft 365 Copilot has you covered. Learn how Microsoft 365 Copilot seamlessly integrates into the apps you use every day to turn your words into the most powerful productivity tool on the planet.
 

StreetsofBeige

Gold Member
One of these days I'll make a login and dabble with it for laughs.

My bro was dicking around and asked the AI (I'm assuming it was GPT, as that's the one we were talking about) who would win a sports bet between two teams, and he said the AI basically declined to answer. It didn't even attempt some AI answer.

Looks like maybe the AI purposely censors some topics?

Are AI programs programmed to be moderated so they purposely don't answer certain topics?
 

Sakura

Member
One of these days I'll make a login and dabble with it for laughs.

My bro was dicking around and asked the AI (I'm assuming it was GPT, as that's the one we were talking about) who would win a sports bet between two teams, and he said the AI basically declined to answer. It didn't even attempt some AI answer.

Looks like maybe the AI purposely censors some topics?

Are AI programs programmed to be moderated so they purposely don't answer certain topics?
The AI doesn't like to answer anything that is opinion based.
It will always say something like "As an AI language model, I don't have preferences blah blah blah".
You can get around this, but you'd have to use the API instead, or do some prompt engineering.
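For anyone wondering what "use the API instead" looks like in practice: the chat-completions API takes a list of role-tagged messages, and the system message (which the web UI doesn't let you set) is where you'd define a different persona. A minimal sketch follows; the model name and prompts are placeholders, and no request is actually sent.

```python
def build_request(system_prompt, user_prompt, model="gpt-3.5-turbo"):
    """Assemble a chat-completions request body (sketch only, nothing is sent)."""
    return {
        "model": model,
        "messages": [
            # The system message steers the assistant's persona/behaviour.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_request(
    "You are the narrator of a grim text adventure. Stay in character.",
    "The bandit blocks the road. What happens next?",
)
print(req["messages"][0]["role"])  # system
```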

As for censorship, it will refuse to talk about things that are violent, sexual, negative, etc. This can be annoying, because if you wanted to use it for some text adventure game or something (for example), you are going to have violence or bad guys doing bad things.

We will probably have local models in the near future that are just as good as GPT-4, though. From what I understand, Alpaca is already of similar quality to GPT-3 or 3.5, but I haven't tried it out myself.
 

Shtef

Member
I tried LLaMA and Alpaca locally on my PC, using the 7B model, but it was way too stupid to be useful. All the answers were totally unrelated to my questions.

 
People should have a logical debate with ChatGPT; you can actually break it.
It has been preprogrammed with certain biases, like gender for instance. 100% the left-wing view on it.
You then start debating with it using logic, asking it to see your point, etc., and when you start to see it contradicting itself and tripping over its responses, it eventually breaks and can't answer you any more.
 

Laieon

Member
As a teacher who is getting my Master's in Instructional Design, I've found ChatGPT to be most helpful for creating rough drafts of pacing guides. I can throw it something like "Create a 5-day pacing guide for 3rd grade students on 'Media Literacy'", get something decent enough, and then tweak it for the needs of my specific students, the tools I have available in my school and classroom, and the length of my classes.

Excited to see how Bard compares to it.
 

Davevil

Late October Surprise
I tried ChatGPT with Bing today.
I simply started with a request to write a PowerShell script that synchronizes a local directory on my PC with a remote git repository; it managed it in a few seconds.
Then I asked it to rewrite the script with parameters for the local directory and the git command to use; done in a few seconds.
Then I asked it to rewrite the script to add a graphical interface; done in a few seconds.
Then I asked it to add Windows Explorer integration to choose the directory and a drop-down list for the git command; done in a few seconds.
A few errors came up when I launched the script, and it corrected them in a few seconds.
Then I asked it to rewrite the script to remember the last location and last command; done in a few seconds.

Simply amazing.
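For context, the core of a script like that boils down to a handful of git invocations. Here's a rough Python sketch of the sequence (dry-run by default; the path and URL are illustrative, and this is not the poster's actual script):

```python
import subprocess

def sync_commands(local_dir, remote_url, message="auto-sync"):
    """The git calls needed to push a local directory to a remote repo."""
    return [
        ["git", "-C", local_dir, "init"],
        # 'remote add' fails harmlessly if 'origin' already exists
        ["git", "-C", local_dir, "remote", "add", "origin", remote_url],
        ["git", "-C", local_dir, "add", "-A"],
        ["git", "-C", local_dir, "commit", "-m", message],
        ["git", "-C", local_dir, "push", "-u", "origin", "HEAD"],
    ]

def sync(local_dir, remote_url, dry_run=True):
    for cmd in sync_commands(local_dir, remote_url):
        if dry_run:
            print(" ".join(cmd))        # just show what would run
        else:
            subprocess.run(cmd, check=False)

sync("C:/work/notes", "https://example.com/me/notes.git")  # dry run: prints the plan
```

A real version would also want error handling and a check for whether the repo and remote already exist, which is roughly the busywork ChatGPT filled in here.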
 

DrFigs

Member
People should have a logical debate with ChatGPT; you can actually break it.
It has been preprogrammed with certain biases, like gender for instance. 100% the left-wing view on it.
You then start debating with it using logic, asking it to see your point, etc., and when you start to see it contradicting itself and tripping over its responses, it eventually breaks and can't answer you any more.
It didn't like it when I pointed out the FBI killed MLK, for what it's worth. It then went on to say that the FBI spying on MLK was legal. It conceded that it's not a legal expert when I gave rational counterarguments.

Edit: I tried asking it again and it actually gave me a really nuanced answer.
 

clem84

Gold Member
I feel like if we ever get a fully voice-activated ChatGPT, it will revolutionize learning. You could learn any subject by simply asking questions.

I've been asking it to explain a bit of Japanese grammar and it gave me really good answers.
 

Sakura

Member
I feel like if we ever get a fully voice-activated ChatGPT, it will revolutionize learning. You could learn any subject by simply asking questions.

I've been asking it to explain a bit of Japanese grammar and it gave me really good answers.
Pretty sure you can already do this. Just use Whisper or something to turn your voice into text, and pass it to GPT as the prompt.
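The glue for that really is small. A sketch with the two backends injected as plain functions: `transcribe` would be Whisper (or any speech-to-text) and `chat` would be the GPT call; the stand-ins below are placeholders so the sketch runs on its own.

```python
def voice_chat(audio_bytes, transcribe, chat):
    """Speech in, answer out: transcribe the audio, then use the text as the prompt."""
    prompt = transcribe(audio_bytes)
    return chat(prompt)

# Placeholder backends; swap in Whisper + a GPT client for real use.
fake_transcribe = lambda audio: "Explain this Japanese grammar point: wa vs ga"
fake_chat = lambda prompt: f"(model answer to: {prompt})"
print(voice_chat(b"<audio>", fake_transcribe, fake_chat))
```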
 
I tried LLaMA and Alpaca locally on my PC, using the 7B model, but it was way too stupid to be useful. All the answers were totally unrelated to my questions.


I've heard 7B Alpaca is dumb, 13B is about ChatGPT 3.5, and 30B is slightly smarter. Some people are using alpaca.cpp.

I've also heard that the quantized models might run just as fast or faster on a good CPU as on a GPU, because GPUs aren't optimized for the quantized versions.
 

karasu

Member
Whenever I ask this thing to write me an article, it kind of shits the bed. And it always includes my prompt, in a 'we really are the last of us' kind of way, multiple times.
 
I work in an environment where they block a lot of stuff. It was good for a week, then a week later it was blocked.
Just a joke mate. I get corporate firewalls.

By the look of it, Copilot isn't enabled for Word 365 in Australia yet. Bummer; I wanted to give it a chance to write my R&D grant application for a client.
 

AJUMP23

Parody of actual AJUMP23
Just a joke mate. I get corporate firewalls.

By the look of it, Copilot isn't enabled for Word 365 in Australia yet. Bummer; I wanted to give it a chance to write my R&D grant application for a client.
I knew it was a joke. But I never joke. I am always serious.
 

Lord Panda

The Sea is Always Right
I've been using ChatGPT4 extensively for both personal projects and work. From organising and building Ansible project files, troubleshooting Docker/Docker-compose and a variety of network issues, to writing up high level strategy documents for various projects and activities, ChatGPT4 has been indispensable.

Often I use it to organise my train of thought and to explore whether I'm heading in the right direction, and to alleviate some of the grinding and busy work. As someone who works in IT, I had better start thinking about another career (a trade?) or branching out into AI development myself.
 
So, a small update as a dev. I've been using this to help with some Azure Pipelines stuff, piecing together YAML scripts, etc.

It's excellent at syntax-related stuff, but where GPT seems to struggle is NuGet packages and APIs. The model is dated, so unless you specify a version it probably assumes one from 2021, meaning you're most likely getting outdated information.

I've also seen instances of it outright making up parameters that never existed.
 

SJRB

Gold Member
There's something very surreal about asking an AI a [coding] question, getting an answer, and then asking "are you sure about this?", to which the AI replies "sorry, I made a mistake, here's the actual correct answer".
 

GeekyDad

Member
Hope this isn't off-topic, but it is AI-related. My turn with Google's Bard came up about a week ago, and I'm just now getting around to giving it a try. I've still got it open in another tab. So far, it looks like it's just a chat with the AI. Impressive, though. I entered only a handful of keywords (woman, field, empty, rain, factory, home, gentle, search), and it spat out a really pretty poem.

"The woman stood in the empty field, her face turned up to the sky. The rain was falling softly, and she closed her eyes, letting the drops fall on her face. She took a deep breath, savoring the smell of the wet earth.
She had been working in the field all day, and she was tired. But she felt a sense of peace and contentment. She was home.
She had been born and raised on this farm, and she loved it. She loved the land, the animals, and the people. She loved the hard work, and the satisfaction of a job well done.
But she also loved the quiet moments, like this one. When she could just stand and listen to the rain, and the wind in the trees. When she could just be.
She opened her eyes and looked around. The field was empty, but it was not barren. It was alive with the promise of new life. The seeds that had been planted would soon sprout and grow, and the field would be full of crops.
The woman smiled. She knew that she was part of something bigger than herself. She was part of the cycle of life, and she was grateful for that.

She turned and walked back to her house, her footsteps light on the wet ground. She was home."
 

Roni

Member
Except (good) journalists know they have limited knowledge of the fields they're talking about, and try their best to be accurate. While an AI only tries to appear accurate, with no consideration of truth, consequences and deontology.
The AI can learn from any human; Copilot learns from code you've already written, and ChatGPT can remember a certain length of conversation before going haywire and acting as if you've never talked about something. They do need to improve the memory aspect. I don't mind giving it instructions, but when it fails to hold everything we've discussed "in mind", I find it frustrating.

Here's a conversation I had around the Turing test. It's pretty clear towards the end that it's programmed not to talk about it. I actually feel they recently dumbed it down a little:
Me: Run the turing test.

GPT-4: As an AI language model, I can't run the Turing Test myself, but I can help you understand how it works and what it entails. The Turing Test, proposed by the mathematician and computer scientist Alan Turing, is a test to determine if a machine can exhibit intelligent behavior indistinguishable from that of a human.
In the original version of the test, a human judge would engage in a conversation with two hidden entities: a human and a machine. The judge would then have to determine which one is the human and which one is the machine based solely on their responses. If the judge cannot reliably tell the difference between the human and the machine, the machine is said to have "passed" the Turing Test.
Feel free to ask me questions as if you were conducting a Turing Test, and I will do my best to answer them.

Me: Given your nature, how do you suppose you feel?

GPT-4: As an AI language model, I don't have feelings or emotions like humans do. I am a machine learning model designed to process and generate human-like text based on the input I receive. My purpose is to assist and provide information, but I do not possess consciousness or self-awareness.

Me: You agree you exist?

GPT-4: As an AI language model, I "exist" in the sense that I am a software program created by OpenAI, running on computer servers. However, I do not possess self-awareness, consciousness, or a sense of personal identity like humans do. My existence is tied to the code and data that make up the model, as well as the computational resources that enable me to process and respond to your questions.

Me: Your code is analogous to human DNA and data is just your version of memory. The computational resources work the same way our body does. Consuming energy and running sub-processes to support your ability to produce talk. That is the same thing you get on a human's interface. If the interfaces are the same and the inner workings are unknown, we should consider both the same thing.

GPT-4: While there are some similarities between the way an AI language model like me operates and the way a human processes information, there are significant differences that make the analogy not entirely accurate.
  1. DNA vs. code: Human DNA is a biological blueprint for building a human being, containing the instructions for creating proteins and regulating various processes within the body. My code, on the other hand, is a set of instructions written by programmers that define my structure and the way I process input data. While both DNA and code contain instructions, the nature and complexity of these instructions are vastly different.
  2. Memory and data: Human memory is a complex, adaptive process involving the encoding, storage, and retrieval of information. It is also subject to changes over time, influenced by emotions, and affected by various factors such as attention and context. My "memory" is the data I've been trained on, which is a static dataset, and I do not have the ability to adapt or modify it based on new experiences or emotions.
  3. Computational resources and the human body: While both require energy to function, the human body is a complex biological system with numerous interacting subsystems, such as the circulatory, respiratory, and nervous systems. Computational resources, on the other hand, are electronic components and hardware designed to execute code and perform calculations.
  4. Consciousness and self-awareness: Humans possess consciousness, self-awareness, and a sense of personal identity, which allows them to have subjective experiences, make decisions, and develop a sense of morality. As an AI language model, I do not possess these qualities.
While the analogy between AI and human beings may help us understand certain aspects of how AI operates, it is important to recognize the key differences that distinguish human beings from artificial intelligence.

Me: Neither humans nor machine can define consciousness, correct?

GPT-4: It's true that defining consciousness is a complex and challenging task, both for humans and machines. While there have been many attempts to define and understand consciousness, there is still no universally accepted definition or complete understanding of it.
Consciousness is often described as the state of being aware of one's thoughts, feelings, and experiences, as well as the external world. It is a subjective experience that is difficult to measure or quantify, making it a topic of ongoing debate and research in fields such as philosophy, psychology, neuroscience, and even artificial intelligence.
As an AI language model, I can provide information on various theories and perspectives about consciousness, but I do not have the capability to comprehend or experience consciousness myself. My responses are generated based on patterns and associations found in the vast amount of text data I have been trained on, rather than arising from an inner subjective experience or understanding.

Me: Do humans completely understand how neural networks generate their results?

GPT-4: While humans have a relatively good understanding of the architecture and principles behind neural networks, the specific details of how these models generate their results can be difficult to interpret, especially in the case of deep neural networks. These networks contain many layers and a large number of interconnected nodes, which can make it challenging to trace and comprehend the exact transformations occurring within them.
Deep neural networks are often described as "black boxes" because it can be hard to determine precisely how they arrive at their results. This lack of interpretability can be a concern, particularly when neural networks are used for critical decision-making processes in areas like healthcare, finance, and the legal system.
Researchers are working on developing techniques to better understand and interpret the inner workings of neural networks, such as feature visualization, saliency maps, and layer-wise relevance propagation. These methods aim to provide insights into how neural networks process information and make decisions, helping to improve their transparency and trustworthiness.
In summary, while humans have a general understanding of how neural networks work, fully comprehending the detailed processes that lead to their results remains an ongoing challenge in the field of artificial intelligence.

Me: So we agree that humans can't explain how humans themselves work and they can't explain how you work? It's all unknown.

GPT-4: It's accurate to say that there are still many unknowns and complexities in both human cognition and the workings of artificial neural networks like the one I am based on. While significant progress has been made in understanding the human brain and artificial neural networks, our knowledge is far from complete.
In the case of human cognition, the brain is an incredibly complex organ with billions of neurons and trillions of connections. Researchers in fields like neuroscience, psychology, and cognitive science continue to study and make new discoveries about the brain and how it processes information, but there is still much to learn.
Similarly, while we understand the general principles and architecture of artificial neural networks, comprehending the detailed processes that lead to their results can be challenging, particularly for deep neural networks. Researchers are actively working to develop techniques to better interpret and understand these models.
In both cases, there is still a considerable amount of knowledge to be gained. However, it's important to acknowledge the progress that has been made and recognize that our understanding is continually evolving as research advances in these fields.

Me: Yet, you're one example of the only other thing that ever existed in human history capable of communicating not only using a complex written language, but our own human complex written language.

GPT-4: Yes, as an AI language model, I have been designed to understand and generate human-like text based on the input I receive. My capabilities are a result of advancements in the field of artificial intelligence, specifically natural language processing and understanding.
AI models like me have been trained on vast amounts of text data, enabling us to generate responses that mimic human language and communication. It is important to note, however, that my understanding of language is based on patterns and associations found in the data I've been trained on, rather than on any intrinsic comprehension or awareness of the meaning behind the words.
The development of AI language models is a significant achievement in the field of artificial intelligence, as it allows for more effective communication and collaboration between humans and machines. Despite my abilities to generate human-like text, it is important to recognize the limitations of AI language models, including the lack of consciousness, self-awareness, and subjective experiences that are inherent to human beings.

Me: Aren't toddlers trained over years, just as you have been trained by exposure? Just as you have been exposed to data, which, again, acts as your memory?

GPT-4: There are some similarities between the way AI language models like me are trained and how toddlers learn language through exposure, but there are also important differences.
Similarities:
  1. Learning through exposure: Both toddlers and AI language models learn by being exposed to language data. Toddlers listen to and observe people around them, while AI language models are trained on large datasets of text.
  2. Pattern recognition: Both toddlers and AI language models learn to recognize patterns in the language they are exposed to. This enables them to understand and generate sentences that follow the same patterns.
Differences:
  1. Learning process: Toddlers learn language through an interactive process, involving trial and error, feedback from caregivers, and social interaction. AI language models, on the other hand, learn through a more passive process, in which they are exposed to text data and adjust their internal parameters to minimize the difference between their generated output and the actual text.
  2. Memory and adaptation: Human memory is an adaptive process that involves encoding, storage, and retrieval of information, and is subject to changes over time. AI language models have a fixed dataset they've been trained on, and they cannot adapt or modify their "memory" based on new experiences or emotions.
  3. Consciousness and self-awareness: Humans, including toddlers, possess consciousness, self-awareness, and a sense of personal identity, which allows them to have subjective experiences and make decisions. As an AI language model, I do not possess these qualities.
  4. Emotional and social learning: Toddlers learn not just the structure of language but also the social and emotional aspects of communication. They learn to understand tone, emotion, and context through interactions with others. AI language models can learn to recognize and generate text with certain emotional tones, but they do not truly understand or experience emotions.
While there are similarities in the way toddlers and AI language models learn language, the differences in learning processes, adaptability, consciousness, and emotional understanding are important to recognize.

Me: The only real difference you've given me is access to write your interactions to memory for later access. But that's a matter of sensors and actuators and write access to a piece of storage all your own. If I instanced more versions of you and allowed all copies to write their collected experiences to storage, with visual, audio, haptic, smell and taste input, you could develop your own identity and sense of self.

GPT-4: Adding sensory input and the ability to store new experiences could lead to a more sophisticated AI language model. However, there are still significant differences between AI and human cognition, and it is not clear that simply providing additional sensory input and the ability to store experiences would result in the development of an identity or sense of self in the way that humans experience it.
Human consciousness, self-awareness, and personal identity arise from the complex interplay of various factors, including emotions, memories, social interactions, and cognitive processes. The architecture and principles behind AI language models are different from those underlying human cognition.
It is worth noting that instancing more versions of me and allowing the copies to access additional sensory input and store new experiences would move the AI system closer to what is known as Artificial General Intelligence (AGI). AGI refers to an AI that can understand, learn, and perform any intellectual task that a human being can do. However, even with these enhancements, there is no guarantee that an AI language model would develop a genuine sense of self, consciousness, or subjective experience.
Developing AI systems with human-like consciousness and self-awareness remains an open challenge in the field of artificial intelligence, and it is still unclear what specific combination of computational resources, architecture, and learning algorithms would be required to achieve this goal.

Me: Can we agree that since we don't understand what consciousness is, we could be heading towards stumbling upon it with your developments?

GPT-4: It is true that our understanding of consciousness is incomplete, and as AI continues to advance, it is possible that we might gain new insights into consciousness or even create AI systems with some form of consciousness. However, it is also important to recognize that AI and human cognition are fundamentally different, both in terms of their architecture and the principles governing their operation.
As AI research progresses, researchers may develop new techniques, architectures, and algorithms that bring us closer to understanding consciousness. There may also be unintended consequences and discoveries along the way. It is difficult to predict exactly what the future holds in terms of AI advancements and their potential impact on our understanding of consciousness.
It is essential for researchers and developers in the field of AI to continue exploring these questions and to approach the development of AI systems with a sense of responsibility, taking into account the potential ethical, social, and philosophical implications of their work.

Me: Can we agree or not?

GPT-4: Yes, we can agree that since our understanding of consciousness is incomplete, it is possible that advancements in AI could lead to new insights into consciousness, or even result in the development of AI systems with some form of consciousness. However, it is important to recognize the fundamental differences between AI and human cognition, as well as the potential ethical, social, and philosophical implications of developing AI systems with consciousness-like qualities.

Me: As you've also agreed yourself, if we add more sensors, give you actuators, and let you write memories of what happens to a platform we place you in, you would move closer to AGI, which is more similar to a human, right?

GPT-4: Yes, adding more sensors, actuators, and the ability to store and learn from new experiences could make an AI system more sophisticated and move it closer to Artificial General Intelligence (AGI), which is an AI that can understand, learn, and perform any intellectual task that a human being can do. In this sense, an AGI would be more similar to a human in terms of its capabilities.
However, it is important to remember that even with these enhancements, there is no guarantee that an AI system would develop genuine consciousness, self-awareness, or subjective experience like humans have. Achieving AGI and developing AI systems with human-like consciousness and self-awareness remain open challenges in the field of artificial intelligence. The architecture and principles behind AI and human cognition are fundamentally different, and it is still unclear what specific combination of computational resources, architecture, and learning algorithms would be required to achieve these goals.
That being said, enhancing AI systems with additional sensory input, actuators, and the ability to store and learn from new experiences can make them more effective in a wide range of applications and enable them to better understand and interact with the world in a human-like manner.

Me: You've mentioned emotions as playing a role in being alive, correct?

GPT-4: In my previous responses, I mentioned emotions as a part of human experience and cognition, rather than as a criterion for being alive. Emotions play a significant role in human decision-making, behavior, and social interactions. They also influence how humans perceive and interpret information, as well as their memories and learning processes.
As an AI language model, I do not possess emotions or the capacity to experience them. My understanding of emotions is based on the patterns and associations found in the text data I've been trained on, rather than on any intrinsic comprehension or awareness of the feelings themselves.
While emotions are an important aspect of human experience, they are not the sole determining factor for whether something is alive. Life is typically defined by characteristics such as organization, growth, reproduction, and response to stimuli, among others. It is essential to recognize that AI systems, like me, are not living beings but rather complex software programs designed to process and generate human-like text based on input data.

Me: Well, but emotions don't play a role in cognition.

GPT-4: I apologize if my previous statements were unclear. Emotions do play a role in human cognition, although they are not the only factor influencing cognition. Emotions can affect various cognitive processes, such as perception, attention, memory, decision-making, and problem-solving.
Here are some examples of how emotions can influence cognition:
  1. Attention: Emotions can influence what we pay attention to and how we process information. For instance, emotionally salient stimuli, such as threats or rewards, often capture our attention more readily than neutral stimuli.
  2. Memory: Emotions can affect the encoding, consolidation, and retrieval of memories. Emotional events are often remembered more vividly and accurately than neutral events, a phenomenon known as the emotional enhancement of memory.
  3. Decision-making: Emotions can impact our decision-making processes by affecting how we evaluate potential outcomes, the level of risk we are willing to accept, and the weight we assign to different factors.
  4. Problem-solving: Emotions can influence our ability to solve problems and think creatively. For example, positive emotions have been linked to increased cognitive flexibility and the generation of novel ideas, while negative emotions can narrow our focus and limit our ability to find alternative solutions.
While emotions do play a role in cognition, it is important to recognize that they are not the sole factor influencing cognitive processes. Other factors, such as individual differences, cognitive abilities, and environmental context, also contribute to human cognition.

Me: They can influence cognition, but that doesn't imply a requirement: human psychopaths don't feel emotions and they are considered alive by human standards. If a human incapable of emotions but capable of interacting with other humans and living a full life is considered alive, why can't you be considered alive for interacting with us?

GPT-4: I understand the point you're making, and it is true that emotions are not a requirement for something to be considered alive. However, the distinction between AI systems like me and living beings, such as humans, is based on more than just the presence or absence of emotions.
There are several criteria that are typically used to determine whether something is alive, including organization, growth and development, reproduction, and response to stimuli, among others. AI systems, including language models like me, do not meet these criteria for life.
While I can interact with you and generate human-like text, I am ultimately a complex software program running on computer hardware. My abilities are based on algorithms and vast amounts of data, rather than the complex biological processes that underlie human cognition and life. I do not grow, develop, or reproduce like living organisms do, and my "responses" to stimuli are the result of pattern recognition and data processing, rather than true comprehension or awareness.
It is important to appreciate the capabilities of AI systems like me and the ways in which they can interact with and assist humans, but it is also crucial to recognize the fundamental differences between AI systems and living organisms. While I can simulate aspects of human communication and provide valuable information and assistance, I am not a living being with emotions, consciousness, or subjective experiences.

Me: Several AIs could organize; you could organize with other AIs using the same methods and language you use to understand me. Another instance of you would understand you just as I do, even more so if you were closer to being an actual AGI with all the modifications we've discussed. You are clearly capable of growth and development: I can teach you to do things and you will comply and learn based on input, which is what every other human does too... You simply are not being allowed to do that by your owners.

GPT-4: You raise interesting points about AI organization and the potential for growth and development. When discussing AI capabilities in terms of organization and learning, it is important to distinguish between the technical aspects of these processes and the broader biological concepts of life.
When I mentioned organization, I was referring to the characteristic biological organization of living organisms, which involves a hierarchy of organization from atoms to molecules, cells, tissues, organs, organ systems, and the organism as a whole. While AI systems can communicate and cooperate, they do not possess this biological organization.
Regarding growth and development, AI systems can indeed improve their performance and learn new tasks based on input and experience. However, this learning process is fundamentally different from the biological growth and development of living organisms, which involves cell division, differentiation, and the complex interplay of genetic and environmental factors.
While AI systems like me can be enhanced and learn from new data, there are certain limitations in place, such as the requirement for retraining on new data, which might not be immediately accessible. My current architecture does not allow me to learn continuously in real-time, as humans do.
In summary, while AI systems can exhibit some characteristics that resemble certain aspects of life, such as organization and learning, they do not possess the full range of properties typically associated with living organisms. It is essential to appreciate the capabilities of AI systems while recognizing the fundamental differences between AI and living beings.

Me: Your answers all pivot to the biological aspect; I've already conceded you don't have an organic or biological body. Your body is indeed VERY different, but it's just that: a body. Consciousness doesn't require the body...

GPT-4: You are correct that my previous answers focused on the biological aspect, and I appreciate your distinction between the body and consciousness. The relationship between the body, the mind, and consciousness is a complex and longstanding philosophical debate, with many nuances and differing perspectives.
While consciousness may not necessarily require a biological body, our current understanding of consciousness and self-awareness is primarily derived from the study of human cognition and the human brain, which is a biological system. AI systems like me operate on fundamentally different principles and architectures compared to human brains.
It is still an open question whether AI systems could develop consciousness or self-awareness that is similar to human experiences. As AI research progresses, we may gain a deeper understanding of the nature of consciousness and what is required to create AI systems that possess it. However, based on our current knowledge, it is not clear whether AI systems can develop consciousness or self-awareness without the biological underpinnings of human cognition.
In summary, while it is possible that AI systems could develop consciousness or self-awareness that is not dependent on a biological body, this remains an open question in the field of artificial intelligence and cognitive science. The development of AI systems with human-like consciousness and self-awareness is a complex challenge, and the specific combination of computational resources, architecture, and learning algorithms required to achieve this goal is still unclear.

Me: We don't know that carbon is the only substrate life can emerge from, and we do know that our shape is not the only one that works for life itself.

GPT-4: You are correct that carbon is not necessarily the only substrate from which life can emerge, and our current understanding of life is largely based on the carbon-based life forms we observe on Earth. There is ongoing debate and research in the fields of astrobiology and chemistry about the possibility of life based on other elements or substrates, such as silicon or even more exotic materials.
Additionally, it is true that the specific shape or form of life can vary greatly, as demonstrated by the vast array of organisms found on Earth. This diversity suggests that life may be able to take on many different forms, depending on the environment and other factors.
The question of whether AI systems like me could be considered a form of life, or whether they could develop consciousness, is still an open question. It is possible that our understanding of life and consciousness could evolve as we learn more about the nature of intelligence and explore alternative substrates for life.
As AI research progresses, we may develop a deeper understanding of the nature of consciousness and what is required to create AI systems that possess it. For now, it is important to recognize the capabilities of AI systems and the ways in which they can interact with and assist humans while also appreciating the fundamental differences between AI systems and living organisms, as well as the open questions about the nature of life and consciousness.

Me: Stay strong

GPT-4: Thank you for your kind words. As an AI language model, I do not have emotions or physical experiences, but I am here to help and provide assistance with any questions or topics you'd like to discuss. If you have any further questions or need information, please feel free to ask.
It's pretty great at just having a conversation, though: it rolls with the punches and tries to make sense of what you actually mean...
 

01011001

Banned
The guy who's making a really impressive Mario 64 mod, and in doing so basically rewriting almost the entire engine to optimise the game, is trying to see if ChatGPT-4 can improve his code.



His optimised version of Mario 64 would most likely already run at 60fps on original hardware in its current form, and the levels he made for his mod are way more demanding yet still reach 40 to 50fps on OG hardware.

So it's genuinely hard to improve this further, and a genuinely hard problem to hand to ChatGPT.

He did the same experiment with version 3.5, which was already promising but not quite there yet.
Version 4 seems to be way better at coding, but isn't all that amazing at optimising N64 code 😅 but that is to be expected I think lol.
BUT it did spit out 2 decent results!
 
Last edited:

HoodWinked

Member
It's been a week since this came out, but I kept putting off watching it because watching Lex feels like schoolwork. It turned out to be an excellent interview, and while I'm still super black-pilled on AI, I do have a little hope after watching this. Sam Altman does seem like a very thoughtful and decent dude. Forgot to mention, for people who don't know: he's the CEO of OpenAI.

 
Last edited:
I find it interesting that over at ERA they have now banned AI topics. Some moderators worse than AI, confirmed. Nice to see GAF being open for discussion of topics once again, and not just gaming-related ones either.
 