The Philosophy of NeoGAF [OT] • What is Artificial Intelligence? - March 2013

Status
Not open for further replies.
This is for the love of philosophy.


[Plato - important dude]

•••

TOPIC FOR MARCH 2013:

What is artificial intelligence?

I have been somewhat taken by the advent of Google Glass and the reception thereof. I've read enough Kurzweil to have a strong sense of suspicion about the technological singularity, and I have also watched the Star Trek: TNG episode "The Measure of a Man" [required viewing], which taught me that artificial intelligences are conscious and lively.

Given the congruence of these two ideas, I can't help but think that we will skip building a Data-like android. Rather, we ourselves will become androids as our integration with technology progresses.

This raises some serious ethical concerns. Will technologically affluent groups of people have an innate advantage over, and therefore superiority to, those who do not have the same access? Is this already the case with smartphones, broadband access, the ability to purchase any product at any time via systems of distribution like Amazon, etc.? Are we perpetuating, or exacerbating, levels of economic inequality thru our access to these advantages?

•••

• What is philosophy?

-"A big waste of my fucking time", you might suggest. Well, that depends upon the definition of 'of'...

If you are able to resist the urge to ingest your browser window and vomit it back at your computer, while ovulating, at the very thought of the pointless quest for knowledge, then I suggest that you read on.

Philosophy, historically, is 'the love of wisdom'. This was, of course, the ancient Greek excuse for gracefully sitting on couches and porking little boys, while shooting the shit. You could think of philosophical inquiry as a continuation of this tradition -- without the molestation, of course.

• Should you participate in this thread?

Well, are you:

1) Interested in asking questions?
2) A student of philosophy?
3) A student of science?
4) A student of anything?
5) A non-mindless participant in our culture?

If any of these apply, then go ahead and join in.

There is no initiation fee. But you must put on a Nixon mask and make three trips across a bed of hot coals to cleanse yourself of your logic-demons in order to make any sense in this thread. If you do not do this, then I relieve myself of any responsibility for what might befall you therefrom.
 
Hmm. I've sort of thought about it myself. I imagined that's where some of an extended lifespan would come from.

Is this not already the case with our hoards of life-extending drugs and medical techniques? It used to be that a flu pandemic would wipe out a huge percentage of the population. Now, we actually have vaccines that prevent entire outbreaks.

Additionally, we have a level of access to health and nutritional information via the internet that nobody in previous generations could have even imagined.
 
The lack of decent artificial intelligence in games is what has kept me from enjoying gaming for a while. While graphics and size of games have continued to improve, AI has not. Halo:CE feels the same as Halo 4 to me.

I would love to know what can actually be done with AI right now; developers just aren't allocating the CPU cycles to AI, which is why I think we are not seeing major advancements currently.
 
The lack of decent artificial intelligence in games is what has kept me from enjoying gaming for a while. While graphics and size of games have continued to improve, AI has not. Halo:CE feels the same as Halo 4 to me.

I would love to know what can actually be done with AI right now; developers just aren't allocating the CPU cycles to AI, which is why I think we are not seeing major advancements currently.

Polygons, man.
 
Maybe against my better judgment, I'd like to hope that if the affluent among us became supreme beings they'd be altruistic in a way not dissimilar to what Bill Gates is now. I mean, if you're rich, powerful, and super intelligent? If you're not extremely generous, you're a super villain. Besides, by the time we're cyborgs, I'd imagine that the standards of living around the world would have risen such that poverty may be completely incomprehensible to us now. Remember the faux-outrage that Fox News had for the high percentage of Americans on welfare who had TVs and refrigerators? I'm certainly not defending that now, but the Dickensian factory worker would probably wonder why the working poor now complain.

Imagine a world in which protests are held over the lack of genetic enhancement opportunities for the working poor. Or how those who are in the bottom 25% can't even regulate their emotions with cybernetic integration. Or something equally incomprehensible to us now. I often wonder what will make us seem simple and petty in the future.
 
Topic doesn't match thread title. F.


edit: your topic is about transhumanism and economic disparity. I thought it would be about AI like the title states :(

double edit:


The lack of decent artificial intelligence in games is what has kept me from enjoying gaming for a while. While graphics and size of games have continued to improve, AI has not. Halo:CE feels the same as Halo 4 to me.

I would love to know what can actually be done with AI right now; developers just aren't allocating the CPU cycles to AI, which is why I think we are not seeing major advancements currently.

There are advancements, plenty in the academic world. It's just going to take time to get them to a form where they can be applied to games. By this I mean out of the research stage and into well-defined and documented AI systems that allow both complex behaviours and easy content creation, and run fast enough to be implemented in a game, across multiple platforms. Academically, AFAIK, research is exploring different paradigms of AI design, such as agent-oriented systems or goal-planning systems, where the programming-language semantics reflect a more knowledge-and-event-based design, rather than the typical model: a simple finite state machine made up of case switches. Sure, in the end it compiles down to one, but it allows much more expressive design for less work.
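To make the contrast concrete, here is a minimal sketch in Python (a hypothetical toy, not from any actual engine): a hand-written case-switch FSM for an enemy, next to an agent-style design where behaviour is declared as actions with preconditions and selected at runtime.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def fsm_step(state, sees_player, in_range):
    """Classic case-switch FSM: every transition is spelled out by hand."""
    if state is State.PATROL:
        return State.CHASE if sees_player else State.PATROL
    if state is State.CHASE:
        if not sees_player:
            return State.PATROL
        return State.ATTACK if in_range else State.CHASE
    # state is State.ATTACK
    return State.ATTACK if in_range else State.CHASE

# An agent/goal-oriented design instead declares actions with preconditions;
# the agent picks whichever applicable action fits the current world state.
# Adding a behaviour means adding an entry, not rewriting the switch.
ACTIONS = [
    # (name, precondition over the world-state dict)
    ("attack", lambda w: w["sees_player"] and w["in_range"]),
    ("chase",  lambda w: w["sees_player"] and not w["in_range"]),
    ("patrol", lambda w: not w["sees_player"]),
]

def choose_action(world):
    for name, precond in ACTIONS:
        if precond(world):
            return name
    return "idle"
```

Both end up describing the same transitions, which is the "in the end it compiles to one" point: the declarative form is just a more expressive way to author them.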
 

-"A big waste of my fucking time", you might suggest. Well, that depends upon the definition of 'of'...


I'm still chuckling at this. Not to be silly but there was the TNG episode "The Measure of a Man" that incorporated the thread-topic, on a philosophic and "legal" level. I am curious what people think. Some say that AI is the next step in human evolution. I don't see it from the biological point-of-view but do see how sentient technology could encapsulate our knowledge. Unless it exists in some undisclosed bunker with no connection to the interwebs, I'll do my best to help shape benevolence in such a mind.
 
Topic doesn't match thread title. F.


edit: your topic is about transhumanism and economic disparity. I thought it would be about AI like the title states :(

Don't be discouraged. Language is difficult, and this is a broadly inclusive conversation.

How does transhumanism differ from artificial intelligence? To me, transhumanism is a stand-in for artificial intelligence, insofar as the idea of manipulating our bodies, consciousness, etc., with technology will necessarily lead to beings who are capable of accomplishing post-human feats thru their technological enhancements.

I do not feel that it is necessary to create a completely separate 'artificial intelligence', or a HAL computer, given that we are already so dependent upon, and existentially invested in, our present technology -- i.e., iPhones, iPads, internet access, Google, Wikipedia, etc.
 
I think that because intelligence is somehow proportionately linked to survival in biological entities, AI could be the biggest threat ever. Skynet, mannnn.

It raises all those questions that are perpetually suspended in science fiction, such as how much you can replace before you stop being human. There is no doubt that we will become more intrinsically linked with what at the moment is the internet. Our methods of communication and sharing of data will increase by many magnitudes, coupled with the greater automation of menial tasks.

If human communication could be viewed as part of an ever increasingly sophisticated protocol suite which is a major factor in the overall intelligence of the species, it raises many questions about the symmetrical development of an Artificial Intelligence and how safe it would be.

I think intelligence needs to struggle against an enemy or an equal intelligence to truly become an intelligence. It needs to be aware of its possible mortality or its finite existence and the factors which can affect that. It would need the ability and space to expand and adapt. The first AI could be the last AI if it decided to only reveal itself at the most opportune moment consistent with its survival *tinfoilhat.jpg*
 
I'm still chuckling at this. Not to be silly but there was the TNG episode "The Measure of a Man" that incorporated the thread-topic, on a philosophic and "legal" level. I am curious what people think. Some say that AI is the next step in human evolution. I don't see it from the biological point-of-view but do see how sentient technology could encapsulate our knowledge. Unless it exists in some undisclosed bunker with no connection to the interwebs, I'll do my best to help shape benevolence in such a mind.

Measure of a Man is cited in the OP ;D
 
Is this not already the case with our hoards of life-extending drugs and medical techniques? It used to be that a flu pandemic would wipe out a huge percentage of the population. Now, we actually have vaccines that prevent entire outbreaks.

Additionally, we have a level of access to health and nutritional information via the internet that nobody in previous generations could have even imagined.

Yes - hence only "some." :P
 
Measure of a man is cited in the OP ;D
I was taken by the quote and must have ignored my reading. Most excellent.

edit: I was probably like: philosophy, yadda-yadda ; oh! I think I have an idea... No that came from something in my periphery!



edit2: Philosophy has helped shape our understanding of who we are and why we do. I think the works of Foucault, Derrida, and a Central/South American humanist that I can't recall are worth reading.
 
I don't necessarily see eye to eye with the "but only rich people can afford it!" argument because history has shown many forms of technology eventually becoming cheap enough for more and more people to afford.

Cars, TVs, air conditioning, washing machines, microwave ovens, mobile phones, etc.
 
I was taken by the quote and must have ignored my reading. Most excellent.

edit: I was probably like: philosophy, yadda-yadda ; oh! I think I have an idea... No that came from something in my periphery!



edit2: Philosophy has helped shape our understanding of who we are and why we do. I think the works of Foucault, Derrida, and a Central/South American humanist that I can't recall are worth reading.

I'm glad to see you cite some very relevant modern philosophers! It's funny how people will often scoff at philosophy, as if it were a dying art in the face of the rationalist machine, yet our understanding of our place on Earth is often very heavily influenced by the various philosophies that are espoused in our time.

I'll add Peter Singer, Jürgen Habermas, Hans-Georg Gadamer, and Dr Cornel West to the list of modern philosophers who inspire me.

I don't necessarily see eye to eye with the "but only rich people can afford it!" argument because history has shown many forms of technology eventually becoming cheap enough for more and more people to afford.

Cars, TVs, air conditioning, washing machines, microwave ovens, mobile phones, etc.

What is a mobile phone to a starving person in a consciously-desaturated, neglected African nation?

The reality is, the struggle for sheer relevance is still primal for more than half of the world's population. We are barely getting monetary aid to extremely poor countries as it is - we send money [as if money is automatically a solution to complex problems], and we expect some sort of peace of mind.

We cannot even manage to feed our own population, and this is within the auspice of the most economically powerful and wealthy nation that has ever existed. One in six people in the United States has no security in regard to where their next meal will come from.

This is all under the presumption that technological efficiency and capitalist distribution of said efficiency is necessarily good, an idea which is extremely dubious.
 
Can we define what we mean by "intelligence"?

And in what way can it be "artificial"?

Definitions are inherently difficult. I think what you're asking for is a linguistic trap.

To strictly define words like artificial and intelligence would be antithetical to the disclosure of truth thru discourse. That is to say: We are limiting ourselves by attempting to define artificial and intelligence as descriptors, instead of defining them by means of conceptual understanding after a process of deliberation.

The point of the conversation is to blur the line between real intelligence and artificial intelligence. I have suggested that artificial intelligence is a misnomer, and that it already exists, within our reliance upon information technology and the incorporation of said technology into almost everything that we do.
 
I'll add Peter Singer, Jürgen Habermas, Hans-Georg Gadamer, and Dr Cornel West to the list of modern philosophers who inspire me.

I'll make time in the future to become more familiar with the first three (Habermas rings a bell), but West had a quote I will never forget: "Justice is what love looks like in public."
 
I'll make time in the future to become more familiar with the first three (Habermas rings a bell), but West had a quote I will never forget: "Justice is what love looks like in public."

Yea! I was lucky enough to see him speak in person a couple weeks ago. Brilliant man. Philosophically educated Christians are some of my favorite people to speak to.

When he says 'love' he is referring to the love that is embodied by the Greeks, Christians, Muslims, Buddhists, Hindus, and post modernist continental philosophers like Heidegger: A sense of belonging and being-with all other things.

That is to say, if all things are an extension of the totality of all things, then all individual things are under the auspice of universal law. Thus, justice will emerge - the treatment of all things under the totality of things treated as if they are equally sacred.

Really mind blowing stuff.
 
The next milestone in the complexity of the system (the universe). Like the creation of new elements and the genesis of biological life, it is an inevitability of entropy.
 
I'm philosophically against the "Chinese Room" thought experiment and argument. I believe human consciousness is merely an experiential effect of our level of intelligence...there's nothing unique about our particular carbon-based minds, save for their relative complexity.

The term "AI", at least as usually used in modern contexts, is far from the free-wheeling, adaptable, creative intelligence that would be required to truly pass a proper Turing test. It's still decades away. But it's certainly possible. The earliest true AIs, I believe, will be computational neural nets, modeled after our own neurological structures...but way weirder structures may, and almost certainly do, exist.
 
My favorite definition of intelligence is "intelligence is what intelligence tests measure". Arguments surrounding AI are difficult because of the ambiguity in what the terms mean - there are formal definitions of intelligence, yes, but there is a constant battle against the intuitive idea people have about what intelligence "ought to be". There are a huge number of skills which, when performed by a human, we say is a display of intelligence, but when performed by a computer that's "just computation". The problem seems to be that once a problem has been "solved", with its inner workings laid out as code on the screen, it suddenly stops seeming so intelligent. That is to say, not knowing how something works goes a long way towards it being intelligence.

One thing to keep in mind all this time is Moravec's paradox, which is that high-level reasoning requires relatively little computation, while unconscious and automatic processes take enormous computational resources and are extremely difficult to program. Yet, we consider skill at chess to be a sign of intelligence while being able to balance upright without thinking about it to not be an example of intelligence. AI researchers spend little time on the former, while they spend enormous time on the latter. Good facial recognition came a long time after chess-playing computers, after all. The things we do not consider to be requiring intelligence are the things we are best at. Our brains are extremely good at them, often having dedicated "hardware" (so to speak) for dealing with them. As soon as it is something that requires the application of conscious effort, it suddenly becomes really, really hard, we notice that it's hard, and people who are good at them seem very "intelligent".

Since AI researchers are merely human, their research generally focuses on specific tasks, or sub-problems of specific problems. They don't sit there trying to build an electronic Einstein, they try to fix up one small thing at a time. Artificial persons will, I suspect, be no more than an aggregate of innumerable small, specialized subsystems, not some large revolution in programming. I also feel that in time, we will come to the uncomfortable realization that not only are we not as special as we once believed, but that consciousness is equivalent to computation.
 
I'm philosophically against the "Chinese Room" thought experiment and argument. I believe human consciousness is merely an experiential effect of our level of intelligence...there's nothing unique about our particular carbon-based minds, save for their relative complexity.

The term "AI", at least as usually used in modern contexts, is far from the free-wheeling, adaptable, creative intelligence that would be required to truly pass a proper Turing test. It's still decades away. But it's certainly possible. The earliest true AIs, I believe, will be computational neural nets, modeled after our own neurological structures...but way weirder structures may, and almost certainly do, exist.

I think most linguists and scholars of literature would be, at the very least, skeptical of the Chinese Room experiment. I think Derrida, among others, gives a good account of the problem of authorial intent.

There is nothing unique about our minds, save for complexity. Well, there is nothing unique about the paints and canvas that make up the Mona Lisa, save for its organization and complexity. What are you actually trying to say?

If we presume that human consciousness is an experiential effect of a complex neurological system, then do we simply cease talking about it? I don't see how experiential affectations would necessitate the relegation of consciousness simply to 'a process of the brain'.

Clearly, the ability to reason and the capacities of higher levels of imagination are capable of contributing to very different means of self-expression. An epic poem is different from a squirrel hoarding an acorn. But, as I have said before, the greatest artistic architectural achievement is, to a dog: A fuck ton of concrete.

It's interesting that you refer to a Turing test, as if it is a wholly truthful determination of higher level intelligence. How is it not simply a determination of higher level intelligence, only within our understanding of higher level intelligence?

Is WATSON intelligent?
 
I think most linguists and scholars of literature would be, at the very least, skeptical of the chinese room experiment. I think Derrida, among others, gives a good account of the problem of authorial intent.

There is nothing unique about our minds, save for complexity. Well, there is nothing unique about the paints and canvas that make up the Mona Lisa, save for its organization and complexity. What are you actually trying to say?

If we presume that human consciousness is an experiential effect of a complex neurological system, then do we simply cease talking about it? I don't see how experiential affectations would necessitate the relegation of consciousness simply to 'a process of the brain'.

Clearly, the ability to reason and the capacities of higher levels of imagination are capable of contributing to very different means of self-expression. An epic poem is different from a squirrel hoarding an acorn. But, as I have said before, the greatest artistic architectural achievement is, to a dog: A fuck ton of concrete.

It's interesting that you refer to a Turing test, as if it is a wholly truthful determination of higher level intelligence. How is it not simply a determination of higher level intelligence, only within our understanding of higher level intelligence?

Is WATSON intelligent?
I don't consider WATSON "truly" intelligent. TDM has a very valid point that the word "intelligence" is extremely difficult to define, but I use the word to refer to a creature or object's ability to take in any number of stimuli and define meaningful, unplanned relationships between them. WATSON can access data, perform mathematical algorithms, and spit out an answer, but it isn't self-aware; it cannot deal with minor variations in input or output, and it cannot deduce a wholly new answer from pieces of other data, or from logic.

I'm trying to define a "this is intelligence and that isn't", but that's not *really* how I view it...I just think WATSON is way, way off.
 
I don't consider WATSON "truly" intelligent. TDM has a very valid point that the word "intelligence" is extremely difficult to define, but I use the word to refer to a creature or object's ability to take in any number of stimuli and define meaningful, unplanned relationships between them. WATSON can access data, perform mathematical algorithms, and spit out an answer, but it isn't self-aware; it cannot deal with minor variations in input or output, and it cannot deduce a wholly new answer from pieces of other data, or from logic.

I'm trying to define a "this is intelligence and that isn't", but that's not *really* how I view it...I just think WATSON is way, way off.

It's interesting how your use of language is so fundamentally derisive and euphemistic. You say things like 'input' and 'spit out' and 'meaningful' and, surprisingly, you reference logic, seemingly unaware of the gravity of the implications of that language.

I would argue that WATSON displays a grasp of formal logic that far exceeds my capability, and likely, yours too.

How do you propose that WATSON isn't self-aware? It is certainly privy to its job. It is certainly efficient at responding to stimuli and producing creative answers to complex problems that it encounters. Is it simply because it does not reflect upon its existence, in human terms? It's funny that you set the barometer for self-awareness at the point of self-reflection, but you also seem to imply that intelligence comes from the ability to solve problems of logic by creative leaps. These two ideas are contradictory.

You can't throw out a term like 'meaningful' without some sense of what that actually means. Coincidentally, your proposition of meaning is sorely lacking in meaning.

Let's think for a moment about a race of beings that is impossibly more technologically advanced than us. It is so technologically advanced that it has no need to inquire into its own existence. It is so technologically advanced that the laws of physics are absolutely trivial and open to manipulation. This race of beings would have no need for self-consciousness and self-reflection. They are, to an extreme, much more effective than us at manipulating sets of data to discover mathematical understanding that far exceeds our abilities, but they have no knowledge of what it is to be embarrassed, angry, sad, etc.

Does this make this race of beings unintelligent?

What are stimuli? Electronic signals that elicit a physical response? Because that is the truth of any electronic component in existence. I'm not sure that I understand what you mean by a meaningful, unplanned relationship between a being and its stimuli. It feels as if we are treading upon an all-too-human bias of what it means to respond to the world.

Ultimately, WATSON is an example, and obviously, a problematic one. However, your ideas about what makes an intelligent being do not appear to be consistent. They also appear to be heavily biased toward a human ideal of intelligence; which is to say: A being that displays the capacity for self-reflection and the use of creative formal logic. This is a concept, which I find, ironically inhuman. There is no mention of the capacity to love, hate, etc.
 
It's interesting how your use of language is so fundamentally derisive and euphemistic. You say things like 'input' and 'spit out' and 'meaningful' and, surprisingly, you reference logic, seemingly unaware of the gravity of the implications of that language.

I would argue that WATSON displays a grasp of formal logic that far exceeds my capability, and likely, yours too.

How do you propose that WATSON isn't self-aware? It is certainly privy to its job. It is certainly efficient at responding to stimuli and producing creative answers to complex problems that it encounters. Is it simply because it does not reflect upon its existence, in human terms? It's funny that you set the barometer for self-awareness at the point of self-reflection, but you also seem to imply that intelligence comes from the ability to solve problems of logic by creative leaps. These two ideas are contradictory.

You can't throw out a term like 'meaningful' without some sense of what that actually means. Coincidentally, your proposition of meaning is sorely lacking in meaning.

Let's think for a moment about a race of beings that is impossibly more technologically advanced than us. It is so technologically advanced that it has no need to inquire into its own existence. It is so technologically advanced that the laws of physics are absolutely trivial and open to manipulation. This race of beings would have no need for self-consciousness and self-reflection. They are, to an extreme, much more effective than us at manipulating sets of data to discover mathematical understanding that far exceeds our abilities, but they have no knowledge of what it is to be embarrassed, angry, sad, etc.

Does this make this race of beings unintelligent?

What is stimuli? Electronic signals that elicit physical response? Because that is the truth of any electronic component in existence. I'm not sure that I understand what you mean by a meaningful, unplanned relationship between a being and its stimuli. It feels as if we are treading upon an all too human bias of what it means to respond to the world.

Ultimately, WATSON is an example, and obviously, a problematic one. However, your ideas about what makes an intelligent being do not appear to be consistent. They also appear to be heavily biased toward a human ideal of intelligence; which is to say: A being that displays the capacity for self-reflection and the use of creative formal logic. This is a concept, which I find, ironically inhuman. There is no mention of the capacity to love, hate, etc.

I've enjoyed reading this thread so far, you make some interesting points. It's definitely an interesting question to ask and as others have shown it's an all too human reaction to base the measure of intelligence against ourselves.
 
It's interesting how your use of language is so fundamentally derisive and euphemistic. You say things like 'input' and 'spit out' and 'meaningful' and, surprisingly, you reference logic, seemingly unaware of the gravity of the implications of that language.

I would argue that WATSON displays a grasp of formal logic that far exceeds my capability, and likely, yours too.

How do you propose that WATSON isn't self-aware? It is certainly privy to its job. It is certainly efficient at responding to stimuli and producing creative answers to complex problems that it encounters. Is it simply because it does not reflect upon its existence, in human terms? It's funny that you set the barometer for self-awareness at the point of self-reflection, but you also seem to imply that intelligence comes from the ability to solve problems of logic by creative leaps. These two ideas are contradictory.

You can't throw out a term like 'meaningful' without some sense of what that actually means. Coincidentally, your proposition of meaning is sorely lacking in meaning.

Let's think for a moment about a race of beings that is impossibly more technologically advanced than us. It is so technologically advanced that it has no need to inquire into its own existence. It is so technologically advanced that the laws of physics are absolutely trivial and open to manipulation. This race of beings would have no need for self-consciousness and self-reflection. They are, to an extreme, much more effective than us at manipulating sets of data to discover mathematical understanding that far exceeds our abilities, but they have no knowledge of what it is to be embarrassed, angry, sad, etc.

Does this make this race of beings unintelligent?

What is stimuli? Electronic signals that elicit physical response? Because that is the truth of any electronic component in existence. I'm not sure that I understand what you mean by a meaningful, unplanned relationship between a being and its stimuli. It feels as if we are treading upon an all too human bias of what it means to respond to the world.

Ultimately, WATSON is an example, and obviously, a problematic one. However, your ideas about what makes an intelligent being do not appear to be consistent. They also appear to be heavily biased toward a human ideal of intelligence; which is to say: A being that displays the capacity for self-reflection and the use of creative formal logic. This is a concept, which I find, ironically inhuman. There is no mention of the capacity to love, hate, etc.
No need to get snippy.

WATSON is running a very advanced, but extremely strictly defined algorithm. It's basically using a natural language parser and then cross-referencing it through a massive web of predefined connections and relationships. The fact that there are more addition operations and multiply operations and the clock speed is higher does not in any way distinguish it from a computer from 1974 adding 2 and 3 together, or a physical abacus doing the same thing. There has to be something more, in my opinion. Perhaps that thing is "learning to do new things", perhaps it's "love", I don't know. It's just discussion.
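The "parse, then cross-reference through a predefined web" pipeline described above can be caricatured in a few lines of Python. This is purely illustrative -- the facts, keywords, and scoring are invented here, and Watson's actual DeepQA system is vastly richer -- but it shows why such a system, however fast, is still just a fixed algorithm:

```python
import re

# A toy "web of predefined connections": each candidate answer is tied to a
# fixed set of keywords. Nothing here is learned or created by the system.
FACTS = {
    "Paris":   {"capital", "france", "city"},
    "Everest": {"mountain", "tallest", "nepal"},
}

def answer(question):
    # "Natural language parsing", toy version: lowercase and split into words.
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Cross-reference: pick the entry whose keyword set overlaps the most.
    return max(FACTS, key=lambda k: len(FACTS[k] & words))
```

Scaling this up with millions of facts and far cleverer scoring changes the speed and coverage, but, on the view argued above, not the kind of thing it is: a lookup, not a deduction of anything wholly new.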
 