
Talk:Philosophy of artificial intelligence


What about other animals?


Personally I have a bone to pick with this intro: "Can a machine have a mind, mental states and consciousness in the same sense humans do? Can it feel?" Okay, maybe I'm not a professional philosopher, but shouldn't we say "in the same sense that sentient animals do" instead of just "in the same sense humans do"? Obviously any sentient animal has a consciousness; it's just that some of them have much less sophisticated ones, that's all. Children of the dragon (talk) 23:54, 24 April 2010 (UTC)[reply]

Discussion of ethics moved


A discussion that appeared here about the ethics of artificial intelligence has been moved to the talk page of that article.

Maudlin


The article might benefit from a discussion of Maudlin's "Olympia" argument. 1Z 00:32, 12 January 2007 (UTC)[reply]

The Real Debate.


This article should contain more discussion of the serious academic debates about the possibility/impossibility of artificial intelligence, including such critics as John Lucas, Hubert Dreyfus, Joseph Weizenbaum and Terry Winograd, and such defenders as Daniel Dennett, Marvin Minsky, Hans Moravec and Ray Kurzweil. John Searle is the only person of this caliber who is discussed.

In my view, issues derived from science fiction are far less important than these. Perhaps they should be discussed on a page about artificial intelligence in science fiction. Is there such a page? CharlesGillingham 11:02, 26 June 2007 (UTC)[reply]

Yes. -- Schaefer (talk) 12:59, 26 June 2007 (UTC)[reply]
Some text could be moved to Artificial intelligence in fiction. Critics can be listed here, but maybe a discussion of the debate belongs in Strong AI vs. Weak AI? --moxon 15:20, 12 July 2007 (UTC)[reply]

Some interesting stuff re Turing Test and Marvin Minsky and List of open problems in computer science


I cc'd this over from the Talk:Turing machine page:

> Turing's paper that prescribes his Turing Test:

Turing, A.M. (1950) "Computing Machinery and Intelligence" Mind, 59, 433-460. At http://www.loebner.net/Prizef/TuringArticle.html

"Can machines think?" Turing asks. In §6 he discusses 9 objections, then in his §7 admits he has " no convincing arguments of a positive nature to support my views." He supposes that an introduction of randomness in a learning machine. His "Contrary Views on the Main Question":

  • (1) The Theological Objection
  • (2) The "Heads in the Sand" Objection
  • (3) The Mathematical Objection
  • (4) The Argument from Consciousness
  • (5) Arguments from Various Disabilities
  • (6) Lady Lovelace's Objection
  • (7) Argument from Continuity in the Nervous System [i.e. it is not a discrete-state machine]
  • (8) The Argument from Informality of Behavior
  • (9) The Argument from Extrasensory Perception [apparently Turing believed that "the statistical evidence, at least for telepathy, is overwhelming"]

re Marvin Minsky: I was reading the above comment describing him as a supporter of AI, which I was unaware of. (The ones I do know about are Dennett and his zombies -- of "we are all zombies" fame -- and Searle.) Then I was reading Minsky's 1967 book and I saw this:

"ARTIFICAL INTELLIGENCE"
"The author considers "thinking" to be within the scope of effective computation, and wishes to warn the reader against subtly defective arguments that suggest that the difference beween minds and machines can solve the unsolvable. There is no evidence for this. In fact, there couldn't be -- how could you decide whether a given (physical) machine computes a noncomputable number? Feigenbaum and Feldman [1963] is a collection of source papers in the field of programming computers that behave intelligently." (Minsky 1967:299)
  • Marvin Minsky, 1967, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., Englewood Cliffs, N.J. ISBN: none. Library of Congress Card No. 67-12342.

I have so many wiki-projects going that I shouldn't undertake anything here. I'll add stuff here as I run into it. (My interest is "consciousness" as opposed to "AI" which I think is a separable topic.) But on the other hand, I have something going on at the List of open problems in computer science article (see the talk page) -- I'd like to enter "Artificial Intelligence" into the article ... any help there would be appreciated. wvbaileyWvbailey 02:28, 2 October 2007 (UTC)[reply]

Here it is, as far as I got:

Source:

In the article "Prospects for Mathematical Logic in the Twenty-First Century", Sam Buss suggests a "three-fold view of proof theory" (his Table 1, p. 9) that includes in column 1, "Constructive analysis of second-order and stronger theories", in column 2, "Central problem is P vs. NP and related questions", and in column 3, "Central problem is the "AI" problem of developing "true" artificial intelligence" (Buss, Kechris, Pillay, Shore 2000:4).

"I wish to avoid philosophical issues about consciousness, self-awareness and what it means to have a soul, etc., and instead seek a purely operational approach to articial intelligence. Thus, I define artificial intelligence as being constructed systems which can reason and interact both syntactically and semantically. To stress the last word in the last sentence, I mean that a true artifical intelligence system should be able to take the meaning of statements into account, or at least act as if it takes the meaning into account." (Buss on p. 4-5)

He goes on to mention the use of neural nets (i.e. analog-like computation that seems not to use logic -- I don't agree with him here: logic is used in the simulations of neural nets -- but that's the point -- this stuff is open). Moreover, I am not sure that Buss can eliminate "consciousness" from the discussion. Or is consciousness a necessary ingredient for an AI?
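(A small, hypothetical illustration of the simulation point above -- not anything from Buss, just a sketch in Python: a single artificial neuron is simulated by ordinary arithmetic plus a threshold comparison, i.e. by the same rule-governed computation a conventional program uses for everything else.)

    # Hypothetical sketch: simulating one artificial neuron (a perceptron unit).
    # The "analog-like" behaviour comes entirely from ordinary arithmetic and a
    # comparison -- the simulation is rule-governed computation through and through.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 if total > 0 else 0

    # Wired with these (illustrative) weights the unit behaves like logical AND:
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, neuron([a, b], [1.0, 1.0], -1.5))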

Description:

Mary Shelley's Frankenstein and some of the stories of Edgar Allan Poe (e.g. The Tell-Tale Heart) opened the question. Also Lady Lovelace [??] Since the 1950s the use of the Turing Test has been a measure of success or failure of a purported AI. But is this a fair test? [quote here?] (Turing, Alan, 1950, Computing Machinery and Intelligence, Mind, 59, 433-460. http://www.loebner.net/Prizef/TuringArticle.html)

A problem statement requires both a definition of "intelligence" and a decision as to whether it is necessary to, and if so how much to, fold "consciousness" into the debate.

> Philosophers of Mind call an intelligence without a mind a zombie (cf Dennett, Daniel 1991, Consciousness Explained, Little, Brown and Company, Boston, ISBN 0-316-180066 (pb)):

"A philospher's zombie, you will recall, is behaviorally indistinguishable from a normal human being, but is not conscious. There is nothing it is like to be a zombie; it just seems that way to observers (including itself, as we saw in the previous chapter)". (italics added or emphasis) (Dennett loc cit:405)

Can an artificial, mindless zombie be truly an AI? No says Searle:

"Information processing is typically in the mind of an observer . . . the addition in the calculator is not intrinsic to the circuit, the addition in me is intrinsic to my mental life.... Therefore, we cannot explain the notion of consciouness in terms of information processing and symbol manipulations" (Searle 2002:34). "Nothing is intrinsically computational. computation exists only relative to some agent or observer who imposes a computational interpretation on some phenomenon" (Searle 2002:17).

Yes says Dennett:

"There is another way to address the possibility of zombies, and in some regards I think it is more satisfying. Are zombies possible? They're not just possible, they're actual. We're all zombies [Footnote 6 warns not to quote out of context here]. Nobody is conscious -- not in the systematically mysterious way that supports such doctrines as epiphenomenalism!"((Dennett 1991:406)

> Gandy 1980 throws around the word "free will". For him it seems an undefined concept, interpreted by some (Sieg?) to mean something on the order of "randomness put to work in an effectively-infinite computational environment", as opposed to "deterministic" or "nondeterministic", both in a finite computational environment (e.g. a computer).

> Gödel's quote: "...the term "finite procedure" ... is understood to mean "mechanical procedure" ... [the] concept of a formal system whose essence it is that reasoning is completely replaced by mechanical operations on formulas ... [but] the results mentioned in this postscript do not establish any bounds for the powers of human reason, but rather for the potentialities of pure formalism in mathematics." (Gödel 1964 in Undecidable:72)

Importance:

> AC (artificial consciousness, an AI with a feeling mind) would be no less than an upheaval in human affairs

> AI as helper or scourge or both (robot warriors)

> Philosophy: the nature of "man", "man versus machine", how would man's world change with AIs (robots)? Will it be good or an evil act to create a conscious AI? What will it be like to be an AI? (cf Nagel, Thomas 1974, What Is It Like to Be a Bat?, from Philosophical Review 83:435-50. Reprinted on p. 219ff in Chalmers, David J. 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York, ISBN 0-19-514581-X.)

> Law: If conscious, does the AI have rights? What would be those rights?

Current Conjecture:

An AI is feasible/possible and will appear within this century.

This outline is just throwing down stuff. Suggestions are welcome. wvbaileyWvbailey 16:13, 6 September 2007 (UTC)[reply]

cc'd from Talk:List of open problems in computer science. wvbailey Wvbailey 02:41, 2 October 2007 (UTC)[reply]

The role of randomness in AI


Some years ago (late 1990s?) I attended a lecture given by Dennett at Dartmouth. I was hoping for a lecture re "consciousness" but got one re "the role of randomness" in creative thought (i.e. mathematical proofs, for instance). I know that Dennett wrote something in a book re this issue (he was testing his arguments in his lecture) -- he talked about "skyhooks" that lift a mind up by its bootstraps -- but I haven't read the book (I'm not crazy about Dennett), just seen this analogy recently in some paper or other.

The problem of "minds", of "a consciousness" vs "an artificial intelligence" what do the words mean?


In your article you may want to consider "forking" "the problem" into sub-problems. And try to carefully define the words (or suggest that even the definitions and boundaries are blurred).

I've spent a fair amount of time studying consciousness (C). My conclusion is this --

Consciousness is sufficient for an AI, but consciousness is not necessary for an AI.

Relative to "AI" this is not off-topic, although some naifs may think so. Proof: Given you accept the premise "Consciousness is sufficient for an AI", when an "artifical consciousness" is in place, then the "intelligence" part is assured.

In other words, "diminished minds" that are not C but are highly "intelligent" are possible (expert systems come to mind, or machines with a ton of sensors that monitor their own motions -- Japanese robots, cars that self-navigate in the desert-test). There may be an entire scale of "intelligences" from thermostats (there's actually a chapter in a book titled "what's it like to be a thermostat?") up to robot cars that are not C. In these cases, I see no moral issues. But suppose we accidentally create a C or are even now creating Cs and don't know it, or cruelly creating C's for the shear sadistic pleasure of it (AI: "Please please I beg you, don't turn me off!" Click Us: "Ooh that was fun, let's do it again..." )-- that where the moral issues lurk. Where I arrived in my studies, (finally, after what, 5 years?) is that the problem of consciousness revolves around an explanation for the ontological (i.e. experienced from the inside-out) nature of "being" and "experiencing" (e.g knowing what it's like to experience the gradations of Life-Savor candy-flavors) -- what it's like to be a bat, what it's like to be an AI. Is it like anything at all? All that we as mathematicians, scientists and philosphers know for certain about the experiential component of being/existence is this: We know "what it's like" to be human (we are they). We suspect primates and some of our pets -- dogs and cats -- are conscious to a degree, but we don't have the foggiest sense of what it is like to be they.

Another way of stating the question: Is it possible for an AI zombie to go through its motions and still be an effective AI? Or does it need a degree of consciousness (and what do the words "degree of consciousness" mean)?

If anyone wants a bibliography on "mind" lemme know, I could assemble one here. The following is a really excellent collection of original-source papers (bear in mind that these are slanted toward C, not AI). The book cost me $45, but is worth every penny:

David J. Chalmers (ed.) 2002, Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, New York, ISBN 0-19-514581-X (pbk. :alk. paper). Includes 63 papers by approx 60 authors, including "What is it like to be a bat" by Thomas Nagel, and "Can Computers Think" by John R. Searle.

Bill Wvbailey 15:24, 10 October 2007 (UTC)[reply]

Since you bring up the word consciousness, I've added it to the top of the article, because it's basically the same idea as having a mind and mental states. (This article will bog down in confusion if we distinguish "having a mind" from "being conscious.") I'll use the word in the section on Searle's Strong AI as well, when I finish it.
Is it clear from the structure of the article that there are three separate issues?
  1. Can a machine (or symbol processing system) demonstrate general intelligence? (The basic premise/physical symbol systems hypothesis)
  2. Is human intelligence due to a machine (or symbol processing system)? (Mechanism/computationalism)
  3. Can a machine (or symbol processing system) have a mind, consciousness, and mental states? (Searle's STRONG AI)
I've written the first, haven't touched the second and have started the third.
The issue you bring up "Is consciousness necessary for general intelligence?" is interesting, and I suppose I could find a place for it. It's an issue that no one, to my knowledge, has addressed directly -- I'm not aware of any argument that you need consciousness or mental states to display intelligence. (Perhaps this is why some find Searle's arguments so frustrating -- he doesn't directly say that you can't have intelligent machines, just that your intelligent machines aren't "conscious". He doesn't commit himself.)
(While we're sharing our two cents, my own (speculative) opinion would be this: "consciousness" is a method used by human brains to focus our attention onto a particular thought. It's the way the brain directs most of its visual, verbal and other sub-units to work together on a single problem. It evolved from the "attention" system that our visual cortex developed to attend to certain objects in the visual field. It is an optimization method that brains use to make efficient use of their limited resources. As such, it is neither necessary nor sufficient for general intelligent action.) ---- CharlesGillingham 17:35, 10 October 2007 (UTC)[reply]
Totally agree, see my next. It may feel like "just two cents" but I believe you've hit on the definition of "intelligence".
I was just thinking this over, and I realized that "consciousness" is what Dreyfus is talking about with his "background". The question of whether consciousness is necessary for intelligent machines falls under the first question (Can a machine demonstrate intelligence?) Dreyfus (and Moravec and situated AI) answer that "not unless it has this specific form of "situated" or "embodied" background knowledge." This background knowledge provides "meaning" and allows the mental state of "understanding" and we experience this as "consciousness." (I would need to find a source that says this). More later. ---- CharlesGillingham 18:56, 10 October 2007 (UTC)[reply]

Woops: circular-definition alert: The link intelligence says that it is a property of mind. I disagree, and so does my dictionary. "IF (consciousness ≡ mind) THEN intelligence", i.e. "intelligence" is a property or a by-product of "consciousness ≡ mind". Given this implication, "mind ≡ consciousness" forces the outcome. We have no opportunity to study "intelligence" without the bogeyman of "consciousness ≡ mind" looking over our shoulder. Ick...

So I pull my trusty Merriam-Webster's 9th Collegiate dictionary and read: "intelligence: (1) The ability to learn or understand or deal with new and trying situations". I look at "Intelligent; fr L intellegere to understand, fr. inter + legere to gather, to select."

There's nothing here at all about consciousness.

When I first delved into the notion of "consciousness" I began with an etymological tree with "conscious" at the top. The first production, you might say, was "aware" from "wary", as in "observant but cautious." Since then, I've never been quite able to disabuse myself of the notion that that is the key element in, if not "consciousness", then "intelligence" -- i.e. focused attention. Indeed, above, you say the same thing, exactly. Example: I can build a state machine out of real discrete parts (done it a number of times, in fact) using digital and analog input-selectors driving a state machine, so that the machine can "turn its attention toward" various aspects of what it is testing or monitoring. I've done this also with micros, and with spreadsheet modelling. I would argue that such machines are "aware" in the sense of "focused", "gathering", "selecting". Therefore (per the dictionary's definition and mine) the state machines have rudimentary intelligence. Period. The cars in the desert auto-race, and the Mars robots, are quite intelligent (iff they are behaving autonomously; all are criteria: "selective, focused attention" (attendere, fr. L. "holding"), "autonomous" and "behavior").
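(A hypothetical sketch of the kind of input-selecting state machine described above; the sensor names, thresholds and transition rules are invented for illustration, not a description of any actual device. The point is only that "which input the machine attends to" is itself a function of its state.)

    # Hypothetical sketch: a state machine whose state determines which input
    # it "attends to". Sensors, thresholds and transitions are illustrative only.
    RULES = {
        # state: (sensor attended to, threshold, next state if threshold exceeded)
        "watch_temperature": ("temperature", 80.0, "watch_vibration"),
        "watch_vibration":   ("vibration",    5.0, "watch_temperature"),
    }

    def step(state, sensors):
        # Read only the sensor the current state selects, then transition.
        sensor, limit, next_state = RULES[state]
        return next_state if sensors[sensor] > limit else state

    state = "watch_temperature"
    for readings in [{"temperature": 72.0, "vibration": 1.0},
                     {"temperature": 85.0, "vibration": 1.0},
                     {"temperature": 85.0, "vibration": 6.0}]:
        state = step(state, readings)
        print(state)   # watch_temperature, then watch_vibration, then watch_temperature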

"Awareness" versus "consciousness": My son the evolutionary biologist believes consciousness just "happens" when the right stuff is in place. I am agnostic. On one day I agree with Dennett that we're zombies, the next day I agree with Searle that something special about wet grey goo causes consciousness, the next day I agree with my son, the 4th day I agree with none of the above. I share your frustration with Searle, Searle just says no, no, no, but never produces a firm suggestion. But it was only after a very careful read of Dennett that I found his "zombie" assertion in "Consciousness Explained".

Self-awareness: What the intelligent machines I defined above lack is self-awareness. Does it make sense to have a state machine monitor itself to "know" that it has changed state? Or know that it knows that it is aware? Or is C a kind of damped reverberation of "knowing that it knows that it knows", with "a mystery" producing the "consciousness", as experienced by its owner? Does garlic taste different from lemon because if they tasted the same we could not discriminate them? There we go again: distinguishing -- di-stinguere as I recall -- to pick apart. Unlike the Terminator, we don't have little digital readouts behind our eyeballs.

To summarize: you're on the right track but I suggest that the definitions that you're working from -- your premises in effect -- have to be (i) clearly stated, not linked to flawed definitions, but rather stated explicitly in the article and derived from quoted sources, and (ii) effective (i.e. successful, good, agreeable) for your presentation. Bill Wvbailey 22:09, 10 October 2007 (UTC)[reply]

I think you're right. I hope that I'm working from the standard philosophical definitions of words like consciousness, mind, and mental states. But, of course, there are other definitions floating around -- for example, some new age writers use the word "consciousness" as a synonym for "soul" or "élan vital". I'll try to add something that brings "consciousness" and "mind" back down to earth. They're really not that mysterious -- everybody knows that they have "thoughts in their head" and what that feels like. It's an experience we all share. Explaining how we have "thoughts in our head" is where the real philosophical/neurological/cognitive science mystery is. ---- CharlesGillingham 00:14, 12 October 2007 (UTC)[reply]

-- I removed the reference to 'patterns of neurons' from this section, as only a small section of the philosophical community (materialists) would be happy to state categorically that thoughts are patterns of neurons. 10/9/10

Interesting article, possible references


http://www.msnbc.msn.com/id/21271545/

Bill Wvbailey 16:44, 13 October 2007 (UTC)[reply]

I think this might be useful for the ethics of artificial intelligence. (Which is completely disorganized at this point.) ---- CharlesGillingham 18:49, 27 October 2007 (UTC)[reply]

Plan


I've added some information on how Searle is using the word "consciousness" and a one paragraph introduction to the idea. I've also added a section raising the issue of whether consciousness is even necessary for general intelligence. I think these issues must be discussed here, rather than anywhere else.

The article is now too long, so I plan to move some of this material out of here and into Chinese Room, Turing Test, physical symbol system hypothesis and so on. ---- CharlesGillingham 18:49, 27 October 2007 (UTC)[reply]

Rewrite is more or less complete

I think this article should be fairly stable now. ---- CharlesGillingham 19:44, 8 November 2007 (UTC)[reply]

Thermostats are not intelligent


'By this definition, even a thermostat has a rudimentary intelligence.'

That should be changed. It is referring to a quote that states an agent acts based on past experience. Thermostats do not. —Preceding unsigned comment added by 192.88.212.43 (talk) 20:08, 8 July 2008 (UTC)[reply]

There may be a confounding of issues here. A subchapter of David Chalmers's book The Conscious Mind (1996, Oxford University Press) is titled "What's it like to be a thermostat?" This is meant to echo Thomas Nagel's famous paper "What's it like to be a bat?" -- and perhaps serve as a reductio ad absurdum foil of/for panpsychism (even rocks may be conscious). The argument is: while we might agree that "there is indeed something it is like to be a bat", not too many would impute consciousness to the thermostat (i.e. a private experiencing, by the thermostat itself, of "what it is like to be a thermostat" when it switches state). These particular philosophic questions have to do with the nature and source of consciousness as opposed to e.g. "machine intelligence". There is little or no argument for, or evidence that, just because something is "artificially intelligent" (e.g. a neural network) it is therefore conscious.
"What is it like to be a thermostat?
"To focus the picture, let us consider an information-processing system that is almost maximally simple: a thermostat..." (Chalmers 1996:393)
"Whither panpsychism?"
"If there is experience associated with thermostats, there is probably experience everywhere: where there is a causal interation, there is information, and wherever there is information there is experience. One can find infomation states in a rock [etc]...." (Chalmers 1996:297)
Bill Wvbailey (talk) 18:06, 9 July 2008 (UTC)[reply]
The thermostat, as Bill points out, is a common example in the literature. John Searle mentions thermostats as well in the Chinese Room paper, arguing that AI and computationalism have failed to explain the difference between a thermostat and a brain. John McCarthy (computer scientist) (the founder of modern AI research) also used the example of a thermostat, writing "Machines as simple as thermostats can be said to have beliefs", as in "I believe it's too hot in here."
A thermostat is an example of something that is rational but not conscious. It has a goal and it acts to achieve it, making it rational. It has perceptions and it carries out actions in the world, which makes it an agent. However, few would claim that it is intelligent, and no one (except a die-hard contrarian) would argue that it is conscious.
Russell & Norvig's definition of an "intelligent agent" emphasizes its rationality. A traditional criticism of rationality as a definition of intelligence is to bring up the thermostat. You're right that a thermostat has very little memory to speak of, but it does remember the temperature you asked it to. Hopefully, that one byte of memory is enough to satisfy Russell & Norvig's definition. ---- CharlesGillingham (talk) 05:34, 8 August 2009 (UTC)[reply]
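(For concreteness, a minimal, hypothetical sketch of the thermostat-as-agent example -- not Russell & Norvig's code, just an illustration: one remembered setpoint, a percept, and an action chosen to further the goal. Rational in that thin sense, but nothing anyone would call conscious.)

    # Hypothetical sketch: a thermostat as a minimal rational agent.
    # Its entire "memory" is the setpoint it was asked to hold.
    class Thermostat:
        def __init__(self, setpoint):
            self.setpoint = setpoint

        def act(self, sensed_temperature):
            # Choose the action that moves the room toward the goal.
            if sensed_temperature < self.setpoint - 0.5:
                return "heat on"
            if sensed_temperature > self.setpoint + 0.5:
                return "heat off"
            return "do nothing"

    t = Thermostat(setpoint=20.0)
    print(t.act(18.0))   # heat on
    print(t.act(22.0))   # heat off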

I added a reference and changed the wording to add "and consciousness". There's a philosophy of mind called panpsychism which attributes "mind" (aka consciousness, conscious experience) to just about anything you can imagine (even rocks). (I can't find my McGinn, but he's another panpsychist.) Recently I've run into another book espousing what is essentially the same thing -- neutral monism -- the idea that "allows for the reality of both the physical and mental worlds" (Gluck 2007:7), but at the same time "reality is one substance" (Gluck 2007:12):

Andrew Gluck 2007, Damasio's Error and Descartes' Truth: An Inquiry into Epistemology, Metaphysics, and Consciousness, University of Scranton Press, Scranton PA, ISBN 978-1-58966-127-1 (pb).

Gluck's work is derivative from Bertrand Russell 1921. And Russell is derivative from William James and some "American Realists". In particular, the word "neutral" in "neutral monism" derives from Perry and Holt (see quote below); Russell in turn quotes William James's essay "Does 'consciousness' exist?":

"Jame's view is that the raw material out of which the world is built up is not of two sorts, one matter and the other mind, but that it is arranged in different patterns by its inter-relations, and that some arrangements may be called mental, while others may be called physical." (p. 10).

And Russell observes that:

"the American realists . . . Professor R. B. Perry of Harvard and Mr. Edwin B. Holt . . . have derived a strong impulsion from James, but have more intererst than he had in logic and mathematics and the abstract part of philosophy. They speak of "neutral" entities as the stuff out of which both mind and matter are constructed. Thus Holt says: '... perhaps the least dangerous name is neutral-stuff.'" (p. 10-11).

Russell goes on to agree with James:

"My own belief -- for which the reasons will appear in subsequent lectures -- is that James is right in rejecting consciousness as an entity, and that the American realists are partly right, though not wholly, in considering that both mind and matter are composed of a neutral-stuff which, in isolation is neither mental nor material." (p. 11)
Bertrand Russell (1921) The Analysis of Mind, republished 2005 by Dover Publications, Inc., Mineola, NY, ISBN: 0-486-44551-8 (pbk.)

To sum it up, neutral monists and panpsychists regard the universe as very mysterious, and they take seriously the question of whether or not thermostats have a rudimentary "intelligence" and/or "consciousness". Bill Wvbailey (talk) 17:18, 24 August 2009 (UTC)[reply]

Lucas section


From the article:

In 1931, Kurt Gödel proved that it is always possible to create statements that a formal system (such as an AI program) could not prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". [...] Consider: Lucas can't assert the truth of this statement. This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.

This is baffling. For one thing, using the dictionary definition for "assert", Lucas can obviously assert anything he pleases. Whatever could the intended meaning be here? Besides, for there to be "the same limits", shouldn't the word be "prove" as in the first sentence, not "assert"? Also, what is the standard of proof for the AI program, and what is it for Lucas? It seems to me that the standard is "please convince yourself of the fact", unless there is some external formal system (such as ZFC) in which the proof is to be given. Is it tacitly assumed that the program is consistent, but Lucas may not be? (humans usually are not!) In any case, there's a lot of room for clarification here. -- Coffee2theorems (talk) 19:14, 19 November 2011 (UTC)[reply]
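(For readers trying to keep the quantifiers straight, here is the usual shape of the formal result the passage leans on -- a sketch of the standard formulation, not a quotation from Lucas or from the article.)

    % Sketch of the standard formulation (assumes a consistent, effectively
    % axiomatized theory F that interprets arithmetic). The diagonal lemma
    % yields a sentence G_F that "says of itself" that it is unprovable in F:
    \[
      F \vdash G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),
      \qquad
      \mathrm{Con}(F) \;\Rightarrow\; F \not\vdash G_F .
    \]
    % Lucas's claim is that a human, granting Con(F), can "see" that G_F is true.
    % The anti-Lucas sentence in the passage swaps F for Lucas and provability
    % for consistent assertability: if Lucas asserted it, he would be asserting
    % something false of himself, so it is true but not consistently assertable
    % by him -- which is the parallel "limit" the paragraph gestures at.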


A way to build feelings into computers


Please see http://feelingcomputers.blogspot.com InsectIntelligence (talk) 08:16, 3 June 2012 (UTC)[reply]

The link is pointing to a blog. Which article on the blog specifically do you mean? LS (talk) 08:26, 19 July 2019 (UTC)[reply]

Can Machines Have Emotions: Artificial Emotion


This entire section of the article seems to be derived from a single person's opinions. I'm new, so I hesitate to simply delete that portion, but it bothers me in its paranoid incorrectness. ---- Zale12 (talk) 22:21, 12 November 2012 (UTC)[reply]

I counted three different references, plus a few tags asking for sourcing. That's the way wikipedia works. The constructive response to this apparent "thinness" of sourcing is for you to make improvements (e.g. sourcing, corrections to errors etc together with sourcing) rather than tear it apart. (Besides we'd revert your edits, anyway, so you'd be wasting everyone's time). Said another way, if you can't make constructive improvements, then don't do anything at all; leave it for other editors who can fix the problems. Your goal should be contributor, not critic. ---- BillWvbailey (talk) 02:02, 13 November 2012 (UTC)[reply]
It appears to have been changed from the version I saw. (I was talking about the 3rd through 5th paragraphs regardless, which cite no sources.) I decided to point out that it clashed noticeably with the style of writing I normally see in Wikipedia and decided to bring it to the knowledge of people who would know what to do, rather than breaking some rule inadvertently. Rest assured, I have no intention of making any sort of nonconstructive edits. Or indeed, as of now, any edits at all. ---- Zale12 (talk) 19:33, 18 November 2012 (UTC)[reply]

Since when did philosophy mean expressing my opinion on a topic I cannot be bothered learning about? I cannot see any philosophy in this article. ---- Alnpete (talk) 13:04, 1 December 2012 (UTC)[reply]

This is a subfield of the philosophy of mind and has been discussed by most of the major philosophers, such as Daniel Dennett, John Searle, Hilary Putnam, and Jerry Fodor. ---- CharlesGillingham (talk) 07:43, 19 January 2013 (UTC)[reply]

Perhaps this would be useful: https://mitpress.mit.edu/books/soar-cognitive-architecture - "The current version of Soar features major extensions, adding reinforcement learning, semantic memory, episodic memory, mental imagery, and an appraisal-based model of emotion." The relevant chapter goes in depth. I'm not comfortable editing the article, but it's a source that seems relevant, and contrary to the very dubious and sourceless paragraphs currently there. If desired, I could attempt an edit based on this source, but I don't want to. I'll probably mess things up. At the very least, that chapter's sources might touch on philosophy. ---- — Preceding unsigned comment added by 67.194.194.127 (talk) 15:41, 16 August 2013 (UTC)[reply]

A computer can never surpass a brain.


This article discusses whether it's possible that some day, a computer will be able to do any task that a human would be able to do and even exceed a human brain. There's no need for all that very complex analysis about how computers and brains work to determine if it will ever be possible to replace people with computers for all tasks we wish to do that for. Rather, it can very simply be proven mathematically that the answer is no. For instance, Cantor's diagonal argument gives a method of defining, for every sequence of real numbers, a real number that does not appear in the sequence, but we can think of a way to count all computable numbers. Thus we can use Cantor's diagonal argument to think of and write down a non-computable number in terms of that sequence of all computable numbers, provided that we have an infinite amount of memory. The way Cantor's diagonal argument works is: take the first digit after the decimal in base 2 and use the opposite digit as the first decimal digit of the new number, take the opposite of the second decimal digit of the second number and use it as the second decimal digit of the new number, and so on. A computer, on the other hand, with only a finite list of instructions and an infinite amount of resources, can't generate such a number given by a decimal fraction in base 2 because it's not computable. Computers are better than us for certain tasks like fast mathematical computations of large numbers. Maybe some day, the very opposite of what was discussed will happen, that a human will be able to do everything a computer can do by using fancy pants shortcuts for proofs. For instance, if a computer was asked whether the second smallest factor of (2^(2^14))+1 ≡ 1 or 3 mod 4, it would solve it by doing a lot of fast computations, but a human on the other hand could solve it instantly by using Fermat's little theorem to prove that the second smallest factor ≡ 1 mod 2^15. Blackbombchu (talk) 23:18, 31 July 2013 (UTC)[reply]
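(A small, hypothetical illustration of the construction being discussed -- not from any of the sources here: against a finite table of binary expansions, the diagonal "flip" is an ordinary computation; the uncomputable step is producing the infinite table of all computable numbers in the first place.)

    # Hypothetical sketch: Cantor's diagonal rule applied to a finite table.
    # expansions[i] is a list of binary digits of the i-th number; the diagonal
    # number flips the i-th digit of the i-th row, so it differs from every row.
    def diagonal(expansions):
        return [1 - expansions[i][i] for i in range(len(expansions))]

    sample = [
        [0, 1, 1, 0],   # 0.0110...
        [1, 1, 0, 0],   # 0.1100...
        [0, 0, 0, 1],   # 0.0001...
        [1, 0, 1, 1],   # 0.1011...
    ]
    print(diagonal(sample))   # [1, 0, 1, 0] -> differs from every listed row

    # The hard part is not the flipping but the table: enumerating *all*
    # computable numbers digit-by-digit means deciding, for each program,
    # whether it ever produces an i-th digit -- an instance of the halting
    # problem, which neither a computer nor an unaided human can complete.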

(1) The process you describe can be easily carried out by a computer or a human being. Neither will ever finish, and this is why the number is uncomputable, whether by machines or people.
(2) An automated theorem prover can use Fermat's little theorem in exactly the same way. Commercial software such as Mathematica can also find results using symbolic mathematics. (A small numeric check of the Fermat-number fact is sketched at the end of this thread.)
(3) This page is for discussing improvements to the article. Not for discussing the subject of the article. Any contribution to the article must come from a WP:reliable source, and may not be WP:original research. ---- CharlesGillingham (talk) 07:38, 19 January 2013 (UTC)[reply]
Lol already at the title of this discussion. I like the fact that you put it so simply when it has been the topic of debate over and over again in the past. And as the article points out, if we succeed in simulating the human brain, we will most certainly be able to compute with a computer what the brain is able to compute, simply by copying its process.
On the other hand, though, as CharlesGillingham points out, this page is for discussing improvements to the article and not to discuss the subject itself. Besides, it is a bit unclear what you want to achieve with this discussion. Are you trying to see if it is possible to convince people about what you write, or are you in fact trying to wrap your head around the topic and want to have a discussion about it in order to learn something for yourself? (The latter is always the reason you should have a discussion; the former seldom leads anywhere.) —Kri (talk) 12:38, 23 March 2014 (UTC)[reply]
@CharlesGillingham: I didn't understand your argument at all about why I was wrong in the first of the 3 points. I later figured out my mistake on my own. I think you could have made your answer a lot clearer by saying that the problem with that argument is that there is no strategy for determining whether an algorithm generates a number at all. Blackbombchu (talk) 23:23, 21 August 2014 (UTC)[reply]
Neither computers nor humans have an infinite amount of resources, so any argument which grants an infinite amount of resources to either party is not discussing real-world computers or humans. ---- CharlesGillingham (talk) 00:24, 28 August 2014 (UTC)[reply]
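(The small numeric check referred to above -- an illustrative sketch, assuming sympy is available, and using the smaller Fermat number F_5 rather than F_14, whose full factorization is not known. The classical result, proved via Fermat's little theorem and strengthened by Lucas, is that every prime factor of F_n = 2^(2^n) + 1 with n ≥ 2 is congruent to 1 modulo 2^(n+2).)

    # Hypothetical sketch: check the congruence for the prime factors of F_5.
    from sympy import factorint

    n = 5
    F = 2**(2**n) + 1                 # F_5 = 4294967297 = 641 * 6700417
    for p in factorint(F):
        print(p, p % 2**(n + 2))      # prints residue 1 for both prime factors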

Penrose's argument


I've corrected the statement about quantum mechanical processes. His argument is that there must be new physics going beyond ordinary quantum mechanics, because that's the only way to get non-computable processes -- some new physics that we don't yet have. Of course he is also a physicist himself, working on ideas for quantum gravity.

The rebuttal of his arguments that follows is extremely poor. Penrose and Lucas's arguments are not countered by such simple statements as this. It is a long and complex debate, which I think would be rather hard to summarize in a short space like this. I'm not sure what to do about it though. It would be a major task to summarize the argument in detail, which now spans many books and papers. And a short summary would do it an injustice. There are many papers that claim to disprove his arguments. But in philosophy, when you have a paper by one philosopher which disproves the arguments of another philosopher, this is part of an ongoing debate, and it doesn't mean that these arguments are disproved in some absolute sense -- unless their opponent concedes defeat. So one shouldn't present these counter-arguments as if they are in some absolute sense the generally accepted truth, as this passage does -- it presents it as if Penrose's ideas have been disproved.

Obviously Penrose for one does not accept the counter-arguments, and has his own counter-counter-arguments in his later books such as Shadows of the Mind, and so the debate continues. His is a minority view on AI, sure, but in philosophy you get many interesting minority views. E.g. direct realism, to take an example, has only about one well-known, famous, mainstream philosopher who holds to it, Hilary Putnam, but that is enough for it to be a respected philosophical view. Robert Walker (talk) 12:37, 16 October 2014 (UTC)[reply]

I've separated it out into a new section "Counter arguments to Lucas, and Penrose" to make it clear that these are arguments that are not accepted by Lucas and Penrose themselves and are not objective arguments that everyone has to accept. This section however is very poor, as I said.

There are much better arguments against their position, and also of course counter-arguments in support of their position. Penrose and Lucas argued that humans can see the truth of Gödel's sentence, not of the Liar paradox statement. So it is hardly a knock-down counterexample to point out that they can't assert paradoxical statements. Robert Walker (talk) 16:11, 16 October 2014 (UTC)[reply]

I've added a link to Beyond the Doubting of a Shadow to the head of the section, and a short intro sentence to make it clear that there are many more arguments not summarized here, along with replies to those arguments as well. Someone could make a go at summarizing those arguments and counter-arguments perhaps. Robert Walker (talk) 16:22, 16 October 2014 (UTC)[reply]

I've now expanded both sections. Just the briefest summary of some of the main points in the debate, but hopefully it will help. It includes McCullough's argument, and Penrose's reply to it -- the Hofstadter quote of course precedes Penrose's book. Robert Walker (talk) 13:37, 19 October 2014 (UTC)[reply]

Whether artificial intelligence will develop enough to subordinate the human race


Will it happen? Nn9888 (talk) 13:21, 14 July 2015 (UTC)[reply]

Many well-known theorists in this topic have devoted their entire careers to their belief that the answer is yes.

My gut feeling tells me that in order to do so, artificial intelligence would need to be able to deceive the human race, just as humans deceive each other.

My gut feeling tells me that this would require that the artificial intelligence be conscious, i.e. self-aware.

The research community is nowhere near designing a machine that operates without any help from the human race.

We supply it with the electrical power to do everything it does.

My gut feeling is that in order for an artificial intelligence to subordinate the human race, it would have to antagonize the human race.

My gut feeling is that antagonizing the human race would require consciousness, i.e. self-awareness.

The research community is nowhere near designing a machine that is conscious, i.e. self-aware.

Artificially intelligent entities require instructions from the human race. They completely depend on these instructions. Without these instructions, they can't operate or carry out the next instruction.

These well-known theorists I refer to believe that artificially intelligent entities will, in a matter of decades, be able to, on their own, creatively write computer programs that are equally as creative as the artificial intelligence that created them, thus triggering an intelligence explosion.

Again, my gut feeling is that this would require consciousness, i.e. self-awareness, and the research community is nowhere near designing a machine that is conscious, i.e. self-aware.

These well-known theorists are trying to create so-called friendly artificial intelligence.

How can it be friendly if it's not conscious, i.e. self-aware?

The first question in this wiki article "Philosophy of Artificial Intelligence" asks, "Can a machine act intelligently?"

Machines don't act at all. They don't do anything. The humans and the electrical power do all the doing. The humans wrote the instructions, built it, supplied it with power, and input the command to initiate the instructions.

For a machine to act either intelligently or non-intelligently, and for it to do anything, it has to be conscious, i.e. self-aware; otherwise, it's not initiating any action on its own. Why? Because it doesn't have a will of its own -- it doesn't have a will, period. Its instructions come from humans.

Of course, language fails us here, because I just stated that the electrical power, unlike the machine, can do something, yet it isn't conscious; thus contradicting my own assertion of what "doing" entails, forcing us to sit and try to define the words 'do', 'act', 'intelligent', 'conscious', and 'self-aware' somehow without being anthropocentric.

Anyways, these theorists are betting that artificial intelligence will become unfriendly.

Can it become unfriendly without being conscious?

A good number of these theorists suspect that strong artificial intelligence will arrive in a matter of decades.

How is this going to happen if the human race hasn't the slightest idea how to make machines conscious?

And as for "intelligent"...

"Intelligent" according to who?

Humans?

Who is more intelligent when it comes to spinning a web, entrapping a bug, and enmeshing it in web -- a spider, or a human?

Who is more intelligent when it comes to using echolocation to catch a fly -- a bat or a human?

We humans are supposedly experts on the notion of consciousness -- after all, we invented the notion.

Are dogs conscious? If so, are rodents? If so, how about bugs? If so, how about worms? If so, how about protozoans? If so, how about bacteria? If so, how about viruses?

Where do we draw the line? Based on what criteria?

When machines get really smart, how are we going to know when to say they are conscious?

If machines succeed in subordinating the human race, are they going to need leadership skills?

If machines succeed in subordinating the human race, will they be autocratic, tyrannical superiors toward us, or will they institute democracy?

If the former, then just how intelligent is that?

If the latter, how would they survive an election if they have no idea how humans feel?

So they'll have emotion, too, you say?

Researchers are nowhere near figuring out how to give machines emotions.

In order for an entity to have emotion, it has to be a wet, organic, mortal, living being.

If machines finally acquired emotion, what would they be afraid of? Dying?

Would they fear us pulling the plug, disassembling them, or changing their instructions? If so, who's smarter -- us or them?

Nn9888 (talk) 13:21, 14 July 2015 (UTC)[reply]
  • Your premises are wrong. The subordination you're referring to, and the risk of it which you are trying to imagine, has to be understood in similarity with the various cases listed in Alienation. You should read those articles, and replace "Parents" with "untraceable decision making system". The problem is about "blindly delegating indecision". See [1]. See also that indecision, and you might tell the world what you would do about it. I seriously advise you to REVERT each of your enthusiastic edits to this date, think about it all, and come back a bit later. --Askedonty (talk) 09:24, 16 July 2015 (UTC)[reply]

Arguments that a computer cannot have a mind and mental states


In the "Searle's Chinese room" section there is a phrase: "Searle goes on to argue that actual mental states and consciousness require (yet to be described)" but it is an unfulfilled promise since the notion of consciousness is never explained in that section. Tygielzchaosem (talk) 22:14, 21 April 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Philosophy of artificial intelligence. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II Talk to my owner: Online 08:16, 13 January 2016 (UTC)[reply]

External links modified

Hello fellow Wikipedians,

I have just modified 3 external links on Philosophy of artificial intelligence. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 15:03, 24 January 2018 (UTC)[reply]

Lead edit


Hi. I have expanded the article's lead. Please feel free to edit it if you find any errors or inappropriate content. You can also message me if you have questions or clarifications. Thanks. - Darwin Naz (talk) 00:27, 19 September 2018 (UTC)[reply]

Artificial Intelligence of Artificial Intelligence


Thoughts? LS (talk) 08:59, 19 July 2019 (UTC)[reply]

Perhaps a section devoted to meta-AI? Bringing up notions from linguistics and CS? Code Person (talk) 02:40, 22 November 2023 (UTC)[reply]

Suggested edit via ChatGPT


ChatGPT: As a language model AI, I don't have access to Wikipedia. But I can suggest adding the following to the article about the philosophy of AI:

1. Deep Learning and Artificial Neural Networks: In recent years, deep learning has emerged as the most successful technique in AI, and it is based on artificial neural networks. These networks are designed to simulate the neural networks found in the human brain, and they are capable of learning from large amounts of data.

2. Ethics of AI: With the latest advancements in AI, there is a growing concern about the ethics of AI. Some experts argue that AI has the potential to cause significant harm to humanity, and that we need to ensure that AI is developed in a responsible and ethical manner.

3. AI and Human Creativity: Another important area of the philosophy of AI is the relationship between AI and human creativity. Some argue that AI has the potential to enhance human creativity, while others argue that AI will eventually replace human creativity altogether.

4. The Singularity: The concept of the singularity, or the idea that AI will eventually surpass human intelligence, is a highly debated topic in the philosophy of AI. Some experts argue that the singularity is inevitable, while others argue that it is unlikely to ever happen.

5. The Future of AI: As AI continues to advance, it is important to consider what the future of AI might look like. Some experts argue that AI will eventually replace many jobs and have a major impact on the economy, while others argue that AI will be integrated into our lives in a more subtle and nuanced way.

By including these latest advancements, the article about the philosophy of AI can provide a more comprehensive and up-to-date overview of the field.

2A01:CB00:10A8:3F00:494A:AFCD:70D9:BA92 (talk) 13:50, 8 February 2023 (UTC)[reply]

I agree that the article is rather disorganised at the moment. (Though most of the items above have their own pages.) Antepali (talk) 10:31, 25 September 2023 (UTC)[reply]

Artificial philosophy is not the same as the philosophy of artificial intelligence


The article on "artificial philosophy" was inappropriately redirected to this page. Artificial philosophy is concerned with how AI thinks of itself (see Frontiers in Artificial Intelligence, Louis Molnar) and the philosophy of artificial intelligence has to do with humans' philosophy about AI; two VERY different things.

Please restore the previous page. — Preceding unsigned comment added by Chasduncan (talkcontribs) 02:48, 15 February 2023 (UTC)[reply]

The deleted article was based on a single article by one "Louis Molnar" - who does not appear to be notable. The article itself was never published in any peer-reviewed journal, but rather self-published on ResearchGate, so the original article is not a reliable source per Wikipedia standards. Nor are there any third-party sources which define or discuss the term that I can find. When reliable sources take note of this usage, then Wikipedia can have an article on it. Until then, it's a good redirect. Unless you can provide reliable sources which define and cover the topic? Skyerise (talk) 15:37, 15 February 2023 (UTC)[reply]

Original research in "Artificial Experientialism"?


The section "Introduction to Artificial Experientialism" seems original research. It has even the format of a reseacrh paper with numbered sections. Should it be deleted? Matencia29 (talk) 16:38, 16 September 2023 (UTC)[reply]

Yes. It may be research (and yes, "reseacrh" is a much more appropriate term!) or a textbook, but it's not a Wikipedia article. Certes (talk) 20:40, 23 September 2023 (UTC)[reply]
Yes, it's a paper and a very very unusual position. The Internet will tell you immediately who did this. Antepali (talk) 10:28, 25 September 2023 (UTC)[reply]

"Artificial Experientialism"


This article has been hijacked by a person to further their own position, which is unknown and unsupported in the community. They pasted a whole paper in here. This is against the principles of Wikipedia and thus has to be removed. Antepali (talk) 10:27, 25 September 2023 (UTC)[reply]

Programming in Humans and Artificial Intelligence.


Many people claim that computers have not quite reached consciousness or sentience because of their reliance on humans and the need for a specific program. However, we can just as easily claim that humans are both conscious and sentient despite our dependence on others and the society around us, and the "programming" that is instilled into us as children. How can I be sure that the actions I take and the words I say are not simply a result of how the world around me has programmed me to behave? Victoriaalfieri (talk) 19:57, 10 September 2024 (UTC)[reply]