Thinking Machines

The philosophical question of whether machines can think has been debated for centuries, and many prominent philosophers have staked out positions on it. Two stand out in particular: René Descartes, who rejects the idea that machines can think, and Alan Turing, who proposed a behavioural test for deciding whether a machine can think. John Searle's Chinese Room argument also takes a strong position on this question. In this paper, I will argue that Searle's argument succeeds: information-processing machines do not have intentionality, and therefore do not think. Descartes, a philosopher with a dualist position …
He does not deny that machines can think, since humans are themselves a kind of machine; rather, he distinguishes humans from information-processing computers, holding that the two are different and that computers cannot think. His Chinese Room argument shows that, provided Searle follows a comprehensive rule manual correctly, he can produce answers that pass the Turing Test even though he does not know a single word of Chinese; that is, he can pass without semantics. The argument addresses the debate over whether machines can think by trying to show that the Turing Test proves nothing. As long as a machine or a person follows the manual's rules for manipulating symbols, it has the potential to pass a thinking test such as the Turing Test. The Chinese Room argument thus tries to show that a programmed machine may seem to understand language while in fact understanding nothing (Searle 1980). Therefore, Searle takes the Turing Test to be insufficient proof of thinking.
I believe what B.F. Skinner meant when he stated, "The real problem is not whether machines think but whether men do," is that machines need to be programmed to work efficiently and effectively, and that people should not get so comfortable with machines completing a task that they forget how they functioned before the machines existed. For instance, consider people who depend on an alarm clock to wake up: Lord forbid the electricity goes out. I have an internal alarm clock that wakes me up every morning around the same time. I believe that this is a learned behavior, and that people could program themselves to do the same if they really wanted to.
The documentary Aliens of the Deep Sea presented us with six different experiments aimed at studying different aspects of octopuses' intelligence. I will focus on just one of those experiments and attempt to apply Jackendoff's First Fundamental Argument, which holds that a language user's mind can be viewed as an internal computational system containing an unconscious set of rules.
In the article "Net neutrality hits a nerve, eliciting intense reactions," Cecilia Kang discusses how the pending repeal of net neutrality by the FCC and its chairman, Ajit Pai, is adamantly contested by most of the Internet community and by companies big and small. To develop her argument, Kang uses a wide variety of appeals: statements from established and startup companies, statistics and evidence about the reaction to the repeal, demonstrations of how polarizing the issue is, and discussion of the repeal's effect on the problem of Internet regulation. Kang cites a multitude of Internet-based companies and organizations, such as Mozilla, Google, Netflix, and Free Press, to demonstrate their concern and clarify their resentment of the repeal. For instance, Google and Netflix argued that "telecom companies should not be able to split sites because that would allow them to become a sort of gatekeeper." These responses clarify companies' concerns about the repeal and its effect on their business, while also aiding Kang in explaining the concern and the response the repeal has elicited. According to Kang,
Even if a computer were programmed with all the information ever known to man, how could it be capable of conscious thought?
John Searle (1980, in Cooney, 2000) provides a thought experiment, commonly referred to as the Chinese Room argument (CRA), to show that computers programmed to simulate human cognition are incapable of understanding language. The CRA asks us to consider a scenario in which Searle, who cannot read Chinese, finds himself locked in a room with a book containing Chinese characters. He also has a second book, containing a set of instructions written in English (which he understands), that allows him to match and manipulate the Chinese characters so that he can provide appropriate written responses (in Chinese) to incoming questions, which are also written in Chinese. In addition, Searle has a pile of blank paper on which he jots down his answers. Eventually, Searle becomes so proficient at providing responses that the quality of his answers matches that of a native Chinese speaker. Thus, Searle in the room functions as a computer would: he is the system, the books are the program, and the blank paper acts as storage.
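To make the analogy concrete, the room's procedure can be sketched as a simple lookup program. This is a hypothetical illustration, not Searle's own formalism: the rule book is modeled as a table pairing input symbol strings with output symbol strings, and the operator consults it with no access to what any symbol means.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" maps question strings to answer strings; the operator
# applies it by matching shapes only, never meanings.
# (These entries are placeholder examples, not rules from Searle's paper.)

RULE_BOOK = {
    "你好吗": "我很好",    # the operator never learns these mean "How are you?" / "I am fine"
    "你是谁": "我是学生",  # "Who are you?" / "I am a student"
}

def chinese_room(question: str) -> str:
    """Produce a reply by table lookup, with zero understanding of the symbols."""
    return RULE_BOOK.get(question, "请再说一遍")  # fallback rule: "Please say that again"

print(chinese_room("你好吗"))
```

A large enough table (or, more realistically, a set of manipulation rules) could make the replies indistinguishable from a native speaker's, which is exactly Searle's point: fluent output requires no semantics anywhere in the system.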
The reply states that a computer can derive semantics from syntax only if it is given enough connections to the world to support that derivation. However, since the reply is essentially an incomplete argument against Searle's position, it is not a very devastating criticism. If the Chinese Room were placed inside the robot and I were given all of this syntactic information about the world, I still would not know what the symbols meant, regardless of what the robot was actually doing, because I would only be shuffling symbols; I would be unable to gain any meaning from them (Cole, 4.2). The actions of the robot do not prove that the computer operating it is thinking; they only show that it is still able to run through its own program.
In Reason and Responsibility, John Searle presents his Chinese Room argument to refute Strong AI. Strong AI is the view that an appropriately programmed computer is not a simple tool but genuinely is a mind by virtue of running the right program: essentially, a system counts as having a mental state, M, because it follows a set of programmed rules that make it behave as if it were in M. Searle wishes to prove that the mechanical application of communication rules does not give a machine the ability to understand language, and hence that it cannot think for itself. In other words, Searle is saying that computers cannot think. Searle's argument is as follows:
Christopher says, "… I didn't understand about other people having minds" (116). Christopher does not understand that other people have their own thoughts. He then compares the human mind to a computer's: "It's like computers. People think computers are different from people because they don't have minds, even though, in the Turing test, computers can have conversations with people about the weather and the wine and what Italy is like, and they can even tell jokes."
Searle believed that materialism and functionalism do not give a full explanation of the human mind, and that there is much more to the mind than electrochemical activity. He believed that we could teach machines syntax (sentence order) but not semantics (understanding of meaning). Therefore computers would not know what they were doing; they would basically just be replying to specific stimuli.
The conditions of the present scenario are as follows: a machine, Siri*, capable of passing the Turing test, is being insulted by a 10-year-old boy, whose mother is questioning whether it is appropriate to punish him for his behavior. We cannot answer the mother's question without speculating about what A. M. Turing and John Searle, two 20th-century philosophers whose views on artificial intelligence are starkly contrasting, would say about this predicament. Furthermore, we must give fair and balanced consideration to both theorists' viewpoints because, ultimately, neither side can be "correct" in this scenario. But before we compare hypothetical opinions, we must establish working definitions for all parties involved. The characters in
defines the existence of a subjective state as a prerequisite for consciousness. According to Searle’s
In his essay "Minds, Brains, and Computers," John Searle attempts to prove that Strong AI does not exist. He still believes there is some merit to Weak AI as a means of understanding how certain elements of the brain function, but he rejects the idea that a computer could not only mimic the brain but actually be a mind. The most concrete example Searle used in support of his claim is that computers cannot understand, which he demonstrated through the Chinese Room experiment. In the Chinese Room experiment, a person can functionally answer questions in a language they do not understand, as long as they are given instructions, in their native language, for matching the foreign symbols to appropriate responses. In this experiment the subject
John Searle formulated the Chinese Room argument in the early 1980s as an attempt to prove that computers are not cognitive systems. In short, although the emergence of artificial and computational systems has rapidly expanded the possibilities of knowledge, Searle uses the Chinese Room argument to show that computers are not cognitively independent.
The Representational Theory of Mind proposes that we, as both physiological and mental beings, are systems that operate on symbols and on interpretations of the meanings of those symbols, rather than beings that operate only through physiological processes (chemical reactions and biological processes). It holds that humans and their minds are computing machines: mental software (the mind) that runs on physical hardware (the body). It suggests, too, that we are computing machines functioning as something other than a computing machine, just as every other machine does.
He demonstrates this by distinguishing what he calls Strong and Weak AI. Searle contrasts these two views of the relationship between minds and computers. Weak AI is defined as the use of computers as a very powerful tool for running experiments to help explain and study the mind. Searle has no quarrel with Weak AI. By contrast, Searle's argument is with Strong AI, and he uses the "Chinese Room" to show that Strong AI is not true. Strong AI holds that an appropriately programmed computer is a mind, and that it genuinely understands when given the appropriate program. For this reason Searle created the "Chinese Room" experiment to demonstrate that Strong AI is false. Searle identified Strong AI with functionalism, because the programs computers run are independent of the particular machines that execute them. Functionalism is the view that a physical disposition enables a function to