Here is my Sophie version of Alan Turing’s “Computing Machinery and Intelligence.” Enjoy!
Turing’s “Computing Machinery and Intelligence” is a fascinating piece to consider when discussing the potential power of machines and what that power means for us and our humanity. I’d like to extend his idea by introducing John Searle’s counterargument, presented in a 1980 paper entitled “Minds, Brains, and Programs.” Almost sixty years after Turing published his work, I think it’s harder to argue that a machine couldn’t be developed to convincingly pass Turing’s test, if one hasn’t been developed already. Searle introduces a counterpoint, the Chinese Room argument, and suggests that a machine could pass Turing’s “imitation game” without understanding what it’s doing and without helping to explain the way humans think. His example involves an English speaker in a room with a set of Chinese inputs and outputs and a set of rules for answering any potential question posed to them. With enough practice and a complete enough set of rules, it seems this machine (the person, rules, and symbols together) could answer any question in Chinese, and so pass Turing’s test, without the person inside having any understanding of Chinese. If that were the case, what truly comprises thought and thinking? In these examples, I think you can go back and forth forever on what constitutes “thinking” and a “machine,” but to me it seems you eventually reach a point where the line is significantly blurred. I see no reason why, given enough time, technological advances, and understanding of ourselves, a machine couldn’t be developed to think and to understand what it’s thinking. For one thing, I’d say you certainly can’t rule anything out, at least not right now with what we know.
Bush’s piece reads as oddly familiar today, in the age of computing. He obviously wasn’t using many of our terms (“internet,” “computer,” and so on) because they hadn’t been coined yet, but the detail with which he is able to talk about some of these future technologies is pretty incredible. On one hand, I’m not extremely surprised (I’ve read enough science fiction to have encountered similar speculation), but the detail with which he wrote nonetheless stood out strongly. The aspect I found most interesting about his article, though, was the setting and inspiration from which he wrote it. The article is dated July 1945 and thus was published in the closing months of World War II. I really liked his suggestion that we undergo a shift and direct the powers of our intellect away from building physical objects that give us power and instead focus on more abstract, mental processes. In many ways, that shift certainly has happened today. But what implications has it had on, say, the financial markets, where it seems we’re making money from money and not really producing anything physical? And what can we take from this motivation of Bush’s, considering that we’re currently at war but have just undergone a change in presidents?
As the world and its technology progress, fantasies become reality, and notions once dismissed as absurd come to be taken seriously. Both Turing and Bush bring up the limits of technology and make us question where all this is heading. Personally, I don’t fear the possibility of human-like machines capable of thought. Sure, it might further render actual people inadequate, increasing unemployment, but I think we should accept the seemingly inevitable.
The sort of people who challenged Turing, insisting that machines could never match the complexities of the human mind, could probably themselves devise programs that at least emulate those complexities.
With the amount of storage in each machine continually increasing, it is likely that gigantic programs tackling the intricacies of the human brain would eventually be able to fit. It may take a ridiculously long time until we reach that point in history, however.
On the other hand, who would even want machines capable of that? Bush already gestured toward the overabundance of information and the difficulty of sorting through it all. We’re already having a hard time with important things getting lost in the sea of knowledge, unable to float to the surface… perhaps it would be a good thing if machines didn’t reach that level. I suppose that regardless of what people’s opinions are, if it happens, we’ll just have to deal with it then.
While reading Turing’s article, I thought a lot about how frightening it might be if machines today were actually capable of thinking completely for themselves. It would no doubt be something amazing, but whenever I think about thinking machines, I always come back to the (really terrible and not worth watching ever) Disney Channel Original Movie “Smart House.” In this movie, the machine, the house itself, learned to think entirely on its own. At first this was very beneficial to the family that moved in, but after some time (if I remember correctly) the house got extremely attached to the idea that the family needed a mother and that it could fulfill the role perfectly. Unfortunately, this attachment led to the house practically taking the family hostage in their own home. Like I said, it is an extremely awful movie, yet it is because of this movie that I am still pretty hesitant about the development of things like artificial intelligence.
This article also made me think about the robot Bender from the television show Futurama. In a way, the show commented on the idea that there is something innately human that machines seemingly cannot replicate. Although Bender is fully capable of thinking in the show, in one episode he was given an “emotion chip.” With this chip he was able to feel things for the first time, which suggests that simply being and learning as a robot is not enough to capture and process emotions as a human would. Though I think machines are very smart and can process more information than is humanly possible, I would have to agree with the creators of Futurama that machines will never be just like us.