Let us examine Turing’s question, “Can machines think?” (50). Inevitably, as at the beginning of the article, we have to define the terms of such a question in order to find an answer, or perhaps a better question worth raising. Using Turing’s definition, which will be expanded upon later, the class of ‘thinking machines’ is constrained to imaginable discrete-state machines that could pass the Turing test (55). Turing proceeds to delineate various objections, ranging from the theological, creative, and mathematical to the roles consciousness and the nervous system play in describing the Turing machine.
I propose extending the objections from consciousness and from the informality of behavior and looking at the question by posing a new one (56, 60): can thinking machines have intentionality? Of course, answering this is about as simple as proving whether P equals NP. What do I mean by intentionality? According to John Searle, roughly speaking, intentionality is the difference between syntactic processing and semantic understanding. Searle describes this objection through his Chinese Room thought experiment, summarized here: http://plato.stanford.edu/entries/chinese-room/.
If a computer program is competent only in syntax but not in semantics, can it really be said to understand, or is it merely thinking in the Turing sense through the computation and processing of raw information?
A short example of the syntax-semantics gap: “I made them duck,” where “duck” can be read as a noun (I prepared duck for them) or as a verb (I caused them to lower their heads).
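To make the gap concrete, here is a minimal sketch (the structures and field names are mine, purely illustrative): both readings are legitimate parses of the same token sequence, and a program that manipulates only the tokens has nothing to tell it which one the speaker meant.

```python
# Two defensible parses of the same surface string; the choice between
# them is a fact about what the speaker meant, not about the sentence's
# form. All field names here are illustrative assumptions.

sentence = ["I", "made", "them", "duck"]

parses = [
    # "duck" as a noun: I prepared duck (the dish) for them.
    {"verb": "made", "beneficiary": "them", "direct_object": ("duck", "NOUN")},
    # "duck" as a verb: I caused them to duck (lower their heads).
    {"verb": "made", "object": "them", "complement": ("duck", "VERB")},
]

for reading in parses:
    print(reading)

# Both structures are consistent with the input; selecting one requires
# exactly the semantic, intentional content that syntax alone does not carry.
```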
A more thorough example of the problem facing (strong) AI research can be seen in the indirect speech act: an utterance can be used to impart a proposition, or perform an act, different from the proposition literally expressed by the utterance itself. “Can you pass the salt?” is the most commonly used example.
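As a hedged illustration of how far purely literal processing gets you (the function and its canned replies are invented for this post, not taken from any real system): a responder that treats “Can you X?” strictly as a question about ability produces a perfectly well-formed answer while missing the request entirely.

```python
# A purely literal responder: syntactically defensible, pragmatically deaf.
# The function name and replies are illustrative assumptions.

def literal_reply(utterance: str) -> str:
    u = utterance.strip().rstrip("?").lower()
    if u.startswith("can you "):
        # Literal reading: a yes/no question about capability.
        return "Yes, I can."
    return "I don't understand."

print(literal_reply("Can you pass the salt?"))  # -> "Yes, I can."
# The answer is grammatical and responsive to the sentence's form,
# yet the salt never moves: the indirect request went unrecognized.
```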
Take this for instance:
A speaker S1 performs a linguistic action of type A1 if and only if
(a) S1 utters an expression E1, where E1 is a device for doing A1, and
(b) The felicity conditions C1 for that type of speech act obtain.
Speaker S1 makes a promise by uttering the expression E1 in the presence of H1 if and only if
(a) S1 utters an expression E1, where E1 is a device for promising and
(b) The felicity conditions C1 for promising obtain.
Accordingly, it then breaks down as follows, assuming normal conditions obtain:
1. S1 expresses the proposition that P1 in the utterance of E1.
2. In expressing that P1, S1 predicates a future act A1 of S1.
3. The hearer H1 would prefer S1’s doing A1 to S1’s not doing A1, and S1 believes H1 would prefer his doing A1.
4. It is not obvious to both S1 and H1 that S1 will do A1 in the normal course of events.
5. S1 intends to do A1.
6. S1 intends that the utterance of E1 will place him or her under an obligation to do A1.
7. S1 intends that the utterance of E1 will produce in H1 a belief that S1 intends to do A1, and that S1 intends to be placed under the obligation of doing A1. Moreover, S1 intends to induce this belief in H1 by getting H1 to see that S1 intends to induce it.
This deceptively simple example is paradoxical: it consists of a few conditional statements that any programmer should know how to produce, yet satisfying them requires an understanding of contextual intentionality. Instead of asking ourselves “Can machines think?”, maybe we should be asking: can machines that think, understand?
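Here is roughly what I mean, as a sketch only (the field names paraphrase the felicity conditions above; they are my labels, not Searle’s): the control flow is trivial, but every boolean stands in for an intentional state, a belief, preference, or intention, that no amount of syntactic processing of E1 by itself can establish.

```python
# A programmer's transcription of the promising conditions. The easy part
# is the conjunction; the hard part is everything the booleans hide.

from dataclasses import dataclass

@dataclass
class PromiseConditions:
    s1_utters_e1: bool                 # S1 utters E1, a device for promising
    s1_expresses_p1: bool              # S1 expresses the proposition P1
    p1_predicates_future_act: bool     # P1 predicates a future act A1 of S1
    h1_prefers_a1: bool                # H1 would prefer S1's doing A1
    s1_believes_h1_prefers_a1: bool    # S1 believes H1 would prefer A1
    a1_not_obvious_anyway: bool        # not obvious A1 happens in the normal course
    s1_intends_a1: bool                # S1 intends to do A1
    s1_intends_obligation: bool        # S1 intends E1 to obligate S1 to do A1
    s1_intends_h1_to_recognize: bool   # S1 intends H1 to recognize these intentions

def is_promise(c: PromiseConditions) -> bool:
    # Any programmer can write this conjunction; nothing in it says how a
    # machine could establish the truth of the conjuncts from E1 alone.
    return all(vars(c).values())

example = PromiseConditions(*([True] * 9))
print(is_promise(example))  # True, but only because we asserted the intentions by hand
```

Writing is_promise is easy; writing anything that could fill in those nine booleans from the utterance alone is the hard part, and that is the point.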
Bonus link: Eliza Online in case anyone wants to play http://www-ai.ijs.si/eliza/eliza.html