I suppose we have to define what thinking is before we can consider whether machines can think. Thinking is clearly not just computing figures or following directions. Humans think about things a computer could never comprehend, and so a computer could never fully think like a human.
Computers cannot assume human form, they cannot interact socially, and even if they were to act in their own self-interest, they would still differ from humans because the needs of a computer are radically different from those of a person. A computer would never be subject to something like peer pressure, nor would it try to become queen bee of a clique. Computers have very clearly defined goals and orders, whereas hormones and emotions can cloud the mind of a human, leading us to deviate from our long-term goals.
Another aspect: in psychology I learned about Maslow's hierarchy of needs, a pyramid that describes motivation. Primarily, as humans, we are concerned with basic desires such as hunger. Next, we require a sense of safety, and after that we are motivated by a desire to belong. In general, once these primary motivations are fulfilled, we encounter the motivation of self-actualization, the desire to reach one's greatest potential (e.g., becoming a great athlete, being a great mother, etc.). But all these motivations are intertwined, so at any time you may experience a combination of them. Computers can't really think like humans do because the motivation to feel safe or to have a sense of belonging is impossible in a machine. Furthermore, a computer would rank its priorities and act predictably in choosing which motivation to fulfill, whereas humans are more unpredictable: the motivation we choose to pursue depends on a lot of chance factors. Lastly, self-actualization is a powerful drive for humans, and by definition a computer can't really have this drive. If a machine were intelligent enough to try to improve its own functions of its own free will, without a programmer…well, then it would require some sort of autonomous consciousness that closely replicated that of a brain.
I suppose another problem in the way of machines thinking like humans is that every human is different, and it is inherently difficult to define, let alone code, the aggregate behavior of a human. Even if you could write a function that described aggregate human behavior, the resulting behavior might seem dreadfully…artificial, lacking the small quirks and subtleties that make us distinctly human.
Just my thoughts.