The discussion of machines performing repetitive and menial tasks to save humans’ time and effort is almost as old as the discussion of programs’ potential for self-examination, which might expand them beyond the limited confines of static code. In New Media, Gitelman discusses a bot that could turn pictures into characters, thereby making the New York Times’ archive searchable by an indexer on the World Wide Web. Of course the program was not perfectly accurate, but its limitations lay not in its ability to read, but in gaps in its programming for handling exceptions that might arise due to a wrinkle in the page or a smudged letter. Extending from this, she examines the programming of understanding: the idea of having a machine do not only the tedious searching, indexing, or calculating, but also the thinking, along with the other ways the internet challenges our patterns of thought. Anyone who has worked in Photoshop understands the frustration of “I just want it to outline the PERSON,” which seems like an easy concept to our minds but which the computer is unable to understand and complete for us. Similarly, internet processes require us to break down our patterns of thought and mechanize them in tiny chunks before a computer can perform them in varied settings. This has been important not only to the development of thinking machines, but to our own conceptualization of how we think.