I had read through 17776 once before when it was first published and was instantly fascinated by it. Upon a second reading (and as the title suggests), a scene from Mr. Penumbra’s 24-Hour Bookstore came into my mind. Namely, when Kat describes how, when thinking about the future, it is basically impossible to think further than 1000 years down the road. You have your flying cars, your clean energy, your world peace, what have you – but that is still, in the grand scheme of things, so close to where we live now.
Jon Bois’s 17776, however, seems to counteract this. The piece seems to actually think down the line, beyond 1000 years. Earth is wiped of humanity, the only communication is between satellites, and football is now played by tornadoes and the endzones are old state lines – now that’s thinking in the future.
Alongside this, I was also reminded of (as I am about basically every other text in this class) Flusser’s new consciousness. Quite literally, of course, with the piece’s advent of sentient technology – this is literally a new form of consciousness. But I also felt it in the way time is depicted in the piece. There is something quite ironic about two sentient A.I.s only being able to communicate every few years because of the way humanity created them. The same way our processes for thinking and communication have been limited by our own minds, the piece’s A.I.s are limited by our ability to create them. This might not be fully fleshed out, but I think the point is there. I would love to know if this piece (or any of the other texts) made you think of past readings.
Personally (and, judging by the other blog posts, this seems to be the case for many of you), this week’s reading was a challenge – though not in the traditional sense posed by previous texts. I didn’t find the content difficult to comprehend. Instead, the act of reading itself was extremely difficult for me.
Whenever possible, I avoid reading through a screen. In all cases, I prefer a physical copy over a PDF, a webpage, or even a Kindle. For me, reading through technology’s interface makes it incredibly hard to focus on the content. What should have taken me an hour – reading “Not A Case of Words” – ended up taking several hours, simply because I was reading it through a screen.
Reading “Between Page and Screen” posed the same challenge. At first, I was intrigued. The augmented reality was interesting, and it instantly drew my attention to all the interfaces at play in order for me to be able to see the words. However, once the “gimmick” wore off, I quickly lost interest and found myself unable to focus. Moreover, the AR was quite finicky (which “Not A Case of Words” touches on). When it failed, I was instantly taken out of the world it attempted to create and drawn to notice its thingness. Upon writing this, though, I realize this may be the point of BPaS. It uses multiple interfaces to deliver an interesting story, but it also seems to poke at those interfaces. Perhaps the awkward holding position and “unfinished” AR are part of that?
All in all, this week’s reading – while being difficult to get through – inherently shed light on everything we’ve been discussing throughout the semester. Which makes me think the readings did their job.
First, I found this book to be a refreshing read, especially compared to the other nonfiction we have read in this class. It was an easier read than some of the theory-heavier texts, and the points it was attempting to make were clear and concise, at least for me.
That being said, I was highly interested in this quote on page 55: “The movement of computers into people’s homes makes it important for us personal system users to focus our efforts towards having computers doing what we want them to do rather than what someone else has blessed for us…” and later on the same page, “Imagine having your own self-contained knowledge manipulator… the Dynabook would have allowed users to realize Engelbart’s dream of a computing device that gave them the ability to create their own ways to view and manipulate information.”
These two quotes tore me in two directions. On one hand, I felt ruled by our corporate overlords, the ones who made the decision(s) to give my computer the abilities it has. I felt robbed of my right to create my own “self-contained knowledge manipulator.” On the other, I feel like there is no way a computer I came up with myself would come anywhere near the capabilities of what those at Apple have made.
In other words, I’m torn between my personal “right” to create a personalized computer and delegating to the experts, trusting that they are creating the best machine for me. I’m sure this Mac (which I’m currently typing on) has features I never would have even dreamed of making, but I’m still curious what features I would have come up with were I to build my own computer. I would love to know your thoughts and viewpoint on this topic.
First things first, I absolutely loved this novel. For the first 350 or so pages, I was completely mesmerized by the question of how the two worlds would connect. When they finally did, I was blown away and loved it all.
Throughout the entire set-up, leading up to the realization that the lab has more or less been created for military purposes, the novel’s message seems to be “all art is commercialized” – or at least commercializable.
At the reveal, however – when the lab saves Taimur’s soul from completely breaking – the message seems to reverse completely. Or rather, it becomes more realistic, more complicated. Art’s sole purpose is no longer to be commercialized and profited off of; it also carries the unexplainable power to heal, to save someone from the brink of insanity.
Simply put, I loved this message, although there are undoubtedly others as well. What do you guys think? What other messages have you drawn from the novel?
While I found the theory behind Hookway’s “Interface” quite interesting, it was still a very tough read. I often felt his message was obscured behind the structure of his sentences, but then I wondered how the actual thingness of the book itself contributed to my difficulty. To be clear, I bought the actual book, not the excerpt; and to me, it felt like I wasn’t reading a book but a theoretical essay I had just printed off the internet. I wonder how much the physical size, shape, and typeface of the book played into this. All of it was very unconventional – a square-shaped book with blocky, technological-looking text.
In contrast, Galloway’s text was a much easier read for me. Plus, compared to Hookway’s, it felt and looked like a normal book: rectangular, with a “normal” typeface.
How much did the physical forms of the books shape my interest in, or difficulty with, each? I would love to know your guys’ thoughts!
Edit: in addition to the layout of the two texts, Hookway seems to me to focus more on the interactions of the interface than Galloway does, but I still found that neither quite makes the leap I was hoping for. Personally, Hookway’s text seemed quite abstract, which made it harder to grasp and apply to my own life/reading. In contrast, I found Galloway’s text to be less dense, but it seemed to focus less on the interactions that Hookway defines. I think that, as a paired set of texts, they come extremely close to painting a clear picture – but something seems to be missing for me.
Hey all! I’m leading discussion for this upcoming class, so I figured I would pose some initial questions for my blog post so you all can get an idea of what discussion will center around:
- This was hinted at in the previous class, but why do you think this book came after Whitehead? What are some similarities between the two?
- On page four there’s a brief mention of “digital thingness” – What is digital thingness?
- Going off of that, how (much) does thingness rule our relationships to the thing itself?
When I was reading this, I became hyper-aware of how ruled I felt by the format, layout, and design of whatever it was I was working with – be it a book, laptop, piano, etc. I would love for us to think about truly how often the thingness of our objects, specifically in the realm of reading and writing, comes into play in our everyday lives. How often do you take a step back and look at the thing itself? Only when it breaks? How do lines on paper define how we write our notes? What about word docs? Powerpoints? Blog Posts? Etc.
Looking forward to talking about it all with you guys!
First and foremost, this book had me laughing out loud with every flip of the page. Two moments in this book made me laugh perhaps harder than any other novel ever has, both coming from scenes with Jim and John: “Hot dogs and mustard is Jim’s favorite meal, mustard being a discrete element and not mere condiment. John likes hamburgers with ketchup – fine distinctions are not lost on John…” and, after kidnapping Ben Urich, “‘Is that okay with you, Jim? Just nod because I know it hurts to talk, what with your tooth and all.’ Jim nods, grateful that his friend and partner understands him so well.”
But, satire and humor aside, the ending of this book (and the whole idea of the black box) made me think of Flusser’s idea of surpassing our current linear history for a new type of consciousness, leaving us – the people with linear mindsets – with an outdated perception of the world; precisely in the passage on page 198: “They will have to destroy this city once we deliver the black box. The current bones will not accommodate the marrow of the device.” In essence, this new way of thinking about elevators will send the city “back to kindergarten,” having to start anew and relearn (or rebuild) their entire ways of thinking (construction). Additionally, the repeated phrase “they aren’t ready, but they will be” in the final pages hints at the same idea.
I would love to know if other passages in the novel made you guys think of Flusser.
With my original project proposal, I stated I would like to investigate the idea of an algorithm (or AI, rather) that, were you to input all of Rilke’s poetry, could produce its own poetic works based on the input it was fed. I was unsure what question I wanted to dive into surrounding this topic, but I knew I wanted to use the lens of Authorship Theory – and I think I have it now:
Were you to feed an algorithm all of Rilke’s poetry and have it create its own poetry based on that input, who would the author be? Would it be the algorithm, a program devoid of life (and thus of the historical and societal contexts central to Barthes’s “Death of the Author”)? Would it be Rilke, whose poems were the input, so that you could argue the new poem was just an inevitability of Rilke’s work? Or would it be the programmer of the algorithm?
Personally, I’m still unsure where I side. All three have compelling viewpoints and arguments. I do know, however, that I want the centerpiece text to be Barthes’s “Death of the Author,” because I think his text allows for some fascinating entry points.
The first that comes to mind is the algorithm itself – since the program has no outside life (it doesn’t breathe, walk to the store, or have opinions), it would be perfectly void of outside influences, and the text could be read as it stands on its own two legs. However, one could also argue that it does have a societal context, because a human created it. Either way (again, still unsure where I side), I think this second draft has really narrowed down the avenue I want to travel for this essay. I would love to know your thoughts and suggestions!
On the bottom of page 161 and top of 162, Kirschenbaum (or rather his word-processor) writes: “Today some believe that the grail of human-computer interaction is to operate machines directly by neural impulse. Writing can unfold at the speed of thought, without regard for fumbling fingers, inflammation of the carpal tunnel, or other bodily functions.” After reading that, I became extremely uneasy and felt like, if that technology were to happen, I would find extreme difficulty writing with it.
First, it deeply unsettled me personally: physicalizing the text, whether through writing with ink or typing, is a large part of what attaches me to my text, what makes me feel like I have written it. Also, when I’m ‘in the zone’ with my writing, it’s almost like I’m not thinking and my fingers are the ones doing the writing.
If this technology were to become present in my lifetime, I don’t think I would be rushing to use it. I’d dip a toe in, maybe. Of course, this is how many writers felt about the invention of the word processor, as evident in Track Changes. I wonder if, in a few generations, another Track Changes-esque text will be published, tracking the invention of the thought-processor and various authors’ relationships to it.
Much like how Prof. Fitzpatrick was unable to put down The Crying of Lot 49 upon her first reading, I personally found Sloan’s novel completely gripping – it never left my hands until I was finished. Speaking in terms of the class’s focus, there is some obvious interplay here between technology and literature. Namely, the characters use the biggest, most bad-ass technology to attempt to crack an ancient text’s code. In this interplay, which runs throughout the entire novel, it became clearer and clearer to me that Sloan’s message almost directly contrasts with Flusser’s.
First, in the scene where they basically shut down Google for a split second in a failed attempt to crack the code, I took it as a pretty direct message of “Hey, technology can’t provide what original literature does” – a direct counter to Flusser’s idea that all of literature is completely replaceable.
I again found this idea at the end of the book (spoiler alert?) when the code is solved. The secret to breaking the code was inside the literal creation of text itself (in a typeset), not in technology.
And finally, I found it interesting that the entire novel actively intertwines technology and text as two players on the same team. They use and help each other throughout – again, quite contrary to Flusser’s idea that technology is here to supersede literature, not work alongside it. Of course, there are similarities between the two as well, and I’d love to hear your guys’ thoughts!