The term virtual construct is used here to denote a non-physical, digital, artificial form, interaction with which is made possible through a human/machine interface. The term software is often used to refer to virtual constructs, but in many cases it proves to be of limited use. Software can indeed be classified as virtually constructed, but the term loses much of its significance when applied to emergent virtual constructs, whose properties are difficult to pinpoint because they extend far beyond the boundaries of the software used to create, modify, and interact with them.
There are two types of virtual constructs: discrete and emergent.
The majority of software applications shipped today can be classified as discrete virtual constructs (DVCs). Generally “feature complete,” with all necessary resources contained within the application itself, examples of discrete virtual constructs include word processors, image editors, calculation engines, and even some single- and multi-player video games. Software developers may periodically issue updates to DVCs that add features or improve stability, but DVCs overall lack the ever-evolving nature of their emergent cousins.
Discrete Virtual Constructs live locally on the hard disks upon which they are installed, and carry out a user’s commands by taking advantage of the local machine’s computational resources. They may include a limited feature set requiring network access, but these are not the DVC’s defining characteristics. Most DVCs would be able to carry out their intended functions without ever connecting to the internet.
Virtual constructs are metaphysically unique forms in that in some sense they are otherworldly. Unlike spirits, gods, and other supernatural phenomena, virtual constructs surely exist as part of the material world, but we don’t interact with them or through them as such. Not only do virtual constructs appear to exist physically nowhere in particular, but they also have the ability to exist in more than one place at a time. They can be accessed on every desktop computer and smartphone simultaneously, yet their absence from any one of these devices in no way affects the existence of the construct as a whole. The existential nature of virtual constructs is as complex and contradictory as that of the now defunct conception of an omnipresent god-like being.
Where does a construct like Twitter or a virtual world like Minecraft exist? It can be said with certainty that they do exist, but where are they? They span vast distances, occupying disk space on servers across the world; servers that can be geographically located and physically accessed, but in no way can one point to a set of atoms and say “That is Twitter,” or “Here is Minecraft.”
Indeed, one could theoretically collect the entire complement of servers upon which every instance of World of Warcraft is stored, but even then it would be absurd to refer to an array of rack-mounted servers and claim “That is World of Warcraft.”
This is because a reductionist approach to the explanation and identification of emergent systems is absurd. For this is the very idea of emergence: that properties metaphysically necessary to the system as a whole are not, and cannot be, found in the system’s constituent parts. Emergent virtual constructs cannot be understood by studying the interactions of particles at the atomic level of a hard disk. Nor do these constructs begin to take shape, or show characteristics uniquely identifiable as virtually constructed, until level upon level of complexity is added to the system.
Recall now the concept of the me in Neal Stephenson’s Snow Crash (257). In his effort to understand the ancient Sumerian civilization, the novel’s hero, Hiro Protagonist, searches long-forgotten recesses of the Central Intelligence Corporation’s archives. In a conversation with the Librarian, a virtual research assistant that is itself an example of an emergent virtual construct (EVC), we gain valuable insight into how emergent properties within virtual constructs can mimic those within a civilization:
HIRO: “Execution? Like executing a computer program?”
LIBRARIAN: “Yes. Apparently, they are like algorithms for carrying out certain activities essential to the society. Some of them have to do with the workings of priesthood and kingship. Some explain how to carry out religious ceremonies. Some relate to the arts of war and diplomacy. Many of them are about the arts and crafts: music, carpentry, smithing, tanning, building, farming, even such simple tasks as lighting fires.”
HIRO: “The operating system of society.”
LIBRARIAN: “I’m sorry?”
HIRO: “When you first turn on a computer, it is an inert collection of circuits that can’t really do anything. To start up the machine, you have to infuse those circuits with a collection of rules that tell it how to function. How to be a computer. It sounds as if these me served as the operating system of the society, organizing an inert collection of people into a functioning system.”
Here, the Librarian has identified the me as an emergent property of civilization, and Hiro likens it to the role an operating system plays in the overall functioning of a computer. Emergence is manifest in the society through the me, and in computers through the OS.
The me—the operating system of society—is metaphysically essential to the civilization, but it cannot be identified within any constituent part of the civilization. Without the me the civilization would cease to be. Buildings, roads, people, etc. would remain, but without the governing, life-giving force of the me, the civilization would crumble, and what would be left would be something quite different. Further, the me can neither be created nor destroyed simply by adding or removing those constituent parts. Made manifest only when the entire system works together in harmony, this emergent property, the me, is the lifeblood of the civilization. Found everywhere is evidence of its existence, yet nowhere can the property itself be isolated and identified.
Just as Hiro Protagonist interacts with and utilizes emergent virtual constructs as tools to combat the rapidly spreading virus Snow Crash, we too now live in a world of EVCs that empower us to accomplish epic feats once found only within the pages of fiction.
As opposed to the static nature of DVCs, EVCs exist in a dynamic state of agile development and constant growth. Emergent virtual constructs are defined as virtual constructs containing the property of emergence, which is manifest today in four classes of EVC.
Examples of emergent virtual constructs include social platforms such as Twitter, and virtual worlds such as Minecraft and World of Warcraft.
The rapidly increasing significance of emergent virtual constructs in our daily lives, and their impact on the future progress of civilization, together constitute the defining force of the New Era of Tech.
Until next time, I hope to hear from you. Goodbye.
My great-grandfather Dart lived to be 105 years old. Nearly everyone downstream of him on the family tree is still alive, including my grandfather, who was born in the 1930s and still goes whitewater rafting. Longevity runs in my genes.
That said, the last 20 or so years of Grandpa Dart’s life were less than pleasant. Although set financially, and cared for by his children, Grandpa was mostly deaf, almost entirely blind, and confined to a wheelchair by the time he passed. He had not only outlived all his friends and much of his family, but almost everyone he had ever heard of. Worst of all, he was forced to live the last 28 years of his life without his wife Olive, who died in 1983 after a decades-long battle with Alzheimer’s disease. In his last years he was granted only sporadic moments of mental clarity, but always seemed to manage a scoff when overhearing children referred to as “kids” (a word he believed should be reserved for youngling goats).
My 105-year-old great-grandfather is an outlier by today’s standards, but what if the average human lifespan wasn’t 67 worldwide and 80 in the US but an even 300 all around? What if we could extend our lives to 1,000 years or more? With advances in science, medicine, and technology, it won’t be long before centenarians outnumber those who live to be merely 90 or 95 years old. Researchers at the Methuselah Foundation go so far as to say that the first person to live to be 1,000 years old has already been born.
This is an exciting prospect, but such a drastic change in the human condition brings with it many uncertainties. For example, what would be the benefit of living for hundreds of years only to have your mind and body deteriorate like those of my great-grandparents? It is rare that one’s physical health and mental acuity improve with age, and adding a century or two to the average human lifespan could usher in a swath of unpredictable healthcare costs and challenges.
This is one hurdle the growing movement known as the Quantified Self seeks to leap past. Sometimes shortened to QS, the Quantified Self aims specifically to increase self-knowledge by providing insight through self-reporting and activity tracking. QS is often referred to as a movement, though I find it more fitting to call it a practice, much like a daily yoga or meditation routine.
The modification of routine and habitual behavior, however minute it may seem, can have large effects on overall health, particularly if practiced for years and years. Consider the man who stops drinking soda on a regular basis and rapidly drops 15-20 pounds. Now consider the long-term health benefits of similar positive choices over a lifetime. To be clear, I’m not talking about 6-week diets and point systems for meals. Starting a QS practice is as simple as taking a look at your daily activities, recording that data, and then, through gradual progression, improving your health by modifying behavior.
The term “Quantified Self” has only recently come into mainstream usage, but there’s nothing new about people keeping track of specific data in hopes of improving overall health. Many of us monitor our weight on a daily or weekly basis, some of us count calories, and even the recommended 8 glasses of water per day is an example of quantification. On a more formalized level, it has now been commonplace for decades to track the blood glucose levels in individuals with diabetes. What is new about QS is the ever-increasing number of consumer tech products available to aid in monitoring all kinds of heretofore ignored and underreported data.
It has been said that one cannot hope to effectively improve that which one cannot accurately measure. Luckily, it’s never been this convenient to track and improve so many aspects of our overall health. Wearable pedometers used for tracking steps, pocket-sized blood pressure cuffs that work in conjunction with smartphone apps, and wifi-enabled scales that automatically upload weight and BMI information to the cloud are common examples of how the consumer tech sector is focusing efforts on the growing number of people interested in QS.
My Fitbit Flex keeps track of how many steps I take every day, and syncs automatically with my smartphone and the Fitbit web interface. I periodically check the Fitbit app throughout the day to make sure I’m on track to reach my daily goal of 10,000 steps. The Flex sends my data to the cloud all on its own. The only thing I have to remember is to remove the sensor from the wristband every few days for a charge.
The Flex can also track my sleep. I simply tap the sensor when I lie down for the night and once again when I wake up. In the morning I can check the Fitbit app to see how long I was asleep, as well as how “efficient” my sleep was, i.e., whether I was tossing and turning or sleeping soundly. Apart from steps and sleep, Fitbit also provides its users with manual data-tracking tools, like the ability to input weight and daily water consumption.
When combined with a daily journal, these four metrics (steps, sleep, weight, and water) are incredibly useful for identifying trends in physical and mental health. Now I can easily look back over the months and see how my sleep schedule correlated to whether or not I was grumpy throughout the day, and how my hydration levels influenced my exercise routine. It didn’t take long for me to pinpoint the exact amount of water I need to drink every day in order to feel fully capable of completing my workout with ease. And if I ever need a motivation boost, I can connect with my friends on fitbit.com and compare my activity levels to theirs.
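As a sketch of how such trends might be spotted, here is a minimal, hypothetical example. The sample numbers and field names are invented, not a real Fitbit export; any pair of tracked metrics arranged into the same shape would work:

```python
# Hypothetical QS log: a few days of tracked metrics.
# (Invented sample values; not a real Fitbit export format.)
from statistics import mean

log = [
    {"sleep_hours": 7.5, "steps": 11200},
    {"sleep_hours": 6.0, "steps": 7400},
    {"sleep_hours": 8.0, "steps": 12100},
    {"sleep_hours": 5.5, "steps": 6800},
    {"sleep_hours": 7.0, "steps": 9900},
]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

sleep = [d["sleep_hours"] for d in log]
steps = [d["steps"] for d in log]
print(f"sleep/steps correlation: {pearson(sleep, steps):.2f}")
```

A coefficient near +1 suggests two metrics rise and fall together; near zero, no linear relationship. With months of logged data, the same few lines can compare any pair of metrics.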
Fitbit’s app and web interface are great tools for analyzing data from my Flex, but numbers alone aren’t very useful. I use Evernote to track additional metrics—like vitamin and supplement consumption—and for the qualitative side of my QS practice. My daily to-do list, the best photos I take throughout the day, and my daily journal all live in Evernote.
I use a dedicated notebook for this, called “Daily Fill.” Within this notebook, I create an individual note for every day of the year, one month at a time. I then use a template note as a starting point and copy and paste its contents into each day’s note. Throughout the day I launch the Evernote app from my phone or web browser to fill in the blanks and complete the checklist in the Daily Fill note for that day. As the months have progressed, my Daily Fill notebook has become a living record of my life, containing both quantitative data and qualitative content.
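The month-of-notes setup can be sketched programmatically. The script below is illustrative only: the note titles and template fields are invented, and nothing here talks to Evernote itself.

```python
# Sketch: one "Daily Fill" note per day of a month, copied from a template.
# (Hypothetical title format and fields; Evernote itself is not involved.)
import calendar
from datetime import date

TEMPLATE = """\
Steps goal met: [ ]
Sleep hours: ___
Water (glasses): ___
Journal:
"""

def daily_fill_notes(year, month):
    """Return a {title: body} mapping with one note per day of the month."""
    days_in_month = calendar.monthrange(year, month)[1]
    return {
        date(year, month, day).strftime("Daily Fill %Y-%m-%d"): TEMPLATE
        for day in range(1, days_in_month + 1)
    }

notes = daily_fill_notes(2013, 7)
print(len(notes))  # 31 notes, one per day of July
```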
Starting a QS practice of your own is one way of taking a proactive role in monitoring and improving your physical and mental health. Soon there will be even more useful tools and services for unlocking the secrets of our genetic heritage and discovering previously unknown insights. In just a few years it will become affordable for anyone to have their individual genome sequenced, leading to highly targeted and personalized healthcare strategies. Already, the cost of sequencing a single human’s genome has decreased from billions of dollars to less than $10,000.
Until this practice becomes commonplace, there are useful tools already available. 23andMe is a service that provides insight into one’s personal health at the genetic level. By analyzing a saliva sample, 23andMe delivers reports on over 240 inherited or acquired health conditions, and reveals ancestral connections with other 23andMe users. Stories abound of families that have finally sorted out mysterious allergies, and of adopted children discovering links to biological relatives from across the globe.
One awe-inspiring truth and hallmark feature of the New Era is that we live in a post-memento mori world. No longer need we remember that we will die in order to live full, productive lives. With imminent advances in the fields of genetics, nanotechnology, and biotech, the radical extension of life is no longer science fiction. Never before has it been reasonable to believe that people alive today may live long enough to live forever. But now, with the growing promise and infinite potential of cloud technologies, the sun, the moon, and the stars are closer than ever before.
A sufficiently robust QS practice is extremely useful for ensuring that our physical bodies remain as functional as possible as they approach, then move beyond, the decades-long lifespan that was our species’ former biological destiny. I’ve learned that by combining the qualitative aspects of a daily journal and photos with the metrics and data of a QS practice, I can achieve a truly sublime sense of self-awareness. Some days I’m much more active than others, but regardless of how many steps I pack into any 24-hour period, there’s still nothing quite like a leisurely walk through the neighborhood at dusk.
Tentatively scheduled for publication on the eve of my thirtieth birthday (2020), my second memoir is set to chronicle the ways in which I stayed up all night to get lucky in the days leading up to the Singularity.
First published 16 November 2010, Epistemic Value in the Tweet Economy quite accurately predicted the ways in which activists and young revolutionaries the world over would utilize services such as Twitter to stand against social injustice and organize to oppose political corruption. Given recent events in the United States surrounding government surveillance of US citizens, I now find it appropriate to repost this piece with modest and timely revisions.
Since Twitter’s launch in July 2006, this simple service has emerged as a new vehicle for communication among hundreds of millions of users.
Because each tweet is constrained to a finite count of 140 characters, Twitter users must thoughtfully consider the content of every tweet. The limit forces users to shorten links using services such as bit.ly, include relevant hashtags, tag fellow Twitter users with @replies, and weigh a number of other aspects of the culture(s) and etiquette that have sprung forth from the emergent virtual structure that is Twitter.
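The budgeting this forces on a writer can itself be sketched as code. The helper below is hypothetical (as is the bit.ly-style placeholder link); it simply reserves room for the link and hashtags, then truncates the message to fit the 140-character limit:

```python
# Sketch: fitting a message, a pre-shortened link, and hashtags into 140 chars.
# (Hypothetical helper; the bit.ly link below is a placeholder, not a real URL.)
LIMIT = 140

def compose_tweet(text, link="", hashtags=()):
    """Append link and hashtags, truncating the text with an ellipsis if needed."""
    suffix_parts = ([link] if link else []) + ["#" + t for t in hashtags]
    suffix = " " + " ".join(suffix_parts) if suffix_parts else ""
    budget = LIMIT - len(suffix)
    if len(text) > budget:
        text = text[: budget - 1].rstrip() + "…"  # the ellipsis counts as one character
    return text + suffix

tweet = compose_tweet(
    "Protest update: crowds gathering downtown, streets closed near the plaza",
    link="http://bit.ly/xxxxxxx",
    hashtags=["SB1070"],
)
print(len(tweet) <= LIMIT)  # True
```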
The efficient Twitter user utilizes the 140-character constraint to convey complex cognitive concepts. Photos and images can be shared via services such as TwitPic and Twitter’s own image service. Screenshots are also commonly shared via these services as a way to convey information about ideas being produced, and events occurring exclusively in emergent virtual spaces.
Links to recorded video can be shared via services such as TwitVid, and live video streaming can be integrated into tweets by using services such as Ustream. Other tools used by the efficient Twitter user include location integration via services such as Foursquare and Twitter’s own application programming interface (API), which allows for Google Maps integration.
And as mentioned above, hashtags, @replies, and link-shortening are all important and continually evolving means of facilitating information transference that the efficient Twitter user employs in order to increase the sum of epistemic value encapsulated within a given tweet. Indeed, the efficient Twitter user has many tools at her disposal allowing her to convey complex cognitive concepts containing vast amounts of valuable information.
Many have criticized services such as Twitter, trivializing the increasing role these emergent virtual spaces play in our everyday lives. These critics have attempted to shield themselves from the truth that each day our physical lives are further shaped by these structures.
Criticism ranges from the common “Why should I care that ‘OMG that was the most #awesome sandwich!!1’?”, to claims that carry deeper implications, such as “the revolution will not be tweeted.” Malcolm Gladwell may have expressed his lack of faith in the service, but Twitter and its loyal users should not feel slighted. Chris Dixon thinks there is value to be found in the tweet economy, as well as in other social services that cater to similar sociological needs.
Government officials have taken note as well.
In June 2009 the US State Department requested that Twitter delay a scheduled update which would necessarily entail a period of downtime for the service. This downtime would have coincided with the violent civil unrest surrounding the 2009 Iranian elections, during which Twitter was serving a vital function: namely, providing a vehicle for the transfer of information at a time essential to the protection of the personal and political freedoms of millions of individuals.
It is clear that despite a number of critical views, Twitter has served as an essential space for the transference of information among individuals who may be geographically, politically, socio-economically, ethnically, and religiously disparate.
It has also been reported that Jack Dorsey, co-founder of Twitter, has continued to play an active role in developing the ways Twitter can be used as part of what US Secretary of State Hillary Clinton (in a speech harkening back to Franklin D. Roosevelt’s Four Freedoms) has termed “21st Century Statecraft.”
The epistemic value of the tweet economy is often utilized by everyday citizens who wish to exchange information relevant to presently-occurring local events.
When the musical artist Jack’s Mannequin performed in Tempe, Arizona, one way the Phoenix New Times reported on the event was by curating tweets from concert attendees. By making sense of hashtags and location-integration, the New Times was able to harvest the epistemic value of tweets in order to contribute to the New Times’ journalistic enterprise.
On 29 July 2010, a number of citizens gathered in Phoenix, Arizona at a Maricopa County Sheriff’s detention facility to protest the recently-enacted and controversial immigration law, known as SB1070. Protesters—many of whom may be considered efficient Twitter users—utilized the value of the tweet economy to inform one another and the world of events unfolding before them.
This information included live video recordings of peaceful demonstrations, geographic data pertaining to traffic and parking, and photographs documenting the actions of law-enforcement officials. Hashtags, retweets, and a variety of other services were each used uniquely in order to add relevant epistemic value to tweets, which in turn was transferred to individuals accessing those tweets.
Thus, it is not only evident that adding to the epistemic value of tweets can add value to our physical lives, but that adding to the epistemic value of tweets in fact does add value to our physical lives.
One barrier to harnessing this wealth of epistemic value is finding an effective manner with which to make sense of this knowledge. Fortunately, there are many tools available to the efficient Twitter user that aid in the task of tweet-curation. In September 2010, Twitter launched New Twitter, which provided basic and useful ways of visualizing the epistemic value of the tweet economy. Twitter also offers apps for a multitude of devices that aid in tweet-curation. These apps are available for every major mobile platform, as well as the desktop environment.
Additionally, vehicles for tweet-curation have been created by innovators outside of Twitter itself. One such third-party application available to the efficient Twitter user is TweetDeck (acquired by Twitter in 2011). Tools like these are useful for organizing the epistemic value of the tweet economy, and many other methods of tweet-curation are emerging as well.
Clearly, in order to effectively harness the epistemic value of the tweet economy, one requires access to a sufficiently robust system of tweet-curation tools.
Looking forward, how will the tweet economy continue to shape our lives? In what ways will the physical domain become increasingly shaped and molded by Twitter, and similar platforms?
Given the wealth of epistemic value manifest in the tweet economy, Twitter may be viewed in a number of ways. To me, the most useful way to perceive Twitter is as a manifestation of collective consciousness. Keeping in mind the 140-character constraint inherent to tweets, combined with the ability to convey complex cognitive concepts, each tweet may be viewed as an individual thought, and a constituent part of the collective consciousness.
Thoughts can range in complexity from concrete, simple, and metaphysically informative, e.g. “My name is John Doe,” to abstract, elaborate, and epistemically valuable, e.g. “I believe X because of Y, and you should too because of Z.” In a similar manner, the epistemic value of an individual tweet can span a vast range of complexity.
Let us take a moment here to conduct a thought experiment. Consider the number of individual tweets published at a given point in time. Now, take a mental snapshot of Twitter as a whole, freezing it at that moment. If tweets are virtual analogues of thoughts, and if Twitter is a virtual manifestation of collective consciousness, then by taking a snapshot of each tweet created in that instant, one may survey the entire constituency of the collective consciousness.
If Twitter is to be viewed as a manifestation of collective consciousness, then proper curation and understanding of the epistemic value within the tweet economy are vital to the real-time thought-trajectory of our physical lives. Although I will not provide an argument here for this claim, I believe that access to these sources of epistemic value must be viewed as an inalienable right to members of the global community.
I share the sentiment expressed by Jack Dorsey in September 2010:
My hope for Twitter? People use it as a peacemaker. Immediate & shared understanding inspires empathy which reduces conflict.
I recently posted a sentence to my personal tumblr which said nothing more than “I do not have time for that Christian nonsense.” This post caused quite the unexpected fervor, and although I do not believe I need to explain myself, I do believe there to be value in expounding a bit, and so I have prepared some remarks.
There seems to be a general understanding among self-appointed experts in social media that the topic of religion should be off limits. That the issue is too volatile and that viewers or listeners are too easily alienated. Personally, I don’t think this does justice to the audience’s ability to appreciate an honest and frank discussion. Nonetheless, the cardinal rule for many appears to be that above all one’s job is to never offend anyone. Indeed, there is a time and place for all things, but there are some topics that I simply cannot choose to not talk about. Religions, primarily being modalities of understanding based on faulty metaphysical principles, fall within this category.
It is my work, and my goal as a philosopher, to seek after that which is true and eschew that which I find has no value. I see no value in the use of religious rhetoric as a tool for expanding humanity’s understanding of the world. I believe we would be much better off holding to other realms of discourse. Yes, a proper dose of epistemic humility is required, but I’ve always been more of a Philip Pullman than a C.S. Lewis.
Now, we have a word for things that do not make sense. That word is nonsense. I do not believe religious ideology makes any sense. It’s nonsense. And I believe it’s quite unfortunate that more of us do not find within ourselves the courage to publicly relinquish our romantic notions of faith before we have invested too much—particularly monetarily.
I recognize that many find solace, comfort, a sense of community, and a connection to history and cultural tradition through religious expression. I wasn’t raised in a secular home myself, and I do not advocate for the public shaming or trolling of members of any religion. I acknowledge there are a multitude of reasons for individuals believing in the principles instilled in them at birth. However, there is no reason sufficient enough to justify the continued endorsement of a bankrupt ideology.
I recently viewed a YouTube clip of an interview with the late Christopher Hitchens in which he noted the inherent sadomasochism in Christianity—that one should simultaneously love and fear one’s God. It’s a good point, and what I think it most provides insight into is the cognitive dissonance that religious nonsense instills in its believers. That is to say, the mental mismatch between metaphysical reality and romantic mysticism. The fact that religion is rife with these contradictions, these incongruities, exposes the falsity of the principles within.
We mock belief systems such as astrology and pagan rituals, which are at least pseudo-scientific in their attempts at explaining the physical world, and yet we find it entirely impossible to wake up and move beyond the superstitious magical thinking that is monotheistic religion.
I oppose any ideology that would gain entrance into one’s heart with empty promises of eternal life and anecdotes of love while minimizing the requirement of unquestioning discipleship to a corrupt, patriarchal authority. This is offensive, and should not be sustained.
Some claim religion is a force for good around the world. No. It is willfully naive to believe only in the potential good a religious institution may provide while ignoring the lasting harm that is actually done to people across the world because of far-reaching and entrenched religious dogma. Even in past cases when doctrine has been modified to appear inclusive of individuals or communities previously abandoned, religionists are never held accountable for the lasting effects of the fear and the hatred they once propagated.
Modern religion has become much too concerned with which kind of person has worth. There are sects that preach acceptance of all, but in practice we see this is not the case. I invite you to emerge from your self-imposed stupor of thought and be honest with yourself. For if you are truly honest with yourself, you will see with stark clarity that when our neighbors live, look, or love differently, your religious leaders jump at the chance to deny those people not only supernatural “blessings,” but also the rights and protections that our secular institutions are obliged to provide them. This is dangerous, unjust, and should not be sustained. I challenge you to surprise me. Next time, when given the chance, oppose this.
There are many praiseworthy features and benefits sought after in religion, but I believe those sources of inspiration can be found in abundance elsewhere. It is my hope that by working together humanity will prevail; that we can forge a new foundation upon which to build up an empire of science, knowledge, and education and leave behind antiquated methods of attempting to explain reality.
For the sake of us all, I dare you to wake up and move beyond that nonsense. We do not have time for it.
For those unfamiliar with the concept of transhumanism, you can read more about it here from the Fountainhead of Knowledge, Wikipedia. Here’s a taste:
Transhumanism: (abbreviated as H+ or h+) is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.
The entire Iron Man franchise is packed with transhumanist themes, but number 3 lays it on the heaviest. (Unless it was more prevalent in #2, which I literally can’t make it through without falling asleep.) By the time we get to this movie in the trilogy it’s obvious there is little to no distinction between Tony Stark and the Iron Man suit.
The miniature arc reactor has been embedded in Stark’s chest since the first movie, but by Iron Man 3, the existential fusion of man + machine is complete. The suit isn’t a part of him; it is him, just as much as his conscious mind and biological body are. Yes, the suit has the ability to carry out Tony’s orders independently, as relayed by Jarvis, its built-in AI, but when acting autonomously the suit is portrayed as more of a silent-assassin character, separate from the Stark/Suit combo.
Here are a few points that struck me with the release of Iron Man 3:
1) Check out this movie poster. Iron Man is shown in a disheveled way, with a damaged suit and no helmet. It is significant to portray Iron Man in this manner, as it shows no clear point at which the suit ends and Tony begins. When there is no suit at all in the picture/scene, we see the man, Tony Stark. When the entire suit (helmet and all) is on screen, we see the hero, Iron Man. To portray Tony Stark as Iron Man in full armor but sans helmet—a mechanized knight in shining armor but with human face revealed—is to portray the epitome of the Transhuman Übermensch. This person is neither man, nor machine, but rather a transcendent being. He exists in a state beyond human, through technology. Both aspects (human + tech) are clearly present, yet without clear distinction.
Pepper Potts is depicted holding on to Tony/Iron Man, grasping the suit’s chest plate as if it were Stark’s own chest. Sections of the suit have been destroyed in battle, and its inner workings are revealed—mechanical bone, joints, and sinew perfectly resembling their biological counterparts. Portions of Tony’s bioskin are revealed to have been injured as well, echoing the damage done to Stark’s second skin, the suit.
Viewing the scene in three dimensions, we see a continuum of biological/technological beings. Iron Man is positioned with the (apparently) fully-human Pepper Potts in front of him while Stark’s fully-robotic fleet of Iron Man shells hovers behind him. Visually, this serves to further solidify Stark’s existential status as neither fully human nor fully machine. Here he is in a class of his own: Transhuman.
2) In the movie itself, there is a scene where Tony loses power mid-flight and crashes in the forest. There is one shot in particular that captures my point. As the camera films from above, the suit trails behind Tony as he trudges through the snow in search of safety and shelter. Here the suit isn’t just a piece of wrecked machinery, it is a representation of Tony’s inner self: frozen and deflated, but not yet defeated; in need of power, but not without potential. The Iron Man suit is a shadowy doppelgänger of a hitherto omnipotent Tony Stark. Again, the suit doesn’t represent a part of him, or an extension of him. The suit is him.
Iron Man raises interesting questions about the relationship between humans and technology, and provides a useful lens through which to view a possible future. If we are to responsibly shape tomorrow’s tech, we would do well to more fully examine today’s fiction. The Iron Man mythos is a great one, but do we want to see it made real?
Today, many hail “wearables” as the New Era’s next great tech trend. Google Glass and Fitbit Flex are current examples of this class of device and have intrigued tech journalists and consumers alike. I find the term “wearables” to be limiting, and instead prefer “embedded,” in order to denote a class of technology that will eventually range from smartwatches to consumer-grade nanotech.
I predict embedded tech will come to resemble some of what is portrayed in media such as Iron Man, but rather than becoming weaponized, the focus will be on two areas: the preservation of qualitative lived experiences and the quantification of routine activities.
It will be through embedded tech that we come to more closely know ourselves and what it is to live fully-actualized lives. For this is the promise of Transhumanism: to empower humans with capabilities far beyond the limits of our paleolithic bodies and unaugmented minds. Through technology we will become so radically connected with our inner humanity that we will transcend it.