Written by Dustin Breitling
The video-game industry has exploded globally, accounting for an estimated three billion players and a market now measured in the hundreds of billions of dollars. With a market that has come to engulf the globe, visions and renderings of the past and the future are ceaselessly imported to and circulated across our screens.
Video games are effectively time capsules, whether we are rewriting and replaying the annals of great historical events or being transported to alien landscapes. It is here that we can grasp how their deep relationship to temporality lubricates the machinery of our social imaginary, understood as “the background sense-making operations that make the idea of society and its practical reality possible” (Kirkpatrick, 2013). What is ‘possible’ also couples with how the future and the present are intertwined: games are vehicles that plot, and fuse, ‘possible’ with ‘plausible’ scenarios of our future. Here the ‘plausible’ is informed by climate reports, models, forecasts, films, documents, and literature that bleed into the settings of gameworlds.
We can draw upon Richard Grusin’s concept of ‘Premediation’, which is informed by a dimension of plausibility whereby conceptions of the future become akin to the “logic of designing a video game as many of the possible worlds, or possible paths, as the future could be imagined to take” (Grusin, 2004). Images of a planet facing ‘poly-crises’ such as nuclear war, climate change, economic crises, and pandemics traverse the circuits of our contemporary media environment and paint a world of ‘projective closure’, of foreclosed imaginable future possibilities. We can observe gameworlds conjuring landscapes ravaged by nuclear devastation or atompunk (Fallout series), addressing the fallout of climate change (Terra Nil, Fate of the World), colonies salvaging the world in the wake of catastrophe (Surviving the Aftermath, Endzone), or rebuilding infrastructure and rewilding planets amid skeletal and hellish landscapes (Death Stranding, Rewilding).
Premediation builds upon an element of repetition that is materially operated through an ‘algo-rhythm’ which, in effect, choreographs a haptic sense of anticipation and response: a repetition of expectation that follows the plots of a large array of available outcomes or possibilities. Thus, with the procedural operation of algorithms as the basis of our gameworlds, we become locked into the branching structure of a decision tree that curates player outcomes and invites specific responses, ultimately resulting in the player adapting or resigning themselves to their avatar’s predetermined futures (Op de Beke, 2021).
Nonetheless, the cancellation of the future, or more precisely the reality of the diminishing time of our collective existence, as portrayed in gameworlds, should highlight their paradoxical role as prostheses or blueprints that unlock modalities of mental time travel, combining the ingredients of imagination, counterfactualising, and worldbuilding. Tinkering with and modifying states of worlds within the virtual laboratories of these gameworlds effectively nurtures explorations of what might have been, what could be, or what almost was, where ‘what-ifs’ unveil novel cognitive and causal relations. This ties in with what Kari Kraus has discussed as “fault lines,” moments catalogued in the contemporary or historical record where game designers and storytellers can incorporate counterfactual evidence, storylines, and occurrences (Kraus, 2018).
We can now shift our focus to one of the earliest iterations of game-like planetary simulation, SimEarth, which arguably influenced later Earth simulators. SimEarth was informed by the prominent scientist James Lovelock’s Gaia theory and his simulation Daisyworld, designed to explore the mechanisms of planetary self-regulation. SimEarth has players modulating the dials on carbon emissions or the mutation rate of species, which can lead to the emergence of non-human species from the hybrid blending of organic and inorganic material. Here, the player is in charge of simulating an evolutionary economy from the geological to the Anthropocene-technological eras. This links up with conceiving of gameworlds as research objects that enable the composition of ontologies within virtual terrariums, or what Alenda Y. Chang posits as ‘mesocosms’. Mesocosms are effectively experimental tools that examine a part of the natural environment under controlled conditions. In this approach, mesocosms provide a bridge between observational field research conducted in natural habitats and controlled laboratory environments under conditions that may be rather unnatural.
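To make that self-regulatory mechanism concrete, the following is a minimal sketch of Lovelock and Watson’s Daisyworld feedback loop, the kind of miniature, parameterised experiment a mesocosm invites. The coefficients and growth curve are illustrative assumptions rather than the published model’s exact values.

```python
import numpy as np

# Minimal Daisyworld sketch (after Watson & Lovelock's model).
# All parameter values here are illustrative assumptions.
SIGMA = 5.67e-8                               # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 917.0                                     # baseline solar flux (W m^-2)
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25    # albedos of bare ground and daisies
DEATH = 0.3                                   # daisy death rate
Q = 20.0                                      # local-to-planetary heat transfer factor

def growth(local_temp_k):
    """Parabolic growth response, optimal near 22.5 C, zero outside roughly 5-40 C."""
    t = local_temp_k - 273.15
    return max(0.0, 1.0 - 0.003265 * (22.5 - t) ** 2)

def step(white, black, luminosity, dt=0.05):
    bare = max(0.0, 1.0 - white - black)
    albedo = bare * A_BARE + white * A_WHITE + black * A_BLACK
    # Planetary temperature from the radiative balance.
    temp_planet = (luminosity * S * (1.0 - albedo) / SIGMA) ** 0.25
    # Local temperatures deviate according to each daisy type's albedo.
    t_white = Q * (albedo - A_WHITE) + temp_planet
    t_black = Q * (albedo - A_BLACK) + temp_planet
    white += dt * white * (bare * growth(t_white) - DEATH)
    black += dt * black * (bare * growth(t_black) - DEATH)
    return max(white, 0.001), max(black, 0.001), temp_planet

white, black = 0.01, 0.01
for lum in np.linspace(0.6, 1.6, 20):         # a slowly brightening sun
    for _ in range(200):                      # let populations settle at this luminosity
        white, black, temp = step(white, black, lum)
    print(f"L={lum:.2f}  white={white:.2f}  black={black:.2f}  T={temp - 273.15:.1f} C")
```

Even in this toy form, the coupling of daisy cover and planetary albedo keeps the temperature within a habitable band across a wide range of luminosities, the self-regulation that SimEarth lets players perturb.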
Thus, the function of ‘controlled environments’ is to drive and direct feedback and behavior, and to investigate the connections between key properties and the relationships among entities. Researchers attempt to replicate these conditions in laboratories, terrariums, greenhouses, and ponds, manipulating temperature and humidity and adding or subtracting variables to observe their effects. Yet Chang also elucidates a more slippery definition of mesocosms, focusing on the edge effects, or more porous boundaries, that demarcate apparent zones, as well as the ripple effects that unfold between distinct ecosystems, such as forests and surrounding human habitations.
For Chang, we can import the concept of edge effects into the nature of gameworlds and the interpenetration of planetary infrastructure, with digital technology and its realization dependent on a mishmash of minerals and wasteful resource extraction serving as the substrate for software and hardware, both physical-material and logical-digital. As Chang asks, ‘How virtual is the virtual when the ubiquity of digital technology is premised on globe-spanning resource extraction and waste?’ (Chang, 2020). These questions, which form the basis of our gameworlds, can be interpreted as ‘virtual thought experiments’ in which players address prototypes of worlds plagued by wicked problems such as coordinating resource extraction, food insecurity, biodiversity loss, and their interconnected cascading effects.
We can draw our attention to the interfaces that wrap, border, or mediate the complex phenomenon of gameworlds, particularly in grasping the entities that populate gameworlds in relation to their spatial and temporal scales, serving as ‘cognitive assemblages’ (Hayles, 2017). Player behavior and actions are gradually modified by the knowledge or data gathered from sensors registering resource inventories. This is possible because players are equipped with overlays, interfaces, minimaps, and toolbar resource inventories that function similarly to the senses and processors of machinic eyes.
Machinic eyes are exemplified in the game Eco, which equips players with multispectral overlay maps that supply updates across the globe concerning resource availability for growing crops, population, and environmental data featuring the emission of chemical pollutants. This parallels our remote sensing constellations and data acquisition, e.g. Landsat satellites and Synthetic Aperture Radar. Satellites with hyperspectral sensors take images of the planet in hundreds of spectral bands, conducting a variety of measurements and identifying trends in urbanization, biomass change, drought, and climate change. Similar to our gameworlds, we can comprehend remote sensing as part of a collection of ‘environing media’ that demonstrates how media technologies are critical for generating knowledge about a global environment and also influence how we alter and intervene in an environment.
Remote sensing is among a number of enterprises, including seafaring, navigation, mapping, natural history, hot air ballooning, and railroad construction, that reflect an evolving transformation in how we mediate the world. These mediations morph from horizontal to vertical scales, with data gathered at previously unexplored oceanic depths and aerial altitudes, exemplifying how media and environment interact to change the face of the planet (Wickberg & Gärdebo, 2020).
Our increasing recognition of ‘environing media’ coincides with a Planetary Turn that understands the human as an agent transformed into a geological force. It implants the human within a meshwork of geochemical and biological historical processes that reveal an inhuman vista of deep time scales. Earth, in turn, is a trans-scalar milieu from within which to situate thought and activity: from its magnetic core out to its near-space environment, it reveals a planet that is a tapestry of multiple strata, inorganic properties, and inhuman timescales (Reed).
Earth Observation Systems, which consist of constellations of polar-orbiting and low-inclination satellites for long-term worldwide monitoring of the land surface, biosphere, solid Earth, atmosphere, and seas, had mushroomed to some 8,261 satellites at the time of writing. Earth Observation Systems are operative within a cognitive assemblage that is “precisely structured by the sensors, perceptors, actuators, and cognitive processes of the interactors” (Hayles, 2017). Cognitive assemblages are intimately related to our cognitive modalities of environmental perception by being “intrinsically co-created, since agents modify their environment and respond to changes made by others.” Environments are thus scenes of sensuous activity, transformed into “a kind of distributed memory; modifications left by others provide cybernetic feedback, driving both emergence of novel system-level behavior from local interactions of agents” (Tamari et al., 2022). Environments choreograph ways to inscribe and rebuild our environs via sematectonic stigmergy, which directly alters the environment, as in the deployment of drones to reseed and repair landscapes, the formation of urban settlements, and the construction of data centers. Environments become annotated through sensor data-feeds, updates, and sensory readings, which in turn signal morphing and evolving cognitive assemblages to respond (Hayles, 2017).
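A minimal sketch can make this stigmergic logic tangible: agents deposit traces in a shared grid and bias their movement toward cells already modified by others, so the environment itself behaves as distributed memory. The grid size, deposit amount, and movement bias below are illustrative assumptions, not a model drawn from Hayles or Tamari et al.

```python
import random

# Sematectonic stigmergy in miniature: agents modify a shared environment and
# respond to modifications left by others; structure emerges from the traces.
SIZE = 20
grid = [[0.0] * SIZE for _ in range(SIZE)]          # shared, modifiable environment
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(30)]

def neighbours(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(200):
    moved = []
    for x, y in agents:
        grid[x][y] += 1.0                            # modify the environment (deposit a trace)
        options = neighbours(x, y)
        # Respond to modifications left by others: weight moves by trace density.
        weights = [1.0 + grid[nx][ny] for nx, ny in options]
        moved.append(random.choices(options, weights=weights)[0])
    agents = moved

# Traces accumulate into clusters: system-level structure from local interactions.
dense = sum(cell > 5 for row in grid for cell in row)
print(f"cells with strong traces after 200 steps: {dense}")
```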
Further, the modular and ever-evolving knots of sensory devices within the topology of networks are now paired with Machine Learning and Deep Learning models that analyze the data generated by billions of heterogeneous devices. This ‘data-oil’ is processed for real-time forecasting analytics or stored in databases for later prediction. This is especially relevant to location-based services that depend on apps to create and update actionable hypotheses based on Wi-Fi readings, geo-targeted advertising, weather forecasting services using atmospheric data, urban planning and traffic management, and oceanographic applications.
Here, the Earth becomes programmable, where the ‘becoming environment of computation’ encompasses “computationally enabled sensors that are distinct and yet shifting media formations that traverse hardware and software, silicon and glass, minerals and plastic, server farms and landfills, as well as the environments and entities that would be sensed” (Gabrys, 2016).
As Gabrys contends, different climatic conditions, research sites, and interfacial response devices are bringing forth a variety of ‘unfolding Earths’, where the mesh of local and global sites ranges from particular datasets dispatched from forest sensors monitoring temperature, humidity, wind speed, and wind direction to interlocking planetary dataset networks designed to monitor Earth systems. One such interlocking global network is Argo, an international program that collects information from inside the ocean: profiling floats, programmed to sink to a particular depth and remain there for a specific period of time, measure the salinity and temperature of our oceans (NASA).
In addition, citizen-led initiatives promote smart forest management: participatory mapping among users establishes forest boundaries, while forest protection integrates UAVs, sensors, computer models, data analytics, and artificial intelligence to accelerate reforestation. These initiatives interact with participatory technologies to create environments that are actionable, current, sensing, and accessible (Gabrys & Pritchard, 2022). This form of actionability aligns with the transformation of the planet into an addressable grid of coordinates and unique identifiers that contain spatial features and their relations, postal codes, and so on. Such identificatory mapping, whether of individual persons or of sensors, has served as an instrument for governance techniques structured around discipline, control, and surveillance (Dhaliwal, 2022).
Notably, the key thread of ‘addressability’, or converting the world into a global index of latitudinal and longitudinal coordinates, lays out a gridded architecture that is amenable to surgical points of intervention and monitoring.
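A toy sketch of this gridding, assuming an arbitrary 0.01-degree cell size, shows how any latitude and longitude collapses into a unique, queryable address.

```python
# Addressability in miniature: convert latitude/longitude into a discrete
# grid-cell identifier, so every point on the planet becomes an address.
def cell_id(lat: float, lon: float, resolution: float = 0.01) -> str:
    row = int((lat + 90.0) / resolution)
    col = int((lon + 180.0) / resolution)
    return f"{row}-{col}"

print(cell_id(50.0755, 14.4378))    # Prague
print(cell_id(50.0760, 14.4380))    # a nearby point -> same or adjacent cell
print(cell_id(-33.8688, 151.2093))  # Sydney -> a different address entirely
```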
The Control Revolution
The conversion of the planet into an addressable entity has been coextensive with ‘The Control Revolution’, with its desire to image and divide the world and its multi-scale phenomena into numerically processed representations. The Control Revolution arises in the wake of the increasing evolution, differentiation, and interdependence of complex industrial and social systems entwined with technologies, and is predicated on the ability to derive laws and statistical probabilities and to exert control through closed-loop feedback devices (Beniger, 1994). Exemplary devices range from the steam governor and the preprogrammed, open-loop punch cards of the Jacquard loom to the twentieth-century air defense systems engineered to detect oncoming nuclear attacks and counter the expansion of Soviet influence.
The Control Revolution was further embodied in cybernetics, “the entire field of control and communication theory, whether in the machine or in the animal” (Wiener, 1948). Its focus on stewarding systems toward predetermined goals attempted to format understandings of matter, energy, and information as inputs, throughputs, and outputs for measurement, observation, and, most importantly, self-correction, in order to track and generate “predictive states” of dynamic and complex systems, e.g. weather systems, supply chain management, the evolution of animal populations, and the flow of fluids.
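A minimal sketch of such a closed feedback loop, here an imagined thermostat with illustrative gains, shows the circuit of measurement, comparison with a predetermined goal, and self-correction.

```python
# Closed-loop feedback in the spirit of the steam governor or a thermostat:
# the output is measured, compared against a setpoint, and the error steers
# the next input. Gains and dynamics are illustrative assumptions.
setpoint = 20.0          # desired room temperature (C)
temperature = 5.0        # measured state
GAIN = 0.5               # proportional gain
LOSS = 0.1               # heat leaking to the 5 C outside each step

for step in range(30):
    error = setpoint - temperature          # observation compared with the goal
    heater = max(0.0, GAIN * error)         # self-correction: input follows the error
    temperature += heater - LOSS * (temperature - 5.0)
    print(f"step {step:2d}  temp={temperature:5.2f}  heater={heater:4.2f}")
```

The loop steers the measured state toward the goal without any external operator, the basic gesture that cybernetics generalised from machines to animals and societies.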
Notably, it is this focus on dynamic complex systems, interrelated with the further materialisation of computational devices, that laid down the foundations of numerical weather prediction, which came to shape and exemplify a changing techno-scientific landscape of human-machine prediction. The Control Revolution provided the impetus for predicting the future with symbol-manipulating machines and computers that would assist or fully automate the observation of, control of, and reasoning about complex systems, such as those of meteorology and ballistics (Edwards, 1997). These goals were first pursued in anti-aircraft fire control during World War II, and they sparked the creation of both cybernetics and artificial intelligence.
Here, the clinical atmosphere of laboratories also incubated some of the earliest iterations of ‘video games’, built from oscilloscopes and the estimated trajectories of ballistic missiles. Tennis for Two, developed by William Higinbotham, simulated tennis or ping pong, with each player manipulating the ball’s trajectory over a rudimentary tennis court.
It was also the Cold War and post-World War II milieu that catalyzed prototypes of user display interfaces to modulate and monitor complex processes. We can recognize the relevance of personalities who combined a focus on early computing with the nature of complex simulations. One key figure, Jay W. Forrester, was a major architect behind the arrangement of magnetic cores into grids for memory retrieval, which would remain a dominant standard in computing into the 1970s. He gained a reputation for his development of magnetic core memory and for his oversight of MIT’s Whirlwind, which in 1953 became the first computer to use this technology. Project Whirlwind, which began as a flight simulator project, eventually fed into SAGE, a network of computers and hardware that combined and processed data from various radar sites to create a single, comprehensive picture of the airspace over a wide area in preparation for a high-altitude bomber attack (Monteiro, 2017; Edwards, 1997).
Additionally, it was Forrester and his contemporaries who gave rise to System Dynamics, and whose controversial urban dynamics simulation influenced individuals such as Will Wright and the Sim franchise. Their vision would become integrated into industries to manage and identify the unforeseen, messy complexities of supply chains and manufacturing production, of urban growth and decay, and eventually of planetary overshoot models (World Dynamics) that were the basis for the infamous 1972 Club of Rome Limits to Growth study.
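The flavour of such stock-and-flow modeling can be sketched minimally: two coupled stocks, population and a non-renewable resource, whose feedback loops produce the overshoot-and-decline curve that Limits to Growth made famous. The coefficients below are illustrative assumptions, not World3’s.

```python
# A minimal System Dynamics sketch: stocks, flows, and feedback loops.
population = 1.0      # stock, arbitrary units
resources = 100.0     # stock of non-renewable resources
BIRTH, DEATH, USE = 0.04, 0.02, 0.05

for year in range(0, 300, 10):
    # Feedback: resource scarcity raises the death rate and slows extraction.
    scarcity = 1.0 - resources / 100.0
    births = BIRTH * population
    deaths = DEATH * population * (1.0 + 4.0 * scarcity)
    consumption = USE * population * (resources / 100.0)
    population = max(population + 10 * (births - deaths), 0.0)   # 10-year step
    resources = max(resources - 10 * consumption, 0.0)
    print(f"year {year:3d}  population={population:7.2f}  resources={resources:6.1f}")
```

Run forward, the population grows exponentially while the resource stock drains, then scarcity feedback overtakes growth and the curve bends into decline, overshoot behaviour emerging from nothing but the loops themselves.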
This cross-fertilization of complex-systems modeling coincided with the first non-military Earth-observing satellite, TIROS-1 (1960), designed to transmit television pictures back to Earth showing broad weather patterns, including the first satellite photographs of clouds from space. The global satellite system has since evolved into many societal application areas and has been a key factor in the dramatic improvement of weather forecasts and warnings, opening up remote sensing as an ‘outside viewing in.’
The imprint of cybernetics and the modeling of environments also came to inform and interpenetrate the worlds of ecology and Earth System Science, which coevolved historically in the 1960s. Figures such as Eugene Odum and the previously mentioned James Lovelock drew upon cybernetics and visions of the biosphere that emphasized the self-regulatory properties of ecosystems, defining the cycling of energy and materials across ecosystems in terms of input and output, with information flow and feedback controlling the whole biosphere and the interchange among its parts. Systems could thus be understood as sharing a universal feedback logic (Rispoli, 2023). The proliferation of satellites in Earth’s orbit, fueled by the Cold War’s realpolitik, and the colonisation of Low Earth Orbit now incorporate Artificial Intelligence and Deep Learning as driving forces for image classification, segmentation, and projecting the potential future states of the Earth from massive data volumes.
Deep Learning
Deep Learning involves a multitude of neural network layers made up of thousands of elements with many parameters. Training such a model involves hundreds of repeated adjustments of parameters that develop probabilistic models for classificatory or generative purposes, as we have noticed with the recent A.I. art boom. With respect to Deep Learning and visualising or imaging the planet, typical methods include ‘semantic segmentation’, which assigns each pixel in a given image to a class, recognizing which objects are shown and where exactly they appear in the image. The adoption and ingress of deep neural networks within remote sensing, in particular Convolutional Neural Networks, has played a pronounced role over the past decade, serving as the main method for image classification and recently being matched by generative transformer models. What we understand by ‘imaging’ or ‘visualizing’ is built on a network of relationally extracted features, where pixels are segmented in the effort of neural networks to detect shapes, edges, motion, objects, horizontal lines, vertical lines, and curvature from arrays of pixel values (MacKenzie & Munster, 2019).
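A minimal sketch of semantic segmentation, assuming a toy convolutional network and a random input tile rather than a production remote-sensing model, shows how every pixel receives a class label.

```python
import torch
import torch.nn as nn

# Toy semantic segmentation: each pixel of an input tile (e.g. a satellite
# image) is assigned one of N_CLASSES labels such as water, forest, crops,
# or built-up land. Architecture and input are illustrative assumptions.
N_CLASSES = 4

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # extract edges, textures, shapes
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, N_CLASSES, kernel_size=1)  # per-pixel class scores

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinySegmenter()
tile = torch.rand(1, 3, 64, 64)                   # one RGB 64x64 "satellite" tile
logits = model(tile)                              # shape: (1, N_CLASSES, 64, 64)
label_map = logits.argmax(dim=1)                  # one class label per pixel
print(label_map.shape)                            # torch.Size([1, 64, 64])
```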
Further, what we might perceive as ready renditions of the planet follows a supply chain of formatting, compression, and rendering that involves standardisation formats, i.e. bitmap or Universal Scene Description. Datasets are imported into 3D models along a complex pipeline and can be shuttled into game engines and tools such as Unreal, Unity, or Blender, with notable plugins, i.e. the Cesium runtime, that allow designers to render environments from terrain data, satellite imagery, 3D buildings with interior and exterior BIM (Building Information Modeling) data, vector data, photogrammetric data, and point clouds.
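One step of this pipeline can be sketched minimally: converting a digital elevation model into a greyscale heightmap that a terrain system can ingest. The file names are hypothetical placeholders, and the rasterio and Pillow libraries are assumed to be installed.

```python
import numpy as np
import rasterio                                   # reads GeoTIFF rasters
from PIL import Image

# From remote-sensing data to game-engine asset: read a DEM tile and write a
# normalised 8-bit heightmap that Unreal or Unity can import as terrain.
with rasterio.open("dem_tile.tif") as src:        # hypothetical input file
    elevation = src.read(1).astype(np.float32)    # first band: elevation in metres

# Normalise elevations to the 0-255 range expected by an 8-bit heightmap.
lo, hi = np.nanmin(elevation), np.nanmax(elevation)
heightmap = ((elevation - lo) / max(hi - lo, 1e-6) * 255.0).astype(np.uint8)

Image.fromarray(heightmap).save("heightmap.png")  # hypothetical output file
```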
We can also observe how games have become instrumental in developing Artificial Intelligence models, notably through Go (AlphaGo) and StarCraft II (AlphaStar), techniques that eventually migrated into other sectors for purposes such as optimizing the cooling of Google data centres and accelerating matrix multiplication (AlphaTensor), which is at the heart of deep learning. DeepMind effectively trained its milestone AlphaGo on inputs of pixels and game scores, engineering it to learn by observing myriad 19×19 Go board positions. From the 19×19 image snapshots, the deep learning neural network would discern positions and their spatial correlations, connected with rewards for actions executed on the board. Ultimately, the model trained via the DeepMind platform ‘learns’ by observing many images (Mackenzie & Munster, 2019). Over many iterations, the circuit between observation and action drives the training of the model, which does not rely upon ‘prior knowledge’ because it receives only pixels and game scores as input.
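That observation-action-reward circuit can be sketched in miniature with tabular Q-learning on a toy one-dimensional board; this is not AlphaGo’s actual architecture of deep networks and tree search, only the bare loop in which an agent with no prior knowledge learns from states and scores alone.

```python
import random

# Tabular Q-learning on a 1-D board: the agent only ever sees the current
# state and the score it receives, and self-corrects its value estimates.
N_STATES, GOAL = 10, 9
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}   # empty "knowledge"

for episode in range(500):
    state = 0
    while state != GOAL:
        # Act: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0                  # the only feedback signal
        # Observe the outcome and update the value estimate.
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

print("learned preference at each state:",
      [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(N_STATES)])
```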
GPUs were indispensable in the rise of Machine Learning and AlphaGo, by virtue of their suitability for parallel processing and their vector processing capability, and have since expanded to perform general numerical computation as well as graphics processing. The marked role of infrastructure, servers, and GPUs connects to what Mackenzie and Munster also point out with respect to a form of ‘Platform Seeing.’ The platform economy, i.e. Alphabet, Amazon, Google, Microsoft, Meta, and now the more prominent DeepMind, harnesses infrastructural and capital-heavy investment capacities in Machine Learning.
Among the major players hoisting enormous infrastructural and storage capacities is the industry titan NVIDIA, prominent for its production of GPUs for gaming, cryptocurrency mining, and professional applications, as well as chip systems for use in vehicles and robotics, which aims to realize a colossal digital twin project, ‘Earth-2’. NVIDIA touts it as “the world’s first scalable multi-GPU physically accurate world simulation platform” (NVIDIA), whereby its deep neural networks can simulate weather 100,000 times faster than traditional numerical weather models and visualize the data through its Omniverse platform, a platform for artists, designers, engineers, and developers to connect and build custom 3D pipelines and unlock full-design-fidelity, real-time virtual worlds. NVIDIA recently announced a partnership with NOAA to process numerous types of geophysical data, such as observations of water temperature and solar wind data, in order to forecast future realities, including advancing progress toward sub-meter resolution in Earth system modeling.
Digital twins and the influx of real-time data have also been trafficked into popular simulators such as Microsoft Flight Simulator, whose Meteoblue-powered live weather ports incoming data on air flow, brewing storms, and wind direction, effectively pitting players against storms and hurricanes as they unfold in real time. Thus, the processual dynamics of the planet are stitched together from an ensemble of geo-spatial imagery, i.e. 2D maps, vector data, and 3D photogrammetry datasets that generate photorealistic reconstructions of 3D models of buildings, trees, and terrain. Microsoft teamed up with the startup Blackshark.ai, which built specialised algorithms tasked with analysing 2D photos, identifying and classifying buildings according to their roof design and other characteristics that indicate their outline. The deep learning neural networks of their AI were then able to reconstruct remarkably accurate 3D copies of the classified assets.
Ultimately, multiple input sources, from satellites and aerial imagery to digital elevation models, were used to train an AI model to auto-generate realistic 3D digital assets such as terrain, trees, mountains, and buildings (Blackshark.ai, 2021). Similar ventures gestating at MicroProse Software and the Slovakian studio behind the Outerra World Sandbox have also utilised an arsenal of satellite data from a variety of resources in their bid to replicate our own planet and furnish players with assets to spawn and construct worlds, with anticipated support for VR tools such as Valve’s.
Gameworlds, in their similarity to mesocosms, also nourish a practice that Wendy Hui Kyong Chun explores as ‘hypo-modeling’. Chun analyses the imaginary of climate modeling, which has locked us into becoming habituated to expecting states of the world that are arguably only statistically possible or probable. Hypo-modeling, on the other hand, would try to chisel or dig out paths for unexpected or unanticipated correlations that go beyond what is statistically likely; these relationships may be hard for scientists to see, but they may show up in our social and perceptual realm. The construction of models that are artefacts of ‘synthetic intelligences’ can entrench or wrench open pathways for thought, fostering encounters or ‘habitual disruptions of the habitual’ (Chun, 2015). Such disruptions are key to experiencing what Chun identifies as the ‘inexperienceable’, hatching open novel perceptual points or abstractions, exemplified by remote sensing’s revealing of the invisible through pulses and measured frequencies.
In our efforts to comprehend and orient ourselves within planetary entanglements, the ensemble of datasets, infrastructure, global supply chains, servers, and deep learning models is further utilized as the critical resource that reciprocally entraps and prunes our horizons of the future, as we train models to approximate the probabilities or likely trajectories of probable worlds. These likelihoods or probable trajectories are engraved in the current social and cultural imaginary, particularly incarnated in models linked to biodiversity loss, climate change, and, importantly, geopolitical frameworks that regard the world as inherently risky, hostile, and anarchic in its order, and that assume the need to securitize or prepare in order to effectively preempt the potential emergence of threats. This logic of preemption takes hold when a threat is treated as real even though it might not have manifested initially, and is over time materially recognised or responded to as an actual reality (Massumi, 2015).
This underscores a palpable tension, highlighted by digital uncertainty, between increasingly autonomous and unpredictable technologies and the systemic control and anticipation of expectations by digital apparatuses of capture (Marenko, 2018). Fundamentally, pixels become disciplinary mechanisms that in effect can control and predict the actions of a non-human or human user in the future.
Pixels arguably stitch and assemble imagery within the virtual, yet the virtual here is not understood in the sense of augmenting or extending reality; rather, it can be understood as a reservoir of potential from which the actual may be extracted or selected. The generation and selection of possible constructions of the world follow from the fact that these images ramify through an armamentarium of perceptual or imperceptible apparatuses that execute and format specific ‘vectors’ or expressions of the future. Machine learning, grounded firmly within the complex temporal economy of the future, will generate and encode ‘predicted images’ of what is expected to unfold in frames that are inductive and probabilistic approximations of the future.
Here is where Machine Learning takes part in modeling a habit, a repetition of the expectation of what the future is, prompting us to consider how in turn we can nurture new practices of ‘habit’ through acts of ‘repetition’. ‘Habit’ and ‘repetition’ attend to how gameworlds also sculpt a certain grammar and choreography of psycho-motor stimulus and response that becomes conditioned by a ‘universe of images.’ How our choreography of habitual repetition of anticipated future states or probabilities will be reconfigured or rearranged via the nexus of machine learning, gaming, and generative models reminds us once again that future worlds are nested within the infrastructure and detritus of our already existing world. Further, it cedes territory to embracing uncertainty as a valuable resource, one which violates a linear predictability of closure and doom. Rather, it produces the possibility of indeterminacy in the state of the world, in which the creation of what is conceivable depends on random, contingent, and incompletely understood components.
As Betti Marenko contends, “If contingency and uncertainty are resources to capitalize upon, then future-crafting strategies that embrace uncertainty rather than shun it or flatten it, should be employed to experiment with scenarios of cohabitation, entanglements of the human and the non-human, and to test the creative responses emerging in the space between them” (Marenko, 2018). Gameworlds, in turn, complement future-making, or Marenko’s ‘FutureCrafting’, by generating interventions that can disturb us and, crucially, fictions that unleash and cause frictions. Ultimately, gameworlds are artefacts that can orientate, unveil, and rewire given entanglements, unlocking the sandboxes of worldmaking.
References:
Baker, Bowen. (2022). “Learning to Play Minecraft with Video Pretraining (VPT).” OpenAI, 24 June 2022, openai.com/blog/vpt/.
Beniger, J. R. (1994). The control revolution: Technological and economic origins of the Information Society. Harvard University Press.
Blackshark.ai. “Creating a Digital Twin of Planet Earth.” Blackshark.ai, 1 Apr. 2022, https://blackshark.ai/technology/.
Chang, Alenda Y. (2019). Playing Nature: Ecology in Video Games. University of Minnesota Press.
Chun, Wendy Hui Kyong. (2015). “On Hypo-Real Models or Global Climate Change: A Challenge for the Humanities.” Critical Inquiry, 41(3), 675–703. https://doi.org/10.1086/680090
Dhaliwal, Ranjodh Singh (2022). On Addressability, or What Even Is Computation? Critical Inquiry 49 (1):1-27.
Edwards, Paul N. (1996). The Closed World: Computers and the Politics of Discourse in Cold War America. MIT Press. http://hdl.handle.net/2027/heb.01135. Accessed 21 Nov. 2022.
Gabrys, Jennifer. (2016). Program earth: Environmental sensing technology and the making of a computational planet. University of Minnesota Press.
Gabrys, Jennifer, and Helen Pritchard. (2022). “Environmental Sensing Infrastructures and Just Good Enough Data.” In The Nature of Data, edited by Jenny Goldstein and Eric Nost. Lincoln: University of Nebraska Press.
Grusin, Richard. (2004). “Premediation.” Criticism, 46, 17–39. https://doi.org/10.1353/crt.2004.0030
Hayles, N. Katherine. (2017). Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.
Kraus, Kari. (2018) “Finding Fault Lines: An Approach to Speculative Design.” The Routledge Companion to Media Studies and Digital Humanities, edited by Jentery Sayers. Routledge. pp. 162-73.
MacKenzie, A., & Munster, A. (2019). Platform Seeing: Image Ensembles and Their Invisualities. Theory, Culture & Society, 36(5), 3–22. https://doi.org/10.1177/0263276419847508
Marenko, Betti. (2018). “FutureCrafting: The Nonhumanity of Planetary Computation, or How to Live with Digital Uncertainty.” In Hybrid Ecologies, edited by S. Witzgall, M. Kesting, M. Muhle, & J. Nachtigall. DIAPHANES.
Monteiro, Stephen. (2017). The Fabric of Interface: Mobile Media, Design, and Gender. MIT Press.
NASA. (n.d.). NASA Sea Level Change Portal. NASA. Retrieved November 27, 2022, from https://sealevel.nasa.gov/missions/argo#:~:text=Argo%20is%20a%20global%20array,6562%20feet)%20of%20the%20ocean.
Op de Beke, L. (2021). “Premediating Climate Change in Videogames: Repetition, Mastery, and Failure.” Nordic Journal of Media Studies, 3(1), 184–199. https://doi.org/10.2478/njms-2021-0010
Reed, Patricia. Planetarity and Pragmatics: On Materialist Entanglement. Unpublished.
Rispoli, Giulia. (2023). “Planetary Environing: The Biosphere and the Earth System.” In Environing Media, edited by Adam Wickberg and Johan Gärdebo, 54–74. London: Routledge.
Ronen Tamari, Daniel A Friedman, William Fischer, Lauren Hebert, and Dafna Shahaf. (2022). From Users to (Sense)Makers: On the Pivotal Role of Stigmergic Social Annotation in the Quest for Collective Sensemaking. In Proceedings of the 33rd ACM Conference on Hypertext and Social Media (HT ’22), June 28-July 1, 2022, Barcelona, Spain. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3511095.3536361
Wickberg, A.; Gärdebo, J. (2020) Where Humans and the Planetary Conflate—An Introduction to Environing Media. Humanities. 9, 65. https://doi.org/10.3390/h9030065
Wiener, N. (1961). Cybernetics; or Control and Communication in the Animal and the Machine. 2nd ed. M.I.T. Press.