
Digital Materialities of AI-Enhanced Archival Imagery

Updated: Dec 15, 2022

By Elise de Somer | 14 December 2022


Scholars and consumers often view digitization unfavorably as a subtractive rather than additive process, one that results in a copy lacking the original animus of a physical document. Whitney Trettien, for example, calls digitization a “zombie-like revitalization” of an original document (27). In periodical studies, marveling at the materiality of an original, unadulterated copy of a periodical in person remains practically the gold standard. There is nothing quite like holding a physical time capsule of articles: smelling the enchanting “old book” aroma of mildew, feeling the crispness of the paper, hearing the pages flip, and sensing the dimensionality of a unique, tactile, sensory experience. The question remains, however: should digitization always be viewed as a second-rate, better-than-nothing alternative to viewing a “real” copy? Can and will digitization ever constitute a unique, valid, and additive mode of study? In this article I argue that while three-dimensional, “real” material objects are certainly different from digital files, digitization imparts its own mode of materiality. By looking at AI enhancements of text and images with Topaz Labs AI software, we encounter a digital materiality that is specific to the algorithmic processes of the software, just as the materiality of a “real” physical document is specific to its medium.

The existence of digitization poses philosophical dilemmas both within and beyond archival periodical studies. Suspicion about the equivalency of digital and physical materialities permeates online economies and, subsequently, global markets. As a recent example, Mark Zuckerberg tried this year to pioneer a digital landscape with the launch of the “Metaverse,” a digital playground of storefronts, real estate, commerce, and leisure, all built into a social network. It was supposed to be the biggest thing since Facebook. As it turns out, fewer people than he expected wanted to buy digital real estate in a make-believe world, and Zuckerberg’s project has been deemed a failure (Mello-Klein). Another collapse of the digital world this year was the severe crash in the value of NFT art, which I can only describe to the layman as imaginary digital art that costs money (Milmo). Digital make-believe, it seems, was not recession-proof, or even pre-recession-proof, because digital products do not have the same type of existence as tangible, tactile objects.

Theorist Eric Bulson writes about similar ontological dilemmas associated with digital objects, specifically within the context of periodical studies and archival research. He highlights how slippery it is to conceptualize digital reproductions as equivalent to their real-life, three-dimensional counterparts. In “The Little Magazine, Remediated,” Bulson pithily argues that instead of living in the age of the little magazine, “[w]e are now living in the age of the 'diglittle magazine' even if it might be tempting to pretend that the medium is the same as it ever was” (201). Bulson rightly argues that this difference in media and materiality changes the user experience entirely.

Another argument for the ontological difference between the materiality of physical text and that of digital media appears in Lisa Gitelman's scholarship on the PDF format. She writes:

For instance, Webopedia.com (a self-described dictionary of computer and Internet terms that claims to contain “everything you need to know”) alleges that “essentially, anything that can be done with a sheet of paper can be done with a pdf”. “Anything” in this case must refer to a tiny range of activities: Beyond Paper mentions printing, sharing, reading, filing, copying, and archiving. These are the gerunds that animate the myth of the paperless office. Forget all of the other things that you can do with paper, like folding, smelling, tearing, crumpling, shuffling, and wiping. (127)

Gitelman shows how digital and physical equivalencies can only go so far in their relation to the human senses. While both digital and physical objects exist and bear their own discrete materialities, they exist autonomously, with differing impacts on those senses. Gitelman’s examples of the sensations associated with physical objects (touch, smell, sound) recall how, as an April Fools' Day prank in 2013, Google released a sham function that allegedly allowed users to search by scent. "Google Nose" turned out to be a hoax: digital products simply do not and will not have smells. However, although digital materiality is cut off from olfactory experience, this does not mean it is devoid of its own unique materiality, nor does it necessarily mean digital objects do not exist at all.

When it comes to materiality in the digital humanities, it is important to recognize that digitization creates a whole new material object rather than a restoration. If we do not view it this way, the digital reproduction is surely a zombie, as Trettien suggests, a shell-like monster devoid of its original animus. If, however, digitization can be viewed as a generative and creative process that produces a new object rooted in an original, then we can view digital objects as ontologically equal to, but different from, physical objects. The value of digitization hinges on flattening rather than hierarchizing our ontological vantage points. Object-oriented ontology (OOO) offers insights for conceptualizing flattened dynamics between digital objects, physical objects, and human subjects. OOO is a branch of speculative realism whose theorists posit that objects exist in themselves, not in the human observance of them. In effect, while a human cannot see the opposite side of an opaque object, nor through the noise of film grain, they can speculate that the object behind it does exist. If a tree falls in the forest and no one can hear it, an object-oriented ontologist would say that tree is still real.

According to OOO proponent Graham Harman, both sensual objects (immaterial things, concepts, thoughts, ideas) and real objects (things with a physical presence) are equally objects that exist independently and interdependently among one another (Harman 5). By flattening our ontological vantage points in this way, digital objects, as sensual objects derived from real physical objects, can remain both autonomous and interconnected. In the words of video game theorist Ian Bogost, object-oriented ontology suggests the following dynamic: “Things are independent from their constituent parts while remaining dependent on them” (23). In this sense, a digital piece based on a physical object is an independent, autonomous object while simultaneously remaining networked to an equally existent original, physical object. Therefore, digital objects and physical objects are not reducible to each other, nor equivalent to one another; rather, they are unique yet entangled with one another.

By adopting an OOO perspective on digitization, we can see how artificial intelligence creates new objects that remain interconnected with their original references. For example, new artificial intelligence software like Topaz AI can aid in visually enhancing texts where human eyes, cameras, and Photoshop can only go so far in visualizing and perceiving long-gone physical objects beyond the realms of human cognition. Topaz functions in a manner that is both additive and referential. AI expert James Abbott describes Topaz Labs as follows:

Topaz Photo AI is auto-pilot for your image quality needs. Import an image to Topaz Photo AI and it will use specially trained AI models to first detect the unique problems in detail, clarity, and resolution before intelligently applying adjustments that will maximize your image quality.

A few key matters stand out in this description. First, the word “auto-pilot” highlights the autonomy of the software itself, which reminds us that in the view of OOO, software counts as an object just as much as a human, a photo, or a pixel does. Second, the word “maximize” leans into the additive qualities of digitization implicit in my argument: digitization is not necessarily a subtractive, diminutive bastardization of an original. Third, the adjustments available in Topaz AI, such as the “clarity” adjustment noted above, denote a digital materiality that is specific to the software. While Bulson called the digital magazine the “diglittle magazine” to suggest there is something unique about digital materialities, I will use the word “tech-ture” to refer to the material adjustments to digital texture afforded by Topaz AI.

Artificial intelligence contributes a new set of complications to the notion that digitization subtracts or leaves out the bulk of experience, because AI can actually add material in where it was previously lacking. Theorist James Mussell backs the notion that digitization is not necessarily a loss, but an addition to embodied objects:

One of the curious effects of the digital moment is the way it has enhanced the aura of the archive. Embodied print seems to offer a point of resistance to those who would make it disembodied, digitized, broken into bits. Yet there is no such thing as pure content, an unmediated soul, and a body of some sort is always necessary for reading to take place. What is at stake in digitizing periodicals is not the issue of loss—loss of materiality, loss of authenticity, loss of whatever aspects of the printed object one particularly cherishes, be it smell, texture, whatever—but rather that digital media make scholars rethink what exactly constitutes print. In the digital age, print becomes reborn, no matter when it was published. Print returns, Lazarus-like, with forbidden knowledge of the grave. (344)

The phrase “Lazarus-like” is highly applicable to AI restoration techniques. Intelligent image enhancement can provide an inert, unintelligible object or file with an animus that brings the image to life. The caveat is that digital “restoration” will never fully become the original; its animus is AI-real, not real-world real. But it can offer alternative routes for envisioning periodical media beyond what the present-day naked eye can perceive.

The tension in ontologically conceptualizing AI resides in balancing the notion that digitization is “zombie-like,” in Trettien’s terms, while also resisting the urge to grant AI mystical, Lazarus-like powers, in Mussell’s terms. In object-oriented ontology, the zombie-like view may be described as what Harman calls “undermining” an object, which is to claim that “objects only gain their identities from elsewhere”; a digital representation of a physical object would thus have no identity beyond the original object that is missing from the zombified digital reproduction (10). Conversely, the Lazarus quality of digital objects may be described with Harman’s term “overmining,” the view that objects can be reduced upwards into lofty, godly, metaphysical abstractions (11). The trick to conceptualizing the digital ontology of AI, then, hinges upon viewing objects as both the whole and the sum of their parts.

With this philosophical framework in mind, Topaz AI can help build a speculative materialist approach to imagining alternative iterations of historic objects. Until recently, enlarging an image in software meant changing the dimensions of a photo without enlarging its details. As a result, a printed, analog family photo could be enlarged to the size of a billboard, with the catch that the image quality would be inferior, pixelated, and blurry. Among aficionados of film photography, digitization has also gotten a bad reputation due to the “lossy” nature of many file formats, which means that every time a copy of a photo is saved, the file degrades somewhat. This prompted the development of “lossless” file formats like PNG and TIFF, which preserve the integrity of the original image when duplicated, unlike the conventional JPEG images used on the web.
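For readers unfamiliar with the lossy/lossless distinction, it can be sketched in a few lines of Python. This is an illustrative toy, not the real JPEG codec: each lossy "save" is modeled as a slight smoothing of neighboring pixel values followed by quantization onto a coarse grid, while the lossless path copies values exactly.

```python
def lossy_save(pixels, step=8):
    # Model a lossy save (e.g. JPEG) in two steps: slight smoothing of
    # neighboring values, then quantization onto a coarse grid. Both
    # steps discard information, so each generation degrades further.
    smoothed = [(pixels[i] + pixels[min(i + 1, len(pixels) - 1)]) // 2
                for i in range(len(pixels))]
    return [round(p / step) * step for p in smoothed]

def lossless_save(pixels):
    # A lossless format (PNG, TIFF) reproduces every value exactly.
    return list(pixels)

original = [3, 47, 120, 121, 200, 255]

lossy = original
lossless = original
for _ in range(5):  # five generations of copying
    lossy = lossy_save(lossy)
    lossless = lossless_save(lossless)

print("lossy:   ", lossy)      # values drift; fine detail collapses
print("lossless:", lossless)   # identical to the original
```

The point the sketch makes is the archivist's point: once a lossy copy discards detail, no later copy can recover it, which is why lossless formats matter for preservation.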

Therefore, one innovation that challenges the notion that digitization degrades pure, original, tactile objects is the advent of AI software. AI can imagine and fill in the gaps when high-quality imagery is lacking. Topaz Labs Gigapixel, for example, works by algorithmically analyzing every little dot in an image and then surrounding that dot with more dots that have been intelligently selected by the software to fill in the blanks. This could be useful for re-envisioning pixelated archival images that were digitized before technology had evolved enough to reproduce text at a resolution high enough for ease of reading.
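The difference between conventional enlargement and "surrounding each dot with new dots" can be shown with a minimal sketch on a single row of pixel values. This is only the simplest possible stand-in: Gigapixel uses a trained neural network to predict the new values, whereas the second function below merely interpolates linearly between neighbors.

```python
def upscale_nearest(row, factor=2):
    # Conventional enlargement: each pixel is simply repeated.
    # The image gets bigger but no new detail appears -- this is
    # the blocky pixelation of the billboard-sized family photo.
    return [p for p in row for _ in range(factor)]

def upscale_linear(row, factor=2):
    # A crude stand-in for "surrounding each dot with new dots":
    # new pixels are interpolated between their neighbors.
    # (Gigapixel predicts these values with a neural network,
    # not with this formula.)
    out = []
    for a, b in zip(row, row[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k // factor)
    out.append(row[-1])
    return out

row = [0, 100, 200]
print(upscale_nearest(row))  # [0, 0, 100, 100, 200, 200]
print(upscale_linear(row))   # [0, 50, 100, 150, 200]
```

In both cases the output is larger than the input, but only the second invents in-between values that were never sampled; an AI upscaler takes that invention much further, hallucinating plausible texture rather than averages.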

For example, JPEGs downloaded from the Edwardian women’s romance magazine Forget-Me-Not are digitized with just enough megapixels to give a sense of what the sentences are saying, but the low quality slows reading to a more glacial pace than original readers would have experienced. This is a case where digitization is surely second-rate to the original: not only does the reader lack a physical object, but the digital object is defective in its purpose of conveying information. Digital restoration with AI has the potential to reimagine alternative facets of the original experience, in terms of reading speed, by adding quality to low-quality digital scans and photos.

To put Topaz to the test, I ran two types of images through the AI program: digitized, non-OCR text blocks and human portraits. Despite all the amazing features of Topaz AI, the text enhancements tested in my trial of this software did not go swimmingly. This failure highlights the algorithmic limitations that characterize the software’s ability to render digital materialities. Take a look at these examples of text from Forget-Me-Not. First is the pixelated, low-contrast, difficult-to-read version from the digital archive (Fig. 1).

Fig. 1

Next appears a somewhat restored version, sharpened with the clarity function in Adobe Photoshop Lightroom. As we can see, the original digital version has faded type, fuzzy letters, and visible pixels that indicate the file has a low dpi (dots per inch). In terms of painting, this version is like a Seurat pointillist canvas, where the viewer must both relax their eyes and look closely in order to make out visual information.

Fig. 2

This pointillism effect is most evident in Fig. 2, with the increased clarity adjustment applied in the non-AI software Adobe Photoshop Lightroom CC. Overall, the darkness of the letters is most intense in Fig. 2, as if the ink were more saturated, but the gray tones in the background remain about the same as in Fig. 1.

Fig. 3

Next, in Fig. 3, the image has been sharpened with AI in Topaz Labs. The results are underwhelming. The software promises to enlarge images without pixelation, and if it did what it promised, there would not be minuscule textured cubes in the background. Since I have used this feature with success on non-textual images, I suspect Topaz's ability to enhance text is simply not as well developed.

Fig. 4

Here, in Fig. 4, the image has been plugged back into Lightroom in an attempt to smooth the texture in the background using a tool called “luminance range,” which enabled me to select the grays in the background but not the text, so that the text was unchanged by the background blur. This did not work well either, if the goal is an easy-to-read image with a clean, un-inked background.

Fig. 5

Lastly, in Fig. 5 we can see the difference between the original scan and the final edit. The results are both dramatic and underwhelming. The reconstructive properties of AI seem nascent in Topaz, in the sense that increasing the size with the “Gigapixel” feature did nothing to reduce pixelation or background noise effectively under these conditions. Overall, I would deem the final edit a little easier to read, but it is not a perfect enhancement. Furthermore, the Topaz AI program seemed little more “intelligent” than Photoshop at text detection. I had to trace each letter with my mouse, one by one, in order to apply edits, because Topaz “failed to detect the subject.” The selection tool in Topaz was also a lot more “sticky” than brushes in Photoshop: it kept grabbing portions of the background along with the text, to my utter frustration. Overall, it seems that AI is not quite there yet when it comes to text enhancement. The results were a stark reminder that when it comes to archival studies, "each attempt to exert bibliographical control exposes how much more there is to know and how much can never be known" (Mussell 344). In effect, as it stands, AI is not yet Lazarus-like, omnipotent and omniscient; some things may never be fully visualized despite cutting-edge efforts.

One curious material feature comes to light in the final two edits, however. In Fig. 3 and Fig. 4, the “ghosts” of the print from the page opposite the quotation at hand come alive, almost as if the page were backlit by a lamp. This effect is most prominent on the far left of the bottom two lines, where the mirror images of two indistinguishable letters reveal themselves. Although this experiment was an abysmal failure, it reminds us that digital objects have their own tech-tures and existences distinct from embodied, tangible reality. The distinct imperfections in the above experiment correspond with Eric Bulson's concept of the "bad copy":

Technologies for digital reproduction will only continue to get cheaper, making it possible for scholars to generate copies that can be used privately or circulated publicly through various channels on the web. But even more important, I think, is the actual need for the bad copy, the one that can be accompanied by the marginalia, the annotations, the overexposures, the cropping, the cuts, the tags (as seen in fig. 6), even the fingers. This is not the kind of visual information that would ever get encoded, and whether it is there on the document beforehand or added later on, it is precisely the thing that helps to foreground the materiality of the object and bring into view a community of readers through which it was and is circulating. (217)

Digitization in the form of “bad copies,” shot by rogue researchers with handheld cameras, bears more materiality than a perfectly clean archival scan meant for an online publication suitable for, say, Archive.org. Bulson celebrates the notion that scholars could circulate these copies among their own tight-knit reading communities. When it comes to text, AI is very much still a proto-tool that can likewise create “bad copies,” which highlight different aspects of a text than a perfect scan would foreground. While Bulson here discusses tools for reproduction in terms of photography more than photo editing, his argument holds water in the sense that technology for digital archival imaging can only become cheaper and more readily available. Text restoration with AI simply is not there yet, but it reminds us that AI enhancement changes, and makes new, the material conditions of digitized print. The hyper-real, high-contrast result unintentionally unveils the ghosts of the text on the opposite side of the page; the materiality of this text constructs an alternate reality. Although Topaz failed to isolate the foreground text from the backwards text showing through the other side of the page, this side effect can be viewed as a happy accident that, in Bulson’s aforementioned words, helps “to foreground the materiality of the object.” This software-specific materiality cannot be fully equated with the “real world,” but it can exist in its own right, with ontological status as a digital object with its own discrete yet interconnected existence.

In contrast to AI's lackluster ability to reimagine text, the results of AI-treated portraiture demonstrate more sophisticated technological development. In the past, software such as Photoshop has provided digital scholars with the ability to restore faded images using contrast, hue, and sharpness adjustments. However, these capabilities are limited when it comes to reconstructing faces. Photoshop's ability to restore faces is somewhat superficial and often depends on the user’s ability to wield its tools with the skill of a painter, selectively sharpening, highlighting, or darkening different features. What if Topaz AI could do this with the click of a button instead of three to thirty hours of Photoshop work by hand? What if the covers of a whole magazine series could be visually enhanced with one click, allowing archivists to restore every image of a faded magazine without much manual effort or graphic design training?

Topaz Labs offers another AI tool that can potentially be useful to periodical researchers: facial reconstruction. Although “re-imagining” is a more apt word than "reconstruction," this tool is far more advanced than the functions used in the text-sharpening experiment above. Topaz Labs offers options to recover faces from faded archival images by running algorithms that construct faces from the limited information available in low-quality images. This feature could provide archivists with the ability to reimagine images that have faded beyond a reasonable level of recognition. I have used this tool on hundreds of photographs that were not in focus enough, straight out of the camera, to send to portrait clients; after Topaz AI was applied, faces were not just superficially sharpened but intelligently constructed as new yet recognizable images. The following paragraphs explore the potential for AI to enhance digitizations of hundred-year-old magazine photos depicting human faces.

The results of running Topaz AI on facial features from antique periodicals are fairly dramatic, but still constrained by the newness of the technology, which imparts its own material artifacts. Below we have two different types of antique photos: one that is already rather high quality, with most facial features evident, and another whose facial features are shrouded in ample film grain, just beyond human perception, like a tree falling in the forest with no one there to hear it.

Fig. 6: Before, Fig. 7: After

Fig. 7 appears at first glance simply to have been airbrushed, since Topaz removed the textured film grain in the process of uncovering the woman behind it. Upon zooming in further, detail not seen in the original is born anew in the AI-enhanced version. Her hair has more strands visible at the crown. A few freckles or beauty marks are evident on the lower right. The wrinkles under her eyes are more realistic and lifelike once the grain is removed and the AI extrapolates where fine lines might be.

Fig. 8 AI-enhanced eyebrows and eye wrinkles

This “after” photo in Fig. 8 also shows more detail in the eyebrows than the “before” image below in Fig. 9: in the AI version, you can count the individual eyebrow hairs. Under conditions where the original scan of the photograph is high resolution and perfectly lit, the AI software executes results without much room for error. In effect, periodical scholars can envision greater detail conveyed through artificial materiality.

Fig. 9: Close-up of Eye without AI enhancement

As an experimental control, I also ran Topaz AI on a photo from a 1910 issue of Good Housekeeping with a smaller resolution and a greater amount of film grain, which yielded stranger results. In Fig. 10, the most apparent shortcoming of the software is its inability to fill in the details of the model’s hair (below, right).

Fig 10: Before, Fig 11: After

Her hair emerges as a smooth blob that degrades the visibility of her curls, which are slightly more evident in the original. Since that portion of the photo had the most film grain, it seems the AI struggled to differentiate the texture of the grain from the texture of her curls, smoothing everything over instead. Her lips and eyes are more defined. In her right eye, she appears to have a bruised sclera; it is hard to say whether the artifact in the original is a bruise or burst vessel from weariness, a fleck accidentally imposed by a little smudge, or a mishap in the film-developing process (Fig. 12).

Fig. 12: Bruised Sclera

Nevertheless, the AI interprets it as a minor trauma to her eye, prompting many possibilities for viewers to latch onto: was she abused by her husband? Is she overworked, having rubbed her eyes too hard in fatigue? Alternative photo narratives emerge with the artificial reimagining of details. Most notably, the AI-processed image has a more ghostlike quality than the before image. In the before image, there is a slight outline to the right of the subject’s face (Fig. 10). This ghosting can happen in the process of projecting film onto photosensitive paper, if the paper or projector is moved briefly while the exposure light is lit.

Fig. 13

In the AI-enhanced photo, the outline is exaggerated and given a halo-like glow (Fig. 13). By creating a new image, the AI brings out the subtle materialities of the original image-making process through its own unique processes.

For expansion in a larger conference paper: I will talk about the Freudian uncanny and the “uncanny valley,” with reference to Timothy Morton’s chapter on aesthetics in Hyperobjects, where he links computer-generated images to what he calls “the age of asymmetry,” in which both digital existences and physical existences are uncannily not what the mind expects. In the context of his larger work, Morton links the age of asymmetry to climate change, postulating that what we have as a result of climate change is a world that is “a strange stranger,” an uncanny being that walks the line between what human minds are primed to expect based on past experience and what they see in the here and now. While I do not plan to talk about climate change in this specific piece, I am intrigued by the notion that both the “real world” and the AI world can exhibit this uncanny asymmetry, because it backs my notion that digital and physical realms enmesh although they are not equivalent to one another. See Morton, Hyperobjects, p. 130.

By viewing AI through the lens of OOO, the notion that digitization is equivalent to or inferior to physical objects can be dispelled. Instead, we may view digital objects as:

  1. Ontologically equal to original physical objects in the sense that they both exist

  2. Different from original physical objects with regards to the particular materialities imparted by software

  3. Entangled and enmeshed with their original, physical reference objects despite their autonomous status.

These considerations may assist periodical studies going forward as scholars grapple with the long-term implications of increasingly powerful software. While text-based AI is not as developed as facial reconstruction AI, the future holds generative possibilities for re-imagining text-based images. At this point, technology has not reached a stage where it is an exact mirror of reality. As we saw in the images discussed, the “bad copy” qualities of AI-enhanced imagery still render AI images distinctive in their material conditions. AI is certainly not a pure imitation of reality but an alternative reality in which periodical scholars can speculate with a certain degree of nuanced, alternative realism.


Works Cited

Abbott, James. “Topaz Labs Sharpen AI Review.” Digital Camera World, 2021, https://www.digitalcameraworld.com/reviews/topaz-labs-sharpen-ai-review.

Bogost, Ian. Alien Phenomenology, or What It’s Like to Be a Thing. University of Minnesota Press, 2012, https://doi.org/10.5749/j.ctttsdq9.

Bulson, Eric. “The Little Magazine, Remediated.” The Journal of Modern Periodical Studies, vol. 8, no. 2, Penn State U.P., 2017, pp. 200-225.

Gitelman, Lisa. “FOUR. Near Print and Beyond Paper.” Paper Knowledge, Duke University Press, 2020, pp. 111-35, https://doi.org/10.1515/9780822376767-006.

Goethe, Arnold. “Girlhood.” Good Housekeeping Magazine, vol. 51, no. 2, Phelps Publishing Company, 1910, p. 144, https://modjourn.org/issue/bdr472197/.

Harman, Graham. The Quadruple Object. Zero Books, 2011.

“Love the Conqueror.” Forget-Me-Not: A Dainty Journal for Ladies, Fleetway Press, 22 Jan. 1916, p. 362, http://www.victorianpopularculture.amdigital.co.uk.proxy.library.nd.edu/Documents/Details/EXEBD_21896.

Mello-Klein, Cody. “Why Is Mark Zuckerberg’s Metaverse Failing?” News@Northeastern, 3 Nov. 2022, https://news.northeastern.edu/2022/11/03/metaversefailure/#:~:text=Metaverse%E2%80%99s%20failings%20come%20down%20partly%20to%20what%20Pearce,maintain%20audiences%20over%20time%2C%20succeed%20by%20fostering%20creativity.

Milmo, Dan. “NFT Sales Hit 12-Month Low After Cryptocurrency Crash.” The Guardian, 2 July 2022, https://www.theguardian.com/technology/2022/jul/02/nft-sales-hit-12-month-low-after-cryptocurrency-crash.

Morton, Timothy. Hyperobjects. University of Minnesota Press, 2013, https://doi.org/10.5749/j.ctt4cggm7.

Mussell, James. “Repetition: Or, ‘In Our Last.’” Victorian Periodicals Review, vol. 48, no. 3, Johns Hopkins U.P., Fall 2015, pp. 343-358.

“Stage Beauties Posed Exclusively for Cosmopolitan.” Cosmopolitan, vol. 51, no. 1, International Magazine Company, 1911, p. 82, https://modjourn.org/issue/bdr469027/.

Trettien, Whitney. “A Deep History of Electronic Textuality.” Digital Humanities Quarterly, vol. 7, no. 1, 2013, http://www.digitalhumanities.org/dhq/vol/7/1/000150/000150.html.



 
 
 
