Stewart Smith

In the future, we humans will be able to broadcast our visual imaginations in real time. The art of mind-broadcasting—“mindcasting”—will favor folks who can imagine visuals with great precision in order to overcome the low fidelity of the medium. Who better than typographers?

The Gallant Lab at UC Berkeley recently published a scientific paper and accompanying video titled “Reconstructing visual experiences from brain activity evoked by natural movies.” The paper describes a method for (roughly) recreating what a person sees through their eyes—a process that involves meticulously scanning a designated observer’s brain activity as they perform this act of seeing. (Eat your heart out, John Berger.)

Lacuna employee Stan Fink (Mark Ruffalo) scans the brain of customer Joel Barish (Jim Carrey) in the film Eternal Sunshine of the Spotless Mind (2004).

If you’re familiar with the film Eternal Sunshine of the Spotless Mind, you may recall that in order for Lacuna employees to erase a customer’s emotionally painful memories, the customer first had to visit the office for a brain scan. In the film we witness Joel Barish (played by Jim Carrey) sitting in the Lacuna office with his head in a scanner as he handles physical objects relevant to the memories he wishes to erase. He focuses on those objects as well as their emotional context—all as the scanner whirs and records the particular ways in which his brain lights up in reaction to these thoughts. This process of scanning-while-focusing creates a map of precisely where in his brain those painful memories reside; a map of the locations the Lacuna technicians will later dutifully damage into erasure as Joel sleeps at home.


Jack Gallant speaks about brain mapping and brain decoding at a TEDx event.

Fuzzy reverse-lookups

Apparently, a similar mapping principle applies in real life. Jack Gallant’s lab at UC Berkeley records the intricate brain activity of participants as they view photographs or moving images, pairing each frame of visual input with a snapshot of the subject’s brain during that moment, thereby creating a library of photograph/brain-activity pairings. “When you see this, your brain does this. But when you see this other thing, your brain does this other thing.” With these pairings of photos and brain data, the Gallant Lab staff can perform a reverse-lookup of sorts: “If your brain is now doing this, that’s very similar to what it did back when you looked at this image, therefore we think you’re looking at something visually similar to that image.”
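
To make the idea concrete, here is a toy sketch of that reverse-lookup in Python. It is emphatically not the Gallant Lab’s actual method (their published work fits encoding models to fMRI data); the flattened scan vectors, the cosine-similarity scoring, and every name below are illustrative assumptions.

```python
import numpy as np

# Toy model of the recording session: each known photograph is paired
# with the (flattened) brain-scan vector captured while the subject
# viewed it. Both images and scans are stand-in NumPy arrays here.

def cosine_similarity(a, b):
    """How alike two flattened scan vectors are, from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reverse_lookup(new_scan, library):
    """Score a fresh scan against every (image, scan) pair on record.

    Returns normalized weights describing how much the new scan
    resembles the brain activity recorded for each known image.
    """
    scores = np.array([cosine_similarity(new_scan, scan)
                       for _image, scan in library])
    weights = np.clip(scores, 0.0, None)  # discard anti-correlated matches
    return weights / weights.sum()
```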

Let’s say you’ve just sat in the scanner for a recording session at some future “mindcasting” studio. The staff have assembled a stack of 100 photographs of human faces. Each portrait is a closeup—the eyes, nose, and mouth filling the frame so that every detail is crisp for your gaze to digest. (Obligatory nod to Barbara Kruger.) One by one, you focus on each portrait. As you concentrate on the face’s details, the scanner records your brain activity, and this data is fed into a computer that creates a library of pairings: each photograph paired with your corresponding brain scan data.

Once you’ve finished viewing each of the 100 faces, your recording session is complete and it’s time for the real magic. A technician hands you a new portrait—it’s rather mysteriously contained within an opaque black sleeve. You’re told it’s a face that you’ve never seen before, and that the staff have likewise never seen before. (Perhaps it’s been provided by an anonymous volunteer.) Crucially, the computer has never seen it before either. You’re instructed to un-sleeve the photo, concentrate on its features, then re-conceal it, making sure that no one else catches even a glimpse of the face. So you carefully reveal the portrait to yourself and study each line, plane, and pore of this new face as the scanner records the minute articulations of your brain in reaction to what your eyes are seeing. Satisfied, you slip the photo back into its sleeve as the computer rapidly sifts through this latest brain scan data, comparing it to each of the 100 scans you’ve previously recorded.

On screen, the computer reveals a blurry rendering of a face. The staff look on expectantly as you dramatically unveil the mystery portrait and hold it up beside the screen. It’s the same face! (More or less.) You and the staff marvel at the similarity between the physical photograph in your hand and what the computer has rendered solely based on your brain’s activity. You can see how the computer’s rendering has been constructed from the catalog of 100 “known faces”—1% like his face, 10% like her face, and so on. The computer has described what it thinks you were looking at as a combination of photos it has previously recorded you observing—an overlay of “fuzzy” reverse-lookup matches. In doing so, the computer has read your mind.
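
Continuing the toy sketch: that “1% like his face, 10% like her face” overlay amounts to a weighted average of the known images, with the lookup weights serving as blending coefficients. As before, the function and its assumptions (same-sized grayscale images with normalized pixel values) are hypothetical.

```python
import numpy as np

def render_composite(weights, images):
    """Blend the known images by their lookup weights.

    Assumes every image is a float array of identical shape,
    e.g. grayscale pixels normalized to the range 0..1.
    """
    return np.tensordot(weights, np.stack(images), axes=1)

# Hypothetical usage, given the library and lookup sketched earlier:
#   weights = reverse_lookup(new_scan, library)
#   guess = render_composite(weights, [image for image, _scan in library])
```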

The face of Kate Winslet (who portrays Clementine Kruczynski in the film Eternal Sunshine of the Spotless Mind) as a composite of several photographs.

It’s worth noting that because the Gallant Lab’s resulting images are combinations of pre-existing images, they are visually similar to some of artist Jason Salavon’s works, particularly Class.

Out of the lab, onto the streets

This constraint—output images limited in their variety and detail by the breadth of the “known images” library—raises an interesting question: What ought to be in your library of known images? (The more expansive the library, the better, yes?) And how might we get out of the lab in order to record observations of real life, rather than of photographs?

The Gallant Lab uses fMRI scanners to observe brain activity. (EEG does not deliver nearly the depth of data required for this task.) An fMRI machine isn’t exactly a mobile device. (It’s very big. And very heavy!) But suppose in the future you could shrink an fMRI down into something that is indeed mobile—like a special hat that could pair with your smartphone. (Dear reader, consider yourself warned that we are now venturing into wild speculation.)

A Yellow-billed Blue Finch rendered as a composite of several photographs.

Imagine a novice birder, wearing such a mind-reading hat, gazing in the direction of a species they’ve yet to identify. The hat reads our birder’s brain activity and passes this data to their smartphone, where it is processed locally or handed off to the cloud, and finally a text-to-speech voice helpfully whispers, “The bird you are looking at is a Yellow-billed Blue Finch.” Perhaps the hat comes with LCD glasses that can paint this just-in-time identification into our birder’s field of view. This isn’t merely mind-reading; it’s mind-augmentation.
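
For what it’s worth, the glue code for such a hat might be as simple as the sketch below. Every name here (the hat’s read_scan method, the speak callback, the labels list of species names) is invented for illustration; reverse_lookup is borrowed from the earlier sketch.

```python
def identify_bird(hat, library, labels, speak):
    """Hypothetical glue code: scan the brain, look it up, announce it."""
    scan = hat.read_scan()                   # invented wearable-scanner API
    weights = reverse_lookup(scan, library)  # from the earlier sketch
    best = int(weights.argmax())             # strongest single match
    speak(f"The bird you are looking at is a {labels[best]}.")
```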

Imagination as creation

One point I’m unclear on is how much the act of remembering an image resembles the brain activity of physically observing that image. What if our birder could close their eyes and imagine that Yellow-billed Blue Finch with enough clarity that the software would think they’re actually looking at it? Or, venturing a bit further here, could one imagine an entirely made-up landscape with enough gusto that the computer could render it as if our intrepid mindcaster had been viewing a landscape that physically exists?

What could this capacity do for the film or video game industry if directors, designers, and cinematographers could merely imagine scenes and sequences in order to render them? (And even if this technology were only good enough for rough cuts and sketches, just think of the labor hours that could be saved, as well as the freedom that comes with being able to iterate through many disparate ideas so nimbly.)

And for those of us brave enough to fall asleep in the scanner, could we render our dreams?

Funding the research

These technologies aren’t cheap, but there’s a good hack for funding fun research: Tap into someone else’s marketing budget. Large corporations pour stomach-churning sums of money into publicity stunts, and I see this as a great opportunity for mindcasting.

Imagine a car company unveiling an entirely new model of luxury vehicle during a much-watched sporting event—perhaps the Super Bowl. (Early February might be an odd time to advertise a new car when models are traditionally released in the autumn, but let’s roll with this hypothetical scenario for a moment.) For this marketing stunt we tell the audience that our new car model is so advanced that we’ve kept it absolutely secret. No one outside of a small production team knows what the exterior of this new car looks like—and because it doesn’t go on sale for a few months, there’s no way for the public to catch a glimpse. As of the Super Bowl, the image of this car is tightly controlled information.

With this premise in place, the magic trick begins. On stage-left hangs a large curtain obscuring the mystery vehicle; the curtain’s hem stops just a few inches from the floor, teasing the rubber of the tires but revealing practically nothing. From stage-right enters some celebrity figure; someone with charisma, sharp eyesight, and a highly visual imagination. (Perhaps a famous artist, fighter pilot, recently retired quarterback, or similar.) The celebrity smiles and waves, then stands such that we can see them and they can see behind the curtain. The scanner (is it a hat, or by this point in the future is it a remote device?) records our celebrity’s brain activity as they concentrate on the form and contours of the mystery car. After a proper drum roll, the mindcast-rendered image of the mystery car is projected onto the very curtain that still hangs between us and the genuine vehicle. Ta-da!

As an audience, we have not been shown the car itself, which continues to sit obscured behind the curtain. Instead we have witnessed the car through the eyes and mind of a celebrity. And for now, during this blitz of Super Bowl halftime commercials, that will have to do. (There’ll be additional buzz down the road when the actual form of the car is revealed in more traditional commercials closer to its release date. And if images of the car do happen to leak to the public before then, that’s essentially free marketing.)

Is it starting to get a little David Carson in here?


Typographers for the win

Looking over the lab’s resulting images and video, it’s clear that mindcasting will be a fuzzy process. On this fuzzy frontier, would we experience a return to a design culture dominated by the likes of Paul Rand and Saul Bass, as simplicity of form allows for higher-fidelity telepathy? How robust would our new typography need to be in order to survive translation from imagination to pixels and back again? And more pressing: whose visual imagination is crisp enough to consistently construct the lines and curves with enough precision to be understood, even when their mind is exhausted? (Imagining is a tiring sport, after all.)

Artists are a natural fit—from painters and illustrators to photographers, designers, architects, and beyond. But in addition to transmitting images that are generally compelling, what specific nuggets of data will a future content industry need to broadcast at the speed of imagination? Text! Brand marks! Other robust, two-dimensional, monochromatic graphics. These compositions are catnip to skilled glyph crafters who practice the subtle art of minute Bézier adjustments, crafting letterforms that must not only communicate but thrive in a sea of visual noise. No one is better suited for this future world of imagination-manifested shapes than typographers.

An illustration of the ink traps present in the Bell Centennial typeface, designed by Matthew Carter in the 1970s.

It’s naive to assume that simply re-using our existing typographic forms would be the most efficient solution for visual telepathy. In the 1970s, when Ma Bell, on the occasion of its 100th anniversary, commissioned a typeface to replace its flagship Bell Gothic, the brief was clear: Design a typeface to be maximally legible at small sizes using cheap ink on even cheaper paper stock. Matthew Carter’s solution? Bell Centennial: a typeface with prominent ink traps—notches cut out of the letterforms—that compensate for the ink bleed that would inevitably occur during the printing process, thereby maintaining greater fidelity despite the technical constraints of ruthlessly economical phonebook print runs.

Much like these ink traps, mindcasting will require medium-conscious typographic features in order to improve legibility, both for ourselves and for our software interpreters. And by the way, what exactly is the nature of a “brain-ready” font license? How do we negotiate intellectual property that might be duplicated simply by imagining it? I’m excited at the prospect of a new generation of Matthew Carters, responding to this medium of the imagination, on a quest to make our fantasy future legible.

Espionage

And on the flip side: who’s got the worst eyes? Or better yet: who can intentionally scramble their visual thoughts so as to be illegible and somewhat unique each time? Is that an impossible task? (Do visually impaired folks hold the high ground—or does the brain leak intent regardless?) Because if we find ourselves in a future where brain-scanning magic hats have given way to remote brain scanning—CCTV, but for recording brain activity at a distance—we’re going to be in some pretty tricky territory.

Punching in your PIN at the ATM? Don’t visualize those digits! Don’t even look at the number pad! But here too, typographers may hold an edge. These explorers at the forefront of brain legibility, who have endeavored to learn what makes for high-fidelity visual transmission, surely must also have tips for what imagined visuals make for low-fidelity, illegible muck. How fascinating it would be to listen in on a crash course for spies taught by mindcasting typographic experts, up-skilling future James Bonds on methods for masking their visual wanderings in ways that might defeat the latest in brain signals intelligence.

What is privacy when your imagination itself is visible?

Christopher Walken as Dr. Michael Brace in Brainstorm (1983).

And beyond…

And if you’re tempted to wander even further beyond this future’s horizon, you might ask yourself: If you can read visual activity from the brain, could you also write to it? A society of mindcasters, conversing in visuals at the speed of imagination.


Meet me in Montauk.


I originally wrote this essay for AIGA’s Design Envy blog, where it was first published on Friday, 11 November 2011, for an audience of graphic designers and typographers. It concluded my week-long stint as Design Envy’s guest poster. In the decade since, Design Envy has fallen into disrepair and today the website is no longer functional. [Broken link to original post.] On the upside, in the ten-plus years since writing the above, I’ve had the pleasure of actually meeting Jack Gallant in person—and nearly the opportunity to work with him on some interesting ideas.