Joseph Gray

Portfolio

I'm a creative technologist and artist balancing commercial collaboration with independent practice — bringing clarity, form, and human-centered expression to complex ideas.

My work moves between design, technology, and contemporary art. I help studios and organizations create meaningful interactive experiences, while developing artworks that explore light, geometry, and transformation.

Across both practices, I approach projects with the same curiosity: turning complexity into something elegant, useful, and alive. Whether collaborating with creative teams or pursuing independent commissions, I bring a balance of technical fluency, conceptual depth, and visual clarity to each project.


Artworks & Commissions

Frisson

June 2019
Custom Software, dimensions variable, duration infinite. Premiered at Fremont Studios, Seattle; June 2019.
A generative art piece written in Processing and GLSL. It premiered at Fremont Studios for the Wunderman/Thompson merger launch party, playing on the Studios’ giant projection wall as an ever-evolving backdrop and mood-setter for the duration of the event. The piece uses randomness, simplex noise and low-frequency oscillators, with heavy use of custom-coded vertex and fragment shaders written in GLSL. The work can be rendered at any size or duration. Creative director Jared Lovejoy of Tashkent Park Creative contacted me to provide visuals for the party; notably, Grandmaster Flash performed a set that night.

Aether's Reverie: Ver

June 2014
abstraction, generative, processing
A video capture of the output of a custom-coded artwork dubbed “Aether’s Reverie”. This version is called “Ver”, Latin for spring, with a lighter, verdant palette. The piece was selected for King County’s Gallery4Culture e4c program at the Bill & Melinda Gates Foundation.

Angry Fruit Salad

July 2010
Joseph Gray (software) & Keith Tilford (sculpture). Materials: Foam core, adhesive, acrylic paint, gesso, spackle, fasteners, digital projector, IR web cam, computer, cables & custom software written in Processing. Exhibited in ACTION! at Ghost Gallery, Seattle 2010. Curated by Cait Willis & Laurie Kearney. (Photo credit: Gabriel C. Herbertson)
installation, interactive, projection mapping, sculpture
Reductivism, Schwitters, formalization, Lobachevsky, Euclid’s parallel postulate, new geometries; that the consistency of an axiomatic system cannot be proven within that system. Gödel. Endless endlessness. Cantor. Pure indifferent multiplicity. void setup( ){actual = new Actual(virtual);} ‘Post-contemporary’ bricolage. Differentiation. Dialectics and diametric oppositions. Histories instead of History (Badiou). Entropy, Brownian motion, the clinamen, Meillassoux’s ‘hyper-chaos’. Machines. Non-organic life. Contingency and incompleteness, the passage of idea and concept through form. Glitch, crunchy fuzz. Complexity Theory, Vitalism, Kool-Aid, Tang, Consciousness, the Brain and Commodore 64. Litanies. Realist Ontologies. Speculative metaphysics, materialisms and topologies. The scientific color palette. Interactive technologies, open-source software, isomorphisms, new forms of life and forms of thought. New politics. Ideas of the commons. Diaspora: impossible, coming, interoperable. Try again, fail again, fail better (Beckett). Undecidable propositions. Art and philosophy and moving trains. Also, Quaternions and involution. K holes and Halo. New Rave and aging detritus. Noise. Wii rhetoric and Agambenian inflections. Gaming. The dice throw.

Artist Reserve Note

April 2009
Printed notes, money bands. Created by Joseph Gray and Peter Nelson as part of the Art Department at the University of Washington's annual Strange Coupling exhibition at Ouch My Eye Gallery.
conceptualism, currency, economics
A conceptual art currency by Joseph Gray and Peter Nelson. The Artist Reserve Note is designed to be used by an Artist as currency, particularly when no other funds are available. The Artist creates the value of the currency by drawing, painting, printing or otherwise ornamenting the blank reverse side of the Note. The value of the Note is not fixed, but rather becomes the value of the Art that is rendered upon it.

Artist Reserve Note exists in the environment of a failing economy and the attempted solution of “stimulus” to the US, paid for with money that does not yet exist. The Artist Note attempts to create its own form of stimulus by providing artists with a currency whose value is created by the work the Artist adds to the physical currency itself. Project website: artistreservenote.com

The Artist Reserve Notes were made available for sale for $5.00 US in packs of twenty-five (25) notes. The packs are labeled with $500 money bands, suggesting the value-growth potential of Artist Notes. The initial run of Notes was made available at Ouch My Eye Gallery in Seattle as part of the annual Strange Coupling project.

Complementary currencies have existed in many practical forms for centuries, and artists, too, have found ways to use their own art as payment for goods and services. There are many notable artist-made currencies in a variety of contexts, both historical and contemporary:

Marcel Duchamp’s Monte Carlo Bonds: “A parody of a financial document in a system for playing roulette, this Readymade revolves around the idea of monetary transactions. Giving himself the position of Administrator, Marcel Duchamp conceived of a joint stock company designed to raise 15,000 francs and thus ‘break the bank in Monte Carlo’ (Sheets 38). It was to be divided into 30 numbered bonds for which Duchamp asked 500 francs each. However, less than eight were actually assembled.”

Tenino Wooden Money: “Some samples of ‘slice wood’, a new printing material, had been received from Albert Balch of Seattle, who was promoting it for Christmas cards and other items. This was made in a special machine at Aberdeen by a man named Eckersley. Sitka spruce, Port Orford and red cedar were used. The first pieces were flimsy sheets 1/80th of an inch thick. The 25 on hand were sufficient to put Tenino in the wooden money business.”

Free Lunch, Picasso vs. Rockefeller: “At the end of a fine meal, Picasso reached for the bill: ‘Let me pay. I’ll write a personal check, draw a few squiggles, and sign it. The manager won’t ever cash it. She will display it as a work of art. And we’ll have a free lunch.’”

Ithaca Hours: “Ithaca Hours is a local currency system that promotes local economic strength and community self-reliance in ways which will support economic and social justice, ecology, community participation and human aspirations in and around Ithaca, New York. Ithaca Hours help to keep money local, building the Ithaca economy.”

Fluxus Bucks: “In the ’90s Julie Paquette, ‘ex posto facto’, started the fluxus buck movement. Fluxus Bucks are amazing, unique creations using stamps of ‘dollar bills’ as a background for art. Fluxus Bucks are traded around the world. (…) Fluxus Bucks are shared in the mail and traded on the street. Fluxus Bucks can never be printed in vast quantities like dollars, so they never lose their value.”

Complementary Currency: “Complementary currency (CC) is a currency which is meant to be used as a complement to a national currency. Complementary currency is sometimes referred to as complementary community currency (CCC) or as community currency. The term local currency, describing a complementary currency which is limited to a single locality, is sometimes used interchangeably with complementary currency. There are, however, some complementary currencies which are regional or global, such as the WIR or Friendly Favors, or the proposed Terra Currency.”

Legal Tender: “The U.S. Constitution, Art. I Sec. 10 Cl. 1, states, in part: ‘No State shall … coin Money; emit Bills of Credit; make any Thing but gold and silver Coin a Tender in Payment of Debts; …’ In 1798, Vice President Thomas Jefferson wrote that the federal government has no power ‘of making paper money or anything else a legal tender,’ and he advocated a constitutional amendment to enforce this principle by denying the federal government the power to borrow. The United States Supreme Court ruled the practice unconstitutional in Hepburn v. Griswold in 1870, but later reversed this decision following the appointment of two new judges by President Ulysses S. Grant. In the Legal Tender Cases, ranging from 1871 to 1884, the Court held that paper money, even that not backed by specie, such as the United States Notes, can be legal tender.”

The printing of these Notes was in conjunction with the annual Strange Coupling exhibition. Strange Coupling is a project organized by master’s degree candidates in the Art Department at the University of Washington. Working artists from the community are paired with a master’s candidate at UW based on aesthetic commonalities.

Chromatic Skyscape

March 2019
Custom software, projector, CPU. Presented at NBBJ Seattle February 2019.
A new work in the Subtle Chroma series, exhibited in mid-February at NBBJ’s Seattle headquarters in the lobby area known as the Giant Steps. The piece is designed to interact with the ambient light in the space and is tailored to its specific dimensions and features. Custom software written in the Processing programming language continuously generates slowly animating color gradients governed by low-frequency oscillators affecting their size, position, rotation and opacity. The oscillations are slightly offset in duration so that the effect rarely, if ever, repeats itself. These works are meant to fade somewhat into the background and be lived with over longer periods of time, almost functioning as an architectural surface material.
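The offset-LFO idea can be sketched in a few lines. This is illustrative Python, not the Processing source, and the parameter names and periods are invented for the example; the point is that giving each oscillator its own period keeps the combined state from cycling back to the same configuration:

```python
import math

def lfo(t, period, lo, hi):
    """Sine low-frequency oscillator mapped into [lo, hi]."""
    s = 0.5 + 0.5 * math.sin(2 * math.pi * t / period)
    return lo + (hi - lo) * s

def gradient_state(t):
    # One LFO per gradient property, each with a different period
    # (near-coprime values so the whole state rarely, if ever, repeats).
    return {
        "size":     lfo(t, 53.0,  100, 400),
        "position": lfo(t, 71.0,    0, 1080),
        "rotation": lfo(t, 97.0,    0, math.tau),
        "opacity":  lfo(t, 113.0, 0.1, 0.6),
    }
```

Calling `gradient_state(t)` once per frame gives a slowly drifting set of values; because the periods never line up, the gradients keep evolving indefinitely.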

Country Western

April 2008
Visual performance at The Meridian Gallery, San Francisco, California. Collaboration with composer Zachary James Watkins. Real-time animation with MIDI controller, tablet, networked data, two projectors, two computers and custom patches in Quartz Composer. Landscape photo credit: Zachary Watkins
new music, visual performance
Country Western is a complex, long-toned, structured improvisation of texture and harmonics composed by Zachary Watkins, a Texas native and California resident. The name might suggest a cowboy-and-horse vision, yet the work is derived from a sense of displacement felt by many younger US Westerners, created by the gap between their own real experience of the West and the oftentimes opposite “Western” ideals. Conceptually it is a reaction against the classic Western mode, and a comment by Westerners who have a different perception of their native homes: one fed by a communications-age awareness coupled with a respect for the land.

The video was added as an additional improvisational instrument by visualist Joseph Gray, a California native and Washington State resident. The performance tool created for the piece focused on ethereal texture fields originating from photos of earthen landscapes in the American West, taken by the composer and the visualist.

Country Western premiered at Meridian Gallery in San Francisco. Meridian Gallery is well known for its experimental composers series, whose performances often include a video or digital animation element. The gallery is also active in local at-risk youth education and in providing exhibition opportunities for under-represented artists and artistic forms.

Composer: Zachary Watkins (zacharyjameswatkins.com)

Ensemble:
Shayna Dunkelman (percussion)
Kanoko Nishi (koto)
Noah Phillips (prepared guitar)
Marielle Jakobsons (violin/electronics)
Emily Packard (violin)
Theresa Wong (cello)
Aram Shelton (woodwinds/electronics)
Jen Baker (trombone)
Dennis Somera (voice/poetry)
Zachary James Watkins (laptop/network)

Visualist: Joseph Gray (video/network)

Crystalline Chlorophyll

September 2009
Paper, acrylic gesso, PVA, foamcore, two video projectors, two computers, two IR web cams, custom software in Processing. Dimensions variable. Exhibited in Kerfuffle at Bumbershoot 2009, Seattle. Curated by Lele Barnett and Chris Weber.
abstraction, installation, interactive, projection mapping, sculpture
Crystalline Chlorophyll is an interactive digital sculpture. Its surface starts as an icy-crystalline shimmer and is slowly overcome by a verdant green growth of texture over the course of the exhibition, driven by the movement of the visitors observing it. By tracking movement in the room, the piece randomly adds a bit of green to the surface using generative, pseudo-organic image algorithms. When the sculpture is left alone for a while the mossy layer decays, revealing the crystal-like surface again, waiting for the next group of visitors.

The physical sculpture is built from a 3D virtual model using cutouts of its flattened triangles printed on 11×17″ card stock. The surface of the sculpture is painted with white gesso. The original object was designed in Blender; its facets were unfolded using the “unfold” Python script, and the unfolded meshes were then exported as .svg UV maps. More information on the technique of going from 3D model to papercraft can be found in this tutorial.

Two ceiling-mounted video projectors project from opposite sides onto the sculptural surface. The images are corrected for distortion by mapping textures to a 3D model of the real-world object, using techniques also found in video game technology. The work’s software was written in the Processing programming language. It loads a 3D model of the sculpture, made in Blender, and allows for manual rotation, scaling and positioning of the virtual object so that it can be aligned to the real-world object. When the two models align, the physical version has a projection-mapped surface that can be treated as though it were a 3D software texture map. The software generates a real-time texture on the object’s surface, using camera tracking to seed random image generation. Virtual lighting completes the effect.

Camera tracking is done with a simple difference algorithm. Each channel (RGB) of each pixel of the camera image is tracked for the amount of change from the previous frame. If the change is above a certain threshold, a random number is generated to determine whether the associated pixel in the real-time texture map should change. A height map generated in Blender partially determines the probability of a pixel changing from the static “ice crystal” texture to a “mossy” one. Other influences are neighboring pixel values and the amount of change in the camera pixel.
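The frame-difference and probabilistic-growth steps described above can be sketched compactly. This is a simplified Python illustration of the technique (the actual piece is Processing; pixel values here are RGB tuples and the texture is just 0 for “ice”, 1 for “moss”):

```python
import random

THRESHOLD = 30  # per-channel change needed to count as motion

def motion_mask(prev, curr):
    """Simple difference algorithm: a pixel counts as 'moving' if any
    RGB channel changed by more than THRESHOLD since the last frame."""
    return [
        [any(abs(c - p) > THRESHOLD for c, p in zip(cpx, ppx))
         for cpx, ppx in zip(crow, prow)]
        for crow, prow in zip(curr, prev)
    ]

def grow_moss(texture, mask, height_map, rng=random):
    """Where motion was seen, flip an 'ice' pixel (0) to 'moss' (1)
    with a probability weighted by the height map (values in [0, 1])."""
    for y, row in enumerate(mask):
        for x, moved in enumerate(row):
            if moved and rng.random() < height_map[y][x]:
                texture[y][x] = 1
    return texture
```

For example, a 2×2 frame where one pixel brightens yields a mask that is true only at that pixel, and with a height-map weight of 1.0 the corresponding texture cell turns mossy.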

Cube Etude 2.0

February 2014
In collaboration with Reilly Donovan; Wood cube painted with gesso, two projectors, one CPU, Blender, BLAM plugin
formalism, projection mapping, sculpture
Continuing the fine 21st-century tradition of projecting a virtual cube onto an actual cube, aligning them into a physical simulacrum, this time using BLAM for Blender 3D. Made during a generous residency provided by Cornish College of the Arts as part of the newly formed Institute of Emergent Technology + Intermedia.

Cube Etude 1.0

May 2008
Particle board, acrylic gesso, two video projectors, two computers, web cam, software patch in Quartz Composer. Dimensions variable. Exhibited in OBVious at 911 Media Arts Center, Seattle. Curated by Steven Vroom.
formalism, generative, projection mapping, sculpture
A simple cube, one foot square on each side, with one corner cut so that it can stand on end. Moving imagery is projected on all six surfaces of the cube; the projected image is a 3D, real-time, generatively texture-mapped cube.

In ocular perspective, things farther away appear smaller. Images projected on irregular surfaces will distort, with the portions farther from the projection source appearing larger than those closer. An image of an object rendered in the correct perspective, when projected on a real object of the same proportions, will automatically correct for the projected distortion. The cube demonstrates this phenomenon.

Projected on the surface of the cube are checkerboard patterns on three faces and a live camera feed on the other three. The checkerboard patterns change density over time, governed by LFO chains that create seemingly random and continuous timing. The live camera feed is aimed directly at the cube, creating visual feedback and allowing the viewer to interact with the medium.

The work is also a conceptual abstraction commenting on the human obsession with ninety-degree angles. It exists partly to blur the boundaries between the virtual and the real, and to demonstrate the virtual becoming real (i.e., a virtual texture-mapped cube becoming a real texture-mapped cube).
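The distortion-cancellation argument is worth making concrete. In an idealized pinhole model (illustrative numbers, not measurements from the installation), a projector magnifies an image linearly with throw distance, while a perspective renderer scales geometry by the reciprocal of distance, so the two effects cancel on the object's surface:

```python
def projected_size(rendered_size, distance):
    """Projector magnification: the image grows linearly with throw distance."""
    return rendered_size * distance

def render_size(true_size, distance):
    """Perspective foreshortening: the renderer draws far geometry smaller."""
    return true_size / distance

# A 1-unit feature on faces at different distances from the projector
# lands at the same physical size on each face: the distortion cancels.
for d in (1.0, 1.5, 2.3):
    on_surface = projected_size(render_size(1.0, d), d)
    assert abs(on_surface - 1.0) < 1e-9
```

This is why aligning the virtual cube's perspective to the projector's position is the whole calibration problem: once the viewpoints match, every face receives an undistorted texture.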

Culturevirus Photo Booth

July 2012
Custom video arcade console with custom software and network feeds. In collaboration with Brent Watanabe during our tenure at Superfad, fabrication by Metrix:Create Space. Curated by artist Cait Willis and Ghost Gallery gallerist Laurie Kearney.
code, interactive, network, sculpture
An arcade console that finds your face, puts someone else’s face on it, makes an animated GIF, and uploads it to Tumblr. The photo booth debuted in the Narwhal room, below Unicorn, during Capitol Hill Block Party 2012. The feed of its output is here: culturevirus.tumblr.com

The console has no hands-on controls or instructions; instead, it uses a webcam embedded above the screen to recognize faces. Faces are captured and stored in a database. (On occasion it sees things as faces that are not actually faces.) Previously saved faces are then randomly selected and placed on the current viewer’s face. Additionally, a set of venue-specific iconography is added to the viewer’s likeness; in the case of the CHBP version this included sparkly unicorn horns and fancy top hats.

The console was designed in Adobe Illustrator, cut on a CNC router and etched with a laser cutter. Electronics include a CRT monitor, mini CPU and webcam. The software was written in Processing using a combination of video, OpenCV, GIF and network libraries.
An arcade console that finds your face and puts someone else’s face on it, then makes an animated GIF and uploads it to Tumblr. Debut of the photo booth was at the Narwhal room, below Unicorn, during Capitol Hill Block Party 2012. The feed of it’s output is here: culturevirus.tumblr.com The console has no hands-on controls or instructions, rather it uses a webcam embedded above the screen to recognize faces. Faces are then captured and stored in a database. On occasion it sees things as being faces that are not actually faces. Previously saved faces are then randomly selected and placed on the current viewers faces. Additionally a set of venue specific iconography is added to the viewers likeness. In the case of the CHPB version these included sparkly unicorn horns and fancy top hats. The console was designed in Adobe Illustrator and cut on a CNC router and etched with a laser cutter. Electronics include a CRT monitor, mini CPU and webcam. Software was written in processing using a combination of video, OpenCV, GIF and network libraries.
Culturevirus Photo Booth image 1
Culturevirus Photo Booth image 2
Culturevirus Photo Booth image 3
Culturevirus Photo Booth image 4
Culturevirus Photo Booth image 5

Daughters of Air

June 2010
Visuals and sculptural sets. Two projectors, two computers and custom software. Tulle fabric, wire and fasteners. Concept and composition with Kelli Frances Corrado and Ivory Smith. Presented at On The Boards as part of the Northwest New Works festival 2010.
generative, live projection, new music, performance, set design
The Daughters of Air is a musical-visual performance evoking fairytales under the sea. The aqueous world is formed with sound, light, costume and sculpture. Non-traditional vocals, processed with electronic audio effects, create the ethereal narrative sound score. Visualists paint with light, using digital projection, on an environment of organic costume and set pieces. Daughters of Air is inspired by the notion of “mermaid heaven,” a concept that appears in Hans Christian Andersen’s original ending to his fairy tale The Little Mermaid. In his vision mermaids, being non-human, do not have souls, turning to sea foam when they die. The central character in his story does a deed of such unselfish goodness, and is so true to her own conviction, that she transcends her soullessness, gaining eternal life in, presumably, a mermaid heaven. The music, composed for this work by Kelli Frances Corrado and Ivory Smith, utilizes live and pre-recorded sounds: effected vocal scores, songs, live sampling, electronic instruments and percussion, and samples from the Antarctic ocean. Samples of natural aquatic sounds, eliciting an otherworldly effect, are processed in the studio and further modified live in performance. Visuals for the performance are generated live with custom software written in the Processing programming language by Joseph Gray, leaning heavily on the MSAFluid library. The controller used is MSARemote, a multi-touch OSC controller for iPhone; both MSAFluid and MSARemote are the creations of Memo Akten (memo.tv). Two visual performers control the light of two separate projectors whose imagery combines on translucent set pieces and costume. Sponsors: initial project funding was provided by On The Boards as part of the festival. Additional support for sets and visuals development was provided by 911 Media Arts Center (911media.org) with funds from the Andy Warhol Foundation for the Visual Arts (warholfoundation.org).
Daughters of Air image 1
Daughters of Air image 2
Daughters of Air image 3
Daughters of Air image 4

Extra-Dimensional Rift

April 2009
Video projector, computer, custom software in Processing. Dimensions variable. Exhibited at 108 Occidental Gallery, Seattle, 2009. Curated by Cait Willis.
generative, projection mapping
A reference to a plot device used in science-fiction and, particularly, fantasy books and games. The idea of an extra-dimensional rift is that a tear forms in the fabric of space-time, creating a passage to places, eras and even universes other than our own. The illusion of such a rift is created by projecting a simple generative animation on a surface. See also Scott Roberts’ Dimensional Rift.
Extra-Dimensional Rift image 1

Geo Chroma

September 2016
Custom software, fisheye camera, CPU, video wall. Installed at Progenics, One World Trade, NYC.
Geo Chroma is an interactive video wall installed in the lobby of the Progenics offices at One World Trade in New York, NY. The piece was commissioned by NBBJ Architects and produced by Samuel Stubblefield, a principal at the firm whose group focuses on environmental design. The piece uses motion tracking of passersby to generate colorful three-dimensional geometry that rises and fades over time. Colors and lighting in the piece are modified by the time of day and year, making the work an ever-changing response to both its immediate interior environment and the exterior conditions of seasons and hours.
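The time-of-day and time-of-year modulation could work along these lines; this is a minimal sketch of the idea, with all curves and constants assumed rather than taken from the installed code:

```python
import math

def ambient_palette(hour, day_of_year):
    """Derive a base hue from the season and a brightness from the hour,
    so the wall's colors track both the day and the year. The mapping
    (cool winter hues to warm summer hues, dimmer at night) is an
    illustrative assumption, not Geo Chroma's actual calibration."""
    # 0.0 at midwinter, ~1.0 at midsummer (northern hemisphere)
    season = 0.5 - 0.5 * math.cos(2 * math.pi * day_of_year / 365)
    # 0.0 at night, 1.0 at noon, assuming a 6:00-18:00 daylight window
    daylight = max(0.0, math.sin(math.pi * (hour - 6) / 12))
    hue = 200 - 160 * season            # cool blues -> warm ambers
    brightness = 0.3 + 0.7 * daylight   # never fully dark indoors
    return hue, brightness
```

Calling `ambient_palette(12, 172)` (noon, midsummer) yields a warm, bright palette, while `ambient_palette(0, 1)` (midnight, midwinter) yields a cool, dim one.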
Geo Chroma image 1
Geo Chroma image 2
Geo Chroma image 3
Geo Chroma image 4

Hyperlocal Zeitgeist: Dendroica

November 2015
HTML/JS/CSS/PHP, Instagram API Geography Endpoint, Google Static Maps API. Dimensions variable. Exhibited at Dendroica Gallery, curated by Martha Dunham.
code, css3, generative, html5, javascript, network, social media, web app
Project site: http://hyperlocal.culturevirus.org/dendroica/ Hyperlocal Zeitgeist (Dendroica) is an Internet-based artwork by Joseph Gray (grauwald.com) that displays public geotagged Instagram posts within 1.444 kilometers of the Dendroica Gallery in Seattle, WA. The area lies primarily in the Capitol Hill neighborhood but also includes portions of downtown and the Cascade neighborhood. The Instagram API is polled continuously for new posts, which are displayed in a documentary-style animation along with a map, generated by the Google Maps API, showing the tagged location within the geographic search radius. Time, location name, user handle and caption text are also displayed. Continued viewing of the piece begins to reveal the underlying cultural and social activity of the topography surrounding the gallery. The work gives a nod to traditional representations of landscape, capturing a locality in a moment of time and rendering it with the invisible metadata that permeates our contemporary physical reality. For inquiries about commissioning a custom version of this artwork centered on another geographic location, please contact Martha Dunham at http://dendroica.gallery/
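The 1.444 km search radius amounts to a great-circle distance check on each post's geotag. A haversine filter is one standard way to express it; the gallery coordinates below are placeholders for illustration, not the gallery's actual location:

```python
import math

GALLERY = (47.6145, -122.3210)   # placeholder (lat, lon), not the real address
RADIUS_KM = 1.444                # the search radius used by the piece

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))  # 6371 km = mean Earth radius

def in_search_radius(post_latlon):
    """True if a post's geotag falls inside the artwork's radius."""
    return haversine_km(GALLERY, post_latlon) <= RADIUS_KM
```

A post tagged at the gallery itself passes the filter; one tagged several kilometers north, outside Capitol Hill, does not.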
Hyperlocal Zeitgeist: Dendroica image 1
Hyperlocal Zeitgeist: Dendroica image 2
Hyperlocal Zeitgeist: Dendroica image 3
Hyperlocal Zeitgeist: Dendroica image 4
Hyperlocal Zeitgeist: Dendroica image 5

Infinite Sunset

July 2014
abstraction, canvas, data analytics, generative, html5, processing
A net art piece depicting the sun infinitely setting into an oceanic horizon. Color palettes are pulled from images found via Google’s image search. The piece was a runner-up in Google’s DevArt competition. View the work here: infinite-sunset.com
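One simple way to pull a palette from a found image, sketched here as an assumption about the general approach rather than the piece's actual code, is to quantize pixels into coarse color buckets and keep the most common ones:

```python
from collections import Counter

def dominant_colors(pixels, n=3, bucket=32):
    """Quantize (r, g, b) pixels into coarse buckets and return the n
    most common bucket centers as a palette. Bucket size 32 gives an
    8x8x8 color cube; both constants are illustrative choices."""
    counts = Counter(
        tuple((c // bucket) * bucket + bucket // 2 for c in px)
        for px in pixels
    )
    return [color for color, _ in counts.most_common(n)]

# e.g. a sunset-ish pixel sample: mostly orange, some deep blue, one green
sample = [(250, 120, 10)] * 5 + [(10, 10, 200)] * 3 + [(0, 255, 0)]
palette = dominant_colors(sample, n=2)
```

The quantization step makes near-identical pixels count toward the same palette entry, so a noisy photograph still yields a small, stable set of colors.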
Infinite Sunset image 1
Infinite Sunset image 2
Infinite Sunset image 3

At The Josephine

October 2009
generative, pop music, visual performance
Visual improvisations accompanying Ivory in Ice World and Kelli Frances Corrado. Excerpt from a visual performance at The Josephine, Seattle, 2009. Real-time animation with a custom Arduino-based controller and custom software written in Processing.
At The Josephine image 1

Light_Paper_Sound

February 2006
Joseph Gray (sculpture/video) and Gabriel Herbertson (audio) with original music by Beth Fleenor (clarinet) and Paris Hurley (violin). Hand-cut paper, string, fasteners, 3 digital projectors, 3 computers, 6 speakers, custom Flash application "aXiMaL". Dimensions variable (room sized). Exhibited at 911 Media Arts Center, Seattle, February-March 2006. Self-curated.
generative, new music, projection mapping, sculpture
This installation uses randomly cross-faded digital drawings projected on sculptural paper surfaces, synced with randomized audio tracks played over six speakers. The overall effect creates a unique environment of dynamic color and sound.
Light_Paper_Sound image 1
Light_Paper_Sound image 2
Light_Paper_Sound image 3
Light_Paper_Sound image 4
Light_Paper_Sound image 5

MapSculpt

September 2011
Laser-cut acrylic sheet, spandex (pearlized surface), thread, video projector, computer, custom software in Processing. Dimensions variable. Exhibited at Cornish College of the Arts Main Gallery, Seattle, 2011. Curated by Cable Griffith. (Photos: Zac Culler)
generative, installation, projection mapping, sculpture
mapSculpt is a digitally projection-mapped, bas-relief landscape painting. The physical sculpture is constructed of spandex with a projection-screen coating sewn to laser-cut acrylic supports. A ceiling-mounted digital projector casts the image of a landscape with the same proportions as the sculpture. The real-world object and virtual model align, physically, to be perceived as a single construct. The design of mapSculpt utilizes the technique of height-mapping, found primarily in 3D video games. Height-mapping translates a flat 2D grayscale image into a 3D mesh, where the height of each vertex in the mesh is determined by the brightness of the corresponding pixel in the 2D image. Both the physical and virtual components of mapSculpt use a 16×9 pixel image as their blueprint; as a result there are 144 acrylic supports and 144 corresponding vertices in the 3D mesh. The virtual image is texture-mapped with a Perlin noise algorithm modified by the original height map and colored with a woodland palette. A cutoff, also based on the height map, determines whether a pixel is rendered with the yellows and sepias found in some pastures or set to the color range of a dense forest seen from above. This texture is much higher resolution, lending a more visually complex surface variation to the low-polygon model it is applied to. A virtual waterline is created using undulating translucent planes. Passing virtual lights mimicking a sun and moon rotate around the 3D model. A suite of custom software is used to both construct and present the piece, allowing near-infinite variation (and improvement) on the same theme and technique. The software is written by the artist in the Processing programming language. The sculpture can be stored in a small box, transported, then set up again with different hardware; because the software is written in Processing, it is also easily portable.
The software suite also includes a tool to create the laser paths for fabrication and a tool for creating the image file, and is therefore completely “stand-alone” in terms of creating new variations of the artwork.
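The core height-mapping step is straightforward to sketch: each grayscale pixel becomes one vertex whose height is its brightness. This toy version (Python rather than Processing, with an arbitrary test pattern standing in for the actual blueprint image) shows how a 16×9 image yields exactly 144 vertices:

```python
def heightmap_to_mesh(gray, scale=1.0):
    """Turn a 2D grayscale image (rows of 0-255 values) into a list of
    (x, y, z) vertices, with z proportional to pixel brightness: the
    height-mapping technique both the sculpture and its virtual twin
    are built from."""
    verts = []
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            verts.append((x, y, scale * v / 255.0))
    return verts

# a 16x9 blueprint yields 144 vertices, matching the 144 acrylic supports;
# the pattern here is arbitrary, not mapSculpt's actual landscape image
blueprint = [[(x * y * 7) % 256 for x in range(16)] for y in range(9)]
mesh = heightmap_to_mesh(blueprint)
```

The same 144 heights that position the mesh vertices also set the lengths of the laser-cut acrylic supports, which is what keeps the physical and virtual forms in register.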
MapSculpt image 1
MapSculpt image 2
MapSculpt image 3
MapSculpt image 4
MapSculpt image 5

Night Rider

December 2008
24 dual-head tripod halogen worklights, theater lighting gels, 6 DMX dimmer packs, 6 DMX cables, programmable DMX controller, several hundred feet of power cables, a hundred or so cable ties. Dimensions variable. Elements Too Building, Bellevue, Washington. Curated by Van Diep. Technical and fabrication support: Joel Cain, Michael Foltz, Randy Moss, Christopher Overstreet, Rebekah Slavin. Community support: Abigail Guay and Laura O'Quin of http://www.opensatellite.org/ Steven Vroom and Ian Harrison of http://www.911media.org/
architecture, electronics, light sculpture
24 industrial work lights are hooked up to 6 DMX lighting-control dimmer packs. The lights are arranged across an entire floor of a downtown Bellevue, Washington construction site. A stand-alone DMX controller sends an oscillating wave of light-fade commands, creating an undulating stripe of light flowing across the width of the building. The work was a commission from Su Development and the Elements Too building to celebrate the holiday season and enliven the Bellevue skyline during the darkest month of the year. Additional support came from Open Satellite and 911 Media Arts Center. See also the Night Rider blog. Press: Seattle Weekly, Downtown Bellevue Network.
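The oscillating wave of fade commands amounts to sampling a traveling sine wave at each fixture's position every frame. A minimal sketch, with the wave frequency and spatial period assumed for illustration (DMX dimmer levels are 0-255):

```python
import math

NUM_LIGHTS = 24  # one DMX channel per worklight, 4 lights per dimmer pack

def frame_levels(t, freq=0.2):
    """One frame of dimmer levels for the 24 lights: a sine wave travels
    across the fixture positions, producing the undulating stripe.
    freq (Hz) and the one-wavelength spatial period are illustrative."""
    levels = []
    for i in range(NUM_LIGHTS):
        phase = 2 * math.pi * i / NUM_LIGHTS   # position along the floor
        v = 0.5 + 0.5 * math.sin(2 * math.pi * freq * t - phase)
        levels.append(int(v * 255))
    return levels
```

Sending `frame_levels(t)` to the dimmer packs at a steady rate makes the bright region sweep from one end of the building to the other; halogen lamps fade slowly, which smooths the wave further.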
Night Rider image 1

NYC4ft3rm4th

November 2007
Two video projectors, two computers, concrete blocks, scrap wood, wire, plexiglass, styrofoam, custom patch in Quartz Composer. Dimensions variable. Exhibited at the TK Building, Studio G, Seattle, 2007.
assemblage, new rave, projection mapping
A hastily created installation furthering the study of video projection augmenting sculptural surfaces. A random collection of used building materials is fashioned into a generic portrait of a city skyline. Bright colors and patterns flash on the various surfaces, governed by a series of LFOs. The piece was inspired by the artist’s first trip to NYC (and the East Coast of the US as a whole) and a series of visits to various other North American cities. The modern skyline is fairly consistent across these cities, begging the question: what is the difference between them in a post-contemporary, communications-overloaded age? It seems to be mostly a difference in weather, though with modern travel it is possible never to leave a climate-controlled environment while globe-trotting: going from the condo, to the taxi, to the airport, to the airplane, to the hotel, to the conference center, to the mall and back again. The work is also inspired by the New Rave movement then being carefully constructed as the next “big thing,” primarily in London, NYC and Tokyo. New Rave is a conglomerate of former popular styles, assembled in a very self-conscious way; it is by definition not meant to be taken seriously. New Rave could be considered a deeply pop-educated extension of rave culture, punk rock and hip-hop. Its distinguishing characteristic is sound and color that immediately barrage the senses; chiaroscuro on growth hormones.
NYC4ft3rm4th image 1
NYC4ft3rm4th image 2
NYC4ft3rm4th image 3
NYC4ft3rm4th image 4
NYC4ft3rm4th image 5

Outpost Basel

June 2015
Projection on architectural surface, custom code in Processing, 3 projectors, 3 CPUs, 3 depth cameras, cables, network. Design Miami 2015 Basel, Switzerland. In collaboration with Olson Kundig and Reilly Donovan.
abstraction, architecture, generative, installation, interactive, network, processing, projection mapping
Seattle-based architecture firm Olson Kundig invited Reilly Donovan and me to create an interactive, projection-mapped installation within the VIP Collectors Lounge they produced for Design Miami/Basel 2015 in Basel, Switzerland. Design Miami is the sister fair of Art Basel, focusing primarily on interior design, art jewelry and architecture; the exclusive lounge serves as a resting spot for collectors attending the fair. We arrived in Basel with a few code sketches and then worked in the environment itself for a week before the fair opened, so that the piece could be optimized both aesthetically and technically to the temporary fabricated environment Olson Kundig created. Beforehand, arrangements were made to install three projectors, each driven by its own CPU, covering three walls of the lounge to immerse visitors in generative light art. This system acted as the paint and plaster of an immersive generative mural. Custom code written in Processing generated the brick pattern that precisely projection-mapped the physical environment. A calibration mode brought up custom controls to set brick size and spacing in situ, and four-corner-pin keystoning allowed us to accurately match the architectural surface. Three depth cameras tracked general movement in the room; this tracking data was used to subtly offset the brick mappings themselves, creating a sense, often out of the corner of one’s eye, that the walls were breathing slightly. Each brick was coded as a discrete object in our application, and an individual brick’s transparency was governed by slightly offset LFOs, creating the cascading pattern. Additional horizontal lines constantly drifted upwards, creating the gradient effect. Vertical and horizontal lines traced the surface, also coded as independent objects controlling their own animation. The entire time scale was slowed to an architectural pace, to create a relaxing, conversational and pensive environment.
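Corner-pin keystoning maps the rendered image's normalized coordinates onto four pinned screen corners. The simplest form is bilinear interpolation, sketched below; the installation may well have used a true projective (homography) warp instead, so treat this as an illustration of the idea, with made-up corner values:

```python
def corner_pin(u, v, corners):
    """Map normalized (u, v) in [0,1]^2 to screen space by bilinearly
    interpolating four pinned corners, ordered [tl, tr, br, bl].
    Dragging any corner in the calibration UI rewarps the whole image."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners  # tl, tr, br, bl
    # interpolate along the top and bottom edges, then between them
    top = (x0 + u * (x1 - x0), y0 + u * (y1 - y0))
    bot = (x3 + u * (x2 - x3), y3 + u * (y2 - y3))
    return (top[0] + v * (bot[0] - top[0]), top[1] + v * (bot[1] - top[1]))

# illustrative pins: a slightly skewed quadrilateral on a projector raster
pins = [(0, 0), (100, 0), (110, 90), (-10, 80)]
```

Evaluating `corner_pin` at each brick's (u, v) position lands it on the physical brick even when the projector is mounted off-axis.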
Outpost Basel image 1
Outpost Basel image 2
Outpost Basel image 3
Outpost Basel image 4
Outpost Basel image 5

S34sc4p3

February 2010
Paper, PVA, fishing hooks, wire, video projector, computer, IR webcam, custom software in Processing. Dimensions variable. Exhibited at the Seattle Art Museum Gallery, Seattle, 2010. Curated by Barbara Shaiman.
interactive, projection mapping, sculpture
S34sc4p3 is a multi-media interactive installation that tracks movement outside the gallery to control generative wave animations projected onto a sculptural surface suspended in the gallery window. The piece is best observed at night. The custom software was written in Processing, a language designed for artistic applications.
S34sc4p3 image 1
S34sc4p3 image 2
S34sc4p3 image 3

Light_Paper_Noise

March 2007
Paper, string, microphone, video projector, computer, software patch in Quartz Composer. Dimensions variable. Exhibited in Paperwork at Unit B Gallery, San Antonio, Texas. Curated by Catherine Walworth.
generative, interactive, sculpture
Light_Paper_Noise is similar to the earlier work Light_Paper_Sound, except that rather than generating new mixes of audio from recordings, it listens to the ambient sound of the room to affect the real-time animation projected on the paper surface. The piece was created entirely in the gallery within a few days’ time. Catherine Walworth curated the group exhibition of paper-based works, entitled Paperwork, which also included artists Rhonda Kuhlman (San Antonio, TX), Sachi Komai (Madison, WI) and Michael Velliquette (Madison, WI/San Antonio, TX). The photographs show how the piece appeared during daylight hours.
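Driving an animation from ambient room sound typically reduces to measuring the loudness of each incoming audio buffer and mapping it to an animation parameter. A minimal sketch of that mapping, with the gain constant assumed for illustration:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an audio buffer (samples in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def animation_energy(samples, gain=4.0):
    """Map room loudness to a 0-1 animation parameter: a minimal stand-in
    for how microphone input could drive the projected drawing. The gain
    is an illustrative tuning constant, not the patch's actual value."""
    return min(1.0, gain * rms(samples))
```

A silent room yields 0.0 and the animation rests; conversation or music pushes the parameter toward 1.0, with the cap preventing loud spikes from blowing out the visuals.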
Light_Paper_Noise image 1
Light_Paper_Noise image 2
Light_Paper_Noise image 3
Light_Paper_Noise image 4

Schroedinbug Descending a Staircase

July 2011
Keith Tilford (sculpture) & Joseph Gray (software). Projection-mapped interactive sculpture. Hand-cut acrylic and acetate sheet with acrylic paint. Custom software written in Processing running on two computers, two digital projectors, two IR webcams. Wall mounted, approx. 8'w x 4'h x 2'd. Exhibited at Ghost Gallery, Seattle, 2011. Curated by Laurie Kearney. Documentation photography and videography by Gabriel C. Herbertson.
installation, interactive, projection mapping, sculpture
Schroedinbug Descending a Staircase image 1
Schroedinbug Descending a Staircase image 2
Schroedinbug Descending a Staircase image 3
Schroedinbug Descending a Staircase image 4
Schroedinbug Descending a Staircase image 5

Suite For String Quartet

July 2009
Composition by Zachary Watkins (zacharyjameswatkins.com). Excerpt from a visual performance at The Lab (thelab.org), San Francisco, California, as part of the Collision Performance Series. Real-time animation with a custom Arduino-based controller and custom software written in Processing.
new music, visual performance
Suite For String Quartet is a minimalist composition for strings and electronic audio processing. The piece is based on long tones with improvisational variations within complex chord structures built in a modern tuning. The duration of the piece is 30 minutes; an excerpt is shown in the video.
Suite For String Quartet image 1
Suite For String Quartet image 2
Suite For String Quartet image 3
Suite For String Quartet image 4
Suite For String Quartet image 5

Design Collaborations

29Rooms 2016

September 2016
advertising, fashion, generative, interactive, processing
Refinery29 commissioned Reilly Donovan and me to create an interactive video wall for 29Rooms, their 2016 fashion week event. We created a custom-coded platform integrating designs from Refinery29’s creative team with custom generative animation. The visual style was in support of Fossil’s Q watch line, the room’s sponsor. The software, dubbed “29Mirrors”, tracked visitors’ outlines using a depth camera and filled them with colorful animations against various backdrops. Over the course of the weekend event, thousands of visitors played with the installation and posted hundreds of photos and videos of their virtual silhouetted selves to social media.
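The silhouette step can be illustrated with a minimal sketch, assuming the depth camera delivers a flat array of millimetre distances (the actual build used a depth camera SDK; the near/far planes here are invented):

```javascript
// Visitors standing between the near and far planes become a 1/0 mask that
// the renderer can fill with generative animation.
function silhouetteMask(depthMM, nearMM, farMM) {
  return depthMM.map((d) => (d >= nearMM && d <= farMM ? 1 : 0));
}

// One row of a toy depth frame: only the middle distances are "visitors".
const frame = [500, 1200, 2100, 3500, 1800];
const mask = silhouetteMask(frame, 1000, 2500); // → [0, 1, 1, 0, 1]
```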
29Rooms 2016 image 1
29Rooms 2016 image 2
29Rooms 2016 image 3
29Rooms 2016 image 4
29Rooms 2016 image 5

Coca-Cola: Ahh Posse

April 2013
canvas, html5, javascript, web app
An HTML5 canvas interactive toy for Coca-Cola’s ambitious Ahh campaign. The physics are provided by a JavaScript port of the Box2D library. The number of bubbles that appear is based on screen pixel size, which lets smaller devices perform fewer calculations, creating smoother animations even on handheld devices. Project site: ahhhh.com Client: Coca-Cola Agency: WK Production: Superfad Producer: Beck Henderer-Peña Art Direction: Gretchen Nash Creative Coding: Joseph Gray
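The device-scaled particle budget amounts to something like the sketch below; the per-pixel density and clamp values are illustrative, not the production numbers.

```javascript
// Scale the bubble count with screen area so small devices run a cheaper
// Box2D step, clamped so the scene never looks empty or overwhelms a phone.
function bubbleCount(widthPx, heightPx, perPixel = 1 / 20000, min = 20, max = 400) {
  const n = Math.round(widthPx * heightPx * perPixel);
  return Math.min(max, Math.max(min, n));
}

bubbleCount(320, 480);   // small phone → clamped to the minimum of 20
bubbleCount(1920, 1080); // desktop → roughly a hundred bubbles
```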
Coca-Cola: Ahh Posse image 1
Coca-Cola: Ahh Posse image 2
Coca-Cola: Ahh Posse image 3

Alibi Room

January 2005
advertising, print
Some design work for the Alibi room during the time of their 10th anniversary, mostly print ads, plus one party invitation.
Alibi Room image 1
Alibi Room image 2
Alibi Room image 3
Alibi Room image 4
Alibi Room image 5

Autotrader.ca: Autolyzer

May 2012
as3, data analytics, facebook, flash, real-time compositing
The Autolyzer pulls in your likes from Facebook and suggests three cars based on your proclivities. We used a weighting system of various likeable categories in Facebook and compared them to various automobiles whose appeal correlated to these categories. For example, if you liked several construction companies and hardware stores, you’d be more likely to get a work truck as a result. Real-time compositing of 2D imagery into 3D-rendered scenes also required some technical mastery of four-corner pinning in AS3. Photos of relevant Facebook friends (again selected using custom algorithms) are integrated into the Autolyzer’s various “machine rooms” to convey the concept that the system is making a detailed analysis of your preferences. The Autolyzer campaign was developed at DDB Canada, Toronto, by creative directors Todd Mackie, Denise Rossetto, copywriter Daniel Bonder, art director Pete Ross, senior cultivator Melissa Smich, agency producer Luc Quartarone, technologist Joe Dee, strategist Kevin McHugh, senior vice president Michael Davidson, account director Peter Brough, account supervisor Carly Sutherland, account coordinator Lindy Scott, PR manager James Loftus, PR supervisor Gabrielle Totesau and senior PR consultant Erin Bodley. Production was done at Superfad, Seattle, by executive creative director Will Hyde, art director Gretchen Nash, editor Ryan Haug, senior interactive producer Beck Henderer-Peña, producer Aimée Safko, executive producer Chris Volckmann, simulation/VFX artist Phipat Pinyosophon, lighting/texturing artists Andrew Butterworth and Kyle Humphrey, animators Patrick Clarke and Greg Bekken, lead developer Joseph Gray, Flash developer Brent Watanabe, lead 3D artist Matt Guzzardo, lead modeler/texture artist Adam Rosenzweig, lead compositor Joel Voelker, and compositing team Kaleb Coleman, Paul Barkshire and Paul Cantor. Sound was designed and mixed at Grayson Matthews.
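The weighting idea can be sketched in JavaScript rather than the original AS3; the categories, vehicle types and weight values below are invented purely for illustration.

```javascript
// Each likeable category carries weights toward vehicle types; the
// top-scoring types across all of a user's likes become the suggestions.
const CATEGORY_WEIGHTS = {
  construction: { truck: 3, suv: 1 },
  hardware:     { truck: 2, suv: 1 },
  fashion:      { coupe: 3, convertible: 2 },
};

function suggestCars(likedCategories, topN = 3) {
  const scores = {};
  for (const cat of likedCategories) {
    for (const [car, w] of Object.entries(CATEGORY_WEIGHTS[cat] || {})) {
      scores[car] = (scores[car] || 0) + w;
    }
  }
  return Object.entries(scores)
    .sort((a, b) => b[1] - a[1]) // highest combined weight first
    .slice(0, topN)
    .map(([car]) => car);
}

const picks = suggestCars(["construction", "hardware"]); // truck ranks first
```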
Autotrader.ca: Autolyzer image 1
Autotrader.ca: Autolyzer image 2
Autotrader.ca: Autolyzer image 3

Bill Patton: Get's it On

January 2006
album art, print
Album cover design for modern folk balladeer Bill Patton’s first release. The unicorn hidden inside the jacket was actually a horse in Eastern Washington.
Bill Patton: Get's it On image 1
Bill Patton: Get's it On image 2
Bill Patton: Get's it On image 3
Bill Patton: Get's it On image 4
Bill Patton: Get's it On image 5

Biologistex UX/UI refresh

June 2017
UI/UX Client: Biologistex
design, illustrator, ui, ux
Biologistex is a medical shipping company that creates IoT-enabled cold storage shipping containers. They engaged me in late 2016 and early 2017 to redesign their online shipping application, both in terms of user flow and interface design. The process began with a high-level analysis of their existing app and an audit of all of its functionality. This audit produced a functional outline that hadn’t previously existed, as the earlier UI had grown organically while the product was being developed. Using the outline as a guide, wireframes were then developed and, once approved, given a refreshed design polish. Google’s Material Design language was used, as it lent itself naturally to the functional needs of the app. The Biologistex containers are tracked with an app designed primarily for tablets and desktop computers that provides location and various sensor data, along with the ability to create and manage shipments. Because the contents are bio-materials, the app needs to show the internal and external environment at fairly regular intervals and send alerts when sensors detect that conditions have gone out of range. This created certain challenges in how to intuitively present the combination of shipment and individual device data. The resulting high-level comps of the suggested redesign are shown in the slides above.
Biologistex UX/UI refresh image 1
Biologistex UX/UI refresh image 2
Biologistex UX/UI refresh image 3
Biologistex UX/UI refresh image 4
Biologistex UX/UI refresh image 5

Ekata Technology Explainer

August 2020
Client: Ekata Agency: Schema Design Role: Creative Technologist URL: https://content.ekata.com/interactive-verifying-global-identities-at-scale.html
Ekata is a world leader in identity verification for online and financial transactions. This microsite explains to C-suite clients how Ekata’s technology works and why it is robust and secure. At Schema I worked with creative technologist colleague Caprice Carstensen to build out this brief, yet deceptively complex, scroll-based interactive from designs created by art director Jeff Paletta. We tag-teamed the development of the piece, with Caprice focusing on the D3 network data visualizations and myself working on the responsive page layout, scrolling engine, transition animations and general content flow.
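The core of a scrolling engine like this is mapping the scroll offset to a section index and a 0..1 progress value that drives the transition animations. A minimal sketch, assuming fixed-height sections (the production build was more involved):

```javascript
// Given the scroll offset, report which section is active and how far
// through it the user is (t in 0..1), clamped to the document's extent.
function sectionProgress(scrollY, sectionHeight, sectionCount) {
  const clamped = Math.max(0, Math.min(scrollY, sectionHeight * sectionCount - 1));
  const index = Math.floor(clamped / sectionHeight);
  const t = (clamped - index * sectionHeight) / sectionHeight;
  return { index, t };
}

const pos = sectionProgress(1500, 1000, 4); // halfway through section 1
```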
Ekata Technology Explainer image 1
Ekata Technology Explainer image 2
Ekata Technology Explainer image 3
Ekata Technology Explainer image 4
Ekata Technology Explainer image 5

Floss: Vitamin A

March 2010
Album artwork for Monktail band Floss. Majorly “out jazz”, as they say.
Floss: Vitamin A image 1
Floss: Vitamin A image 2
Floss: Vitamin A image 3
Floss: Vitamin A image 4

Gallo: Crest Creator

September 2013
angular.js, canvas, css3, html5, javascript
A fun tool for creating family crests that you can share online and even have printed on t-shirts and mugs. This project involved a variety of technical and UI/UX complexities to keep it feeling natural and smooth on both desktop and mobile. Angular.js managed data bindings, a Node.js server handled server interactions, and some tricky integration of jQuery UI and CSS3 transitions rounded the piece out. Project Site: crestcreator.com Client: Gallo Wineries Agency: BBDO Production: Royale
Gallo: Crest Creator image 1
Gallo: Crest Creator image 2
Gallo: Crest Creator image 3
Gallo: Crest Creator image 4
Gallo: Crest Creator image 5

Garden of Health

April 2019
Client: Adventist Health Agency: Schema Design Role: Creative Technologist
data visualization, generative, installation, medical, three.js, web app, webgl
Adventist Health is a California-based medical provider with preventive medicine as a core part of its practice. Schema Design was commissioned to create a data visualization for their Roseville HQ lobby utilizing near real-time patient interactions with the health network across physical sites. This project was my first collaboration with the Schema Design studio, who brought me in midway through the project to provide both creative and technical support. By the time I arrived, the concept of a garden of flowers filled with butterflies had been approved by the client: flowers represented patients currently receiving care within the health network, and butterflies represented the healthcare providers working with them. Creative director Sergei Larionov and technical director Jeff MacInnes had made significant headway with the concept, data wrangling, and an overall development framework based on three.js. The data wrangling, complete by the time I joined, was particularly complex, as the personally sensitive patient data had to be anonymized and offset before it could be shared via a secure API. My contribution began with the client not being completely satisfied with the initial design direction. After becoming familiar with their feedback, I recommended we take classical still life painting as a visual reference, particularly baroque-era paintings of flower arrangements. This direction was brought to the client and approved. To achieve the look, we created a flower-designing web app just for this project that allowed parameters to be adjusted to create unique, fantastical breeds of flowers. These flowers used GLSL shaders to modify both their coloring and shape. The custom shaders were built from scratch to evoke the aesthetics of still life painting, including a custom lighting shader that gives the flowers a dramatic chiaroscuro effect. Additional animation effects, like wind and changes in the time of day, were also generated using shaders.
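The parameter-driven flower idea can be illustrated with a classic "rose" polar curve, sampled into 2D points; the real app drove GLSL shaders, so this JavaScript sketch only shows the parameter-to-shape mapping, with invented parameter names.

```javascript
// Sample a petaled outline: petal count and radius are the adjustable
// "breed" parameters, and each sample is an [x, y] point on the curve.
function flowerOutline(petals, radius, steps = 360) {
  const pts = [];
  for (let i = 0; i < steps; i++) {
    const theta = (i / steps) * 2 * Math.PI;
    const r = radius * Math.abs(Math.cos(petals * theta)); // petal lobes
    pts.push([r * Math.cos(theta), r * Math.sin(theta)]);
  }
  return pts;
}

const outline = flowerOutline(5, 100); // a five-petaled bloom, radius 100
```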
Garden of Health image 1
Garden of Health image 2
Garden of Health image 3
Garden of Health image 4
Garden of Health image 5

Grindline Skateparks

September 2014
css3, google maps api, html5, javascript, wordpress
A refresh for Grindline Skateparks, a top-rated designer and builder of custom concrete skateparks for civic and private clients, based in Seattle’s Duwamish River area. I was honored to create their new site, focusing particularly on designing an interface for skaters to find their parks and for civil engineers to learn about Grindline’s professional qualifications. Project Site: grindline.com
Grindline Skateparks image 1
Grindline Skateparks image 2
Grindline Skateparks image 3

HBOD 2015 Video Wall

June 2015
Her Brain on Digital (HBOD) is an annual industry event hosted by fashion influencer Refinery29, where leaders from major brands gain insights into the latest digital marketing techniques. Reilly Donovan and I (under the moniker Glymmer) were brought on board in 2015 to create a 96-foot-long interactive video wall developed in coordination with R29’s design team, collaborating remotely between Seattle and NYC. The piece was installed in less than 24 hours at Industria Superstudios in NYC for the one-night-only event. A smaller two-screen version of the wall also travelled to the HBOD 2015 event in Los Angeles, where it was installed by Benjamin Van Citters and Brandon Aleson. The NYC version used eleven HD screens scattered up the entry ramp along one side. Eight Microsoft Kinects tracked attendees’ movement as they entered, and later left, the event, triggering videos and quotes as they passed by. Subtle background graphics shifted in response to the general flow of foot traffic. Each display was powered by its own computer, networked together to sync graphics across screens. Schematics, install logistics and custom software (written in Processing) were developed in the month and a half prior to the event. Audience, and client, response to the entry was overwhelmingly positive. Press from BizBash.com
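One common way to keep networked displays in step, sketched here as an assumption about the approach rather than a record of it: have every machine derive the current frame from a shared start time instead of its own frame counter, so drift in per-machine render loops never accumulates.

```javascript
// All machines agree on startMs (broadcast once over the network); each then
// computes the frame to draw from wall-clock time, so they stay in sync.
function frameAt(nowMs, startMs, fps = 30) {
  return Math.floor(((nowMs - startMs) / 1000) * fps);
}

// Two displays that share startMs render the same frame for the same instant.
const start = 1000;
frameAt(start + 2500, start); // 2.5 s in at 30 fps → frame 75
```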
HBOD 2015 Video Wall image 1
HBOD 2015 Video Wall image 2
HBOD 2015 Video Wall image 3
HBOD 2015 Video Wall image 4
HBOD 2015 Video Wall image 5

Ivory in Ice World: Self-Title EP

March 2010
album art, illustration, print
Ivory in Ice World was an incarnation of Ivory Smith’s modern ballads with a fantastic line-up of musicians. This album was their one and only release. (Full disclosure: Ivory is also my wife… )
Ivory in Ice World: Self-Title EP image 1
Ivory in Ice World: Self-Title EP image 2
Ivory in Ice World: Self-Title EP image 3
Ivory in Ice World: Self-Title EP image 4

Local Roots Farm

April 2014
css3, html5, wordpress
A refresh of the original site I did for Local Roots Farm in Duvall, Washington. The design was created in Adobe Illustrator and then translated into a responsive WordPress theme. Project Site: localrootsfarm.com
Local Roots Farm image 1

Microsoft: Focus/Forests

March 2020
Client: Microsoft Agency: Schema Design Role: Creative Technologist URL: https://focusforests.microsoft.com/#discover
css3, data visualization, gis, javascript, mapbox, python
Microsoft commissioned this piece to showcase their AI cloud computing capabilities. Using Silvia Terra’s (now NCX) vast dataset of forest composition across the continental US, the site doubles as a story about the effects, and improvements, we as humans can have on our beloved forests. This is one of the first projects I collaborated on with data visualization studio Schema Design, working with fellow creative technologist Caprice Carstensen. We started with a brief on the story they wanted to tell, wrangled insanely large forestry datasets with help from then technology director Jeff MacInnes, concepted an interactive map-based UX and an engaging scroll-based UI, and implemented the whole thing with polished fit and finish from cleaned-up static designs by art director Jeff Paletta. We used a combination of custom Mapbox layers with bespoke PNG renderings of the large datasets, wrangled and exported using Python. These PNG files were then hand-finished in Photoshop to give a painterly blending between basemap resolution levels as the user zooms in and out of forests and explores the data story. 3D charts aligned to maps of each showcased forest region provide a visceral view of the risks these forests face. All of this was wrapped in a responsive scrolly-telling environment for a smooth user experience.
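The data-to-pixels step can be illustrated in miniature: a grid of per-cell forest values becomes an RGBA buffer, one pixel per cell, ready to be written out as a map tile. The actual exports used Python; this JavaScript sketch and its color ramp are purely illustrative.

```javascript
// Normalize each value into 0..1 and map it along a sparse-to-dense green
// ramp; the output is a flat RGBA byte buffer (4 bytes per cell).
function gridToRGBA(values, vmin, vmax) {
  const out = new Uint8ClampedArray(values.length * 4);
  values.forEach((v, i) => {
    const t = vmin === vmax ? 0 : Math.max(0, Math.min(1, (v - vmin) / (vmax - vmin)));
    out[i * 4 + 0] = Math.round(34 + t * (180 - 34)); // red channel of the ramp
    out[i * 4 + 1] = Math.round(85 + t * (220 - 85)); // green channel
    out[i * 4 + 2] = 34;                              // constant blue
    out[i * 4 + 3] = 255;                             // fully opaque
  });
  return out;
}

const px = gridToRGBA([0, 0.5, 1], 0, 1); // three pixels along the ramp
```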
Microsoft: Focus/Forests image 1
Microsoft: Focus/Forests image 2
Microsoft: Focus/Forests image 3
Microsoft: Focus/Forests image 4
Microsoft: Focus/Forests image 5

OK OK Inc.

January 2005
advertising, print
Various print work for OK OK, a former Capitol Hill (Seattle) toy, clothing and curiosity store. It occupied the same auspicious location where Ada’s Technical Books started and where Freeman Clothing is now located. OK OK’s owners now run Ambach & Rice, a contemporary gallery based in Los Angeles.
OK OK Inc. image 1
OK OK Inc. image 2
OK OK Inc. image 3
OK OK Inc. image 4
OK OK Inc. image 5

PacSci Data Exhibit

January 2020
Client: Pacific Science Center Agency: Schema Design Role: Creative Technologist
data visualization, education, installation, interactive, museum, processing, projection mapping
The Pacific Science Center (PacSci) in Seattle commissioned Schema Design to concept, design and produce an exhibit that teaches young museum visitors what data is, on the most fundamental level, and explores what the outcomes of sharing individual data can be. I was brought in later in the design/ideation phase and helped steer the overall vision and concept after a few initial drafts weren’t sticking with the client. Working with designer Dwayne Franco and content strategist (and artist) Eryn Kendig, we developed an experience model around whimsical questions about participants’ opinions on specifically Pacific Northwest concerns, such as whether the Sasquatch is real, along with a few personal, non-identifying demographic questions. This was combined with a highly abstracted design system of particles whose attributes represented each opinion from the survey. For instance, if a participant preferred sun over rain, their particle would have a warm color gradient, while those who professed belief in Sasquatch would get a “hairy” outline treatment. Once a survey was complete, participants could optionally share their particle to a larger projection screen, where it joined others to be sorted by various cross-references of demographic and opinion data from the survey. Along with exhibit designers at PacSci, I worked on the general layout of physical screens, hardware selection and infrastructure, and general UX and UI. PacSci provided the physical and interior design from their internal team, riffing off the design system we created at Schema. Once the core creative was approved, Dwayne Franco created polished layouts based on rough comps and user flows I created. Creative director Sergei Larionov provided final design guidance and on-site fit-and-finish to the digital aspects of the interactive during final installation, with things like typography updates and layout adjustments to make everything feel right in the actual physical environment. I led software development, along with technical infrastructure and the digital install, with the help of developer Anna Peng and tech wizard Jeff MacInnes. As is often the case with fairly complex on-site interactive installations, some of the software development was done literally in the exhibit space so we could tailor everything to feel as natural as possible to our audience of enthusiastic young data scientists.
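The survey-to-particle design system amounts to a direct encoding of one visitor's answers into visual attributes; a minimal sketch, with attribute names invented for illustration (the installation itself was built in Processing):

```javascript
// Each survey answer maps to exactly one visual attribute, so every particle
// on the shared projection is a readable encoding of one visitor's responses.
function particleFor(answers) {
  return {
    gradient: answers.prefersSun ? "warm" : "cool",          // sun vs rain
    outline: answers.believesSasquatch ? "hairy" : "smooth", // Sasquatch belief
    size: answers.ageGroup === "kid" ? "small" : "large",    // demographic
  };
}

const p = particleFor({ prefersSun: true, believesSasquatch: true, ageGroup: "kid" });
// p → { gradient: "warm", outline: "hairy", size: "small" }
```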
PacSci Data Exhibit image 1
PacSci Data Exhibit image 2
PacSci Data Exhibit image 3
PacSci Data Exhibit image 4

Rebuilding U.S.-China Climate Cooperation

January 2021
Client: Asia Society Policy Institute Agency: Schema Design Role: Creative Technologist URL: https://asiasociety.org/policy-institute/us-china-climate#hero_scroll_video
css3, d3, data visualization, gis, javascript, web app
The Asia Society Policy Institute is “the leading force in forging closer ties between Asia and the West through arts, education, policy and business outreach.”  This project lays out the case for U.S.-China cooperation on climate policy: as the two largest carbon producers in the world, these nations can have an outsize impact on the future of climate change through their cooperation. Working at Schema Design, I built the landing page’s opening interactive from static designs by creative director Sergei Larionov. The piece uses d3.js and d3-geo to create the interactive visualizations from data provided by the client.  The most challenging, and interesting, part of the build was using HTML canvas, rather than the typical SVG rendering used in d3, to render the spinnable globe from geographic climate data.  This allowed much more performant real-time rendering, even on handheld devices.
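The orthographic (“spinnable globe”) projection that d3-geo implements boils down to a little spherical trigonometry; here is a minimal sketch of that math in Python for illustration (the production build used d3.js and canvas, not this code):

```python
import math

def orthographic(lon, lat, rot_lon=0.0, radius=1.0):
    """Project lon/lat (degrees) to 2-D plane coordinates with an
    orthographic projection, rotated rot_lon degrees around the poles.
    Returns None for points on the far side of the globe."""
    lam = math.radians(lon - rot_lon)
    phi = math.radians(lat)
    # Points where cos(phi) * cos(lam) < 0 face away from the viewer.
    if math.cos(phi) * math.cos(lam) < 0:
        return None
    x = radius * math.cos(phi) * math.sin(lam)
    y = radius * math.sin(phi)
    return (x, y)
```

With canvas, each animation frame simply re-projects the geometry at a new `rot_lon` and redraws, avoiding the per-node DOM cost that makes SVG globes sluggish on handheld devices.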
Rebuilding U.S.-China Climate Cooperation image 1
Rebuilding U.S.-China Climate Cooperation image 2
Rebuilding U.S.-China Climate Cooperation image 3
Rebuilding U.S.-China Climate Cooperation image 4
Rebuilding U.S.-China Climate Cooperation image 5

Ripple Effect

March 2015
Interactive agency Possible commissioned Reilly Donovan and me to create a generative floor projection to enrich the on-site experience of an event thrown by Microsoft OneNote at Pennsylvania State University for their Collective Project series. The event’s theme, ripple effect, was based on a quote from student Neha Gupta, whom the event honored: “It is our time to be the igniters of change. Find a cause that touches your heart. Convert your empathy into action and let those actions ripple out.” Neha is a champion of access to education for orphans in her home country of India and worldwide. To give a unique energy to the venue, we created and installed a custom piece of software projected on the floor throughout the space. Whenever a Twitter post was detected with the hashtag #rippleeffect or #collectiveproject, an animated ripple would spread out from a random location on the floor. The entire piece was concepted, approved by the client, and executed in less than three weeks.
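The trigger logic described above is simple enough to sketch; this Python version stubs out the Twitter polling and the projection rendering, and the two hashtags are the only details taken from the actual project.

```python
import random
import re

# Hashtags the installation watched for (case-insensitive).
HASHTAGS = re.compile(r"#(rippleeffect|collectiveproject)", re.IGNORECASE)

def ripple_spawns(tweets, width=1920, height=1080, seed=None):
    """Return one random floor-projection spawn point per matching tweet.
    The resolution and seeding are illustrative, not from the install."""
    rng = random.Random(seed)
    spawns = []
    for text in tweets:
        if HASHTAGS.search(text):
            spawns.append((rng.randrange(width), rng.randrange(height)))
    return spawns
```

In the installation each spawn point would seed an expanding animated ripple rendered at that location on the floor.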
Ripple Effect image 1
Ripple Effect image 2
Ripple Effect image 3
Ripple Effect image 4

Special OPS: Arm Me

January 2008
album art, print
Album artwork for sonic jazz terrorists Special O.P.S., released on the Monktail label.
Special OPS: Arm Me image 1
Special OPS: Arm Me image 2
Special OPS: Arm Me image 3
Special OPS: Arm Me image 4

Santa Monica Wellbeing Summit

November 2019
Client: City of Santa Monica Agency: Schema Design Role: Creative Technologist
The City of Santa Monica spearheaded an initiative that municipalities across the globe have begun to explore, called the Wellbeing Index: a system for determining, and addressing, the wellbeing of its citizens. As part of this initiative the City held a Wellbeing Summit in the fall of 2019, inviting citizens to participate in workshops and conferences, presenting its findings, and creating a street-fair-style event with local health- and wellbeing-oriented businesses participating. Schema Design was commissioned to create a video, presented at the Summit, showcasing findings from the Santa Monica Wellbeing Index research and data.  I worked with designer Dwayne Franco to craft this video from raw survey results.  Using Processing and the Box2D library, we created a whimsical yet informative motion data visualization giving citizens insights into their own community’s wellbeing.
Santa Monica Wellbeing Summit image 1
Santa Monica Wellbeing Summit image 2
Santa Monica Wellbeing Summit image 3
Santa Monica Wellbeing Summit image 4
Santa Monica Wellbeing Summit image 5

Wrong Lobby

March 2017
advertising, generative, installation, interactive, processing
Wrong Lobby is an interactive installation in the lobby of the Wongdoody Seattle HQ, commissioned as a collaboration with members of Wongdoody’s creative team.
Wrong Lobby image 1
Wrong Lobby image 2
Wrong Lobby image 3
Wrong Lobby image 4
Wrong Lobby image 5

Y&R: New Year's Therapy

December 2012
css3, html5, javascript, youtube api
I hastily built this one-off site as a somewhat controversial New Year’s card from Young & Rubicam, leading up to their move to new corporate headquarters in 2013. Y&R provided flat design comps and YouTube videos of stuff being smashed; Superfad built it out. View the site: newyearstherapy.com Client/Design/Concept: Young & Rubicam Web Production: Superfad
Y&R: New Year's Therapy image 1
Y&R: New Year's Therapy image 2
Y&R: New Year's Therapy image 3

Technology Collaborations

SuttonBeresCuller: Pan Optos

October 2010
Large scale robot, camera, artist curated exhibit, wi-fi devices, computer, flat screen, joystick, Arduinos, Internet, Flickr. Concept and Artists: SuttonBeresCuller. Mechanical Engineering: Paul Shemeta. Electronics and Code: Joseph Gray. Electronics, Code and Technical Review: Chuck Harrison. Electronics, Camera and Network: Matt Westervelt. Exhibited as part of Vortexhibition Polyphonica at The Henry Gallery, Seattle 2010-2011. Curated by Sara Krajewski.
arduino, electronics, flickr api, installation, network, processing
Panoptos is a giant, room-sized robot that allows viewing of a gallery exhibition by driving a remote camera around with a joystick while looking at an HD screen. The camera is positioned close to the artwork, giving a unique, displaced experience of an art collection that would rarely be seen otherwise. The joystick control is coupled with a shiny red button that, when pressed, takes the current image being viewed by the robot and uploads it to its own Flickr stream. SuttonBeresCuller hired me to consult on the technical aspects of this project during its early development. I designed the majority of the overall electronics system and wrote all of the software that ran the exhibition. Matt Westervelt of Metrix Create Space also worked closely with the team throughout the entire process, providing technical logistics and ideas; Matt also set up the wireless network and got the camera stream looking good. Paul Shemeta, an industrial design veteran at Boeing, drew up the schematics for the mechanics and structure of the robot from the artists’ original concept renderings. Chuck Harrison, a robotics designer and MIT alum, provided support and technical review of the code and electronics.

Technical Overview

The robot’s motor logic ran on a standard Arduino with a WiFi shield that connected to a remote computer running an Apache server. The robot retrieved motor instructions via HTTP, multiple times a second, from a plain text file containing motor speeds. The text file was updated by a Processing sketch listening to a second Arduino attached to a ruggedized joystick (salvaged from a giant crane) that gallery patrons could use. Moving the joystick changed the numbers in the text file, which were then read by the robot and used to set motor speeds for both the x and y axes. The two 24V motors were driven by a Sabertooth 2X25 dual motor driver, which received instructions from the WiFi-enabled Arduino on board the robot via software serial.
The Arduino WiFi Shield library was modified to increase the clock speed enough to make communications capable of parsing real-time control data streams multiple times a second. An Axis IP camera sat on the y-axis chassis, feeding its stream via Ethernet to a Ubiquiti wireless router sitting adjacent to it on the chassis. The stream was relayed over the local area network to the computer attached to the joystick Arduino, and played using VLC on a large HD flatscreen positioned in front of the joystick, creating the round-trip visual feedback of the joystick control. The final element was a red button beside the joystick that uploaded the current still image from the camera to the Panoptos Flickr stream. This button was connected to the same Arduino as the joystick. The same Processing sketch that handled motor-control data from the joystick also created an FTP connection to a remote server host and uploaded the still-image capture to a folder there. The software then made an HTTP request to a PHP script on the remote server that uploaded the image to Flickr using their API and added it to the Panoptos collection. The Flickr stream produced during the exhibition is here: Panoptos Flickr Stream
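The polled motor-instruction file described above can be sketched as a tiny encode/decode pair; the actual field layout isn't documented here, so the comma-separated "x,y" format and the speed range are assumptions for illustration.

```python
# Sketch of the plain-text motor-instruction protocol described above.
# Field layout and the -127..127 speed range are hypothetical.

def encode_speeds(x_speed, y_speed):
    """Serialize joystick-derived motor speeds into the text file the
    robot polls over HTTP (the Processing-sketch side)."""
    clamp = lambda v: max(-127, min(127, int(v)))
    return f"{clamp(x_speed)},{clamp(y_speed)}"

def decode_speeds(text):
    """Parse the polled text back into (x, y) motor speeds (the
    Arduino side, here in Python for illustration)."""
    x, y = (int(v) for v in text.strip().split(","))
    return x, y
```

Because the robot re-fetches the file several times a second, a stuck network simply freezes the last commanded speeds, which is why clamping and frequent polling matter for safe motion.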
SuttonBeresCuller: Pan Optos image 1
SuttonBeresCuller: Pan Optos image 2
SuttonBeresCuller: Pan Optos image 3
SuttonBeresCuller: Pan Optos image 4
SuttonBeresCuller: Pan Optos image 5

Processing and Arduino in Tandem

August 2010
Course creation and teaching: Joseph Gray. Producer: Elisabeth Robson. Video Production: Creative Live.
arduino, education, online video course, processing
An online course created for O’Reilly Media on creative coding with Processing, plus a simple method for diving into Arduino integration. Course page: oreilly.com/training/arduino/ O’Reilly’s course description:
Processing and Arduino in Tandem image 1

projBox 1.0

May 2010
Designer: Joseph Gray. Fab Lab: Metrix Create Space. Advisor: Matt Westervelt.
arduino, craft manufacturing, electronics, laser cut
The projBox is an open-hardware project enclosure for prototyping with a standard electronics protoboard and an Arduino. This project was created originally as a supplementary toolkit for the course Processing and Arduino in Tandem. Project site: projbox.org It is designed for projects that use an Arduino as a physical interface for software running on a separate computer. The lid is removable and has holes for attaching components like switches and knobs. Ports in the end of the box allow access to both the USB and power jacks of the Arduino. The entire box can be constructed from laser-cut 1/8″-thick Baltic birch plywood, a sturdy but easily tool-able material for drilling additional holes or making other modifications. The schematics for creating or modifying a projBox enclosure are freely available on the templates page in a variety of vector formats.
projBox 1.0 image 1
projBox 1.0 image 2
projBox 1.0 image 3

Gary Hill: Writing Corpora

February 2012
Artist: Gary Hill. Production: Reilly Donovan. Exhibition: Active Presence - Action, Object and Public. Venue: Museo MARCO, Vigo, Galicia (Spain). Curators: Sergio Edelsztein and Kathleen Forde
installation, interactive, kinect, performance, processing, projection, sound
When I was first hired on at Süperfad I created a set of custom software for media artist Gary Hill’s 2012 work Writing Corpora. The new work was created for the international group exhibition Active Presence: Action, Object and Public, which debuted at the Museo MARCO in Vigo, Galicia (Spain) in February 2012. The exhibition, curated by Sergio Edelsztein and Kathleen Forde, focused on artists whose practice contains both performance and installation elements. The overall gist of Writing Corpora, without paraphrasing the artist too much, is physical gestures triggering text, video and audio relating to idiomatic phrases that refer to the human body (e.g. “put your foot in your mouth”), in both English and, in this version, Galician (the native tongue in Vigo). The piece continues the artist’s conceptual work on the convergence of body and language, utilizing our era’s ever-evolving technologies for self-expression. For this project Süperfad provided technical support and code development, creating a software framework that allowed the artist to play back media elements throughout the gallery space with no physical controller other than the performer’s movements. The result is a full-featured toolkit that can trigger any audio and/or video media with practically any body pose or gesture.

Overview

We created two systems of custom software for Writing Corpora. The first was a fluid-dynamics “touch-floor”: an alphabet soup of letters that congeal into legible phrases and words when certain regions of the floor projection are stepped on, an interactive element that remains for current and future gallery attendees. When a phrase formed in the text fluid, an audio clip simultaneously played on overhead speakers of the same idiom spoken in Galician if the text was in English, and vice versa.
The other software, used only during the artist’s performance, tracked skeleton data from a Kinect depth camera for real-time control of audio/visual elements via physical body gestures and movements. It tracked the physical distance between almost every possible combination of skeletal joints (elbow to head, foot to torso, knee to neck … you name it) and played back specific audio and/or video clips on one of three projectors in the room, while a fourth displayed this tangle of gesture data overlaid on the user’s tracked skeleton.

Process

I was contacted by Gary’s studio assistant Reilly Donovan in late December to write custom code to fulfill the artist’s concept, as they were hitting barriers with pre-built software. During this time Süperfad was bringing me on as their lead creative developer, and this project came with me. Süperfad founder and director Will Hyde turned out to be a fan of Mr. Hill, and not only agreed to send me to Spain to help install the work but also sent along art director Loren Judah to assist and document the process. Primarily we were all excited about a collaboration in the realm of “pure art”: combining Gary Hill’s vision, and his decades-long experience creating conceptual works with new electronic mediums, with Süperfad’s digital tech skills. Reilly had been experimenting with the Kinect platform, and some of the open-source performance software developed for it, which led to the request for custom software to achieve certain ideas. As we worked together developing the software these objectives changed, sometimes due to a limitation we found in the hardware, but also when a new possibility was discovered as we came to understand the toolkit more clearly.
We continued developing the system for the MARCO performance right up until the morning before the exhibition opening, and spent that afternoon creating video clips (shot and edited by Süperfad art director Loren Judah, on the spot). Hours before showtime we frantically entered references to the media files in XML notation, along with gesture definitions, to configure the real-time applications we’d spent the previous month and a half creating. Amazingly, it worked. The result was that once the performer was tracked by the Kinect, almost any sudden series of movements created a cacophony of enveloping media. The piece was performed once, recorded as a four-channel video, and is now, as of this writing, playing back daily in the museum gallery where it was recorded. The video recordings were captured directly from the feed going to the projectors in the gallery, providing a time-delayed semblance of the once-live performance, sans performer.

Technicalities

The software was written in the Processing programming language using a variety of third-party libraries for that coding environment. Several networked computers ran the software, communicating via OSC to pass control data between them. The MARCO galleries are quite large, which provided a large canvas of white gallery walls to project on. In the end we used five projectors, each fed images by a Mac Mini, all networked together via Ethernet. The entire hardware system was mounted to a large circular lighting truss the museum had on hand. For the performance, a stage sound system was used for audio playback into the gallery. The first application we built, TextFluidCongeal, was a fluid-dynamics particle system in which each particle was an individual letter in a phrase. A Kinect, looking down from above right beside the video projector, tracked the movement of feet walking through the fluid. We used the very stable NecTouch by French software artist Benjamin Kuperberg for control data here.
NecTouch outputs TUIO, a multi-touch-focused specification of OSC, sending the x,y locations of each foot it detects (normally it would be fingers, but we tweaked some settings). We also used Memo Akten’s much-abused fluid-dynamics framework MSAFluid, which, incidentally, receives TUIO quite readily in one of its Processing examples. Hacking these two things together, along with several dozen lines of custom code (Processing: the duct tape of programming languages), allowed us to read phrases notated in an XML file and set trigger points that would “congeal” an individual phrase when its trigger area was stepped on. A simultaneous phrase was spoken on the overhead speakers when this occurred, by sending an OSC message to another computer on the network to trigger an audio-only version of the software described below. The other application, which I ultimately dubbed MegaFlip, tracked skeleton data and could activate triggers when two or three joints of the user’s skeleton were within a certain range-distance of each other. For example, while the app tracked your movements, touching your head with your right hand could trigger a video clip displayed on a projector, while touching your right foot with your left hand might trigger an audio clip played back on overhead speakers. Additionally, the software could send and receive OSC messages to the other computers, so that a gesture tracked by any one computer could trigger playback on any other computer. Each computer’s instance of the application was configured with an XML file. Each node in the XML defined a relationship between joints (e.g. right foot to left hand) and the distance those joints had to be within to trigger a function, also defined in the XML element, along with a reference to which audio or video file to play back and on which computer (by IP address on the local network). Live cameras were also hooked into the software and could be turned on and off via various gestures.
MegaFlip was constructed with multiple Processing libraries as well. SimpleOpenNI was used for interfacing with the Kinect and skeleton tracking. Andres Colubri’s GSVideo library was used for audio and video playback. Andreas Schlegel’s oscP5 library was used for OSC communication. Benjamin Kuperberg’s MapiNect was also a great inspiration for our XML mapping schema; we essentially created a simplified version of it within the software. During the performance, five computers ran MegaFlip simultaneously. The original plan was to have each connected to its own Kinect and projector, all tracking the performer’s skeleton simultaneously and triggering various media. We discovered, though, that the Kinects interfered with each other when their fields of view overlapped too much, and given the circular arrangement of the gallery lighting grid we had to abandon multiple skeleton-tracking computers (though during some tests I did manage to get all five Kinects to track me for half an hour without losing track…). Interestingly, the Kinect facing the floor for the text-fluid software described above did not interfere with the one Kinect we used for skeleton tracking. The system was truly put through its paces during the actual performance, with dozens of gesture mappings sending signals to play media around the gallery space via the network. It ran without a hitch during the entire performance, suggesting it could be stable enough for longer or more complex configurations, and it is generic enough to be used for a near-infinite variety of unique performances and interactive installations due to its highly configurable nature.
When first getting hired on at Süperfad I created a set of custom software for media artist Gary Hill’s 2012 work Writing Corpora. The new work was created for the international group exhibition Active Presence: Action, Object and Public, which debuted at the Museo MARCO in Vigo, Galacia (Spain) during February of this year. The exhibition, curated by Sergio Edelsztein and Kathleen Forde, focused on artists whose practice contains both performance and installation elements. The overall gist of Writing Corpora, without paraphrasing the artist too much, is physical gestures triggering text, video and audio relating to idiomatic phrases that refer to the human body (i.e. “put your foot in your mouth”), in both English and, in this version, Galician (the native tongue in Vigo). The piece is a continuation of the artist’s conceptual work focused on the convergence of body and language utilizing our current era’s ever evolving new technologies for self-expression. For this project Süperfad provided technical support and code development, creating a software framework that allowed the artist to play back media elements throughout the gallery space with no physical controller other than the performers movements. The result is a full-featured toolkit that can trigger any audio and/or video media with practically any body pose or gesture. Overview We created two systems of custom software for Writing Corpora. The first was a fluid-dynamics “touch-floor”, an alphabet soup of letters that congeal into legible phrases and words when certain regions of the floor projection are stepped on, which is also an interactive element that remains for current and future gallery attendees. When a phrase formed in the text fluid it also simultaneously played an audio clip on overhead speakers of the same idiom being spoken in Galician if the text was in English and vice-versa. 
The other software, used only during the performance by the artist, tracked skeleton data from a Kinect depth camera for real-time control of audio/visual elements with physical body gestures and movements. This tracked the physical distance between almost every possible combination of skeletal joints (elbow to head, foot to torso, knee to neck … you name it) and played back specific audio and/or video clips on one of three projectors in the room while a forth displayed this tangle of gesture data overlaid on the user’s tracked skeleton. Process I was contacted by Gary’s studio assistant Reilly Donovan in late December to write custom code to fulfill the artist’s concept as they were hitting barriers with pre-built software. During this time Süperfad was bringing me in as their lead creative developer, and this project came with me. Süperfad founder and director Will Hyde turned out to be a fan of Mr. Hill and not only agreed to send me to Spain to help install the work but also sent along art director Loren Judah to assist and document the process. Primarily we were all excited about a collaboration in the realm of “pure art” guided by Gary Hill’s vision and decades long experience of creating conceptual works with new electronic mediums and combining this with Süperfad’s digital tech skills. Reilly had been experimenting with the Kinect platform, and some of the open source performance software that has been developed for it, which lead to their request for custom software to achieve certain ideas. As we worked together developing the software these objectives changed, sometimes due to a limitation we found in the hardware, but also when a new possibility was discovered as we began to understand the toolkit we were working with more clearly. 
We continued developing the system for the MARCO performance right up until the morning before the exhibition opening, and spent that afternoon creating video clips (shot and edited by Superfad art director Loren Judah, on the spot). Hours before showtime we frantically entered references to the media files in xml notation, along with gesture definitions, to configure the real-time applications we’d spent the previous month and half creating. Amazingly it worked. The result was that once the performer was being tracked by the Kinect almost any sudden series of movements created a cacophony of enveloping media. The piece was performed once, recorded as a four channel video, and is now, as of this writing, playing back daily in the Museum gallery where it was recorded. The video recordings were captured directly from the feed going to the projectors in the gallery, providing a time-delayed semblance of the once live performance, sans performer. Technicalities The software was written in the Processing programming language using a variety of third-party libraries for that coding environment. Several networked computers ran the software which communicated via OSC to pass control data between them. The MARCO galleries are quite large, this provided a large canvas of white gallery walls to project on. In the end we used five projectors, each fed images by a Mac-Minis networked together via ethernet. The entire hardware system was mounted to a large circular lighting truss that the museum had on hand. For the performance a stage sound system was utilized for audio playback into the gallery. The first application we built, TextFluidCongeal, was a fluid dynamics particle system where each particle was an individual letter in a phrase. A Kinect, looking down from above and right beside the video projector, tracked movement of feet walking through the fluid. We used the very stable NecTouch by French software artist Benjamin Kuperberg for control data here. 
NecTouch outputs TUIO, a multi-touch-focused specification of OSC, sending the x,y location of each foot it detects (normally it would be fingers, but we tweaked some settings). We also used Memo Akten’s overly-abused fluid dynamics framework MSAFluid, which, incidentally, receives TUIO quite readily in one of its Processing examples. Hacking these two things together, along with several dozen lines of custom code (Processing: the duct tape of programming languages), allowed us to read phrases notated in an XML file and set trigger points that would “congeal” an individual phrase when its trigger area was stepped on. When this occurred, a phrase was simultaneously spoken over the overhead speakers by sending an OSC message to another computer on the network, triggering an audio-only version of the software described below.

The other application, which I ultimately dubbed MegaFlip, tracked skeleton data and could activate triggers when two or three joints of the user’s skeleton came within a certain distance of each other. For example, while the app tracked your movements, touching your head with your right hand could trigger a video clip displayed on a video projector, while touching your right foot with your left hand might trigger an audio clip played back on overhead speakers. Additionally, the software could send and receive OSC messages to the other computers, so that a gesture tracked by any one computer could trigger playback on any other. Each computer’s instance of the application was configured with an XML file. Each node in the XML defined a relationship between joints (i.e. right foot to left hand) and the distance those joints had to be within to trigger a function, also defined in the XML element, along with a reference to which audio or video file to play back and on which computer (by IP address on the local network). Live cameras were also hooked into the software and could be turned on and off via various gestures.
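A mapping node of the kind described above might look like the following. The real MegaFlip schema isn’t reproduced in this write-up, so the element and attribute names here are guessed for illustration, and the parsing is sketched in stdlib Python rather than Processing.

```python
import xml.etree.ElementTree as ET
from math import dist

# Hypothetical config: one gesture node relating two joints, a trigger
# distance, and the media file plus target host (by local IP) to fire.
CONFIG = """
<mappings>
  <gesture jointA="left_hand" jointB="right_foot" maxDist="0.2"
           action="play" media="clip03.mov" host="192.168.1.12"/>
</mappings>
"""

def load_mappings(xml_text):
    """Parse gesture nodes into plain dicts the trigger loop can scan."""
    return [
        {
            "joints": (g.get("jointA"), g.get("jointB")),
            "max_dist": float(g.get("maxDist")),
            "action": g.get("action"),
            "media": g.get("media"),
            "host": g.get("host"),
        }
        for g in ET.fromstring(xml_text).iter("gesture")
    ]

def fired(mapping, skeleton):
    """True when the mapped joint pair is within its trigger distance."""
    a, b = mapping["joints"]
    return dist(skeleton[a], skeleton[b]) <= mapping["max_dist"]

# Left hand nearly touching the right foot: the mapping fires.
skeleton = {"left_hand": (0.1, 0.2, 2.0), "right_foot": (0.15, 0.25, 2.0)}
for m in load_mappings(CONFIG):
    if fired(m, skeleton):
        print("send", m["action"], m["media"], "to", m["host"])
```

Keeping the mappings in XML rather than code is what let the gesture vocabulary be rewritten hours before showtime without recompiling anything.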
MegaFlip was constructed with multiple Processing libraries as well. SimpleOpenNI was used for interfacing with the Kinect and skeleton tracking. Andres Colubri’s GSVideo library handled audio and video playback. Andreas Schlegel’s oscP5 library handled OSC communication. Benjamin Kuperberg’s MapiNect was also a great inspiration for our XML mapping schema; we essentially created a simplified version of it within the software.

During the performance, five computers ran MegaFlip simultaneously. The original plan was to connect each to its own Kinect and projector, have them all track the performer’s skeleton simultaneously, and trigger various media. We discovered, though, that the Kinects interfered with each other when their fields of view overlapped too much, and given the circular arrangement of the gallery lighting grid we had to abandon having multiple skeleton-tracking computers (though during some tests I did manage to get all five Kinects to track me for half an hour without losing track…). Interestingly, the Kinect facing the floor for the text-fluid software described above did not interfere with the one Kinect we used for skeleton tracking.

The system was truly put through its paces during the actual performance, with dozens of gesture mappings sending signals to play media around the gallery space via the network. It ran without a hitch during the entire performance, suggesting it could be stable enough for longer or more complex configurations. Its ultra-configurable nature makes it generic enough for a near-infinite variety of unique performances and interactive installations.
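In the installation, oscP5 handled the wire format. For a sense of what one of those trigger messages looks like on the network, here is a minimal encoder following the OSC 1.0 specification (address pattern, then a comma-prefixed type-tag string, then big-endian arguments, each null-padded to 4-byte boundaries). The address pattern shown is hypothetical, not one from the actual show config.

```python
import struct

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message with int, float, or str args."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, bool) or not isinstance(a, (int, float, str)):
            raise TypeError(a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        else:
            tags += "s"
            payload += _pad(a.encode())
    return _pad(address.encode()) + _pad(tags.encode()) + payload

# e.g. a "start this clip" message, ready to send as one UDP datagram:
msg = osc_message("/megaflip/play", "clip03.mov", 1)
```

Because each message is a single self-describing UDP datagram, a gesture detected on one machine can fan out to peers with no connection setup, which is what makes OSC such a good fit for this kind of multi-computer installation.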
Gary Hill: Writing Corpora (installation images 1–5)

Contact

For inquiries or collaborations, feel free to reach out directly: