I create text-based art that utilizes scientific data and has been exhibited in galleries and sold to art collectors. I have grapheme-color synesthesia, which lends enhanced visual meaning to my perception of text: each letter and number is unidirectionally associated with a color. Here, I depict each colored pixel of the original photograph in its semantic form. In these pieces, I have “reverse-engineered” my synesthesia, creating a visual code through which to depict landscape scenes. Thus, once a piece is complete, the textual and the visual representations are nearly identical according to my synesthetic perception.
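The pixel-to-grapheme encoding described above can be sketched in a few lines of code. This is purely illustrative, not my actual working process: the palette below uses hypothetical letter-color pairings, and each pixel is simply assigned the letter whose associated color is nearest in RGB space.

```python
# Illustrative sketch of a pixel -> grapheme visual code.
# The letter-color associations here are hypothetical stand-ins.
PALETTE = {
    "e": (255, 255, 0),
    "t": (0, 128, 255),
    "a": (255, 0, 0),
    "s": (0, 200, 0),
    "q": (80, 40, 120),
}

def nearest_grapheme(pixel):
    """Return the letter whose palette color is closest to the pixel (RGB)."""
    return min(
        PALETTE,
        key=lambda g: sum((p - c) ** 2 for p, c in zip(pixel, PALETTE[g])),
    )

def encode(image, width):
    """Render a flat list of RGB pixels as rows of graphemes."""
    letters = [nearest_grapheme(px) for px in image]
    return ["".join(letters[i : i + width]) for i in range(0, len(letters), width)]

# A tiny 2x2 "landscape": a sky-blue row over a green row.
image = [(0, 120, 255), (10, 130, 250), (0, 190, 10), (5, 210, 0)]
print(encode(image, width=2))  # -> ['tt', 'ss']
```

To a synesthete reading with this palette, the resulting block of letters would evoke the same colors as the source pixels.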
Furthermore, the graphemes that I choose to represent each pixel are informed by my original multisensory perception research [link to project page], which identified the first pattern among grapheme-color synesthetic associations. This research showed that the hue of a synesthetic association varies with the letter’s frequency in the synesthete’s native language. The letter-color pairings in these artworks therefore reflect a data-driven model of English-language synesthetic associations. In this way, the visual code should be parsable by any English-speaking synesthete with similar statistical exposure to the Latin alphabet.