how does a computer see through water? ₊˚.༄
marine learning (2025)
I hadn’t really considered how a computer sees water until I came across a vision model trained on it. The paper included warped underwater footage and barely legible text—images that looked more like CAPTCHAs than photographs: the kind of visual puzzles that prove our humanity by exposing what machines still can’t grasp. Fractured text, distorted patterns, visual debris.
But why teach a machine to see through water at all? The computer vision model was part of a class of optical systems designed to “maximize imaging system flexibility.” Using liquid lenses—technology found in barcode scanners, surveillance tools, and industrial automation—it could dynamically shift focus, seeing faster and farther than the human eye. But this ambition to out-see us often falters in quiet, revealing ways. What fails isn’t just the tech—it’s the assumption that vision is objective. Machine or human, all seeing is constructed, conditional, and prone to error.
The model’s fragility was clearest in how it processed images. It didn’t rely on object recognition. It needed motion—125 frames per second—to slowly assemble meaning from flickers of light and blur. A single image wasn’t enough. It had to watch, wait, and let sense emerge.
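A minimal sketch of that idea, not the paper’s actual pipeline: buffer incoming frames and only let a reading stabilize once enough motion has accumulated. The 125-frames-per-second figure comes from the text above; the feature extractor and window size here are hypothetical stand-ins.

```python
import numpy as np
from collections import deque

FRAME_RATE = 125        # frames per second, per the model described above
WINDOW = FRAME_RATE     # hold roughly one second of motion before trusting a reading

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Hypothetical per-frame feature extractor (stand-in for the real model)."""
    return frame.astype(np.float32).mean(axis=(0, 1))  # e.g. mean color per channel

buffer: deque = deque(maxlen=WINDOW)

def observe(frame: np.ndarray):
    """Accumulate frames; return an averaged feature only once the window is full.

    A single frame yields nothing; meaning emerges from the sequence.
    """
    buffer.append(extract_features(frame))
    if len(buffer) < WINDOW:
        return None     # not enough motion yet: keep watching
    return np.mean(np.stack(buffer), axis=0)
```

The point of the sketch is only the shape of the process: watch, wait, average, and accept that any single frame is unreadable on its own.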
This temporal, interpretive approach to vision reminded me of textile production. Like the machine, the knitter builds meaning gradually—loop by loop, stitch by stitch. You don’t see the whole at once. It’s accumulated through repetition. So I responded by knitting a panel, interpreting the model’s awkward first attempts to understand water.
I sent the panel to Alicia Wright, an artist working at the intersection of digital residue and material form. She responded with a frame—based on a 17th-century seaweed illustration, pulled from the Internet Archive, image-traced, warped, and re-rendered in crochet. A scientific drawing, once meant to capture natural truth, now hollowed out—filtered through Illustrator, pencil, screen glare, and hand.
Alicia reflected on how early botanical illustrators aimed for objectivity, to pin down essence. But every drawing is shaped by the biases of its time, its tools, its hand. Today, those images are viewed less as science and more as ornament. And perhaps that’s where machine vision is headed too—not as a pure lens on the world, but as an artifact. Something that reveals its assumptions through its failures.
To finish, Alicia dyed the frame—water, pigment, vinegar—and bound it to the panel using SeaCel, a fiber spun from processed seaweed. Water returned again. This time, not as confusion but as connection.
Throughout this project, the boundaries between digital and physical kept dissolving. Machine learning outputs became knit swatches. Vector files became tangible frames. What started as a curiosity about artificial perception became a meditation on how meaning is built: slowly, imperfectly, in layers—through repetition, through error, through time.
You can view the Are.na board with our project references here.