Saturday, October 1, 2011
Posted by Youtube user: gallantlabucb
Last week, researchers from the University of California, Berkeley released a paper describing an experiment in which scientists managed to generate rough representations of visual stimuli by monitoring activity in the brain. The paper's release was accompanied by the above video, which shows a comparison of the original images shown to participants in the test, alongside the reconstructed versions generated by the computer.
It's an impressive, if imperfect, result. But chances are the reconstructed images seen in the video were not generated the way you think they were. Assuming, of course, that like me your mind is filled with images of people wearing funny devices on their heads, staring into some kind of strange optical device with their eyelids wired open like A Clockwork Orange, or maybe even having their brains somehow jacked directly into computers, all Johnny Mnemonic style -whoa-. No? Okay, so maybe it's just me, and I need to get out more and watch less sci-fi. Either way, that isn't the case.
In reality, these images were not collected directly from the subjects' minds using any form of what you'd likely consider to be "mind reading" in the traditional sci-fi sense. Instead, they were generated by first collecting data from a subject's brain via fMRI (functional magnetic resonance imaging), and then asking a computer to reinterpret that data and generate an image.
In order to accomplish this, the machine tasked with generating these images was first fed some 18 million one-second YouTube clips -clips that were never shown to the participants in the experiment. Next, the subjects each spent several hours lying inside an MRI machine, staring at a blue dot while being shown random YouTube clips; this allowed researchers to use fMRI to generate a map of basic visual activity within the brain during viewing. Finally, those activity maps were fed into the computer as well, and the computer was then asked to select, from its newly generated database of video clips, the images that best represented those being seen by the participants, based solely on the activity shown on the scans.
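If you're curious what that selection step looks like in principle, here is a toy sketch of the general idea -and only the general idea, not the researchers' actual method. Everything here is invented for illustration: the clip "features", the simulated voxel activity, the simple linear model, and all the sizes (`n_library`, `n_features`, `n_voxels`) are stand-ins, not anything from the real experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: each clip is summarized by a feature vector, and
# "brain activity" is a vector of voxel responses.
n_library, n_features, n_voxels = 1000, 32, 64

# Step 1 (training): learn a map from clip features to voxel activity
# using clips the subject actually watched in the scanner. Here the
# "true" brain response is simulated with a random linear map plus noise.
train_clips = rng.normal(size=(200, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
train_activity = train_clips @ true_map + 0.1 * rng.normal(size=(200, n_voxels))
learned_map, *_ = np.linalg.lstsq(train_clips, train_activity, rcond=None)

# Step 2: predict the activity that each clip in a large library
# (clips the subject never saw) *would* evoke.
library = rng.normal(size=(n_library, n_features))
predicted = library @ learned_map

# Step 3 (reconstruction): given a new scan, pick the library clips
# whose predicted activity best correlates with it, and blend them.
def reconstruct(observed_activity, top_k=10):
    obs = observed_activity - observed_activity.mean()
    pred = predicted - predicted.mean(axis=1, keepdims=True)
    scores = (pred @ obs) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(obs) + 1e-12)
    best = np.argsort(scores)[-top_k:]
    return library[best].mean(axis=0)  # average of the best-matching clips

# Simulate a held-out clip the subject views, then reconstruct it
# from the evoked activity alone.
target = rng.normal(size=n_features)
activity = target @ true_map
recon = reconstruct(activity)
```

The blended result correlates with the original clip without ever containing it -which is exactly why the videos look like blurry, dreamlike averages rather than crisp copies of what the subjects saw.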
In other words, the images you're seeing aren't actually images taken from "inside" anyone's head, like you might think. Rather, they're a collection of images chosen by a computer and compiled together to represent what it determined to be the best visual representation of what the subject was seeing at the time, based on his or her brain activity.
So while this may not yet be the astonishing sci-fi milestone you may have thought it to be upon first reading the headlines surrounding it, it is an impressive feat. And though actually reading someone's thoughts and turning them into images is a very different thing from reinterpreting direct visual stimulation of the brain, this could still potentially be a major step toward achieving that goal, an accomplishment that would be invaluable to individuals who are otherwise unable to communicate.
Source: dawn.com Paper Summary: sciencedirect.com
Octopus kites by Tamas Kalman, available for purchase -assuming, of course, you have $400 to spend on a kite, and really, who doesn't- HERE in his online store.
That is all.