Researchers at Osaka University have developed an artificial intelligence system for reconstructing images using brain scans.
Neuroscientist Yu Takagi and his research partner Shinji Nishimoto used a model they created, together with Stable Diffusion, a generative AI model developed in Germany in 2022, to reconstruct images from the brain activity of individuals inside an MRI machine.
Stable Diffusion is typically used to generate images from words and phrases. The model was trained on existing images and their captions, eventually learning to associate specific images with specific words.
Takagi and his team added their own training on top of this technology with two AI models: one capable of connecting images with functional magnetic resonance imaging (fMRI) data, and the other able to link fMRI data to text descriptions of the images.
“I still remember when I saw the first images. I went into the bathroom and looked at myself in the mirror and saw my face and thought, ‘Okay, that’s normal. Maybe I’m not going crazy’,” Takagi told Al Jazeera.
The system, which achieved roughly 80 percent accuracy, uses the first AI model to create a vague, indistinct version of the image a participant had seen, then uses the second model to recognize and sharpen that image based on previously recorded brain pattern associations.
“We really didn’t expect this kind of result,” Takagi said.
The 34-year-old researcher, who is also an assistant professor at the university, emphasized that their discovery should not be thought of as mind reading, as their method can only reconstruct images that a person has already seen.
“Unfortunately there are many misunderstandings with our research,” Takagi said. “We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”
Although the breakthrough is remarkable, it has also sparked concerns and debates about the potential risks it may pose to society, particularly individual privacy.
Takagi has acknowledged these concerns as reasonable, recognizing that people with dangerous intentions may attempt to misuse the technology.
“For us, privacy issues are the most important thing,” Takagi said. “If a government or institution can read people’s minds, it’s a very sensitive issue. There needs to be high-level discussions to make sure this can’t happen.”
Takagi and Nishimoto published their findings in December and plan to present their work at the Conference on Computer Vision and Pattern Recognition (CVPR) in June.