Its creator, known as Zovya, said they developed the AI model after noticing that most AI-generated images were heavily influenced by Asian features and culture.
Anyone who wants to generate images with less Asian influence in an AI image-generation tool such as Stable Diffusion simply needs to apply the model.
Zovya clarified that the model removes the influence of Asian imagery used to train the AI so that the generated images can be more effective in portraying other races and cultures.
Zovya said they were trying to create a different AI model intended to generate images of South American people and culture called “South of the Border.”
But even with that model, Zovya found that the generated images still included Asian features and references. According to the programmer, images featuring Asian people or an Asian style have become a general trend in AI-generated imagery.
Zovya notes in the model’s description that such bias may be attributed to most popular AI models on Civitai being trained on Asian imagery.
“Most of the recent, good, training has been using anime and models trained on Asian people and their culture. Nothing wrong with that, it’s great that the community and fine-tuning continue to grow. But those models are mixed in with almost everything now, sometimes it might be difficult to get results that don’t have Asian or anime influence. This embedding aims to assist with that.”
Recent studies have shown that AI-generated images often reflect the biases present in the datasets that the models were trained on.
For instance, a study conducted by researchers at Boston University and IBM found that commercially available facial analysis algorithms from major tech companies showed higher error rates for women and darker-skinned individuals than for men and lighter-skinned individuals.
Another study by researchers at the University of Cambridge found that an AI model trained on a dataset of online news articles was more likely to associate the word “man” with the word “computer programmer” than with the word “nurse.”
Biased AI models can have serious real-world implications as they perpetuate harmful stereotypes and reinforce existing power imbalances. Inaccurate facial recognition technology, for example, could lead to false arrests and wrongful convictions.
Mitigating bias in AI models remains an open challenge, and researchers are developing methods to address it. One approach is to “debias” the training data itself, removing over-represented examples or rebalancing the dataset so that groups are represented evenly.
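As an illustration of the rebalancing idea, here is a minimal sketch in plain Python (the `rebalance` function and the toy `"style"` labels are hypothetical, not part of any real debiasing library): under-represented groups are oversampled until every group appears equally often.

```python
import random
from collections import Counter

def rebalance(dataset, key, seed=0):
    """Naive debiasing sketch: oversample minority groups so every
    value of `key` appears equally often in the returned dataset."""
    rng = random.Random(seed)
    groups = {}
    for item in dataset:
        groups.setdefault(item[key], []).append(item)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # pad under-represented groups with random duplicates
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced

# toy training set skewed 8-to-2 toward one style label
data = [{"style": "anime"}] * 8 + [{"style": "photo"}] * 2
balanced = rebalance(data, "style")
print(Counter(d["style"] for d in balanced))  # each style now appears 8 times
```

Real pipelines use more careful techniques (reweighting, targeted data collection), since duplicating examples can cause overfitting, but the core idea is the same.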
Others are developing “adversarial training” methods, in which an AI model is trained to recognize and overcome bias in its inputs.
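A common form of this is adversarial debiasing with gradient reversal: an adversary tries to predict a sensitive attribute from the model’s internal representation, and the encoder is updated to make that prediction harder while still supporting the main task. The toy NumPy sketch below (all data and names are invented for illustration; real systems use frameworks like PyTorch or toolkits like AI Fairness 360) shows the mechanism on a linear encoder with two logistic heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 carries the task signal, feature 1 leaks a
# "protected" attribute (e.g. a style bias) we want the encoder to drop.
n = 512
task = rng.integers(0, 2, n)   # true label
prot = rng.integers(0, 2, n)   # protected attribute
X = np.stack([task + 0.1 * rng.standard_normal(n),
              prot + 0.1 * rng.standard_normal(n)], axis=1)

W = rng.standard_normal((2, 2)) * 0.1   # linear encoder
wt = np.zeros(2)                        # task head
wa = np.zeros(2)                        # adversary head
lr, lam = 0.1, 1.0                      # learning rate, reversal strength
sig = lambda v: 1.0 / (1.0 + np.exp(-v))

for _ in range(300):
    Z = X @ W                  # encoded representation
    dt = sig(Z @ wt) - task    # task error (BCE-with-sigmoid gradient)
    da = sig(Z @ wa) - prot    # adversary error
    # encoder gradient: follow the task, REVERSE the adversary's gradient
    gZ = (np.outer(dt, wt) - lam * np.outer(da, wa)) / n
    W -= lr * (X.T @ gZ)
    wt -= lr * (Z.T @ dt) / n
    wa -= lr * (Z.T @ da) / n  # the adversary itself still trains normally

acc = lambda w, y: np.mean((sig(X @ W @ w) > 0.5) == y)
print(f"task accuracy: {acc(wt, task):.2f}, "
      f"adversary accuracy: {acc(wa, prot):.2f}")
```

The key line is the minus sign on the adversary term in `gZ`: the adversary descends its own loss, but the encoder ascends it, pushing the representation toward one the adversary cannot exploit.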
Users on Civitai have commented positively on Zovya’s model, with some noting the importance of addressing the issue of Asian bias in AI images.
“The most important embedding since noise offsets!” wrote a user. “Makes it possible to use anime/illustration models for photorealistic embeddings. Thanks!”
“Incredible as always! you make the best stuff, I have noticed that most models [are] very Asian leaning. Going to test it right away,” another commenter wrote.
“It is great that someone is seeing and realizing this over biasing into the wrong direction,” said another.