Meta has created an AI speech model that is a refreshing twist on ChatGPT-style systems. The open-source MMS (Massively Multilingual Speech) project was created to preserve language diversity and encourage research: it can identify more than 4,000 spoken languages and perform speech-to-text and text-to-speech in over 1,100. The company publicly released its models and code today to further those goals.
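For developers curious to try the release, loading one of the published checkpoints takes only a few lines. Below is a minimal sketch, assuming the facebook/mms-1b-all weights and the adapter API from the Hugging Face transformers integration of MMS, rather than an official Meta example:

```python
# Minimal MMS speech-to-text sketch (illustrative, not Meta's official example).
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

# Select one of the 1,100+ supported languages by loading its
# language-specific adapter (here French, ISO 639-3 code "fra").
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder input: one second of silence; replace with real 16 kHz mono speech.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```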
“We are publicly sharing our creations and code in order to encourage others in the research community to build upon our work,” Meta wrote. “Through this endeavor, we hope to preserve the tremendous language variety of the world.”
Speech-recognition and text-to-speech models typically require training on large quantities of audio paired with transcription labels, which are critical for machine learning to correctly identify and classify data. For the many languages at risk of disappearing in the coming decades, however, “this data simply does not exist,” as Meta explains.
Meta collected data in an unconventional way: through audio recordings of religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “We extracted audio recordings of people reading these texts in different languages from publicly available translations.” This approach expanded the model’s coverage to more than 4,000 languages.
The approach sounds like a recipe for a heavily biased AI model that favors Christian worldviews. However, Meta says that is not the case: because the model uses a connectionist temporal classification (CTC) approach, it is far more constrained than large language models (LLMs) or sequence-to-sequence models for speech recognition, so the content of the recordings has little influence on its output. Meta also says that even though most of the religious recordings were read by male speakers, this did not introduce a male bias into the model.
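To see why a CTC recognizer leaves so little room for worldview bias, consider what CTC decoding actually does: it takes the most likely character at each audio frame, collapses repeats, and drops blanks, with no generative component that could “continue” the training text. The toy sketch below (not Meta’s actual decoder) illustrates this:

```python
# Toy greedy CTC decoding: per-frame labels are collapsed into a transcript.
# Nothing here can inject words that were not spoken in the audio.
BLANK = "_"

def ctc_greedy_decode(frame_labels):
    """Collapse a per-frame label sequence into a transcript."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

# Each entry is the most likely character for one short audio frame.
frames = ["_", "h", "h", "_", "e", "l", "l", "_", "l", "o", "o", "_"]
print(ctc_greedy_decode(frames))  # -> "hello"
```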
To make this data usable, Meta trained an alignment model, then used wav2vec 2.0, its “self-supervised speech representation learning” model, to train on the unlabeled data. Combining unconventional data sources with self-supervised learning led to strong results: the Massively Multilingual Speech models performed well against existing models while covering ten times as many languages. Compared with OpenAI’s Whisper in particular, Meta says Massively Multilingual Speech achieved half the word error rate while covering 11 times as many languages.
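For context, word error rate (WER) counts the word-level substitutions, insertions, and deletions needed to turn a model’s transcript into the reference, divided by the reference’s length. A textbook implementation, included here purely for illustration:

```python
# Word error rate via Levenshtein edit distance over words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```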
Meta says that its new models aren’t perfect. For example, the speech-to-text models might transcribe certain words or phrases incorrectly, which could result in offensive and/or inaccurate output, the company wrote. It added that responsible development of AI technologies depends on collaboration across the AI community.
By releasing MMS for open-source research, Meta hopes to help reverse the trend of languages disappearing. In its vision, assistive technology, text-to-speech (TTS), and even virtual- and augmented-reality tech would let everyone speak and learn in their native language. It stated, “We envision a world where technology has the opposite effect, prompting people to keep their languages alive since they can access information and use technology by speaking in their preferred language.”
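As a small illustration of the TTS side of that vision, the sketch below assumes the per-language MMS TTS checkpoints published on Hugging Face (facebook/mms-tts-eng for English; other ISO 639-3 codes select other languages) and the transformers VITS API; it is a sketch, not an official example:

```python
# Minimal MMS text-to-speech sketch (illustrative, not an official example).
import scipy.io.wavfile
import torch
from transformers import AutoTokenizer, VitsModel

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

inputs = tokenizer("Technology should speak your language.", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape (1, num_samples)

scipy.io.wavfile.write("mms_tts.wav", model.config.sampling_rate,
                       waveform[0].numpy())
```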