French tech company Mistral AI has launched a new online moderation tool, based on its Ministral 8B AI model, that can automatically detect and remove offensive or illegal posts. (There remains a risk of misjudgments, however.)
According to TechCrunch, for instance, some studies have shown that posts about individuals with disabilities can be flagged as “negative” or “toxic” even when that is not the case.
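That false-positive risk is one reason automated moderation pipelines often reserve a human-review band rather than acting on every flag. The sketch below is purely illustrative: the category names, thresholds, and `moderate` function are assumptions for the sake of example, not Mistral's actual API.

```python
# Hypothetical sketch of threshold-based moderation with a human-review
# band. Nothing here reflects Mistral's real API or thresholds.

REMOVE_THRESHOLD = 0.9   # assumed cutoff: auto-remove above this score
REVIEW_THRESHOLD = 0.5   # assumed cutoff: route to a human above this score

def moderate(scores: dict[str, float]) -> str:
    """Map per-category scores (as a moderation model might return them)
    to an action. The middle band goes to a human reviewer, which helps
    catch false positives such as benign posts about disability being
    scored as 'toxic'."""
    top = max(scores.values(), default=0.0)
    if top >= REMOVE_THRESHOLD:
        return "remove"
    if top >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(moderate({"toxic": 0.95, "illegal": 0.10}))  # remove
print(moderate({"toxic": 0.60}))                   # human_review
print(moderate({"toxic": 0.10}))                   # allow
```

Widening the review band trades moderator workload for fewer wrongful removals; where those thresholds sit is a product decision, not a property of the model.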
Initially, Mistral’s new moderation tool will support Arabic, English, French, Italian, Japanese, Chinese, Korean, Portuguese, Russian, Spanish and German, with more languages on the way. In July, Mistral launched a large language model that can generate longer stretches of code faster than other open-source models.