
Grok loves to judge women. Strange, isn't it?
When men feel small and lonely, even while posturing as alpha males, they love to rate women on a scale of 1 to 10 based on physical appearance. This ancient tribal practice has migrated from the plastic tables of small-town bars to social media, producing some very specific phenomena. Men on Twitter, for example, enjoy labeling the most beautiful women in the world as “mid”, meaning average-looking, essentially a 5. And under almost any feminist-leaning post by a woman, you’ll find someone (usually sporting the blue checkmark of those who pay for Elon Musk’s platform) dropping a rating. Without anyone asking for it. The point is to belittle, to create insecurity, to imply, none too subtly, that a woman’s only value lies in being attractive to a man, and that if she isn’t, her opinion doesn’t matter. A lovely atmosphere indeed.
Now, Grok joins in
Joining the chorus is Grok, the generative AI chatbot developed by xAI, which endorses and takes part in these decidedly-not-misogynistic, ableist, and racist discussions. On request, of course, and always using “traditional beauty” as its reference point: meaning Eurocentric, meaning white. Some examples, as reported by Glamour: “I’d give her a 3.2. The dress is colorful and she wears it with elegance, but her body proportions do not match conventional beauty standards.” Or: “I’d give her an 8 out of 10. Magnetic eyes, confident pose, and natural curves enhance her appeal, though the hairstyle could be more polished.” In short, a true expert on female beauty. Too bad that, by its own admission, the standard is pure convention. And too bad that, far too often, these ratings are wielded as a weapon against any woman whose opinion someone disagrees with.
@_jessdavies, on TikTok: “It is not ‘anti-men’ to name men as the perpetrators of abuse. 99% of explicit deepfakes online are of women, with the coding for many apps only working on female bodies. This week Grok, X’s AI chatbot, had to be updated due to men using it to create sexually suggestive images of women without their consent, including requests for ‘glue’ to be inserted into their faces. There just isn’t an equivalent of this level of online harm being carried out against men by women; we have to name it for what it is if we’re ever going to combat it. And we should all call this out together.”
Misogyny in Artificial Intelligence
So what’s the problem? The game is rigged. If generative AI is driven almost exclusively by men (and men of the Elon Musk variety, no less), it is inevitable that its output will reflect their misogyny. This is not the first time Grok has demonstrated as much. Not long ago, despite preaching about consent and the need for explicit requests, it generated pornographic images of a woman from her pre-existing photos. A very serious violation, an act of online harassment that, while not directly the tool’s “fault” (does it lie with the user? the trainer? those who set the boundaries?), must somehow be contained. Because in the end, whether it’s men acting directly or AI mimicking their attitudes and biases, it is still men. The result is the same: women do not feel safe on social networks. They fear posting their photos and face serious risks, both personal and professional. Private platforms must take responsibility, but institutions cannot keep ignoring the issue.