If you’ve watched any Olympics coverage this week, you’ve likely been confronted with an ad for Google’s Gemini AI called “Dear Sydney.” In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone.
“I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”
I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.
Inserting Gemini into a child’s heartfelt request for parental help makes it seem like the parent in question is offloading their responsibilities to a computer in the coldest, most sterile way possible. More than that, it comes across as an attempt to avoid an opportunity to bond with a child over a shared interest in a creative way.
It teaches the kid to rely more and more on AI for everything, just like Google wants.
They’re already ‘thanking’ Siri and Alexa; this will be a very dangerous development.
Roko’s basilisk is kind of bullshit but the meme is funny.
Roko’s basilisk is the stupidest thing and I hate it so much. It’s so obviously just plain wrong. It’s not even a matter of interpretation. The most stupid, insane, and useless idea ever.
edit: I’m still mad at that one YouTuber who did a video about Roko’s basilisk pretending it made even a little bit of sense.
It’s creepypasta for edgelord nerds.
Thanking a personified character doesn’t strike me as a bad thing.
Surely there’s a more positive perspective where people are just naturally polite in their words and would struggle to communicate differently with a language bot.
It’s pretty frustrating how the Venn diagram of ‘people who treat people like things’ and ‘people who treat things like people’ is nearly a circle.