One day, you won't be able to delete your social network account anymore. There will be a delete button, but the account will stay, and it will keep posting after you're gone. It won't care whether you are doing something else entirely or whether you're dead; the show will go on.
The shareholders will be content, because they see value in that. The users might not, but not many of them are actual humans anymore; nowadays they're mostly AI. Who has time to read and/or post on social media? Just ask your favorite AI what the hottest trends on social networks are; that should suffice to scratch the itch.
I made a tiktok account to write a comment on a video I hated. Now when I sign in again I am presented with lots of awful videos from the guy I dislike. I cannot delete my viewing history using the website, and following other accounts doesn't remove the obsession tiktok has with always showing me his videos as the default.
I'm not installing the app, so the only way around this is to delete my account completely.
> the only way around this is to delete my account completely
You can choose the option to tell TikTok you are 'not interested' in videos like these, or block the account entirely. There are legitimate criticisms about social media algorithms, but I don't understand why you jump to the conclusion that you have to delete your account.
Just recently, Twitter started making the default view "For You" instead of "Following" with no way to switch back. Fortunately there's an extension that fixes that and lets you eliminate the For You view entirely.
I'm having trouble finding it now but I recall a mostly dead physics forum using LLMs to make new posts under the names of their once prolific users. So this has already happened at least on a small scale.
It seems nuts to me shareholders would be happy about a bunch of fake users, at least ones that don't have any money.
So I worked on a comparison shopping website.
We crawled the Internet, identified stores, found item listings, extracted prices and product details, consolidated results for the same item together, and made the whole thing searchable.
And this was the pre-LLM days, so that was all a lot of work, and not "hey magic oracle, please use an amount of compute previously reserved for cancer research to find these fields in this HTML and put them in this JSON format".
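(To make the contrast concrete, here's a minimal sketch of the kind of hand-written, per-store extraction that era involved. The selectors, field names, and sample HTML are invented for illustration, not taken from the actual site.)

    # Hypothetical pre-LLM extraction: hand-tuned rules for one store's markup,
    # pulling a couple of fields out of product HTML and emitting JSON.
    import json
    import re

    def extract_listing(html: str) -> dict:
        """Per-store scraper: regexes written and maintained by hand."""
        title = re.search(r'<h1[^>]*class="product-title"[^>]*>(.*?)</h1>', html, re.S)
        price = re.search(r'<span[^>]*class="price"[^>]*>\s*\$?([\d.,]+)', html, re.S)
        return {
            "title": title.group(1).strip() if title else None,
            "price": float(price.group(1).replace(",", "")) if price else None,
        }

    sample = '<h1 class="product-title">Acme Widget</h1><span class="price">$19.99</span>'
    print(json.dumps(extract_listing(sample)))  # {"title": "Acme Widget", "price": 19.99}

Multiply that by every store's different markup, plus consolidating the same item across stores, and you get a sense of the work involved.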
We never really found a user base, and neither did most of our competitors (one or two of them lasted longer, but I'm not sure any survived to this day). Users basically always just went to Google or Amazon and searched there instead.
However, shortly after we ran out of money and laid off most of the company, one of our engineers mastered the basics of SEO, and we discovered that users would click through from Google to an item listing on our site, then click through to make a purchase at a merchant site, and we became profitable.
I suppose we were providing some value in the exchange, since the users were visiting our item listings which displayed the prices from all the various stores selling the item, and not just a naked redirect to Amazon or whatever, but we never turned any significant number of these click-throughs into actual users, and boy howdy was that demoralizing as the person working on the search functionality.
Our shareholders had mostly written us off by that point, since comparison shopping had proven itself to not be the explosive growth area they'd hoped it was when investing, but they did get their money back through a modest sale a few years later.
I don't even think it's a secret anymore, open or otherwise. As long as Vanity Metric Goes Up, it doesn't seem like anyone, anywhere, actually cares how much "economic activity" is fake.
It seems like it's fiction, from what I could find. I was doubting it at times, but given how old it is, it feels like some of the tech wasn't quite there back then.
It's fiction, there's breadcrumbs at the top that list it as in the "Fiction" category. qntm is good at plausible sci-fi, e.g. https://qntm.org/mmacevedo
Someone created a tiktok account using my email address. Tiktok won’t let me delete the account without first verifying it with my phone number. I refuse to give tiktok my phone number because I don’t want my phone tied to social media. I don’t have tiktok (or any other social media accounts) and don’t look at it. But I’m stuck getting several email notifications a day from them.
Not quite what you’re saying, but a couple of steps in that direction.
It might not even be an actual account. Tiktok does the LinkedIn and Facebook "growth hack" thing of pushing users to let it slurp their entire address book on their phones. One of the reasons that it "requires" a phone number is to do address book graphing. Tiktok will send the email addresses it collects a "Hey, your friend X is on Tiktok" message to try to drive more accounts. All it takes is one friend/acquaintance to click yes on "Allow Access to Contacts" and your email address is considered fair game to spam "on behalf of" your friends.
Someone signed up for a Walmart account with my email address. Once every few weeks they order either sex toys, Dolly Parton paraphernalia, or beef jerky in incredible quantities, or some combination of the above, and I get the email receipt.
I am never, ever requesting that they delete the account.
I saw some of this stuff browsing other people's "Christmas wish lists", but I believe it's some kind of marketing attempt to probe your interests. "Hey, other people are buying this. Doesn't it look good/exciting/tasty?"
Poisoning data used by data brokers has been a tactic for at least a decade
If anyone using palantir wants to draw incorrect conclusions based on unverified data, the impact to them is certainly going to be worse than it is to any of us normal citizens
I don't think so. If your credit is impacted because someone made a mistake, that still fucks you over. It doesn't matter if it's real or not, because the entire point of centralized data collection and analytics is that you don't need to care; the people doing the collecting and analyzing do it for you. So you just trust them with whatever. It's on YOU, the consumer, to catch these mistakes and spend a painstaking amount of time trying to fix them, and ultimately the consumer is the only one who will face any consequences. And when it comes to credit, these consequences are very material. It means maybe you can't get a car, or a home, or even a job these days. I know my job ran a credit check.
If we embed these new-age data collection and analysis companies like Palantir and Flock in our systems, a lot of people will suffer, and I don't think anyone cares.
A lot of this stuff happens at a level above the legal system - no amount of juries will save you. That's one of the troubles with offloading essential shit to the private sector. The private sector is almost the wild west. We have EULAs and terms of service, so basically nothing.
My credit example is actually giving the opponents too much credit here. The bureaus are kinda government. Even that is better!
It's also not just about the juries. I recall when the "stingray" fake cell tower thing was first spreading across police departments, there were articles about how some decided not to prosecute because the defendant had good lawyers that would require the whole setup being exposed. Now there is a lockdown mode on Apple that disables 2G (maybe also on Android, but I'm not sure).
> But I'm stuck getting several email notifications a day from them.
I have a cellular hotspot with a phone number apparently recycled from someone who still has it tied to a fintech account (Venmo, or something similar). Every time this person makes a purchase, my hotspot screen lights up with an inbound text message notification.
This person makes dozens of purchases each day, but unlike my previous hotspots, this one does not have a web interface that allows me to log in and see the purchase confirmations. All I get to see is "Purchase made for $xx.xx at" on the tiny screen several dozen times a day.
Report the emails as spam, report the sender address to spamhaus. When enough people do this and tiktok's emails stop getting delivered, a one-click unsubscribe button in the email body that actually works will very quickly be born.
I don't know, I've heard for years that everything you write will be on the internet forever, but in my experience it's the opposite. I tried looking into my old Blogger, Photobucket or AIM conversations and they're nowhere to be found.
Sure, maybe they exist on some corporate servers from when the companies were sold for scraps, and I suppose they could resurface if I became famous and someone wanted to write an exposé about my youthful debauchery, but for all practical purposes all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
About 20 years ago, my name showed up on a handful of websites that I could find. Was related to school activities I participated in. Used to surprise me then.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone and it's as if I no longer exist according to Google.
Instead a new breed of idiots with my name have their life chronicled. I even get a lot of their email because they can’t spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
There seem to be quite a few negative comments on this post.
Regardless of the title and the full story, I mostly feel empathy for the writer of the article.
Half a year ago a colleague of mine told me, in tears, that her spouse suddenly left her after living together for 7 years. I felt her pain and cried with her; I tried to comfort her with kind words. I had a nightmare that night. She's truly a very good person and I still feel very sad for her. She told me later that his personality suddenly seemed to have changed, he was not the man she used to know.
The bottom line of what I want to say: please have empathy with people going through a break of relationships, even if such things happen every day. Be thankful if you are in a good relationship.
> I already felt immense pain and anger by the decision of my husband to suddenly end our marriage. And now I feel a double sense of violation that the men who design and maintain and profit from the internet have literally impersonated my voice behind the closed doors of hidden metadata to tell a more palatable version of the story they think will sell.
That's a bit dismissive of women, does she think that women aren't capable of designing and maintaining software too?
It's easier to swallow when you can blame a group of people for things that are bad. You can other them and not sit with the possibility that people who look like you (maybe even are you) can also do things that harm others. I couldn't do something bad, I'm not one of _them_.
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
That’s an understandable misreading of what she said. I appreciate that the rhetorical effect could feel like a sly way to slip in asymmetric gender standards about how to interpret divorce.
But I am a pedantic person who prefers to focus on the literal statements in text rather than the perceived underlying emotional current. So I’ll pedantically plod through what she actually said.
She’s dealing with two dimensions of divorce: who initiated it (husband, wife, or collaborative), and whether it was surprising or unsurprising.
That gives several possibilities, but she lists three. What unifies them is that they are all written from the perspective of the abstract woman undergoing the experience.
1. Woman initiated, surprise unspecified.
2. Collaborative, so assume unsurprising.
3. Man initiated, surprising (her situation).
She doesn’t claim this covers all possibilities. The point of that bit is just to emphasize that divorces are different, and to object to treating them as a genre for wellness AI slop.
Here is the original text containing that part so others can easily form their own opinion.
“I also object to the flattening of the contours of my particular divorce. There are really important distinctions between the experiences of women who initiate their own divorces versus women who come to a mutual agreement with their spouses to end the marriage versus women, like me, who are completely blindsided by their husbands’ decisions to suddenly end the marriage. All divorces do involve self-discovery and rebuilding your life, but the ways in which you begin down that path often involve dramatically different circumstances.”
It makes perfect sense if you include the two sentences before your quote:
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad, it's the fault of patriarchy. A woman does something bad, it's also men's fault because patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
I disagree with her argument as well but it’s a huge leap from that to “I understand why the husband wanted a divorce.” That’s a pretty shitty thing to say (especially given the trauma of the divorce she writes about) and has nothing at all to do with what she’s saying.
Marriage is surely the #1 patriarchy control scheme?
I feel terrible asking whether her accusation against Instagram is true... The comments below https://news.ycombinator.com/item?id=46354298 discuss how she might be mistaken. I can't read the 404media link she referred to, but that appears to be about retitling headlines, and not about rewriting content.
If Meta were generating text, how would Meta avoid trouble with the Section 230 carveout?
A more cynical me would think they were just trying to juice links for their SEO.
I probably shouldn't be commenting on human slop - that doesn't help either.
It's the bigotry of low expectations that the right often accuses the left of (arguably justifiably). Each side has their shibboleths and hypocrisies, and this is a very "left" one. Everything is the fault of the "other", in this case "all men", apparently.
As someone else said, the red flags of insufferability abound here, first and foremost with announcing something like this which is as personal and momentous as it is, on public social media.
The internet and the web were, and still are, made by and for white rich men. It's not about who can be a prick, everyone can. It's about what the ecosystem pushes towards, and it's not the safety and general good life of women, black people, handicapped people, etc...
Individual actions like this will never do anything, because the average person is not going to spend hours upon hours investigating platforms. They just want an easy way to connect with their friends and family, follow artists, etc.
Which is why I think the only solution has to come at the governmental regulatory level. In “freedom” terms it could be framed as freedom from, as in freedom from exploitation, unlawful use of data, etc. but unfortunately freedom to seems to be the most corporate friendly interpretation of freedom.
This is correct. Which is why "vote with your wallet" is also a flawed strategy. At the scale these companies are operating, individual action does not move the needle, and it is impossible to coordinate enough collective action to move the needle.
There is no feasible way for a normie like me to convince enough people to take any kind of action collectively that will be noticed by FAANG.
I think we like to pretend otherwise, like oh if enough people stop using Instagram, they will fail. This is only true in the most literal sense, because "enough" is an enormous number, totally unachievable by advocacy.
"Vote with your wallet" implies that the rich deserve more votes. Individual action in dollars per vote simply can't matter against the rivers of wealth in ad spend and investors. It's not just a flawed strategy, but sometimes believing in "vote with your wallet" signifies consent or at least complicity that the advertiser buying a lot of ads or the rich idiot with a lot of money invested in gaining your private data "should" win.
We need far better strategies than "vote with your wallet". I think it is at least time to get rid of "vote with your wallet" from our collective vocabularies, for the sake of actual democracy.
Sayings like "Vote with your wallet" come about as a byproduct of living in an economic system that is on its face democratic and capitalist yet somehow still concentrates political and market power in the hands of a few.
If something is bad, it's said that the free market will offer an alternative and the assumed loss of market share will rein in the bad thing. It ignores, as does most un-nuanced discourse about economy and society, that capitalism does not equate to a free market outside of a beginner's economics textbook, and democracy doesn't prevent incumbents from buying up the competition (FB/Instagram) or attempting to block competition outright (Tiktok).
>They just want an easy way to connect with their friends and family
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
I got my parents and siblings all to use Signal by refusing to use WhatsApp myself. And yet all of them still use WhatsApp to communicate among each other. They have Signal installed, they have an account, they know how to use it, and yet they fall back to WhatsApp. Some people really do want to choose Hell over Heaven.
The primary and most important feature of a messaging app is the ability to message a lot of people.
Signal is the best messaging app, but not by metrics people use to measure messaging apps, because not a ton of people use it. I use signal, but I also still use SMS (garsp!) because ultimately sometimes I just need to send a message.
It sucks and it's stupid, what we need more than anything else, more than any app, is open and federated messaging protocols.
Great first step either way! The pressure for social conformity is a hell of a drug and I try to have compassion for those suffering from it, even as I try to gently encourage them to grow past it.
Correct. I was shocked when one of my non-technical family members moved over to Proton Mail. I was super proud of them even if it came from left field!
Owning your own site and using federated spaces like Mastodon is absolutely a healthier model, and I wish it were more viable for more people. But until discovery, reach, and social norms shift in a big way, a lot of folks are going to be stuck straddling both worlds
The platforms sell the convenience that one "only" has to write the post, yet the internet needs so much metadata, so they try to autogenerate it instead of asking for it. People are already put off by the need to write a bloody subject line for an email; imagine if they were shown what the actual "content" is.
About convincing: get the few that matter on deltachat, so they don't need anything new or extra - it's just email on steroids.
As for Mastodon: it's still someone else's system, there's nothing stopping them from adding AI metadata either on those nodes.
Delta.Chat is really underappreciated, open-source and distributed. I recommend you at least look into it.
Signal, on the other hand, is a closed "opensource" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the opensource part of it does not have a great track record (I remember some periods where the server, for example, was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
I don't even know what deltachat is; however, Signal was suspected from the start of being developed by the NSA (read the story about the founder and the funding from the CIA) and later received tens of millions of USD each year from the US government to keep running. So it is never an advisable option when the goal is to acquire some sense of privacy.
Nowadays even YouTube comments are more anonymous than using a "deltachat" or "signal". In the first case there is zero verification of their claims; in the second case there is plenty of evidence of funding from the CIA.
At least commenting from an unknown account on any random YouTube video won't land you immediately on a "Person of Interest" list, and your comments will be ignored as a drop of water in an ocean of comments.
This is such a recurring topic that it might be better for me to one day write a blog post that collects the details and sources.
In absence of that blog post:
Start at the beginning: how Moxie left Twitter, where he was director of cyber (a company nowhere near focused on privacy at the time), to found the Whisper Foundation (if memory serves me the right name). His seed funding came from Radio Free Asia, which is a well-known CIA front for financing their operations. That guy is a surf fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So, he used his CIA money to pay for everyone's trip and surf in Hawaii, which by coincidence also happens to be the exact location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden was working and siphoning data from there for a while).
Anyways: those geeks somehow happily combined wave surfing with deep algo development in a short time and came up with what would later be known as "signal" (btw, "signal" is a well-known keyword in the intelligence community, again a coincidence). A small startup was founded, and shortly after that a giant called "whatsapp" decided to apply the same encryption from an unknown startup to the billion-person audience of their app. Something for sure very common to happen, and for sure without any backdoors of the kind developed in Hawaii for decades before any outsiders discover them.
Only TOR and a few new tools remain funded; signal was never really a "hit" because most of their (target) audience insists on using Telegram. Whatsapp, which uses the same algorithm as signal, recently admitted (this year) that internal staff had access to the supposedly encrypted message contents, so there goes any hope for privacy from a company that makes its money from selling user data.
Not discounting the suggestions and implications there, for all we know all of that could be true, but that's still a tremendous amount of speculation. And the fact itself that the US gov and US institutions have invested in cryptography or anything at all doesn't automatically make those investments "tainted" (for lack of a more inspired word).
I'd be interested in reading that blog post eventually.
Most people write to be read. Sure, I can write on my own blog, but no one would read it (not that my social media is much more worth reading, though).
Plus, what about videos? How is a non-tech savvy creator going to host their content if it's best in video format?
I left Insta the day FB bought it; closed my FB, twitter, and Google accounts a couple of years later; WA was the hardest to leave, I'll grant. Since I left, I've used: phone; email; Signal; Telegram; letters; post cards; meeting up in person; sms; Mastodon; tried a couple of crypto chats. There are so many options it's not worth worrying about.
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kind of "what ifs".
How about close friends who live on the other side of the world?
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
BTW HN also messes with your submissions/posts in your name. They change the date and the submission text, while keeping your name.
While I don't think it has a high risk of causing anyone any harm, I kinda hate it, like I DID NOT POST A SUBMISSION WITH THAT TITLE and I MADE THAT COMMENT AT A DIFFERENT TIME. I'd prefer if texts that are altered got a [last edited at [date] by moderator] stamp.
I'm a little bit confused about what's going on here. Is this nothing more than an LLM-generated summary of her post? She shows the metadata but also shows it coming up in the post. I don't use any of these apps so I'm not really sure what a normal user would have seen. ie, would that text have been appended visibly to her post, making users think she wrote that, but also have been in tags which would have optimized for search engines?
Either way, I don't know what to tell people. Social media exists to take advantage of you. If you use it, your choices are "takes more advantage" vs. "takes less advantage," but that's as good as gets.
It looks like it's a third-party UI, her Mastodon client, using the description metadata in a way that kind of makes it look like that metadata is part of the post.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.
Author posted to Instagram >
Author shared the Instagram link on Mastodon >
Mastodon mobile app unfurled the link into a preview >
app concatenated mystery text from a hidden metadata field in the Instagram page > turns out Meta's LLM wrote first-person inspiration slop in the "I" voice for SEO > Author feels impersonated
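(For anyone wondering how hidden metadata ends up looking like part of the post: a link-preview generator typically just fetches the page and reads its Open Graph meta tags, using og:description as the preview text. A minimal sketch below; the page snippet and the og:description text are invented examples, not the actual Instagram markup or any client's real code.)

    # Minimal sketch of a link unfurler: collect og:* meta tags and show
    # og:description as the preview text under the shared link.
    from html.parser import HTMLParser

    class OpenGraphParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.og = {}

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            attrs = dict(attrs)
            prop = attrs.get("property", "")
            if prop.startswith("og:") and "content" in attrs:
                self.og[prop] = attrs["content"]

    # Hypothetical page head; the AI-written first-person summary reportedly
    # lives in the og:description content on the Instagram page.
    page = """
    <html><head>
      <meta property="og:title" content="Instagram post by example_user" />
      <meta property="og:description" content="I never expected my marriage to end, but here is what I learned..." />
    </head><body>...</body></html>
    """

    parser = OpenGraphParser()
    parser.feed(page)
    print(parser.og.get("og:description", ""))  # rendered as the preview text

The unfurling step is standard behaviour for og:description; the surprising part is what Meta chose to put in that field.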
It's unacceptable that Meta did something like this.
But this doesn't change the fact that she shouldn't share anything personal on social media. Consider social media the new "streets": a street with dim lights, or an alley you go to at 3am to shout something or show your images/videos to strangers. This is exactly what you should keep in mind before you share anything personal on social media.
And either way, who wants to be an unpaid Meta employee that provides any kind of content for free?
Even if you don't care much about your own privacy, sharing too much or too widely can lead to a loss of privacy for everyone.
Much of privacy law is based on a "reasonable" expectation of privacy. What counts as "reasonable" can change depending on what people in general believe it to be.
Here's an essay [1] by an appeals court judge from 2012 for some more on this.
I'll lay odds that the Meta employee who made the decision to do this has an HN account. I notice how quickly this story is descending through the pages. It's already off the front page.
Generally when I post something unflattering about Tesla, Meta or Airbnb it gets flagged. Just a pattern I've noticed. The Tesla ones get flagged the fastest!
So you don't know why exactly this story was "taking a dive", but regardless you're going to assume the Meta employee responsible for the feature has an HN account and is somehow causing it?
Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
No, but if it was flagged, it's possible that it was by Meta employees. I can't see why it would be flagged, otherwise. I'll bet there's enough for a good flash mob, like the Elon stans that always flag down stories critical of him.
>Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
Sorry, I thought you were the OP, which made the claim of
>Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
It's always so wild to see the denial of what is almost certainly going on. "Article critical of tech company X or tech celebrity Y gets quickly buried in flags" is a tale as old as time here. It is not a huge leap of faith to suppose that the flagging activity often comes from employees or fans who have an interest in burying criticism. But if you mention it, people act as though you are talking crazy. That Meta employees, with a strong incentive to see Meta succeed, would NEVER stoop to flagging an article critical of Meta or demonstrating Meta's wrongdoing. My goodness! How could one even imagine that could happen??
Technically, no we don't know this is going on. Only HN's admins can know this. But come on...
Agree with both - it's a shitty thing for the company to do.
But I do not understand why someone who's so passionate about the issues raised in the post would do something as silly as post this on a Meta-owned property at all. The end result is blindingly obvious, and anyone who doesn't expect exactly this is living in a bizarre fantasy-world, where social media (and moreso Meta-owned social media) isn't inherently evil and run/maintained by evil people (and yes, I understand the irony).
I've noticed for a while these bizarre and sometimes inaccurate AI summaries of pages showing up in Google search results where matches to actual page content used to be. I assumed it was Google generating them, not the 3rd parties themselves. Why does Google even allow indexing on text that isn't really on the page?
“I posted content to a proprietary social network, then got upset when it generated a page description with AI”
Sure, the description is garbage, it may not be obvious it’s not written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only even if I feel that there are other things in the post that are terribly annoying.
When the behavior is not only something you "don't like" but is also (as this woman perceives it) a professional threat (she makes a living out of carefully choosing her words; she felt this attributed to her words she would never have said) and furthermore is unexpected, simply quietly leaving the platform seems insufficient. One ought to warn other users about the unexpected dangerous practice -- which is precisely what this article accomplishes!
> I share my pain publicly as a gesture of solidarity with other people, but especially women, who have been profoundly traumatized by those they thought they could love and trust.
This is about her husband divorcing her. I find this to be a very unfair way to frame someone else's decision to not spend their life with you anymore. Your partner does not owe you a relationship. Interestingly it is not even me coming up with the word "framing". She herself describes her Instagram post as deliberate framing.
She also claims that the AI chose words dismissive of her pain because she is a woman (rather than just because it's fake-positive corpo slop) and does not substantiate that in any way.
I'm all against this AI slop BS, especially when it's impersonating people. The blog post is mostly not about that.
If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.
there is a part of the marriage vows where a loving couple promise each other til death do us part ... it's selfish to the max to go back on a promise like that for a reason outside of your partner's control ... after retyping this a dozen times to stamp the snark out, I am now genuinely curious as to what has reversed the victim role in your mind ...
People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism. It goes both ways.
You cannot control that you will love someone forever, so you cannot promise that. What you can promise someone is that you plan on spending the rest of your life with them and that you have so much love that you trust it will last forever. Sometimes that does not work out. That is no one's fault and no one owes to anyone to stay together with a person they no longer love.
> You cannot control that you will love someone forever, so you cannot promise that.
Yet, people routinely do in wedding vows. Maybe that tradition should end. Maybe the traditional wedding vows should be changed to "Hey, we'll give it a shot but no promises!"
I'm a big fan of annual contracts versus the whole wedding model. As your next anniversary approaches you review your contract and decide to re-up with the same terms, make changes, or dissolve the marriage.
> People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism.
And it has been one of the greatest mistakes humanity has ever made. If there is a good reason, sure, you cannot be expected to live with someone who has been cruel or irresponsible towards you. But no-fault divorce just because you got bored? Fuck off, you made a commitment at the time. Relationships do take work, always have and always will. Especially when there are children a no-fault divorce is pure selfishness.
With that said, we only know one side of this story, so I'm not going to argue for either side in this particular case. I'm talking in general here.
That's just not how humans work. Love can fade. People change. It's the natural course of things. Sometimes there is just no one at fault for love being lost and no way to prevent that. We just gave up the illusion that love in marriage is always forever.
love is not an emotion, love is willing sacrifice, breaking a lifelong commitment because of faded emotions is pathetic and has nothing to do with your feminism argument
The problem here is the usage of "no-fault". It can be interpreted differently by everyone.
Does fault only include cheating? Can the fault be on the same one who initiated the divorce? What if the fault is simply someone has changed so much that they're no longer compatible with person they fell in love with before? The fault could be on oneself without any inkling of infidelity.
Til death do us part has been ironically dead for decades now since people have been divorcing at high rates for long enough that it doesn't really mean much anymore, and that's okay. Things change.
>The problem here is the usage of "no-fault". It can be interpreted differently by everyone.
No, it's a legal term. From wikipedia:
>No-fault divorce is the dissolution of a marriage that does not require a showing of wrongdoing by either party.[1][2] Laws providing for no-fault divorce allow a family court to grant a divorce in response to a petition by either party of the marriage without requiring the petitioner to provide evidence that the defendant has committed a breach of the marital contract.
It quite literally means that people can request divorce for any reason.
It might be painful short term, but excellent long-term. Many people have already realized they gave away control over many aspects of their lives, especially the most important one, attention, to big corporations who are exploiting whatever they can ruthlessly. Many people already quit Facebook and the like; the ones who remain are bound to experience quite a few surprises.
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Companies putting words in people's mouth on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Obviously I am putting words in the author's mouth here, so take with a grain of salt, but I think the reasoning is something like: such LLM-generated content disproportionately negatively affects women, and the fact that this got pushed through shows that they didn't take those consequences into account, e.g. by not testing what it would look like in situations like these.
> Ahead of the International Women's Day, a UNESCO study revealed worrying tendencies in Large Language models (LLM) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.
> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.
> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
The question is whether these LLM summaries disproportionately "impact" women, not whether LLMs describe women as more often working in domestic roles.
Unfortunately I can't provide that, since I'm merely trying to come up with the reasoning of the author. If they have sources, though, that could lead to this reasoning.
> Did Instagram have their LLM analyze the post and then only post generated slob when it concluded the post came from a woman? Certainly not.
I actually am sympathetic to your confusion—perhaps this is semantics, but I agree with the trivialization-of-the-human-experience assessment from the author and your post; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
The misleading aspect is that the AI generated content was in first person, so any reasonable reader would falsely attribute the statement to the person involved, when in fact it was concocted entirely by Meta's AI.
The dystopian part isn't that AI impersonation is possible - we've known that for years. It's that Meta proactively created an AI profile without explicit opt-in, using someone's personal photos and life events to train a simulacrum that interacts with their actual friends. This crosses a fundamental consent boundary that feels qualitatively different from "AI suggested you write this reply."
The legal framework is completely unprepared for this. Current identity theft laws require financial harm or fraud intent. But what's the legal status of an AI that impersonates you with your own data on a platform you actively use? It's not fraud in the traditional sense, but it's definitely some kind of identity violation. We need new categories: "computational identity theft," "algorithmic impersonation," something that recognizes the harm of having your digital self puppeteered by a corporate AI.
The metadata implications are worse than people realize. Even if you never post personal content, Meta can infer relationship status, location patterns, health issues, political leanings from likes, tags, and behavioral signals. An AI profile built from that could plausibly interact in your name with significant accuracy. The person being impersonated might not even know unless someone explicitly asks "wait, did you really say that?"
The immediate solution is legislation requiring explicit opt-in for any AI feature that generates content attributed to a user's identity. No defaults, no dark patterns, no "we'll enable it and let you opt out later." But the deeper problem is the power asymmetry - these companies own the platforms and the data, so they define what's acceptable. We need data portability rights and mandatory AI disclosure so users can at least migrate to platforms that don't pull this.
It's not uncommon. My cousin sent out a Christmas card announcing her divorce - I think it stops a lot of 1-1 conversations with people which can be quite draining when you're already pretty raw.
What is the alternative to announcing a divorce? Keeping it secret? Not using social media to communicate?
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprung it on her, nor during the process. She wrote this piece after it had been finalized.
I guess a private announcement makes more sense to people than a public announcement, unless you wanted to make a blog post about a phenomenon related to it, which she appears to be trying to do
Well, ask yourself why they post. Also, did you read the article? Not only did this woman post it online, she cross-posted it to a litany of online services. Who, exactly, is she informing?
That’s a pretty horrifying story, and Meta’s crassness is kind of stunning. It sort of reminds me of the old “Clippy Helps with A Suicide Note” meme.
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
I’m thinking that Facebook could do something like slightly alter the text in your posts, to incite rage in others. They already arrange your feed to induce “engagement” (their term for rage).
For example, if you write a post about how you failed to get a job, some "extra spice" could be added, implying that you lost out to an immigrant, or that you are angry at the company that turned you down, as opposed to just disappointed.
Meta added it in a "<meta>" tag (no pun intended) intended for search engines. And some other app crawled it and displayed it in the main text. Not defending Meta, but the text is not visible in Instagram or any other Meta app.
og:description is exactly the meta tag to use for link descriptions in embeds. Not all meta tags are only for search engines. The app acted correctly here.
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Incredibly sorry this happened to you. Unfortunately, Silicon Valley could not care less. Consent is not a concept they understand.
I haven't posted on IG for years, but I read it sometimes, and I see that a slop description is added below some (not all) posts. I assumed that it was something creators had added manually, but now you are telling me that Facebook does it automatically?
I’ve been noticing DuckDuckGo search results increasingly frequently doing this. They used to either use the <meta name=description> (which is subject to abuse by the site) or show an excerpt from the page text highlighting the keyword matches (which is often most helpful), but from time to time now I see useful meta descriptions or keyword matches sidelined in favour of what I presume is Microsoft-generated clickbaity slop of a “learn more about such-and-such” kind, occasionally irrelevant to the actual article’s text or even inconsistent with it.
Social media users are unpaid employees of tech companies. Why any rational person volunteers for this dull work is frankly confusing to me. Even more confusing is when the same people try to moralize the predictably horrifying results of their work.
This article confirms all the reasons I stay away from social media platforms. What happened in this situation is awful. It also makes clear that even where legal bounds may have been crossed, it doesn’t really matter because who has the time, energy, and financial resources to challenge them? The big platforms know this and will continue to exploit not just user-created content, but the user’s own hard-earned reputation in order to feed more drivel to the masses.
In the very first place... what's the freaking point of announcing the divorce on social media??? Why? Especially on social media run by people known for having problems with moral behavior (ask the Winklevoss brothers), where 3/4 of the platform is either scam/fraud or infomercial.
There are people who love divorces, love interacting with them, and love watching people go through them. It’s a cottage market in gossip futures. Social media is designed around gossip futures by people with questionable character. So you answered your own question ;) more shit piled onto the heap!
2. You have a public image that includes you being married, and social media is one of the main channels over which you reach the people who know you. Now you get divorced and you do not want these people to have the false image of you being happily married, or to keep getting comments referencing your marriage.
Could be a lot easier to rip off the band-aid all at once rather than pen a hand written note to dozens or more mutuals with subtle hints of "please stop sending couples invitations to social events"
Surely, if the slop is generated by looking at the image and the text, then it seems someone could manipulate it into hallucinating all manner of wonderful things.
Another thing I've noticed recently: on YouTube my feed is suddenly full of AI fakes of well-known speakers like Sarah Paine, an eminent historian who talks about Russia and the like. There's all this slop with her face speaking, titled "Why Putin's War Was ALWAYS Inevitable - Sarah Paine", but with AI-generated words. They usually say somewhere in the small print that it's an AI fan tribute, but it's all a bit weird.
(update: the videos now say 'video taken down', but they were up for a while)
Classic "any interaction is positive interaction". That's modern platforms to you.
Do not try LinkedIn. Not even once.
Facebook is spooky that way.
They track and log every reel viewed.
I suppose everyone does it but actually seeing it is another level of creepy.
PhysicsForums and the Dead Internet Theory -- https://news.ycombinator.com/item?id=42816284
Remember Twitter a few years ago?
Users are $$$. Nobody wants to talk about which are human and which aren’t. It’s all a game of hot potato.
I don’t think advertisers want to spend money for bot clicks.
Or maybe they do, because the big open secret in advertising is that they do spend quite a bit of money on exactly that
They don't know (and don't ask enough questions to know!) about the fake users - it's "no one look down".
As long as no one figures out it’s all fake, the line can keep going up and to the right and everyone is happy.
Anyone who starts asking hard questions may be up first on the chopping block.
Unless the line breaks, then bam. Everyone rushes to be the first for the door as the bubble pops.
The future is Google People: https://qntm.org/perso
This is wild, never even heard of it but kind of tracks
https://news.ycombinator.com/item?id=37723862
It reads like a creepypasta, which is cool, but definitely not convincing.
Social media was a mistake.
Until an insurance company or palantir treats that data as your own, and you reap the consequences. Hope the LULs were worth it, though ;)
I hate to say it, but that ship has sailed. Nobody responded when Snowden blew the whistle and here we are today
Poison their data. If they have evidence against you, and you can prove their data is even partially bad, you have your reasonable doubt.
Juries are increasingly on the side of the citizen, which is better than nothing
A lot of this stuff happens at a level above the legal system - no amount of juries will save you. That's one of the troubles with offloading essential shit to the private sector. The private sector is almost the wild west. We have EULAs and terms of service, so basically nothing.
My credit example is actually giving the opponents too much credit here. The bureaus are kinda government. Even that is better!
It's also not just about the juries. I recall when the "stingray" fake cell tower thing was first spreading across police departments there were articles about how some decided not to prosecute because the defendant had good lawyers that would require the whole setup being exposed. Now there is a Lockdown Mode on Apple devices that disables 2G (maybe also on Android, but I'm not sure).
> But I’m stuck getting several email notifications a day from them.
I have a cellular hotspot with a phone number apparently recycled from someone who still has it tied to a fintech account (Venmo, or something similar). Every time this person makes a purchase, my hotspot screen lights up with an inbound text message notification.
This person makes dozens of purchases each day, but unlike my previous hotspots, this one does not have a web interface that allows me to log in and see the purchase confirmations. All I get to see is "Purchase made for $xx.xx at" on the tiny screen several dozen times a day.
Report the emails as spam, report the sender address to spamhaus. When enough people do this and tiktok's emails stop getting delivered, a one-click unsubscribe button in the email body that actually works will very quickly be born.
There's a short story with a similar plot from "Valuable Humans in Transit" by qntm.
It’s still up on his website as well: https://qntm.org/perso
At that point, deleting an account stops being an act of exit and starts looking like a philosophical disagreement the platform simply ignores
The first paragraph sounds very much like the plot of Cam (2018).
I don't know, I've heard for years that everything you write will be on the internet forever, but from my experience, it's the opposite. I tried looking into my old Blogger, Photobucket or AIM conversations and they're nowhere to be found.
Sure, maybe they exist on some corporate servers from when the companies were sold for scraps. And I suppose they could resurface if I became famous and someone wanted to write an expose about my youthful debauchery, but for all practical purposes all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot at Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
Information storage is the same as fertility.
"If you want to have a baby, you won't be able to conceive. If you want to stay childfree, the condom will break."
If you want to find old logs of your IRC and AIM buddies from 20 years ago, they're gone. If you say something stupid once, it's kept forever.
About 20 years ago, my name showed up on a handful of websites that I could find. Was related to school activities I participated in. Used to surprise me then.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone and it’s as if I no longer exist according to Google.
Instead a new breed of idiots with my name have their life chronicled. I even get a lot of their email because they can’t spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
I almost no longer exist and it’s kinda nice.
Only PeopleFinder and such show otherwise.
Users will be content too, corporations will find a way to do that.
Via discounts, promo codes, gamification, whatever else they’re using today to get people to install their apps and sign over their privacy.
There seem to be quite a few negative comments on this post.
Regardless of the title and the full story, I mostly feel empathy for the writer of the article.
Half a year ago a colleague of mine told me, in tears, that her spouse suddenly left her after living together for 7 years. I felt her pain and cried with her, and I tried to comfort her with kind words. I had a nightmare that night. She's truly a very good person and I still feel very sad for her. She told me later that his personality suddenly seemed to have changed, he was not the man she used to know.
The bottom line of what I want to say: please have empathy for people going through the breakup of a relationship, even if such things happen every day. Be thankful if you are in a good relationship.
> I already felt immense pain and anger by the decision of my husband to suddenly end our marriage. And now I feel a double sense of violation that the men who design and maintain and profit from the internet have literally impersonated my voice behind the closed doors of hidden metadata to tell a more palatable version of the story they think will sell.
That's a bit dismissive of women, does she think that women aren't capable of designing and maintaining software too?
It's easier to swallow when you can blame a group of people for things that are bad. You can other them and not sit with the possibility that people who look like you (maybe even are you) can also do things that harm others. I couldn't do something bad, I'm not one of _them_.
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
That’s an understandable misreading of what she said. I appreciate that the rhetorical effect could feel like a sly way to slip in asymmetric gender standards about how to interpret divorce.
But I am a pedantic person who prefers to focus on the literal statements in text rather than the perceived underlying emotional current. So I’ll pedantically plod through what she actually said.
She’s dealing with two dimensions of divorce: who initiated it (husband, wife, or collaborative), and whether it was surprising or unsurprising.
That gives several possibilities, but she lists three. What unifies them is that they are all written from the perspective of the abstract woman undergoing the experience.
1. Woman initiated, surprise unspecified.
2. Collaborative, so assume unsurprising.
3. Man initiated, surprising (her situation).
She doesn’t claim this covers all possibilities. The point of that bit is just to emphasize that divorces are different, and to object to treating them as a genre for wellness AI slop.
Here is the original text containing that part so others can easily form their own opinion.
“I also object to the flattening of the contours of my particular divorce. There are really important distinctions between the experiences of women who initiate their own divorces versus women who come to a mutual agreement with their spouses to end the marriage versus women, like me, who are completely blindsided by their husbands’ decisions to suddenly end the marriage. All divorces do involve self-discovery and rebuilding your life, but the ways in which you begin down that path often involve dramatically different circumstances.”
It makes perfect sense if you include the two sentences before your quote:
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad, it's the fault of patriarchy. A woman does something bad, it's also men's fault because patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
I disagree with her argument as well but it’s a huge leap from that to “I understand why the husband wanted a divorce.” That’s a pretty shitty thing to say (especially given the trauma of the divorce she writes about) and has nothing at all to do with what she’s saying.
I can see why her husband divorced her. Best of luck to that guy from now on.
Marriage is surely the #1 patriarchy control scheme?
I feel terrible asking whether her accusation against Instagram is true... The comments below https://news.ycombinator.com/item?id=46354298 discuss how she might be mistaken. I can't read the 404media link she referred to, but that appears to be about retitling headlines, not about rewriting content.
If Meta were generating text, how would Meta avoid trouble with the Section 230 carveout?
A more cynical me would think they were just trying to juice links for their SEO.
I probably shouldn't be commenting on human slop - that doesn't help either.
It's the bigotry of low expectations that the right often accuses the left of (arguably justifiably). Each side has their shibboleths and hypocrisies, and this is a very "left" one. Everything is the fault of the "other", in this case "all men", apparently.
As someone else said, the red flags of insufferability abound here, first and foremost with announcing something as personal and momentous as this on public social media.
She sounds insufferable
The internet and the web were, and still are, made by and for white rich men. It's not about who can be a prick, everyone can. It's about what the ecosystem pushes towards, and it's not the safety and general good life of women, black people, handicapped people, etc...
If we write content for closed platforms known to do terrible things, I guess we should not be surprised when said platforms do terrible things.
I keep trying to convince people not to use Instagram, WhatsApp, Facebook, Twitter/X, but I'm not getting anywhere.
Write your own content and post it on your own terms using services that you either own or that can't be overtaken by corporate greed (like Mastodon).
Individual actions like this will never do anything, because the average person is not going to spend hours upon hours investigating platforms. They just want an easy way to connect with their friends and family, follow artists, etc.
Which is why I think the only solution has to come at the governmental regulatory level. In "freedom" terms it could be framed as freedom from: freedom from exploitation, unlawful use of data, etc. But unfortunately "freedom to" seems to be the most corporate-friendly interpretation of freedom.
This is correct. Which is why "vote with your wallet" is also a flawed strategy. At the scale these companies are operating, individual action does not move the needle, and it is impossible to coordinate enough collective action to move the needle.
There is no feasible way for a normie like me to convince enough people to take any kind of action collectively that will be noticed by FAANG.
I think we like to pretend otherwise, like oh if enough people stop using Instagram, they will fail. This is only true in the most literal sense, because "enough" is an enormous number, totally unachievable by advocacy.
"Vote with your wallet" implies that the rich deserve more votes. Individual action in dollars per vote simply can't matter against the rivers of wealth in ad spend and investors. It's not just a flawed strategy, but sometimes believing in "vote with your wallet" signifies consent or at least complicity that the advertiser buying a lot of ads or the rich idiot with a lot of money invested in gaining your private data "should" win.
We need far better strategies than "vote with your wallet". I think it is at least time to get rid of "vote with your wallet" from our collective vocabularies, for the sake of actual democracy.
Sayings like "Vote with your wallet" come about as a byproduct of living in an economic system that is on its face democratic and capitalist yet somehow still concentrates political and market power in the hands of a few.
If something is bad, it's said that the free market will offer an alternative and the assumed loss of market share will rein in the bad thing. It ignores, as does most un-nuanced discourse about economy and society, that capitalism does not equate to a free market outside of a beginner's economics textbook, and democracy doesn't prevent incumbents from buying up the competition (FB/Instagram) or attempting to block competition outright (Tiktok).
>They just want an easy way to connect with their friends and family
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
I got my parents and siblings all to use Signal by refusing to use WhatsApp myself. And yet all of them still use WhatsApp to communicate among each other. They have Signal installed, they have an account, they know how to use it, and yet they fall back to WhatsApp. Some people really do want to choose Hell over Heaven.
The primary and most important feature of a messaging app is the ability to message a lot of people.
Signal is the best messaging app, but not by the metrics people use to measure messaging apps, because not a ton of people use it. I use Signal, but I also still use SMS (gasp!) because ultimately sometimes I just need to send a message.
It sucks and it's stupid, what we need more than anything else, more than any app, is open and federated messaging protocols.
Great first step either way! The pressure for social conformity is a hell of a drug and I try to have compassion for those suffering from it, even as I try to gently encourage them to grow past it.
Correct. I was shocked when one of my non-technical family members moved over to Proton Mail. I was super proud of them even if it came from left field!
Owning your own site and using federated spaces like Mastodon is absolutely a healthier model, and I wish it were more viable for more people. But until discovery, reach, and social norms shift in a big way, a lot of folks are going to be stuck straddling both worlds
So many thoughts on this...
The platforms sell convenience: you "only" have to write the post, yet the internet needs so much metadata that they autogenerate it instead of asking for it. People are already put off by the need to write a bloody subject line for an email; imagine if they were shown everything that actually makes up the "content".
About convincing: get the few that matter on Delta Chat, so they don't need anything new or extra - it's just email on steroids.
As for Mastodon: it's still someone else's system, there's nothing stopping them from adding AI metadata either on those nodes.
How is mastodon someone else's system? You can host your mastodon server just like you can host your email server or matrix server.
And other mastodon servers, just like other email servers, can of course still modify the data they receive how they'd like.
Use them as the public toilet they are. Never put in any effort in anything you upload.
Why deltachat, an app I've never heard of before instead of Signal, which is also open source and at least has a bit of traction?
Delta.Chat is really underappreciated, open-source and distributed. I recommend you at least look into it.
Signal, on the other hand, is a closed "opensource" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the opensource part of it does not have a great track record (I remember periods where the server, for example, was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
I don't even know what Delta Chat is; however, Signal was suspected from the start of being developed by the NSA (read the story about the founder and the funding from the CIA) and later received tens of millions of USD each year from the US government to keep running. So it is never an advisable option when the goal is to acquire some sense of privacy.
This is the internet, you can use hyperlinks instead of making vague references.
> it is never an advisable option when the goal is to acquire some sense of privacy.
Would this depend on threat model?
Nowadays even YouTube comments are more anonymous than using "deltachat" or "signal". In the first case there is zero verification of their claims; in the second case there is plenty of evidence of funding from the CIA.
At least commenting from an unknown account on any random YouTube video won't land you immediately on a "Person of Interest" list, and your comments will be ignored as a drop of water in an ocean of comments.
It's been a while since I looked at it, but as far as I know Delta Chat is just email, so it's as anonymous as your email account, I'd gather.
"read the story about the founder and the funding from the CIA"
And where can I find such a story from a trustworthy source? A quick Google search instead turned up this:
https://euvsdisinfo.eu/report/us-intelligences-services-cont...
(Debunking it as russian information warfare)
This is such a recurring topic that it might be better for me to one day write a blog post that collects the details and sources.
In absence of that blog post:
Start at the beginning: how Moxie left Twitter as director of cyber over there (a company not at all focused on privacy at the time) to found the Whisper Foundation (if memory serves, that's the right name). His seed funding came from Radio Free Asia, which is a well-known CIA front for financing their operations. The guy is a surfing fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So he used his CIA money to pay for everyone's trip and surfing in Hawaii, which by coincidence also happens to be the exact location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden was working and siphoning data from there for a while).
Anyways: those geeks somehow happily combined surfing with deep algorithm development in a short time and came up with what would later be known as "Signal" (btw, "signal" is a well-known keyword in the intelligence community, again a coincidence). A small startup was founded, and shortly after that a giant called "WhatsApp" decided to apply the same encryption from an unknown startup to the billion-person audience of their app. Something that is for sure very common to happen, and for sure without any backdoors of the kind developed in Hawaii for decades before any outsiders discovered them.
Signal kept being advertised over the years as "private" to the tune of 14 million USD in funding per year provided by the US government (CIA) until it ran out some two years ago: https://english.almayadeen.net/articles/analysis/signal-faci...
Only TOR and a few new tools remain funded; Signal was never really a "hit" because most of their (target) audience insists on using Telegram. WhatsApp, which uses the same algorithm as Signal, recently admitted (this year) that internal staff had access to the supposedly encrypted message contents, so there goes any hope for privacy from a company that makes its money from selling user data.
Not discounting the suggestions and implications there, for all we know all of that could be true, but that's still a tremendous amount of speculation. And the fact itself that the US gov and US institutions have invested in cryptography or anything at all doesn't automatically make those investments "tainted" (for lack of a more inspired word).
I'd be interested in reading that blog post eventually.
Most people write to be read. Sure, I can write on my own blog, but no one would read it (not that my social media is much more worth reading, though).
Plus, what about videos? How is a non-tech savvy creator going to host their content if it's best in video format?
> I keep trying to convince people not to use Instagram, WhatsApp, Facebook, Twitter/X,
I'm with you, but WhatsApp is tough. How do you keep in touch?
I left Insta the day FB bought it; closed my FB, twitter, and Google accounts a couple of years later; WA was the hardest to leave, I'll grant. Since I left, I've used: phone; email; Signal; Telegram; letters; post cards; meeting up in person; sms; Mastodon; tried a couple of crypto chats. There are so many options it's not worth worrying about.
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kind of "what ifs".
How about close friends who live on the other side of the world?
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
So WhatsApp it is!
Yea, WhatsApp is the one that's difficult to leave behind.
The OP is also on Mastodon already, but social networks are ruled by their gravity well, unfortunately.
BTW HN also mangles your submissions/posts in your name. They change the date and the submission text, while keeping your name.
While I don't think it has a high risk of causing anyone any harm, I kinda hate it, like I DID NOT POST A SUBMISSION WITH THAT TITLE and I MADE THAT COMMENT AT A DIFFERENT TIME. I'd prefer if texts that are altered got a [last edited at [date] by moderator] stamp.
I guess I feel safer when a program returns true data, and timestamp mangling and moderator mangled posts scare me.
I'm a little bit confused about what's going on here. Is this nothing more than an LLM-generated summary of her post? She shows the metadata but also shows it coming up in the post. I don't use any of these apps so I'm not really sure what a normal user would have seen. i.e., would that text have been appended visibly to her post, making users think she wrote it, but also have been in tags optimized for search engines?
Either way, I don't know what to tell people. Social media exists to take advantage of you. If you use it, your choices are "takes more advantage" vs. "takes less advantage," but that's as good as it gets.
It looks like it's a third-party UI, her Mastodon client, using the description metadata in a way that kind of makes it look like that metadata is part of the post.
Auto-generating said description tag in the first person is a bit of a weird product decision - probably a bad one that upsets users more than it's useful - but the presentation layer isn't owned by Meta here.
Thanks for the explanation, that makes a lot of sense. I'll bet that when it's not a sensitive topic, this totally goes unnoticed by a lot of users. Frustratingly, I would imagine that the response from most people would just be that the LLM summarizations / metadata tagging should be censored in "sensitive cases," but will otherwise be accepted by the user base.
Author posted to Instagram > Author shared the Instagram link on Mastodon > Mastodon mobile app unfurled the link into a preview > app concatenated mystery text from a hidden metadata field in the Instagram page > turns out Meta's LLM wrote first-person inspiration slop in the "I" voice for SEO > Author feels impersonated
It's unacceptable that Meta did something like this.
But this doesn’t change the fact that she shouldn’t share anything personal on social media. Consider social media the new "streets": a dimly lit street or an alley you walk into at 3am to shout something or show your images/videos to strangers. This is exactly what you should keep in mind before you share anything personal on social media.
And either way, who wants to be an unpaid Meta employee that provides any kind of content for free?
Even if you don't care much about your own privacy, sharing too much or too widely can lead to a loss of privacy for everyone.
Much of privacy law is based on a "reasonable" expectation of privacy. What counts as "reasonable" can change depending on what people in general believe it to be.
Here's an essay [1] by an appeals court judge from 2012 for some more on this.
[1] https://www.stanfordlawreview.org/online/privacy-paradox-the...
I'll lay odds that the Meta employee who made the decision to do this has an HN account. I notice how quickly this story is descending through the pages. It's already off the front page.
>I notice how quickly this story is descending through the pages. It's already off the front page.
HN doesn't even have a downvote button.
HN does have flagging for submissions which can downrank them, and this submission is marked as [flagged] so that's probably what happened here.
Yes. I was wondering why it is flagged though.
Generally when I post something unflattering about Tesla, Meta or Airbnb it gets flagged. Just a pattern I've noticed. The Tesla ones get flagged the fastest!
I know.
It’s fascinating to see which stories take a dive.
So you don't know why exactly this story was "taking a dive", but regardless you're going to assume the Meta employee responsible for the feature has an HN account and is somehow causing it?
Huh?
Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
No, but if it was flagged, it's possible that it was by Meta employees. I can't see why it would be flagged, otherwise. I'll bet there's enough for a good flash mob, like the Elon stans that always flag down stories critical of him.
I'll bet they will also be flagging my comment.
It's nice to be loved...
This is precisely why I always use the active front page.
https://news.ycombinator.com/active
It is the answer to the question: "What stories do Hacker News users not want me to see?"
>Way to interpolate. Kudos to your reach! I just was pointing out that it's likely that the employee responsible was here. The diving is highly unlikely to be triggered by a single employee (unless it was that "employee").
Sorry, I thought you were the OP, who made the claim of
>I'll lay odds that the Meta employee who made the decision to do this has an HN account.
It's always so wild to see the denial of what is almost certainly going on. "Article critical of tech company X or tech celebrity Y gets quickly buried in flags" is a tale as old as time here. It is not a huge leap of faith to suppose that the flagging activity often comes from employees or fans who have an interest in burying criticism. But if you mention it, people act as though you are talking crazy. That Meta employees, with a strong incentive to see Meta succeed, would NEVER stoop to flagging an article critical of Meta or demonstrating Meta's wrongdoing. My goodness! How could one even imagine that could happen??
Technically, no we don't know this is going on. Only HN's admins can know this. But come on...
I flagged it, and I've never worked at Meta. I flagged it because:
1. I find her description of what happened here ("an AI impersonated me!!") to be inaccurate and misleading.
2. I find her blatantly misandrist victimhood stance to be disgusting.
Agree with both - it's a shitty thing for the company to do.
But I do not understand why someone who's so passionate about the issues raised in the post would do something as silly as post this on a Meta-owned property at all. The end result is blindingly obvious, and anyone who doesn't expect exactly this is living in a bizarre fantasy-world, where social media (and moreso Meta-owned social media) isn't inherently evil and run/maintained by evil people (and yes, I understand the irony).
I've noticed for a while these bizarre and sometimes inaccurate AI summaries of pages showing up in Google search results where matches to actual page content used to be. I assumed it was Google generating them, not the 3rd parties themselves. Why does Google even allow indexing on text that isn't really on the page?
Can someone smarter than me explain if/how Section 230 is relevant to this type of content that the platforms are, in fact, authoring and publishing?
Maybe I'm out of touch, but what has Instagram's SEO-spamming AI to do with patriarchy?
The last section of the blogpost.
”I posted content to a proprietary social network, then got upset when it generated a page description with AI”
Sure, the description is garbage, it may not be obvious it’s not written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
If you don’t enjoy using a service that does this to the content you post then don’t use that service.
I’ll stick to this point only even if I feel that there are other things in the post that are terribly annoying.
When the behavior is not only something you "don't like" but is also (as this woman perceives it) a professional threat (she makes a living out of carefully choosing her words; she felt this attributed to her words she would never have said) and is furthermore unexpected, simply quietly leaving the platform seems insufficient. One ought to warn other users about the unexpected, dangerous practice -- which is precisely what this article accomplishes!
That’s one approach. Another is that you can complain about things companies do that you don’t like.
> I share my pain publicly as a gesture of solidarity with other people, but especially women, who have been profoundly traumatized by those they thought they could love and trust.
This is about her husband divorcing her. I find this to be a very unfair way to frame someone else's decision to not spend their life with you anymore. Your partner does not owe you a relationship. Interestingly it is not even me coming up with the word "framing". She herself describes her Instagram post as deliberate framing.
She also claims that the AI chose words dismissive of her pain because she is a woman (rather than just because it's fake-positive corpo slop) and does not substantiate that in any way.
I'm all against this AI slop BS, especially when it's impersonating people. The blog post is mostly not about that.
>dismissive of her pain because she is a woman
That would probably be her default position: whoever it is did not sufficiently empathize, and only "I" can be the judge of what sufficient means.
If anything there's an interesting angle in the facts of this story about a new form of "mansplaining," but it's the algorithm doing "robosplaining" for the human race.
there is a part of the marriage vows where a loving couple promise each other "til death do us part" ... it's selfish to the max to go back on a promise like that for a reason outside of your partner's control ... after retyping this a dozen times to stamp the snark out, I am now genuinely curious as to what has reversed the victim role in your mind ...
People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism. It goes both ways.
You cannot control that you will love someone forever, so you cannot promise that. What you can promise someone is that you plan on spending the rest of your life with them and that you have so much love that you trust it will last forever. Sometimes that does not work out. That is no one's fault and no one owes to anyone to stay together with a person they no longer love.
> You cannot control that you will love someone forever, so you cannot promise that.
Yet, people routinely do in wedding vows. Maybe that tradition should end. Maybe the traditional wedding vows should be changed to "Hey, we'll give it a shot but no promises!"
I'm a big fan of annual contracts versus the whole wedding model. As your next anniversary approaches you review your contract and decide to re-up with the same terms, make changes, or dissolve the marriage.
> People being allowed to part ways and not having to stick with their partner until death is one of the great achievements of feminism.
And it has been one of the greatest mistakes humanity has ever made. If there is a good reason, sure, you cannot be expected to live with someone who has been cruel or irresponsible towards you. But no-fault divorce just because you got bored? Fuck off, you made a commitment at the time. Relationships do take work, always have and always will. Especially when there are children a no-fault divorce is pure selfishness.
With that said, we only know one side of this story, so I'm not going to argue for either side in this particular case. I'm talking in general here.
> greatest mistakes humanity has ever made
Humanity has a lot more variation than our standard modern marriage.
And how did those societies turn out? What are they like to live in? Would you live in one?
That's just not how humans work. Love can fade. People change. It's the natural course of things. Sometimes there is just no one at fault for love being lost and no way to prevent that. We just gave up the illusion that love in marriage is always forever.
love is not an emotion, love is willing sacrifice, and breaking a lifelong commitment because of faded emotions is pathetic and has nothing to do with your feminism argument
The problem here is the usage of "no-fault". It can be interpreted differently by everyone.
Does fault only include cheating? Can the fault be on the same one who initiated the divorce? What if the fault is simply that someone has changed so much that they're no longer compatible with the person they fell in love with? The fault could be on oneself without any inkling of infidelity.
Til death do us part has been ironically dead for decades now since people have been divorcing at high rates for long enough that it doesn't really mean much anymore, and that's okay. Things change.
>The problem here is the usage of "no-fault". It can be interpreted differently by everyone.
No, it's a legal term. From wikipedia:
>No-fault divorce is the dissolution of a marriage that does not require a showing of wrongdoing by either party.[1][2] Laws providing for no-fault divorce allow a family court to grant a divorce in response to a petition by either party of the marriage without requiring the petitioner to provide evidence that the defendant has committed a breach of the marital contract.
It quite literally means that people can request divorce for any reason.
Maybe in your marriage vow. Why would you assume we all use the same one?
what's the point of a marriage then ?
filing jointly
It might be painful short term, but excellent long-term. Many people already realized they gave away control over many aspects of their lives, especially the most important one, attention, to big corporations who are exploiting whatever they can ruthlessly. Many people already quit Facebook and the like; the one who remain are bound to experience quite a few surprises.
This is genuinely disturbing, and not in a vague "AI is weird" way but in a very concrete authorship-and-consent way
From a French novel: Elle n'avait qu'un seul défaut : elle était insupportable. ("She had only one flaw: she was insufferable.")
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Companies putting words in people's mouth on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Obviously I am putting words in the author's mouth here, so take with a grain of salt, but I think the reasoning is something like: such LLM-generated content disproportionately negatively affects women, and the fact that this got pushed through shows that they didn't take those consequences into account, e.g. by not testing what it would look like in situations like these.
> such LLM-generated content disproportionately negatively affects women,
Major citation needed
> Ahead of the International Women's Day, a UNESCO study revealed worrying tendencies in Large Language models (LLM) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.
https://www.unesco.org/en/articles/generative-ai-unesco-stud...
> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.
https://arxiv.org/html/2410.19775v1
> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
https://aclanthology.org/2023.acl-long.84.pdf
The question is whether these LLM summaries disproportionately "impact" women, not whether LLMs describe women as more often working in domestic roles.
Then you have to do your research on whether domestic roles have an equal status to non-domestic roles, and not rest on your preconceptions
Unfortunately I can't provide that, since I'm merely trying to come up with the reasoning of the author. If they have sources, though, that could lead to this reasoning.
> Did Instagram have their LLM analyze the post and then only post generated slob when it concluded the post came from a woman? Certainly not.
I am actually sympathetic to your confusion. Perhaps this is semantics, but I agree with the author's (and your) assessment that the human experience is being trivialized; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
When all one has is a hammer, everything looks like a nail.
I know getting folks onboard is not easy for any social media site.
But I'd pay for a social media site that respected my preferences / content choices and had everyone using real names / validated and so on.
It sounds like a relatively benign AI summary of her post.
I guess it should have been marked clearly as such.
The misleading aspect is that the AI generated content was in first person, so any reasonable reader would falsely attribute the statement to the person involved, when in fact it was concocted entirely by Meta's AI.
The dystopian part isn't that AI impersonation is possible - we've known that for years. It's that Meta proactively created an AI profile without explicit opt-in, using someone's personal photos and life events to train a simulacrum that interacts with their actual friends. This crosses a fundamental consent boundary that feels qualitatively different from "AI suggested you write this reply."
The legal framework is completely unprepared for this. Current identity theft laws require financial harm or fraud intent. But what's the legal status of an AI that impersonates you with your own data on a platform you actively use? It's not fraud in the traditional sense, but it's definitely some kind of identity violation. We need new categories: "computational identity theft," "algorithmic impersonation," something that recognizes the harm of having your digital self puppeteered by a corporate AI.
The metadata implications are worse than people realize. Even if you never post personal content, Meta can infer relationship status, location patterns, health issues, political leanings from likes, tags, and behavioral signals. An AI profile built from that could plausibly interact in your name with significant accuracy. The person being impersonated might not even know unless someone explicitly asks "wait, did you really say that?"
The immediate solution is legislation requiring explicit opt-in for any AI feature that generates content attributed to a user's identity. No defaults, no dark patterns, no "we'll enable it and let you opt out later." But the deeper problem is the power asymmetry - these companies own the platforms and the data, so they define what's acceptable. We need data portability rights and mandatory AI disclosure so users can at least migrate to platforms that don't pull this.
Becoming slopulacrum after taking out the slops via insta. Gud lorde...
who announces a divorce?
It's not uncommon. My cousin sent out a Christmas card announcing her divorce - I think it stops a lot of 1-1 conversations with people which can be quite draining when you're already pretty raw.
What is the alternative to announcing a divorce? Keeping it secret? Not using social media to communicate?
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprung it on her, nor during the process. She wrote this piece after it had been finalized.
I guess a private announcement makes more sense to people than a public announcement, unless you wanted to make a blog post about a phenomenon related to it, which she appears to be trying to do
> Not using social media to communicate?
Apparently I'm a luddite now, because yes, this. Stop using social media to communicate with people you ostensibly care about.
You are not alone, because the entire concept of announcing life events on social media has always been weird to me.
The actual post in TFA was actually much less weird than I expected from a "divorce announcement".
She was unable to "see" the divorce coming. That is one of the key sentences, hence the need to explicitly announce the intention.
People looking for emotional support from friends and family during an emotionally draining event?
On social media?
Unlikely.
Divorcees.
Is it like a gender reveal where they pop a balloon that says "I cheated!"?
did you read the article?
kind of hard to do when the site is unreachable
Site has been perfectly reachable. Regardless, commenting on pure headlines is a waste of everyone's time. Read the article, or don't comment.
thanks dad
Attention seekers. Narcissists. The whole post has so many red flags. No wonder the husband asked for a divorce.
I can't imagine everyone posting on the socials has to be such a huge attention seeker or narcissist. Are they all really that?
Well, ask yourself why they post. Also, did you read the article? Not only did this woman post it online, she cross-posted it to a litany of online services. Who, exactly, is she informing?
And now, thanks to "AI", this "hot divorcee" even made it onto HN. I am glad for her this "brutal" event transformed into something positive. /s
...on instagram
Did you read the article?
No, the site is unreachable
https://archive.is/ue4An
https://web.archive.org/web/20251222092511/https://eiratanse...
That’s a pretty horrifying story, and Meta’s crassness is kind of stunning. It sort of reminds me of the old “Clippy Helps with A Suicide Note” meme.
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
I’m thinking that Facebook could do something like slightly alter the text in your posts, to incite rage in others. They already arrange your feed to induce “engagement” (their term for rage).
For example, if you write a post about how you failed to get a job, some “extra spice” could be added, implying that you lost to an immigrant, or that you are angry at the company that turned you down, as opposed to just disappointed.
I hope putting first person words in a poster's mouth is not permitted explicitly by Meta's own terms of service.
Meta added it in a "<meta>" tag (no pun intended) intended for search engines. And some other app crawled it and displayed it in the main text. Not defending Meta, but the text is not visible in Instagram or any other Meta app.
It’s the OpenGraph description metadata (“og:description”, see https://ogp.me/ )
Many apps, like Slack and LinkedIn, use it to display a link card with a description.
og:description is exactly the meta tag to use for link descriptions in embeds. Not all meta tags are only for search engines. The app acted correctly here.
If og:description is used in the app, the app shouldn't just append it to the main text without any way to differentiate it.
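For anyone wondering how that text ends up in a preview at all, here's a minimal sketch of a link unfurler, assuming the post page exposes og:description as described above. The standard-library parsing and the "[Link summary from site]" label are illustrative choices of mine, not Meta's or any Mastodon client's actual code; the point is simply that the client decides whether a site-provided description gets labeled as such or blended into the post body.

    # Minimal sketch: fetch a page and pull OpenGraph tags out of it, the way a
    # chat app or Mastodon client builds a link preview card. Illustrative only.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class OpenGraphParser(HTMLParser):
        """Collects <meta property="og:..." content="..."> tags."""
        def __init__(self):
            super().__init__()
            self.og = {}

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            attrs = dict(attrs)
            prop = attrs.get("property", "")
            if prop.startswith("og:") and "content" in attrs:
                self.og[prop] = attrs["content"]

    def build_preview(url: str) -> str:
        html = urlopen(url).read().decode("utf-8", errors="replace")
        parser = OpenGraphParser()
        parser.feed(html)
        title = parser.og.get("og:title", url)
        description = parser.og.get("og:description", "")
        # Label the description as site-provided instead of appending it to the
        # post body, so readers do not attribute it to the post's author.
        return f"{title}\n[Link summary from site] {description}"

    print(build_preview("https://example.com/some-post"))  # hypothetical URL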
> Because what this AI-generated SEO slop formed from an extremely vulnerable and honest place shows is that women’s pain is still not taken seriously.
Incredibly sorry this happened to you. Unfortunately, Silicon Valley could not care less. Consent is not a concept they understand.
I hope you find healing and strength.
I haven't posted on IG for years, but read it sometimes, and I see that a slop description is added below some (not all) posts. I assumed it was something creators had added manually, but now you are telling me that Facebook does it automatically?
I’ve been noticing DuckDuckGo search results increasingly frequently doing this. They used to either use the <meta name=description> (which is subject to abuse by the site) or show an excerpt from the page text highlighting the keyword matches (which is often most helpful), but from time to time now I see useful meta descriptions or keyword matches sidelined in favour of what I presume is Microsoft-generated clickbaity slop of a “learn more about such-and-such” kind, occasionally irrelevant to the actual article’s text or even inconsistent with it.
Social media users are unpaid employees of tech companies. Why any rational person volunteers for this dull work is frankly confusing to me. Even more confusing is when the same people try to moralize the predictably horrifying results of their work.
This article confirms all the reasons I stay away from social media platforms. What happened in this situation is awful. It also makes clear that even where legal bounds may have been crossed, it doesn’t really matter because who has the time, energy, and financial resources to challenge them? The big platforms know this and will continue to exploit not just user-created content, but the user’s own hard-earned reputation in order to feed more drivel to the masses.
> While I am sure buried deep in some EULA there is some bullshit allowing Meta to get away with this
All that sweet, sweet innovation!
I would be so offended by an AI impersonating me that I'd delete my account immediately.
In the very first place... What's the freaking point of announcing the divorce on social media??? Why? Especially on social media run by people known for having problems with moral behavior (ask the Winklevoss brothers), where 3/4 of the platform is either scam/fraud or infomercials.
There are people who love divorces, love interacting with them, and love watching people go through them. It’s a cottage market in gossip futures. Social media is designed around gossip futures by people with questionable character. So you answered your own question ;) more shit piled onto the heap!
Desperate need for attention I'd imagine
Two possible reasons:
1. Attention.
2. You have a public image that includes you being married, and social media is one of the main channels through which you reach the people who know you. Now you get divorced and you do not want these people to have the false image of you being happily married, or to keep getting comments referencing your marriage.
Why would you not talk about your divorce? It's also a part of your life
Validation that her feelings/ her view is correct.
No one (*almost) is posting to Reddit's AITA (Am I the Asshole) expecting to hear that they are wrong.
This is also how echo chambers form.
Could be a lot easier to rip off the band-aid all at once rather than pen a hand written note to dozens or more mutuals with subtle hints of "please stop sending couples invitations to social events"
Surely, if the slop is generated by looking at the image and the text, then it seems someone could manipulate it into hallucinating all manner of wonderful things.
Well, after reading all that, I now realize why her husband divorced her, although she clearly does not
That's awful of Meta.
Another thing I've noticed recently: on YouTube my feed is suddenly full of AI fakes of well-known speakers like Sarah Paine, an eminent historian who talks about Russia and the like. There's all this slop with her face speaking, titled "Why Putin's War Was ALWAYS Inevitable - Sarah Paine", but with AI-generated words. They usually say somewhere in the small print that it's an AI fan tribute, but it's all a bit weird.
(update: they now say 'video taken down', but they were there for a while)
Seems odd this was flagged. Seems like a genuine article, and didn't appear to cause a flamewar. Possible false positive?
cc: @dang