I subscribe to a handful of investment-related YouTube channels, and this pattern has been common there for years. A bot will post a comment loosely related to the video about how something worked for them. Another bot will reply asking how they did that. A third bot (not the original commenter) will reply that they worked with so-and-so or invested in such-and-such, and then there will be maybe four or five more comments responding to that. All obvious bot accounts.
It's obvious on these channels because comments there rarely get many replies otherwise (when they do, it's almost always the channel owner). It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.
Most elaborate scam (illegally run by SF entrepreneur?)
https://claimyr.com/government-services/irs/I-filed-my-2021-...
Oh I love these comment threads! I like to add another reply saying something like “oh my goodness, I used Elizabeth Ferguson for my investing too!! She went to my college, so I thought I could trust her. But then I found out she was cheating on me with my wife! We got a divorce and I lost half my assets in the separation. Elizabeth Ferguson probably is enjoying them now :(. Just one experience, but buyer beware!”
I'd be careful with that. Sounds like you could be mistaken for a bot that is part of the scheme and get your Google account banned.
Then again, you should live under the assumption that your Google account could be banned at any time with no recourse. You do have local backups of all your Google account data and don't need your Gmail account to access anything important, right?
That makes me realize that banning is a punishment only usable on people who care about their account. Scammers don’t, a new bot account is a click away. But basilikum would be sad to lose his account.
For something like YouTube there is at least a small monetary cost, since a new account has to verify a phone number.
Fun until that's a real person using a paid bot service to promote their business and you just libeled them in a perfectly preserved medium.
Dishonesty, meet dishonesty. (Legally, I think libel requires intent.)
It's been well known to happen on reddit too for many years. Whole posts and comment threads copied verbatim with new accounts. Nowadays, with AI, you can make it way more dynamic.
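The verbatim-copy variant is at least cheap to catch. A minimal sketch, assuming you can pull post bodies from something like a moderation queue (the data plumbing here is hypothetical, not Reddit's actual tooling):

```python
import hashlib
import re

seen: dict[str, str] = {}  # fingerprint -> first account seen posting it

def fingerprint(text: str) -> str:
    # Collapse whitespace and case so trivial edits don't dodge the match.
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def flag_repost(account: str, body: str) -> str | None:
    fp = fingerprint(body)
    if fp in seen and seen[fp] != account:
        return f"verbatim copy of a post first seen from {seen[fp]}"
    seen.setdefault(fp, account)
    return None
```

An LLM paraphrase sails right past a literal hash, of course, which is exactly the "way more dynamic" problem.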
AI has been awful on Reddit.
I've acquired a sense for at least some of the bots. There's a set of bots that make one high-engagement post about once a day across an implausibly large range of subreddits, with implausible regularity. I can tell most subs haven't figured this out yet, because I remove those posts and the other subs mostly don't.
There is an obvious countermeasure, which I haven't wanted to put out there, but which I've become increasingly suspicious has been figured out already anyhow: limit each account to a specific "persona" with plausible interests and posting rates.
And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.
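For concreteness, here's a minimal sketch of the regularity tell described above, assuming you have each account's post timestamps; the thresholds are invented for illustration:

```python
from statistics import mean, pstdev

def looks_scheduled(post_epochs: list[float], subreddits: set[str]) -> bool:
    # Flag accounts posting at near-constant intervals across many subs.
    ts = sorted(post_epochs)
    if len(ts) < 5 or ts[-1] == ts[0]:
        return False  # too little (or degenerate) history to judge
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # Humans post in irregular bursts; schedulers drift toward a fixed
    # cadence, so the spread of gaps relative to their mean is tiny.
    return pstdev(gaps) / mean(gaps) < 0.1 and len(subreddits) > 20
```

An account throttled to a plausible persona passes a check like this easily, which is why that's where I think the race ends.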
I know reddit is concerned about this at the corporate level, but I'm not sure they realize it's possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. It will be years before the masses realize this and stop visiting, and by the time that happens, all the social media companies will be in trouble for the same reason. You can see the leading edge here on HN, but HN is still an almost negligible fraction of the total userbase of something like Reddit today. That will change.
Considering reddit now allows you to hide your post history, I don't know if the admins consider bots to be the giant problem that they certainly are.
I assumed this was meant to make the bot postings less obvious to normal users, to buy them time to "solve the problem."
But definitely, bots on reddit seem significantly more common in the past year or two.
> It's been well known to happen on reddit too for many years
"For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.
Have you seen the same chain pattern outside finance yet? Wonder whether investment scams are the most conspicuous because the payout per convert is high or whether it's seeded the widest on YouTube specifically.
I saw something like this for a book. It was under an Instagram reel where the person was describing ways to improve your self-esteem. In the comments, someone mentioned a book that worked for them, and it had a few replies saying it worked for them too. I searched for the book: it was a very new book from an unknown author, with zero reviews anywhere.
Yes, and what they do is use real registered investment advisors' names and set up scam websites for them. That makes it look more legitimate, because if you research the person, you'll find they really are registered in official databases.
I’m putting together an AI presentation internally for my company, can anyone point to examples of this exact behavior? I’d like to use it as a reference.
I was seeing this kind of spam on forums as far back as 2004. I wonder if it was a feature in Xrumer or whatever they used to post spam back then.
If you have a forum and haven't found a thread that is just one guy arguing with himself on twelve sock accounts, then you haven't been looking, or you only have one user.
They also talk like people in a national ad.
“Wow! Seems like it’s so easy to change over with savings like that!”
The bad ones read like this. The scary part is not knowing whether there are good ones.
Generally when people start having a back and forth about a product I assume it’s astroturfing unless it makes sense in context and/or it’s just one of those brands people genuinely get excited about (they tend to be obvious ones you’ve seen a lot already).
Doesn’t mean I don’t ever get duped, but idk. You learn to spot the signs. I imagine most of us on HN catch most instances. Genuine-seeming referrals aren’t as easy to fake as one would think.
Ironically, one reply to the blog post is... spam:

> Jack Beagle: @blog the ones in your screenshot are pretty good because they are a bit more conversational. I use <product> myself because generally these types of spam messages will be trying to promote something specific but outside of the second message in your example it might have still snuck through. As the LLMs get better the spam messages will certainly get better.
The post timing is the main giveaway; surely it wouldn't be that hard to space these spam posts out. The volume of automated comments being spammed across all social platforms is not quite at a tipping point, but it has increased significantly.
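Exploiting that giveaway while it lasts is almost trivial. A minimal sketch, assuming you have the reply chain's timestamps (the threshold is made up):

```python
def burst_posted(chain_epochs: list[float], max_gap_s: float = 120.0) -> bool:
    # Flag reply chains where every reply lands within minutes of the last.
    ts = sorted(chain_epochs)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # A five-deep "conversation" between strangers that completes in
    # under ten minutes is far more likely scripted than organic.
    return len(gaps) >= 4 and all(g <= max_gap_s for g in gaps)
```

And the counter is equally trivial: schedule the replies hours apart, which is presumably why the timing tell won't last.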
Bots would win over all anti-spam, anti-slop measures. Blog posts and comments everywhere would be filled with spam and slop. That's when humanity turns its head away from the screens, back toward other humans nearby, and starts talking to each other, while the ocean of slop and spam keeps bubbling, infested with bots.
This has been a thing since blogs became widespread 25+ years ago, especially with the advent of WordPress. It was even a "commonly accepted" SEO tactic for a while.
Nice. I run a site that depends on user-submitted content, and it's really interesting to observe how some people try to get around the guardrails. Not sure if your tool does this, but I would run some additional checks on comments that contain links.
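For what it's worth, here is roughly what I mean by extra checks on link-bearing comments, a minimal sketch with a placeholder allowlist (the domains and scoring are made up):

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")
TRUSTED = {"github.com", "en.wikipedia.org"}  # placeholder allowlist

def link_risk(comment: str) -> int:
    # Count links pointing outside the allowlist; >0 means hold for review.
    risky = 0
    for url in URL_RE.findall(comment):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain not in TRUSTED:
            risky += 1
    return risky
```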
This has also been absolutely rampant on reddit in the past few months.
This is a direct result of pretty much all of the LLMs using Reddit as training data. People are selling GEO (generative engine optimization) services, with reddit spam being a big part of that.
I’m not a heavy Reddit user but I’ve noticed a sharp increase in comment spam disguised as real discussion.
I think the turning point was when they allowed accounts to hide their comment history. Before, when you could click on an account and read all of their other comments it was easy to tell when an account only existed for fake conversations about a product they were spamming.
Now the spam accounts hide their comment history, so they can do nothing but spam similar comments all over Reddit and walk the line where it's not obvious whether any single comment is spam or a one-off comment from someone trying to be helpful.
Users are using Google and other services to find their other posts and post warnings, but it takes so much more effort now.
I have noticed the same uptick in bot-like behaviour there. The part I struggle to square is why so much of it is so useless.
Maybe it's account laundering, but on any popular post you'll see that at least half of the comments are tangential at best. They're not anything a person would bother to express: replying with just skull emojis to a random news post, or saying "he really said" followed by a word-for-word recreation of a throwaway quote from the video. No one ever replies to these comments, they get maybe 2 upvotes (if that), and the platform doesn't reward them at all, yet they constantly appear, in a very artificial-looking way.
It's interesting that people are concerned about seeing ads in ChatGPT when it will happily regurgitate astroturf from Reddit right now
I agree. Anecdotally, I noticed a big uptick coinciding with the comment-hiding feature and with the Q4 2025 leap forward in LLM quality.
Just a thought, but I wonder if Reddit is hiding this information deliberately, to prevent anyone from publishing a study estimating what percentage of their traffic is driven by bots (anecdotally, it's a lot, and traffic used to be mostly organic even half a decade ago).
https://old.reddit.com/r/self/comments/1s3yscz/how_reddit_us...
There must be some element of reddit turning a blind eye to this, or trying to push it into the sales funnel for their paid marketing features.
This has been rampant on reddit for years.
Text generation is now cheap, so I expect this problem to worsen. I hate to write it, but on platforms that aspire to be a modern agora, I don't see any solution other than identity verification...
Why would identity verification solve this? The spammer can just verify himself. And if he doesn't want to, or is operating at a bigger scale than an individual, there will be services selling identity verifications on the cheap. They'll work either by paying people in a poor country to verify themselves all day or, even more cheaply, by having sketchy age-verification services on sketchy porn sites proxy or replay people's verifications to another service of your choice.
It did solve the spam/Russian shill problem on https://www.lide.cz/ . You have to verify yourself using a national ID, and you discuss under your legal name.
Not that I'm happy with it; it would be ideal to have my old internet back.
All roads lead to authoritarianism eh
I also see a ton of this here on HN as the political topics have ramped up.
Not enough people flag those comments when they align with their own bias. They're even less likely to get flagged when they're a double whammy of politics and AI. Being loosely about AI should not give a comment a free pass.
I haven't seen this. Can you give some examples?
Oh you definitely have seen it
I rarely downvote anything, but I'll unholster the downvote for obvious political spam even when it agrees with me.
If we don’t police our side nobody will.