Brilliant. They're repackaging the argument governments have long made about E2EE being dangerous to children.
Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just as we do with alcohol, driving licenses, etc.
Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally, we could provide service points to unlock devices when they turn 18, to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
It would be a nightmare to implement in a way that actually achieves the goal, but I have to say I think it’s more right than wrong. All of the data is very clear about the harms.
China has restrictions for social media and screen time for kids — how do they implement this?
> Instead children would own special devices that are locked down and tagged with a "underage" flag when interacting with online services, while adults could continue as normal.
California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.
> We should have banned children
I see you Mr Quaker Oats
I can't tell if this is sarcasm or not
TikTok has a drug-like effect on the brain. Multiple studies show a clear link between excessive TikTok engagement and increased levels of anxiety, depression, and stress. Maybe it is time we regulate it like a drug?
Hyperbole of some sort. I think it works on both the positive and negative side of the axis too.
This is how the internet is run in countries where you need ID to connect to services. It’s not at all dystopian.
I’ll have a packet of cigarettes, a fifth of vodka, and an unrestricted personal electro device.
ID please.
Seems entirely reasonable.
Possibly entirely ineffective, but then again I don’t often see children walking around with a bottle of booze.
This might be off-topic but on-topic about child safety... but I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They've already got so much data on their users and optimize their algorithms for those groups in an opaque way.
So yeah, age verification should be taken down, as well as the data mining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
> people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
Hogwash.
Where are these mythical people who aren’t concerned with both?
Another approach could be - if we can't keep something private, make it generally available - so it doesn't give any evil corporation a business edge.
Not sure if we've reached the tipping point yet, though.
I thought it was common knowledge to just set your birthdate to 1970 or something
> Age verification should be banned
Why?
> They've already got so much data on their users
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
Age verification obliterates anonymity on the internet. If everything you do _can_ be tracked by the government, it _will_ be.
That allows for more effective propaganda and electoral control, and sets fire to the very concept of a government _representing_ anyone.
> Age verification obliterates anonymity on the internet.
How so?
Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
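For what it's worth, the selective-disclosure idea behind such schemes can be sketched in a few lines. This is a toy in the spirit of SD-JWT-style verifiable credentials, not a real implementation; in particular, a real issuer would sign with an asymmetric scheme (e.g. BBS+ or Ed25519), and the HMAC below is only a stand-in for that signature:

```python
import hashlib, hmac, json, os

# Stand-in for the issuer's signing key (a real issuer would use an
# asymmetric signature so the verifier never holds the issuer's secret).
ISSUER_KEY = os.urandom(32)

def salted_digest(name, value, salt):
    # Commit to one claim: hash(salt || claim name || claim value).
    return hashlib.sha256((salt + name + json.dumps(value)).encode()).hexdigest()

def issue(claims):
    """Issuer: commit to each claim with a salted hash, sign only the digests."""
    salts = {n: os.urandom(16).hex() for n in claims}
    digests = sorted(salted_digest(n, v, salts[n]) for n, v in claims.items())
    sig = hmac.new(ISSUER_KEY, "".join(digests).encode(), "sha256").hexdigest()
    return {"digests": digests, "sig": sig}, salts

def present(claims, salts, reveal):
    """Holder: disclose only the chosen claim plus its salt."""
    return {"claim": reveal, "value": claims[reveal], "salt": salts[reveal]}

def verify(credential, disclosure):
    """Verifier: check the signature, then that the disclosed claim is
    committed to in the credential. Undisclosed claims stay hidden."""
    expected = hmac.new(ISSUER_KEY, "".join(credential["digests"]).encode(),
                        "sha256").hexdigest()
    if not hmac.compare_digest(expected, credential["sig"]):
        return False
    d = salted_digest(disclosure["claim"], disclosure["value"], disclosure["salt"])
    return d in credential["digests"]

claims = {"name": "Alice Example", "birthdate": "2001-05-14", "over_18": True}
credential, salts = issue(claims)
disclosure = present(claims, salts, "over_18")
assert verify(credential, disclosure)             # service learns only over_18
assert "birthdate" not in json.dumps(disclosure)  # and nothing else
```

The salted hashes are what keep the undisclosed claims (name, birthdate) hidden: the verifier sees only opaque digests plus the single claim the holder chose to reveal.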
Ok, and? Presenting your ID at a number of IRL establishments also heavily reduces anonymity.
It's a slippery slope.
This is the next two steps into 1984.
Once you start mandating this, there's no going back.
The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)
> It's a slippery slope.
Is it? I thought that was a logical fallacy?
> This is the next two steps into 1984.
How so?
> Once you start mandating this, there's no going back.

> The next generation will start associating wrongthink with government IDs.
Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.
The Party doesn't care about the Proles, only the members of the Outer Party.
I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).
I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
> Monitoring children's DMs is the responsibility of the parents, not megacorps
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question, and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibilities for the effects they cause to society". Instead of them taking responsibilities, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
> But what responsibilities do megacorps have? Right now, everyone seems to avoid this question
Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.
So there should be a human operator manually gatekeeping every individual request to connect with another endpoint?
It's a good thing those human operators couldn't listen in to whichever conversation they wanted.
Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.
(Reconsider my post. I'm arguing for no regulation.)
I'd say that at minimum social networks need to be required to show how their algorithm works and to give users control over their data. Users must be able to know why a piece of content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
> social networks need to be required to show how their algorithm works
Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?
Would it not be an undue burden to necessitate the release of the weights every time they change?
Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.
Wouldn't enforcing algorithmic interpretability additionally be an undue burden?
> Users must be able to know why a piece of content was served to them.
What if the authors of the code are unable to tell you why?
> But what responsibilities do megacorps have?
Fake and scam ads.
They literally profit from those ads, and when an ad distributes malware or runs a scam, they take no responsibility.
> But what responsibilities do megacorps have?
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
Megacorps should be compelled to, and rewarded for, allowing parents to monitor their children’s DMs.
Parents shouldn't give their child access to a device that allows DMs.
That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.
Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.
TikTok is a front for government surveillance, so it's not really surprising that this is their position.
DMs are akin to private conversations in real life. Thus, every DM feature should entail E2EE.
It’s ok for a platform to not feature private conversations. They should just have no DM feature at all, then; make all messages publicly visible.
Private conversations are indeed not for all ages. Parents should be able to grant access to that on an individual basis.
Why would you use TikTok for private communications anyway? It's mostly a public short video sharing platform.
It's the kids' social network, you're just old.
The way it starts is you pass videos back and forth with a friend. Then you find yourself chatting in the same app.
I'm mindful that it's less secure than other apps, but for a lot of chats it doesn't matter.
Says someone who has never sent a message to a friend over DM on TikTok.
Hopefully
Exactly.
You say that like the typical 18 year old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
>never going to be the case.
And in a perfect world they essentially shouldn’t have to, at least inside expensive walled-garden app stores.
They might understand e2ee but not care.
it's more than that.
I feel like this makes sense for a platform that targets teens. Plus, I wouldn't trust TikTok to implement E2E encryption properly—who knows what they've snuck into their client.
What kind of application is not targeted at both teens and adults?
Youtube, twitter, bluesky, whatsapp? Every app with a social aspect will be used by teens. And no, tiktok is not "only for teens" or "specially targeted at teens", nowadays everyone uses it and creates content on it.
Came here to post this.
If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.
TikTok is the platform everyone is on right now.
I think it's very safe to assume that no major US based platform has 'real' E2E encryption. They're almost certainly all a part of PRISM by now, and it'd contradict their obligations to enable government surveillance. So the only thing that's different is not lying about it. Though I expect the other platforms are, like when denying they were part of PRISM, telling half truths and just being intentionally misleading. 'We provide complete E2E encryption [using deterministically generated keys which can be recreated on demand].'
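The "deterministically generated keys" half-truth is easy to sketch. This is a hypothetical illustration, with all names invented, of how a scheme can be "end-to-end encrypted" on paper while remaining recoverable by the operator:

```python
import hashlib

# Invented name: a master seed held only by the platform operator.
SERVER_SEED = b"operator-held master secret"

def derive_message_key(user_id: str) -> bytes:
    # PBKDF2 is used here only as a deterministic KDF; any KDF keyed on a
    # server-known seed has the same escrow property.
    return hashlib.pbkdf2_hmac("sha256", user_id.encode(), SERVER_SEED, 10_000)

client_key = derive_message_key("alice")          # derived "locally" on the client
assert derive_message_key("alice") == client_key  # ...and recreated by the operator
```

The client genuinely does the encryption "end to end", but because the key is a pure function of inputs the operator knows, the operator can re-derive it on demand; the marketing claim is technically true and practically hollow.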
Signal is open source
Snowden-endorsed, last I heard? He doesn’t use email, of course.
There is no way to do E2EE on a traditional social media platform with user-generated content and comply with existing US law.
You can’t moderate an E2EE platform.
All of Meta’s major properties (Messenger, Instagram, WhatsApp) support E2EE messaging.
Pretty sure that for Meta the impossibility of moderating E2EE was the point. It’s cheaper to shrug than to pay content moderators.
Aside from the fact that you can get Metadata and that some communication frequently happens outside of E2EE - what US law do you believe mandates moderation? I'm curious.
What law do you believe supports your perspective?
Do you feel safer knowing DMs are not encrypted?
Nobody should feel safe using the TikTok client, period.
Not just the TikTok client, anything made by Oracle is risky.
Neither Instagram/Facebook's Messenger/WhatsApp.
And signal
What do you use for messaging?
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where keys have been sent ahead of time using USPS, because no one would be so rude as to read someone else's mail.
I have been using simpleX for some time now.
Do you take "yes" for an answer?
It really depends on whether you think your government is more dangerous than, say, suicide trends, grooming, scamming.
I know that question is pretty easy for US citizens to answer right now.
Fun fact - there is a big correlation between World Wars and compulsory education. Of course governments and big corporations "care" about children. Of course!
Reminder, Larry “citizens shouldn’t get any privacy” Ellison now owns tik tok. If you’re still using it or have friends and family using it you should stop immediately. It WILL eventually be used against you if this regime gets its way.
https://digitaldemocracynow.org/2025/03/22/the-troubling-imp...
As if. If people haven't stopped using TikTok given all of the other reasons for stopping, then Ellison taking over is damn sure not going to move the needle.
What were the other reasons for stopping?
Since when is E2EE controversial? Not using E2EE should be controversial.
TikTok’s stance against end-to-end encryption is unsurprising but still concerning. TikTok is a source of information on many topics, such as the genocide in Gaza, which traditional media underreport and many governments try to suppress. The network effect of big social media platforms means many people will likely talk about these topics in TikTok DMs. No matter what legal controls TikTok claims to enforce, there is no substitute for technological barriers for preventing invasions of privacy and government overreach. This is yet another example where corporations and governments sacrifice people’s autonomy and privacy in the name of security.
It's a pretty terrifying world we live in now, where an unencrypted addictive short-form video platform is considered a source of information more than news agencies or even community-managed forums.
For older generations, Facebook has the same problem. "On Facebook it said [propaganda item bla bla]" is something I hear from those generations.
"The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk."
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.
> I'm never going to China.
China will come to us.
Or should that be:
China will come to the US.
> "I'm never going to China."
Voluntarily.
Yes. China gives a shit that user rdiddly, at 36 minutes before 00:55 UTC on March 4, 2026, said that China is spying, to the point that they are going to be abducted for it.
clown emoji
It's the Max app for Americans, now with 900% more US and IL government spying.
Fascinating. What a time to be alive.
> Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritising 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite
Means they read every message
BBC calling encryption "controversial privacy tech" is deeply disappointing and dangerous.
It is controversial... amongst people who have concerns about private communications and society, from a regulatory and governance perspective.
It's uncontroversial amongst people who value their privacy.
The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.
You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.
I wondered how it could be considered 'controversial', but they do quote at least a couple groups speaking against it. The NSPCC for instance, who incidentally also warned parents about a Harry Potter video game because their children might want to learn more about the game:
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.”
Calling something controversial is a favorite propaganda technique employed by "news" outlets. It's another form of selective reporting and framing. It carries negative connotations, and has really no objective standard by which it can be wrong since you'll always find somebody against any issue.
After you notice it, you'll notice it everywhere.
The UK government seems a lot more willing to embrace the panopticon in the name of protecting people from terrorists, child sex traffickers, human rights activists, Catholics, jaywalkers, you name it.
The core tension here isn’t really about encryption itself, it’s about moderation models.
Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.
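That conflict can be shown in a few lines. This is a minimal sketch, with a toy cipher (SHA-256 in counter mode, for illustration only, not for real use) and an invented keyword check standing in for real server-side safety tooling:

```python
import hashlib, itertools

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher keystream: SHA-256 over key || counter.
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def encrypt(key: bytes, msg: bytes) -> bytes:
    # XOR the message with the keystream (decryption is the same operation).
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

def spam_scan(body: bytes) -> bool:
    # Stand-in for any content-based abuse/spam/safety tooling:
    # it only works if the server can read the message body.
    return b"cheap pills" in body

msg = b"buy cheap pills now"
assert spam_scan(msg)  # without E2EE the server stores plaintext: scan works

# With E2EE the endpoints share a key the server never sees; the server holds
# only ciphertext and metadata, and the same tooling is blind:
shared_key = b"known only to the two endpoints"
assert not spam_scan(encrypt(shared_key, msg))
```

The platform can still act on metadata (sender, recipient, timing, user reports), which is why E2EE platforms lean on report-based rather than scan-based moderation.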