The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult or user-generated or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they already do. RTA headers are not perfect, nothing is nor ever will be, but there is absolutely no tracking or data leakage involved. Governments could easily hire contractors to scan sites for the lack of that header and fine non-participating sites into oblivion.
I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical, and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back into semi-private Tinc open source VPN meshes for my friends and me.
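For concreteness, a minimal sketch of the client side of that scheme, assuming the label is sent as a `Rating` response header (the label string below is the published RTA label value; a real parental-control client would fetch a page's headers, e.g. with a HEAD request, before rendering):

```python
# Sketch of a client-side parental-control check for the RTA label.
# Assumes the server sends the label in a "Rating" header, per the
# RTA site's recommendation.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict) -> bool:
    """Return True if the response headers carry the RTA label.

    `headers` is any mapping of header name -> value; the lookup is
    case-insensitive, since HTTP header names are.
    """
    for name, value in headers.items():
        if name.lower() == "rating" and RTA_LABEL in value:
            return True
    return False

# Example: a labeled response triggers parental controls, an
# unlabeled one does not.
print(is_rta_labeled({"Rating": RTA_LABEL}))          # True
print(is_rta_labeled({"Content-Type": "text/html"}))  # False
```

A client would only act on this when parental controls are enabled by the device owner, exactly as described above; nothing is sent anywhere.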
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header that parents/employers/etc. could use to block the site if they wished. People said it would never work, because adult sites had a financial incentive not to opt in, since doing so would reduce their own traffic.
What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
You’d think that one could simply block sites that don’t have the age header set on children’s computers. This may block kids from hobbyist sites that don’t bother to set kid-friendly headers, but commercial sites would surely set theirs properly. Over time, sending proper rating headers would become normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children.
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing, incomplete, and only a handful of sites voluntarily labelled themselves with it. RTA is one simple static header that any site operator could add in seconds, unless they get more complicated with it by dynamically adding it to individual videos, say on YouTube, in which case the server application would need to send that header for any video tagged as adult.
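A sketch of that dynamic case, with a hypothetical `VIDEO_TAGS` lookup standing in for whatever metadata store the site actually uses:

```python
# Sketch of per-video RTA labeling: the server attaches the label only
# to responses for videos tagged as adult. VIDEO_TAGS is a stand-in
# for a real site's database of content tags.

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

VIDEO_TAGS = {
    "vid123": {"adult": True},
    "vid456": {"adult": False},
}

def response_headers_for(video_id: str) -> dict:
    """Build response headers for a video page, adding the RTA label
    only when the video is tagged as adult."""
    headers = {"Content-Type": "text/html"}
    if VIDEO_TAGS.get(video_id, {}).get("adult"):
        headers["Rating"] = RTA_LABEL
    return headers

print("Rating" in response_headers_for("vid123"))  # True
print("Rating" in response_headers_for("vid456"))  # False
```

The static case is even simpler: one line of web-server config emitting the same header for every response.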
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything which made for a very long header.
One possible method [1], though I am sure the network and security engineers here on HN could come up with simpler ones. Just blocking domains on the popular CDNs would kill access for most people, as by default most browsers are using them for DoH DNS.
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half-arsed way. I'm in the UK, and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number, etc. (Apart from banks, of course.)
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
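As an illustration of what "designed that way from the ground up" could mean, here is a toy RSA blind-signature sketch (textbook-sized numbers, not a production scheme; real deployments would use something like Privacy Pass or BBS+ credentials). The issuer verifies your age once and signs a blinded token it never sees in the clear, so the token a site later validates can't be linked back to you:

```python
# Toy RSA blind signature -- NOT secure, purely illustrative.
# Issuer's toy RSA key: modulus n = p*q, public exponent e,
# private exponent d with (e*d) % ((p-1)*(q-1)) == 1.
p, q = 61, 53
n = p * q      # 3233
e = 17
d = 2753

def blind(token: int, r: int) -> int:
    """Holder blinds a token value before sending it to the issuer."""
    return (token * pow(r, e, n)) % n

def sign_blinded(blinded: int) -> int:
    """Issuer signs the blinded value; it never sees the raw token."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Holder strips the blinding factor, leaving a valid signature
    on the raw token."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(token: int, sig: int) -> bool:
    """Any site can check the issuer's signature on the bare token
    without contacting the issuer or learning who the holder is."""
    return pow(sig, e, n) == token % n

# Holder picks a token and a blinding factor (fixed here for
# reproducibility), gets it signed, and presents (token, sig).
token, r = 42, 7
sig = unblind(sign_blinded(blind(token, r)), r)
print(verify(token, sig))      # True: valid "over-18" credential
print(verify(token + 1, sig))  # False: forged token rejected
```

As the replies below note, the unsolved part is not the cryptography but discouraging holders from sharing valid credentials, which is where tracking tends to creep back in.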
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
The EU is. But their age verification process shows the design flaw: preserving privacy means the system can easily be circumvented with a MITM, bypassing the age verification entirely.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and there are definitely a few of them which are more technically defensible than others.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Dumb, BUT immediate: links to the sites of the right legislators!
Adam B. Schiff
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
Alex Padilla
Sorry, this legislator cannot be contacted with our tool. To message them, visit their website instead.
I've contacted my congressmen, and I would also advocate for telling/explaining this to non-technical people you know. They either won't have heard of this or won't know what's bad about it.
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the speech of the, call it one-in-a-million, (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
It's not about intelligence. Otherwise a whole lot of people over the age of majority wouldn't pass either.
There's also no old-age cutoff for when mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like, should fifteen-year-olds who love film be able to make videos about it and get lots of followers? But I couldn't be the thought police. So maybe:
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful that regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight-up bans. (Educated ourselves, or politicians legislate based on how bad the headlines are, or it was so egregious it genuinely ticked them off…)
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
On weekends they can stock shelves, deliver pizzas, deliver newspapers, wash dishes, babysit, feed animals, or do other typical jobs for children in the 12-to-16 age range.
Usually eschatological fear is the realm of governments. Modern democracy is basically built on the fear of something terrible happening: it can be communism, narcotics, the ozone hole, coronavirus, terrorists, unrecycled waste, or the greenhouse effect.
Private entities being frontrunners in AI fear either means that these companies have too much unchecked power or that they are covert instruments of governments.
Basically every article on this site has a comment complaining that the article is AI. Who knows. Maybe “complaining about AI” is the new AI way of fitting in.
This feels different to me than complaining about the font or whatever. I don’t want to read or comment on anything not written by a human. I also agree with GP here that using AI instead of your own words has bearing on the content itself, insofar as it’s a signal that the author doesn’t care enough to write it themself.
As a corollary, I also want to know if a project posted here is predominantly vibe-coded, since that to me is a signal that it may be of lower quality, have fewer edge cases worked out, and is more likely to be abandoned in the near future.
Caring enough to put in the effort of thinking and writing is not a tangential issue. Laziness is a substantive defect, and sadly, I think that kelseyfrog has clocked this one correctly. There are borderline cases, but the cadence of this tweet thread is unmistakable.
We don't have to live like this. We don't have to accept it. We don't have to upvote it even if we agree (as I do) with the explicit point. The medium is the message, and the message that this poster is putting out here is that online age verification isn't actually worth getting that worked up about.
AI-generated content being passed off as human-written is not a tangential issue. HN staff agree, because posting AI generated comments is explicitly forbidden. I suspect the only reason this isn't extended to submissions is because pretty much all articles about AI are also written by AI, and effectively forbidding positive discussion of AI is obviously against the interests of a VC firm.
HN's guidelines were written under the assumption that submitted articles about [thing] would be written by people who care about [thing] and made a good faith effort to write something interesting about [thing], so it's only fair that any comments would be expected to respect the author's effort and discuss the article in equally good faith.
This assumption completely falls apart when you add AI generated submissions into the mix. If the "author" didn't care and thus couldn't be bothered to write about [thing] themselves, choosing to instead outsource that work to an LLM while they supposedly did something they deemed more valuable with the time they would've spent writing, then why should commenters be expected to dedicate more effort into their discussion of the article than the author dedicated to writing it? It's a bit unfair towards the commenters, don't you think?
No, it's because authentic writing on HN has been drowned out in an ocean of slop, in such quantities that calling it out is becoming an exercise in futility.
It's more likely that they're just virtue signaling about {{current-controversial-thing}}, as evidenced by the fact that they often accuse content of being AI generated when it would only appear that way to the most naive readers.
It doesn't feel like virtue signaling. It feels like pointing out a contradiction in the text: I care deeply about this topic; I don't care enough to write it myself.
> Virtue signalling is a pejorative neologism for the expression of a moral viewpoint with the intent of communicating good character, frequently used to suggest hypocrisy.
What virtue am I signalling and what hypocrisy am I trying to hide?
Saw it with the UK laws. It just gets rammed through. Whether it's ignorance, malice, hidden forces, a desire for a surveillance state, or genuine concern for children doesn't matter; the forces in favour are substantially greater in number and seemingly motivated to try over and over until it sticks.
Much like Brexit or, for that matter, Trump's reelection, I just don't have much faith in the wisdom of the democratic collective consensus anymore, and I don't think it'll get any better in an AI misinformation echo chamber world. Onwards into dystopia.
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to the guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop, skip, and a jump from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite awhile ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
Is it? Digital ID is the point being made here by the x thread. It's being brought in under age verification. It's arguing that the laws being passed, and infrastructure to enforce them, to protect your kids today will be used to abuse them in the decades to come.
It's trivially easy for me to imagine covid playing out +10 years from now instead of -5, and the ~3 established identity/age verification players in that market at that point responding to exactly the same pressure that was applied to the handful of social media companies to censor people disagreeing with the administration's approach. Real people were fired from real jobs for exactly this in working memory, they did it under their real identities on social media sites. The future being driven towards now would ensure that there's no anonymous forums to avoid that risk on controversial subjects in the future.
You disagreed with the logic of mask mandates, or suggested that a lab leak was a plausible theory, and that was tied to your identity, and now it's not a ban on Instagram for a while -- you can't use the AI tools you need for your job, your email, your online banking. You are functionally unemployable and excommunicated from all internet tools that rely on those handful of services. The smartest legal minds argue that this wasn't actually a violation of your First Amendment rights; after all, those three private businesses just decided that they didn't want to do business with someone engaging in dangerous rhetoric, and the fact that the administration sent them an email letting them know isn't material to that (Murthy v. Missouri, 2024).
We're already living in the early innings of political controversies coming from the fact that the youngest politicians had social media accounts where they said dumb things as teenagers/young adults. Is the future where the dumb ideas 16 year olds post on reddit are tied to their government IDs for eternity a good one?
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to be said on the matter if that's as far as it went, but it isn't.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word ‘online’ from the above paragraph, because online is the real world. Touch grass, sure, but there’s no way online isn’t real. Are these words not real simply because I telegraphed them to you?
> For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes, it's the parents' responsibility. Do you think a middle-class single mother has the resources to keep her kids entertained and off social media the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives), but honestly not the age check in itself.
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake, you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand and towards the perceived faults in the author’s communication style. This is a rhetorical sleight of hand, and it’s highly disingenuous.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
Ironically, I think we need more and stronger local social networks that have high identity validation and are "safe" spaces for the plebs, so that the perceived "threat level" from the free internet gets lower. Basically, hide the real internet a bit behind a small rock.
It's a slippery slope, but it might be the better strategy unless some democratic societies manage to put more modern "freedom guarantees" into their constitutions.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there is. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law from a devil's advocate perspective to make it as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through our impatience and self-importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
Because it's very easy for the creeps already thinking of your children to paint those rejecting these types of laws as people who want to see children hurt.
Regardless of how stupid this argument is, rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
Because adults remain children. As in, their parents’ kids, and therefore property. [edit: I should mention, also property of the state beyond that] It’s less explicit in the US, I guess, but in some places that’s very blunt: if you don’t support your parents enough, you can be sued for abuse. And there are situations where an adult in the US has been declared too irresponsible and forced into conversion camps by their parents. It’s insane, yes, and if you’re lucky enough this might be entirely invisible to you. But if you’re gay or trans or autistic and get a bit unlucky, this can become a very harsh reality.
Protect the children refers to a type of property, not a type of human.
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
There is a sudden, concerted international push for online age verification, and we do not know where this push originates. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
If it was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse, and harm to everyone, so that there wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (i.e. pile-ons, snuff videos, the spectrum of grift, the culture of abusing the "other side").
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, it's about large media companies not wanting to take responsibility for their own shit.
Meta desperately wants kids to sign up. There are no penalties for them pushing shit on kids. If an FCC-registered corp had done half the shit Facebook did, they'd have been kicked off the air and restructured.
So frankly it's too fucking late. Meta, Google, and TikTok will still find ways to push low-quality rage bait to all of us, and divide us all for advertising revenue.
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take your ID to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various IDs, various phone apps, etc). Government-issued ID has the benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting 100% flawless performance, blocking every single minor and accepting every single adult. The question is how much risk is acceptable, and the risks posed by this system are deemed acceptable for alcohol, cigarettes, and other adult items that arguably pose a much more acute risk of serious injury or bodily harm to kids.
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
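Stripping away the zero-knowledge machinery, the destroy-option above can be sketched with a plain hash commitment. This is only a toy model of the idea, not the poster's actual design: all names are hypothetical, and a real system would need blinding so the service and ID provider can't link tokens to Alice.

```python
import hashlib
import secrets

def h(data: bytes) -> str:
    """Short helper: hex SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

# The ID provider issues Alice a token whose revocation commitment is H(k).
# The preimage k is the costly "option" Alice hands to Bob with the token.
k = secrets.token_bytes(32)
token = {"claim": "over-18", "revocation_commitment": h(k)}

# Published revocation list: preimages that have been burned.
revoked: set[str] = set()

def exercise_option(key: bytes) -> None:
    """Bob (or any holder of k) burns the token by publishing the preimage."""
    revoked.add(h(key))

def verify(tok: dict) -> bool:
    """A service accepts the token only if its commitment hasn't been burned."""
    return tok["claim"] == "over-18" and tok["revocation_commitment"] not in revoked

assert verify(token)      # honest use: Bob never exercises the option
exercise_option(k)        # a troll buyer burns it for the laughs
assert not verify(token)  # Alice must now visit the ID provider to re-key
```

The bond dynamic falls out of the last three lines: selling tokens to strangers hands each buyer a working burn key, which is exactly the disincentive the comment describes.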
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
Alternative take: the fact that Twitter / Facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation of the US electorate, ultimately impacting the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
Disagreed. I'm against invasive age verification methods, but allowing inaccurate expectations to proliferate often creates a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as it prevents betrayed expectations while still showcasing the unacceptable downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States have broad power to do what is being feared in this thread and haven't done so already; to think that they're waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think an identity layer, with the right protections for users, should be incorporated into the internet itself. It should have happened at the beginning of the net; its absence is probably a result of a lack of foresight by the creators of ARPANET.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing I believe two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide, or you dying from spousal violence, or your protracted grief and wasting away because your child accidentally killed themselves. To say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful is, frankly, messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult or user-generated or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they are already doing. RTA headers are not perfect, nothing is nor ever will be, but there is absolutely no tracking or leaking of data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine sites not participating into oblivion.
I, a small server operator and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back into semi-private Tinc open source VPN meshes for my friends and me.
[1] - https://news.ycombinator.com/item?id=46152074
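The client side of this scheme is tiny. A minimal sketch, assuming the RTA label is delivered in a `Rating` HTTP response header (RTA also supports an equivalent `<meta>` tag for page-level labeling):

```python
# The RTA (Restricted To Adults) label is a fixed, well-known string.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict) -> bool:
    """Return True if a response's headers carry the RTA label.

    Header field names are case-insensitive (RFC 9110), so we
    normalize before comparing.
    """
    for name, value in headers.items():
        if name.lower() == "rating" and RTA_LABEL in value:
            return True
    return False

# A parental-control client would run this on each response and, if the
# label is present and controls are enabled, block or filter the URL.
assert is_rta_labeled({"Rating": "RTA-5042-1996-1400-1577-RTA"})
assert not is_rta_labeled({"Content-Type": "text/html"})
```

On the server side the equivalent is a one-line static header in the web server config, which is why the comment calls it trivial to implement.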
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, which parents/employers/etc could use to block the site if they wished. People said it would never work, because adult sites had a financial incentive not to opt in and reduce their own traffic.
The porn companies already set the RTA header. It was designed by an organisation funded by the porn companies.
https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...
What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
There's a great relevant quip: "If you think that the cost of compliance is high, try noncompliance".
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.
This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
What I am suggesting could address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough "teeth" into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc... People could still bypass such restrictions with a VPN but that would not apply to most small children.
>fined
Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.
I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).
I’m here to die on this hill!
> Back in the late 90s or so, there was a proposal
This one: https://www.w3.org/PICS/
PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing, incomplete, and only a handful of sites voluntarily labelled themselves with it. RTA is one simple static header that any site operator could add in seconds, unless they get more granular with it, say by dynamically adding it to individual videos on YouTube, in which case the server application would need to send that header for any video tagged as adult.
I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything which made for a very long header.
How are they supposed to fine sites out of their jurisdiction?
One possible method [1], though I am sure the network and security engineers here on HN could come up with simpler ones. Just blocking domains on the popular CDNs would kill access for most people, as by default most browsers are using them for DoH DNS.
[1] - https://news.ycombinator.com/item?id=47950843
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
There's an angle everyone misses.
Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.
And half of it will be from adults trying to avoid privacy invasion.
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...
Not if it's done in a half-arsed way. I'm in the UK, and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number, etc. (Apart from banks, of course.)
Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
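For concreteness, the classic building block here is a Chaum-style blind signature: an issuer can attest "over 18" without being able to link the resulting token back to the signing request. The sketch below uses a toy RSA key that is far too small to be secure, and real anonymous-credential systems (BBS+, Privacy Pass, etc.) are considerably more involved; it only illustrates the unlinkability property.

```python
import hashlib
import secrets
from math import gcd

# Toy RSA key for the age-attestation issuer (demo-sized, NOT secure).
p, q = 10007, 10009
n, e = p * q, 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent via modular inverse (Python 3.8+)

def h(msg: bytes) -> int:
    """Hash a message into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# User side: create a fresh random token and blind it before sending.
token = secrets.token_bytes(16)
m = h(token)
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Issuer side: checks the user's age out-of-band (cashier-style), then
# signs the blinded value without ever seeing the token itself.
blind_sig = pow(blinded, d, n)

# User side: unblind. Since blinded^d = m^d * r (mod n), divide out r.
sig = (blind_sig * pow(r, -1, n)) % n

# Any site can verify the attestation; the issuer cannot link (token, sig)
# back to the blinded value it signed.
assert pow(sig, e, n) == m
```

Whether such a scheme survives contact with the sharing/reselling problem raised elsewhere in the thread is a separate question; the point is only that "verify age" and "reveal identity to the site" are cryptographically separable.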
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.
The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.
The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.
The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.
I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
No it really can’t. Age verification requires identification.
Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.
The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
This is something that's technologically feasible, but will never happen in practice.
The destruction of privacy is the whole point.
The EU is. But their age verification process shows the design flaw: preserving privacy means the system can easily be circumvented with a MITM that bypasses the age check.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.
It's addressing a real problem in a bad way.
I mean, it's more than that. I _want_ to protect kids' right to be part of the human connectome. The "protect the kids" (by disallowing them their freedom of thought on the internet) is just naked ageism.
So do you want 5 year olds driving on the highway and 8 year olds doing shots of tequila or are you ageist?
Or perhaps protecting kids isn’t really ageism at all.
It depends on what you're depriving them of too. Those are very extreme examples with little to no upside.
Fair point
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and there's definitely a few of them which are more technically defensible than others.
They are interested - interested specifically in opposing it. These groups don't care about age verification - it is a trojan horse for censorship.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Heroes @ EFF have our guide (USA residents):
https://www.eff.org/pages/help-us-fight-back#main-content
Of course Chuck Schumer won't let me contact him using this helpful tool.
Perhaps we NYers should organize a rally outside his office in Manhattan like we did for PIPA/SOPA?
Dumb- BUT immediate links to sites of the right legislators!
Use every means necessary. If that can be organized, do it.
man the EFF owns
I've contacted my congressmen and I would also advocate for telling/explaining this to non technical people you know. They either won't have heard of this or won't know whats bad about it.
Any tips for writing the letter, maybe even a starting point?
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.
If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the, call it one in 1 million… the speech of the (young) Greta Thunbergs, for example?
I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)
Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.
(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.
Its not about intelligence. Else a whole lot of over-age-of-majority wouldn't pass either.
Theres also no old-age cutoff, when their mental faculties significantly decline.
Yeah, the voting majority keeps 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults but with none of the benefits.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.
—
After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be thought police. So maybe-
The platform just has to be designed non-addictively.
Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight up bans. (educated ourselves, or politicians legislate based how bad headlines are, or it was so egregious it genuinely ticked them off… …)
>Underage influencers
Anyone who has gone so far as to become an influencer is already a lost cause. No law could save them.
>If you are going to implement age controls, you should implement a ban on underage influencers as well.
That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
This is almost verbatim the same argument that people make in support of allowing child labor in factories.
Children do not need, nor are they entitled to, any kind of "freedom" to work for a living.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.
On weekends they can stock shelves, deliver pizza, deliver newspapers, wash dishes, babysit, feed animals, or do other typical jobs for children in the age range of 12 to 16.
Since when did being an influencer become 'one of the few remaining methods' to make a decent income?
Usually eschatological Fear is the realm of governments. Modern democracy is basically built on the fear of some terrible happening, it can be communism, narcotics, the ozone hole, corona virus, terrorists, unrecycled waste or greenhouse effect.
Private entities being frontrunners in AI Fear either means that these companies have too much unchecked power or that they are covert instruments of governments.
I can't imagine having this much conviction and then deciding to let AI write the posts.
It's an immediate turn off. If you're this convinced of your rectitude, have a go at writing the words yourself.
Basically every article on this site has a comment complaining that the article is AI. Who knows. Maybe “complaining about AI” is the new AI way of fitting in.
I just flag all the AI complaints. Perfect example of the guideline about “don’t complain about tangential issues” or whatever the wording is.
This feels different to me than complaining about the font or whatever. I don’t want to read or comment on anything not written by a human. I also agree with GP here that using AI instead of your own words has bearing on the content itself, insofar as it’s a signal that the author doesn’t care enough to write it themself.
As a corollary, I also want to know if a project posted here is predominantly vibe-coded, since that to me is a signal that it may be of lower quality, have fewer edge cases worked out, and is more likely to be abandoned in the near future.
Caring enough to put in the effort of thinking and writing is not a tangential issue. Laziness is a substantive defect, and sadly, I think that kelseyfrog has clocked this one correctly. There are borderline cases, but the cadence of this tweet thread is unmistakeable.
We don't have to live like this. We don't have to accept it. We don't have to upvote it even if we agree (as I do) with the explicit point. The medium is the message, and the message that this poster is putting out here is that online age verification isn't actually worth getting that worked up about.
AI-generated content being passed off as human-written is not a tangential issue. HN staff agree, because posting AI generated comments is explicitly forbidden. I suspect the only reason this isn't extended to submissions is because pretty much all articles about AI are also written by AI, and effectively forbidding positive discussion of AI is obviously against the interests of a VC firm.
HN's guidelines were written under the assumption that submitted articles about [thing] would be written by people who care about [thing] and made a good faith effort to write something interesting about [thing], so it's only fair that any comments would be expected to respect the author's effort and discuss the article in equally good faith.
This assumption completely falls apart when you add AI generated submissions into the mix. If the "author" didn't care and thus couldn't be bothered to write about [thing] themselves, choosing to instead outsource that work to an LLM while they supposedly did something they deemed more valuable with the time they would've spent writing, then why should commenters be expected to dedicate more effort into their discussion of the article than the author dedicated to writing it? It's a bit unfair towards the commenters, don't you think?
> Basically every article on this site has a comment complaining that the article is AI
Just a hunch, but it may have something to do with the fact that basically every article on this site is AI.
No, it's because authentic writing on HN has been drowned out in an ocean of slop, in such quantities that calling it out is becoming an exercise in futility
If everyone is complaining about the smell of shit, maybe it's because there's shit everywhere.
It's more likely that they're just virtue signaling about {{current-controversial-thing}}, as evidenced by the fact that they often accuse content of being AI generated when it would only appear that way to the most naive readers.
It doesn't feel like virtue signaling. It feels like pointing out a contradiction in the text: I care deeply about this topic; I don't care enough to write it myself.
> Virtue signalling is a pejorative neologism for the expression of a moral viewpoint with the intent of communicating good character, frequently used to suggest hypocrisy.
What virtue am I signalling and what hypocrisy am I trying to hide?
You should add an AI clause to that license agreement in your profile.
How would it read?
I can't agree with this enough and yet I think the long term danger is masked by the current problems for the majority of voters. I'm not hopeful.
I have a fair bit of fatalism on this one.
Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.
Much like brexit or for that matter trump reelection I just don’t have much faith in wisdom of the democratic collective consensus anymore and I don’t think it’ll get any better in an AI misinformation echo chamber world. Onwards into dystopia
An exceedingly gloomy take, I know.
And the piece nobody is even considering...
Responsible parents don't have separate OS accounts for their children.
Online age verification is an example of the Motte-and-bailey fallacy (https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy, https://slatestarcodex.com/2014/11/03/all-in-all-another-bri...).
It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
> If you love your family, you must stop online age verification.
> If you want the best for your children, you must stop online age verification.
> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.
Jumped the shark on that one, and really off-color. I'm less inclined to listen to the guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought of how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,
"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.
They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."
So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.
Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.
Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
What if all services require ID?
Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.
What about news? It’s a hop, skip, and a jump from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.
Twitch or YouTube wouldn’t even wait to comply, ID verification is something that these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.
Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al, they would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.
But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
Nah that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.
The time to worry about not having a digital cage was quite awhile ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
Chrome, Android, and Gmail are optional to use.
So is social media.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.
Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
Is Google tracking which teenagers make which posts on 4chan?
Curious about via Google Chrome versus not
Is it? Digital ID is the point being made here by the x thread. It's being brought in under age verification. It's arguing that the laws being passed, and infrastructure to enforce them, to protect your kids today will be used to abuse them in the decades to come.
It's trivially easy for me to imagine covid playing out +10 years from now instead of -5, and the ~3 established identity/age verification players in that market at that point responding to exactly the same pressure that was applied to the handful of social media companies to censor people disagreeing with the administration's approach. Real people were fired from real jobs for exactly this in working memory, they did it under their real identities on social media sites. The future being driven towards now would ensure that there's no anonymous forums to avoid that risk on controversial subjects in the future.
You disagreed with the logic of mask mandates, or suggested that a lab leak was a plausible theory, that was tied to your identity, and now it's not a ban on instagram for a while -- you can't use AI tools you need for your job, your email, your online banking. You are functionally unemployable and excommunicated from all internet tools that rely on those handful of services. The smartest legal minds argue that this wasn't actually a violation of your first amendment rights, after all, those three private businesses just decided that they didn't want to do business with someone engaging in dangerous rhetoric, and the fact that the administration sent them an email letting them know isn't material to that (Murthy v. Missouri, 2024).
We're already living in the early innings of political controversies coming from the fact that the youngest politicians had social media accounts where they said dumb things as teenagers/young adults. Is the future where the dumb ideas 16 year olds post on reddit are tied to their government IDs for eternity a good one?
How is this rhetoric jumping the shark?
A lot of people dismissed RMS's "Right to Read"[1] essay long ago. All the things it was warning about have come to pass, in spades.
1: https://www.gnu.org/philosophy/right-to-read.html
Responding to tone but not to content is what a dog does.
looks like you ruffled some feathers with this one
Tone was off
Yeah, calling people "dogs" for pointing out that TFA is a hyperbolic (AI-written) screed without substance would ruffle some feathers.
Edit: yes it is hyperbolic and ridiculous to suggest people will be "enslaved" because they don't have access to the internet. Do you realize that makes everybody who grew up in the 90s or earlier a "slave"?
Nothing "hyperbolic" about the points made. If anything it's not nearly extreme enough. People have no idea how bad things really are.
>They are counting on you caring more about sounding reasonable than protecting your kids from a system designed to control them forever.
Do you actually have an argument to make?
He’s 100% correct.
For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Nothing more would need to be said on the matter if that’s as far as it went, but it isn’t.
There can be no free speech if the state can imprison you for what you say, and they know everything you say.
I dropped the word ‘online’ from the above paragraph, because online is the real world. Touch grass, sure, but there’s no way online isn’t real. Are these words not real simply because I telegraphed them to you?
That’s not a world I want to live in.
> For a start, children are their parents' responsibility, and the state should stay out of that as much as reasonably possible.
Yes
That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?
That's why any kid can go watch any movie in the cinema right?
Yes, it's the parents' responsibility. Do you think a middle-class single mother has the resources to keep her kids entertained and off social media for the whole day?
The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?
Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.
> Do you think a middle-class single mother has the resources to keep her kids entertained and off social media for the whole day?
Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
Did social media exist when you grew up?
Xanga and MySpace are what my friends had; yes
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
> how unreasonable he sounds
It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.
It's important that you engage with that. Denial is not something to brag about.
Maybe you're not the target, then.
I haven't heard too many people say these extreme-sounding, yet at least arguably true points out loud.
Someone should be saying them, and the fact that it's not your particular cup of tea may not be the biggest issue here.
I’ve been noticing a trend among a lot of HN members where instead of contending with the arguments made in an article, they focus on the “off putting rhetoric” used by the author.
Make no mistake: you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand and towards the perceived faults in the author’s communication style. This is rhetorical sleight of hand, and it’s highly disingenuous.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.
If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
Ironically, I think we need more and stronger local social networks that have high identity validation and are "safe" spaces for the plebs, so that the perceived "threat level" from the free internet gets lower. Basically, hide the real internet a bit behind a small rock. It's a slippery slope, but it might be the better strategy unless some democratic societies manage to put more modern "freedom guarantees" into their constitutions.
So many pieces of law are flawed today, and the reason why should be concerning to all.
I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there is. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make it as effective as reasonably possible (not perfect!) while limiting overreach.
The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through our impatience and self-importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.
There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents, instead of causing massive negative externalities on everyone else.
Yeah, yeah, but the children...
Ok, maybe that’s a silly thought, but… couldn’t this be provided by Apple/Google anonymously?
When you set up kids devices in your family they ask you to provide the birthday anyway.
I’m keen to see the arguments against this.
Further empowering and depending on either of those companies as a middleman in our lives should make us nauseous.
Just a reminder that the YC funds many of the companies pushing these laws and building the surveillance state.
Why is it always “think of the children” used to abrogate the rights of adults?
Because it's very easy for the creeps already thinking of your children to paint those rejecting this type of law as people who want to see children hurt.
Regardless of how stupid this argument is, the rags will always pounce on it.
This is just a dirty trick of the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.
It's the same with tough on crime. "What, you want criminals to keep getting away with it?!"
Because adults remain children. As in, their parents' kids, and therefore property. [edit: I should mention, also property of the state beyond that] It's less explicit in the US, I guess, but in some places it's very blunt - if you don't support your parents enough, you can be sued for abuse. And there are situations where an adult in the US has been declared too irresponsible and forced into conversion camps by their parents. It's insane, yes, and if you're lucky enough this might be entirely invisible to you. But if you're gray or trans or autistic and get a bit unlucky, this can become a very harsh reality.
"Protect the children" refers to a type of property, not a type of human.
"But age verification requires identity verification. Identity verification requires digital IDs."
Um, no? iOS is doing age verification just by your credit card. I never saw people all that upset about giving their credit card info to their phone wallet app or even to a bunch of websites.
Are you going to give your cc number to every website in the world? Also, is that really an ID?
Agree
There is a sudden concerted international push for online age verification, and we do not know where this push originates from. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. https://github.com/upper-up/meta-lobbying-and-other-findings...
Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
If this was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse, and harm to everyone, so that there wouldn't have been a need to bring in age verification.
The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile ons, snuff videos, the spectrum of grift, culture of abusing the "other side")
They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.
This isn't really about freedom of speech, it's about large media companies not wanting to take responsibility for their own shit.
Meta desperately wants kids to sign up. There are no penalties for pushing shit on them. If an FCC-registered corp had done half the shit Facebook did, they'd have been kicked off the air and restructured.
So frankly it's too fucking late. Meta, Google, and TikTok will still find ways to push low-quality rage bait to all of us, and divide us all for advertising revenue.
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.
The Cashier Standard – Age Verification Without Surveillance
https://news.ycombinator.com/item?id=47809795
https://claude.ai/public/artifacts/7fe74381-a683-4f49-9c2b-1...
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take your ID to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
More and more places I go in other states besides Utah, try to scan IDs when purchasing alcohol.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
How does this prevent a second market for one time codes? I as an adult can just get a code and sell it someone else.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.
Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
This type of system is a horrible idea for the following reasons:
1) the cards can just be re-sold which creates a black market and defeats the "cashier physically saw the person buying the card" angle
2) it nickel-and-dimes people for simply browsing the internet (verification-fee dystopia, anyone?)
3) related to #2, it creates winners in the private sector since presumably you need central authorities handing out these codes
I abhor the idea of digital ID verification, but if we're going to do it, let's not create a web of new problems while we're at it.
Is it even theoretically possible to have bearer anonymity and no reselling option at the same time?
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.
I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.
While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be some trolls out there who will do it just for the laughs.
This is far from a perfect system since a dishonest site could also make use of the option. But it theoretically works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
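The bond mechanism described above can be sketched as a toy model. This is purely illustrative (all class and function names are invented for this sketch, and the hashing here stands in for real zero-knowledge machinery, which it is not): the key idea is that whoever holds a token also holds a "burn" capability, so a reseller necessarily hands the buyer the power to force her back to in-person re-verification.

```python
import secrets
import hashlib

# Toy model of the bond idea (hypothetical names, NOT real cryptography):
# the ID provider issues an age token together with a burn key. Handing
# the token to a buyer also hands over the burn key, so a troll buyer
# can invalidate the token, forcing the seller to re-verify in person.
# A real design would use zero-knowledge proofs so neither the site nor
# the provider can link the token or the burn back to an identity.

class IDProvider:
    def __init__(self):
        self.valid = {}  # token -> hash of its burn key

    def issue_token(self):
        token = secrets.token_hex(16)
        burn_key = secrets.token_hex(16)
        self.valid[token] = hashlib.sha256(burn_key.encode()).hexdigest()
        return token, burn_key  # holder keeps both; selling leaks both

    def is_valid(self, token):
        return token in self.valid

    def burn(self, token, burn_key):
        # Anyone holding the burn key can invalidate the token.
        if self.valid.get(token) == hashlib.sha256(burn_key.encode()).hexdigest():
            del self.valid[token]
            return True
        return False

provider = IDProvider()
token, burn_key = provider.issue_token()
assert provider.is_valid(token)

# Alice sells (token, burn_key) to Bob; Bob the troll burns it:
assert provider.burn(token, burn_key)
assert not provider.is_valid(token)  # Alice must now re-verify in person
```

The deterrent is economic: honest users never exercise the burn, but a seller can never know whether a given buyer will.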
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.
Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.
Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
'consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.'
I don't think it applies; the difference is that codes are digital and can be sold over the internet, anonymously, in a scalable manner.
I still like this solution because all the solutions I've seen have flaws, and this one being so easy to explain makes it great to campaign for.
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that turns it into a physical problem. I'm merely pointing out that's what you're doing and the issues with it.
Second, it doesn't matter what it costs, it's inconvenient and I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.
Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.
EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
We now know all the arguments. No more need to persuade anyone.
People will show what they are made of.
An attestation-like system to detect humanity at time of posting is absolutely necessary for useful online spaces in the era of AI slop.
The writing style of the author is very annoying.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
It could be done with anonymous credentials though. No tracing to who the human is.
Anonymous in terms of it not being possible to derive the real world identity of the human from the value, sure. Anonymous in terms of providing no durable way to ban that human from the platform? No.
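The middle ground between those two senses of "anonymous" is a pairwise pseudonym: the credential holder derives a stable per-platform identifier, so a platform can durably ban it without learning the real identity behind it or linking it across platforms. A minimal sketch, assuming a long-term secret held only by the user's credential wallet (the function name and HMAC construction are illustrative stand-ins for the zero-knowledge proofs real anonymous-credential systems use):

```python
import hashlib
import hmac

def platform_pseudonym(user_secret: bytes, platform_id: str) -> str:
    # Stable for a given (secret, platform) pair, so the platform can
    # ban it; unlinkable across platforms because each platform sees a
    # different HMAC output of the same secret.
    return hmac.new(user_secret, platform_id.encode(), hashlib.sha256).hexdigest()

alice = b"long-term secret held only by Alice's credential wallet"

p1 = platform_pseudonym(alice, "example-forum.net")
p2 = platform_pseudonym(alice, "example-video.site")

assert p1 == platform_pseudonym(alice, "example-forum.net")  # stable: bannable
assert p1 != p2  # unlinkable across platforms
```

The open problem this sketch ignores is making a ban stick to the human rather than the secret: nothing here stops a banned user from obtaining a fresh credential.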
Very unpopular opinion here on HN: one can't stop it without direct physical action against those who push it.
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.
It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
> who cares this much about the internet?
The Internet pretty much runs our lives now, so: I do.
Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.
No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
Alternative take: The fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation of the US electorate, ultimately impacting the presidential election.
This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
You'll need to explain how age verification fixes that.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
That's not what they said.
Playing devil's advocate outside of debate club only serves to promote the devil's point of view.
State your well reasoned opinion where you have considered the facts. Or just say you are in support of this openly.
Disagreed. I'm against invasive age verification methods, but allowing inaccurate expectations to proliferate often creates a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as that prevents betrayed expectations while still showcasing the unacceptable downsides.
I'm firmly against the idea of Internet arguments presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.
All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea. While at the same time being able to shut down debate, by pretending they don't actually think that.
Anybody can say something like "but what about the good side of the African slave trade" but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. Using the devil's advocate technique is an extremely useful way to argue in bad faith, anonymously on the Internet.
Critique of the author's style is fine. An opposing view should honestly be presented as such.
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play and people can be identified in endless ways.
States have broad power to do what is being feared in the thread and haven’t already and to think that they’re waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.
Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.
Instead of fear mongering, finding a middle ground, like governments adding some rules and protections on how this information or system is used is probably a better response.
I might be in the minority, but I think incorporating an identity layer into the internet itself should happen, with the right protections for users; it should have happened at the beginning of the net, and its absence is probably a result of a lack of foresight by the creators of ARPANET.
What I'm hearing you say:
> Our freedom is already being eroded, saying that it is being eroded more is just fear mongering.
> They want to hurt you, instead of fear mongering, find a middle ground where they're hurting you differently.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
This is simply false -- the literature is full of discussion about the health effects of social media.
More generally you're committing, I believe, two separate fallacies of ambiguity: one in going from the institution of social media to its reification in the form of specific websites, and then a second when you go from those specific websites to all websites in general. It's like saying "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK, but you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as dying by suicide, dying from spousal violence, or protracted grief and wasting away because your child accidentally killed themselves. To say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful is, frankly, messed up.
[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/ - the content you consume can promote healthy or unhealthy behaviors
[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) https://www.jmir.org/2021/1/E17187/ is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck
[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) https://link.springer.com/article/10.1007/s10639-024-12811-y was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - https://www.researchgate.net/profile/Sharique-Ahmad-2/public...) for more on the actual literature that has probed those questions
[4]: The APA has a whole "Health advisory on social media use in adolesence" https://www.apa.org/topics/social-media-internet/health-advi... which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
You posted a giant, AI generated block of junk science.