This is a compliance incident, we should be issuing again shortly.
Update: Issuance is back up.
Update: Preliminary incident report:
https://bugzilla.mozilla.org/show_bug.cgi?id=2038351
can you update the status page with this information?
Thanks for the assurance, jaas! Keep up the good work
Real soon now?
> This is a compliance incident
Uh. I don't know if I like the sound of that...
"Compliance incident" is the catchall for everything from a spelling error in a CPS (Certification Practice Statement) or being one second late on a revocation, all the way up to key compromise.
It is almost always closer to the spelling-mistake end of the spectrum than the key-compromise end.
A peek at https://bugzilla.mozilla.org/buglist.cgi?product=CA%20Progra... will show that most compliance issues are, to the general public, quite mundane.
Indeed. "Compliance" can mean some internal audit/monitoring system has tripped and requires in-depth investigation and preservation of logging, or it can mean "federal law enforcement with badges are right now standing in our datacenter and/or NOC serving a court order".
At times like this it's worth remembering that message boards strongly favor whatever narrative is going to be most fun and exciting to talk about.
I heard the CEO of Let's Encrypt, Warren Buffet, accidentally started a fire while charging his e-unicycle in the data centre, and that knocked out the server that issues the certificates. They've got a backup, but it's in a safe only two people have keys to; one keyholder, Anne Hathaway, is at a parrot show in Singapore this week and her flight back is delayed due to fuel shortages. The other keyholder, Henry Kissinger, it turns out, has been dead for 3 years.
I sincerely hope it's the most mundane and least spectacular explanation possible. As I said in my point above, until the incident is further explained, "compliance" has a very wide range of possible meanings and interpretations (also depending on the background/career POV of the reader).
In that sense, prepare yourself to be bored.
Federal law enforcement in your DC isn't something you'd call a "compliance" issue; that's not what the term means. Yes, it's a derivative of the English word "comply", but this is a field of well-defined verbiage, and that ain't it. Compliance means they failed to follow (or are being questioned about following) particular practices that they have agreed to, nothing else really.
NB: "legal compliance" is another term. So is "{legal,lawful} enforcement"
That's really not good. Fortunately I'm not using any short-lived certificates like the recently announced 6-day certs, so I have some breathing room. Without further details, I'd imagine anyone with a short-lived cert is getting a bit sweaty right now.
Let's Encrypt has become one of those pieces of critical Internet infrastructure that just quietly hums away in the background; the fact that they've stopped ALL issuance is deeply concerning.
Stopping all issuance is a pretty standard response if a CA thinks what it is issuing might be non-compliant in any way. It's an action we're required to take. It's not necessarily a sign of a more dramatic failure mode or key compromise. That said, the impact is the same for as long as the downtime lasts, so it is unfortunate and we're sorry for the disruption.
I don't think the premise behind short-lived (six-day) certificates being viable is that CA issuance never goes down. Sure, the runway is shorter, but not that short. Most downtime is a few hours or less, which is not a problem for six-day certificates that should be renewed every three days.
Short-lived certificates are optional, though, so if they're not worth it to you there are longer-lifetime options.
Considering the open-source nature of Let's Encrypt, I wonder what the barriers/costs would be (theoretically) for a wealthy benefactor who wanted to duplicate its server-side infrastructure and a core level of staffing, and fund a parallel, equally trusted alternative entity with a solid governing board. Same general idea as when Acton funded the Signal Foundation.
Somewhere where none of the physical infrastructure/hosting environment overlapped with existing Let's Encrypt infrastructure, so that the failure of one entity would have zero blast radius affecting the other.
I know there's a long and complicated process to go through to become a trusted root CA and get your CA public cert auto-installed in every OS and browser trust store. Indeed, in the early days of Let's Encrypt I recall their certificates were cross-signed by older, established root CAs.
A lot of Let’s Encrypt is not the software but the auditing and processes that ensure compliance and make it legible to the required auditors.
I understand there's probably a big thorny problem of duplicating the corporate processes/policies on the human level that ensure compliance, but is the back-end software pipelining stuff to CT logs not also something that can be replicated? Or is it not part of the server-side stuff which has been open-sourced?
https://letsencrypt.org/docs/ct-logs/
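For context, the Certificate Transparency submission side is a small, standardized HTTP API (RFC 6962), so it is replicable in principle. Here's a minimal sketch of building the `add-chain` request body; the certificate bytes are placeholders, not real DER data:

```python
import base64
import json

def build_add_chain_payload(der_certs):
    # RFC 6962 section 4.1: the body of POST <log URL>/ct/v1/add-chain is
    # {"chain": [...]}, with each certificate base64-encoded, leaf first.
    # The log answers with a Signed Certificate Timestamp (SCT).
    return json.dumps(
        {"chain": [base64.b64encode(cert).decode("ascii") for cert in der_certs]}
    )

# Dummy bytes stand in for real DER-encoded leaf and intermediate certs.
payload = build_add_chain_payload([b"leaf-der", b"intermediate-der"])
```

The hard part a new CA would face isn't this plumbing; it's embedding the returned SCTs into certificates and doing so within the policies the browser root programs require.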
Google has their own free ACME endpoint: https://pki.goog/
ZeroSSL should also be drop-in.
I just find it incredible that in 30+ years the industry hasn't adapted one bit to the brittle failure modes of certificates. I did some subcontract work with Verisign to deploy their CA infrastructure back in the early oughties, and it felt like a solution was overdue way back then. I was at Google in the teensies when Gmail broke due to expired SMTP certs. WAAAY overdue by then. Here we are, a decade later, and it's still the same lol.
Other than automating renewal - which we have made huge strides on - what adaptation would you want to see?
I'd like to see better support for networks that aren't connected to the broader internet, or moving away from X.509. Note that these are contradictory. X.509 was intentionally designed to support offline verification and has a lot of elaborate ceremony to support it (like all the rest of the OSI stack). The industry just doesn't use it, so we get the worst of both worlds.
I mean, what's the alternative? I struggle to come up with a solution that doesn't boil down to the same primitive operations and trust model.
> pieces of critical Internet infrastructure that just quietly hums away in the background
And donation-supported, no less.
Wonder what that incident could even have been.
> like the recently announced 6 day certs
Just you wait for the 1 hour and 59 minutes certs! For security!
It's certainly an incident to cease issuing certificates... after doing absolutely everything, including limiting lifetimes, to encourage frequent renewal.
There is one little-discussed down side to ever shorter-lived certificates...
Let's Encrypt is not the only ACME certificate authority. ZeroSSL is another popular one. There are others.
If you're using ACME to handle certificate rotation, can't you just configure multiple providers?
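Most ACME clients can at least be pointed at an alternate CA's directory URL, though true automatic failover is client-dependent. A sketch of a failover list, assuming the CAs' publicly documented directory endpoints (verify against each CA's docs before relying on them):

```python
# Directory URLs an RFC 8555 (ACME) client can be pointed at; most clients
# expose this as a --server / directory-URL setting. Note: ZeroSSL and
# Google Trust Services require External Account Binding credentials, so a
# "fallback" CA still needs per-CA setup done in advance.
ACME_DIRECTORIES = {
    "letsencrypt": "https://acme-v02.api.letsencrypt.org/directory",
    "zerossl": "https://acme.zerossl.com/v2/DV90",
    "google": "https://dv.acme-v02.api.pki.goog/directory",
}

def failover_order(preferred):
    """Return (name, directory URL) pairs with the preferred CA first."""
    names = [preferred] + [n for n in ACME_DIRECTORIES if n != preferred]
    return [(name, ACME_DIRECTORIES[name]) for name in names]
```

A renewal wrapper would walk this list and stop at the first CA that successfully issues.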
Only if you’re reissuing right before expiration, which is a stupid thing to do. If you have a 47-day cert, best practice is to reissue on day 30, meaning LE would need to be down for more than two weeks before anything went wrong.
If this outage breaks your system, that’s entirely on you, not Let’s Encrypt.
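The renewal-timing arithmetic above can be sketched as follows, assuming the common rule of renewing once roughly two-thirds of the lifetime has elapsed (similar to certbot's default of renewing within the final third):

```python
from datetime import datetime, timedelta, timezone

def renewal_state(not_before, not_after, now, renew_fraction=2 / 3):
    # Renew once renew_fraction of the cert's lifetime has elapsed; for a
    # 47-day cert that is around day 31, leaving ~16 days of buffer before
    # expiry even if the CA is down.
    lifetime = not_after - not_before
    renew_at = not_before + lifetime * renew_fraction
    if now >= not_after:
        return "expired"
    if now >= renew_at:
        return "renew"
    return "ok"

issued = datetime(2026, 5, 1, tzinfo=timezone.utc)
expires = issued + timedelta(days=47)
print(renewal_state(issued, expires, issued + timedelta(days=10)))  # ok
print(renewal_state(issued, expires, issued + timedelta(days=35)))  # renew
```

The same fraction applied to a six-day cert gives a renewal point around day four, hence the much tighter tolerance for CA downtime discussed below.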
Short-lived = 6 days. Even if you reissue after 2 or 3 days, that's… not a lot of breathing room.
You have to opt in, and they are honest about the tradeoffs when discussing them:
> Short-lived certificates are opt-in and we have no plan to make them the default at this time. Subscribers that have fully automated their renewal process should be able to switch to short-lived certificates easily if they wish, but we understand that not everyone is in that position and generally comfortable with this significantly shorter lifetime. We hope that over time everyone moves to automated solutions and we can demonstrate that short-lived certificates work well.
https://letsencrypt.org/2026/01/15/6day-and-ip-general-avail...
That's not really an answer, especially with:
> We hope that over time everyone moves to automated solutions and we can demonstrate that short-lived certificates work well.
They're expressly trying to show that this is a viable approach. It's actually kinda good that this outage, whatever it is, is happening now, as it's giving them a chance to demonstrate (or not) that they can deliver.
> no plan to make them the default at this time
At this time! Boil the frog slowly...
3-4 days is a ton of breathing room
You're holding your 6-day cert wrong
Chill, it's 2 hours. They recommend renewing at the first third of the 160 hrs.
Thought that was the iPhone 6
Useful context: https://letsencrypt.org/2026/01/15/6day-and-ip-general-avail...
Only as long as LE isn’t down for 17 days; then we’re in big trouble.
Hopefully it's just a technical issue and not something like a key compromise. This could have disastrous effects considering how much of the web runs on LE certs these days.
Granted if it's configured properly everyone should have 30 days of leeway before having to issue new certs...
"We have been made aware of a potential incident and are shutting down all issuance" seems to lean towards the latter and not simply a technical issue :(
Josh Aas is on the thread. It's a compliance issue, they expect to be issuing shortly.
What if they get kicked out of trusted root stores because they're non-compliant?
That's why they take incidents like this seriously and stop issuance until it's fixed. They could get kicked out of trusted roots otherwise.
Change your config to ZeroSSL or another free ACME provider?
Related Cloudflare issue: https://www.cloudflarestatus.com/incidents/z3vgxxfvt3yb
They had scheduled maintenance a few hours ago, https://letsencrypt.status.io/pages/maintenance/55957a99e800...
Discord is out too right now, probably unrelated though.
Just speculating, but I don't think it's unrelated. Discord heavily utilizes Cloudflare, and Cloudflare uses Let's Encrypt for certificate issuance. If they happened to have a certificate-signing dependency in some operational rollout today, I think it could explain it. Certainly the timing is very correlated.
On my account they always serve Google-issued certificates. There is also a Let’s Encrypt certificate, but it isn’t used. I guess that’s a fail-safe.
In Cloudflare Enterprise you can pick either or leave it on auto. IIRC there's a third option, but I don't know if it's still supported (Terraform and the SDKs used to have it in the enum).
https://developers.cloudflare.com/ssl/reference/certificate-...
I guess we'll find out but it would be surprising if they use Let's Encrypt for their backend services. The front door is issued by Google Trust Services.
> Just speculating
Then why post? HN is for informed discussion, not every random thought in someone's head.
> Certainly the timing is very correlated.
I had chocolate ice cream for breakfast. Certainly the timing is very corrolated [sic].
The title is misspelled. It's “Let's Encrypt”, with an apostrophe.
Issuance was stopped almost 2 hours ago: May 8, 2026 18:37 UTC.
Hopefully just a minor misissuance incident and not something more serious.
in other news, Digicert's Secure Site Pro certificates are down to only $5,880.00 yearly for one wildcard domain!
dang I'll have to return to paid certs again?
How much of the internet is going to fail because of this?
None, unless someone is renewing their certificates only 2 hours before they expire, which is a dumb thing to do.
It's an interesting thought experiment to consider how much of 'the internet' would still find a way to communicate with each other and fix the problem if somebody waved a magic wand and all http and https servers and clients magically disappeared worldwide instantly.
For instance, some of the folks who run core BGP at medium-to-large ISPs would revert to a few legacy IRC channels and find each other to chat and figure out WTF is going on.
"The internet" would still exist; a subset of the application-layer stuff that runs on top of it wouldn't...
I bet we'd see a bunch of unexpected breakage in presumed-to-be-lower-level-than-HTTP[S] infrastructure, so that, e.g., your legacy IRC server goes down because it's running on rented hardware and the hosting provider's operations rely on some internal HTTP services.
This is extremely likely for the automated provisioning, billing, and web control-panel systems of many shared hosting, VPS, and virtual machine providers, which likely talk HTTPS internally between tooling.
In my intentionally absurd theoretical scenario, what would remain up would be the bare metal in colocation in certain service providers' environments...
Some other internet things are going on with Discord, Cloudflare, and others.
Unsure if related in any way.
Here's hoping it's not another security nightmare...