Every single Ivanti product (including their SSL-VPN) should be considered a critical threat. The fact that this company is allowed to continue to sell their malware dressed-up as "security solutions" is a disaster. How they haven't been sued into bankruptcy is something I'll never understand.
The purpose of cybersecurity products and companies is not to sell security. It's to sell the illusion of security to (often incompetent) execs - which is perfectly fine because the market doesn't actually punish security breaches so an illusion is all that's needed. It is an insanely lucrative industry selling luxury-grade snake oil.
Actual cybersecurity isn't something you can just buy off-the-shelf. It requires skill, and it requires getting every single person in the org to give a shit about it - which is already hard to achieve, and even more so when you've spent years paying them as little as you can get away with.
Actually there is a significant push to more effective products coming from the reinsurance companies that underwrite cyber risks. Most of them come with a checklist of things you need to have before they sign you at any reasonable price. The more we get government regulation for fines in cases of breaches etc. the more this trend will accelerate.
The thing is that real security isn't something that a checklist can guarantee. You have to build it into the product architecture and mindset of every engineer that works on the project. At every single stage, you have to be thinking "How do I minimize this attack surface? What inputs might come in that I don't expect? What are the ways that this code might be exploited that I haven't thought about? What privileges does it have that it doesn't need?"
I can almost guarantee you that your ordinary feature developer working on a deadline is not thinking about that. They're thinking about how they can ship on time with the features that the salesguy has promised the client. Inverting that - and thinking about what "features" you're shipping that you haven't promised the client - costs a lot of money that isn't necessary for making the sale.
So when the reinsurance company mandates a checklist, they get a checklist, with all the boxes dutifully checked off. Any suitably diligent attacker will still be able to get in, but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down. The ecosystem settles into an equilibrium of parasites (hackers, who have silently pwned a wide variety of computer systems and can use that to setup systems for their advantage) and blowhards (executives who claim their software has security guarantees that it doesn't really).
> but now there's a very strong incentive to not report data breaches and have your insurance premiums go up or government regulation come down
I would argue the opposite is true. Insurance doesn't pay out if you don't self-report in time. Big data breaches usually get discovered when the hacker tries to peddle the data on a darknet marketplace, so not reporting is gambling that this won't happen.
Curious how the compromised company can report if the compromise has not been detected
There need to be much more powerful automated tools. And they need to meet critical systems where they are.
Not very long ago actual security existed basically nowhere (except air-gapping, most of the time ;)). And today it still mostly doesn't because we can't properly isolate software and system resources (and we're very far away from routinely proving actual security). Mobile is much better by default, but limited in other ways.
Heck, I could be infected with something nasty and never know about it: the surface to surveil is far too large and constantly changing. I gave up configuring SELinux years ago because it was too time-consuming.
I'll admit that much has changed since then and I want to give it a go again, maybe with a simpler solution to start with (e.g. never grant full filesystem access and network for anything).
We must gain sufficiently powerful (and comfortable...) tools for this. The script in question should never have had the kind of access it did.
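A "simpler solution to start with" along those lines is per-service sandboxing instead of a full SELinux policy. As a hypothetical sketch (assuming systemd; the unit name, script path, and writable directory are made up for illustration), denying filesystem and network access by default looks like:

```ini
# /etc/systemd/system/some-script.service  (hypothetical unit)
[Service]
ExecStart=/opt/scripts/some-script.sh
# Filesystem deny-by-default: / mounted read-only, /home hidden, private /tmp
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
# Re-open only the one directory the script legitimately writes to
ReadWritePaths=/var/lib/some-script
# No network access at all
PrivateNetwork=yes
NoNewPrivileges=yes
```

With something like this, a script compromised the way TFA describes could neither phone home nor touch anything outside its own state directory.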
You’re making many assumptions which fit your worldview.
I can assure you that insurers don’t work like that.
If underwriting was as sloppy as you think it is insurance as a business model wouldn’t work.
Err, cybersecurity insurance as a business model has not worked. I have seen analyst reports showing that there have been multiple large claims that are each individually larger than all premiums ever collected industry wide. Those same reports indicated that all the large cybersecurity insurance vendors were basically no longer issuing policies with significant coverage, capping out at the few million dollar range. Cybersecurity insurance is picking up pennies in front of a steamroller; you wonder why no one else is picking up this free money on the ground until you get crushed.
Note, that is not to say that cybersecurity insurance is fundamentally impossible, just that the current cost structure and risk-mitigation structure is untenable and should not be pointed to as evidence of function.
You are asserting that security has to be hand-crafted. That is a very strong claim, if you think about it.
Is it not possible to have secure software components that only work when assembled in secure ways? Why not?
Conversely, what security claims about a component can one rely upon, without verifying it oneself?
How would a non-professional verify claims of security professionals, who have a strong interest in people depending upon their work and not challenging its utility?
Not the person you are responding to, but: I would agree that at the stage of full maturity of cybersecurity tooling and corporate deployment, configuration would be canonical and painless, and robust and independent verification of security would be possible by less-than-expert auditors. At such a stage of maturity, checklist-style approaches make perfect sense.
I do not think we're at that stage of maturity. I think it would be hubris to imitate the practices of that stage of maturity, enshrining those practices in the eyes of insurance underwriters.
Holy, those checklists are the bane of my existence. For example, demanding 2FA for email, which is impossible if you self-host, unless you force everyone to use RoundCube - but then you have to answer to the CEO why he can't get email on his iPhone in the Mail app.
Or just loads of other stuff that really only applies to Fortune-500-size companies. My small startups certainly don't have a network engineer on staff who has created a network topology graph and various policies pertaining to it, etc. The list goes on - I could name hundreds of absurd requirements these insurance companies impose that don't actually add any security to the organization and absolutely do not apply to small-scale shops.
I'm mostly with you (see my other comment) but MFA on email really is table stakes and your CEO will be the first to be phished without it.
Why is 2FA impossible if you self host?
Those checklists are frequently answered like this:
"Hey, it says we need to do mobile management and can't just let people manage their own phones. Looks like we'll buy Ivanti mobile manager." Same conversation I've seen play out with generally secure routers being replaced with Fortigates that have major vulnerabilities every week, because the checklist says you must be doing SSL interception.
It's also selling box checks for various certifications.
If CrowdStrike is any indicator, expect Ivanti stock to go up now. Seems to be the MO for security companies: fuck up, get paid.
There is no bad publicity? I take it few had heard of them before, so this is free marketing putting the name in public. Or there is some broken LLM-based sentiment-analysis bot out there that automatically buys companies in the news...
> How they haven't been sued into bankruptcy is something I'll never understand.
Isn't most off-the-shelf software effectively always supplied without any kind of warranty? What grounds would the lawsuit have?
Suing for negligence and the like is what happens to car companies when it is found out they've built something highly unsafe or dangerously broken. I don't see the difference.
In most cases, you can't evade liability for negligence that results in personal injury. You can usually disclaim away liability for other types of damage caused by negligence.
Well, next week there will be a similar vulnerability Fortinet and everyone will momentarily forget about Ivanti again :-)
Yes. These companies should be shut down in the name of national security, seriously.
>We are aware of a very limited number of customers whose solution has been exploited at the time of disclosure.
“We are aware” and “very limited” are likely (in our opinion, this is probably not fact, etc, etc) to be doing a significant amount of lifting.
For avoidance of doubt, the following versions of Ivanti EPMM are patched:
None
----
Ah, this company is a security joke as most software security companies are.
It seems you forgot to note this comment is a quote from [1].
1. https://labs.watchtowr.com/someone-knows-bash-far-too-well-a...
"We are aware" can mean "we are taking this very seriously and have seen very little so far" or it can mean "after covering our eyes and plugging our ears we are seeing and hearing very little of this problem".
And "a very limited number" may mean "though we pretend to be a big company, we have a limited number of customers and while they all pay licence fees, most are not actually using the product in production."
Ivanti isn't exactly a small company. Its products are used in a fair number of the F100s out there, so any risk on their part can have an outsized influence.
That's why you hire a CSO: Chief Scapegoat Officer.
You pay them a million per year, and fire them when a breach happens.
Way cheaper than improving security.
If you're aware of the sheer number of exploits that can work around or without authentication against anything Ivanti, it has to be the latter.
Related: Someone Knows Bash Far Too Well, And We Love It (Ivanti EPMM Pre-Auth RCEs CVE-2026-1281 & CVE-2026-1340) https://labs.watchtowr.com/someone-knows-bash-far-too-well-a...
I think there is an easier substitution attack, since there is shell expansion occurring. I will toy with it later today.
The array indexing thing is a special case in [[...]] which is otherwise more-or-less secure (no expansion occurs under typical unquoted variable access). https://news.ycombinator.com/item?id=46631811
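The array-indexing special case can be shown in a few lines. This is a generic bash sketch (not Ivanti's actual code; the variable names and marker path are made up): an unquoted variable inside `[[ ... ]]` stays inert under string comparison, but a numeric operator like `-eq` evaluates its operand as arithmetic, and an array subscript in that context undergoes command substitution.

```shell
#!/usr/bin/env bash
# Demo of the [[ ... ]] array-subscript pitfall. All names are made up.

marker=$(mktemp -u)   # path the "attacker" payload will try to create

# 1) String comparison: the unquoted variable is expanded exactly once;
#    the $(...) inside the value is NOT re-expanded or word-split.
payload='$(touch /tmp/should_not_happen)'
if [[ $payload == '$(touch'* ]]; then
    echo "string compare: payload stays inert"
fi

# 2) Numeric comparison: -eq evaluates the operand as arithmetic, and the
#    array subscript inside it DOES undergo command substitution.
payload="x[\$(touch $marker; echo 0)]"
if [[ $payload -eq 0 ]]; then
    echo "numeric compare: subscript was evaluated"
fi

[[ -f $marker ]] && echo "side effect fired: marker file was created"
```

Run this under bash specifically (sh/dash lack `[[ ]]`). The takeaway matches the parent comment: `[[ $var == pattern ]]` is more-or-less safe unquoted, while anything that lands attacker-controlled text in an arithmetic context is an injection point.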
Can't help but notice the weird choice of illustration in TFA.
Ivanti is a US company. But if you have never heard of them, the dragon-resembling creature in the illustration (representing the dormant backdoor?) makes it look like the incident is somehow related to China.
Ivanti may be a US company but the webshells very likely aren’t.
Anyway, the image is just the end result of plugging the title into nano banana. You ought to address your complaints to Google :)
I didn't see this exploit showing up on Hacker News, so here it is:
https://hub.ivanti.com/s/article/Security-Advisory-Ivanti-En...
Ivanti doesn't explain how this happened or what mistake led to this exploit being created.
There is some dark amusement about an MDM and general enterprise management and security systems being used as the attack vector. Ivanti in particular has proven itself to be swiss cheese as of late, and would be bankrupt if people cared about security rather than it being a compliance/insurance checkbox that truly _nobody_ cares about in practice.
Semi-related: with the recent much-touted cybersecurity improvements of AI models (as well as the general recent increase in tensions and conflicts worldwide), I wonder just how much the pace of attacks will increase, and whether it'll prove to be a benefit or a disadvantage over time. Government-sponsored teams were already combing through every random weekend project and library that somehow ended up in node or became moderately popular, but soon any Tom, Dick, and Harry will be able to do it at scale for a few bucks. On the other hand, what's being exploited tends to get patched in time - but this can take quite a while, especially when the target is some random side project on GitHub last updated 4 years ago.
My gut feeling is that there will be a lot more exploitation everywhere, and not much upside for the end consumer (who didn’t care about state level actors anyway). Probably a good idea to firewall aggressively and minimize the surface area that can be attacked in the first place. The era of running any random vscode extension and trust-me-bro chrome extension is likely at an end. I’m also looking forward to being pwned by wifi enabled will-never-be-updated smart appliances that seem to multiply by the year.
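"Firewall aggressively and minimize the surface area" can be made concrete with a default-deny ruleset. A hypothetical nftables sketch (the single allowed port is just an example; list only what you actually run):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        # the only intentionally exposed service; everything else is dropped
        tcp dport 22 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```

The point of the `policy drop` default is that a will-never-be-updated appliance joining the network gains no inbound reachability unless you explicitly grant it.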
thank god they're dormant eh
Why the fuck do people still use Ivanti, and while we're at it, Cisco gear? How many backdoors and vulnerabilities can these two companies produce until they get put out of business?
If you ask me... both these companies should be treated similarly to misbehaving banks: banned from acquiring new customers, an external overseer installed, and only when the products do not pose a threat to the general public any more, they can acquire new customers again.