I've been wondering about this failed Apple Intelligence project, but the more I think about it, Apple can afford to sit and wait. In 5 years we're going to have Opus 4.6-level performance on-device, and Apple is the only company that stands to benefit from it. Nobody wants to be sending EVERY request to someone else's cloud server.
Have you tried running a reasonably sized model locally? You need a minimum of 24GB of VRAM just to load one, 32GB to be safe, and that isn't even frontier; it's the bare minimum.
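The arithmetic behind those numbers is just parameter count times bytes per parameter; a quick sketch (the model sizes are illustrative, and real usage adds KV cache and activation memory on top):

```python
def weight_footprint_gb(n_params_billion: float, bits_per_param: int) -> float:
    """GB needed just to hold the weights, ignoring KV cache and activations."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# fp16 vs. quantized formats for hypothetical model sizes:
print(weight_footprint_gb(70, 16))  # 140.0 GB -- multi-GPU territory
print(weight_footprint_gb(70, 4))   # 35.0 GB  -- still beyond most consumer cards
print(weight_footprint_gb(13, 8))   # 13.0 GB  -- the ~24GB-VRAM class, with headroom
```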
A good analogy is streaming. Sure, you can store the video file locally for good quality, but it takes up space. Videos are (let's say) 2-4GB, and streaming will always be easier and better.
For models, we're looking at hundreds of GB of parameters. There's no way to squeeze that into, say, 1GB without a loss in quality.
So nope: beyond minimal classification and the like, on-device isn't happening.
--
EDIT:
> Nobody wants to be sending EVERY request to someone else's cloud server.
We already do this with streaming. You watch YouTube, which hosts videos in the "cloud". For the latest MKBHD video, I don't care about having it locally (for the most part). I just wanna watch the video and be done with it.
Same with LLMs. If LLMs are here to stay, most people will wanna use the latest/greatest models.
---
EDIT-EDIT:
If your response is "Apple will figure it out somehow": nope. Apple is sitting out the AI race, so it has no technology of its own. It has nothing beyond whatever is open source or whatever it can license from the rest. Apple isn't pushing the limits; it's watching the world move past it.
> You need minimum 24GB VRAM to load up a model. 32GB to be safe, and this isnt even frontier, but bare minimum.
Indeed.
But they said 5 years. That's certainly plausible for high-end mobile devices in Jan 2031.
I have high uncertainty about whether distillation will get Opus 4.6-level performance into that RAM envelope, but something interesting on-device, even if not that specifically, is certainly within the realm of plausibility.
I think this is very pessimistic. Yes, big models are "smarter" and have more inherent knowledge, but I'd bet you a coffee that what 99% of people want to do with Siri isn't "Write me an essay on the history of textiles" or "Vibe code me a SPA"; it's "Send Mom the pictures I took of the kids yesterday" and "Hey, play that Deadmau5 album that came out a couple years back", which is more about tool calls than about having Wikipedia-level knowledge built into the model.
> Hey, play that Deadmau5 album that came out a couple years back
It could work for Deadmau5 because it's probably popular enough to be part of the model. How about "Hey, play that $regional_artist cover of Deadmau5"? Now the model needs to know about $regional_artist, the concept of a "cover", and where those covers might live (YouTube? SoundCloud? somewhere else).
All of a sudden it breaks down. So it'll work for "turn off the porch lights", but not for "turn off the lights at the front of the house".
As long as it can make tool calls it won't "break down". I'm not sure why you think the LLM would be searching its own training data rather than calling the Spotify API (or an MCP server) to look up that specific artist and search for the song ID of the cover.
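A tool-calling flow like that can be sketched in a few lines; everything here (the `search_music` function and its return shape) is a hypothetical stand-in for a real music-service client, not any actual API:

```python
import json

def search_music(artist: str, query: str) -> dict:
    """Hypothetical stand-in for a Spotify/YouTube search client."""
    return {"track_id": f"{artist}:{query}".lower().replace(" ", "-")}

TOOLS = {"search_music": search_music}  # registry the host exposes to the model

# What the model emits for the "regional cover" request -- a structured
# call, not an answer pulled from its own training data:
model_output = json.dumps({
    "tool": "search_music",
    "args": {"artist": "some_regional_artist", "query": "Deadmau5 cover"},
})

call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["args"])
print(result["track_id"])  # some_regional_artist:deadmau5-cover
```

The model only needs to map intent to a tool name and arguments; resolving obscure artists is the service's job, which is why obscurity alone doesn't make the request break down.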
*deadmau5
Have you run models locally, especially on a phone? I have, and there are even apps like Google AI Edge Gallery that run Gemma for you. It works perfectly fine for use cases like summarizing emails and such; you don't really need the latest and greatest (i.e. biggest) models for tasks like these, in much the same way most people don't need the latest and greatest phone or laptop for their use cases.
And anyway, you already see models like Qwen 3.5 9B and 4B beating 30B and 80B-parameter models, and they can already run on phones today, especially with quantization.
Benchmarks: https://huggingface.co/Qwen/Qwen3.5-4B
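To see what quantization buys, here's a toy symmetric int8 scheme (not the scheme Qwen or any particular runtime actually uses): weights shrink 4x versus fp32, and the per-weight error is bounded by half a quantization step.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1_000).astype(np.float32)  # stand-in fp32 weights

scale = float(np.abs(w).max()) / 127.0   # one scale for the whole tensor
q = np.round(w / scale).astype(np.int8)  # stored form: 1 byte per weight
w_hat = q.astype(np.float32) * scale     # dequantized at compute time

print(w.nbytes // q.nbytes)  # 4: fp32 -> int8
print(bool(np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-7))  # True
```

Real schemes use per-group scales and smarter rounding, but the space/accuracy trade is the same idea.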
I'm going by the features Apple showed in its iPhone 16 ad: take the phone out, point it at a restaurant, and ask it to a) analyze the video/image and b) understand what's going on.
Or pull out the phone and ask "Who's the person I met on X day ..".
Sure, many local models can already do all of that today, since they have vision and tool-calling support.
>> So nope, beyond minimal classification and such, on-device isnt happening.
This is a paradox, right? Handset makers want less handset storage so they can get users to buy more of their proprietary cloud storage, while at the same time wanting them to use AI more frequently on their handsets.
It will be interesting to see which direction they decide to go. Finding a phone in the last few years with more than 256GB of storage is not only expensive AF, it's become more of a rarity than commonplace. Backtracking on this model simply to get AI models on board would be a huge paradigm shift.
If all of the storage is used up by models, users will need to buy proprietary cloud storage for their own content.
If you have good, fast storage, you don't need to keep all your params in VRAM. The big datacenter-scale providers do that for peak performance/throughput, but locally (at least for the largest models) you're better off letting them sit on storage and paging them in on demand.
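The "let params sit on storage" approach is essentially memory-mapping: the OS pages weight data in only when it's touched. A toy sketch with a fake weight shard (the file layout here is made up; real runtimes such as llama.cpp mmap their own formats):

```python
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "layer0.bin")
np.arange(1024, dtype=np.float32).tofile(path)  # fake 4KB weight shard

# mode="r" maps the file read-only; pages are faulted in lazily, so a
# 100GB file doesn't need 100GB of RAM -- only the slices you touch do.
weights = np.memmap(path, dtype=np.float32, mode="r", shape=(32, 32))
row = np.asarray(weights[5])  # reading row 5 pages in just those bytes
print(float(row[0]), float(row[-1]))  # 160.0 191.0
```

The trade-off is throughput: storage bandwidth is far below RAM bandwidth, which is why datacenters keep everything resident.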
5 years ago, the line on LLMs was "beyond minimal conversation, intelligence isn't happening".
I'm pretty sure that in five years, local LLMs will be a thing.
I think there are also laws-of-physics limits given the current architecture. It's like looking at a 10GB video file and saying: it has to compress to 500MB, right? I mean, it has to, right?
Unless we invent a completely NEW way of doing video, there's no way to get that kind of efficiency. If tomorrow we're using quantum pixels (or something), sure, 500MB is good enough, but not with what exists today.
In other words, you cannot compress a 100GB GGUF file into... 5GB.
There surely are limits, but I don't think we have a good idea of what those are, and there's nothing to indicate we're anywhere close to them. In terms of raw facts, you can look at the information content and know that you need at least that many bits to represent that knowledge in the model. Intelligence/reasoning is a lot less clear.
100GB to 5GB would be 20x. Video has seen an improvement of that magnitude since the days of MPEG-1.
It's interesting to consider that improvements in video codecs have come from both research and massively increased computing power, basically trading space for computation. LLMs are mostly constrained by memory bandwidth, so if there was some equivalent technique to trade space for computation in LLM inference, that would be a nice win.
The insistence/assumption that LLMs will consistently get better, smaller, and cheaper is so annoying. These things fundamentally require lots of data and lots of processing power. Moore's Law is dead; devices aren't getting exponentially faster anymore. RAM and SSDs are getting more expensive (thanks to this insane bubble).
> RAM and SSDs are getting more expensive (thanks to this insane bubble).
That's not a matter of Moore's Law failing, but short-term capacity constraints being hit. It's actually what you want if Moore's Law is to keep going. It's a blessing in disguise for the industry as a whole.
For vibe coding? Sure. For "Hey Siri, send Grandma an e-mail summarizing my schedule this afternoon."? No.
Streaming video is almost exclusively pull. The only data you're sending up to the server is what you're watching, when you seek, pause, etc.
Useful LLM usage involves pushing a lot of private data into them. There's a pretty big difference sending up some metadata about your viewing of an MKBHD video, and asking an LLM to read a text message talking about your STD test results to decide whether it merits a priority notification. A lot of people will not be comfortable with sending the latter off to The Cloud.
How do you do on-device inference while preserving battery life?
I think there's a lot of false assumptions in that assertion:
- that a bunch of users won't jump ship if Apple stagnates for 5 years
- that a product based on a model with Q1 2026 SoTA performance would be competitive with products using 2031's models
- that just having access to good (by 2025/2026 standards) models is the big thing Apple needs in order for Apple Intelligence to finally be useful
On that last point, I think the OS/app-level features are almost more important than the model itself. If the model can't _do_ anything, it doesn't really matter how intelligent it is. If Apple sits on their laurels for 5 years, would their OS, built-in apps, and 3rd-party apps have all the hooks needed for a useful AI product?
Assuming the rate of progress on AI stays the same:
1/ No, you don't get Opus 4.6 level on devices with 12GB of RAM; 7B quantised models just don't get that good. Still quite good, mind you, and I believe the biggest advance to come from mobile AI will be apps providing tools and the device providing a discovery service (see Android's AppFunctions, if it was ever documented well): output quality doesn't matter as much on device; really efficient, good tool calling is the game changer.
2/ Opus 4.6 is now Opus 4.6 + 5 years, and has new capabilities that make people want to keep sending everything to someone else's cloud server instead of burning their battery life.
I think the claim is that in 5 years an iPhone will have enough ultra-fast RAM to run 300B-1T models on-device.
It isn't speed you want; it's storage. A faster CPU doesn't mean you can store a TB-scale model. It needs raw storage, and storage prices are famously through the roof right now.
So unless the iPhone 20 Pro Max has 100GB of unified memory, all of this is just a pipe dream. I mean, it won't even have 32GB of unified memory.
> and Apple is the only company that stands to benefit from it.
And that is exactly why it won't happen (like that).
The rest of FAANG has invested very heavily in cloud, while Apple seems to be a laggard. GCP, AWS, and Azure are all publicly available products, and the cloud at Netflix and Meta seems very mature for a private offering.
This is not a huge disadvantage in my opinion. Let the rest of big tech fight each other to the death over cloud while Apple controls a very profitable, differentiated offering (devices + services). Apple keeps the M-series hardware out of data centers, even though it posts some very attractive performance/W and per-core numbers.
I think you're correct that it's not a disadvantage. Apple's competitors are the Android OEMs, Microsoft, and Dell. Apple Intelligence is a failure only in the sense that we hold Apple to a higher standard. Can anyone argue that Apple's AI implementation is more flawed than Microsoft's? I don't think so.
Just like Siri, it’s completely useless. I don’t need Apple Intelligence to summarize my text messages. I can skim them nearly as fast.
I do love that feature. As a parent I'm part of multiple group chats for different things, and it's nice to have a single summary instead of reading 50+ unread messages.
Being able to search photos with queries like "show me photos of me and teeray" is pretty useful.
What I really want is my phone to transcribe all of my phone calls to a Notes document. Since it isn't recording an audio conversation, I don't think the consent laws come into play.
I often use Apple Intelligence to proofread emails before I send them. It's nice that it runs on device. I don't think I ever had a use case where it would have to use their Private Cloud though.
What are these servers actually used for?
The Siri+LLM features of Apple Intelligence aren’t launched yet, and the other features like notification summaries run on-device.
Well... you can write Apple Shortcuts that send AI requests to their cloud.
The next Siri is Siri by Gemini, running on Google servers with Apple Privacy requirements. (aiui)
https://www.macrumors.com/2026/01/30/apple-explains-how-gemi...
They are supposed to run Apple Intelligence for devices too old to do it themselves.
https://security.apple.com/blog/private-cloud-compute/
I'm a complete Apple ecosystem user-- I have a Mac, an iPhone, an Apple Watch, Apple earbuds, and an Apple TV, and I also pay reasonably close attention to their announcements and developments-- and I couldn't tell you a single Apple Intelligence feature. Nor do I ever use Siri except for setting kitchen timers.
Just a total failure of execution.
What do people even expect from these intelligence services? Apple is always said to have failed, yet I've seen nothing in Windows that I'd actually want to use WRT intelligence services.
Siri being better at free form requests for actions and doing internet/knowledge searches is about all I can think of. But also, I use Kagi for that, and unless Siri has a pluggable backend for search I'm not sure being forced to use only Apple's search, if it ever exists, is a great design.
The gorgeous rainbow border is one of the Apple Intelligence features, unavailable to plain Siri :/
> Nor do I ever use Siri except for setting kitchen timers.
If it even works, it fails with "something went wrong" for me 3 out of 5 times
I was wondering the same thing. I turned notification summaries off as they were less than useful, and I don't think I've stumbled across any other Apple Intelligence features apart from the laughable Image Playground or whatever it's called.
I cringe whenever I see the Image Playground icon on my MacBook.
It somehow looks worse than most scammy image generation apps you see on half-page search ads on the App Store. I have no idea how Apple willingly released it like that.
It was updated on my iPhone to a bland, forgettable abstract icon that’s still fairly mediocre but no longer an ongoing embarrassment for their corporate brand standards.
Yeah but this is how Apple has always done infrastructure/services. Their internal software teams are a mess. They constantly reinvent the wheel poorly, and then they charge a premium for exclusive access. Is anyone surprised by this?
they should just scalp the RAM on ebay, that's what people actually _want_ to buy, not ai
Can't, the RAM is soldered to the motherboard
Did they not just see crazy sales on Mac Minis the second users figured out it meant they could give an AI access to blue-bubble text messages?
Imagine launching such a shitty product that AI servers are sitting unused in 2026.
What's insane is that the market/users don't care; they're making more than ever... It's quite sad to see that Vision Pro, Apple Intelligence, and Liquid Glass were all failures and no one cared... I hope Android makes a comeback against Apple in the US so they're forced to innovate.
I don’t see Android making big inroads until there’s more of a presence from Android manufacturers that fill Apple’s niche in smartphones and tablets.
Samsung desperately wants to be this but misses the part where iPhones don’t come with third party junkware even if they’re entry level models and don’t allow carrier junkware either. Google could be it but they’re too married to midrange hardware and underwhelming physical designs.
All it would take is for a manufacturer to commit to their whole lineup being built with reasonably capable hardware (no ancient or weak SoCs as seen in budget Android devices), to completely jettison third party junkware, and have top end flagships with hardware that actually matches that description, but none thus far have managed this.
I don't think the average consumer is thinking about junkware or physical design. It's just that most people have iPhones, especially in tech and among young adults, so more people want to be on iPhone to share messages, AirDrop, AirPods support, etc. They've created a network effect.
> I don't think the average consumer is thinking about junkware or physical design
Probably not, but a zero junkware/zero carrier meddling policy is a major contributor to the brand's premium image, which makes the whole lineup more desirable. The iPhone is an invariable, singular product no matter how it's obtained, even if it has different price points.
By contrast Samsung, etc undermine themselves by trying to squeeze out pennies anywhere they can. That's the behavior of a commodity, not a premium brand.
is that not what oneplus started as?
I haven't followed OnePlus closely but as I remember, when they had their first burst of popularity they were aiming to be a value play more than anything else, operating mostly in the midrange space.
Microsoft CoPilot?
Microsoft is doing a "good" job slapping AI onto their products. Might not be the best use but I doubt they sit idle.
Their servers aren't sitting idle. Sam needs them all.
Apple will utilize them when needed and scoop up extra capacity after the bubble bursts.
Taken another way, given Apple's enormous market reach, this could be seen as perhaps the most solid metric of actual consumer interest in AI features, hype aside.
Not sure. I'm a heavy AI user at this point, and also a heavy Apple user, and I've never once used an Apple AI feature since they released them. I don't even know what they released. It's a complete failure of execution on their part.