I think it adds value. Having a conversation with/around a book/document with an AI is a good use case, and having that feature as a not forced option in a book management solution a good match.
It is not something that runs regardless of whether we configure or activate it; it only does anything if we opt in. May it broaden AI use among people who find it useful? Yes. Would that end up creating a dependency on a particular provider? That depends on how we use it. A lot of those decisions were already made in the past for most everything else, like using search engines or a narrow/built-in set of browsers or desktop/mobile OSs. If using AIs is a concern, then the ship sailed long ago for many bigger things already.
You are not "having a conversation". Stop anthropomorphizing. You are interacting with a machine which has its singular inhuman workings, developed and kept on a leash by a megacorporation.
Will it report me if I try to discuss "The Anarchist Cookbook" with it? Will it try to convince me "The Protocols of the Elders of Zion" is real? Will it encourage me to follow the example of the main character in "Lolita"? Will it cast in a bad light any gay or transsexual character because the megacorp behind it is forced to toe the anti-woke line of the US government in order to keep its lucrative military and anti-immigration contracts?
I'm interacting with a language model, using language and normal phrases. That is basically a conversation from my point of view, as it is mostly indistinguishable from saying the same things to, and getting similar answers from, a real person who had read that book. It doesn't matter what is on the other side, because we are talking about the exchanged text, not about the participants or whatever they might have in their minds or chips.
If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation? How could it “convince” you of anything or cast something in a bad light without conversing?
Your point about censorship, however, I fully agree with.
> If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation?
Humans are more than biased word predictors.
That has nothing to do with it: the same person who said to stop anthropomorphizing LLMs then proceeded to anthropomorphize an LLM.
The addition doesn't really bother me because Calibre is already full of features that seem utterly useless, so I trust its author to add new stuff without ruining the parts that are useful to me. Still, does anyone actually use "ChatGPT bolted onto the ebook reader" type features for anything besides cheating on school assignments? Lack of web search tools makes them suboptimal for asking clarifying questions or getting recommendations. Makes some sense on a Kindle where you can't exactly alt-tab to ask ChatGPT directly, not so much in a desktop application.
Not to say that there's no use case (I'd be interested to try a LLM-aided notetaking tool), just that adding a chat box is hardly a feature.
It's useful for answering questions about back references, e.g. "who was this character again?" when you're reading a novel with too many of them.
Oh true, that actually sounds useful. I often just avoid that kind of novel because I'm terrible with names.
I struggle to understand the pushback against AI features. As long as the feature isn't intrusive, it seems like a minor addition, and may even be useful to some people. LLMs are here to stay, there is no denying that at this point.
I guess it depends on your definition of "intrusive".
I have no interest in any of the AI features that have been added to the UIs of Meta products (WhatsApp and Messenger), yet I still see prompts for them and modified UIs trying to get me to engage with Meta AI.
Same goes with Gemini poking its head into various spots in the UIs of the Google products I use.
There are now UI spots I can accidentally tap/click and get dropped into a chat with an AI in various things I use on a daily basis.
There are also more "calls to action" for AI features, more "hey do you wanna try AI here?" prompts, etc.
It's not just the addition of AI features, it's all the modern, transparent desperation-for-metrics-to-go-up UX bits that come with it.
And yes, some of these things were around before this wave of AI launches, but a- that doesn't make it better, and b- all the AI features are seemingly the same across apps, so now we have bunches of apps all pushing the same "feature" at us.
I agree with you that the push towards them is annoying. (Google's "Your phone has new exciting features.")
In this case, Calibre does not seem to introduce any such annoyances (probably because it is FOSS, so there is no pressure for adoption), but people are upset anyway.
There are many features I don't use in various software, but it never made me complain that a new icon/menu entry appeared.
I think there are "classes" of features people have disliked. Eg: every social media app added "stories" at some point, using up screen real estate. Same goes with "shorts/reels/etc".
It's one thing when a feature gets added to an app.
It's another thing when it happens in a context where every app is doing it (or something similar), and you see it in every facet of your tech life.
WhatsApp has now told me twice about Lisa Blackpink. I wanted to write to my friend Lisa, and I talk to her on Instagram and don't have her on WhatsApp. So searching for her on WhatsApp gives me two unrelated contacts, and then Meta Ducking AI suggestions, of which the top one is Lisa Blackpink. Then, further down the screen (hidden by the keyboard) I can see chats where I've mentioned her to mutual friends, but fucking nooo, it's more important that Fuckerberg shoves AI down our throats.
WhatsApp should release their most searched terms on AI, I bet it would correlate with most common names among WhatsApp users...
My problem is that all AI features are currently wildly underpriced by tech giants who are providing subsidies in the hopes of us becoming reliant upon them. Not to mention it means we’re feeding all kinds of our own behavioural data to these giants with very little visibility.
Any new feature should face a very simple cost/benefit analysis. The average user currently can’t do that with AI. I think AI in some form is inevitable. But what we see today (hey, here’s a completely free feature we added!) is unsustainable both economically and environmentally.
Actually, frontier lab pricing is way more expensive than the actual cost. Look up the prices for e.g. Kimi K2 on OpenRouter to see the real "unsubsidized" costs. They can be up to an order of magnitude less.
Summarizing text can very easily be done by local AI: low-powered and free. For this type of task, there is essentially no reason to pay.
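For what it's worth, here is a minimal sketch of what that looks like in practice, assuming Ollama is running locally ("ollama serve") and a small model has been pulled; the model name is illustrative, not a recommendation:

    # Summarize a passage with a local model via Ollama's /api/generate
    # endpoint. Everything stays on the machine; no API key, no billing.
    import json
    import urllib.request

    def summarize_locally(text, model="llama3.2"):
        payload = json.dumps({
            "model": model,
            "prompt": "Summarize this passage in two sentences:\n\n" + text,
            "stream": False,  # return one JSON object instead of a stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default port
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["response"]

    print(summarize_locally("Call me Ishmael. Some years ago..."))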
This is totally true and points to why Calibre's feature adds value. However I think the big players see exactly what you see and are scrambling to become peoples' go-to first. I believe this is for two main reasons. The first is because it's the only way they know how to win, and they don't see any option other than winning. The second is that they want the data that comes with it so they can monetize it. People switching to local models has a chance to take all that away, so cloud providers are doing everything they can to make their models easier to use and more integrated.
Which is not what is happening here. I think a lot of people’s objections would be resolved by a local model.
> Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.
The choice is yours. If you want local models, you can do that.
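And both local options speak the same protocol: LM Studio and Ollama each expose an OpenAI-compatible endpoint, so pointing a generic client at localhost keeps all traffic on-device. A sketch, with the tools' default base URLs and an illustrative model name (nothing here is Calibre-specific):

    # Talk to a local model through the OpenAI-compatible API that both
    # LM Studio and Ollama expose. No request ever leaves the machine.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama; LM Studio defaults to :1234/v1
        api_key="unused",  # local servers ignore it, but the client requires a value
    )
    reply = client.chat.completions.create(
        model="llama3.2",
        messages=[{"role": "user", "content": "Who narrates Moby-Dick?"}],
    )
    print(reply.choices[0].message.content)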
> I struggle to understand the pushback against AI features.
To develop and sustain these "AI features", human intelligence - manifested as countless hours of work published online and elsewhere - was appropriated and used without permission, further increasing the asymmetry of knowledge/power between those with political power and those without (mostly vulnerable, marginalized cohorts).
Why is it obvious they are here to stay?
For my part it is the uneasy feeling of maybe being tracked and “sharing” private text/media either by accident or by the software’s malice.
Most devs want to put AI on their CV so they have strong personal incentives to circumvent what is best for their users.
Would you like to have LLM connections to Google from your OSS torrent client?
We can see the writing on the wall, but still not like it.
People are annoyed because the rollout of AI has been very intrusive. It's being added to everything even when it doesn't make sense. It's this generations version of having an app for every website. Does Calibre really need its own AI chatbox when I can ask the same question to ChatGPT in a browser?
Do you really need AI integration in your IDE when you can just use the ChatGPT chatbox in your browser?
Having it built-in allows Calibre to add context to the prompt automagically.
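Roughly, the "automagic" part is just prompt assembly: the application already knows the book's metadata and the reader's current selection, so it can prepend them to the question before anything reaches the model. A hypothetical sketch, not Calibre's actual implementation:

    # Hypothetical illustration of injecting reading context into a prompt.
    def build_prompt(title, author, selection, question):
        return (
            f"You are helping a reader of '{title}' by {author}.\n"
            f"They have highlighted this passage:\n---\n{selection}\n---\n"
            f"Question: {question}"
        )

    prompt = build_prompt(
        "Moby-Dick", "Herman Melville",
        "Call me Ishmael.",
        "Why does the narrator introduce himself this way?",
    )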
Someone wasted their time adding this feature when I can just throw some book titles at any of the other chatbots instead.
Perhaps to the detriment of some compatibility features that got put on the back burner.
>LLMs are here to stay, there is no denying that at this point.
You make LLMs sound like a stalker, or your mom's abusive live-in boyfriend
If they added a menu item “Kick a puppy”, and every time you clicked it, a puppy somewhere got kicked, would your response be “oh, well, I don’t like kicking puppies, so I just won’t click it, no big deal”?
People don't want it, plain and simple. Yet again here, like a thousand times before: it still gets forced on users. I don't know who's orchestrating this madness, but it's pathetic
> After much pushback, it looks as though users will get the ability to hide the feature from calibre's user interface, but LLM-driven features are here to stay and more will likely be added over time.
With the whole "no local models?! mega corp censorship!" complaint sidestepped from day 1, and now that it's not even shown on the UI, what will AI opponents complain about?!