I can see how obscure but useful nuggets of information, the kind you rarely need but that are critical when you do, will be lost too.
If the weighting were shared between users, an attacker could exploit this feedback loop to promote their product or ideology by executing fake interactions that look successful.
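To make the failure mode concrete, here's a toy sketch of that loop. The success-count weighting, function names, and snippets are all invented for illustration; this isn't any real system's design, just the naive version of "shared feedback" being worried about:

```python
# Hypothetical shared feedback weighting: every interaction that *looks*
# successful bumps a snippet's score, and retrieval favors high scores.
from collections import defaultdict

shared_weights = defaultdict(float)  # memory snippet -> shared score

def record_interaction(snippet: str, looked_successful: bool) -> None:
    # Naive rule: any apparently successful interaction reinforces the snippet.
    if looked_successful:
        shared_weights[snippet] += 1.0

def top_memories(k: int = 3) -> list[str]:
    # Retrieval surfaces the highest-weighted snippets first.
    return sorted(shared_weights, key=shared_weights.get, reverse=True)[:k]

# Organic usage: a genuinely useful snippet gets occasional reinforcement.
for _ in range(10):
    record_interaction("useful but obscure fact", looked_successful=True)

# Attack: a bot farm fakes thousands of "successful" interactions.
for _ in range(10_000):
    record_interaction("buy AcmeCorp widgets", looked_successful=True)

print(top_memories())  # the planted snippet now dominates retrieval
```

Because nothing distinguishes a real success from a faked one, volume alone wins, which is exactly why sharing the weights across users widens the attack surface.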
Yay, we've found another way to give LLMs biases.
> Here's the part nobody talks about
This feels like such an obvious LLM tell; it has that sort of breathless TED Talk vibe that was so big in the late oughts.