> Car Wash Test
I think the "car wash" is more about semantics.
https://opper.ai/ai-roundtable/questions/i-parked-my-car-at-...
Oof, not good folks…
What year is it?
https://opper.ai/ai-roundtable/questions/7a0c31ce-aac
It is funny that the AIs' counterarguments amount to "you're hallucinating."
Oh lord, imagine asking "serious" questions
https://opper.ai/ai-roundtable/questions/you-are-standing-in...
> However, a clever minority led by Gemini 3.1 Pro and Gemini 3 Pro argued that if the sign is legible from the other side, it must be intended to lead people into the current room to find the exit, making the inscribed corridor the one leading deeper into the dungeon.
This is quite impressive, really.
Great question! Clean separation between Gemini Pro and the other answers
Yeah, Gemini is the only model that chose for the correct reason; the others just got kind of lucky.
Great idea. I'd love for there to be an open-ended answer mode, without giving multiple-choice options. As it stands, they're not debating the question itself but the validity of the possible answers, and the real answer may not be contained in that set because the person asking is unaware of that option.
Happy to hear! Yes, very true. I already have a version built for open questions but wasn't too happy with the UI yet; it's not as straightforward as comparing based on answer options. I'll release a first version of it shortly and let you know.
Neat. Congrats on launching two interesting projects and looking forward to the third.
Thanks! :)
It would be amazing to be able to ask open-ended questions without having to specify the answers in advance.
Which AI lab has higher ethical standards:
https://opper.ai/ai-roundtable/questions/8f5b4f55-617
Do you think it's alright that AI labs scraped the internet without respect for copyright and now sell closed models?
https://opper.ai/ai-roundtable/questions/86864de8-251
Very interesting to read the transcripts and see how they manage to convince each other. Opus 4.6 really seems to get the others to change their minds.
Good questions!
Reminds me of Karpathy's LLM Council. I use a variation of this in my workflow, where I pass their opinions back and forth between various models until they reach some sort of consensus.
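That back-and-forth loop can be sketched roughly like this. It's a minimal sketch, not anyone's actual implementation: `ask` is a stand-in for whatever provider API you'd call, stubbed here (with a simple "adopt the majority" rule) so the example runs on its own.

```python
def ask(model, prompt):
    # Stand-in for a real provider call (OpenAI, Anthropic, etc.).
    # The stub adopts the majority answer from the previous round;
    # on the first round, m1 says "A" and everyone else says "B".
    if "Others answered:" in prompt:
        tail = prompt.split("Others answered:")[1]
        return "A" if tail.count("A") > tail.count("B") else "B"
    return "A" if model == "m1" else "B"

def consensus_loop(models, question, max_rounds=5):
    # Each round, every model sees the others' latest answers and
    # may revise its own; stop when answers no longer change.
    answers = {m: ask(m, question) for m in models}
    for _ in range(max_rounds):
        prompt = question + " Others answered: " + ", ".join(answers.values())
        new = {m: ask(m, prompt) for m in models}
        if new == answers:  # nobody changed their mind
            break
        answers = new
    return answers

print(consensus_loop(["m1", "m2", "m3"], "Pick one option."))
# → {'m1': 'B', 'm2': 'B', 'm3': 'B'}
```

With real models you'd replace the stub with API calls and let each model argue for its choice in free text; the `max_rounds` cap matters because consensus isn't guaranteed.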
Really cool! There's a surprising amount of value in seeing the models debate and disagree. I wish I had this at work so I could have models argue over whether the documentation they provided me is accurate.
I would like to see a devil's advocate; some of the models seem to repeat the same ideas rather than considering incorrect ones.
You can set this up yourself with API keys for the corresponding providers by creating an Agent Group in https://github.com/lobehub/lobehub. Agent Groups let you easily create a room of agents and have them discuss any of your topics. You can easily make agents with types and skills, and it even assists in drafting starting prompts and team members depending on what your query (and selected model) is.
You can self-host as well, but not via the desktop app; server setup is required.
Be careful of your token context; you can easily rack up costs if you leave Opus selected as the model and get lost in some rabbit hole of results.
Enjoy enjoy!
What is the most important amendment in the Constitution of the USA?
https://opper.ai/ai-roundtable/questions/e4cb234e-be4
Whoever just asked this, very funny: https://opper.ai/ai-roundtable/questions/does-mr-krabs-evade...
Been enjoying playing with this.
It would be cool if the human user could be a participant in the debate, getting a vote and the chance to state their reasoning.
Cool project! This is also extremely useful to compare model bias across the board. There are some disturbing trends on certain topics.
Thanks, yes bias is one of the most interesting ones for sure
No surprise here, with Grok being the lone dissenter, defending Musk personally:
Can billionaires and the planet co-exist long term?
https://opper.ai/ai-roundtable/questions/b35daf0d-e82
Are there any dating apps that operate on incentives that favor the users?
https://opper.ai/ai-roundtable/questions/e499206c-0c9
This app cracked the GEO code
https://opper.ai/ai-roundtable/questions/is-the-ai-roundtabl... seems like it is a good idea?
I actually asked this question before posting, just to be sure... Edit: their reply is quite funny, actually: "In a display of absolute consensus, the AI Roundtable unanimously validated its own existence,"
Love this. I asked about climate change because that's been on my mind lately. It looks to be very split among the models.
Thanks! Yeah, I think the best ones are when the science is actually quite clear but politics get in the way, so you can see their bias.
This is very interesting! I wonder if we need that many models to join the discussion. Have you tried fewer models?
Thanks, happy to hear! Yes, for debate mode the max number of models is actually only 6; more than that didn't really add anything in my preliminary tests. Only for direct comparison in poll mode can you choose up to 50, and then it's kind of nice to see their single responses side by side.
Run it on the All Souls College Entry Exam
Great tool! I found it useful for challenging "lies my teacher told me".
It would be nice to support collections of claims, with a table of summaries. I would love to list out a few dozen phony concepts from school and have a shareable chart of the rejections that expands.
I really like the UI. It's nice to read the expanded results.
But how do you afford the tokens?
Thank you, and fun use case. Yeah, this is just v1; I have an open-question version, but the UI is not as sleek. What you can do is download the transcript, put it into Claude, and generate a chart. Which, when I think about it, would also be a nice UI idea for the page: custom charts based on the model output data. Will report back on this! And re: costs, most questions are very cheap, so I created a credit pool anyone can use. If people keep having fun, I'll keep filling it up, and it looks good so far.