GLM-5V-Turbo is a model I wanted to like for its speed and API reliability, but it didn't perform well in our coding and reasoning testing. More recent open-source models have made it obsolete. GLM 5.1 is light years ahead of it on everything except speed, so I'm not sure why it's still being served.
Comprehensive evaluation results at https://gertlabs.com/rankings
>but it didn't perform well in our coding and reasoning testing
>Comprehensive evaluation results at https://gertlabs.com/rankings
But if you go to the linked site, it seems like the only thing being evaluated is how well the models play various games? I suppose that counts as "reasoning", but I don't see how coding ability is tested.
This may be a strange request, but is it at all possible to include Cursor's Composer models in your tests?
GLM-5.1 does not support image input.
I think the point is to use them both, with GLM 5.1 delegating vision tasks to GLM-5V-Turbo.
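For what it's worth, the delegation pattern described here is easy to sketch: a text-only planner model routes any image inputs to a separate vision model, then folds the returned descriptions into its own prompt. The snippet below is a minimal illustration only; the two `call_*` functions are hypothetical stubs standing in for real GLM 5.1 / GLM-5V-Turbo API calls, not actual z.ai client code.

```python
def call_vision_model(image_path: str) -> str:
    # Stub for a GLM-5V-Turbo call (hypothetical wrapper, not a real API).
    return f"[description of {image_path}]"

def call_text_model(prompt: str) -> str:
    # Stub for a GLM 5.1 call (hypothetical wrapper, not a real API).
    return f"[answer based on: {prompt}]"

def answer(question: str, images: list[str]) -> str:
    # Delegate each image to the vision model first, then hand the
    # text model the question plus the generated descriptions.
    descriptions = [call_vision_model(p) for p in images]
    context = "\n".join(descriptions)
    return call_text_model(f"{question}\n\nImage context:\n{context}")

print(answer("What is in the photo?", ["photo.jpg"]))
```

The tradeoff is the usual one with caption-and-forward pipelines: the text model only ever sees the vision model's summary, so anything the summary misses is lost.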
z.ai will use quantized models in off hours. Buyer beware.
Do you have proof for this?
I have a subscription and I have not seen any difference in performance during on/off hours. What exactly are you basing this on?