I think we should stop calling these types of models open source. They are really "open weight": the training code is proprietary and never revealed.
https://github.com/microsoft/VibeVoice/issues/102
Indeed. We now live in a world where freeware is called open source. We are very sorry, Stallman.
If you're going to apologize to Stallman, you should apologize for conflating open source with software freedom. ;D
With free/libre software, where freedom and liberty are about what the end user is actually empowered to do, the software is mostly metonymic. Free software, free society, because of course there are free people in the middle.
Right, as I said elsewhere, maybe let's just let "open-source" have it.
"Open-source" can mean "anything you can go out and grab a copy of and use," without much legal certainty about any of it, and we can reserve "free software" for the other, better thing.
What you said makes a lot of sense. Free software should not be confused with open source.
> we should stop calling this type of model open source. They are indeed "open weight"
This ship has sailed. It’s now in the same category as hacker/cracker and the pronunciation of GIF.
I think you mean GIF.
It's the same as GIS, you wouldn't say jizz now would you?
I absolutely do, every single time it comes up.
I hadn't thought about how to pronounce GIS, but do you have a problem with the pronunciation of the Japanese Industrial Standards: JIS?
The developer of the format declared the pronunciation 30+ years ago. It has always been jif.
Yeah, but society overruled them.
How do you pronounce giraffe?
How do you pronounce gift?
gorge = george
i am absolutely going to from now on
I take it that you haven’t met the Arcgees people…
And "hallucination" which should have been "delusion".
Way early on (spring 2023) people tried to stop it, but no luck.
At least it's MIT licensed! As much as non-open training data irks me, restrictive licensing irks me more!
I'm genuinely torn on this one; I get why it's technically wrong, but the reason I have no problem with it is the wishy-washiness of "open source" generally.
As I teach this stuff to people newer to this tech, it's probably just easier and more helpful to refer to the wide array of "stuff you can just download and use yourself" as "open-source," and then after that go deeper and talk about why Stallman was right, how "Free Software" came first, etc.
I mean, you have "AI," which means just about anything in marketing speak, and "Agentic" is becoming similar; hopefully they don't goof that one too badly, since it would be nice to know what you are trying to sell me. It used to be that "Cloud" meant storage, not just hosting (I guess it still does).
Then there's "Smart" in front of Car, Phone, TV, and so on... Meaning different things.
I do think "Open Weight" should be more commonly used. On the other hand, there are definitely communities that spring up to build training infrastructure and inference infrastructure around open models.
Openwashing is the new greenwashing, which, coincidentally, seems to have gone out of fashion a few hundred datacentres ago.
it was replaced with abundancewashing
What is "abundancewashing"?
> “This means a future of abundance. A future where there is no poverty, where people can have whatever they want in terms of goods and services.” – Elon Musk
> “I think we see a path now where the world gets much more abundant and much better every year.” – Sam Altman
https://www.diamandis.com/blog/elon-sam-abundance
This is not a new model. Also, it hallucinates a lot. Also, it's very heavy and slow at inference. It's also bad at multilingual.
Edit: I'm talking purely about speech to text (STT). Not sure about the other things this can do.
Yeah, I don't get why it is suddenly getting so much attention today; it is all over Twitter too.
well duh, they updated the news section
https://github.com/microsoft/VibeVoice/commit/e73d1e17c3754f...
which is microsoft for "we removed two dead links". AI innovation knows no limits!
Interestingly that seems to be in response to [1], which might indeed be the trigger for this.
[1] https://doublepulsar.com/microsoft-vibing-capturing-screensh...
I think this was all covered when they said it was released by Microsoft?
The nuance is lost on LLM agentic dominant partakers.
In the past month or so, I added two models to my app Whisper Memos (https://whispermemos.com):
- Cohere Transcribe (self hosted)
- Grok Speech To Text (they provide an API, only $0.10/hr!)
They are both excellent. I'm not sure about this one. Would you like to see it in a consumer speech to text app?
I've had good experiences with the Mistral Voxtral models (I've used the API, but some of the model-variants are open weight)
Does Cohere work with longer transcripts? Do you have to do some magic to merge recordings over 35 seconds long?
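(For what it's worth, the generic workaround for hard chunk limits, not specific to Cohere's API, is to cut the audio into overlapping windows, transcribe each, and deduplicate the seam where consecutive transcripts repeat. A minimal sketch of the merge step, assuming each chunk comes back as a list of words:)

```python
def merge_overlapping(chunks):
    """Merge word lists from overlapping audio chunks.

    Because the audio windows overlap, each chunk's tail repeats the
    next chunk's head; we find the longest suffix/prefix match at each
    seam and drop the duplicated words.
    """
    merged = list(chunks[0])
    for nxt in chunks[1:]:
        # Longest suffix of `merged` that is also a prefix of `nxt`.
        best = 0
        for k in range(min(len(merged), len(nxt)), 0, -1):
            if merged[-k:] == nxt[:k]:
                best = k
                break
        merged.extend(nxt[best:])
    return merged
```

Real transcripts have small wording differences at the seam, so in practice you'd want a fuzzy match (or word timestamps) rather than exact equality, but the shape of the fix is the same.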
Have you tried qwen?
Any non-Musk alternatives that are comparable in quality and cost?
Voxtral competes on price ($0.003/min) and quality. Speechmatics has best in class accuracy but is a bit more expensive ($0.004/min)
Our default is still OpenAI Whisper. Grok is just a choice for users who might prefer it.
Isn't this project the one Microsoft published but then soon after pulled it for security/safety reasons? What has changed since then?
Look at the "News" section in the readme: the original TTS model is gone from this repo (you can still find it elsewhere), but the STT/ASR, long-form TTS, and streaming TTS models are newer.
It’s confusing (at least for me) because the project covers a number of things including what you are mentioning.
[off topic]
When explanations get posted directly in HN comments, I imagine someone somewhere in the world is able to learn in spite of their Internet restrictions/firewalls
People will also post their own interpretations in response to comments, and quickly find out they missed something.
… But if you try to automate it, like include a summary under every HN post, you encourage laziness too much and are pre-chewing too heavily. Some balance here.
[on topic]
(OK I’m done making excuses, time to read the article… thanks for the encouragement!)
I thought this wasn't explained directly in the readme, but in fact I missed it. I wasn't going to read Microsoft's entire changelog! But it was substantive; thanks to the sibling commenter:
“2025-09-05: VibeVoice is an open-source research framework intended to advance collaboration in the speech synthesis community. After release, we discovered instances where the tool was used in ways inconsistent with the stated intent. Since responsible use of AI is one of Microsoft’s guiding principles, we have removed the VibeVoice-TTS code from this repository.”
Interesting to see "vibe" enshrined by the likes of Microsoft as an AI product word.
Especially when "vibe coded" can have a negative connotation meaning quickly put together without understanding.
I’m just surprised they put the name of the e-waste slop company in their product
Which makes it even weirder that they get offended when people use "Microslop." They are the ones leaning into the marketing.
"get offended" is just what the clickbait news cycle made of it. It was based on the post at [1], and this is all it said:
> We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other
[1] https://snscratchpad.com/posts/looking-ahead-2026/
Great post last night from Simon: https://simonwillison.net/2026/Apr/27/vibevoice/
Note that this just covers the Speech-to-Text/Speech-Recognition aspect (à la Whisper); there are also models for long-form Text-To-Speech and streaming Text-To-Speech.
“VibeVoice can only handle up to an hour of audio”
Why?
Holy moly, a Microsoft AI product that isn't named Copilot!
Missed opportunity to call it Vopilot
I took a look into local options for ASR and diarization some months ago, I missed that VibeVoice now has this feature.
My conclusion back then (which came from only shallow research on the topic and zero real experience, mind you) was that Whisper + Pyannote was the "stable" approach.
Have the VibeVoice, Voxtral, Qwen or the Nemo solutions caught up in segmentation and speaker recognition?
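(For context on the Whisper + Pyannote combo: the glue between the two models is usually just time alignment. You transcribe with Whisper, diarize with Pyannote, then label each transcript segment with whichever speaker turn overlaps it the most. The model calls are left out here; the overlap assignment below is the generic approach, not Pyannote's own API:)

```python
def assign_speakers(transcript_segments, speaker_turns):
    """Label each transcript segment with the speaker whose
    diarization turn overlaps it most in time.

    transcript_segments: list of (start, end, text) from the ASR model.
    speaker_turns: list of (start, end, speaker) from the diarizer.
    """
    labeled = []
    for seg_start, seg_end, text in transcript_segments:
        best_speaker, best_overlap = None, 0.0
        for turn_start, turn_end, speaker in speaker_turns:
            # Length of the intersection of the two time intervals
            # (negative when they don't overlap at all).
            overlap = min(seg_end, turn_end) - max(seg_start, turn_start)
            if overlap > best_overlap:
                best_speaker, best_overlap = speaker, overlap
        labeled.append((best_speaker, text))
    return labeled
```

Segmentation quality then depends almost entirely on the diarizer's turn boundaries, which is why the "have the newer models caught up?" question matters.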
So we've really just settled on Vibe as the verb for AI then?
I'd be willing to bet it will be "Word of the Year" for 2026. Merriam-Webster had 'slop' for 2025, and 'polarization' for 2024. Is there a prediction market for this?
it'll probably be something we're not even talking about yet - we still have 7 months in which to make the world even worse
Why use precise technical language when you can just vibe with your AI system?
Microsoft has historically made poor choices in product naming, but this has to be a new low.
You have selected Microsoft Sam as the computer's default voice.
My friends and I had fun in the computer lab with Microsoft Sam, inputting long strings of characters to create funny sound effects. Sususususususu.
What’s the current state of the art, for each of training locally and in the cloud, for learning my voice?
Local? No idea. Cloud? Eleven Labs, probably. But it's described as "cloning," not "training." Not sure what the distinction is, or why it matters if the end result is that you can generate any TTS that sounds like you. There might very well be an important one; I just don't know it.
Locally maybe https://voicebox.sh/
Elevenlabs in the cloud.
open weights i would say S2: https://github.com/rodrigomatta/s2.cpp
Interesting story about this repo/product/author by cybersecurity researcher Kevin Beaumont: https://cyberplace.social/@GossiTheDog/116454846703138243
looks like this offers ASR support in GGUF https://github.com/CrispStrobe/CrispASR -- haven't tested
Microsoft Store App Vibing.exe Accused of Harvesting Screens, Audio, and Clipboard Data:
https://cyberpress.org/microsoft-store-app-vibing-exe-accuse...
Maybe Microsoft's real strength was never making the best model; it was knowing you don't need to, as long as you own the platform everyone builds on.
For me, it's giving very poor results.
Previously:
Sept 2025 https://news.ycombinator.com/item?id=45114245
Microsoft is famous for choosing terrible names, but how could they be this terrible?
Seems quite heavy for an STT model; Parakeet and Whisper are much smaller and perform great for quick dictation and transcription of longer files. I guess that's due to additional accuracy and speaker diarisation?
The TTS example clip in the repo of 'spontaneous singing' is creepy as fuck