600 GB/s of memory bandwidth isn't anything to sneeze at.
~$1000 for the Pro B70, if Microcenter is to be believed:
https://www.microcenter.com/product/709007/intel-arc-pro-b70...
https://www.microcenter.com/product/708790/asrock-intel-arc-...
Recent kernels have SR-IOV support for these chips too. B&H has them listed for $950.
https://www.bhphotovideo.com/c/product/1959142-REG/intel_33p...
When 32GB NVIDIA cards seem to start at around $4000 that's a big enough gap to be motivating for a bunch of applications.
I tend to agree that the VRAM size and bandwidth are the core thing, but this B70 Pro allegedly has 387 int8 TOPS vs. the 5090's 3400 int8 TOPS, and 600 GB/s of bandwidth vs. 1792 GB/s. I'm delighted to see an option at a quarter of the price! But man, a tenth the compute? https://www.techpowerup.com/347721/sparkle-announces-intel-a... https://www.tomshardware.com/pc-components/gpus/nvidia-annou...
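For intuition on why bandwidth is the headline number for local inference: single-stream decode of a dense model is usually memory-bandwidth-bound, so the token-rate ceiling is roughly bandwidth divided by the bytes of weights read per token. A back-of-envelope sketch (the 7B model size and fp8 precision are illustrative assumptions):

    # Back-of-envelope decode ceiling: tokens/sec ~= bandwidth / bytes-per-token.
    # Assumes a dense model where every weight is read once per generated token.
    def decode_ceiling_tps(bandwidth_gb_s: float, params_billions: float,
                           bytes_per_param: float) -> float:
        return bandwidth_gb_s / (params_billions * bytes_per_param)

    # 7B parameters at fp8 (1 byte/param), bandwidth numbers from this thread:
    print(decode_ceiling_tps(600.0, 7.0, 1.0))   # B70:  ~86 tok/s ceiling
    print(decode_ceiling_tps(1792.0, 7.0, 1.0))  # 5090: ~256 tok/s ceiling

By this measure the single-stream decode gap is ~3x (the bandwidth ratio), not the ~9x the TOPS figures suggest; raw TOPS matter more for prefill and batched serving.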
I think the B65 is priced at $650. Both are supported by llama.cpp, I believe. With that power draw you could run two of them.
Intel GPU prices have stayed reasonable so far, but I do wonder: if they prove viable for inference, will they wind up like Nvidia GPUs, severely overpriced?
New cards in 2026, and targeting Vulkan 1.3?!
Both have 32GB of VRAM. Could be a pretty compelling choice.
They certainly look viable as replacements for my Tesla P40 for virtual workloads.
Anyone running an Arc card for desktop Linux who can comment on the experience? I've had smooth sailing with AMD GPUs but have never tried Intel.
I've run Arc on Fedora for years and for general desktop use it's been perfect. For LLMs/coding it's getting better, but it's rough around the edges. Had a bug where trying to get VRAM usage through PyTorch would crash the system, etc.
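For context, this is roughly the kind of query involved — a minimal sketch assuming PyTorch's native XPU backend (2.5+); exact calls on your setup may differ:

    # Minimal sketch: checking VRAM usage on an Arc card via PyTorch's XPU backend.
    import torch

    if torch.xpu.is_available():
        x = torch.empty((4096, 4096), device="xpu")   # put a tensor on the Arc GPU
        print(torch.xpu.memory_allocated())           # bytes PyTorch has allocated
        print(torch.xpu.get_device_properties(0))     # device name, total memory, etc.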
Running dual Pro B60 on Debian stable mostly for AI coding.
I was initially confused about which packages were needed (backports kernel + the Ubuntu kobuk team PPA works for me). After getting that right, I'm now running vLLM mostly without issues (though I don't run it 24/7).
At first I had major issues with model quality, but the vLLM XPU guys fixed it fast.
Software capability isn't as good as Nvidia's yet (e.g. no FP8 KV cache support last I checked), but with this price difference I don't care. I can basically run a small FP8 local model with almost 100k tokens of context, and that's what I wanted.
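As a concrete sketch of that setup via vLLM's Python API — the model name and context length are illustrative stand-ins, and this assumes a vLLM build with XPU support:

    # Sketch: serving a small FP8 model with a large context window in vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen2.5-7B-Instruct",  # stand-in for whatever FP8 model you run
        quantization="fp8",                # FP8 weights; KV cache stays fp16/bf16 here
        max_model_len=100_000,             # the ~100k-token context mentioned above
    )
    outputs = llm.generate(["Summarize this repo's build steps."],
                           SamplingParams(max_tokens=256))
    print(outputs[0].outputs[0].text)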
There was the video a little while back where LTT built a computer for Linus Torvalds and put an Intel Arc card inside [1], so I'd imagine Linux support is, at the very least, acceptable.
[1] https://www.youtube.com/watch?v=mfv0V1SxbNA
Since they fired the entire Arc team (a lot of the senior engineers have already updated their LinkedIns to reflect new positions at AMD, Nvidia, and others) and laid off most of their Linux driver team (GPU and non-GPU), uh...
WTF?
You are exaggerating, right? They didn't really fire the entire Arc team did they? I couldn't find a source saying that.
Nope, no exaggeration.
The news that Celestial is basically canceled already hit the HN front page, as has the news that Druid was canceled before tapeout.
Celestial will only be issued in the variant that goes into budget/industrial embedded Intel platforms with a combined IO+GPU tile; the big-boy performance desktop/laptop parts that have a dedicated graphics tile will ship an Nvidia-produced tile instead.
There will be no Celestial dGPU variant, nor a dedicated-tile variant. Drivers will cease support for dGPUs of all flavors, and no new bug fixes will happen for B-series GPUs (there are no B-series iGPUs; A-series iGPUs will remain unaffected).
They signed the deal 2-3 months ago to cancel GPUs in favor of Nvidia. The other end of the deal is that future Nvidia SBCs will ship as big-boy variants: Rubin (replacing Blackwell) for the GPU, Vera (replacing Grace) as the on-SBC GPU babysitter, and newest-gen Xeons to do the non-inference tasks that Grace can't handle.
There is also talk that this deal may lead to Nvidia moving to Intel Foundry, away from TSMC. There is also talk that Nvidia may just buy Intel entirely.
For further information, see Moore's Law Is Dead's coverage off and on over the past year.
This is a chip they've had lying around for a while. It's the same architecture as used in the Arc B580 that launched at the end of 2024; this is just a slightly larger sibling. Intel clearly knew that their larger part wouldn't make for a competitive gaming GPU (hence the lack of a consumer counterpart to these cards), but must have decided that a relatively cheap workstation card with 32GB might be able to make some money.
Still seems crooked to sell a GPU that has already lost its driver team and will get no meaningful updates.
Wake me when they wake up and release a middling card with 128GB memory.
Buy 4?
Which mainboards are cheap and have 4 PCIe x16 (electrical) slots, and don't need weird risers to fit 4 GPUs?
If your actual gripe is risers, sounds like a "you" problem, not a technical problem.
Not sure why you'd want this over an Apple setup. The M4 Max has 545 GB/s of memory bandwidth — $2k for an entire Mac Studio with 48GB of RAM vs. 32GB for the B70.
My thinking is that I'd pick this, because I can't just plug a Mac into a slot in my server and have it easily integrate with all my other hardware across an ultra fast bus.
If they made an M4 on a card that supported all the same standards and was price competitive, though, that might be a good option.
Being able to keep infrastructure on Linux is a big advantage.
How many compatibility issues would macOS realistically cause? The Windows DX felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just kinda seems to work the same.
It's not the tooling for me; macOS is just bad as a server OS for many reasons: weird collisions with desktop security features, aggressive power saving that you have to fight against, root not being allowed to do root stuff, no sane package management, no OOB management, ultra slow OS updates. Most importantly, the UNIX underbelly of macOS has clearly not been a priority for a long time and is rotting, with weird, inconsistent, undocumented behaviour all over the place.
Provisioning, remote management, containers, virtualization, networking, graphics (and compute), storage, all very different on Mac. The real question is what you would expect to be the same.
For server usage? macOS is the least-supported OS in terms of filesystems, hardware and software. It uses multiple gigabytes of memory to load unnecessary user runtime dependencies, wastes hard drive space on statically-linked binaries, and regularly breaks package management on system upgrades.
At a certain point, even WSL becomes a more viable deployment platform.
With those $2k you can have 2x B70, with 1.2 TB/s aggregate and 64GB of VRAM, on Linux (and you can scale further, while Mac price increases are not linear).
You're absolutely right. And these Intel GPUs will also be much faster in terms of actual math than the M series GPUs that the Apple setup would have.
Support for Single Root IO Virtualization (SR-IOV) to enable compute and Graphics workloads in virtualized environments.
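In practice that's exposed through the generic PCI sysfs knobs; a hedged sketch (card0 and the VF count of 2 are assumptions for illustration — the exact path depends on your system):

    # Sketch: creating SR-IOV virtual functions via the standard PCI sysfs
    # interface (requires root; card index and VF count are assumptions).
    from pathlib import Path

    dev = Path("/sys/class/drm/card0/device")            # symlink to the PCI device
    print("max VFs:", (dev / "sriov_totalvfs").read_text().strip())
    (dev / "sriov_numvfs").write_text("2")               # create 2 VFs to pass to VMs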
Funny, I'm not sure why anyone would use Apple over Linux.
One can upgrade and swap parts in a computer running an Intel GPU, and Linux is very well supported compared to Mac hardware.