I have no idea of the likely price, but (IMO) this is the sort of disruption that Intel needs to aim at if it's going to make some sort of dent in this market. If they could release this for around the price of a 5090, it would be very interesting.
> If they could release this for around the price of a 5090
This is not targeted at consumers. It’s competing with nVidia’s high RAM workstation cards. Think $10K price range, not $1-2K.
The 160GB of LPDDR5X chips alone is expensive enough that they couldn’t release this at the $2K price point unless they felt like giving it away (which they don’t)
Intel made a dent in the consumer gaming market with Battlemage.
They made a dent in the HPC market / Top500 with intel MAX.
It will be interesting to see if they can make a dent in the AI inference market (presumably datacenter/enterprise).
Maybe not that low, but given it's using LPDDR5 instead of GDDR7, at least the ram should be a lot cheaper.
Certainly an interesting choice. Dramatically worse performance but dramatically larger capacity; only time will tell how it actually goes.
With 160GB, surely they can add more channels to compensate?
Is there anything preventing them from using heterogeneous memory chips, like 1/4 GDDR7 and 3/4 LPDDR? It could enable new MoE-like architectures with finer-grained performance tuning for long contexts.
Rumor has it (according to MLID, so no one knows whether it's accurate) that AMD is also looking to use regular LPDDR memory for some of its lower-end next-gen GPUs, so it doesn't have to contend with Nvidia over the limited and cartelized GDDR7 supply. Maybe they're going to increase parallel bandwidth to compensate? Or have wholly different tricks up their sleeve.
probably just a lot more of it, to capture that consumer ai market
It's LPDDR5X
LPDDR5x really just means LPDDR5 running at higher than the original speed of 6400MT/s. Absent any information about which faster speed they'll be using, this correction doesn't add anything to the discussion. Nobody would expect even Intel to use 6400MT/s for a product that far in the future. Where they'll land on the spectrum from 8533 MT/s to 10700 MT/s is just a matter for speculation at the moment.
With this much RAM, don't expect anything remotely affordable by civilians.
Uncle Sam owns a good chunk of Intel now. "Not affordable by civilians" might be precisely the target market: the DoD/national intelligence agencies have money to burn, can fund things long enough to stabilize Intel a little, and in exchange they get first dibs on everything.
Intel for intel on your Intels, perhaps.
160 GB LPDDR5 is ~$1,200 retail so the card could be sold for $2,000. The price will depend on how desperate Intel is. Intel probably can't copy Nvidia's pricing.
> 160 GB LPDDR5 is ~$1,200 retail so the card could be sold for $2,000.
Prices are set by what the market will bear, not the lowest possible price where they could break even on the BOM and manufacturing costs.
The high cost of the LPDDR5X should be a clue that this is going to be in the $10K range, not the $2K range.
It'd be a disaster for Intel if it sold for less than 3k; personally I think they're aiming for break-even at 5k a pop at least, and I wouldn't be surprised if they advertise 2x the memory at half Nvidia's price, which would put it at ~15-20k? And a healthy margin, which they need like oxygen now. Of course it's all for naught if it doesn't perform compute-wise.
I also think they have to be substantially cheaper than nvidia to have any chance, but the pro 6000 with 96G is already available at 7-8k - so half the price would have to be significantly below 4k.
Huh didn’t know that, nice. Intel’s still in trouble then :) IMHO they’ll try to sell the increased ram as worth the ‘premium’ (or, worth the ‘reduced not-nvidia penalty’)
I agree with you, based on standard business logic, but the question is whether Intel would be willing to sell a generation at break-even to disrupt, achieve a larger (and somewhat 'sticky') install base, developer engagement, a larger mind-share, etc.?
I mean, even without that, the phrase “enterprise GPU” does not tend to convey “priced for typical consumers”.
Any discussion of an Intel entry into discrete graphics cards needs to at least _mention_ Intel's repeated history of abandoning discrete graphics cards.
the GPU market is not what it used to be, it's not some checkbox some executive needs to check to say "we are doing something".
the chips are so valuable now NVIDIA will end up owning a chunk of every major tech company, everyone is throwing cash and shares at them as fast as they can.
At least larrabee's cancellation resulted in the Offset engine going to the Firefall (2014) devs, which was a really great F2P MMO game for a while.
You’re saying it’s like the Google of graphics cards?
Very much so.
Xe3P, as far as I remember, is built in their own fabs, as opposed to Xe3 at TSMC. This could give them a huge advantage by making them possibly the only competitor not fighting for the same TSMC wafers.
What price is this sitting at? Because if its software support is decent then Intel might have just managed to break into the hardware market for AI on the edge. Examples like self-hosted LLM finetuning and RAG on an old Dell or HP server with these type of cards on them.
> Examples like self hosted LLM finetuning and RAG on an old dell or HP server with these type of cards on them.
This won’t be in the price range of an old Dell server or a fun impulse buy for a hobbyist. 160GB of raw LPDDR5X chips alone is not cheap.
This is a server/workstation grade card and the price is going where the market will allow. Consider that an nVidia card with almost half the RAM is going to cost $8K or more. That price point is probably the starting point for where this will be priced, too.
That nVidia card is going to have 5x the memory bandwidth. LPDDR5X is going to be rather low bandwidth.
(My guess is Intel's card is only going to have about 400 GB/s bandwidth.)
Funny they still call them graphics cards when they're really... I don't know, matmul cards? Tensor cards? TPUs? Well, that sums it up maybe: what those really are is CUDA cards.
FPUs?
This sounds like a gaming card with extra RAM so it's kind of appropriate to call it a graphics card.
> what those are are really CUDA cards
That don't run CUDA?
Dude, this is asinine. Graphics cards have been doing matrix and vector operations since they were invented. No one had a problem with calling matrix multipliers graphics cards until it became cool to hate AI.
It was many generations before vector operations were moved onto graphics chips.
Only for those not following the history of graphics chips.
https://en.wikipedia.org/wiki/TMS34010
> The TMS34010 can execute general purpose programs and is supported by an ANSI C compiler.
> The successor to the TMS34010, the TMS34020 (1988), provides several enhancements including an interface for a special graphics floating point coprocessor, the TMS34082 (1989). The primary function of the TMS34082 is to allow the TMS340 architecture to generate high quality three-dimensional graphics. The performance level of 60 million vertices per second was advanced at the time.
Like these, there were several others over the IBM PC's own history.
I think they’re using “vector” in the linear algebra sense, e.g. multiplying a matrix and a vector produces a different vector.
Not, as I assume you mean, vector graphics like SVG, and renderers like Skia.
Nope, I mean it in the first sense. That happened with the GeForce 256 in 1999, and shader registers (the first programmable vector math) were introduced with the GeForce 3 in 2001. Before that 3D graphics accelerators -- the term GPU had not yet been invented -- simply handled rasterization of triangles, and texture look-ups. Transformation & lighting was handled on the CPU.
(I will use "GPU" because "3d accelerator" was very gaming PC oriented term predated by 3d graphic hardware for decade)
Only in the consumer market - which is why the GeForce 256 release left game devs with GL-based engines smug, since they immediately benefited from hardware T&L, which was the original function of earlier GPUs (to the point that more than one "3D GPU" was an i860, or a few of them, with custom firmware and some DMA glue to do... mostly vector ops on transforms, and a bit of lighting as a treat).
The consumer PC market looked different because games wanted textures, and the first truly successful 3D accelerator was the 3Dfx Voodoo, which was essentially a rasterizer chip and a texture-mapping chip, with everything else done on the CPU.
Fully programmable GPUs were also a thing in the 2D era, with things like TIGA, where at least one package I heard of pretty much implemented most of X11 on the GPU.
This was of course all driven by what the market demanded. The original "GPUs" were driven by the needs of professional work like CAD, military, etc., where most of the time you were operating in wireframe, and using Gouraud/Phong-shaded triangles was for fancier visualizations.
Games, on the other hand, really wanted textures (though limitations of consoles like the PSX meant that some games were mostly simple colour-shaded triangles, like Crash Bandicoot), and offloading them was a major improvement for gaming.
Oops, sorry I misread. That makes more sense in context.
Yeah, I remember all the hype about the first Nvidia chip that offloaded “T&L” from the CPU.
That was on PCs; the UNIX world was already exploring RenderMan by then.
If you s/graphics/3d graphics does that still hold true?
Yes. The earliest consumer PC 3D graphics cards just rasterized pre-transformed triangles and that's it; the CPU had to do pretty much all the math (but drawing the pixels was considered the hard part). Later, "Hardware Transform and Lighting (T&L)" was introduced circa 2000 by cards like the GeForce 256.
And even then, you couldn't really get any sort of serious matmul out of it; they were per-vertex, not per-pixel.
Per-pixel matmul (which is what you really need for anything resembling GPGPU) came with Shader Model 2.0, circa 2002; Radeon 9700, the GeForce FX series and the likes. CUDA didn't exist (nor really any other form of compute shaders), but you could wrangle it with pixel shaders, and some of us did.
Oh man, I forgot about doing vector math using OpenGL textures as "hardware acceleration". And it would be many more years before it was reasonable to require a GPU with programmable shaders; having to support fixed-function was a fact of life for most of the 2000's.
2D images have (height, width, color), so they are Vector3
GPUs may well have done the same-ish operations for a long time, but they were doing those operations for graphics. GPGPU didn't take off until relatively recently.
Graphics cards haven't ever done graphics. Graphics is a screen thing. Nobody looks at their graphics card to see little pictures. So they are still misnamed, but they've always been misnamed. They do BLAS.
It'll be either "cheap" like the DGX Spark (with crap memory bandwidth) or overpriced with the bus width of a M4 Max with the rhetoric of Intel's 50% margin.
Or it will be cheap, with the ability to expand 8X on a server. Particularly with PCIe 6.0 coming soon, might be a very attractive package.
https://www.linkedin.com/posts/storagereview_storagereview-a...
I remember the Larrabee and Xeon Phi announcements and getting so excited at the time. So I'll wait but curb my enthusiasm.
Yeah, Intel's problem is that this is (at least) the third time they've announced a new ML accelerator platform, and the first two got shitcanned. At this point I wouldn't even glance at an Intel product in this space until it had been on the market for at least five years and several iterations, to be somewhat sure it isn't going to be killed, and Intel's current leadership inspires no confidence that they'll wait that long for success.
Xe works much, much better than Larrabee or Xeon Phi ever did. Xe3 might even be good.
I’m personally just thinking about how they treated their embedded Keem Bay line. Totally shitcanned without warning. I doubt they consider this a core market to the degree that they will endure bad sales numbers for a while.
Between 18A becoming viable and this, it seems Intel is finally climbing out of the hole it's been in for years.
Makes me wonder whether Gelsinger put all this in motion, or if the new CEO lit a fire under everyone. Kinda a shame if it's the former...
Gelsinger had a long-term, realistic plan. He was out around 11 months ago. You can't magic up a new GPU in that timeframe - those projects have 3+ year pipelines for CPUs. I assume GPUs will be a bit shorter, but not that much.
Whatever happened with new products today must've been started before he left.
Anybody know the memory bandwidth?
Will this have any support for open source libraries like PyTorch or will it be all Intel proprietary software that you need a license for?
Intel puts a huge priority on DL framework support before releasing related hardware, going back to at least 2017.
I assume that hasn't changed.
OpenVino is entirely open-source and can run PyTorch and ONNX models, so this is definitely not a topic of concern. PyTorch also has native Intel GPU support https://docs.pytorch.org/docs/stable/notes/get_start_xpu.htm...
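For reference, a minimal sketch of what targeting the Intel GPU backend looks like in stock PyTorch, assuming a build with XPU support (roughly 2.4 or later); the model and tensor sizes here are just placeholders:

```python
import torch

# Minimal sketch: pick the Intel GPU ("xpu") device if this PyTorch build
# supports it and a device is present, otherwise fall back to CPU.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

# Ordinary model/tensor code then just targets that device string.
model = torch.nn.Linear(4096, 4096).to(device)  # placeholder model
x = torch.randn(8, 4096, device=device)
with torch.no_grad():
    y = model(x)
print(device, tuple(y.shape))
```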
There is PyTorch support on oneAPI.
I'm hopeful for the second hand market, imagine when these have paid for themselves and you can do local inference of crazy capable models!?
Anyone have any idea about the price?
A not-absurdly-priced card that can run big models (even quantized) would sell like crazy. Lots and lots of fast RAM is key.
How does LPDDR5 (This Xe3P) compare with GDDR7 (Nvidia's flagships) when it comes to inference performance?
Local inference is an interesting proposition because today in real life, the NV H300 and AMD MI-300 clusters are operated by OpenAI and Anthropic in batching mode, which slows users down as they're forced to wait for enough similar sized queries to arrive. For local inference, no waiting is required - so you could get potentially higher throughput.
I think the better comparison, for consumers, is how fast is LPDDR5 compared to the normal DDR5 attached to your CPU?
Or, to be more specific, what is the speed when your GPU is out of RAM and it's reading from main memory over the PCI-E bus?
PCI-E 5.0: 64GB/s @ 16x or 32GB/s @ 8x
2x 48GB (96GB) of DDR5 in an AM5 rig: ~50GB/s
Versus the ~300GB/s+ possible with a card like this, it's a lot faster for large 'dense' models. Yes, even an NVIDIA 3090 has ~900GB/s of bandwidth, but it's only 24GB, so even a card like this Xe3P is likely to 'win' because of the higher memory available.
Even if it's 1/3rd of the speed of an old NVIDIA card, it's still 6x+ the speed of what you can get in a desktop today.
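A back-of-envelope sketch of what those bandwidth numbers mean for memory-bound token generation; the bandwidth figures are the ones quoted above and the model size is an illustrative assumption, not a benchmark:

```python
# Back-of-envelope: for memory-bound generation, an upper bound on tokens/sec
# is roughly (memory bandwidth) / (bytes read per token); a dense model reads
# all of its weights once per generated token. Illustrative only.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_size_gb = 80  # assumption: a large dense model quantized down to ~80 GB

for label, bw in [("DDR5 desktop (~50 GB/s)", 50),
                  ("card like this (~300 GB/s, guess)", 300),
                  ("RTX 3090 GDDR6X (~900 GB/s)", 900)]:
    print(f"{label}: <= {max_tokens_per_sec(bw, model_size_gb):.1f} tok/s")
# Note the 3090 couldn't actually hold an 80 GB model in its 24 GB,
# which is the capacity point made above.
```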
This doesn’t matter at all, if the resulting tokens/sec is still too slow for interactive use.
LPDDR5X (not LPDDR5) is 10.7 Gbps per pin. GDDR7 is 32 Gbps per pin. So it's going to be slower.
Yes but in matrix multiplication there are O(N²) numbers and O(N³) multiplications, so it might be possible that you are bounded by compute speed.
Both are equally important: compute for prefill and memory bandwidth for generation.
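Roughly, that's the arithmetic-intensity argument; here's an illustrative sketch (the layer dimension and hardware figures are assumptions, not specs for this card):

```python
# Sketch of the arithmetic-intensity argument. For a (B x K) @ (K x N) matmul:
#   FLOPs ~ 2*B*K*N,  bytes moved ~ (B*K + K*N + B*N) * bytes_per_element.
# Prefill multiplies many tokens at once (large B); decode handles one token (B = 1).
def intensity(B: int, K: int, N: int, bytes_per_el: int = 2) -> float:  # bf16
    flops = 2 * B * K * N
    data = (B * K + K * N + B * N) * bytes_per_el
    return flops / data

K = N = 8192  # assumption: a typical large-transformer layer dimension
print("prefill, 2048 tokens:", round(intensity(2048, K, N)), "FLOP/byte")
print("decode,  1 token:    ", round(intensity(1, K, N), 2), "FLOP/byte")
# A part with, say, 100 TFLOP/s and 600 GB/s stays compute-bound only above
# ~167 FLOP/byte, so single-token decode is firmly bandwidth-bound.
```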
I asked GPT to pull real stats on both. Looks like the 50-series RAM is about 3X that of the Xe3P, but it wanted to remind me that this new Intel card is designed for data centers and is much lower power, and that the comparable Nvidia server cards (e.g. H200) have even better RAM than GDDR7, so the difference would be even higher for cloud compute.
Isn't that precisely what DGX Spark is designed for?
How is this better?
DGX Spark is $4000... this might (might) not be? (and with more memory)
This starts shipping in 2027. I'm sure you can buy a DGX Spark for less than $4k in 2 years time.
But good luck with Nvidia not turning it into abandoware.
Any business people here who can explain why companies announce products a year before their release? I can understand getting consumers excited, but it also tells competitors what you are doing, giving them time to make changes of their own. What's the advantage here?
In this case there is no risk of anyone stealing Intel's ideas or even reacting to them.
First, they're not even an also-ran in the AI compute space. Nobody is looking to them for roadmap ideas. Intel does not have any credibility, and no customer is going to be going to Nvidia and demanding that they match Intel.
Second, what exactly would the competitors react to? The only concrete technical detail is that the cards will hopefully launch in 2027 and have 160GB of memory.
The cost of doing this is really low, and the value of potentially getting into the pipeline of people looking to buy data center GPUs in 2027 soon enough to matter is high.
Given how long it takes to develop a new GPU I’m pretty sure this one was signed off by Pat and given it survived Lip-Bu’s axe that says something, at least for Intel.
If customers know your product exists before they can buy it then they may wait for it. If they buy the competitor's product today because they don't know your product will exist until the day they can buy it then you lose the sale.
Samples of new products also have to go out to third-party developers and reviewers ahead of time so that third-party support is ready for launch day, and that stuff is going to leak to competitors anyway, so there's little point in not making it public.
I don't think you're giving much advantage to anybody really on such a small timeframe.
Semiconductors are like container ships, they are extremely slow and hard to steer, you plan today the products you'll release in 2030.
Adding on to everyone else. It might help with sales for those with long procurement cycles.
If you're planning a supercomputer to be built in 2027, you want to look at what's on the roadmap.
It's more than a year. They're sampling this to customers in the second half of 2026. It's a 2027 launch at best.
Intel has practically nothing to show for an AI capex boom for the ages. I suspect that Intel is talking about it early for a shred of AI relevance.
It can also prevent competitors from entering a particular space. I was told as an undergraduate that UNIX was irrelevant because the upcoming Windows NT would be POSIX compliant. It took a _very_ long time before that happened (and for a very flexible version of "compliant"), but the pointy-headed bosses thought that buying Microsoft was the future. And at first glance the upcoming NT _looked_ as if the TCO would be much lower than AIX, HP-UX or Solaris.
Then of course Linux took over everywhere except the desktop.
That wasn't even necessarily false. Windows NT on commodity hardware from the likes of Dell arguably did have a lower TCO than proprietary UNIX on proprietary hardware.
But then Linux on that same commodity hardware was lower yet.
This is a shareholder “me too” product
What are they gonna do with their own FAB?
Not release anything?
There'll be a good market for comparatively "lower power / good enough" local AI. Check out Alex Ziskind's analysis of the B50 Pro [0]. Intel has an entire line-up of cheap GPUs that perform admirably for local use cases.
This guy is building a rack on B580s and the driver update alone has pushed his rig from 30 t/s to 90 t/s. [1]
0: https://www.youtube.com/watch?v=KBbJy-jhsAA
1: https://old.reddit.com/r/LocalLLaMA/comments/1o1k5rc/new_int...
Watson…
Yeah, even RTXs are limited in this space due to the lack of tensor cores. It's a race to integrate more cores and faster memory buses. My suspicion is this is more of a me-too product announcement so they can play partner to their business opportunities and continue greasing their wheels.
If you're Intel sized, it's gonna leak. If you announce it first, you get to control the message.
The other thing is enterprise sales is ridiculously slow. If Intel wants corporate customers to buy these things, they've got to announce them ~a year ahead, in order for those customers to buy them next year when they upgrade hardware.
To keep investors happy and the stock from falling? Fairy tales work as well; see Tesla robots.
> What's the advantage here?
Stock number go up
The AI bubble might not last another year. Better get a few more pumps in before it blows.
There is a serious possibility this isn't a bubble. Too many people watched The Big Short and now call every bull run a bubble; maybe the bubble was the dollar, and it's popping now instead.
Have you looked in detail at the economics of this?
Career finance professionals are calling it a bubble, not due to some newfound deep technological expertise, but because public cos like FAANG et al. are engaging in typical bubble-like behavior: shifting capex away from their balance sheets into SPACs co-financed by private equity.
This is not a consumer debt bubble, it's gonna be a private market bubble.
But as all bubbles go, someone's gonna be left holding the bag, with society covering for the fallout.
It'll be a rate hike, or some Fortune X00 enterprises cutting their non-ROI AI bleed, or an AI fanboy like Oracle over-leveraging themselves and then watching their credit default swaps go "Boom!", leading to a financing cut-off.
It's possible, circular financing is definitely fishy, but OTOH every openai deal sama makes is swallowed by willing buyers at a fair market price. We'll be in a bubble when all the bears are dead and everyone accepts 'a new paradigm', not before; there's plenty of upside capitulation left judging by some hedge fund returns this year.
...and again, this is assuming AI capability stops growing exponentially in the widest possible sense (today, 50%-task-completion time horizon doubles ~7 months).
AI is not going anywhere. Now everyone wants to get a piece. Local inference is expected to grow: documents, image, video, etc. processing. Another obvious area is driverless farm vehicles and other automated equipment. "Assisted" books, images, and news are already here and growing fast. Translation is also a fact.
The technology, maybe - and if on local.
The public co valuations of quickly depreciating chip hoarders selling expensive fever dreams to enterprises are gonna pop though.
Spending 3-7 USD for 20 cents in return, and 95% project failure rates for quarters on end, aren't gonna go unnoticed on Wall St.
So far there is no 'plateau' in the near future. 'AI' as a science and its applications should develop further for the next several years. Models will get more efficient, but still, the bigger the better. This is obvious. Even if models don't scale up well, they can be used collectively in parallel 'brainstorming'. This will still create demand for hardware. Stagnation is still possible in case of a recession, in which case even stable businesses will suffer.
As for efficiency, replacing one programmer in a group of 10 with AI will already increase productivity and lower the cost, in most cases. In reality, adding AI accounts to an existing group works better. This is _now_, not hopes or sci-fi.
That's why I'm saying there is no way back. An 'AI winter' is as likely as a smartphone winter.
Your entire argument is woefully ignoring the CapEx economics of all of this.
But that's the foundation.
And there is a plateau in real money spent on AI chips.
You're ignoring a whole group of economic and finance professionals as well as - if you're inclined to listen to their voices more - Sama calling it a bubble.
If not for AI spending, the US already would be in a recession.
So your argument might sound nice and practical from a purely scientific perspective or the narrow use case of AI coding support, but it's entirely detached from reality.
A year out; in that time Nvidia and AMD, not to mention Huawei and others, are going to hit the market as well. Intel is quite behind.
To me, the price point is what matters. It's going to be slow with LPDDR5. The 5090 today is much faster. But sure, big RAM.
RTX pro 6000 with 96gb of ram will be much faster.
So I'm thinking price point is below the 6000, above the 5090.
Honestly, Intel just has to build a GPU with an insane amount of VRAM. It doesn't even have to be the fastest to compete... just a ton of VRAM for dirt cheap.
It’s LPDDR5x
It’s gonna be slowwww
It's gonna be what, 273GB/sec VRAM bandwidth at most? Might as well buy an AMD 395+ with 128GB right now for the same inference performance and slightly less VRAM.
Bandwidth depends very much on bus width.
If it's fast LPDDR5X (9600 MT/s) with a 512-bit bus width (8 64-bit channels - actually multiples of quad 16-bit subchannel nonsense), it could be upwards of 600 GB/s. Lots of bandwidth, like the beefy Macs have.
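A quick sanity check of that arithmetic; the 9600 MT/s speed and the bus widths here are assumptions from this thread, not announced Intel specs:

```python
# Sanity check of the bandwidth arithmetic: MT/s * (bus width in bytes) gives
# MB/s; divide by 1000 for GB/s. Speeds and widths are assumptions, not specs.
def bandwidth_gb_s(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000

print(bandwidth_gb_s(9600, 512))  # ~614 GB/s with a 512-bit bus
print(bandwidth_gb_s(9600, 256))  # ~307 GB/s with a narrower 256-bit bus
print(bandwidth_gb_s(8533, 512))  # ~546 GB/s at the lower end of LPDDR5X speeds
```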
How can you tell without knowing the bus width?
Slow is better than nothing. A card with this much VRAM in a "prosumer" price range would be really interesting right now for workstation, to work with big models.
Isn't this exactly that?
We don't know the pricing yet.
Fair... Hopefully it's consumer friendly. AI absolutely allows new companies to compete in the GPU context, but it's a surprise that no one has made an expansion card for AI usage. Computers have the PCIe slots for that purpose.
the mad lad leopold did it, props
Sounds as if it won't be widely available before 2027, which is disappointing for a 341GB/s chip.
Intel leadership actually reads HN? Mindblown...