All posts tagged: Blackwell

Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere

Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California. But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business. About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026. “Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.” The Rubin computing chip architecture, which was first announced in 2024, has been described by Huang …

Skild AI, Nvidia deploy robot brain on Blackwell assembly lines

March 16: Skild AI’s artificial intelligence model will power robots manning Foxconn’s assembly lines in Houston, where Nvidia’s Blackwell GPU server racks are built, in what the companies described as an early commercial deployment of generalized physical AI. The startup, backed by Nvidia and SoftBank, said on Monday that it would also partner with ABB Robotics and Universal Robots to embed its software across industrial robots, aiming to supply what it calls a general-purpose “brain”. Skild AI said its generalized AI model addresses a key limitation of current robotics systems, which are typically programmed for a single repetitive task and require extensive engineering to adapt to new processes. “If we partner with robotic OEMs (original equipment manufacturers) that already have hundreds of thousands of robots deployed, it gives us a path to extreme scalability and establishes the data flywheel,” Skild AI CEO Deepak Pathak told Reuters. Partnerships with ABB and Teradyne’s Universal Robots are intended to expand the data available to train the system by integrating the software into robots. The announcements come amid …

Vanity Fair Oscar Party Livestream Host Quenlin Blackwell Just Wants to Have Fun

The next era of the Quenaissance has just begun. You may know Quenlin Blackwell from playing herself on I Love LA or from her hit YouTube cooking show, Feeding Starving Celebrities, featuring fellow It girls such as Charli xcx, PinkPantheress, and Addison Rae. Next up, she’ll be ruling the red carpet as one of Vanity Fair’s Oscar Party livestream hosts. Hollywood’s biggest night of the year doesn’t end when the Oscars ceremony comes to a close; in fact, the night doesn’t really start until the winners find their way to the annual Vanity Fair Oscar Party. While the Vanity Fair Oscar Party has a brand-new location and three new livestream hosts, one thing that hasn’t changed is that Vanity Fair knows how to throw a party. “I’m looking forward to speaking to people that I would’ve not gotten the chance to really speak to in any other sort of situation, unless it was my cooking show,” Blackwell says with a laugh. And hosting the star-studded Vanity Fair Oscar Party could be the perfect opportunity for …

AI inference costs dropped up to 10x on Nvidia’s Blackwell — but hardware is only half the equation

Lowering the cost of inference is typically a combination of hardware and software. A new analysis released Thursday by Nvidia details how four leading inference providers are reporting 4x to 10x reductions in cost per token. The dramatic cost reductions were achieved using Nvidia’s Blackwell platform with open-source models. Production deployment data from Baseten, DeepInfra, Fireworks AI and Together AI shows significant cost improvements across healthcare, gaming, agentic chat, and customer service as enterprises scale AI from pilot projects to millions of users. The 4x to 10x cost reductions reported by inference providers required combining Blackwell hardware with two other elements: optimized software stacks and switching from proprietary to open-source models that now match frontier-level intelligence. Hardware improvements alone delivered 2x gains in some deployments, according to the analysis. Reaching larger cost reductions required adopting low-precision formats like NVFP4 and moving away from closed-source APIs that charge premium rates. The economics are counterintuitive: reducing inference costs requires investing in higher-performance infrastructure, because throughput improvements translate directly into lower per-token costs. “Performance is what drives …
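The counterintuitive economics above come down to simple arithmetic: cost per token is hourly instance cost divided by hourly token throughput, so a pricier but much faster GPU can still be cheaper per token. A minimal sketch of that relationship, using entirely hypothetical hourly rates and throughput figures (none of these numbers come from the Nvidia analysis):

```python
# Illustrative sketch with hypothetical numbers: why higher throughput can
# lower cost per token even when the hardware costs more per hour.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Cost to generate one million tokens on an instance billed hourly."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# Hypothetical older-generation instance: cheaper per hour, lower throughput.
prev_gen = cost_per_million_tokens(hourly_rate_usd=4.0, tokens_per_second=2_000)

# Hypothetical Blackwell-class instance: pricier per hour, far higher throughput.
new_gen = cost_per_million_tokens(hourly_rate_usd=10.0, tokens_per_second=20_000)

print(f"prev gen: ${prev_gen:.3f} per 1M tokens")
print(f"new gen:  ${new_gen:.3f} per 1M tokens")
print(f"cost reduction: {prev_gen / new_gen:.1f}x")  # 4.0x in this toy case
```

With these made-up figures, the newer instance costs 2.5x more per hour but delivers 10x the throughput, netting a 4x reduction in per-token cost; that is the shape of the effect the analysis describes.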

Nvidia’s Vera Rubin is months away — Blackwell is getting faster right now

The big news this week from Nvidia, splashed in headlines across all forms of media, was the company’s announcement about its Vera Rubin GPU. This week, Nvidia CEO Jensen Huang used his CES keynote to highlight performance metrics for the new chip. According to Huang, the Rubin GPU is capable of 50 PFLOPs of NVFP4 inference and 35 PFLOPs of NVFP4 training performance, representing 5x and 3.5x the performance of Blackwell. But it won’t be available until the second half of 2026. So what should enterprises be doing now?

Blackwell keeps on getting better

The current, shipping Nvidia GPU architecture is Blackwell, which was announced in 2024 as the successor to Hopper. Alongside that release, Nvidia emphasized that its product engineering path also included squeezing as much performance as possible out of the prior Grace Hopper architecture. It’s a direction that will hold true for Blackwell as well, with Vera Rubin coming later this year. “We continue to optimize our inference and training stacks for the Blackwell architecture,” Dave Salvator, director of accelerated computing …
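As a quick sanity check, the two multipliers Huang quoted are internally consistent: both imply the same Blackwell NVFP4 baseline. A back-of-envelope check using only the figures stated in the article:

```python
# Back-of-envelope check of the quoted keynote figures: if Rubin delivers
# 50 PFLOPs of NVFP4 inference at 5x Blackwell and 35 PFLOPs of NVFP4
# training at 3.5x Blackwell, both ratios imply the same Blackwell baseline.

rubin_inference_pflops = 50
rubin_training_pflops = 35

implied_blackwell_inference = rubin_inference_pflops / 5.0   # 10.0 PFLOPs
implied_blackwell_training = rubin_training_pflops / 3.5     # 10.0 PFLOPs

print(implied_blackwell_inference)  # 10.0
print(implied_blackwell_training)   # 10.0
```

Both figures work out to a 10-PFLOP NVFP4 baseline, so the 5x and 3.5x claims are consistent with a single Blackwell reference point.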