摘要
第三季度业绩回顾
本季度英伟达收入达570亿美元,同比增长62%;环比增长100亿美元,增幅22%,创历史新高。客户持续投入三大平台变革,推动加速计算、强大的AI模型和智能体应用的指数级增长,但这些转型仍处于早期阶段;
数据中心业务表现出色,收入达510亿美元,同比增长66%。云算力已售罄,各代GPU装机量均得到充分利用。计算业务同比增长56%,主要得益于GB300的产能爬坡;网络业务收入翻倍以上,受益于NVLink纵向扩展方案起量,以及Spectrum-X以太网与Quantum InfiniBand的稳健两位数增长;
分析师对2026年头部CSP及超大规模企业的合计资本支出预期持续上调,目前约为6000亿美元,较年初高出逾2000亿美元。当前超大规模工作负载向加速计算与生成式AI的转型,约占公司长期机遇的一半。
AI市场需求与发展
基础模型构建者推动计算支出持续增加:OpenAI周活跃用户增至8亿,付费客户增至100万;Anthropic年化运行率收入达70亿美元。同时,智能体AI(agentic AI)在各行业和任务中加速普及,为企业带来明确的投资回报;
众多软件平台集成英伟达加速计算与AI,如Palantir使用英伟达CUDA-X库和AI模型为其Ontology平台提速;企业广泛利用AI提升生产力、效率并降低成本,如RBC利用智能体AI提高分析师生产力,联合利华利用AI将内容创作提速一倍并降低成本,Salesforce工程团队提升了新代码开发生产力;
多个市场和企业宣布AI工厂和基础设施项目,如AWS与HUMAIN扩大合作,部署多达15万个AI加速器,并建设世界级GPU数据中心网络。
产品平台与技术进展
Blackwell平台在Q3获得进一步增长动力,GB300出货超越GB200,贡献了Blackwell总收入的约三分之二,向GB300的过渡无缝衔接,已实现量产出货。Hopper平台Q3收入约为20亿美元;
Rubin平台有望在2026年下半年启动产能爬坡,由七颗芯片驱动,将再度实现X倍级性能提升。英伟达已从供应链合作伙伴处收到芯片样片,生态系统已为快速爬坡做好准备;每年的X倍级性能飞跃提升了每美元性能,降低了客户的计算成本;
二十多年来,英伟达持续优化CUDA-X库,改善现有工作负载、加速新工作负载,并随每个软件版本提升吞吐量。大多数缺乏CUDA支持的加速器随模型技术演进,在几年内即被淘汰;英伟达经时间考验的通用架构则寿命更长;
网络业务专为AI而建,目前是全球最大的网络业务,收入为82亿美元,同比增长162%。英伟达在数据中心网络领域持续获胜,大多数AI部署均采用其交换机。Meta、微软、Oracle和xAI正在使用Spectrum-X以太网交换机建设吉瓦级AI工厂;
客户对NVLink Fusion的兴趣持续增长,英伟达宣布与富士通、英特尔合作,Arm亦宣布将为客户集成NVLink IP。NVLink已发展至第五代,是目前市场上唯一经过验证的纵向扩展(scale-up)技术;
在最新的MLPerf训练基准测试中,Blackwell Ultra的训练速度达Hopper的5倍,英伟达横扫所有基准测试。NVIDIA Dynamo是开源的低延迟推理框架,已被所有主要云服务提供商采用。
战略合作伙伴关系
英伟达与OpenAI推进战略合作,帮助其建设和部署至少10吉瓦的AI数据中心,并拥有投资OpenAI的机会;目前通过其云合作伙伴(Azure、OCI、CoreWeave)为其提供服务;
与Anthropic建立深度技术合作,针对CUDA优化Anthropic模型,并针对Anthropic工作负载优化未来架构;算力承诺初期包括高达1吉瓦,采用Grace Blackwell和Vera Rubin系统;
物理AI已成为价值数十亿美元的业务,英伟达为制造商和机器人创新者提供机遇:PTC和西门子推出基于Omniverse的数字孪生服务,亚马逊机器人等企业基于英伟达平台开发。英伟达还与台积电共同庆祝首片在美国生产的Blackwell晶圆下线,并与多家公司合作扩大在美业务布局。
其他业务表现与展望
游戏业务收入为43亿美元,同比增长30%,终端市场健康,渠道库存在假日季前处于正常水平。Steam以4200万玩家打破同时在线用户纪录,英伟达举办GeForce玩家节庆祝GeForce诞生25周年;
专业可视化收入7.6亿美元,同比增长56%,由DGX Spark推动。汽车业务收入5.92亿美元,同比增长32%,得益于自动驾驶解决方案。英伟达与Uber合作扩展自动驾驶车队;
毛利率方面,GAAP毛利率为73.4%,非GAAP毛利率为73.6%,超出指引,因数据中心产品组合、生产周期和成本结构改善而环比提升。运营费用环比增长;非GAAP有效税率略高于17%,高于16.5%的指引。存货环比增长32%,供应承诺环比增长63%;
第四季度总收入预计为650亿美元(上下浮动2%),环比增长14%,且不假设来自中国的数据中心计算收入。GAAP和非GAAP毛利率预计分别为74.8%和75%(上下浮动50个基点)。展望2027财年,投入成本上升,但公司力争将毛利率保持在70%区间中段;GAAP和非GAAP运营费用预计分别约为67亿美元和50亿美元。
内容实录
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release and the reports that we file with the Securities and Exchange Commission. All our statements are made as of today, November 19, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
译:在本次电话会议中,我们可能会基于当前预期作出前瞻性陈述。此类陈述受多项重大风险与不确定性因素影响,实际业绩可能与前瞻性陈述存在重大差异。有关可能影响我们未来财务业绩及业务的因素讨论,请参阅今日发布的盈利报告、最新财报以及我们向美国证券交易委员会提交的文件中的披露信息。除法律另有要求外,我们所有陈述均基于截至 2025 年11月19日我们目前可获取的信息作出,我们不承担更新任何此类陈述的义务。本次电话会议中,我们将讨论非公认会计原则(non-GAAP)财务指标。有关非公认会计原则财务指标与公认会计原则(GAAP)财务指标的调节表,可在我们发布于公司官网的首席财务官(CFO)评论报告中查阅。
Thank you. We delivered another outstanding quarter with revenue of 57 billion, up 62% year over year, and a record sequential revenue growth of 10 billion, or 22%. Our customers continue to lean into three platform shifts, fueling exponential growth for accelerated computing, powerful AI models, and agentic applications. Yet we are still in the early innings of these transitions, which will impact our work across every industry.
译:谢谢。我们又交付了一个出色的季度:营收达到570亿美元,同比增长62%;营收环比增长100亿美元,增幅达22%,创下历史新高。我们的客户持续投身于三大平台变革,为加速计算、高性能AI模型及智能体应用的指数级增长提供了强劲动力。然而,这些变革仍处于早期阶段,未来将对所有行业的业务运作产生深远影响。
We currently have visibility to half a trillion dollars in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026. By executing our annual product cadence and extending our performance leadership through full-stack design, we believe NVIDIA will be the superior choice for the 3 to 4 trillion dollars in annual AI infrastructure build we estimate by the end of the decade.
译:通过执行年度产品发布节奏,并借助全栈式设计持续巩固性能领先优势,目前我们已明确:自今年年初至2026年日历年末,Blackwell与 Rubin的营收规模将可达5000亿美元。我们预计,到本十年末,全球 AI 基础设施建设的年投入规模将达3万亿至4万亿美元,而英伟达将成为这一领域的优选合作伙伴。
Demand for AI infrastructure continues to exceed our expectations. The clouds are sold out, and our GPU installed base, both new and previous generations, including Blackwell, Hopper, and Ampere, is fully utilized. Record Q3 data center revenue of 51 billion increased 66% year over year, a significant feat at our scale. Compute grew 56% year over year, driven primarily by the GB300 ramp, while networking more than doubled given the onset of NVLink scale-up and robust double-digit growth across Spectrum-X Ethernet and Quantum InfiniBand. The world's hyperscalers, a trillion-dollar industry, are transforming search, recommendations, and content understanding from classical machine learning to generative AI. NVIDIA CUDA accelerates both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars. At Meta, AI recommendation systems are delivering higher-quality and more relevant content, leading to more time spent on apps such as Facebook.
译:AI基础设施的需求持续超出我们的预期。各大云厂商的算力已售罄,我们的GPU装机量(无论是Blackwell、Hopper还是Ampere等各代产品)均处于满负荷使用状态。Q3数据中心业务营收创下510亿美元的纪录,同比增长66%,以我们当前的业务规模而言,这是一项显著成就。计算业务同比增长56%,主要得益于GB300的产能爬坡;网络业务营收实现翻倍以上增长,原因包括NVLink纵向扩展方案的起量,以及Spectrum-X以太网与Quantum InfiniBand均实现稳健的两位数增长。全球超大规模科技企业(一个万亿美元级行业)正将搜索、推荐及内容理解业务,从传统机器学习转向生成式AI。英伟达CUDA对两者均可加速,是这一转型的理想平台,推动了以数千亿美元计的基础设施投资。在Meta,AI推荐系统正输出更高质量、更具相关性的内容,促使用户在Facebook等应用上花费更多时间。
Analyst expectations for the top CSPs' and hyperscalers' aggregate CapEx in 2026 have continued to increase and now sit at roughly 600 billion, more than 200 billion higher relative to the start of the year. We see the transition to accelerated computing and generative AI across current hyperscale workloads contributing toward roughly half of our long-term opportunity.
译:分析师对2026年头部云服务提供商及超大规模科技企业的合计资本支出预期持续上升,目前约达6000亿美元,较年初增加逾2000亿美元。我们观察到,当前超大规模科技企业的各类工作负载正向加速计算与生成式AI转型,这一转型约占我们长期市场机遇的一半。
Another growth pillar is the ongoing increase in compute spend driven by foundation model builders such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab, and xAI, all scaling compute aggressively to scale intelligence. The three scaling laws, pre-training, post-training, and inference, remain intact. In fact, we see a positive virtuous cycle emerging whereby the three scaling laws and access to compute are generating better intelligence and, in turn, increasing adoption and profits. OpenAI recently shared that their weekly user base has grown to 800 million, their paying customers have increased to 1 million, and their gross margins are healthy. Meanwhile, Anthropic recently reported that its annualized run-rate revenue reached 7 billion as of last month, up from 1 billion at the start of the year. We are also witnessing a proliferation of agentic AI across various industries and tasks. Companies such as Cursor, Anthropic, OpenEvidence, Epic, and Abridge are experiencing a surge in user growth as they supercharge the existing workforce, delivering unquestionable ROI for coders and healthcare professionals, among the world's most important professions.
译:另一个增长支柱是计算支出的持续增加,这一增长由Anthropic、Mistral、OpenAI、Reflection、Safe Superintelligence、Thinking Machines Lab及xAI等基础模型研发企业推动,这些企业都在大力扩大计算规模,以提升智能水平。AI领域的三大规模化法则(预训练、训练后优化与推理)依然有效。事实上,一个积极的良性循环正在形成:三大规模化法则与计算资源的可及性共同催生更先进的智能,而更先进的智能又反过来推动应用普及与利润增长。OpenAI近期透露,其周活跃用户数已增至8亿,付费客户数达100万,且毛利率保持良好。此外,Anthropic近期报告显示,截至上个月,其年化运行率营收已从年初的10亿美元增至70亿美元。我们还见证了智能体AI在各行业及各类任务中的广泛普及:Cursor、Anthropic、OpenEvidence、Epic及Abridge等公司在助力现有劳动力提效的同时,用户数量激增,为程序员与医疗专业人士这些全球最重要的职业群体创造了无可争议的投资回报率(ROI)。
Software platforms like ServiceNow, CrowdStrike, and SAP are integrating NVIDIA accelerated computing and AI. Our new partner Palantir is supercharging its incredibly popular Ontology platform with NVIDIA CUDA-X libraries and AI models for the first time. Previously, like most enterprise software platforms, Ontology ran only on CPUs. Lowe's is leveraging the platform to build supply-chain agility, reducing costs and improving customer satisfaction. Enterprises broadly are leveraging AI to boost productivity, increase efficiency, and reduce costs. RBC is leveraging agentic AI to drive significant analyst productivity, slashing report-generation time from hours to minutes. AI and digital twins are helping Unilever accelerate content creation by 2x and cut costs by 50%. And Salesforce's engineering team has seen at least a 30% productivity increase in new code development after adopting Cursor.
译:ServiceNow、CrowdStrike及SAP等软件平台,正将英伟达的加速计算与AI技术整合到自身系统中。我们的新合作伙伴Palantir首次借助英伟达CUDA-X库与AI模型,为其广受欢迎的Ontology平台赋能。此前,与大多数企业级软件平台类似,Ontology平台仅能在CPU上运行。劳氏(Lowe's)正借助这一平台提升供应链灵活性、降低成本并提高客户满意度。总体来看,各类企业正利用AI提升生产力、提高效率并削减成本:加拿大皇家银行(RBC)通过智能体AI显著提升分析师工作效率,将报告生成时间从数小时缩短至数分钟;联合利华借助AI与数字孪生技术,将内容创作速度提升一倍,同时降低50%的成本;Salesforce的工程团队在采用Cursor后,新代码开发生产力至少提升了30%。
This past quarter, we announced AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs. This demand spans every market: CSPs, sovereigns, model builders, enterprises, and supercomputing centers, and includes multiple landmark build-outs: xAI's Colossus 2, the world's first gigawatt-scale data center; and Lilly's AI factory for drug discovery, the pharmaceutical industry's most powerful data center. And just today, AWS and HUMAIN expanded their partnership, including the deployment of up to 150,000 AI accelerators, including our GB300. xAI and HUMAIN also announced a partnership in which the two will jointly develop a network of world-class GPU data centers anchored by the flagship 500 MW facility.
译:本季度,我们宣布了合计需约500万块GPU的AI工厂及基础设施项目。这一需求覆盖所有市场领域,包括云服务提供商、主权实体、模型研发企业、各类企业以及超级计算中心,其中包含多个具有里程碑意义的建设项目:xAI的"Colossus 2",全球首个吉瓦级数据中心;礼来(Lilly)的AI药物研发工厂,制药行业算力最强的数据中心。此外,就在今日,亚马逊云科技(AWS)与HUMAIN宣布扩大合作,包括部署多达15万个AI加速器(含我们的GB300);同时,xAI与HUMAIN还宣布了一项合作计划,双方将联合打造一个世界级GPU数据中心网络,该网络将以一座500兆瓦的旗舰级设施为核心。
Blackwell gained further momentum in Q3. The GB300 crossed over the GB200 and contributed roughly two thirds of total Blackwell revenue. The transition to GB300 has been seamless, with production shipments to the major cloud service providers, hyperscalers, and GPU clouds, and is already driving their growth. The Hopper platform, in its 13th quarter since inception, recorded approximately 2 billion in revenue in Q3. H20 sales were approximately 50 million; sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China. While we were disappointed in the current state that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the U.S. and China governments and will continue to advocate for America's ability to compete around the world. To establish a sustainable leadership position in AI computing, America must win the support of every developer and be the platform of choice for every commercial business, including those in China.
译:Blackwell架构在Q3的发展势头进一步增强。其中,GB300的出货已超越GB200,贡献了Blackwell总营收的约三分之二。GB300的产品过渡十分顺畅,目前已向头部云服务提供商、超大规模科技企业及GPU云服务厂商批量发货,且已开始为这些客户的业务增长提供助力。Hopper平台自推出以来已进入第13个季度,在第三季度实现约20亿美元营收。H20的销售额约为5000万美元;受地缘政治因素及中国市场竞争日益激烈的影响,本季度未能达成大额采购订单。尽管当前局势导致我们无法向中国市场供应更具竞争力的数据中心计算产品,对此我们感到遗憾,但我们仍致力于与美中两国政府保持沟通,并将持续倡导维护美国在全球范围内的竞争能力。要在AI计算领域确立可持续的领先地位,美国必须赢得每一位开发者的支持,并成为包括中国企业在内的所有商业机构的首选平台。
The Rubin platform is on track to ramp in the second half of 2026. Powered by 7 chips, the Vera Rubin platform will once again deliver an X-factor improvement in performance relative to Blackwell. We have received silicon back from our supply chain partners and are happy to report that NVIDIA teams across the world are executing the bring-up beautifully. Rubin is our third-generation rack-scale system. It substantially refines manufacturability while remaining compatible with Grace Blackwell. Our supply chain, data center ecosystem, and cloud partners have now mastered the build-to-installation process of NVIDIA's rack architecture, so our ecosystem will be ready for a fast Rubin ramp. Our annual X-factor performance leap increases performance per dollar while driving down computing costs for our customers. The long useful life of NVIDIA's CUDA GPUs is a significant TCO advantage over other accelerators: CUDA compatibility and our massive installed base extend the life of NVIDIA systems well beyond their original estimated useful life.
译:Rubin平台计划于2026年下半年启动产能爬坡。Vera Rubin平台由七颗芯片提供算力支撑,相较于Blackwell架构,将再度实现"X倍级"性能提升。目前我们已从供应链合作伙伴处收到芯片样片,并欣喜地告知,英伟达全球团队正顺利推进芯片调试工作。Rubin是我们的第三代机架级系统,在大幅优化可制造性的同时,仍保持与Grace Blackwell架构的兼容性。我们的供应链、数据中心生态系统及云服务合作伙伴,如今已熟练掌握英伟达机柜架构从生产到安装的全流程,因此我们的生态系统将为Rubin平台的快速爬坡做好准备。我们每年实现的"X倍级"性能飞跃,不仅能提高单位成本对应的性能,还能为客户降低计算成本。英伟达CUDA GPU的超长使用寿命,使其在总拥有成本方面较其他加速器具备显著优势:CUDA的兼容性以及我们庞大的装机量,大幅延长了英伟达系统的使用寿命,使其远超最初预估的使用周期。
For more than two decades, we have optimized the CUDA-X libraries, improving existing workloads, accelerating new ones, and increasing throughput with every software release. Most accelerators without CUDA become obsolete within a few years as model technologies evolve; NVIDIA's time-tested and versatile architecture endures. Thanks to CUDA, A100 GPUs we shipped six years ago are still running at full utilization today, powered by a vastly improved software stack. We have evolved over the past 25 years from a gaming GPU company to an AI data center infrastructure company.
译:二十多年来,我们持续优化CUDA-X库,通过每一次软件版本更新,改善现有工作负载、加速新工作负载的运行,并提升吞吐量。随着模型技术的迭代,大多数缺乏CUDA支持的加速器往往在短短几年内就会被淘汰,而英伟达经时间考验的通用架构则能长期保持价值。得益于CUDA,我们六年前出货的A100 GPU如今仍在满负荷运行,这背后是已大幅完善的软件栈提供的支撑。过去25年间,我们已从一家游戏GPU企业,发展成为如今的AI数据中心基础设施企业。
Our ability to innovate across the CPU, the GPU, networking, and software, and ultimately drive down cost per token, is unmatched across the industry. Our networking business, purpose-built for AI and now the largest in the world, generated revenue of 8.2 billion, up 162% year over year, with NVLink, InfiniBand, and Spectrum-X Ethernet all contributing to growth. We are winning in data center networking, as the majority of AI deployments now include our switches, with Ethernet GPU attach rates roughly on par with InfiniBand. Meta, Microsoft, Oracle, and xAI are building gigawatt AI factories with Spectrum-X Ethernet switches, and each will run its operating system of choice, highlighting the flexibility and openness of our platform. We recently introduced Spectrum-XGS, a scale-across technology that enables giga-scale AI factories. NVIDIA is the only company with AI scale-up, scale-out, and scale-across platforms, reinforcing our unique position in the market as the AI infrastructure provider.
译:我们在CPU、GPU、网络技术及软件领域的全方位创新能力,以及最终降低单位令牌成本的实力,在行业内无可匹敌。我们的网络业务专为AI打造,如今已是全球规模最大的网络业务:营收达82亿美元,同比增长162%,其中NVLink、InfiniBand及Spectrum-X以太网均为增长做出了贡献。我们在数据中心网络领域占据领先地位,目前大多数AI部署项目都采用了我们的交换机,以太网与GPU的搭配率已与InfiniBand基本持平。Meta、微软、甲骨文及xAI正采用Spectrum-X以太网交换机建设吉瓦级AI工厂,且每家企业都将运行其自主选择的操作系统,这充分体现了我们平台的灵活性与开放性。近期,我们推出了Spectrum-XGS技术,这是一项跨域扩展(scale-across)技术,可支撑超大规模AI工厂的建设。目前,我们是唯一一家同时拥有AI纵向扩展(scale-up)、横向扩展(scale-out)及跨域扩展(scale-across)平台的企业,这一优势进一步巩固了我们作为AI基础设施提供商的独特市场地位。
Customer interest in NVLink Fusion continues to grow. We announced a strategic collaboration with Fujitsu in October, where we will integrate Fujitsu CPUs and NVIDIA GPUs via NVLink Fusion, connecting our large ecosystems. We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products, connecting NVIDIA's and Intel's ecosystems using NVLink. This week at Supercomputing 25, Arm announced that it will be integrating NVLink IP for customers to build CPU SoCs that connect with NVIDIA. Now in its fifth generation, NVLink is the only proven scale-up technology available on the market today.
译:客户对NVLink Fusion(英伟达高速互联融合技术)的兴趣持续升温。我们于10月宣布与富士通达成战略合作,双方将通过NVLink Fusion整合富士通CPU与英伟达GPU,实现两大生态系统的互联互通。此外,我们还宣布与英特尔展开合作,计划联合研发多代定制化数据中心及个人电脑产品,借助NVLink打通英伟达与英特尔的生态系统。在本周举办的2025年超级计算大会(SC25)上,Arm宣布将集成NVLink IP,助力客户研发可与英伟达产品互联的CPU系统级芯片。NVLink目前已发展至第五代,是当今市场上唯一经过验证的纵向扩展技术。
In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time-to-train than Hopper, and NVIDIA swept every benchmark. Notably, NVIDIA is the only training platform to leverage NVFP4 while meeting MLPerf's strict accuracy standard. In SemiAnalysis's InferenceMAX benchmark, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell's NVLink performance on mixture-of-experts, the architecture for the world's most popular reasoning models: on DeepSeek-R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus H200, a huge generational leap fueled by our extreme co-design approach. NVIDIA Dynamo, an open-source low-latency inference framework, has now been adopted by every major cloud service provider. Leveraging Dynamo to enable disaggregated inference, and the resulting increase in performance of complex AI models such as MoE models, AWS, Google Cloud, Microsoft Azure, and OCI have boosted AI inference performance for enterprise cloud customers.
译:在最新的MLPerf训练基准测试结果中,Blackwell Ultra的训练速度较Hopper架构产品快5倍,英伟达在所有基准测试项目中均排名第一。尤其值得注意的是,英伟达是唯一能在采用NVFP4精度的同时,满足MLPerf严格精度标准的训练平台。在SemiAnalysis的InferenceMAX基准测试中,Blackwell架构在所有模型与应用场景下均实现了最高性能和最低总拥有成本。尤为重要的是Blackwell架构的NVLink在专家混合模型(MoE)上的表现:在DeepSeek-R1这一全球最主流的推理模型上,Blackwell的每瓦性能较H200提升10倍,单位令牌成本降低10倍。这一跨代际的巨大飞跃,得益于我们极致的协同设计(co-design)方法。英伟达Dynamo是一款开源低延迟推理框架,目前已被所有头部云服务提供商采用。借助Dynamo实现的解耦式推理,复杂AI模型(如MoE模型)的性能得到显著提升,亚马逊云科技(AWS)、谷歌云、微软Azure及甲骨文云基础设施(OCI)均借此为企业云客户提升了AI推理性能。
We are working on a strategic partnership with OpenAI focused on helping them build and deploy at least 10 GW of AI data centers. In addition, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners, Microsoft Azure, OCI, and CoreWeave, and will continue to do so for the foreseeable future as they continue to scale. We are delighted to support the company as it adds self-build infrastructure. We are working toward a definitive agreement and are excited to support OpenAI's growth.
译:我们正与OpenAI推进一项战略合作,核心目标是助力其建设并部署至少10吉瓦(GW)规模的AI数据中心。此外,我们还拥有投资OpenAI的机会。目前我们通过微软Azure、甲骨文云基础设施(OCI)及CoreWeave等OpenAI的云服务合作伙伴为其提供支持;在可预见的未来,随着OpenAI持续扩大业务规模,我们将继续通过这一方式为其提供助力。我们很高兴能支持OpenAI新增自建基础设施。目前双方正致力于达成一份最终协议,我们对能为OpenAI的发展提供支持感到振奋。
Yesterday, we celebrated an announcement with Anthropic. For the first time, Anthropic is adopting NVIDIA, and we are establishing a deep technology partnership to support Anthropic's fast growth. We will collaborate to optimize Anthropic models for CUDA and deliver the best possible performance, efficiency, and TCO. We will also optimize future NVIDIA architectures for Anthropic workloads. Anthropic's compute commitment initially includes up to 1 GW of compute capacity with Grace Blackwell and Vera Rubin systems. Our strategic investments in Anthropic, Mistral, OpenAI, Reflection, Thinking Machines, and others represent partnerships that grow the NVIDIA CUDA ecosystem and enable every model to run optimally on NVIDIA everywhere. We will continue to invest strategically while preserving our disciplined approach to cash flow management.
译:昨日,我们与Anthropic共同宣布了一项合作。这是Anthropic首次采用英伟达的技术平台,同时我们将建立深度技术合作关系,以支持Anthropic实现快速发展。双方将开展协作,针对CUDA优化Anthropic的模型,从而实现最佳的性能、效率与总拥有成本;我们还将针对Anthropic的工作负载优化未来的英伟达架构。初期的算力承诺包括:通过Grace Blackwell与Vera Rubin系统,为Anthropic提供最高1吉瓦的算力。我们对Anthropic、Mistral、OpenAI、Reflection、Thinking Machines等机构的战略投资,体现了我们的合作理念:这些合作将推动英伟达CUDA生态发展壮大,确保各类模型均能在英伟达全场景产品上实现最优运行。未来,我们将继续开展战略投资,同时始终坚持审慎的现金流管理策略。
Physical AI is already a multi-billion-dollar business addressing a multi-trillion-dollar opportunity, and the next leg of growth for NVIDIA. Leading manufacturers and robotics innovators are leveraging NVIDIA's three-computer architecture to train on NVIDIA, test on Omniverse computers, and deploy real-world AI on Jetson robotic computers. PTC and Siemens introduced new services that bring Omniverse-powered digital twin workflows to their extensive installed base of customers. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC, and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation. Agility Robotics, Amazon Robotics, Figure, and Skild AI are building on our platform, tapping offerings such as NVIDIA Cosmos world foundation models for development, Omniverse for simulation and validation, and Jetson to power next-generation intelligent robots. We remain focused on building resiliency and redundancy in our global supply chain. Last month, in partnership with TSMC, we celebrated the first Blackwell wafer produced in the U.S., and we will continue to work with Foxconn, Wistron, Amkor, SPIL, and others to grow our presence in the U.S.
译:物理AI业务规模已达数十亿美元,其背后蕴藏着数万亿美元的市场机遇,同时也是英伟达下一阶段的增长支柱。领先的制造企业与机器人技术创新企业正借助英伟达的"三计算机"架构推进技术落地:在英伟达平台上完成AI模型训练,在Omniverse计算平台上开展测试验证,最终在Jetson机器人计算平台上部署,实现物理世界的AI应用。PTC与西门子推出了全新服务,将由Omniverse驱动的数字孪生工作流引入其庞大的既有客户群体。包括百通(Belden)、卡特彼勒、富士康、Lucid Motors、丰田、台积电及纬创在内的企业,正搭建基于Omniverse的数字孪生工厂,以加速AI驱动的制造与自动化升级。Agility Robotics、亚马逊机器人、Figure及Skild AI等企业正基于我们的平台进行开发,充分利用英伟达的各类产品与服务,例如用于模型开发的英伟达Cosmos世界基础模型、用于仿真与验证的Omniverse平台,以及为下一代智能机器人提供算力支持的Jetson平台。我们始终致力于增强全球供应链的韧性与冗余能力。上个月,我们与台积电(TSMC)合作,见证了在美国本土生产的首片Blackwell晶圆下线;未来,我们将继续与富士康、纬创、Amkor、矽品(SPIL)等合作伙伴携手,扩大在美国的业务布局。
Gaming revenue was 4.3 billion, up 30% year on year, driven by strong demand as Blackwell momentum continued. End-market health remains robust, and channel inventories are at normal levels heading into the holiday season. Steam recently broke its concurrent-user record with 42 million gamers, while thousands of fans packed the GeForce Gamer Festival in South Korea to celebrate 25 years of GeForce.
译:游戏业务营收达43亿美元,同比增长30%。这一增长得益于强劲的市场需求,同时Blackwell架构的发展势头持续助推业务增长。终端市场整体保持健康态势,且临近假日季,渠道库存已处于正常水平。近期,Steam平台的同时在线用户数突破纪录,达4200万;此外,在韩国举办的"GeForce玩家嘉年华"活动中,数千名粉丝齐聚一堂,共同庆祝GeForce诞生25周年。
NVIDIA professional visualization has evolved into computers for engineers and developers, whether for graphics or for AI. Professional visualization revenue of 760 million, up 56% year over year, was another record. Growth was driven by DGX Spark, the world's smallest AI supercomputer, built on a scaled-down configuration of Grace Blackwell. Automotive revenue of 592 million, up 32% year over year, was primarily driven by self-driving solutions. We are partnering with Uber to scale the world's largest Level 4 autonomous fleet, built on the new NVIDIA DRIVE Hyperion Level 4-capable reference architecture. Moving to the rest of the P&L: GAAP gross margin was 73.4% and non-GAAP gross margin was 73.6%, exceeding our outlook. Gross margins increased sequentially due to our data center mix, improved cycle time, and cost structure. GAAP operating expenses were up 8% sequentially and up 11% on a non-GAAP basis. The growth was driven by infrastructure compute as well as higher compensation and benefits and engineering development costs. The non-GAAP effective tax rate for the third quarter was just over 17%, higher than our guidance of 16.5%, due to strong U.S. revenue. On our balance sheet, inventory grew 32% quarter over quarter, while supply commitments increased 63% sequentially. We are preparing for significant growth ahead and feel good about our ability to execute against our opportunity set.
译:英伟达专业可视化产品已发展成为面向工程师与开发者的计算工具,无论是用于图形处理还是AI领域。专业可视化业务营收达7.6亿美元,同比增长56%,再创历史新高。这一增长主要由DGX Spark推动,它是全球最小的AI超级计算机,基于精简配置的Grace Blackwell打造。汽车业务营收为5.92亿美元,同比增长32%,增长动力主要来自自动驾驶解决方案。我们正与优步(Uber)合作,基于全新的NVIDIA DRIVE Hyperion L4级参考架构,规模化部署全球最大的L4级自动驾驶车队。接下来看利润表其他项目:公认会计原则(GAAP)毛利率为73.4%,非公认会计原则(non-GAAP)毛利率为73.6%,均超出我们此前的预期。得益于数据中心业务的产品结构优化、生产周期缩短及成本结构改善,毛利率环比有所提升。GAAP运营费用环比增长8%,non-GAAP运营费用环比增长11%;费用增长主要源于基础设施计算相关投入增加,以及薪酬福利支出、工程研发成本的上升。受美国市场营收表现强劲影响,第三季度non-GAAP实际税率略高于17%,超出我们此前16.5%的指引。资产负债表方面,库存环比增长32%,供应承诺环比增长63%。我们正为未来的大幅增长做准备,且对把握当前市场机遇、实现业务目标的能力充满信心。
Okay, let me turn to the outlook for the fourth quarter. Total revenue is expected to be 65 billion, plus or minus 2%. At the midpoint, our outlook implies 14% sequential growth, driven by continued momentum in the Blackwell architecture. Consistent with last quarter, we are not assuming any data center compute revenue from China. GAAP and non-GAAP gross margins are expected to be 74.8% and 75%, respectively, plus or minus 50 basis points. Looking ahead to fiscal year 2027, input costs are on the rise, but we are working to hold gross margins in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately 6.7 billion and 5 billion, respectively.
译:接下来,我来谈谈第四季度的业绩展望。总营收预计为650亿美元,上下浮动2%。按预期中值计算,本季度营收环比增幅预计为14%,这一增长主要得益于Blackwell架构的持续增长势头。与上季度一致,我们在业绩预期中未纳入任何来自中国市场的数据中心计算业务营收。GAAP毛利率与non-GAAP毛利率预计分别为74.8%和75%,均允许上下浮动50个基点。展望2027财年,尽管投入成本呈上升趋势,但我们正努力将GAAP与non-GAAP毛利率维持在70%区间中段(mid-70s)。此外,第四季度GAAP运营费用与non-GAAP运营费用预计分别约为67亿美元和50亿美元。
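以下用一小段Python示意上述指引数字之间的算术关系(数字取自本节电话会内容,仅为示意性验算,并非官方披露口径):

```python
# 示意性计算:根据上文指引,验证第四季度营收区间与隐含环比增幅
q3_revenue = 57.0    # Q3 营收,单位:十亿美元
q4_midpoint = 65.0   # Q4 指引中值,单位:十亿美元
tolerance = 0.02     # 指引允许上下浮动 2%

q4_low = q4_midpoint * (1 - tolerance)            # 指引下限
q4_high = q4_midpoint * (1 + tolerance)           # 指引上限
sequential_growth = q4_midpoint / q3_revenue - 1  # 隐含环比增幅

print(f"Q4 指引区间:{q4_low:.1f}B 至 {q4_high:.1f}B 美元")
print(f"隐含环比增幅:{sequential_growth:.1%}")
```

按中值计算,65 ÷ 57 − 1 ≈ 14%,与电话会所述"环比增长14%"一致。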
GAAP and non-GAAP other income and expenses are expected to be an income of approximately 500 million, excluding gains and losses from non-marketable and publicly-held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. At this time, let me turn the call over to Jensen for him to say a few words.
译:GAAP与non-GAAP下的其他收支预计为约5亿美元的净收入,此数据不包含非流通股权证券及公开持有股权证券所产生的损益。GAAP与non-GAAP税率预计为17%,上下浮动1%,此预估不包含任何离散项目。接下来,我将把时间交给Jensen(黄仁勋),由他来讲几句。
Thanks, Colette. There's been a lot of talk about an AI bubble. From our vantage point, we see something very different. As a reminder, NVIDIA is unlike any other accelerator: we excel at every phase of AI, from pre-training and post-training to inference. And with our two-decade investment in CUDA-X acceleration libraries, we are also exceptional at science and engineering simulations, computer graphics, structured data processing, and classical machine learning.
译:谢谢科莱特(Colette)。目前有很多关于AI泡沫的讨论。但从我们的视角来看,我们看到的情况与这种说法截然不同。需要说明的是,英伟达与其他任何加速器企业都不同,我们在人工智能的每个阶段都表现出色,无论是预训练、训练后优化还是推理。此外,凭借我们在CUDA加速库领域长达二十年的投入,我们在科学与工程仿真、计算机图形学、结构化数据处理以及传统机器学习等领域也具备独特优势。
The world is undergoing three massive platform shifts at once, for the first time since the dawn of Moore's Law. NVIDIA is uniquely addressing each of the three transformations. The first transition is from CPU general-purpose computing to GPU accelerated computing.
译:自摩尔定律诞生以来,全球首次同时经历三场大规模的平台变革。英伟达正以独特的方式应对这三项变革,其中第一项变革便是从CPU通用计算向GPU加速计算的转型。
As Moore's Law slows, the world has a massive investment in non-AI software, from data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year. Many of these applications, which once ran exclusively on CPUs, are now rapidly shifting to CUDA GPUs. Accelerated computing has reached a tipping point. Second, AI has also reached a tipping point, transforming existing applications while enabling entirely new ones. For existing applications, generative AI is replacing classical machine learning in search ranking, recommender systems, ad targeting, click-through prediction, and content moderation, the very foundations of hyperscale infrastructure. Meta's GEM, a foundation model for ad recommendations trained on large-scale GPU clusters, exemplifies this shift: in Q2, Meta reported over a 5% increase in ad conversions on Instagram and a 3% gain on Facebook feed, driven by GEM. Transitioning to generative AI represents substantial revenue gains for hyperscalers.
译:随着摩尔定律增速放缓,全球在非AI软件领域投入巨大,从数据处理到科学与工程仿真,这类软件每年在云计算领域的支出高达数千亿美元。过去,许多此类应用完全依赖CPU运行,如今正迅速转向CUDA GPU,加速计算已迎来临界点。其次,AI也已抵达临界点,它不仅在改造现有应用,还在催生全新应用。在现有应用场景中,生成式AI正逐步取代传统机器学习,无论是搜索排名、推荐系统、广告定向、点击率预测,还是内容审核,而这些正是超大规模基础设施的根基。Meta的广告推荐基础模型GEM通过大规模GPU集群训练而成,正是这一变革的典型体现:Meta在Q2财报中表示,受GEM推动,Instagram平台的广告转化率提升了5%以上,Facebook信息流的广告转化率提升了3%。对超大规模科技企业而言,向生成式AI转型意味着可观的营收增长。
Now a new wave is rising: agentic AI systems capable of reasoning, planning, and using tools, from coding assistants like Cursor and Claude Code, to radiology tools like Aidoc, legal assistants like Harvey, and AI chauffeurs like Tesla FSD and Waymo. These systems mark the next frontier of computing. The fastest-growing companies in the world today, OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, and Tesla, are pioneering agentic AI. So there are three massive platform shifts. The transition to accelerated computing is foundational and essential in a post-Moore's-Law era. The transition to generative AI is transformational and necessary, supercharging existing applications and business models. And the transition to agentic and physical AI will be revolutionary, giving rise to new applications, companies, products, and services. As you consider infrastructure investments, consider these three fundamental dynamics. Each will contribute to infrastructure growth in the coming years. NVIDIA is chosen because our singular architecture enables all three transitions, and thus serves any form and modality of AI, across all industries, across every phase of AI, across all of the diverse computing needs in a cloud, and from cloud to enterprise to robots: one architecture.
译:如今,一股新浪潮正在兴起,那便是具备推理、规划能力并能使用工具的智能体AI系统。从Cursor、Claude Code这类编程辅助工具,到Aidoc这类放射医学工具,再到Harvey这类法律辅助工具,以及特斯拉全自动驾驶(Tesla FSD)、Waymo这类AI驾驶系统,均属于智能体AI的范畴。这些系统标志着计算领域的下一个前沿方向。当今全球增长最快的企业,如OpenAI、Anthropic、xAI、谷歌、Cursor、Lovable、Replit、Cognition AI、OpenEvidence、Abridge、特斯拉等,都在引领智能体AI的创新。综上,当前正发生三场大规模平台变革:向加速计算转型,这在后摩尔定律时代具有基础性,是不可或缺的变革;向生成式AI转型,这一变革具备颠覆性与必要性,能为现有应用及商业模式注入强大动力;向智能体AI与物理AI(Physical AI)转型,这将是革命性的变革,有望催生出全新的应用、企业、产品与服务。当您考虑基础设施投资时,不妨关注这三大核心动态,未来数年,每一项动态都将推动基础设施领域的增长。英伟达之所以被选择,是因为我们独特的架构能够同时支持这三项转型,从而满足各行各业、AI全流程的各类AI形态与模式需求,同时覆盖云、企业、机器人等所有场景的多样化计算需求:一套架构,全面赋能。
投资者提问
Great, thank you. I wonder if you could update us. You talked about the 500 billion of revenue for Blackwell plus Rubin in '25 and '26, and at the time you talked about 150 billion of that already having been shipped. So as the quarter has wrapped up, are those still kind of the general parameters, that there's 350 billion in the next, you know, 14 months or so? And I would assume over that time you haven't seen all the demand that there is. Is there any possibility of upside to those numbers as we move forward?
译:非常好,谢谢。想请教您能否更新一下相关情况?您之前提到,2025年和2026年Blackwell加Rubin的营收目标为5000亿美元,当时还提到其中1500亿美元的产品已完成出货。如今本季度已结束,之前提到的这些大致参数是否仍然适用?比如未来14个月左右是否仍有3500亿美元的营收空间?而且,我想在这段时间里,您应该还未见到全部的市场需求。随着后续推进,这些营收数字是否存在上调的可能性?
Yeah, thanks, Joe. I'll start first with a response on that. Yes, that's correct: we are working toward our 500 billion forecast and we are on track for that. We have finished some of the quarters, and we now have several quarters in front of us to take us through the end of calendar year '26. The number will grow, and I'm sure we will see additional needs for compute that will be shippable by fiscal year '26. We shipped 50 billion this quarter, but we would not be finished if we didn't say that we'll probably be taking more orders. For example, just today, our announcement with KSA, and that agreement in itself is 400,000 to 600,000 more GPUs over three years. Anthropic is also net new. So there's definitely an opportunity for us to have more on top of the 500 billion that we announced.
译:好的,谢谢,Joe。我先就这个问题作答。没错,情况是这样的:我们正朝着5000亿美元的预期目标推进,目前进展符合计划。部分季度已经结束,现在我们面前还有几个季度的时间,直至2026日历年年底。这个数字还会增长,而且我确信,我们还会看到更多可在2026财年交付的算力需求。本季度我们已出货500亿美元,但有一点必须说明:我们的订单承接尚未结束,后续很可能还会获得更多订单。比如,仅在今天,我们就宣布了与沙特阿拉伯(KSA)的合作,该协议本身就涉及未来三年内额外交付40万至60万块GPU;此外,Anthropic也属于净新增需求。因此,在我们此前宣布的5000亿美元目标基础上,我们绝对有机会实现更高的业绩。
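上述问答中的数字(5000亿美元的总可见营收、分析师引用的已出货约1500亿美元)之间的剩余规模,可用一段简单的Python验算示意(仅为示意,数字均取自上文问答,非官方口径):

```python
# 示意性计算:Blackwell + Rubin 营收可见性的剩余待交付规模
total_visibility = 500.0  # 至 2026 日历年末的总可见营收,单位:十亿美元
shipped_so_far = 150.0    # 分析师提问中引用的已出货金额,单位:十亿美元

remaining = total_visibility - shipped_so_far
print(f"剩余待交付规模:约 {remaining:.0f}B 美元")
```

即 500 − 150 = 350(十亿美元),与提问中"未来约14个月仍有3500亿美元"的说法相吻合。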
Yes, good afternoon. Thank you for taking the question. There's clearly a great deal of consternation around the magnitude of AI infrastructure build-outs and the ability to fund them. At the same time, you're sold out, every GB300 is taken. The AI world hasn't seen the enormous benefit yet from GB300, never mind Rubin. And Gemini 3 was just announced, with Grok 5 coming soon. And so the question is this: when you look at that as the backdrop, do you see a realistic path for supply to catch up with demand over the next 12 to 18 months, or do you think it can extend beyond that time frame?
译:下午好。感谢您抽时间提问。目前,市场对AI基础设施建设的规模、资金筹措能力显然存在诸多担忧,而且,正如您所知,当下市场处于 “一货难求” 的状态;此外,AI领域尚未充分享受到GB300带来的巨大效益,更不用说Rubin了;而且Gemini 3刚发布不久,Grok 5也即将推出。因此,我的问题是:在这样的背景下,您认为未来12至18个月内,供应链有切实可行的路径追上需求吗?还是说,供需缺口可能会持续超过这个时间段?
Well, as you know, we've done a really good job planning our supply chain. Nvidia's supply chain basically includes every technology company in the world, and TSMC and their packaging, and our memory vendors and memory partners, and all of our system ODMs have done a really good job planning with us. And we were planning for a big year.
译:嗯,正如您所知,我们在供应链规划方面做得非常出色。英伟达的供应链本质上涵盖了全球所有相关科技企业:台积电及其封装业务、我们的存储供应商与存储合作伙伴,以及所有系统原始设计制造商,都与我们配合得十分默契。而且,我们已为业绩大幅增长的一年做好了规划。
You know, we've seen for some time the three transitions that I spoke about just a second ago, accelerated computing from general purpose computing. And it's really important to recognize that AI is not just agentic AI; generative AI is transforming the way that hyperscalers did the work that they used to do on CPUs. Generative AI made it possible for them to move search and recommender systems, you know, ad recommendations and targeting. All of that has been moved to generative AI and is still transitioning. And so whether you install Nvidia GPUs for data processing, or you did it for generative AI for your recommender system, or you're building it for agentic chatbots and the type of AIs that most people see when they think about AI, all of those applications are accelerated by Nvidia. And so when you look at the totality of the spend, it's really important to think about each one of those layers. They're all growing, they're related, but not the same. But the wonderful thing is that they all run on Nvidia GPUs simultaneously, because the quality of the AI models is improving so incredibly.
译:要知道,一段时间以来,我刚才提到的三大转型趋势其实已经显现,也就是从通用计算向加速计算的转型。有一点必须明确,人工智能(AI)并非只有智能体人工智能这一种形态,生成式AI同样在改变超大规模科技企业过去依赖CPU开展工作的方式。正是生成式AI的出现,让这些企业得以将搜索业务、推荐系统、广告推荐与定向投放迁移到新的技术架构上。目前,这类迁移工作仍在进行中,且全都基于生成式AI技术。所以,无论你部署英伟达GPU是为了数据处理,是为了给推荐系统搭载生成式AI功能,还是为了开发智能体聊天机器人,所有这些应用的运行效率都能通过英伟达GPU得到提升。因此,在看待整体投入规模时,关键是要考虑到上述每一个技术层面,它们都在增长,相互关联但又各不相同。不过,很棒的一点是,这些应用都能同时在英伟达GPU上运行;这背后的原因,正是AI模型的质量正在以惊人的速度不断提升。
The adoption of it is growing in different use cases, whether it's in code assistance, which Nvidia uses fairly exhaustively, and we're not the only one. I mean, the fastest growing applications in history, the combination of Cursor and Claude Code and OpenAI's Codex and GitHub Copilot, these applications are the fastest growing in history. And it's not just used by software engineers. Because of vibe coding, it's used by engineers and marketeers all over companies, supply chain planners all over companies. And so I think that that's just one example.
译:它在不同应用场景中的采用率正在提升,以代码辅助场景为例,英伟达自身对其的应用已相当广泛,而且并非只有我们在这样做。要知道,代码辅助领域的应用是史上增长最快的一类应用,无论是Cursor、Claude Code、OpenAI的Codex,还是GitHub Copilot,这类工具的增长速度都创下了历史纪录。而且,这类代码辅助工具的使用者并非只有软件工程师。得益于 “氛围编程”(vibe coding)的兴起,企业内的工程师、营销人员,乃至供应链规划人员,都会用到这类工具。因此,我认为这只是其中一个例证。
And the list goes on, you know, whether it's OpenEvidence and the work that they do in health care, or the work that's being done in digital video with Runway. And I mean, the number of really, really exciting startups that are taking advantage of generative AI and agentic AI is growing quite rapidly. And not to mention, we're all using it a lot more. And so all of these exponentials. Not to mention, just today I was reading a text from Demis, and he was saying that pre-training and post-training scaling are fully intact, and that Gemini 3 takes advantage of the scaling laws and received a huge jump in quality and model performance. And so we're seeing all of these exponentials running at the same time. And I just always go back to first principles and think about what's happening in each one of the dynamics that I mentioned before: general purpose computing to accelerated computing, generative AI replacing classical machine learning, and of course agentic AI, which is a brand new category.
译:类似的例子还有很多,比如OpenEvidence在医疗健康领域开展的工作,或是Runway在数字视频领域的探索。要知道,目前有大量令人兴奋的初创企业正在借助生成式AI和智能体AI发展业务,这类企业的数量正快速增长。更不用说,我们大家对这些技术的使用频率也在大幅提升。因此,所有这些领域都在呈现指数级增长。此外,就在今天,我还看到了来自Demis(哈萨比斯)的信息,其中提到,预训练与后训练的缩放定律依然完全有效,且Gemini 3充分利用了缩放定律,最终在模型质量和性能上实现了巨大飞跃。所以,我们正目睹所有这些领域同时呈现指数级增长。而我们始终要回归第一性原理,从之前提到的每一个动态趋势去思考当前正在发生的变化:从通用计算向加速计算的转型、生成式AI对传统机器学习的替代,当然还有作为全新类别的智能体AI。
Thanks for taking my question. I'm curious, what assumptions are you making on Nvidia content per gigawatt in that 500 billion number? Because we have heard numbers as low as 25 billion per gigawatt and as high as 30 or 40 billion per gigawatt. So I'm curious what power and what dollar assumptions in particular you are making as part of that 500 billion number. And then longer term, Jensen, the 3 to 4 trillion in data center by 2030 was mentioned. How much of that do you think will require vendor financing, and how much of that can be supported by the cash flows of your large customers or governments or enterprises?
译:谢谢,这是我的问题。我很好奇,在5000亿美元这个营收目标中,你们对每GW对应的英伟达产品营收做了怎样的假设?因为我们听到过不同的估算数据,低至每GW250亿美元,高至每GW300亿或400亿美元,所以我想了解,在 5000亿美元这个目标中,你们具体基于怎样的功率对应营收假设来测算?另外,长期来看,之前提到截至2030年数据中心领域投入将达到3万亿至4万亿美元。你们认为其中有多少比例需要依赖供应商融资,又有多少比例可以通过大型客户、政府或企业自身的现金流来支撑?
Thank you. In each generation, from Ampere to Hopper, from Hopper to Blackwell, Blackwell to Rubin, our part of the data center increases. The Hopper generation was probably something along the lines of 20-something billion per gigawatt. The Blackwell generation, Grace Blackwell particularly, is probably, you know, say 30 plus or minus. And then Rubin is probably higher than that. And in each one of these generations, the speed-up is X factors, and therefore the customer TCO improves by X factors. And the most important thing is, in the end, you still only have one gigawatt of power, you know, a 1-gigawatt data center has 1 GW of power, and therefore performance per watt.
译:谢谢。在每一代产品迭代中,从Ampere到Hopper,从Hopper到Blackwell,再到Rubin,我们在数据中心整体投入中所占的份额都在提升。Hopper一代大概是每GW对应200多亿美元;到了Blackwell一代,尤其是Grace Blackwell,每GW对应的金额可能在300亿美元左右,上下浮动一些;而Rubin一代,这一数字或许会更高。并且在每一代迭代中,性能都会实现数倍的提升,因此客户的总拥有成本(TCO)也会随之实现数倍的优化。最重要的一点是,归根结底,客户仍然只有1GW的电力,也就是说,一个1GW规模的数据中心就只有1GW的电力,因此关键在于每瓦性能。
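结合上文口径,可以把 “每GW对应的英伟达产品价值” 与 “电力固定时收入取决于每瓦性能” 的逻辑写成一个小示例。其中的金额取自上文口述的大致区间,每瓦性能提升倍数为假设值,均非官方指引:

```python
# 每 GW 数据中心对应的英伟达产品价值(单位:十亿美元),取自上文口述的粗略数字
content_per_gw = {
    "Hopper": 20,           # "200 多亿美元"
    "Grace Blackwell": 30,  # "300 亿美元上下"
}

power_gw = 1.0  # 电力预算固定:一个 1 GW 的数据中心只有 1 GW 电力
for arch, value in content_per_gw.items():
    print(f"{arch}: 1 GW 约对应 {value * power_gw:.0f}B 美元产品价值")

# 在电力固定的前提下,token 产出(进而收入)与每瓦性能成正比
perf_per_watt_gain = 2.0  # 假设新一代架构每瓦性能提升 2 倍(示意值)
print(f"同样 1 GW 电力,产出约提升 {perf_per_watt_gain:.0f} 倍")
```

这也呼应了下文的观点:在全球电力预算有限的情况下,每瓦性能会直接转化为收入。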
The efficiency of your architecture is incredibly important, and the efficiency of your architecture can't be brute-forced. There is no brute-forcing it. That 1 GW translates directly, your performance per watt translates directly, absolutely directly, to revenues, which is the reason why choosing the right architecture matters so much.
译:架构的效率至关重要,而且架构的效率无法通过 “蛮力” 来获得,这件事根本没有 “蛮力解决” 的余地。那1GW的电力会直接转化为产出,你的每瓦性能会直接、绝对直接地转化为收入。这正是选择合适架构至关重要的原因所在。
Now, you know, the world doesn't have an excess of anything to squander. And so we have to be really, really thoughtful. We use this concept called co-design across our entire stack, across the frameworks and models, across the entire data center, even power and cooling, optimized across the entire supply chain in our ecosystem. And so each generation, our economic contribution will be greater, our value delivery will be greater. But the most important thing is that our energy efficiency per watt is going to be extraordinary.
译:要知道,如今全球各类资源都十分宝贵,没有任何东西可以随意浪费。因此,我们必须真的必须践行 “协同设计” 这一理念,而且要贯穿我们的整个技术栈:涵盖框架与模型、整个数据中心,甚至包括电力与冷却系统的优化,同时还要延伸到我们生态系统中的整个供应链,所以,每一代产品迭代中,我们所创造的经济价值都会更高,所交付的价值也会更大。但最重要的一点是,我们产品的单位能效将达到极高水平。
With respect to our customers continuing to grow generation after generation, their financing is up to them. We see the opportunity to grow for quite some time, and remember that today most of the focus has been on the hyperscalers.
译:在每一代产品助力客户持续发展的过程中,客户的融资事宜取决于他们自身。我们认为未来相当长一段时间内都存在增长机遇,而且要记住,目前行业的关注点大多还集中在超大规模科技企业身上。
And one of the areas that is really misunderstood about hyperscalers is that the investment in Nvidia GPUs not only improves their scale, speed and cost relative to general purpose computing. That's number one, because Moore's Law scaling has really slowed. Moore's Law is about driving down cost, the incredible deflationary cost of computing over time. But that has slowed. Therefore, a new approach is necessary for them to keep driving down cost, and moving to Nvidia GPU computing is really the best way to do so.
译:目前存在一个普遍的认知误区,即人们未能充分理解投资英伟达GPU不仅能从通用计算层面提升企业的规模、速度并降低成本,这是首要优势。背后的核心原因在于,摩尔定律的演进速度已大幅放缓。要知道,摩尔定律的本质是推动计算成本下降,实现长期的计算成本 “通缩效应”,但如今这种成本下降的趋势已明显减缓。因此,企业要继续推动计算成本降低,就必须采用新的技术路径,而转向英伟达 GPU 计算无疑是实现这一目标的最佳方式。
The second is revenue boosting in their current business models. You know, recommender systems drive the world's hyperscalers, whether it's watching short-form videos, or recommending books, or recommending the next item in your basket, recommending ads, recommending news. It's all about recommenders. The internet has trillions of pieces of content. How could they possibly figure out what to put in front of you on your little tiny screen unless they have really sophisticated recommender systems to do so? Well, that has gone generative AI.
译:第二点是能提升其现有商业模式下的营收。要知道,推荐系统是全球超大规模科技企业的核心驱动力,无论是你观看短视频、平台推荐书籍、推荐购物车中的下一件商品,还是推荐新闻,背后都离不开推荐系统。如今的互联网上有着数万亿条内容,若没有极为复杂精密的推荐系统,平台又怎能精准判断该在你小小的屏幕上呈现哪些内容呢?而如今,这类推荐系统已全面转向生成式人工智能技术。
So for the first two things that I just said, hundreds of billions of dollars of CapEx is going to have to be invested, and it's fully cash-flow funded. What is above it, therefore, is agentic AI. This revenue is net new, net new consumption, but it's also net new applications, some of which I mentioned before. And these new applications are also the fastest growing applications in history. Okay? So I think you're going to see that once people start to appreciate what is actually happening under the water, if you will, beyond the simplistic view of what's happening to CapEx investment, recognizing there are these three dynamics.
译:所以,关于我刚才提到的前两点,企业必须投入数千亿美元的资本支出,而这些投入完全由现金流支撑。那么,在此之上的增长点是什么呢?答案便是智能体人工智能。智能体人工智能所带来的收入是全新的,不仅意味着全新的消费需求,还催生了全新的应用场景。其中一些应用场景我之前已经提过,但这些新应用同时也是史上增长速度最快的一类应用,对吧?因此,我认为,一旦人们不再局限于对资本支出投资的表面认知,而是开始深入理解其背后实际发生的变化,不妨这么说,并认识到存在上述三大动态趋势,就能看清未来的发展方向。
And then lastly, remember we were just talking about the American CSPs. Each country will fund their own infrastructure, and you have multiple countries, you have multiple industries. Most of the world's industries haven't really engaged agentic AI yet, and they're about to. Think of all the companies that we're working with, whether it's autonomous vehicle companies, or digital twins for physical AI for factories, the number of factories and warehouses being built around the world, or just the number of digital biology startups being funded so that we can accelerate drug discovery. All of those different industries are now getting engaged, and they're going to do their own fundraising. So don't just look at the hyperscalers as the way to build out for the future. You've got to look at the world, you've got to look at all the different industries, and enterprise computing is going to fund their own industry.
译:最后,别忘了我们刚才谈到的美国云服务提供商,事实上,每个国家都会为自身的基础设施提供资金支持。而且,现在有众多国家参与其中,涉及的行业也十分广泛。目前,全球大多数行业尚未真正涉足智能体人工智能领域,但它们很快就会行动起来。比如我们正在合作的各类企业,无论是自动驾驶公司,还是为实体工厂开发物理人工智能数字孪生技术的企业;再看全球范围内正在新建的工厂与仓库数量,以及获得投资的数字生物初创企业数量,都能印证这一趋势。如今,所有这些不同领域的行业都开始涉足AI,它们也将各自开展融资。因此,看待未来基础设施建设布局时,不要只把目光放在超大规模科技企业身上,而应着眼于全球范围,关注所有不同的行业,企业级计算领域的相关企业,都将为自身所在行业的投入提供资金支持。
Hey, thanks a lot Jensen. I wanted to ask you about cash. Speaking of half a trillion, you may generate about half a trillion in free cash flow over the next couple of years, what are your plans for that cash? How much goes to buyback versus investing in the ecosystem? And how do you look at investing in the ecosystem? I think there's just a lot of confusion out there about how these deals work and your criteria for doing those, like the Anthropic, the OpenAIs, etc..
译:嘿,非常感谢Jensen。我想请教关于现金的问题。说到五千亿:未来几年你们可能会产生约五千亿美元的自由现金流,对这笔现金有何规划?有多少会用于回购,有多少会用于生态系统投资?你们又如何看待对生态系统的投资?我认为,市场对这些交易如何运作,以及你们做这些交易的标准(比如投资Anthropic、OpenAI等),还存在很多困惑。
Thanks a lot, yeah, appreciate the question. Of course, using cash to fund our growth, no company has grown at the scale that we're talking about and have the connection and the depth and the breadth of supply chain that Nvidia has.
译:非常感谢,感谢您的提问。当然,首先是用现金支持我们自身的增长。没有哪家公司曾以我们所谈论的这种规模增长,同时还拥有英伟达这样兼具深度与广度、联系紧密的供应链。
The reason why our entire customer base can rely on us is because we've secured a really, really resilient supply chain, and we have the balance sheet to support them. When we make purchases, our suppliers can take it to the bank. When we make forecasts and we plan with them, they take us seriously because of our balance sheet. We're not making up the offtake; we know what our offtake is. And because they've been working with us for so many years, our reputation and our credibility are incredible. And so it takes a really strong balance sheet to do that, to support the level of growth and the rate of growth and the magnitude associated with that. So that's number one.
译:我们的整个客户群之所以可以依赖我们,是因为我们已经构建了一条极具韧性的供应链,并且我们有足够强的资产负债表来支撑。当我们下采购订单时,供应商可以放心地把订单当作实打实的生意;当我们做出预测并与他们共同规划时,正因为我们的资产负债表,他们会认真对待我们。我们不是在凭空编造采购量,我们清楚自己的实际需求。而且他们已经与我们合作了这么多年,我们的声誉和信誉是无可挑剔的。因此,要做到这一点,支撑这样的增长水平、增长速度及相应规模,需要非常强大的资产负债表。这是第一点。
The second thing, of course, is that we're going to continue to do stock buybacks. We're going to continue to do that. But with respect to the investments, this is really, really important work that we do. All of the investments that we've done so far have to do with expanding the reach of CUDA and the ecosystem.
译:第二件事,当然,我们将继续进行股票回购,这一点不会改变。但就投资而言,这是我们所做的非常重要的工作:迄今为止我们所做的所有投资,都与扩大CUDA及其生态系统的覆盖范围有关。
If you look at the investments that we did with OpenAI, of course, that's a relationship we've had since 2016. I delivered the first AI supercomputer we ever made to OpenAI. And so we've had a close and wonderful relationship with OpenAI since then, and everything that OpenAI does runs on Nvidia today. So all the clouds that they deploy in, whether it's training or inference, run Nvidia, and we love working with them. The partnership that we have with them is one where we can work even deeper from a technical perspective, so that we can support their accelerated growth.
译:如果你看看我们对OpenAI所做的投资,当然,这段关系自2016年就已建立,当年我把我们制造的第一台AI超级计算机交付给了OpenAI。因此,从那时起,我们与OpenAI一直保持着密切而美好的关系,如今OpenAI所做的一切都运行在英伟达平台上。他们部署的所有云端,无论是训练还是推理,都运行英伟达的产品,我们很喜欢与他们合作。我们与他们建立合作伙伴关系,正是为了能从技术角度更深入地协作,从而支持他们的加速增长。
This is a company that's growing incredibly fast. And don't just look at what is said in the press. Look at all the ecosystem partners and all the developers that are connected to OpenAI; they're all driving consumption of it, and the quality of the AI that's being produced is a huge step up since a year ago. The quality of response is extraordinary. So we invest in OpenAI for a deep partnership in co-development, to expand our ecosystem and support their growth. And of course, rather than giving up a share of our company, we get a share of their company. We invested in one of the most consequential, once-in-a-generation companies, a once-in-a-generation company that we have a share of. And so I fully expect that investment to translate to extraordinary returns.
译:这家公司(指 OpenAI)正以惊人的速度增长。看待它时,不要只关注媒体报道的内容,而应着眼于其所有生态系统合作伙伴,以及与OpenAI相关联的所有开发者,正是这些伙伴与开发者,共同推动着OpenAI技术的使用需求增长,也推动着其产出的人工智能质量不断提升。与一年前相比,OpenAI的进步可谓巨大,如今其的响应质量已达到极高水平。因此,我们投资 OpenAI,是为了建立深度合作关系,通过联合开发拓展我们的生态系统,并支持OpenAI的发展。当然,我们并非以出让自家公司股份为代价,相反,我们获得了OpenAI的股份。我们投资的是一家极具影响力、堪称 “一代一遇” 的公司,而且我们持有这家 “一代一遇” 企业的股份。基于此,我完全有理由预期,这项投资将为我们带来极高的回报。
Now, in the case of Anthropic, this is the first time that Anthropic will be on Nvidia's architecture. Anthropic is the second most successful AI in the world in terms of total number of users. And in enterprise, they're doing incredibly well. Claude Code is doing incredibly well; Claude is doing incredibly well with all of the world's enterprises. And now we have the opportunity to have a deep partnership with them and bring Claude onto the Nvidia platform. And so what do we have now?
译:目前,就 Anthropic而言,这是其技术首次适配英伟达的架构。Anthropic旗下各类模型在全球用户总量上位居第二,是最成功的AI系统之一。而在企业级市场,他们的表现极为亮眼——Claude Code反响出众,Claude(主模型)也广受全球企业客户青睐。如今我们获得了与他们深度合作的机会,将把Claude引入英伟达平台。那么,我们现在具备了哪些优势?
Taking a step back: Nvidia's architecture, Nvidia's platform, is the singular platform in the world that runs every AI model.
译:先退一步说,谈及英伟达架构 —— 英伟达平台是全球唯一能运行所有 AI 模型的平台。
We run OpenAI, we run Anthropic, we run xAI. Because of our deep partnership with Elon and xAI, we were able to bring that opportunity to Saudi Arabia, to the KSA, so that Humain could also be a hosting opportunity for xAI. We run Gemini, we run Thinking Machines. Let's see, what else do we run? We run them all. And not to mention, we run the science models, the biology models, DNA models, gene models, chemical models, in all the different fields around the world.
译:我们支持运行OpenAI的模型、Anthropic的模型、xAI的模型。借助与埃隆及xAI的深度合作,我们已将相关合作机会拓展至沙特阿拉伯(KSA),让Humain也能成为xAI的托管合作方。我们还支持运行Gemini(谷歌AI模型)、Thinking Machines的模型。要说还有什么?我们能支持运行所有AI模型。更不用说,我们还支持运行各类科学模型:生物学模型、DNA模型、基因模型、化学模型,以及全球各个不同领域的专业模型。
It's not just cognitive AI that the world uses. AI is impacting every single industry. And so, through the ecosystem investments that we make, we have the ability to partner deeply on a technical basis with some of the best, most brilliant companies in the world. We're expanding the reach of our ecosystem, and we're getting a share, through investment, in what will be a very successful company, oftentimes a once-in-a-generation company. And so that's our investment thesis.
译:世界所应用的并非仅有认知智能(Cognitive AI)。人工智能正在影响每一个行业——因此,凭借我们在生态系统层面的投资布局,我们有能力与全球范围内部分最优秀、最顶尖的企业建立合作,开展深度技术协作。我们正在拓展生态系统的覆盖边界,同时通过投资分享这些极具发展潜力企业的成长红利——它们往往是那种 “一代一遇” 的标杆企业。这正是我们的投资逻辑。
Good afternoon, thanks for taking my question. In the past, you've talked about roughly 40% of your shipments being tied to AI inference. I'm wondering, as you look forward into next year, where do you expect that percentage could go in, say, a year's time? And maybe can you address the Rubin CPX product you expect to introduce next year and contextualize that: how big of the overall TAM do you expect it can take, and maybe talk about some of the target customer applications for that specific product?
译:下午好,感谢您抽时间解答我的问题。过去您曾提到,贵公司约40%的出货量与AI推理相关。我想了解,展望明年,您预计这一比例可能会达到多少?另外能否介绍一下贵公司计划明年推出的Rubin CPX产品,您对它有何定位?预计这款产品能占据多大的整体潜在市场规模?还有,能否谈谈这款特定产品的目标客户应用场景?
Thank you. CPX is designed for long-context types of workload generation. Long context basically means that before you start generating answers, you have to read a lot. It could be watching a bunch of videos, studying 3D images, so on and so forth; you have to absorb the context. And so CPX is designed for long-context types of workloads, and dollar for dollar, its perf is excellent... which made me forget the first part
译:谢谢。CPX这款产品是为长上下文类型的工作负载生成而设计的。所谓长上下文,简单来说,就是在开始生成答案之前,需要先处理大量信息。这些信息形式多样:可能是观看大量视频、研究3D图像等等,本质上都是需要先充分吸收上下文信息的场景。而CPX正是针对这类长上下文工作负载打造的,其性价比非常出色,性能表现也极为优异。不过说到这儿,我把问题的前半部分给忘了。
of the question. Oh, inference, yeah.
译:问题的前半部分。哦,是推理的问题,对。
There are three scaling laws that are scaling at the same time. The first scaling law, called pre-training, continues to be very effective. The second is post-training. Post-training basically has found incredible algorithms for improving an AI's ability to break a problem down and solve it step by step, and post-training is scaling exponentially. Basically, the more compute you apply to a model, the smarter it is, the more intelligent it is. And the third is inference. Because of chain of thought, because of reasoning capabilities, AIs are essentially reading and thinking before they answer, and the amount of computation necessary as a result of those three things has gone completely exponential.
译:目前有三条规模扩展定律正在同步发挥作用。第一条是 “预训练”规模扩展定律,其效用至今依然十分显著;第二条是 “训练后优化”规模扩展定律——训练后优化领域已研发出极具突破性的算法,这些算法能显著提升人工智能拆解问题、逐步求解的能力,而且训练后优化的规模正呈指数级增长。从本质上讲,投入到模型中的算力越多,模型的智能水平就越高、处理能力就越强。第三条则是 “推理”规模扩展定律。在推理层面,得益于 “思维链”技术和推理能力的提升,人工智能在给出答案前,实际上会先进行 “阅读” 与 “思考”。受上述三方面因素共同影响,当前所需的计算量已完全呈现指数级增长态势。
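上述第三条定律(推理)为何导致算力需求激增,可以用一个玩具算式说明。其中 “FLOPs ≈ 2 × 参数量 × 生成token数” 是业界常用的粗略估算,模型规模与token数均为本示例的假设值,并非文中披露的数据:

```python
# 玩具示例:思维链(chain-of-thought)推理如何放大推理阶段的算力需求
params = 70e9            # 假设模型参数量为 70B
answer_tokens = 200      # 直接作答时生成的 token 数(假设值)
reasoning_tokens = 4000  # "先思考再作答"额外生成的推理 token 数(假设值)

flops_direct = 2 * params * answer_tokens                      # 直接作答的算力
flops_cot = 2 * params * (reasoning_tokens + answer_tokens)    # 带思维链的算力
print(f"引入思维链后,推理算力约为原来的 {flops_cot / flops_direct:.0f} 倍")  # 21 倍
```

模型在作答前 “阅读与思考” 的token越多,推理算力需求的放大倍数就越大,这正是推理需求呈指数级增长的直观含义。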
I think it's hard to know exactly what the percentage will be at any given point in time. But of course, our hope is that inference is a very large part of the market, because if inference is large, then what it suggests is that people are using it in more applications and they're using it more frequently. And, you know, we should all hope for inference to be very large. And this is where Grace Blackwell is just an order of magnitude better, more advanced, than anything in the world.
译:我认为,很难准确判断在某个特定时间点,占比会具体达到多少,也难以确定具体主体是谁。但显然,我们对此抱有期望,我们希望推理能在市场中占据非常大的份额。因为如果推理的市场占比高,就意味着人们会将其应用到更多场景中,使用频率也会更高。要知道,我们所有人都理应期待推理能拥有庞大的市场规模。而Grace Blackwell在这一领域的表现,比全球任何同类产品都要出色一个数量级,技术先进程度也远超其他产品。
The second-best platform is H200. And it's very clear now with GB200 and GB300, because of NVLink 72, the scale-up network that we have. The SemiAnalysis benchmark that you saw and that Colette talked about is the largest single inference benchmark ever done, and GB200 NVLink 72 is 10 to 15 times higher performance. And so that's a big step up. It's going to take a long time before somebody is able to take that on, and our leadership there is surely multi-year. And so I'm hoping that inference becomes a very big deal. Our leadership in inference is extraordinary.
译:第二优秀的平台是H200。而目前已非常明确的是,得益于我们的纵向扩展网络NVLink 72,GB200与GB300的优势十分突出。各位此前看到、且Colette也提到的SemiAnalysis基准测试,是迄今为止规模最大的单次AI推理基准测试,其中搭载NVLink 72的GB200,性能要高出10到15倍。这无疑是一次巨大的性能飞跃,其他厂商要想达到这一水平,仍需很长时间,我们在该领域的领先地位,未来数年必将稳固。因此,我确实期望AI推理能成为核心赛道,而我们在AI推理领域的领先优势,目前来看是无可比拟的。
Jensen, many of your customers are pursuing behind-the-meter power. But what's the single biggest bottleneck that worries you, that could constrain your growth? Is it power? Maybe it's financing, or maybe it's something else, like memory or even foundry. Thanks a lot. Well, these are all issues and they're all constraints. And the reason for that is, when you're growing at the rate that we are and at the scale that we are, how could anything not be? What Nvidia is doing obviously has never been done before, and we've created a whole new industry.
译:诸如此类,您的许多客户都在推进 “用户侧电力”相关业务,但想问的是,目前最让您担忧、且可能会对贵公司增长造成持续性影响的核心瓶颈是什么?是电力供应问题吗?还是可能是融资问题,又或者是内存、甚至晶圆代工厂这类其他方面的问题?非常感谢。嗯,这些确实都是现实问题,也都是制约因素。之所以会这样,是因为当公司以我们当前的增速和规模发展时,怎么可能没有问题呢?很显然,英伟达目前所做的事情,在行业内是前所未有的,而且我们已经开创了一个全新的产业领域。
On the one hand, we are transitioning computing from general purpose, classical or traditional computing, to accelerated computing and AI. That's on the one hand. On the other hand, we created a whole new industry called AI factories: the idea that in order for the software to run, you need these factories to generate it, to generate every single token, instead of retrieving information that was pre-created. And so I think this whole transition requires extraordinary scale.
译:一方面,我们正推动计算模式从通用计算、传统经典计算,向加速计算与人工智能计算转型。这是我们当前的一个重要方向。另一方面,我们还开创了一个名为 “AI 工厂” 的全新产业领域。其核心逻辑在于:要让 AI 软件得以运行,就需要这类 “工厂” 来生成支撑软件运行所需的关键要素——换言之,不再是依赖过往已有的信息,而是通过 “AI 工厂” 生成每一个必要的新信息单元。因此,我认为,整个这一转型过程需要具备极高的规模体量作为支撑。
And all the way from the supply chain: of course, the supply chain we have much better visibility and control over, because obviously we're incredibly good at managing our supply chain, and we have great partners that we've worked with for 33 years. On the supply chain part of it, we're quite confident. Now, looking beyond our supply chain, we've established partnerships with so many players in land and power and shells, and of course financing. None of these things are easy, but they're all tractable, they're all solvable things.
译:当然,在供应链方面,我们如今对供应链拥有远胜以往的可见性与管控力,显然,我们在供应链管理上的能力本就极为突出。我们拥有合作了33年的优质合作伙伴,这也是重要保障。当前,从供应链层面来看,我们信心十足:我们已与土地、电力、基础设施领域的众多参与方建立了合作关系,当然,也包括为这些项目提供融资支持的合作方。这些事情没有一件是容易的,但它们都是可控的,也都是能够解决的问题。
The most important thing that we have to do is do a good job planning. We plan up the supply chain and down the supply chain, and we've established a whole lot of partners, so we have a lot of routes to market. And very importantly, our architecture has to deliver the best value to the customers that we have. At this point, you know, I'm very confident that Nvidia's architecture is the best performance per TCO, the best performance per watt, and therefore, for any amount of energy that is delivered, ours will drive the most revenues. And I think the rate of our success is increasing; I think we're more successful this year at this point than we were last year at this point. You know, the number of customers coming to us, and the number of platforms coming to us after they've explored others, is increasing, not decreasing. And so I think all of the things that I've been telling you over the years really are becoming true, or becoming evident.
译:我们当前首要任务是做好规划工作。我们既要对供应链上游进行规划,也要对供应链下游做好布局,目前我们已建立起广泛的合作伙伴网络,因此拥有多条市场渠道。尤为重要的是,我们的架构必须为现有客户创造最大价值。从目前情况来看,我完全有信心:英伟达的架构在单位总拥有成本(TCO)性能和每瓦性能上都是最优的;因此,无论投入多少能源,我们的架构都能带来最高收益。而且我认为,我们的成功势头还在增强:与去年同期相比,今年此时我们更加成功;在考察过其他方案后转向我们的客户和平台数量在增加而非减少。因此,我多年来一直向大家阐述的那些判断,如今正逐一成为现实、逐一得到印证。
Thanks for taking my questions. Colette, I have some questions on margins. You said for next year you're working to hold them in the mid-70s. So I guess, first of all, what are the biggest cost increases? Is it just memory, or something else? What are you doing to work toward that? How much of it is cost optimizations versus prebuys versus pricing? And then also, how should we think about Opex growth next year, given that revenues are likely to grow materially from where we're running right now?
译:科莱特(Colette),我有几个关于利润率的问题想请教。您之前提到,公司计划在明年将毛利率维持在70%中段区间。那么首先想了解,目前面临的最大成本增长来源是什么?仅仅是内存成本的上涨,还是存在其他因素?为实现这一目标,公司正在采取哪些举措?在这些举措中,成本优化、提前采购(prebuys)与定价策略这三方面分别发挥多大作用?此外,考虑到明年营收规模大概率会较当前水平实现大幅增长,我们应如何看待明年营业费用的增长趋势?
Thanks, Stacy. Let me start with remembering where we were with the current fiscal year that we're in. Remember, earlier this year we indicated that through cost improvements and mix, we would exit the year with our gross margins in the mid-70s. We achieved that, and we're getting ready to also execute that in Q4. So now it's time for us to communicate where we are working right now in terms of next year.
译:谢谢,Stacy。我先回顾一下我们当前财年的进展情况。大家应该还记得,今年早些时候我们曾表示,通过成本优化与产品结构调整,本财年末的毛利率将回到70%中段区间。目前我们已实现这一目标,并且正准备在Q4继续执行这一目标。因此,现在是时候向大家说明,我们目前针对明年正在推进哪些规划了。
Next year, there are input prices, well known in the industry, that we need to work through, and these systems are by no means easy to build. There are a tremendous number of components, many different parts to think about, and we're taking all of that into account. But what we do believe is that, as we again work on cost improvements, cycle time and mix, we will work to try and hold our gross margins in the mid-70s. So that's our overall plan for gross margin.
译:明年,行业内一些众所周知的投入成本问题需要我们着手应对,而且相关系统操作起来绝非易事。要知道,我们所考量的包含数量庞大的组件,涉及众多不同部分。因此,我们已将所有这些因素都纳入了考量范围。不过,我们确实认为,通过再次聚焦成本优化、周期缩短以及产品结构调整,我们将努力尝试把毛利率维持在70%中期区间,以上便是我们关于毛利率的整体规划。
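毛利率目标与投入成本之间的约束关系,可以用下面的小算式直观说明。其中所有数字均为假设值,仅演示 “毛利率 = (收入 − 成本) / 收入” 的含义,并非公司披露数据:

```python
# 示意:投入价格上涨后,维持 75% 毛利率(取"70% 中段"的代表值)所需的成本压缩幅度
revenue = 100.0
target_margin = 0.75
allowed_cogs = revenue * (1 - target_margin)  # 维持目标毛利率所允许的最高成本:25.0

cogs_after_inflation = 26.5  # 假设投入价格上涨后的成本(示意值)
cut_needed = (cogs_after_inflation - allowed_cogs) / cogs_after_inflation
print(f"需通过成本优化、周期缩短与产品组合把成本压低约 {cut_needed:.1%}")
```

这也解释了上文提到的三个抓手(成本优化、周期、产品结构)为何必须同时发力:投入价格涨得越多,所需的成本压缩幅度就越大。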
Your second question is around Opex. Right now, our goal in terms of Opex is to really make sure that we are innovating with our engineering teams and with all of our business teams to create more and more systems for this market. As you know, right now we have a new architecture coming out, and that means they are quite busy in order to meet that goal. And so we're going to continue to see our investments in innovating more and more, both our software, our systems and our hardware. With that, I'll turn it over to Jensen to see if he wants to add anything.
译:您的第二个问题围绕营业费用(Opex)展开。目前,我们在营业费用方面的核心目标,是确保能与工程团队、所有业务团队协同创新,为该市场打造更多适配的系统解决方案。正如您所知,我们目前正有一款新架构即将推出,这意味着为了实现这一目标,各团队的工作节奏都相当紧张。因此,我们将持续加大创新投入,无论是在软件、系统还是硬件领域均会如此。接下来我把话筒交给Jensen,看他是否有补充。
in a couple more comments, I think the only thing I would add is remember that we plan, we forecast, we plan, and we negotiate with our supply chain well in advance. Our supply chain have known for quite a long time our requirements and they've known for quite a long time our demand and we've been working with them and negotiating with them for quite a long time. I think the recent surge obviously quite significant. But remember, our supply chain has been working with us for a very long time. So in many cases, we've secured a lot of supply for ourselves because obviously they're working with the largest company in the world in doing so, and we've also, we've also been working closely with them on the financial aspects of it and securing forecasts and plans and so on and so forth. I think all of that has worked out well for us.
译:我认为唯一需要额外说明的是,大家要记住:我们会提前做好规划、进行预测,并且会早早与供应链展开沟通协商。我们的供应链合作伙伴早已清楚我们的需求,而且我们与他们的合作及协商也已持续了相当长的时间。显然,近期的激增幅度确实很大。但请别忘了,我们的供应链合作伙伴已与我们合作多年。因此,在很多情况下,我们已为自身锁定了大量供应资源,显然,他们选择与我们这样的全球头部企业合作,自然也会优先保障我们的供应;此外,我们在财务层面也与他们保持着密切合作,共同落实预测、规划等相关事宜。我认为,所有这些举措都为我们带来了良好的成效。
Yes, thanks for taking the question, Jensen. The question for you: as you think about the Anthropic deal that was announced, and just the overall breadth of your customers, I'm curious if your thoughts around the role that AI ASICs or dedicated XPUs play in these architecture build-outs have changed at all. I think you've been fairly adamant in the past that some of these programs never really see deployment, but I'm curious if we're at a point where maybe that's even changed more in favor of GPU architecture. Thank you. Yeah, thank you very much, and I really appreciate the question.
译:好的,感谢Jensen(黄仁勋)您抽时间解答问题。想向您请教的是,结合贵公司近期宣布的与Anthropic的合作协议,以及您对客户群体整体覆盖范围的考量:您对AI专用芯片或专用加速处理器,在这些架构搭建过程中所扮演角色的看法,是否发生了变化?另外想了解,您是否观察到相关趋势的转变?要知道,过去您一直坚定地认为,这类项目中,有不少最终难以实现实际部署。但我很好奇,目前我们是否已进入一个新阶段,即这一局面是否已发生更大转变,使得 GPU 架构更具优势?
So first of all, you're not competing against companies, excuse me; you as a company, you're competing against teams. There just aren't that many teams in the world who are extraordinary at building these incredibly complicated things. You know, back in the Hopper days, in the Ampere days, we would build one GPU.
译:首先要明确一点:你们作为一家公司并非在与其他公司竞争,而是在与 “团队” 竞争。抱歉,我重新说下:你们作为一家公司,真正的竞争对手是各个 “团队”。放眼全球,能够出色打造这类极其复杂产品的团队,数量其实并不多。要知道,早年间在Hopper架构时代、Ampere架构时代,我们一次只研发一款GPU。
That's the definition of an accelerated AI system. But today, we've got to build entire racks, and, you know, three different types of switches: scale-up, scale-out, and scale-across switches. It takes a lot more than one chip to build a compute node anymore. And think about memory in that computing system, because AI needs to have memory. AI didn't used to have memory at all; now it has to remember things. The amount of memory and context it has is gigantic, and the memory architecture implications are incredible. The diversity of models, from mixture-of-experts to dense models to diffusion models to autoregressive models, not to mention biological models that obey the laws of physics; the list of different types of models has exploded in the last several years. And so the challenge is that the complexity of the problem is much higher.
译:这就是加速型AI系统(超节点)的定义。但如今,我们必须搭建完整的机柜和整机系统,要知道,这需要用到三种不同类型的交换机:纵向扩展(scale-up)、横向扩展(scale-out)与跨域扩展(scale-across)交换机。而且现在搭建一个计算节点,早已不是只用一颗芯片就能完成的事了。我们得全面考量这类计算系统中的内存,因为AI现在需要内存支持。要知道,过去的AI完全不需要内存,而现在它必须具备记忆能力。如今AI所需的内存容量及其处理的上下文数据量都极为庞大,这对内存架构产生的影响也十分显著。此外,AI模型的种类也日益丰富:从专家混合模型(mixture of experts)、稠密模型(dense models)、扩散模型(diffusion models)到自回归模型(autoregressive models),种类不胜枚举;更不用说那些遵循物理定律的生物模型了。过去几年里,不同类型模型的数量呈爆发式增长,因此,当前面临的挑战在于,相关问题的复杂程度已大幅提升。
The diversity of AI models is incredibly, incredibly large. And so this is where, you know, I will say the five things that make us special, if you will. The first thing I would say that makes us special is that we support every phase of that transition. That's the first case: we have CUDA and CUDA-X for transitioning from general-purpose to accelerated computing.
译:人工智能(AI)模型的多样性现已变得极为丰富。所以在这一点上,我想谈谈我们之所以与众不同的五个关键因素 —— 如果非要总结的话。首先,我认为我们的独特之处在于,我们深度参与并支撑了计算模式转型的每一个阶段。这也是第一个关键:凭借 CUDA 和 CUDA-X,我们具备了推动计算模式从通用计算向加速计算转型的能力。
We're incredibly good at generative AI, we're incredibly good at agentic AI. So every single phase of that transition, through every single layer, we are excellent at. Invest in one architecture and use it across the board: you can use one architecture and not worry about the changes in the workload across the three phases. That's number one. Number two, we're excellent at every phase of AI. Everybody's always known that we're incredibly good at pre-training; we're obviously very good at post-training; and we're incredibly good, as it turns out, at inference, because inference is really, really hard. How could thinking be easy? You know, people think that inference is one shot, and therefore it's easy. Anybody could approach the market that way, but it turns out to be the hardest of all, because thinking, as it turns out, is quite hard. We're great at every phase of AI.
译:我们在生成式 AI领域的实力极为雄厚,在智能体 AI(agentic AI)领域同样表现卓越。因此,在计算模式转型的每一个阶段、每一个层面,我们都具备出色的能力。我们的策略是集中资源研发一种架构,并将其全面应用,借助这一种架构,你无需担心三个阶段中工作负载的变化。这是我们的第一个优势。第二个优势是,我们在AI的每一个阶段都表现优异。大家一直都知道,我们在预训练(pre-training)阶段的能力毋庸置疑,在训练后优化(post-training)阶段的表现也显然十分出色;而事实证明,我们在推理(inference)阶段的实力同样顶尖 —— 因为推理的难度其实非常非常大,要知道,“思考” 这件事怎么可能简单呢?人们总以为推理是 “一次性完成” 的任务,因此觉得它很容易,似乎任何人都能以这种思路进入该市场。但事实恰恰相反,推理其实是所有阶段中最难的,因为归根结底,“思考” 本身就相当复杂。而我们,在 AI 的每一个阶段都处于领先水平。
The third thing is we're now the only architecture in the world that runs every AI model, every frontier AI model. We run open-source AI models incredibly well. We run science models, biology models, robotics models; we run every single model. We're the only architecture in the world that can claim that. It doesn't matter whether you're autoregressive or diffusion-based: we run everything, and we run it for every major platform, as I just mentioned. So we run every model. And then the fourth thing I would say is that we're in every cloud. The reason why developers love us is because we're literally everywhere: we're in every cloud, and we can even make you a little tiny cloud called DGX Spark. And so we're in every computer; we're everywhere from cloud to on-prem to robotic systems, edge devices, PCs, you name it. One architecture, things just work. It's incredible.
译:第三点优势是:我们的架构如今是全球唯一能运行所有 AI 模型,包括所有前沿AI模型的架构。开源AI模型,我们能出色运行;科学模型、生物模型、机器人技术模型等各类模型,我们同样能运行。我们的架构是全球唯一敢如此宣称的:无论你使用的是自回归模型(autoregressive)还是基于扩散技术的模型(diffusion based),我们都能兼容运行;而且正如我刚才所提及的,我们的架构能在所有主流平台上运行这些模型。简言之,我们能运行所有模型。第四点优势,我认为在于我们的架构覆盖了所有云平台。开发者之所以青睐我们,核心原因就是我们的架构几乎无处不在:我们覆盖了所有云平台,甚至还能为你搭建一个名为 DGX Spark 的小型云环境。可以说,我们的架构存在于各类计算设备中,从云平台到本地数据中心,再到机器人系统、边缘设备、个人电脑,凡是你能想到的设备,都有我们架构的身影。一种架构,便能实现全场景兼容运行,这着实令人惊叹。
And then the last thing, and this is probably the most important thing: if you are a cloud service provider, if you're a new company like Humain, if you're a new company like CoreWeave or Nscale or Nebius, or OCI for that matter, the reason why Nvidia is the best platform for you is because our offtake is so diverse. We can help you with offtake. It's not about just putting a random ASIC into a data center. Where's the offtake coming from? Where is the diversity coming from? Where's the resilience coming from? Where's the versatility of the architecture coming from? Where's the diversity of capability coming from?
译:最后一点,或许也是最为重要的一点:无论是云服务提供商,还是像 Humain 这样的新兴公司,抑或是像 CoreWeave、Nscale、Nebius、OCI 这类企业,对你们而言,英伟达之所以是最佳平台选择,核心原因在于我们的产品应用场景(offtake)极为多元。我们能帮助你们承接市场需求,这绝非简单地在数据中心里部署一款随意选型的专用芯片(ASIC)就能实现的。要知道,产品的应用场景从何而来?多元性从何体现?抗风险能力从何保障?架构的通用性源于何处?多样化能力又来自哪里?
Nvidia has such incredibly good offtake because our ecosystem is so large. So these are the five things: every phase of the acceleration transition, every phase of AI, every model, everywhere from cloud to on-prem. And of course, finally, it all leads to offtake.
译:英伟达之所以拥有极为广阔的产品应用场景,核心原因在于我们的生态系统规模极为庞大。上述这五大优势覆盖计算加速与模式转型的每一个阶段、适配AI的每一个环节、兼容所有AI模型、实现从云端到本地数据中心的全场景覆盖,最终,所有这些优势都指向了 “广阔的产品应用场景” 这一核心结果。