The $4 Billion Signal
On March 2, 2026, NVIDIA disclosed two simultaneous investments that sent a clear signal through the semiconductor and optical networking industries: $2 billion into Lumentum Holdings via Series A Convertible Preferred Stock, and $2 billion into Coherent Corp. through common stock acquisition. Both deals are structured as nonexclusive, multiyear agreements with multibillion-dollar purchase commitments attached. This was not a speculative bet on a distant future. This was a calculated move to lock down the optical supply chain that will define the next generation of AI infrastructure.
The timing is not accidental. Almost exactly one year earlier, on March 18, 2025, NVIDIA announced Spectrum-X Photonics and Quantum-X Photonics at GTC — its first co-packaged optics (CPO) networking switch platforms for Ethernet and InfiniBand respectively. Those products promised up to 409.6 Tb/s of system bandwidth per switch with dramatically reduced power consumption. The $4 billion investment ensures that the photonic components those platforms depend on will actually exist at scale, on time, and with the performance NVIDIA needs.
What makes this investment distinctive is its dual-vendor structure. NVIDIA did not place $4 billion on a single supplier. It split the allocation precisely in half — ensuring competitive tension, supply chain redundancy, and co-design optionality across two complementary partners. Lumentum brings laser source expertise. Coherent brings a broader optical subsystem portfolio. Together, they cover the full photonic stack that NVIDIA needs to build AI factories where light, not copper, carries the critical data between GPUs.
Why This Matters for Data Center Engineers
This is not just a financial story. For anyone designing, building, or operating AI-class data centers, the NVIDIA photonics investment signals a fundamental shift in how interconnects will be designed. The transition from pluggable optics to co-packaged optics will change rack layouts, cooling requirements, cable management, power distribution, and operational procedures. Understanding the technology behind this shift is essential for planning your next facility build.
When Compute Outpaces the Network
The AI industry has a dirty secret: the most expensive component in a frontier training cluster is not the GPU — it is the time those GPUs spend waiting. In large-scale distributed training, performance is fundamentally constrained by how fast data moves between GPUs. Every collective operation — AllReduce, AllGather, ReduceScatter — requires synchronized data movement across thousands of devices. When the network cannot keep up with the compute, GPUs idle. And idle GPUs burning 700W each while waiting for data represent the most expensive waste in modern computing.
The key performance factors in an AI cluster are not individual GPU FLOPS but rather aggregate metrics: GPU-to-GPU data movement bandwidth, collective communication efficiency, fabric tail latency, link reliability, and deployment velocity. A single slow link in a 10,000-GPU cluster can degrade the performance of every GPU in the job. A single failed transceiver can trigger a job restart that wastes hours of compute time. The network is not an accessory to the compute — it is the compute, because distributed training turns thousands of individual GPUs into one logical accelerator, and the network is the glue that holds it together.
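To make the bandwidth constraint concrete, here is a minimal sketch using the standard ring AllReduce cost model, in which each GPU must move roughly 2(N-1)/N times the gradient payload per synchronization. The cluster size, model size (70B parameters in BF16), step time, and no-overlap assumption are illustrative placeholders, not measurements from any real system.

```python
# Back-of-envelope: how fabric bandwidth bounds GPU utilization during
# data-parallel training. Uses the standard ring AllReduce cost model:
# each GPU sends/receives 2*(N-1)/N * S bytes per AllReduce of S bytes.
# All figures below are illustrative assumptions, not measured values.

def allreduce_seconds(payload_bytes: float, n_gpus: int, link_gbps: float) -> float:
    """Bandwidth-only ring AllReduce time (ignores latency terms)."""
    wire_bytes = 2 * (n_gpus - 1) / n_gpus * payload_bytes
    return wire_bytes * 8 / (link_gbps * 1e9)

grad_bytes = 70e9 * 2          # assumed: 70B-parameter model, BF16 gradients
compute_s  = 0.5               # assumed: per-step compute time in seconds

for gbps in (400, 800, 1600):  # per-GPU injection bandwidth
    comm_s = allreduce_seconds(grad_bytes, n_gpus=1024, link_gbps=gbps)
    util = compute_s / (compute_s + comm_s)   # assumes no compute/comm overlap
    print(f"{gbps}G link: AllReduce {comm_s:.2f}s -> GPU utilization {util:.0%}")
```

Real frameworks overlap communication with backward-pass compute, so actual utilization is better than this worst case. But the direction of the trend is the point: halve the link speed and the waiting fraction roughly doubles.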
AI cluster traffic patterns are fundamentally different from enterprise or cloud workloads. Traditional data centers are dominated by north-south traffic — clients talking to servers. AI clusters are dominated by east-west traffic — GPUs talking to GPUs, with intense bursts of synchronized communication that stress the fabric in ways that conventional network designs were never built to handle. The traffic is bursty, latency-sensitive, and requires near-perfect reliability. One percent packet loss in an AI training job does not cause one percent degradation — it can cause 30-50% throughput collapse due to synchronization stalls.
The Scaling Paradox
If compute capability doubles every generation (B100 to B200 to B300) but network bandwidth grows only 50%, the cost per token at scale actually gets worse. You are buying more GPUs that spend a higher fraction of their time waiting. This is why NVIDIA is investing $4 billion in photonics — the network must scale proportionally with compute, or the economics of AI training collapse.
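A three-generation toy calculation, using those assumed growth rates, shows how quickly the gap compounds:

```python
# The scaling paradox, made concrete. Assumed (illustrative) rates:
# compute doubles each generation, network bandwidth grows only 1.5x,
# and per-step communication volume scales with compute.
compute = 1.0   # relative FLOPS
network = 1.0   # relative fabric bandwidth

for gen in range(1, 4):
    compute *= 2.0
    network *= 1.5
    # The comm:compute time ratio grows as compute outruns the fabric.
    ratio = compute / network
    print(f"Gen {gen}: communication time grows to {ratio:.2f}x its original share")
```

After three generations the fabric is handling 8x the traffic with only 3.4x the bandwidth, so each GPU spends more than twice as much of its step waiting on the network.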
The Copper Wall
Copper has been the workhorse of data center interconnects for decades. Direct Attach Copper (DAC) cables offer low latency, zero power consumption for the media itself, and simple deployment. But copper is hitting a physics wall, and that wall gets closer with every speed generation. At 200G SerDes lane speeds — the baseline for next-generation 1.6T links — the challenges become severe: electrical insertion loss increases dramatically with frequency and distance, requiring ever-heavier equalization circuits that consume more power and add latency.
The numbers tell the story. At 112G PAM4 (the current 400G/800G generation), copper DAC cables work reliably to about 2-3 meters. At 224G PAM4 (the upcoming 1.6T generation), that distance shrinks to approximately 1 meter or less. The equalization complexity required to recover a signal at these speeds over even short copper runs becomes extreme — multi-tap DFE, CTLE, and FFE circuits that collectively consume several watts per lane just to keep the signal intelligible. Multiply that by 64 lanes on a switch ASIC and the power budget for signal conditioning alone exceeds what some entire switches consumed a generation ago.
Beyond pure signal integrity, copper at scale creates physical problems. High-speed copper cables are thick, stiff, and generate significant heat at the connector interface. Cable management in a rack with hundreds of 400G or 800G DAC connections is already challenging. At 1.6T with denser cable counts, the situation becomes unmanageable. Front-panel density limits how many ports you can physically fit, and the thermal load from connector resistance adds to an already stressed cooling system. This is why NVIDIA has stated that co-packaged optics can reduce the electrical trace from the ASIC to the optical engine from 12+ inches (in pluggable designs) to less than 0.5 inch — eliminating the copper bottleneck at its source.
| Parameter | 100G NRZ | 200G PAM4 | 400G PAM4 | 800G PAM4 | 1.6T PAM4 |
|---|---|---|---|---|---|
| SerDes Lane Rate | 25G | 50G | 112G | 112G | 224G |
| Lanes per Port | 4 | 4 | 4 | 8 | 8 |
| Copper DAC Max Reach | ~5m | ~3m | ~2m | ~1.5m | ~1m |
| Power per Lane (Equalization) | ~0.3W | ~0.5W | ~1.2W | ~1.2W | ~2.5W |
| Practical for AI Clusters? | Legacy | Limited | Short reach only | Very limited | Impractical |
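Plugging the table's per-lane equalization figures into the 64-lane ASIC mentioned above gives a quick sense of where the power goes. This is a back-of-envelope sketch using the numbers in the table, not vendor data:

```python
# Equalization power budget per switch ASIC, using the per-lane figures
# from the table above and the 64-lane ASIC mentioned in the text.
lanes = 64
watts_per_lane = {"100G NRZ": 0.3, "200G PAM4": 0.5,
                  "400G/800G PAM4": 1.2, "1.6T PAM4": 2.5}

for gen, w in watts_per_lane.items():
    print(f"{gen}: {lanes * w:.0f} W for signal conditioning alone")
# 1.6T generation: 160 W just to keep copper signals intelligible --
# comparable to the total power of an entire switch a few generations ago.
```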
Silicon Photonics — Building with Light on Silicon
Silicon photonics is the technology that makes optical interconnects viable at data center scale. The core idea is elegant: use the same silicon fabrication infrastructure that produces billions of transistors to build optical components — waveguides that channel light, modulators that encode data onto light, splitters that divide optical signals, and couplers that combine them. Because these components are built on standard silicon wafers using CMOS-compatible processes, they inherit the semiconductor industry's greatest strengths: massive scale, tight dimensional control, high yield, and relentless cost reduction through Moore's Law-adjacent improvements.
The advantages of silicon photonics for data center interconnects are substantial. A single silicon photonic chip can integrate dozens of modulators, photodetectors, multiplexers, and waveguides on a die smaller than a fingernail. The bandwidth density — bits per second per square millimeter — far exceeds what is achievable with discrete optical components. The integration also reduces packaging complexity, lowers assembly cost, and improves reliability by eliminating discrete component interconnections that can fail. For NVIDIA's CPO vision, silicon photonics provides the high-density optical engine that sits next to the switch ASIC.
But silicon photonics has a fundamental limitation that explains why NVIDIA needs Lumentum and Coherent. Silicon is an indirect bandgap semiconductor. In physics terms, this means that an electron transitioning from the conduction band to the valence band in silicon cannot efficiently emit a photon because the transition requires a simultaneous change in momentum (phonon assistance). The result is that silicon is inherently poor at generating coherent light — you cannot make an efficient laser from silicon alone. This is the "silicon laser gap" that the entire industry has been working around for decades.
Why Silicon Cannot Lase Efficiently
Laser operation requires stimulated emission, where photons trigger the emission of identical photons. In direct bandgap materials like Indium Phosphide (InP) and Gallium Arsenide (GaAs), electron-hole recombination directly produces photons with high probability. In silicon, the indirect bandgap means most recombinations produce heat (phonons) rather than light. The radiative recombination efficiency of silicon is roughly 10,000 times lower than InP. This is not an engineering problem to be solved with better design — it is a fundamental property of the crystal structure. This is precisely why NVIDIA needs III-V semiconductor companies like Lumentum and Coherent to supply the laser sources.
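A band-edge wavelength calculation makes the material contrast concrete. It uses the textbook relation λ(nm) ≈ 1240/E_g(eV) and standard room-temperature bandgap values; note that practical InP-based datacom lasers actually use quaternary alloys grown on InP substrates to reach the 1310/1550 nm telecom windows.

```python
# Emission wavelength from bandgap energy: lambda(nm) ~ 1240 / Eg(eV).
# Bandgaps are standard room-temperature textbook values.
bandgaps_ev = {"Si": 1.12, "GaAs": 1.42, "InP": 1.34}

for material, eg in bandgaps_ev.items():
    wavelength_nm = 1240 / eg
    print(f"{material}: Eg = {eg} eV -> band-edge wavelength ~{wavelength_nm:.0f} nm")
# Si's ~1107 nm band edge is irrelevant in practice: its indirect gap makes
# radiative recombination ~10,000x less likely than in InP, so no laser.
```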
Co-Packaged Optics — The Architecture Shift
To understand why co-packaged optics is transformative, you need to understand the architecture it replaces. In today's pluggable optics model, the switch ASIC sits in the center of a printed circuit board. Electrical signals travel from the ASIC through 12+ inches of PCB traces to front-panel connectors, where pluggable optical transceivers (QSFP-DD, OSFP) convert electrical signals to light. Those 12+ inches of high-speed copper trace are the problem. At 112G and 224G SerDes speeds, every inch of PCB trace introduces insertion loss, crosstalk, impedance discontinuities, and signal integrity challenges that require power-hungry retimers, CDRs (clock and data recovery), and DSP chips to compensate.
Co-packaged optics inverts this architecture. Instead of sending high-speed electrical signals across the PCB to the front panel, CPO places the optical engine — containing silicon photonic modulators, photodetectors, and fiber coupling — directly adjacent to the switch ASIC on the same package substrate or interposer. The electrical trace from ASIC to optical engine shrinks from 12+ inches to less than half an inch. At that distance, the signal integrity challenges largely disappear. You can eliminate retimers. You can reduce or eliminate the DSP in the optical engine. Power consumption drops dramatically because you are no longer burning watts to push signals through lossy copper over long distances.
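To see why the shorter trace matters, consider insertion loss, which scales roughly linearly with trace length. The ~1.5 dB/inch figure below is an assumed, illustrative value for low-loss laminate near the ~56 GHz Nyquist frequency of a 224G PAM4 lane; real numbers vary with material, stackup, and layout.

```python
# Why 0.5 inch beats 12 inches: PCB insertion loss scales with trace length.
# The dB/inch figure is an assumption for illustration, not a datasheet value.
db_per_inch = 1.5  # assumed: low-loss laminate at ~56 GHz Nyquist

for label, inches in {"Pluggable (front panel)": 12.0, "CPO (on package)": 0.5}.items():
    print(f"{label}: ~{inches * db_per_inch:.1f} dB trace loss")
# ~18 dB vs ~0.8 dB: the first needs retimers/DSP to recover the signal;
# the second can be driven directly from the ASIC SerDes.
```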
| Characteristic | Pluggable Optics | Co-Packaged Optics (CPO) |
|---|---|---|
| Electrical Trace Length | 12-18 inches (PCB to front panel) | <0.5 inch (package-level) |
| Power per 800G Port | ~16-20W (including retimers) | ~8-10W (estimated) |
| Latency (Electrical Path) | ~5-8ns trace delay | ~0.2-0.5ns trace delay |
| Serviceability | Hot-pluggable at front panel | Requires ELS for laser servicing |
| Bandwidth Density | Limited by front-panel area | Limited by package area (much denser) |
| Signal Integrity | Retimers/CDR needed per port | Direct drive possible |
| System Bandwidth (per switch) | ~102.4 Tb/s practical | 409.6 Tb/s (NVIDIA Spectrum-X) |
The benefits compound at system level. A CPO-enabled switch that consumes 40% less power per port can support 2-4 times more bandwidth in the same thermal envelope. That means fewer switches, fewer cables, fewer racks, and less cooling infrastructure for the same aggregate fabric bandwidth. For a 100,000-GPU AI factory, the difference between pluggable and CPO architectures can translate to millions of dollars in annual power savings and significant reductions in physical footprint. This is why NVIDIA is building its next-generation networking platforms — Spectrum-X Photonics and Quantum-X Photonics — around CPO from the ground up.
External Laser Source — The Serviceability Solution
Co-packaged optics solves the performance problem, but it introduces an operational challenge that data center engineers immediately recognize: what happens when a laser fails? In pluggable optics, a failed transceiver is a 30-second swap at the front panel — pull the old QSFP, insert a new one, link comes back up. With CPO, the optical engine is bonded to the switch package. You cannot hot-swap it without replacing the entire switch board, which is a dramatically more expensive and disruptive operation.
External Laser Source (ELS) is the architectural solution to this problem. Instead of integrating the laser diode directly into the optical engine, CPO with ELS separates the light source into a standalone, replaceable module. The ELS module — containing continuous-wave (CW) lasers that produce unmodulated light — connects to the optical engine via fiber. The optical engine contains only the modulators, photodetectors, and passive components. If a laser fails, you replace the ELS module without touching the switch package. If the optical engine fails (much less likely since it contains no active light sources), only then do you need a board replacement.
ELS also solves a thermal management challenge. Lasers are highly sensitive to temperature — their wavelength drifts, output power decreases, and lifetime shortens as junction temperature increases. Placing a laser directly adjacent to a 500W+ switch ASIC creates thermal coupling that is extremely difficult to manage. ELS moves the laser to a location where thermal management is more tractable — potentially in a separate module with its own heatsink, away from the ASIC thermal zone. Additionally, a single ELS module can supply light to multiple optical engines through fiber splitting, improving redundancy and reducing component count.
Operations Perspective: ELS Changes the Maintenance Model
For data center operations teams, ELS represents a middle ground between pluggable simplicity and CPO performance. Expect to see ELS modules housed in dedicated slots on the switch board or in external enclosures, with monitoring systems that track laser power output, wavelength stability, and predicted remaining lifetime. Smart ELS management will enable predictive replacement before failure — a significant improvement over the reactive "transceiver failed, replace it" model that dominates today. This is critical for AI factories where any link failure can disrupt a training job running across thousands of GPUs.
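A sketch of what that predictive-replacement logic might look like is below. The telemetry fields, thresholds, and replacement criteria are entirely hypothetical — real ELS management interfaces will be vendor-defined.

```python
# Hypothetical sketch of ELS predictive-replacement logic. Field names and
# thresholds are invented for illustration; real interfaces will differ.
from dataclasses import dataclass

@dataclass
class ElsTelemetry:
    output_power_dbm: float      # current CW output power
    nominal_power_dbm: float     # beginning-of-life output power
    wavelength_drift_nm: float   # drift from the calibrated grid
    hours_in_service: float

def needs_replacement(t: ElsTelemetry) -> bool:
    """Flag an ELS module for proactive swap before it takes down links."""
    power_degraded = (t.nominal_power_dbm - t.output_power_dbm) > 1.0  # >1 dB fade
    drifting = abs(t.wavelength_drift_nm) > 0.1                        # off-grid
    aging_out = t.hours_in_service > 60_000                            # ~7 years
    return power_degraded or drifting or aging_out

print(needs_replacement(ElsTelemetry(9.2, 10.5, 0.02, 41_000)))  # True: power fade
```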
Micro Ring Modulators and 200G Per Wavelength
At the heart of NVIDIA's silicon photonic optical engine is the micro ring modulator — a device that is deceptively simple in concept but extraordinarily demanding in execution. A micro ring modulator is a tiny circular waveguide, typically 5-20 micrometers in diameter, positioned adjacent to a bus waveguide. When the ring's resonant wavelength matches the light passing through the bus waveguide, light couples into the ring and is absorbed or diverted. By electrically modulating the ring's refractive index — using carrier injection or depletion in the silicon — you shift its resonant wavelength, effectively turning the coupling on and off. This modulates the light passing through the bus waveguide, encoding data at rates up to 200 Gbps PAM4 per wavelength.
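For intuition about the numbers involved, the sketch below computes the ring's free spectral range (the spacing between adjacent resonances) from FSR ≈ λ²/(n_g · 2πR). The group index is an assumed typical value for silicon strip waveguides, not a figure from NVIDIA's design.

```python
# Ring resonance basics: the ring resonates when an integer number of
# wavelengths fits its circumference (m * lambda = n_eff * 2*pi*R), and the
# spacing between resonances is FSR ~ lambda^2 / (n_g * 2*pi*R).
import math

radius_m = 10e-6          # 10 um ring, within the 5-20 um range above
n_group = 4.2             # assumed group index for a silicon strip waveguide
wavelength_m = 1.31e-6    # O-band carrier

circumference = 2 * math.pi * radius_m
fsr_nm = wavelength_m**2 / (n_group * circumference) * 1e9
print(f"Free spectral range: ~{fsr_nm:.1f} nm between resonances")
# ~6.5 nm FSR: enough room to place several WDM channels per bus waveguide.
```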
The appeal of micro ring modulators for CPO is their extreme compactness and energy efficiency. A Mach-Zehnder modulator (the traditional approach) requires millimeters of waveguide length and consumes significant power to drive long phase-shifting arms. A micro ring modulator achieves modulation in a footprint measured in tens of square micrometers, with drive voltages under 2V and energy consumption in the low hundreds of femtojoules per bit. When you need to pack 32 or 64 modulator channels onto a single optical engine die sitting next to a switch ASIC, this size and power advantage is decisive.
The engineering challenges, however, are formidable. Micro ring modulators are extremely sensitive to temperature — a one-degree Celsius change shifts the resonant wavelength by approximately 0.08 nm, which at 200G PAM4 data rates can push the operating point outside the eye mask. This means every ring needs active thermal tuning, which adds control circuitry and consumes power that partially offsets the modulator's inherent efficiency advantage. Process variations in silicon fabrication also affect ring dimensions and therefore resonant wavelengths, requiring per-ring calibration at manufacturing. NVIDIA's collaboration with TSMC on advanced silicon photonics packaging is aimed at solving exactly these manufacturability and yield challenges at the scale needed for millions of optical engines per year.
Temperature Sensitivity in Production
In a production data center environment, the switch ASIC adjacent to the optical engine can experience temperature swings of 10-20°C during workload transitions. Each degree shifts every micro ring's resonance. The thermal tuning system must track and compensate for these shifts in real time, for every ring, with microsecond-scale response times — as fast as the thermal time constants of integrated micro-heaters allow. This is one of the hardest engineering problems in CPO — and one reason why ELS (which moves the thermally sensitive laser away from the ASIC) is essential for practical deployment.
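Combining the article's ~0.08 nm/°C sensitivity with those swings shows the size of the tuning problem:

```python
# Thermal drift budget for a micro ring, using the ~0.08 nm/degC figure
# from the text and the 10-20 degC ASIC-driven swings described above.
drift_nm_per_c = 0.08

for swing_c in (1, 10, 20):
    drift_nm = swing_c * drift_nm_per_c
    print(f"{swing_c:>2} degC swing -> {drift_nm:.2f} nm resonance shift")
# A 20 degC swing moves the resonance 1.6 nm -- roughly a quarter of the
# ~6.5 nm FSR computed earlier, so every ring needs closed-loop tuning.
```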
Why Lumentum — The Light Source Backbone
Lumentum Holdings is not a household name outside the photonics industry, but within it, the company holds a position of strategic importance. Lumentum is one of the world's leading manufacturers of semiconductor lasers, with deep expertise in Indium Phosphide (InP) epitaxial growth, laser diode fabrication, and optical subsystem integration. Their product portfolio spans continuous-wave (CW) lasers for silicon photonics, ultra-high-power (UHP) pump lasers, distributed feedback (DFB) lasers, tunable lasers, and External Laser Source (ELS) modules specifically designed for co-packaged optics applications.
For NVIDIA, Lumentum solves the most fundamental problem in the CPO stack: generating stable, high-quality light. Silicon photonic optical engines need a continuous wave of laser light at precise wavelengths to function. That light must be spectrally pure (narrow linewidth), power-stable (consistent output over temperature and aging), wavelength-accurate (matching the micro ring modulator resonances), and deliverable through fiber with minimal loss. Lumentum's CW and UHP laser platforms are specifically designed for these requirements, with performance specifications that have been refined over years of datacom and telecom deployment.
Beyond basic laser sources, Lumentum brings two additional capabilities that make the investment strategic. First, their ELS module development is directly aligned with NVIDIA's CPO architecture — these are purpose-built, field-replaceable laser modules designed to supply light to co-packaged optical engines. Second, Lumentum has optical circuit switch technology that could enable future reconfigurable optical fabrics — networks where the physical topology can be changed in real time by switching light paths. For NVIDIA's vision of AI factories with millions of GPUs, reconfigurable optical switching could be a game-changing capability for dynamically allocating bandwidth to different training and inference jobs.
Lumentum — Light Source Specialist
Core strengths in CW/UHP lasers, InP photonics, ELS modules, and optical circuit switching. Primary role: providing the stable, high-quality laser light that silicon photonic engines need to operate. Purpose-built ELS solutions for CPO serviceability.
Coherent — Full Optical Stack
Broad portfolio spanning transceivers, VCSELs, InP DMLs/EMLs, CW lasers, silicon photonics, TIAs, drivers, and five CPO-enabling product families. Primary role: comprehensive optical subsystem capability across multiple technology nodes.
Why Coherent — The Broad Optical Stack
If Lumentum is the laser specialist, Coherent Corp. is the optical generalist with depth everywhere that matters. Coherent's product catalog reads like a comprehensive inventory of everything the optical networking industry needs: pluggable transceivers from 100G to 1.6T, Gallium Arsenide (GaAs) vertical-cavity surface-emitting lasers (VCSELs) for short-reach links, Indium Phosphide directly modulated lasers (DMLs) and externally modulated lasers (EMLs), CW lasers for silicon photonics, silicon photonic integrated circuits, transimpedance amplifiers (TIAs), modulator drivers, and passive optical components. This breadth is precisely what NVIDIA needs in a second strategic partner.
The Coherent SEC filing accompanying the NVIDIA investment reveals critical details about the scope of the collaboration. Beyond existing product families, Coherent disclosed that it is developing five additional CPO-related product families as part of the NVIDIA partnership. While specifics are not public, the filing indicates these span the full CPO subsystem stack: optical engines, laser sources, packaging solutions, and test/characterization tools. Coherent is also actively shipping 1.6T silicon photonics transceivers, 200G VCSELs for short-reach AI cluster interconnects, and 224 Gbps quad transimpedance amplifiers — all components that feed directly into NVIDIA's photonics roadmap.
A critical differentiator in Coherent's portfolio is protocol agnosticism. Their transceiver and optical engine designs support Ethernet, InfiniBand, and NVLink protocols — the three primary networking fabrics in NVIDIA AI clusters. This means NVIDIA can use Coherent components across its entire networking stack: Spectrum-X (Ethernet), Quantum-X (InfiniBand), and NVLink network interconnects. Having a single optical supplier that can serve all three protocols simplifies qualification, reduces inventory complexity, and enables design reuse across product lines.
Five CPO Product Families
Coherent's SEC filing mentions five CPO-related product families under development for the NVIDIA partnership. While the specific product definitions are not public, industry context suggests these likely include: (1) CPO optical engines for switch ASICs, (2) External Laser Source modules, (3) silicon photonic integrated circuits for high-density modulation, (4) advanced packaging solutions for chip-to-optical coupling, and (5) test and monitoring components for production-scale CPO deployment. This breadth positions Coherent as a one-stop optical subsystem provider for NVIDIA's CPO platforms.
Lumentum vs Coherent — Complementary Roles
Understanding why NVIDIA needs both Lumentum and Coherent — rather than consolidating on a single supplier — requires looking at where their capabilities overlap and where they diverge. The two companies are not interchangeable. They occupy complementary positions in the photonics value chain, and NVIDIA's dual investment exploits this complementarity deliberately.
| Capability | Lumentum | Coherent |
|---|---|---|
| CW Lasers for SiPh | Core strength, UHP class | Available, broad portfolio |
| External Laser Source (ELS) | Purpose-built modules | Developing (CPO families) |
| Silicon Photonic ICs | Limited | Active development, 1.6T |
| Pluggable Transceivers | Selective portfolio | Full range 100G-1.6T |
| VCSELs (Short-Reach) | Not primary focus | 200G GaAs VCSELs |
| InP Laser Fabrication | World-class epitaxy | DML, EML, CW platforms |
| Optical Circuit Switching | Active development | Not primary focus |
| TIA / Driver ICs | Not primary focus | 224G quad TIA shipping |
| CPO Product Families | ELS-focused | 5 families in development |
The strategic logic of dual-sourcing extends beyond technical complementarity. From a supply chain perspective, having two suppliers for critical photonic components prevents any single vendor from becoming a bottleneck. If Lumentum has a fab issue, Coherent can increase deliveries of CW lasers. If Coherent's silicon photonic yield drops, Lumentum's optical engines can fill the gap. From a negotiating perspective, dual-sourcing gives NVIDIA leverage — neither supplier has monopoly pricing power. From a co-design perspective, NVIDIA can run parallel development tracks with both partners, selecting the best technology for each application rather than being locked into a single approach.
Perhaps most importantly, the dual investment gives NVIDIA design optionality. Different products in NVIDIA's networking lineup may benefit from different optical approaches. NVLink interconnects within a server might use Coherent's 200G VCSELs for ultra-short reach. Rack-to-rack links might use Lumentum's ELS modules with silicon photonic engines for CPO. Data center interconnect (DCI) links might use Coherent's coherent transceivers for long-reach connections. By investing in both companies, NVIDIA ensures it has access to every optical technology it might need across its entire product portfolio.
AI Factory Interconnect Analyzer
To illustrate the engineering trade-offs between copper, pluggable optics, and co-packaged optics at scale, I have built an interactive analyzer below. Input your cluster parameters and the tool will calculate power consumption, reach feasibility, annual energy cost, and latency for each interconnect technology. The comparison highlights why CPO becomes increasingly advantageous as clusters grow larger and port speeds increase.
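For readers of the static version, here is a minimal sketch of the core comparison the analyzer performs. The per-port power figures follow the tables earlier in this article; the electricity price, PUE, and copper equalization power are assumptions you should edit for your own facility.

```python
# A minimal, static stand-in for the interactive analyzer: per-technology
# wall power and annual energy cost for one cluster. Per-port figures follow
# the tables above; price, PUE, and the DAC figure are assumptions.

def annual_cost(ports: int, watts_per_port: float,
                usd_per_kwh: float = 0.10, pue: float = 1.4) -> tuple[float, float]:
    """Return (total kW at the wall, annual USD) for one interconnect choice."""
    kw = ports * watts_per_port * pue / 1000
    return kw, kw * 8760 * usd_per_kwh

cluster_ports = 50_000  # one 800G port per GPU, as in the sidebar below
for tech, w in {"Copper DAC (reach <=1m only)": 0.5,   # assumed host-side EQ power
                "Pluggable optics": 16.0,
                "Co-packaged optics": 9.0}.items():
    kw, usd = annual_cost(cluster_ports, w)
    print(f"{tech}: {kw:,.0f} kW wall power, ${usd:,.0f}/year")
```

Copper wins the power comparison outright, but at 1.6T its reach fails at about one meter, which is why the realistic comparison for cluster-scale fabrics is pluggable optics versus CPO.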
Jensen's AI Factory Framework
To understand why NVIDIA is investing $4 billion in photonics, you need to understand Jensen Huang's "AI factory" concept — a framework that redefines data centers as industrial production facilities. In Jensen's model, an AI factory has three components: inputs (data, energy, capital), process (training, inference, reasoning), and outputs (models, tokens, intelligence). The factory metaphor is not rhetorical. It is an operational philosophy that treats every component — from GPU silicon to network cables to cooling systems — as part of an integrated production system that must be optimized holistically.
"The data center is the new unit of compute. You don't buy a GPU — you buy a factory. And a factory is only as good as its slowest production line."— Jensen Huang, NVIDIA GTC 2025 Keynote
When you operate a factory, you do not optimize one machine in isolation. You optimize the entire production line. Jensen's "extreme co-design" philosophy applies this principle to AI infrastructure: CPU, GPU, NVLink, NIC, DPU, switch ASIC, networking software, storage, and now photonics are all co-designed as a unified system. Each component is specified not just for its individual performance, but for how it enables or constrains the components around it. The switch ASIC is designed around the optical engine. The optical engine is designed around the laser source. The laser source is designed around the thermal envelope. Everything connects.
The metrics that define success in Jensen's framework are factory-level, not component-level: cost per token, training time to convergence, tokens generated per watt, cluster uptime, mean time to repair, deployment velocity (time from power-on to first training job), and aggregate throughput at full cluster scale. Photonics directly impacts nearly every one of these metrics. Lower-power optical links reduce cost per token. Faster interconnects reduce training time. More reliable optical components improve uptime. ELS improves mean time to repair. CPO's higher bandwidth density enables faster deployment by requiring fewer switches and cables. This is why $4 billion in photonics investment is not a luxury — it is a factory optimization decision with quantifiable returns.
The AI factory framework also explains the dual-vendor strategy. In industrial manufacturing, single-source components are risk factors. A factory that depends on one supplier for a critical part is one supply disruption away from a production shutdown. By investing in both Lumentum and Coherent, NVIDIA applies standard industrial supply chain management to its most critical optical components. The factory must never stop. The photonic supply chain must never be a single point of failure.
Factory Metrics: Where Photonics Fits
Consider a 50,000-GPU AI factory running continuous training. If pluggable optics consume 16W per 800G port and CPO consumes 9W per port, the power savings across 50,000 links is approximately 350 kW continuous. At $0.10/kWh, that saves $306,600 per year in electricity alone — before accounting for the reduced cooling load (another ~$120,000/year at typical PUE). Over a 5-year facility lifecycle, photonics optimization in a single facility can save over $2 million. Across NVIDIA's hyperscaler customer base operating hundreds of such facilities, the aggregate savings justify the $4 billion investment many times over.
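The sidebar's arithmetic is easy to reproduce, under the same stated assumptions:

```python
# Reproducing the sidebar arithmetic: 50,000 ports, 16 W vs 9 W per 800G port.
ports, w_pluggable, w_cpo = 50_000, 16, 9
delta_kw = ports * (w_pluggable - w_cpo) / 1000
electricity = delta_kw * 8760 * 0.10          # $0.10/kWh, 8760 h/year
cooling = electricity * 0.4                   # assumed: PUE ~1.4 overhead
print(f"{delta_kw:.0f} kW saved -> ${electricity:,.0f}/yr power + ${cooling:,.0f}/yr cooling")
print(f"5-year total: ${(electricity + cooling) * 5:,.0f}")
# 350 kW saved -> $306,600/yr power + $122,640/yr cooling; ~$2.1M over 5 years.
```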
The Bigger Picture
NVIDIA's $4 billion photonics investment is not about optics. It is about ensuring that the post-Moore's Law era of computing does not bottleneck at the interconnect layer. As individual GPU performance continues to scale — each generation delivering 2-3x more FLOPS — the network must scale proportionally. If it does not, the most powerful GPUs in the world become the most expensive space heaters, burning hundreds of watts while waiting for data that cannot arrive fast enough through copper traces and pluggable transceivers designed for a different era of computing.
Lumentum provides the light source backbone: CW lasers, UHP lasers, and ELS modules that generate the stable, high-quality photons silicon photonic engines need to operate. Coherent provides the broader optical subsystem stack: transceivers, VCSELs, silicon photonic ICs, driver electronics, and five CPO-specific product families that span the entire optical signal chain from laser to detector. Together, they give NVIDIA complete coverage of the photonic supply chain, with redundancy at every critical node.
The deeper strategic insight is that NVIDIA is building a vertically integrated AI factory platform. They already control the compute (GPUs), the high-speed interconnect protocol (NVLink), the networking silicon (Spectrum/Quantum switch ASICs), the networking software (NCCL, DOCA), and the system architecture (DGX/HGX). With the Lumentum and Coherent investments, they now influence the optical physical layer — the actual photons moving between chips. From silicon to light and back to silicon, NVIDIA is positioning itself to control every layer of the AI infrastructure stack. For data center engineers planning the next five years of AI infrastructure, the message is clear: the future of high-performance interconnects is optical, it is co-packaged, and NVIDIA intends to own it end to end.