Server Farm & Network Tower Architecture

32-Node B300 Server Farm
& Dedicated Network Towers

32× ASUS XA NB3I-E12 compute zone · 2× dedicated switch racks · Rail-optimized fat-tree InfiniBand

32 Compute Racks · 256 B300 GPUs · 73.7 TB HBM3e Total · 12 Q3400-RA Switches · 204.8 Tb/s IB Bisection BW · 2-hop Max IB Latency Path · ~1.15 EF FP8 Compute · 47% Headroom on the 1 MW Budget

Physical Zone Layout — 34 Racks Total

● Compute Zone — 32 Racks (C01–C32)
Each rack: 1× ASUS XA NB3I-E12 B300 · 8× B300 GPU · ~14.5 kW sustained (~15.4 kW burst) · AOC exits to network zone (5–10 m runs)

◆ Network Zone — 6 Positions
N1: IB fabric · 12× Q3400-RA
N2: Eth/Mgmt · 2× Spectrum-4 + UFM + OOB
N3–N6: Spare · 4 positions reserved for future network scale-out
32-rack compute zone is 100% occupied. 6-rack network zone is adjacent — 2 racks used for all switching hardware, 4 reserved for future expansion. AOC cables cross the zone boundary.

Physical Rack Inventory

Rack ID | Zone | Contents | Power Draw
C01–C32 | Compute | 1× ASUS XA NB3I-E12, 8× B300 GPU, 10× PSU, 10× NVMe, dual PDU | ~14.5 kW sustained / ~15.4 kW burst
N1 | Network | 8× Q3400-RA (leaf) + 4× Q3400-RA (spine; S2/S3 in overflow position N1-B) + 2× PDU | ~8.8 kW
N2 | Network | 2× Spectrum-4 + 1× UFM Appliance (1U) + 1× OOB 10 GbE switch + PDU | ~6.5 kW
N3–N6 | Network | Empty — reserved for scale-out | 0
Total | — | 34 racks (32 compute + 2 network active + 4 spare) | ~492 kW burst farm-wide (compute only)

Compute Racks C01–C32 — Per-Rack Build

Hardware per Rack

  • ASUS XA NB3I-E12 (9U at U1–U9)
  • 8× NVIDIA B300 GPU (on HGX tray)
  • 2× Intel Xeon 6776P CPU
  • 32× Samsung 128 GB DDR5-6400 RDIMM
  • 10× Samsung PM9D3a NVMe (U.2 Gen5)
  • 8× CX8 on-board IB NIC ports (800 Gb/s NDR each)
  • BlueField-3 3220 DPU (2× 400 Gb/s)
  • Intel X710-AT2 dual 10 GbE
  • 10× 3,200 W 80+ Titanium PSUs
  • PDU (A+B, rear-mount)

Per-Rack Outputs

  • Compute: ~36 PFLOPS FP8 (dense) / ~240 PFLOPS NVFP4 (sparse)
  • GPU memory: 2.304 TB HBM3e
  • NVLink BW: 14.4 TB/s (within tray)
  • IB uplinks: 8× 800 Gb/s NDR (one per leaf)
  • Eth uplinks: 2× 400 Gb/s to Spectrum-4
  • Management: 2× 10 GbE to OOB switch
  • Sustained wall draw: ~14.5 kW
  • Margin to 20 kW wall-output ceiling: +5.5 kW
ℹ️ All 32 compute racks are identical builds. This enables fast replacement, symmetric topology, and uniform UFM policy application. Each rack is an isolated failure domain — single-rack power or cooling failure does not affect neighbor racks.
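The per-rack outputs follow arithmetically from the hardware list. A minimal sketch of that derivation (Python; the 288 GB-per-GPU HBM figure is back-derived from the 2.304 TB rack total rather than quoted from a datasheet):

```python
# Per-rack derived figures (sketch). All inputs come from the parts list above;
# the per-GPU HBM value is back-derived from the rack total, not a vendor spec.
GPUS_PER_RACK       = 8
HBM_PER_GPU_GB      = 288            # assumed: 2.304 TB rack total / 8 GPUs
DIMMS, DIMM_GB      = 32, 128
CX8_PORTS, CX8_GBPS = 8, 800
BF3_PORTS, BF3_GBPS = 2, 400
SUSTAINED_KW, WALL_CEILING_KW = 14.5, 20.0

print(f"GPU memory   : {GPUS_PER_RACK * HBM_PER_GPU_GB / 1000:.3f} TB HBM3e")   # 2.304 TB
print(f"System RAM   : {DIMMS * DIMM_GB / 1024:.0f} TB DDR5")                   # 4 TB
print(f"IB uplink    : {CX8_PORTS * CX8_GBPS / 1000:.1f} Tb/s")                 # 6.4 Tb/s
print(f"Eth uplink   : {BF3_PORTS * BF3_GBPS} Gb/s")                            # 800 Gb/s
print(f"Power margin : {WALL_CEILING_KW - SUSTAINED_KW:.1f} kW below the 20 kW ceiling")
```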

Rack N1 — InfiniBand Fabric Tower

The 12 Q3400-RA switches (8 leaf + 4 spine) do not fit in a single 42U rack: 48U of switches plus patch panel and cable management exceeds 42U, so the IB fabric spans Rack N1 plus an overflow position (labelled N1-B in the layout below), occupying roughly 54U of the 84U available across two standard 42U racks.

Q3400-RA Switch Specification

  • Form factor: 4U rackmount
  • Ports: 144× 800 Gb/s NDR InfiniBand
  • Switch BW: 115.2 Tb/s full duplex
  • Latency: <130 ns
  • SHARP in-network compute: Yes
  • Power: ~500–700 W per unit
  • Management: managed via the UFM Appliance (Rack N2)
  • Cable type (intra-N1): DAC ≤3 m
  • Cable type (to compute): AOC 5–10 m
  • Fan: Front-to-rear hot-plug
  • PSU: Dual hot-plug
  • Total in farm: 12 units

Fat-Tree Two-Tier Topology

32 Server Compute Racks (C01–C32) — 8× CX8 per server, one per leaf
    ↓ 256× AOC NDR800 (32 servers × 8 CX8)
8 Leaf Switches (L0–L7) — Rail-Optimized: CX8[i] → Leaf[i] (Rails 0–7)
    ↓ 256× DAC intra-N1 (8 parallel links per leaf-spine pair × 8 leaves × 4 spines)
4 Spine Switches (S0–S3) — Full any-to-any inter-leaf path

Leaf/Spine Port Allocation

Switch | Role | Downlinks (to servers) | Uplinks (to spines) | Used / Total | Utilization
L0–L7 | Leaf (×8) | 32 ports — 1 per server (CX8[rail-i]) | 32 ports → 8 per spine | 64 / 144 | 44%
S0–S3 | Spine (×4) | — (spine-only) | 64 ports from 8 leaves | 64 / 144 | 44%
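The rail mapping and the 44% utilization figures can be reproduced mechanically. A minimal wiring-plan sketch (Python; switch port numbering is illustrative, only the counts and the CX8[i] → Leaf[i] rule come from the tables above):

```python
# Sketch of the rail-optimized wiring implied by the topology above.
SERVERS, RAILS, LEAVES, SPINES = 32, 8, 8, 4
LINKS_PER_LEAF_SPINE_PAIR = 8            # 32 uplinks per leaf spread over 4 spines
PORTS_PER_SWITCH = 144

# Rail-optimized downlinks: CX8[i] of every server lands on leaf L[i].
downlinks = [(f"C{s+1:02d}", f"CX8[{r}]", f"L{r}")
             for s in range(SERVERS) for r in range(RAILS)]
# Uplinks: every leaf runs 8 parallel DAC links to each of the 4 spines.
uplinks = [(f"L{l}", f"S{sp}", k)
           for l in range(LEAVES) for sp in range(SPINES)
           for k in range(LINKS_PER_LEAF_SPINE_PAIR)]

assert len(downlinks) == 256             # AOC NDR800 cables crossing the zone boundary
assert len(uplinks)   == 256             # DAC cables inside the N1 tower

leaf_used  = SERVERS + SPINES * LINKS_PER_LEAF_SPINE_PAIR   # 32 down + 32 up
spine_used = LEAVES * LINKS_PER_LEAF_SPINE_PAIR             # 64 leaf-facing
print(f"leaf : {leaf_used}/{PORTS_PER_SWITCH} used ({leaf_used/PORTS_PER_SWITCH:.0%})")
print(f"spine: {spine_used}/{PORTS_PER_SWITCH} used ({spine_used/PORTS_PER_SWITCH:.0%})")
assert SERVERS == SPINES * LINKS_PER_LEAF_SPINE_PAIR        # 1:1 non-blocking per leaf
```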

Rack N1 / N1-B Physical Layout (42U racks)

Position | Component | Unit Size
U1–U4 | Q3400-RA Leaf L0 | 4U
U5–U8 | Q3400-RA Leaf L1 | 4U
U9–U12 | Q3400-RA Leaf L2 | 4U
U13–U16 | Q3400-RA Leaf L3 | 4U
U17–U20 | Q3400-RA Leaf L4 | 4U
U21–U24 | Q3400-RA Leaf L5 | 4U
U25–U28 | Q3400-RA Leaf L6 | 4U
U29–U32 | Q3400-RA Leaf L7 | 4U
U33 | Patch panel / cable tray | 1U
U34–U37 | Q3400-RA Spine S0 | 4U
U38–U41 | Q3400-RA Spine S1 | 4U
Rack N1-B, U1–U4 | Q3400-RA Spine S2 | 4U
Rack N1-B, U5–U8 | Q3400-RA Spine S3 | 4U
Total | 12 × 4U = 48U of switches across N1 / N1-B | 48U
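The split across N1 and N1-B falls directly out of the 4U form factor. A minimal first-fit allocation sketch (Python; the allocator itself is illustrative, the unit names and sizes come from the table above):

```python
# First-fit U-position allocation behind the N1 / N1-B layout table (sketch).
def allocate(units, rack_names=("N1", "N1-B"), rack_height=42):
    rack_idx, u, placements = 0, 1, []
    for name, size in units:
        if u + size - 1 > rack_height:        # no contiguous room left: spill over
            rack_idx, u = rack_idx + 1, 1
        placements.append((rack_names[rack_idx], f"U{u}-U{u + size - 1}", name))
        u += size
    return placements

units  = [(f"Q3400-RA Leaf L{i}", 4) for i in range(8)]    # 8 x 4U leaves
units += [("Patch panel / cable tray", 1)]                 # 1U at U33
units += [(f"Q3400-RA Spine S{i}", 4) for i in range(4)]   # 4 x 4U spines
for rack, pos, name in allocate(units):
    print(f"{rack:5s} {pos:8s} {name}")
# Leaves fill U1-U32, patch panel takes U33, S0/S1 take U34-U41; S2 would need
# U42-U45, so it and S3 spill into N1-B, giving 48U of switches across two racks.
```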

Rack N2 — Ethernet, UFM & Management Tower

Spectrum-4 Ethernet Switch ×2 (Active-Active)

Attribute | Value
Model | NVIDIA Spectrum-4
Form factor | 2U rackmount
Ports | 128× 400 GbE (or 64× 800 GbE)
Switch BW | 51.2 Tb/s full duplex
Deployment | Active-active (both switches carry traffic simultaneously)
Uplinks from BF3 | 64 BF3 DPU ports × 400 Gb/s = 25.6 Tb/s aggregate
Role | Ethernet storage fabric, RDMA over Ethernet, east-west traffic
Port utilization | 32 BF3 uplinks per switch / 128 ports = 25%

UFM Appliance (1U)

  • Hardware: 1U dedicated UFM Appliance
  • Capacity: 648-port managed domain — well above this fabric's 268 managed endpoints (256 server CX8 ports + 12 switches); the raw switch port total (12 × 144 = 1,728) does not count against the domain limit
  • Functions: SM (Subnet Manager), routing engine, SHARP orchestration, telemetry
  • UFM Agents: 32× software agents, one per server OS, report to this appliance
  • Connection: 10 GbE management port → OOB switch
  • Redundancy: single appliance (all IB fabric config state is in hardware switches)

OOB Management Switch (1U)

  • Protocol: 10 GbE
  • Port count: ≥96 ports (32 OS mgmt + 32 BMC/IPMI + 12 Q3400-RA mgmt + 2 Spectrum-4 mgmt + 1 UFM = 79 device connections, plus uplinks ≈ 80 required)
  • Connected devices: all 32 servers' X710-AT2 ports (OS mgmt + BMC), 12× Q3400-RA and 2× Spectrum-4 management ports, UFM Appliance
  • VLAN config: dedicated out-of-band management VLAN isolated from data plane

Rack N2 Physical Layout (42U)

Position | Component | Size
U1–U2 | Spectrum-4 #1 (Active) | 2U
U3–U4 | Spectrum-4 #2 (Active) | 2U
U5 | UFM Appliance | 1U
U6 | OOB 10 GbE Management Switch | 1U
U7–U8 | Patch panel (Eth to compute) | 2U
U9–U42 | Empty / cable management / future | 34U
Active equipment height | — | 8U of 42U

Inter-Zone Cable Infrastructure

All server-to-network cables cross the zone boundary between compute racks (C01–C32) and network racks (N1, N2), with run lengths of ~5–15 m. This mandates AOC (active optical cable) for IB and Ethernet, while 10 GbE management runs on copper Cat6A; leaf-to-spine links stay inside the N1 IB tower on short DAC.

Cable Category | Count | Type | Speed | Route
CX8 → Q3400-RA Leaf (IB) | 256 | AOC NDR800 | 800 Gb/s each | 32 servers × 8 CX8 → 8 leaf switches
BF3 DPU → Spectrum-4 (Eth) | 64 | AOC NDR400 | 400 Gb/s each | 32 servers × 2 BF3 ports → 2 Spectrum-4
X710 OS Mgmt → OOB switch | 32 | Cat6A | 10 GbE | 32 servers × port 0 → OOB
X710 BMC/IPMI → OOB switch | 32 | Cat6A | 10 GbE | 32 servers × port 1 → OOB
Q3400-RA leaf ↔ spine (intra-N1) | 256 | DAC ≤3 m | 800 Gb/s each | 8 parallel links per leaf-spine pair × 8 leaves × 4 spines = 256 cables
Cross-zone total | 384 | 256 IB AOC + 64 Eth AOC + 64 Cat6A mgmt | — | —
N1 intra-rack total | 256 | DAC | — | Leaf-to-spine within the N1 / N1-B IB tower only
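The cable counts can be cross-checked in a few lines. A minimal bill-of-materials tally (Python; labels are shorthand for the categories in the table above):

```python
# Cable tally for the inter-zone table above (counts only; sketch).
cables = {
    ("CX8 -> leaf",         "AOC NDR800", "cross-zone"): 32 * 8,      # 256
    ("BF3 -> Spectrum-4",   "AOC NDR400", "cross-zone"): 32 * 2,      # 64
    ("X710 OS mgmt -> OOB", "Cat6A",      "cross-zone"): 32,
    ("X710 BMC -> OOB",     "Cat6A",      "cross-zone"): 32,
    ("leaf <-> spine",      "DAC <=3 m",  "intra-N1"):   8 * 8 * 4,   # 256
}
cross_zone = sum(n for (_, _, zone), n in cables.items() if zone == "cross-zone")
intra_n1   = sum(n for (_, _, zone), n in cables.items() if zone == "intra-N1")
print(f"cross-zone cables: {cross_zone}")               # 384
print(f"intra-N1 DAC     : {intra_n1}")                 # 256
print(f"grand total      : {cross_zone + intra_n1}")    # 640
```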

Aggregate Performance Summary

Metric | Per Rack | Farm Total (×32)
FP8 Tensor Core (dense) | ~36 PFLOPS | ~1,152 PFLOPS
NVFP4 Tensor Core (sparse) | ~240 PFLOPS | ~7,680 PFLOPS
GPU memory bandwidth | ~64 TB/s (8 GPUs × 8 TB/s) | ~2,048 TB/s
NVLink BW (intra-server) | 14.4 TB/s | 14.4 TB/s (within node)
IB fabric BW (server uplinks) | 6.4 Tb/s (8 × 800 Gb/s) | 204.8 Tb/s
IB bisection BW | — | 204.8 Tb/s (1:1 non-blocking)
Ethernet / DPU fabric BW | 800 Gb/s (2 × 400) | 25.6 Tb/s
GPU memory total | 2.304 TB | 73.73 TB HBM3e
System RAM total | 4 TB DDR5 | 128 TB DDR5
NVMe storage total | 34.56 TB | ~1,106 TB
Peak power (burst) | ~15.4 kW | ~492 kW (compute racks only)
Max latency (cross-node IB) | — | <200 ns (2-hop leaf→spine→leaf)
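A minimal sketch recomputing the farm column from the per-rack column (Python; per-rack values are taken as given from the table, nothing new is assumed beyond the 32-rack multiplier):

```python
# Farm-level aggregates recomputed from the per-rack figures above (sketch).
RACKS = 32
per_rack = {
    "FP8 dense (PFLOPS)":    36,
    "NVFP4 sparse (PFLOPS)": 240,
    "HBM3e (TB)":            2.304,
    "DDR5 (TB)":             4,
    "NVMe (TB)":             34.56,
    "IB uplink (Tb/s)":      6.4,
    "Eth/DPU (Tb/s)":        0.8,
    "Burst power (kW)":      15.4,
}
for metric, value in per_rack.items():
    print(f"{metric:22s} per rack {value:>8} | farm {value * RACKS:>10,.1f}")
# The 204.8 Tb/s bisection figure above equals the aggregate server uplink BW
# (256 CX8 ports x 800 Gb/s) of the 1:1 non-blocking fat tree.
```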
IB Fabric — 1:1 Non-Blocking: 204.8 Tb/s bisection bandwidth · <200 ns hop latency · SHARP enabled
Compute (FP8 Dense): 1,152 PFLOPS · 256× B300 GPUs · NVLink 5 intra-node · PCIe 6 host

Future Expansion Headroom

Resource | Current Use | Capacity | Headroom
1 MW power budget | ~530 kW (compute burst + network + cooling overhead) | 1,000 kW | ~470 kW (47%)
Network zone rack slots | 2 of 6 used | 6 positions | 4 spare racks
Leaf switch ports | 64 used per leaf (32 downlinks + 32 uplinks) | 144 ports per Q3400-RA | 80 ports spare per leaf
Spine switch ports | 64 leaf-facing ports used per spine | 144 ports per Q3400-RA | 80 ports spare per spine
Spectrum-4 ports | 32 per switch (BF3 uplinks) | 128 ports per switch | 96 ports spare per switch
OOB switch ports | ~80 required (32 OS + 32 BMC + 12 switch mgmt + 2 Sp4 + 1 UFM + uplinks) | 96-port switch | ~16 ports spare
UFM Appliance capacity | 256 server + 12 switch ports | 648 ports (domain capacity) | Available for scale
The current build can be expanded significantly without topology changes. Each leaf switch has 80 unused ports — enough to attach additional servers without new switches, as long as matching uplinks are added to keep the fabric 1:1 non-blocking. The 4 spare network-zone racks allow new IB tiers if needed, and ~470 kW of power headroom still funds substantial additional compute.
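As a closing check, the headroom arithmetic can be reproduced directly. A minimal sketch (Python; the cooling-overhead term is inferred from the ~530 kW total rather than stated separately anywhere above):

```python
# Headroom arithmetic behind the expansion table (sketch).
BUDGET_KW     = 1000
compute_burst = 32 * 15.4                  # 492.8 kW, compute racks only
network       = 8.8 + 6.5                  # racks N1 + N2
current_use   = 530                        # table value, incl. cooling overhead
print(f"compute + network   : {compute_burst + network:.1f} kW")
print(f"implied cooling ovh : {current_use - (compute_burst + network):.1f} kW")
print(f"headroom            : {BUDGET_KW - current_use} kW "
      f"({(BUDGET_KW - current_use) / BUDGET_KW:.0%})")   # 470 kW, 47%

# Spare ports per device class (used vs. total, from the table above).
for name, used, total in [("leaf", 64, 144), ("spine", 64, 144),
                          ("Spectrum-4", 32, 128), ("OOB switch", 80, 96)]:
    print(f"{name:11s}: {total - used} ports spare")
```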