FuriosaAI

RNGD Product Page

Meet RNGD - 2nd-gen AI accelerator.

The most efficient data center accelerator for high-performance LLM and multimodal deployment

512 TFLOPS
64 TFLOPS (FP8) x 8 Processing Elements
48 GB
HBM3 Memory Capacity
1.5 TB/s
Memory Bandwidth
150 W
Thermal Design Power

Tensor Contraction Processor

The Tensor Contraction Processor (TCP) is the compute architecture underlying Furiosa accelerators. With tensor operations as first-class citizens, TCP unlocks unparalleled energy efficiency.
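
For readers unfamiliar with the term, a tensor contraction generalizes matrix multiplication by summing over shared indices. The NumPy sketch below is illustrative only; it shows how common deep learning operators reduce to contractions and does not represent TCP hardware or Furiosa compiler output.

```python
import numpy as np

# Illustrative only: these einsum calls show what "tensor contraction" means.
# They are not RNGD/TCP code.

# Plain matrix multiply: contract over the shared index k.
A = np.random.rand(128, 64)
B = np.random.rand(64, 256)
C = np.einsum("ik,kj->ij", A, B)          # shape (128, 256)

# Batched attention scores: contract query and key over the head dimension d.
Q = np.random.rand(8, 512, 64)            # (batch, seq, d)
K = np.random.rand(8, 512, 64)
scores = np.einsum("bqd,bkd->bqk", Q, K)  # shape (8, 512, 512)

# A 2D convolution can likewise be lowered to a contraction over
# (in_channels, kernel_h, kernel_w) once the input is laid out as patches.
```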

Performance results

Llama 2 7B

Energy Efficiency: Perf/Watt (tokens/s/W), higher is better

3x energy efficiency at Batch=32, Input Length=2K, Output Length=2K
4x energy efficiency at Batch=16, Input Length=2K, Output Length=2K

Latency (ms), lower is better

RNGD vs. H100: Batch=1, Sequence Length=128
RNGD vs. L40S: Batch=1, Sequence Length=128

Throughput (tokens/s), higher is better

RNGD vs. H100: Batch=16, Input Length=2K, Output Length=2K
RNGD vs. L40S: Batch=32, Input Length=2K, Output Length=2K
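
For reference, the sketch below shows how a Perf/Watt (tokens/s/W) figure is derived. It is a hypothetical illustration: the throughput values are placeholders, not benchmark results; only the TDP figures (150 W for RNGD, 700 W for H100) come from the comparison table below.

```python
# Hypothetical sketch of how an energy-efficiency (Perf/Watt) figure is derived.
# Throughput values below are placeholders, NOT measured results.

def perf_per_watt(tokens_per_second: float, power_watts: float) -> float:
    """Energy efficiency in tokens/s/W."""
    return tokens_per_second / power_watts

rngd_throughput = 1000.0   # placeholder tokens/s
h100_throughput = 2000.0   # placeholder tokens/s

rngd_eff = perf_per_watt(rngd_throughput, 150.0)   # RNGD TDP from the spec table
h100_eff = perf_per_watt(h100_throughput, 700.0)   # H100 TDP from the spec table

# Even at lower absolute throughput, a low-power accelerator can come out
# ahead on tokens/s/W.
print(f"RNGD: {rngd_eff:.2f} tokens/s/W, H100: {h100_eff:.2f} tokens/s/W")
print(f"Relative efficiency: {rngd_eff / h100_eff:.1f}x")
```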

                         RNGD        H100        L40S
Technology               TSMC 5nm    TSMC 4nm    TSMC 5nm
BF16/FP8 (TFLOPS)        256/512     989/1979    362/733
INT8/INT4 (TOPS)         512/1024    1979/-      733/733
Memory Capacity (GB)     48          80          48
Memory Bandwidth (TB/s)  1.5         3.35        0.86
Host I/F                 Gen5 x16    Gen5 x16    Gen4 x16
TDP (W)                  150         700         350

Disclaimer: Measurements were taken internally by FuriosaAI based on current specifications and/or internal engineering calculations. Nvidia results were retrieved from the Nvidia website, https://developer.nvidia.com/deep-learning-performance-training-inference/ai-inference, on February 14, 2024.

Purpose-built for tensor contraction

Uniquely designed for AI inference deployment, Furiosa TCP unlocks superior utilization, performance, and energy efficiency.

AI models structure data as tensors of various shapes. The RNGD chip fully exploits parallelism and data reuse by flexibly adapting to each tensor contraction with software-defined tactics and by supporting model-wise operator fusion, as sketched below.
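
To illustrate what operator fusion means in general, the conceptual NumPy sketch below contrasts an unfused chain of operators with a fused one. This is an assumption-laden illustration of the generic technique, not Furiosa compiler output; in NumPy the "fused" function is only notational, whereas a real fused kernel computes the chain in one pass without round-tripping intermediates through memory.

```python
import numpy as np

# Conceptual illustration of operator fusion; not RNGD compiler output.
x = np.random.rand(512, 1024).astype(np.float32)
w = np.random.rand(1024, 4096).astype(np.float32)
b = np.random.rand(4096).astype(np.float32)

def gelu(t):
    # tanh approximation of GELU
    return 0.5 * t * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (t + 0.044715 * t**3)))

# Unfused: each operator materializes a full intermediate tensor.
h = x @ w            # matmul
h = h + b            # bias add
y_unfused = gelu(h)  # activation

# "Fused" (notationally): a real fused kernel would evaluate this chain in a
# single pass over the data, which is what saves memory bandwidth and energy
# on an accelerator.
def fused_matmul_bias_gelu(x, w, b):
    return gelu(x @ w + b)

y_fused = fused_matmul_bias_gelu(x, w, b)
assert np.allclose(y_unfused, y_fused)
```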

RNGD Series

RNGD-S 2025

Leadership performance for creatives, media and entertainment, and video AI

RNGD Q3 2024

Versatile cloud and on-prem LLM and multimodal deployment

128 TFLOPS
48 GB Memory Capacity
PCIe x16

RNGD-Max 2025

Powerful cloud and on-prem LLM and multimodal deployment