Available now

Meet WARBOY, our first-generation AI accelerator powering vision applications

what our partners are saying

They achieved 3x lower total cost of ownership (TCO) vs. current GPUs

“Using WARBOY and FuriosaAI’s software helped streamline our end-to-end video processing pipeline for one of Korea’s largest online education services. We improved overall service latency while reducing costs for our company and customers.”

A leading streaming media company in South Korea

75 AI companies will be deploying AI services powered by FuriosaAI’s WARBOY. This project is in partnership with the publicly funded Korean AI Data Center and two hyperscale cloud service providers.

Kakao Enterprise and Naver Cloud Platform logos
WARBOY in action at one of our partners' data centers

Demos & applications

WARBOY is already being used for these models across a wide range of industries

Software that will scale with your business

WARBOY saw a 2x improvement in MLPerf with our compiler update

Offline Throughput: Object Detection (SSD-MobileNetV1)


Official MLPerf™ score of v2.0 Inference Edge: Closed. Retrieved from https://mlcommons.org/en/inference-edge-20/ 12 May 2022, entry 2.0-142. The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.

In our first MLPerf submission (v1.1, the MLCommons benchmark), WARBOY outperformed NVIDIA’s T4.

In our second MLPerf submission (v2.0), performance improved by 113%, thanks to the time and effort our software team invested in developing our compiler.

Our strength in programmability

Single-Stream Performance (images-per-second throughput)

Top: Image Classification (ResNet-50) / Bottom: Object Detection (SSD-Small)

Performance/Price: Image Classification (ResNet-50)


MLPerf™ v1.1 Inference Closed; SSD-Small: 1.1-071, 1.1-129; ResNet-50: 1.1-099, 1.1-129. The Furiosa WARBOY result was submitted in the Preview category, and the NVIDIA T4 SSD-Small and ResNet-50 results were submitted by Alibaba and Lenovo, respectively, in the Available category. Price is not the primary metric of MLPerf. The MLPerf name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.

Enterprise-Ready & Cloud-Ready Stack

01

Seamless with your workflow today

We fit into your current workflow with intuitive APIs and building blocks, supporting PyTorch, ONNX, and TensorFlow Lite models as well as NumPy data.
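
Below is a minimal sketch of how a model and data from an existing workflow get ready for WARBOY, assuming a PyTorch ResNet-50 and illustrative shapes and file paths, using only standard PyTorch and NumPy APIs; the FuriosaAI runtime call that consumes the exported model is intentionally not shown here.

# Minimal sketch: export an existing PyTorch model to ONNX and prepare NumPy
# inputs. Model choice, shapes, and paths are illustrative placeholders; the
# FuriosaAI runtime API that consumes model.onnx is not shown here.
import numpy as np
import torch
import torchvision

# Any PyTorch model you already serve; ResNet-50 is just an example.
model = torchvision.models.resnet50(weights=None).eval()

# Export to ONNX, one of the interchange formats listed above.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["output"],
)

# Inference inputs stay as plain NumPy arrays from your existing pipeline;
# model.onnx plus this batch is what the WARBOY toolchain takes from here.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)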

02

Advanced tools & support for model optimization

Our advanced compiler optimizes any model for the best performance-per-watt deployment, and our profiling tools detect hotspots, so together we can optimize even further.
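
FuriosaAI’s own profiler is not shown on this page; as a generic sketch of the kind of hotspot analysis this step involves, the snippet below ranks the most expensive operators in a PyTorch model before optimization, using standard torch.profiler rather than the FuriosaAI toolchain.

# Generic hotspot sketch using standard PyTorch profiling (not FuriosaAI's profiler).
import torch
import torchvision
from torch.profiler import profile, ProfilerActivity

model = torchvision.models.resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

# Profile one inference pass and rank operators by total CPU time.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    with torch.no_grad():
        model(x)

# The slowest operators are the hotspots worth revisiting after compilation.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))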

03

Extensible and scalable deployment

With its multiple PEs (processing elements), WARBOY can scale from half of a single device up to ten times that capacity. Our software stack is compatible with data center enablers such as containers and Kubernetes for rapidly scaling up AI workloads.
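
As a minimal sketch of Kubernetes-based scale-out, assuming a containerized inference service and a device resource exposed by an NPU device plugin; the image name, resource name, and labels below are placeholders rather than FuriosaAI’s actual artifact names.

# Kubernetes scale-out sketch using the official Python client.
# Image name, device resource name, and labels are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a cluster

NPU_RESOURCE = "example.com/npu"  # placeholder for the device plugin's resource name

container = client.V1Container(
    name="inference-server",
    image="registry.example.com/warboy-inference:latest",  # placeholder image
    resources=client.V1ResourceRequirements(limits={NPU_RESOURCE: "1"}),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="warboy-inference"),
    spec=client.V1DeploymentSpec(
        replicas=4,  # scale the service out by raising the replica count
        selector=client.V1LabelSelector(match_labels={"app": "inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)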

let’s talk about what warboy can do for you

What models are you using? Check out our Model Zoo library to see all the models we support. If your model isn’t on the list, our team will work with you to find the right solution for your model.

WARBOY is available now for testing and sales.