AI chips for visionaries

Unlock the next frontier of AI deployment with our NPU solutions

Supercharge your AI Deployment

FuriosaAI is creating next-generation NPU (neural processing unit) products to help you unlock the next frontier of AI deployment.

Our AI chips will reduce your costs and energy consumption so you can innovate without constraint.

Our three core values below show what we can bring to your business. We’re excited to work with you and build the future of AI together.

  1. High performance and reduced cost
  2. Future-proof your ML workflow
  3. Reduce your carbon footprint

Introducing our powerful NPU chips

Since 2017, we’ve been working tirelessly to realize our vision of unlocking the next frontier of AI deployment.

We’re excited to introduce you to our two NPU products.


Meet WARBOY, our first-gen NPU

  • Accelerate your most powerful vision applications
  • Reduce your TCO (total cost of ownership) by up to 3x
Our 2nd-gen chip (concept only) is coming in Q2 2024.

  • Designed to be the only viable alternative to the H100 for accelerating ChatGPT-scale models, at half the power consumption
  • Directly targets 100B-parameter-scale LLMs
  • Supports most major production-level AI models, including GPT-3, LLaMA, and Stable Diffusion

Enterprise-Ready & Cloud-Ready Stack

01. Seamless with your workflow today

We fit into your current workflow with intuitive APIs and familiar building blocks: PyTorch, ONNX, and TensorFlow Lite models, plus NumPy data.
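
To make that hand-off concrete, here is a minimal sketch: an existing PyTorch model is exported to ONNX and the inference input stays plain NumPy. The commented-out npu_runtime session calls at the end are assumptions standing in for an NPU SDK, not its actual API.

    # Minimal sketch: start from a PyTorch model, export to ONNX, keep data in NumPy.
    import numpy as np
    import torch

    # A toy stand-in for an existing production model.
    model = torch.nn.Sequential(torch.nn.Linear(224, 64), torch.nn.ReLU())
    model.eval()

    # Export to ONNX, the interchange format the toolchain consumes.
    example_input = torch.randn(1, 224)
    torch.onnx.export(model, example_input, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # Inference-time data remains ordinary NumPy arrays.
    batch = np.random.rand(1, 224).astype(np.float32)

    # Hypothetical runtime calls (placeholder names, not the actual SDK API):
    # session = npu_runtime.create_session("model.onnx")
    # outputs = session.run(batch)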

02. Advanced tools & support for model optimization

Our advanced compiler optimizes any model for the best performance-per-watt deployment. We also provide profiling tools to detect hotspots, so together we can optimize even further.
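
As a rough illustration of what hotspot hunting looks like, the sketch below times each stage of a toy pipeline to show where optimization effort should go; it is a generic Python timing loop with placeholder stage bodies, not our profiling tools themselves.

    # Generic hotspot check: time each stage of a toy inference pipeline.
    # The stage bodies are placeholders; real profiling uses dedicated tooling.
    import time
    import numpy as np

    def preprocess(x):
        return (x / 255.0).astype(np.float32)

    def infer(x):
        # Stand-in for a compiled-model call on the accelerator.
        weights = np.random.rand(x.shape[-1], 10).astype(np.float32)
        return x @ weights

    def postprocess(y):
        return y.argmax(axis=-1)

    data = np.random.randint(0, 256, size=(1, 224)).astype(np.float32)
    for name, stage in [("preprocess", preprocess), ("infer", infer), ("postprocess", postprocess)]:
        start = time.perf_counter()
        data = stage(data)
        print(f"{name}: {(time.perf_counter() - start) * 1e3:.3f} ms")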

03. Extensible and scalable deployment

WARBOY can scale from using half of a single chip up to ten times its capacity with multiple PEs (processing elements). Our software stack is compatible with data center enablers such as containers and Kubernetes for rapidly scaling up AI workloads.
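
The sketch below shows the general fan-out pattern: requests are round-robined across several devices or PEs from one process. The device handles and the run_on_device body are hypothetical placeholders; in a cluster the same idea applies per container rather than per thread.

    # Sketch of scale-out dispatch: one worker per device/PE, requests round-robined across them.
    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    NUM_DEVICES = 4  # however many PEs or cards this container can see (hypothetical)

    def run_on_device(device_id, batch):
        # Placeholder for a per-device session.run(batch) call.
        return {"device": device_id, "mean": float(batch.mean())}

    batches = [np.random.rand(8, 224).astype(np.float32) for _ in range(16)]
    with ThreadPoolExecutor(max_workers=NUM_DEVICES) as pool:
        futures = [pool.submit(run_on_device, i % NUM_DEVICES, b)
                   for i, b in enumerate(batches)]
        results = [f.result() for f in futures]
    print(len(results), "batches processed")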

Our Product Principles

The design approach for NPUs is different from traditional silicon design, in that one must consider both hardware and software optimization from the start.

Here are our three product principles that have helped us stand out from other NPU solutions:

  1. Programmability: Can it get the best utilization of hardware while catering to the demands of various model types and sizes?
  2. Efficiency: Can it maximize computational performance out of a given amount of energy? Also referred to as “superior performance-per-watt” (illustrated after this list).
  3. Ease-of-use in deployment: Can the engineers seamlessly continue their workflow during development and deployment with the new solution?
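
For a quick sense of what “performance-per-watt” means in practice, here is a tiny illustrative calculation; the throughput and power figures are made up for the example, not measurements of any of our chips.

    # Illustrative performance-per-watt arithmetic (made-up numbers, not measurements).
    throughput_inferences_per_s = 3000   # hypothetical sustained throughput
    power_watts = 60                     # hypothetical board power under load
    perf_per_watt = throughput_inferences_per_s / power_watts
    print(f"{perf_per_watt:.1f} inferences/s per watt")  # -> 50.0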

Team & Key Milestones

Established in South Korea by engineers from AMD, Samsung, and Qualcomm, FuriosaAI now has ~100 employees, with leaders and global advisors worldwide. Our leadership brings expertise from Meta AI, Western Digital, Sun Microsystems, Groq, and Intel, and we have team members across the U.S. and Germany.

  • 2017: Establishment of FuriosaAI
  • 2017: Seed funding with backing from Naver
  • 2019: Successful Series A fundraising
  • 2020: Launch of WARBOY silicon and software development
  • 2021: Became the first NPU player in Korea to submit MLPerf v1.1 results
  • 2021: Successful Series B fundraising
  • 2022: Partnership with Hugging Face
  • 2022: First commercial deployment
  • 2022: Commenced volume production of WARBOY
  • 2023: Won bid to supply NPUs to a public AI datacenter in Korea
  • 2024: Launch of our 2nd-gen chip in Q2

Partner ecosystem

FuriosaAI collaborates with industry-leading partners worldwide to deliver the most advanced AI infrastructure to the market.

Contact us below to become a partner.

Latest news