
Quadric Chimera GPNPU
Flexibility of a Processor + Efficiency of an NPU Accelerator

Description
The Quadric Chimera GPNPU is an advanced, licensable processor architecture specifically engineered for on-device artificial intelligence computing. It uniquely combines high machine learning (ML) inference performance with the capability to run complex C++ code directly, thereby eliminating the common need for developers to partition application code across multiple, disparate processor types. This integrated approach significantly simplifies System-on-Chip (SoC) hardware design and accelerates both the porting of new ML models and application software development.
Quadric's Chimera GPNPU architecture is highly scalable, offering performance from 1 to 864 TOPS (tera operations per second), making it suitable for a wide array of markets and applications. It is designed to run all types of ML networks, including classical backbones, modern vision transformers, and large language models. For specialized needs, a safety-enhanced, ASIL-ready version is available for demanding automotive designs. Together, these options cover a broad range of AI processing requirements on edge devices.
Key Features
- Unified Execution Pipeline: Handles matrix operations, vector operations, and scalar (control) C++ code in a single pipeline, removing the need to partition application code.
- Broad ML Model Compatibility: Runs all types of ML networks, including classical backbones, vision transformers, and large language models (LLMs).
- Scalable Performance: Offers a wide performance range from 1 to 864 TOPS, catering to various application segments.
- Automotive-Ready Design: Includes safety-enhanced, ASIL-ready cores suitable for automotive applications.
- SoC Design Simplification: Streamlines SoC hardware design and accelerates ML model porting and software development.
- Comprehensive SDK Support: Accompanied by the Chimera SDK featuring world-class compilers and Quadric DevStudio for easy visualization of SoC design choices.
Use Cases
- On-device AI computation for edge devices
- Advanced Driver-Assistance Systems (ADAS) and automotive AI
- High-performance ML inference acceleration
- Running Large Language Models (LLMs) locally on hardware
- Deployment of Vision Transformer (ViT) models
- Simplifying System-on-Chip (SoC) designs with integrated AI capabilities
- Accelerating applications requiring both ML and complex C++ processing
Frequently Asked Questions
What types of machine learning models can the Quadric Chimera GPNPU run?
The Chimera GPNPU is designed to run all ML networks, including classical backbones, vision transformers, and large language models (LLMs).
How does the Chimera GPNPU simplify System-on-Chip (SoC) design?
It simplifies SoC hardware design and speeds up ML model porting by using a single architecture for ML inference plus pre- and post-processing. This allows it to handle matrix, vector, and scalar (control) code in one execution pipeline, eliminating the need to artificially partition application code between different processor types.
What is the performance scalability of the Chimera GPNPU?
The Chimera GPNPU architecture is licensable and scales from 1 up to 864 TOPS (tera operations per second), catering to a wide range of application needs.
Is the Chimera GPNPU suitable for automotive applications?
Yes, the Chimera GPNPU serves all markets and includes a safety-enhanced version with ASIL-ready cores specifically designed for automotive applications.