Chapter 4

Software

From CUDA to Inference Engines


The software stack for inference has four layers of abstraction:

  • CUDA: Direct communication to the GPU for explicit control over computations and memory
  • Deep learning frameworks: Abstractions over CUDA for training, exporting, and running neural networks in Python
  • Inference engines: Highly configurable PyTorch-backed inference for common architectures
  • NVIDIA Dynamo: Sits on top of inference engines to power large-scale deployments

Most inference engineering today happens at the higher levels, configuring and deploying inference engines. No matter what level you work at, it's essential to have a strong mental model for the adjacent levels.

CUDA

📖 CUDA

CUDA (Compute Unified Device Architecture) is NVIDIA's proprietary computing platform and programming model for executing parallel tasks on GPUs. It's the foundation for the entire generative AI ecosystem on NVIDIA hardware.

CUDA has four key components:

  • CUDA kernel: A function that executes parallelized code on the GPU
  • CUDA graph: A DAG of kernels and GPU operations for optimizing repeated workflows
  • CUDA driver: Low-level interface between the application and GPU hardware
  • CUDA runtime: Developer-facing API for launching kernels and managing memory

CUDA is not a standalone programming language — programs are written in C++ with CUDA extensions, then compiled into separate CPU (host) and GPU (device) code by a compiler like nvcc.

Writing CUDA kernels shifts inference engineering from thinking about algorithms to thinking about implementations. The traditional attention algorithm can be expressed in a few dozen lines, but FlashAttention — the same mathematical operation — takes tens of thousands of lines to implement efficiently for a specific GPU.

CUDA Kernels for Inference

The prior art here predates CUDA by decades: BLAS (Basic Linear Algebra Subprograms) is a specification for common linear algebra operations. cuBLAS implements it on NVIDIA GPUs, and cuDNN provides higher-level neural network primitives.

The most frequently used operation is GEMM (General Matrix-Matrix Multiplication). Key libraries:

Library      Purpose
cuBLAS       Pre-built kernels for essential linear algebra
CUTLASS      Template library for writing high-performance kernels (used by FlashAttention 3)
CuTe         Abstractions for tiled tensor operations on recent architectures
FlashInfer   High-performance attention kernels and fused sampling functions
DeepGEMM     Efficient FP8 GEMM kernels from the DeepSeek team
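As a concrete anchor for what all of these libraries compute, here is a pure-Python sketch of the BLAS GEMM contract, C = alpha·A·B + beta·C. The function name and triple loop are illustrative only — no GPU kernel is written this way, but every optimized kernel must produce this result:

```python
# Pure-Python reference for the BLAS GEMM contract: C = alpha * A @ B + beta * C.
# Library kernels (cuBLAS, CUTLASS, DeepGEMM) compute the same result, but tiled
# across thousands of GPU threads instead of three nested loops.

def gemm(A, B, C, alpha=1.0, beta=1.0):
    m, k = len(A), len(A[0])
    n = len(B[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += A[i][p] * B[p][j]
            out[i][j] = alpha * acc + beta * C[i][j]
    return out
```

For example, `gemm([[1, 2], [3, 4]], [[5, 6], [7, 8]], [[0, 0], [0, 0]])` returns `[[19.0, 22.0], [43.0, 50.0]]`.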

CUDA Kernel Selection

Kernel implementations are highly specialized with hard-coded values based on specific GPU hardware. A kernel written for an H100 will likely not take advantage of B200 architecture. Most kernel selection is automatic — deep learning frameworks have pre-configured kernels for various architectures.
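Conceptually, that automatic selection is a dispatch table keyed on the GPU's compute capability. The sketch below is hypothetical — the registry, kernel names, and lookup are invented for illustration, not taken from any real framework:

```python
# Hypothetical sketch of architecture-based kernel selection. Frameworks key
# dispatch on the GPU's compute capability (e.g. 9.0 for Hopper/H100) and fall
# back to a slower generic kernel when no tuned variant exists.

KERNEL_REGISTRY = {
    "attention": {
        (9, 0): "flash_attention_hopper",      # hand-tuned for H100
        (10, 0): "flash_attention_blackwell",  # hand-tuned for B200
    }
}

def select_kernel(op, compute_capability):
    variants = KERNEL_REGISTRY.get(op, {})
    return variants.get(compute_capability, f"{op}_generic")
```

An H100 request resolves to the tuned variant, while an older or unlisted architecture silently falls back: `select_kernel("attention", (8, 0))` returns `"attention_generic"`.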

Reducing Memory Accesses with Kernel Fusion

Running two kernels back-to-back on the same data creates unnecessary round-trips to memory. Kernel fusion combines multiple kernels into a single kernel that handles both operations, eliminating intermediate reads and writes.
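A toy accounting model makes the saving explicit. For an element-wise op like y = relu(x + b) over n elements, counting reads and writes (this model is a simplification — it ignores caches and counts elements, not bytes):

```python
# Toy model of why fusion helps: count element-wise memory traffic for
# y = relu(x + b) executed as two kernels vs. one fused kernel.

def unfused_traffic(n):
    # kernel 1 (add):  read x, read b, write tmp -> 3n
    # kernel 2 (relu): read tmp, write y         -> 2n
    return 3 * n + 2 * n

def fused_traffic(n):
    # one kernel: read x, read b, write y -> 3n
    # the intermediate stays in registers and never touches memory
    return 3 * n
```

Fusion eliminates the 2n-element round-trip for the intermediate, cutting traffic by 40% in this example — which matters most in bandwidth-bound phases like decode.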

Key Takeaway

During decode (the bandwidth-bound phase), an inference engine can't afford unnecessary memory accesses. Kernel fusion — both automatic (via compilers) and manual (like FlashAttention) — is essential for performance.

Deep Learning Frameworks

PyTorch is the industry-standard framework underlying both training and inference. Originally created at Meta, it is now part of the Linux Foundation.

PyTorch balances built-in functions and automatic optimizations with manual control. The step that transforms a model from training to inference is compilation (torch.compile), which performs automatic kernel selection and fusion for a specific GPU.

Model File Formats

  • safetensors: The dominant format for serializing model weights. Uses memory mapping for fast, safe loading. Only holds tensor data, not executable code.
  • ONNX: Stores weights along with an execution graph. Highly portable across hardware.
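The safetensors layout is simple enough to parse by hand: an 8-byte little-endian header length, a JSON header mapping tensor names to dtype/shape/offsets, then raw tensor bytes. The sketch below builds a toy one-tensor file in memory and reads its header — illustrative only, and no substitute for the official `safetensors` library:

```python
import json
import struct

# Minimal sketch of the safetensors layout: 8-byte little-endian header length,
# then a JSON header, then raw tensor bytes. Reading only the header is cheap,
# and nothing executable is ever deserialized (unlike pickle-based formats).

def read_safetensors_header(buf):
    (header_len,) = struct.unpack("<Q", buf[:8])
    return json.loads(buf[8 : 8 + header_len])

# Build a toy one-tensor "file" in memory to demonstrate.
header = json.dumps({
    "w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}
}).encode()
data = struct.pack("<2f", 1.0, 2.0)
blob = struct.pack("<Q", len(header)) + header + data
```

Here `read_safetensors_header(blob)["w"]["shape"]` returns `[2]` without touching the tensor bytes — the property that makes memory-mapped loading fast and safe.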

ONNX Runtime and TensorRT

            ONNX Runtime                      TensorRT
Source      Open source (Linux Foundation)    Mix of proprietary and open (NVIDIA)
Hardware    Many GPU types                    NVIDIA only
Strength    Portability                       Raw performance

Transformers and Diffusers

The transformers and diffusers libraries by Hugging Face offer reference implementations — great for learning and tinkering, but not designed for large-scale production inference. Use them for local inference and notebooks, then switch to production inference engines.

Inference Engines

Three competitive engines: vLLM, SGLang, and TensorRT-LLM.

                 vLLM        SGLang        TensorRT-LLM
Performance      Good        Good          Best
Ease of use      Easy        Easy          Hard
Model support    Most        Most          Some
Hardware         GPU, TPU    NVIDIA, AMD   NVIDIA only

All three support continuous batching, post-training quantization, speculative decoding, prefix caching, parallelism, and disaggregation out of the box.
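Continuous batching is the most fundamental of these features, and its benefit is easy to see in a toy step simulation. The function below is a conceptual sketch, not any engine's scheduler: finished sequences free their batch slot immediately and waiting requests join mid-flight, instead of the whole batch draining before new work starts:

```python
# Toy simulation of continuous batching. Each request needs some number of
# decode steps; a freed slot is refilled immediately rather than waiting for
# the whole batch to finish (as static batching would).

def continuous_batching_steps(request_lengths, batch_size):
    pending = list(request_lengths)  # decode steps each waiting request needs
    active, steps = [], 0
    while pending or active:
        while pending and len(active) < batch_size:
            active.append(pending.pop(0))        # admit requests into free slots
        steps += 1                               # one decode step for the batch
        active = [r - 1 for r in active if r > 1]  # finished requests leave
    return steps
```

With requests needing [3, 1, 2] steps and a batch size of 2, continuous batching finishes in 3 steps; static batching would run the first pair for 3 steps, then the last request for 2 more, for 5 total.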

vLLM

Largest market share. First released summer 2023. Best selling point: broad support — most hardware options, most model architectures, plus multimodal inference via vLLM Omni. Use when you want solid out-of-the-box performance for almost any model.

SGLang

Pairs a fast backend runtime with a flexible frontend language. Strong support for Chinese open models (DeepSeek, Qwen). Heavy investment in large-scale MoE deployments on systems like GB200 NVL72.

TensorRT-LLM

Steeper learning curve but usually best performance. Deep NVIDIA hardware integration with graph-level optimizations. At Baseten, it's the most-used engine.

NVIDIA Dynamo

NVIDIA Dynamo is an open-source distributed serving platform that sits on top of inference engines. It manages KV cache reuse, disaggregation, and multi-GPU/multi-node orchestration — the coordination layer for large-scale deployments.

Performance Benchmarking

⚠️ Benchmark Carefully

Benchmarks should mirror real-world usage as closely as possible. Use jitter traffic (randomized arrival times and sequence shapes) rather than uniform synthetic loads. Always measure P50, P90, and P99 latencies.

Key benchmarking tools: genai-perf (NVIDIA), vllm-benchmark, sglang-bench. Profiling tools: PyTorch Profiler, NVIDIA Nsight Systems and Nsight Compute.

Tips for useful benchmarking:

  1. Define realistic workloads: Match your actual input/output sequence length (ISL/OSL) distributions
  2. Warm up first: Let the engine stabilize before measuring
  3. Test multiple configurations: Vary batch size, parallelism, quantization
  4. Measure what matters: TTFT and TPS for user-facing, total throughput for batch
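The percentile reporting from the tips above can be sketched in a few lines. This is a self-contained illustration, not any real tool's code — the latency samples are simulated stand-ins for measured TTFTs, and a real benchmark would replay jittered traffic against the engine instead:

```python
import random

# Sketch of percentile reporting for a latency benchmark. The Gaussian samples
# below are illustrative stand-ins for per-request TTFT measurements in ms.

def percentile(samples, p):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[idx]

random.seed(0)  # fixed seed so the simulated run is reproducible
latencies = [random.gauss(200, 40) for _ in range(1000)]

# Report tail latencies, not just the average: P99 is what users complain about.
report = {p: round(percentile(latencies, p), 1) for p in (50, 90, 99)}
```

P50 describes the typical request, while the P90/P99 gap exposes queueing and batching effects that an average would hide.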

Check Your Understanding


What is kernel fusion?