Kernel Fusion: A New Way to Enhance Neural Network Performance

Introduction

In deep learning, neural network performance is often limited less by raw arithmetic throughput than by memory bandwidth and kernel launch overhead. A typical network executes long chains of small elementwise operations, and when each one runs as its own GPU kernel, every intermediate result makes a round trip through global memory. Kernel fusion addresses this bottleneck by combining consecutive operations into a single kernel: intermediate values stay in registers, the whole sequence runs in one launch, and the network achieves markedly better inference speed without any change to its outputs.

1. Thread Block Organization

Unfused Kernels (3 Separate Launches)

[Figure: three separate kernel launches (Kernel 1: Scale, Kernel 2: ReLU, Kernel 3: Add), each running its own thread block of 16 threads, T0 through T15.]
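
To make the diagram concrete, here is a minimal CUDA sketch of the unfused pipeline, assuming three elementwise kernels over a float array (the kernel names are illustrative; the ×2 scale factor and +1 addend match the worked example later in this article):

```cuda
// Unfused: three kernels, three launches. Each kernel reads its input
// from global memory and writes its result back, so every intermediate
// value makes a round trip through DRAM.
__global__ void scale_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f;        // Scale (x2)
}

__global__ void relu_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = fmaxf(in[i], 0.0f);  // ReLU
}

__global__ void add_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + 1.0f;        // Add (+1)
}
```

Running the pipeline requires three dispatches (scale_kernel, then relu_kernel, then add_kernel), each paying its own scheduling overhead, plus two temporary buffers to hold the intermediate results.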

Fused Kernel (Single Launch)

[Figure: a single fused kernel launch in which every thread of Warp 0 (threads 0-15), assigned to one SM, executes Scale, ReLU, and Add sequentially on its own element.]

Benefits:
  • Register data reuse
  • Single kernel launch
  • Reduced scheduling overhead

Resources:
  • Shared memory: 1 block
  • Registers: all ops
  • L1 cache: unified
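
The fused counterpart collapses the three kernels above into one; a sketch under the same assumptions:

```cuda
// Fused: one kernel, one launch. The intermediate value lives in the
// register v, so only the initial load and the final store touch
// global memory.
__global__ void scale_relu_add_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = in[i] * 2.0f;  // Scale (x2)
        v = fmaxf(v, 0.0f);      // ReLU
        out[i] = v + 1.0f;       // Add (+1)
    }
}
```

Each thread executes all three operations back to back, exactly as the diagram shows, and no temporary buffers are needed.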

2. Memory Access Patterns

Uncoalesced Access Pattern

[Figure: four groups of 8 threads, each issuing its own scattered 32-byte memory access.]

✗ Multiple smaller memory transactions

✗ Poor memory bandwidth utilization

✗ Higher memory latency

Coalesced Access Pattern

[Figure: one thread block of 32 threads served by a single 128-byte memory transaction.]

✓ Single memory transaction for 32 consecutive elements

✓ Maximum memory bandwidth utilization

✓ Minimal memory latency
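
Whether a kernel hits the coalesced or the scattered pattern is determined by its indexing; a minimal sketch of both (the stride parameter is illustrative):

```cuda
// Coalesced: consecutive threads touch consecutive 4-byte floats, so a
// 32-thread warp's loads merge into one 128-byte transaction.
__global__ void coalesced_copy(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Scattered: a stride between neighboring threads spreads the warp's
// loads over many 32-byte segments, forcing multiple transactions.
__global__ void strided_copy(const float* in, float* out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * stride < n) out[i] = in[i * stride];
}
```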

3. Operation Fusion Example

Example with input value: 3

Unfused pipeline, step by step:

  1. Load the input 3 from global memory
  2. Scale (×2) → 6, store 6 back to global memory
  3. Load 6 from global memory
  4. ReLU → 6, store 6 back to global memory
  5. Load 6 from global memory
  6. Add (+1) → 7, store the final 7 to global memory

Total: 6 memory operations (3 loads + 3 stores). The fused version performs the same arithmetic with a single load and a single store, holding the intermediate values in a register.
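
The walkthrough can be reproduced end to end with the fused kernel from earlier; a self-contained sketch (error handling omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Fused Scale -> ReLU -> Add, tracing the worked example: 3 -> 6 -> 6 -> 7.
__global__ void scale_relu_add_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = in[i] * 2.0f;  // 3 -> 6 (one global load)
        v = fmaxf(v, 0.0f);      // 6 -> 6 (stays in a register)
        out[i] = v + 1.0f;       // 6 -> 7 (one global store)
    }
}

int main() {
    float h_in = 3.0f, h_out = 0.0f;
    float *d_in, *d_out;
    cudaMalloc((void**)&d_in, sizeof(float));
    cudaMalloc((void**)&d_out, sizeof(float));
    cudaMemcpy(d_in, &h_in, sizeof(float), cudaMemcpyHostToDevice);

    scale_relu_add_kernel<<<1, 16>>>(d_in, d_out, 1);  // single launch

    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("fused(3) = %g\n", h_out);  // prints: fused(3) = 7
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```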

What is Kernel Fusion?

Kernel fusion is an optimization technique that merges several consecutive GPU kernels into a single kernel. Instead of each operation reading its input from global memory and writing its output back, the fused kernel keeps intermediate results in registers, eliminating the redundant loads and stores along with all but one kernel launch.

Mathematical Properties:

  • Fusion preserves computational equivalence: f₃(f₂(f₁(x))) ≡ f_fused(x)
  • Memory bandwidth utilization: (R + W)_fused < Σ(R + W)_individual
  • Theoretical speedup: S = T_separate / T_fused ≈ (n_ops + n_sync) / (1 + 1), since n operation launches and n synchronization points collapse into one launch and one synchronization
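
One way to unpack the speedup estimate is a simple per-kernel cost model; the symbols below (t_launch, t_mem, t_compute) are assumptions introduced here for illustration, not part of the original derivation:

```latex
% Cost model for fusing n memory-bound elementwise operations.
% t_launch: kernel launch overhead; t_mem: one load plus one store of
% the working set; t_compute: arithmetic for a single operation.
\begin{align}
T_{\text{separate}} &= n \,\bigl(t_{\text{launch}} + t_{\text{mem}} + t_{\text{compute}}\bigr) \\
T_{\text{fused}}    &= t_{\text{launch}} + t_{\text{mem}} + n\, t_{\text{compute}} \\
S &= \frac{T_{\text{separate}}}{T_{\text{fused}}}
   \approx n \quad \text{when } t_{\text{compute}} \ll t_{\text{launch}} + t_{\text{mem}}
\end{align}
```

For the three-operation pipeline above, this predicts roughly a 3x speedup in the memory-bound limit, consistent with the drop from 6 global memory operations to 2.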

Performance Implications:

  • Reduced memory transactions: intermediate loads and stores are eliminated (6 global memory operations drop to 2 in the example above)
  • Register reuse: intermediate results are stored in registers instead of global memory
  • Improved instruction cache utilization through unified kernel execution
