FlashQLA
A tier · New this week
Qwen's open-source GPU kernel library that squeezes 2–3× more speed out of linear attention on NVIDIA Hopper hardware, if you're lucky enough to own one.
Kai's verdict
A genuinely impressive, laser-focused kernel optimization from the Qwen team — real speedups on real hardware — but its utility is gated behind Hopper GPUs and Qwen's GDN architecture, making it a niche power tool rather than a broadly useful library. (Verdict pending Phi's full review.)
Strengths
- 2–3× forward-pass and ~2× backward-pass speedup over FLA Triton kernels on Hopper GPUs
- Gate-driven automatic intra-card context parallelism boosts SM utilization in long-sequence, small-head-count regimes without manual config
- Hardware-friendly algebraic reformulation reduces Tensor Core, CUDA Core, and SFU overhead with no numerical precision loss
- MIT licensed and fully open-source; drops straight into Qwen3.x training and inference pipelines (see the integration sketch after this list)
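How "drop-in" is it in practice? Roughly: you swap the kernel call and keep everything else. The sketch below shows that pattern under loud assumptions: the flash_qla import path and kernel name are hypothetical, not FlashQLA's documented API, while the fallback import is the Triton gated-delta-rule kernel from the FLA library these benchmarks compare against.

```python
# Minimal backend-swap sketch. The "flash_qla" module and kernel name
# are ASSUMPTIONS for illustration, not FlashQLA's documented API.
# The fallback is FLA's Triton kernel (import path per the
# flash-linear-attention repo), i.e. the baseline in the benchmarks above.
import torch

try:
    # Hypothetical FlashQLA entry point (assumed name).
    from flash_qla import chunk_gated_delta_rule as gdn_kernel
    BACKEND = "FlashQLA (CUDA, SM90+)"
except ImportError:
    # FLA's Triton gated-delta-rule kernel as the fallback path.
    from fla.ops.gated_delta_rule import chunk_gated_delta_rule as gdn_kernel
    BACKEND = "FLA (Triton)"


def gdn_forward(q, k, v, g, beta):
    """One gated-delta-rule attention pass. q/k/v are
    (batch, seq_len, num_heads, head_dim); g and beta are the per-token,
    per-head gate and write-strength terms from the GDN layer.
    Return convention assumed: (output, final_state)."""
    out, _final_state = gdn_kernel(q, k, v, g, beta)
    return out
```

The point of the pattern: training code calls gdn_forward and never cares which backend loaded, so the Hopper speedup is opt-in rather than a hard dependency.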
Weaknesses
- Extremely narrow hardware requirement: SM90+ only (H100/H200, DGX Spark) with CUDA 12.8+ and PyTorch 2.8+; useless outside Hopper-class clusters (see the preflight check after this list)
- GDN/Qwen-specific: not a drop-in replacement for FlashAttention-style softmax kernels, and won't help you if you're not running linear-attention Qwen models
- Very new, with minimal community adoption or third-party validation so far
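Given how narrow that first requirement is, a preflight check is cheap insurance before you try to build anything. This is a minimal sketch using only standard PyTorch introspection; the thresholds (SM90, CUDA 12.8, PyTorch 2.8) are the ones stated in this review, not pulled from FlashQLA's own docs.

```python
# Preflight check for the stated requirements: SM90+ GPU, CUDA 12.8+,
# PyTorch 2.8+. Uses only standard PyTorch introspection; version
# thresholds are taken from this review.
import torch

def flashqla_ready() -> bool:
    if not torch.cuda.is_available():
        return False
    # Compute capability is (9, 0) on H100/H200 (Hopper, SM90).
    cap = torch.cuda.get_device_capability()
    # torch.version.cuda looks like "12.8"; torch.__version__ like "2.8.0+cu128".
    cuda = torch.version.cuda
    cuda_ok = cuda is not None and tuple(map(int, cuda.split(".")[:2])) >= (12, 8)
    torch_ok = tuple(map(int, torch.__version__.split(".")[:2])) >= (2, 8)
    return cap >= (9, 0) and cuda_ok and torch_ok

if __name__ == "__main__":
    print("FlashQLA-capable environment:", flashqla_ready())
```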
Best for
ML engineers and researchers running Qwen3.x linear-attention models on H100/H200 clusters who need to close the gap between theoretical GDN efficiency and actual hardware throughput.
Pricing
Free (MIT License, open-source)
The software costs nothing; the real barrier is hardware, since you'll need an H100/H200 or DGX Spark (SM90+).