As the usage of large language models (LLMs) grows, performing efficient inference with these models becomes increasingly important. While speculative decoding has recently emerged as a promising direction for speeding up inference, existing methods are limited in their ability to scale to larger speculation budgets and to adapt to different hyperparameters and hardware. This paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for speculative decoding. To attain better scalability, Sequoia introduces a dynamic programming algorithm to find the optimal tree structure for the speculated tokens. To achieve robust speculative performance, Sequoia uses a novel sampling and verification method that outperforms prior work across different decoding temperatures. Finally, Sequoia introduces a hardware-aware tree optimizer that maximizes speculative performance by automatically selecting the token tree size and depth for a given hardware platform. Sequoia improves the decoding speed of Llama2-7B, Llama2-13B, and Vicuna-33B on an A100 GPU by up to 4.04×, 3.84×, and 2.37×, respectively, and Llama2-70B offloading speed by up to 10.33× on an L40.
This paper introduces Sequoia, a scalable, robust, and hardware-aware algorithm for speculative decoding of large language models. The key contributions are:
1. A dynamic programming algorithm that constructs the optimal tree structure for the speculated tokens, so that the expected number of accepted tokens scales nearly logarithmically with the tree size, outperforming existing tree structures (see the first sketch after this list).
2. A novel sampling and verification method that samples tokens from the draft model without replacement, improving robustness across different decoding temperatures compared to prior methods (second sketch below).
3. A hardware-aware tree optimizer that selects the optimal tree size and depth for the hardware at hand, maximizing end-to-end speedup (third sketch below).
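
To make contribution 1 concrete, below is a minimal Python sketch of the kind of dynamic program involved. It assumes a vector `p` of positional acceptance probabilities (the chance that the i-th without-replacement draft child is accepted); both `p` and its values here are illustrative assumptions, not the paper's exact formulation. The recursion allocates a node budget among a root's children to maximize the expected number of accepted tokens:

```python
from functools import lru_cache

# Hypothetical positional acceptance probabilities: p[i] is the chance that
# the (i+1)-th child sampled without replacement from the draft is accepted.
# These numbers are illustrative only.
p = [0.80, 0.35, 0.15, 0.07]


def best_tree_value(budget: int) -> float:
    """Expected number of accepted tokens for the best tree with `budget` nodes."""

    @lru_cache(maxsize=None)
    def children(nodes: int, i: int) -> float:
        # Best value from distributing `nodes` nodes among children i, i+1, ...
        # (each child roots its own optimal subtree).
        if nodes == 0 or i == len(p):
            return 0.0
        best = children(nodes, i + 1)  # option: skip child i entirely
        for k in range(1, nodes + 1):  # option: give child i a subtree of k nodes
            best = max(best, p[i] * subtree(k) + children(nodes - k, i + 1))
        return best

    @lru_cache(maxsize=None)
    def subtree(nodes: int) -> float:
        # A subtree contributes its root token (1 expected token, conditioned
        # on the root being accepted) plus the optimal children allocation.
        return 1.0 + children(nodes - 1, 0)

    return subtree(budget)


if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(n, round(best_tree_value(n), 3))
```

Memoization keeps this polynomial in the budget and branching factor, which is what makes searching over large speculation budgets tractable.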
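Contribution 2's verification step can be illustrated as follows. This sketch follows a SpecInfer-style rejection recursion but, as the summary describes, zeroes each rejected token out of the draft distribution so siblings are effectively sampled without replacement; the paper's exact residual updates may differ, so treat this as an illustrative scheme rather than Sequoia's algorithm verbatim:

```python
import numpy as np

rng = np.random.default_rng(0)


def verify_children(target_p: np.ndarray, draft_p: np.ndarray, children: list[int]) -> int:
    """Accept one of `children` (draft tokens for one position, drawn without
    replacement) or fall back to a residual sample from the target.

    target_p, draft_p: probability vectors over the vocabulary.
    Returns the token id emitted at this position.
    """
    p = np.asarray(target_p, dtype=np.float64).copy()
    q = np.asarray(draft_p, dtype=np.float64).copy()
    for x in children:
        if q[x] > 0 and rng.random() < min(1.0, p[x] / q[x]):
            return x  # token verified: the subtree rooted at x continues
        # Rejected: move to the residual target distribution ...
        p = np.maximum(p - q, 0.0)
        if p.sum() == 0.0:  # degenerate corner case; fall back to the target
            p = np.asarray(target_p, dtype=np.float64).copy()
        p /= p.sum()
        # ... and zero the rejected token in the draft, mirroring
        # without-replacement sampling on the draft side.
        q[x] = 0.0
        if q.sum() > 0:
            q /= q.sum()
    # All children rejected: sample a fresh token from the residual target.
    return int(rng.choice(len(p), p=p))
```

Removing rejected tokens from the draft distribution is what prevents the verifier from repeatedly rejecting near-duplicate candidates, which is the source of the robustness across decoding temperatures that the summary highlights.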
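Finally, contribution 3 amounts to a search over tree size and depth using profiled hardware costs. The sketch below assumes two quantities the optimizer plausibly needs: the target model's verification latency as a function of tree size, and the draft model's per-step latency (one step per level of depth); the function names and latency numbers are placeholders, not the paper's API, and `expected_tokens` stands in for the dynamic-program value above:

```python
import math

# Hypothetical profiled latencies (milliseconds); real values would be
# measured on the target hardware (e.g., an A100 vs. an L40 with offloading).
def verify_latency_ms(tree_size: int) -> float:
    return 30.0 + 0.05 * tree_size  # grows slowly until compute saturates


DRAFT_STEP_MS = 2.0  # one draft forward pass per level of tree depth


def expected_tokens(size: int, depth: int) -> float:
    # Placeholder: in practice this would be the DP value for the best tree
    # of `size` nodes with depth capped at `depth`.
    return min(1.0 + math.log2(size), float(depth) + 1.0)


def best_tree_config(max_size: int = 512, max_depth: int = 16):
    """Pick the (size, depth) maximizing tokens generated per wall-clock ms."""
    best = None
    for size in range(1, max_size + 1):
        for depth in range(1, max_depth + 1):
            step_ms = depth * DRAFT_STEP_MS + verify_latency_ms(size)
            rate = expected_tokens(size, depth) / step_ms
            if best is None or rate > best[0]:
                best = (rate, size, depth)
    return best


if __name__ == "__main__":
    rate, size, depth = best_tree_config()
    print(f"best tree: {size} tokens, depth {depth}, {rate:.3f} tokens/ms")
```

Because verification latency curves differ sharply between on-device inference and offloading, this search lands on very different trees per platform, which is why the reported speedups vary so much between the A100 and L40 settings.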
Through extensive evaluation on Llama 2 and Vicuna models, Sequoia achieves up to a 4.04× speedup for inference on an A100 GPU and up to a 10.33× speedup for offloading-based inference of Llama2-70B on an L40 GPU, outperforming prior speculative decoding methods. The paper supports Sequoia's scalability, robustness, and hardware awareness with both theoretical analysis and ablation studies.