A Deep Dive into XGBoost: How It Works and Its Differences from GBM

Ishwarya S
5 min read · Sep 11, 2024


Introduction

In the world of machine learning, one algorithm has become synonymous with high-performance predictive modeling: XGBoost. You might have heard of its incredible accuracy in competitions like Kaggle, or its role in powering real-world systems at scale. But what makes XGBoost so powerful, and how is it different from other boosting algorithms like Gradient Boosting Machines (GBM)?

In this blog, we will explore:

  1. How XGBoost works
  2. Key mathematical differences between XGBoost and traditional GBM
  3. When to use XGBoost over other models

Let’s dive in!

What is XGBoost?

XGBoost (Extreme Gradient Boosting) is an optimized implementation of gradient boosting designed for speed, performance, and efficiency. Developed by Tianqi Chen, XGBoost builds on the core principles of gradient boosting but introduces several enhancements that make it faster, more scalable, and more accurate.

How Does XGBoost Work?

At its core, XGBoost works by building an ensemble of decision trees, where each new tree corrects the mistakes of the previous trees by focusing on the residual errors. Here’s the process broken down (a toy code sketch follows the list):

  1. Initialization: XGBoost starts from a simple baseline prediction (for regression, typically the mean of the target variable) and calculates the residuals (the difference between the predicted values and the actual values).
  2. Tree Building: The algorithm sequentially adds decision trees to correct the residuals. Each tree tries to minimize the loss function by predicting the residuals from the previous round.
  3. Gradient Descent: Following the gradient boosting framework, each update moves the model in the direction that most reduces the loss function. XGBoost goes a step further and uses both the first derivative (gradient) and the second derivative (Hessian) of the loss, a second-order approximation that makes each update more precise.
  4. Regularization: A key difference between XGBoost and traditional GBM is the use of regularization terms to penalize the complexity of the model. This helps prevent overfitting.
  5. Shrinkage: XGBoost applies a shrinkage (learning rate) to the contribution of each new tree. This slows down learning and makes the model more robust.
  6. Leaf Weights: Each tree’s leaf values (its “weights”) are computed directly from the gradient and Hessian statistics of the samples that land in each leaf, balanced against the regularization terms, so trees that explain more of the remaining error naturally contribute more to the final ensemble prediction.
  7. Termination: The process continues until a specified number of trees is built, or the algorithm converges by minimizing the error.
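To make the loop concrete, here is a toy sketch of the residual-correcting idea in plain Python using small decision trees. It covers the squared-error case, where the negative gradient is simply the residual; it is a simplified illustration of generic gradient boosting, not XGBoost’s actual optimized implementation.

```python
# Toy sketch of the residual-correcting boosting loop (squared-error case,
# where the negative gradient is simply the residual). Illustrative only:
# this is generic gradient boosting, not XGBoost's optimized implementation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=42)

learning_rate = 0.1   # shrinkage applied to every tree (step 5)
n_trees = 100

prediction = np.full(len(y), y.mean())      # step 1: start from a simple baseline
trees = []
for _ in range(n_trees):
    residuals = y - prediction              # step 2: what is still unexplained
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, residuals)                  # steps 2-3: fit a small tree to the residuals
    prediction += learning_rate * tree.predict(X)   # step 5: add a shrunken contribution
    trees.append(tree)

print("final training MSE:", np.mean((y - prediction) ** 2))
```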

Key Mathematical Differences Between GBM and XGBoost

Although both XGBoost and GBM are based on gradient boosting, there are some significant differences that make XGBoost faster and more accurate:

1. Objective Function with Regularization

  • GBM: Uses a standard objective function like mean squared error (MSE) for regression or log loss for classification.
  • XGBoost: The objective function includes an L1/L2 regularization term to penalize overly complex models. This helps control overfitting by reducing the model’s complexity.

XGBoost’s Objective Function:

Obj(θ) = L(θ) + Ω(θ)

Where:

  • L(θ) is the loss function (e.g., MSE or log loss),
  • Ω(θ) is the regularization term that penalizes model complexity.
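In the XGBoost paper, the regularization term for a tree f is Ω(f) = γT + ½λ‖w‖², where T is the number of leaves and w are the leaf weights. In the Python API these knobs surface as reg_alpha (L1), reg_lambda (L2), and gamma. Here is a minimal sketch with illustrative (not recommended) values:

```python
# The regularization knobs as they appear in the Python API. Parameter values
# are illustrative, not recommendations.
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=5.0, random_state=0)

model = XGBRegressor(
    n_estimators=300,
    learning_rate=0.1,
    max_depth=4,
    reg_alpha=0.1,    # L1 penalty on leaf weights
    reg_lambda=1.0,   # L2 penalty on leaf weights (lambda in the formula)
    gamma=0.5,        # minimum loss reduction required to keep a split (gamma * T term)
)
model.fit(X, y)
```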

2. Handling of Missing Data

  • GBM: Requires preprocessing steps to handle missing values.
  • XGBoost: Natively handles missing values. During training it learns a default direction (left or right branch) at each split for samples whose feature value is missing, so no imputation step is required. This makes XGBoost more flexible and faster to get working on real-world data.
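A minimal sketch of this in the Python API, assuming scikit-learn is available for the synthetic data; XGBoost treats np.nan as missing by default:

```python
# Sketch: training directly on data containing NaNs. XGBoost treats np.nan as
# "missing" by default and learns a default branch at each split, so no
# imputation step is needed. The synthetic data is purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Knock out ~10% of the entries to simulate real-world gaps.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.10] = np.nan

model = XGBClassifier(n_estimators=100)
model.fit(X, y)                 # no preprocessing of the NaNs required
print(model.predict(X[:5]))
```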

3. Parallelization

  • GBM: Training is inherently sequential, which makes it harder to parallelize.
  • XGBoost: Employs parallelized tree construction. The search for the best split is parallelized across features and CPU threads, and histogram-based approximations reduce the work further, allowing XGBoost to train much faster, especially on large datasets.
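In the Python API, thread-level parallelism is controlled with n_jobs (nthread in the native API), and the histogram-based tree method is a common companion on large data. A sketch with illustrative settings; actual speedups depend on your data and hardware:

```python
# Sketch: controlling parallelism. n_jobs sets the number of CPU threads
# (XGBoost uses all available cores by default); tree_method="hist" enables
# the fast histogram-based split finding. Values are illustrative.
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=50_000, n_features=50, random_state=0)

model = XGBRegressor(n_estimators=200, tree_method="hist", n_jobs=4)
model.fit(X, y)
```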

4. Tree Pruning

  • GBM: Grows each tree greedily and typically stops splitting a node as soon as a candidate split no longer improves the loss, which can miss splits that only pay off further down the tree.
  • XGBoost: Grows trees depth-first up to the specified max_depth and then prunes backwards, removing splits whose gain falls below the gamma (min_split_loss) threshold. This lets it keep splits that enable valuable deeper splits while still discarding branches that do not pay for their added complexity.
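A short sketch of the two knobs involved; the values are illustrative, not recommendations:

```python
# Sketch: the two pruning-related knobs discussed above. max_depth caps how
# deep each tree may grow; gamma (min_split_loss) is the minimum loss
# reduction a split must achieve to survive pruning. Values are illustrative.
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)

model = XGBClassifier(
    n_estimators=200,
    max_depth=6,   # grow each tree at most 6 levels deep
    gamma=1.0,     # prune splits whose loss reduction is below 1.0
)
model.fit(X, y)
```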

5. Learning Rate and Shrinkage

  • GBM: Scales each new tree’s contribution by a learning rate, typically relying on that shrinkage (plus limits on tree size) as its main defence against overfitting.
  • XGBoost: Also shrinks the newly added leaf weights by the learning rate (eta) after every boosting round, but combines this with explicit L1/L2 regularization and row/column subsampling, allowing slower, more precise learning that is more resistant to overfitting.
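A common pattern is to pair a small learning rate with a large tree budget and let early stopping on a validation set pick the actual number of trees. The sketch below assumes a reasonably recent XGBoost version, where early_stopping_rounds is a constructor argument; values are illustrative:

```python
# Sketch: a small learning rate paired with early stopping on a validation
# set, which picks the number of trees automatically. Values are illustrative.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=5000, n_features=20, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(
    n_estimators=2000,          # upper bound; early stopping decides the real number
    learning_rate=0.05,         # shrink each tree's contribution
    early_stopping_rounds=50,   # stop once the validation score stops improving
)
model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=False)
print("boosting rounds actually used:", model.best_iteration + 1)
```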

When is XGBoost Useful?

XGBoost is particularly useful in the following scenarios:

  1. Large Datasets: XGBoost is optimized for large-scale datasets, making it a go-to choice for big data applications. Its parallelization and memory-efficient algorithms make it faster on large datasets.
  2. Highly Accurate Predictions: XGBoost consistently delivers high accuracy, which is why it is often used in machine learning competitions. Its ability to handle complex data patterns makes it a preferred choice for accurate predictions.
  3. Imbalanced Datasets: XGBoost handles class imbalance well, most commonly via the scale_pos_weight parameter (or a custom objective) that gives more weight to the minority class; a short sketch follows this list.
  4. Structured Data: XGBoost works exceptionally well on structured/tabular data, where relationships between features are crucial. It’s less effective on raw unstructured data like images or text, although it is often applied to features engineered from such data.
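For the imbalanced-data point above, here is a minimal sketch using scale_pos_weight. A common starting heuristic is the ratio of negative to positive examples; the 95/5 synthetic dataset is purely illustrative:

```python
# Sketch: scale_pos_weight re-weights the positive (minority) class in the
# objective. A common starting heuristic is (# negatives) / (# positives).
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=10_000, weights=[0.95, 0.05], random_state=0)

ratio = (y == 0).sum() / (y == 1).sum()   # roughly 19 for a 95/5 split
model = XGBClassifier(n_estimators=200, scale_pos_weight=ratio)
model.fit(X, y)
```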

Advantages of XGBoost

  • Speed: Due to parallelization and optimized algorithms, XGBoost is much faster than traditional GBM.
  • Accuracy: XGBoost consistently delivers high accuracy by using sophisticated regularization techniques.
  • Flexibility: XGBoost offers flexibility in choosing the loss function and can be used for classification, regression, and ranking tasks.
  • Handling Missing Values: XGBoost automatically handles missing data, making it more robust and easier to use in real-world applications.

Disadvantages of XGBoost

  • Tuning Complexity: XGBoost has many hyperparameters, and tuning them can be tricky and time-consuming (a short tuning sketch follows this list).
  • Memory Intensive: While XGBoost is fast, it can be memory-intensive, especially on very large datasets.
  • Overfitting: Like all powerful models, XGBoost can overfit, especially if the trees are too deep. Regularization helps but requires careful tuning.
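One way to keep the tuning burden manageable is a randomized search over a modest grid. The search space and budget below are illustrative starting points, not tuned recommendations:

```python
# Sketch: a randomized search over a small, illustrative hyperparameter grid.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=30, random_state=0)

param_distributions = {
    "max_depth": [3, 4, 5, 6],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "n_estimators": [100, 300, 500],
    "subsample": [0.6, 0.8, 1.0],
    "colsample_bytree": [0.6, 0.8, 1.0],
    "reg_lambda": [0.5, 1.0, 5.0],
}

search = RandomizedSearchCV(
    XGBClassifier(),
    param_distributions,
    n_iter=20,
    cv=3,
    scoring="roc_auc",
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```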

Conclusion

XGBoost is a powerful, flexible, and efficient machine learning algorithm that has become a top choice for a wide variety of tasks, from classification and regression to ranking. It builds upon the foundational principles of Gradient Boosting Machines but adds several crucial optimizations — such as regularization, handling missing values, and parallelization — that make it more suitable for large-scale, high-accuracy tasks.

With a clear understanding of its working, strengths, and limitations, XGBoost can significantly enhance the performance of your models. Happy boosting!


