Definition
- A sliding window (or rolling window) is a time series validation method where the training set has a fixed size and “slides forward” over time.
- As new data enters the training window, the oldest data is dropped.
- Always respects the time order (past → future).
How It Works
- Choose a window size (fixed length of training data).
- Train model on that window.
- Validate on the next time step (or next block of time).
- Slide the window forward → repeat.
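The steps above can be sketched as a small generator. The helper name `sliding_window_splits` is my own for illustration; it simply yields index arrays for each train/test pair:

```python
import numpy as np

def sliding_window_splits(n_samples, window_size, test_size=1):
    """Yield (train_idx, test_idx) pairs for a sliding-window walk-forward."""
    start = 0
    while start + window_size + test_size <= n_samples:
        train_idx = np.arange(start, start + window_size)
        test_idx = np.arange(start + window_size,
                             start + window_size + test_size)
        yield train_idx, test_idx
        start += test_size  # slide the window forward by one test block

# 6 samples, training window of 3, test block of 1
for tr, te in sliding_window_splits(6, 3):
    print(tr.tolist(), "->", te.tolist())
# [0, 1, 2] -> [3]
# [1, 2, 3] -> [4]
# [2, 3, 4] -> [5]
```

Note that each training set has the same length, and the earliest index is dropped every time the window advances.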
Example (Window size = 2 years, dataset = 2020–2024)
- Train = 2020–2021 → Test = 2022
- Train = 2021–2022 → Test = 2023
- Train = 2022–2023 → Test = 2024
Notice: Oldest data (2020) is dropped once the window slides.
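The yearly example above can be reproduced with a few lines of Python (the variable names are illustrative):

```python
years = [2020, 2021, 2022, 2023, 2024]
window = 2  # training window = 2 years

for i in range(len(years) - window):
    train = years[i:i + window]
    test = years[i + window]
    print(f"Train = {train[0]}-{train[-1]} -> Test = {test}")
# Train = 2020-2021 -> Test = 2022
# Train = 2021-2022 -> Test = 2023
# Train = 2022-2023 -> Test = 2024
```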
Comparison with Expanding Window
| Method | Training Set Behavior | When to Use |
|---|---|---|
| Expanding Window | Grows over time (all past data kept) | When old data is still relevant (macroeconomics, cumulative learning) |
| Sliding Window | Fixed size, oldest data dropped | When recent data is more important (trends, seasonality, fast-changing systems) |
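In scikit-learn, both behaviors come from the same splitter: `TimeSeriesSplit` expands by default, and setting `max_train_size` caps the training set, turning it into a sliding window. A minimal comparison on a toy array:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(10).reshape(-1, 1)  # 10 time-ordered samples

# Expanding window: all past data kept, training set grows
expanding = TimeSeriesSplit(n_splits=3)
# Sliding window: training set capped at 4 samples, oldest dropped
sliding = TimeSeriesSplit(n_splits=3, max_train_size=4)

for name, splitter in [("expanding", expanding), ("sliding", sliding)]:
    print(name)
    for train_idx, test_idx in splitter.split(X):
        print("  train:", train_idx.tolist(), "test:", test_idx.tolist())
```

With 10 samples and 3 splits, the expanding splitter's final training set covers indices 0 through 7, while every sliding training set stays at 4 samples.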
When to Use Sliding Window
- Financial markets → recent trends matter more than old data.
- Demand forecasting → customer behavior shifts frequently.
- IoT/sensor data → old sensor readings may no longer reflect current state.
Benefits
- Keeps model focused on recent patterns.
- Reduces computational load (the training set stays a fixed size instead of growing without bound).
Drawbacks
- May lose valuable historical patterns.
- Choosing window size is critical (too short = noisy, too long = less responsive).
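One practical way to navigate the window-size trade-off is to compare one-step-ahead error across candidate sizes. The sketch below is an assumption-laden toy: it uses a synthetic drifting series and the window mean as a naive forecaster (`rolling_mae` is a made-up helper, not a library function):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic non-stationary series: slow upward drift plus noise
y = np.cumsum(rng.normal(0.1, 1.0, 200))

def rolling_mae(series, window):
    """One-step-ahead MAE using the trailing window mean as the forecast."""
    errors = [abs(series[t] - series[t - window:t].mean())
              for t in range(window, len(series))]
    return float(np.mean(errors))

for w in (5, 20, 80):
    print(f"window={w:>3}: one-step MAE = {rolling_mae(y, w):.2f}")
```

A very short window chases noise; a very long one lags the drift. Scanning a few sizes like this on held-out data is a simple way to pick a reasonable middle ground.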
Summary
Sliding (Rolling) Window = time series validation where the training set is fixed in size and moves forward, dropping old data and adding new data.
- Good for non-stationary environments where recent data is more predictive.
- Different from expanding window, which accumulates all past data.
