How Cyclical Betting Works
How Did Cyclical Betting Come To Be?
To first understand what Cyclical Betting is, you have to understand how it came to be. I have been building sports betting models for the better part of 6 years now, and statistical models for the better part of 10 years. There have been things I have noticed and picked up on over time that I finally decided to take action on. One of them was the long-term cycles my models would go through. When I was first starting to build simple linear regression models, I would simply bet on the teams that had the highest projected difference between the real lines and my projections.
I soon found this did not work so well. For starters, you never want to be too far off from the Vegas line, since the books run some of the best models in existence. Second, a huge projected gap is usually the sign of an overfit model, something that is practically impossible to avoid entirely when building time-series sports models, and something every bettor has seen or heard of. My way around that was to bet only the projections with slight edges on Vegas; in other words, only the realistic projections.
This is when I started to notice a pattern though. No matter the sport, methodology, or complexity of the model, the same pattern emerged in all of them. I would track edge ranges—from small edges to big edges—to see which ones performed the best over the long term and then bet those edges. I saw mild success, but what I did notice was extreme cyclical movement in the edge ranges. For example, say I saw an edge of 0.05–0.1 was performing great on a model. I would start betting that edge and then—poof—that range was back to being awful and a losing machine.
Anytime an edge spiked up in performance... it would sooner or later reverse back to the mean. On the other hand, this was also true for ranges that were performing poorly at one point—they would slowly start to creep back up while the good range was falling. I finally decided to test this theory on my MLB Strikeout Gradient Boosted Machine Learning Algorithm. And all I can say is: I stumbled upon a life-changing scientific finding.
Testing of the Cyclical Model Betting Theory
I loaded up my dataset of close to 2,000 entries from April through July for the Strikeout model, then sectioned them into 4 distinct edge ranges and looked at the performance of each. Every range was individually down, anywhere from -5 units (a unit being the standard size of a bet) to -45 units. When I plotted them on a line graph, I noticed an eye-opening discovery: the small edge range (0 to 0.15) moved roughly inversely to the medium edge range (0.15 to 0.30).
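To make that setup concrete, here is a minimal sketch of the bucketing step in Python. The file name and the columns `date`, `edge`, and `units` are hypothetical placeholders, and the exact tier cut points are assumptions modeled on the ranges described above.

```python
import pandas as pd

# Hypothetical input: one row per graded bet, with the date, the edge
# (projection minus the betting line), and the result of the bet in units.
bets = pd.read_csv("strikeout_model_bets.csv", parse_dates=["date"])

# Section bets into four edge tiers. The cut points are assumptions based on
# the ranges mentioned in the text (0-0.15 small, 0.15-0.30 medium, ...).
tier_edges = [0.0, 0.15, 0.30, 0.45, float("inf")]
tier_labels = ["small", "medium", "large", "huge"]
bets["tier"] = pd.cut(bets["edge"], bins=tier_edges, labels=tier_labels)

# Daily units per tier, and the cumulative-profit lines that get plotted.
daily = (
    bets.groupby(["date", "tier"], observed=True)["units"]
    .sum()
    .unstack("tier")
    .fillna(0.0)
)
cumulative = daily.cumsum()
print(cumulative.tail())
```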
As you can see from this correlation matrix, the Medium and Small edge ranges had a correlation of about -0.45, meaning there was strong evidence to suggest that when one range was struggling, the other would be thriving. In the next section I will go into deeper statistical detail on the evidence that this strategy works, and why it may change betting forever...
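A correlation matrix like that one can be approximated from the same daily table; a short sketch, reusing the hypothetical `daily` frame from the previous snippet:

```python
# Correlation matrix of daily units across tiers; a clearly negative value
# between "small" and "medium" is what suggests the two ranges move out of phase.
print(daily.corr().round(2))
```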
Statistical Validation of the Cyclical Edge-Based Betting Strategy™
This betting model categorizes bets into distinct edge tiers based on projected value, from small to large edges, and analyzes their performance over time. An “edge” is simply the model’s projection minus the betting line: if we project a pitcher for 4.65 strikeouts and the line is 4.5, the edge is 0.15. While each tier demonstrates its own return profile, typically an overwhelmingly unprofitable one, what stands out is not simply the raw profitability but the presence of repeating, measurable cycles and correlation across time. These patterns provide strong evidence that returns are not random but exhibit structure that a dynamic strategy can capitalize on.
Using STL decomposition (Plots 1 and 2), we break the cumulative profit time series into three interpretable parts: the long-term trend, cyclical (seasonal) behavior, and residual noise. Notably, the small and medium edge groups display highly regular oscillations in the seasonal component. This implies that edge performance tends to rise and fall in predictable intervals — a key finding suggesting temporary inefficiencies or shifting market dynamics that reset and repeat over time.
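One way a decomposition like this can be produced is with statsmodels' STL. Below is a sketch, reusing the hypothetical `cumulative` frame from earlier and assuming a 7-day period based on the cycle length discussed next; it is not necessarily the exact setup behind Plots 1 and 2.

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import STL

# Decompose one tier's cumulative-profit series into trend, seasonal, and
# residual components. period=7 assumes a roughly weekly cycle.
series = cumulative["small"].asfreq("D").interpolate()
stl_result = STL(series, period=7).fit()
stl_result.plot()  # trend / seasonal / residual panels
plt.show()
```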
To verify this, spectral analysis (Plot 3) is employed. This method assesses the frequency of cycles present in the data. In both edge groups, a clear dominant frequency emerges, pointing to repeatable patterns in profitability roughly every 6–7 days. This rhythm could correspond to subtle market cycles, bettor behavior patterns, or fluctuations in signal quality from the underlying model.
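A periodogram is one common way to run this kind of spectral analysis. The sketch below uses the hypothetical `daily` frame from earlier, sampled once per day, and is meant as an illustration rather than the exact method behind Plot 3.

```python
import numpy as np
from scipy.signal import periodogram

# Spectral analysis of the small tier's daily profit, sampled once per day.
freqs, power = periodogram(daily["small"].to_numpy(), fs=1.0)

# Skip the zero-frequency (DC) term and report the dominant cycle length in days.
peak = np.argmax(power[1:]) + 1
print(f"dominant cycle length: {1 / freqs[peak]:.1f} days")
```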
Supporting this, the autocorrelation plots (Plot 4) reveal a gradual decay, a hallmark of time series with short-term memory. This means recent outcomes influence near-future performance, further reinforcing the idea that performance moves in long-term waves rather than small, short-term random steps.
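An autocorrelation plot along the lines of Plot 4 can be generated like this, again on the hypothetical `daily` series:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

# Autocorrelation of daily profit; a slow decay over the first several lags
# indicates short-term memory rather than independent day-to-day noise.
plot_acf(daily["small"], lags=20)
plt.show()
```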
The real innovation lies in the dynamic allocation strategy. Rather than rigidly committing to a single edge group, the model evaluates recent performance across all tiers, selects the one with the strongest and most stable results over the past five days, and bets exclusively within that group for the next session. This approach introduces a layer of tactical flexibility, allowing the strategy to “surf the wave” of whichever edge group is currently offering the best signal quality.
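Here is a minimal sketch of that selection rule, under the simplifying assumption that "strongest and most stable over the past five days" just means the highest trailing 5-day sum of units; stability weighting, tie-breaking, and bet sizing are left out.

```python
import pandas as pd

LOOKBACK = 5  # days of history used to score each tier before the next session

# Trailing 5-day profit for every tier, shifted one day so that today's tier
# choice only uses results that were known before today's games.
trailing = daily.rolling(LOOKBACK).sum().shift(1).dropna()

# Each day, bet only the tier with the best trailing score and take its result.
chosen = trailing.idxmax(axis=1)
dynamic = pd.Series(
    [daily.at[day, tier] for day, tier in chosen.items()],
    index=chosen.index,
    name="dynamic",
)

print(f"dynamic strategy: {dynamic.sum():+.2f}u")
print(daily.loc[chosen.index].sum().round(2))  # static tiers over the same window
```

Shifting the trailing window by one day keeps the tier choice free of lookahead bias, which matters when judging whether the cycle is genuinely exploitable rather than an artifact of peeking at same-day results.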
When we chart this dynamic strategy’s performance against static edge tiers (Plot 5), the benefits become immediately clear. The dynamic approach outperforms most static strategies, and does so with smoother returns, smaller drawdowns, and a significantly higher Sharpe ratio. It not only earns more but does so more consistently — a critical advantage in any betting context where both return and risk control matter.
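For reference, here is a generic Sharpe-style calculation on daily units (risk-free rate of zero, no annualization); it illustrates the kind of comparison in Plot 5, not necessarily the exact figure behind it.

```python
import pandas as pd

def sharpe(daily_units: pd.Series) -> float:
    """Mean daily units divided by their standard deviation (risk-free rate = 0)."""
    return daily_units.mean() / daily_units.std()

print("dynamic:", round(sharpe(dynamic), 3))
for tier in daily.columns:
    print(tier, round(sharpe(daily[tier]), 3))
```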
In essence, this strategy works because it treats edge performance as nonstationary: it changes over time. By incorporating recent data and dynamically adapting to where the signal is strongest, it transforms raw model output into a more robust, cycle-aware, and risk-efficient system. The cyclical system in this example would net you 45.58u, compared to the best-case static edge range, the Small range, which finished at -3.49u. And this is not only true for this individual model, but for any model, no matter the statistical integrity or methodology.