Backtest an influential portfolio strategy (that actually works)

Portfolio drawdowns are painful.
Most backtests only tell you half the story. And like I always say, no single risk metric paints the whole picture.
Python can help you see how bad losses get.
And it can help you minimize tail risk due to drawdowns.
By reading today’s newsletter, you’ll get Python code to build, optimize, and backtest a portfolio that minimizes average drawdown while maximizing the Sharpe ratio.
Let’s go!
Backtest an influential portfolio strategy (that actually works)
Conditional Drawdown at Risk (CDaR) measures the average size of a portfolio’s most severe drawdowns, capturing both depth and duration of losses. Modern portfolio optimization using CDaR aims to defend against sustained loss cycles, not just short-term volatility.
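To make that concrete, here is a minimal sketch of how CDaR at the 95% level can be computed from a daily return series. The cdar helper and its compounded-drawdown convention are illustrative only; Riskfolio-Lib uses its own implementation internally.

import pandas as pd

def cdar(returns: pd.Series, alpha: float = 0.05) -> float:
    # Compounded wealth curve from simple returns
    wealth = (1 + returns).cumprod()
    # Drawdown from the running peak at each date
    drawdown = 1 - wealth / wealth.cummax()
    # Drawdown at Risk (DaR): the (1 - alpha) quantile of drawdowns
    dar = drawdown.quantile(1 - alpha)
    # CDaR: the average of the worst alpha-fraction of drawdowns
    return drawdown[drawdown >= dar].mean()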
In risk management, understanding where CDaR fits historically matters.
Interest in drawdown-based statistics started building after risk managers realized that volatility and max drawdown miss the true cost of sustained losses. While measures like Value at Risk (VaR) and Conditional Value at Risk (CVaR) got widespread use in the late 1990s, CDaR gained recognition in the 2000s for addressing investor pain during market stress. Academics and quants integrated CDaR to close the gap between real-world risk and what models claim.
This context changes how professionals build and test resilient portfolios today.
Now, portfolio managers use Python tools like Riskfolio-Lib to allocate capital with CDaR or CVaR as core constraints. They backtest strategies using vectorbt, applying real trading costs and crisis scenarios to gauge actual performance.
Let’s see how it works with Python.
Imports and setup
We import libraries for downloading financial data, running portfolio simulations, calculating asset analytics, and displaying results. We also set chart and warning preferences so output is clear and easy to follow.
import os
import warnings
from datetime import datetime
import riskfolio as rp

import numpy as np
import pandas as pd
import yfinance as yf
import vectorbt as vbt
from vectorbt.portfolio.enums import Direction, SizeType
from vectorbt.portfolio.nb import order_nb, sort_call_seq_nb

vbt.settings.returns["year_freq"] = "252 days"

warnings.filterwarnings("ignore")
We define which stocks and assets we want to analyze and download their historical price data from Yahoo Finance. We pull the daily closing prices for each ticker from 2010 to mid-2024. In this case, we’re using major US financial company stocks.
tickers = [
    "JPM", "V", "MA", "BAC", "WFC", "GS", "MS", "AXP", "C"
]

data = yf.download(
    tickers,
    start="2010-01-01",
    end="2024-06-30",
    auto_adjust=False
)["Close"].dropna()
Here we define our investment universe by listing ticker symbols for several well-known financial companies. We then collect daily closing prices for these stocks from Yahoo Finance, covering more than a decade. Removing dates with missing values leaves us with a consistent dataset for robust backtesting.
Define portfolio simulation functions
We use VectorBT for the backtest. We set how often to rebalance, how far to look back at price history for optimization, and how to place portfolio trades. It’s a lot of code, so don’t worry if you don’t understand it all yet; it’s safe to copy and paste.
num_tests = 2000
ann_factor = data.vbt.returns(freq="D").ann_factor

def pre_sim_func_nb(sc, every_nth):
    # Rebalance only on every every_nth-th bar
    sc.segment_mask[:, :] = False
    sc.segment_mask[every_nth::every_nth, :] = True
    return ()

def pre_segment_func_nb(
    sc, find_weights_nb, history_len, ann_factor, num_tests, srb_sharpe
):
    # Slice the lookback window of prices for this group
    if history_len == -1:
        close = sc.close[: sc.i, sc.from_col : sc.to_col]
    else:
        if sc.i - history_len <= 0:
            # Not enough history yet: skip this rebalance
            return (np.full(sc.group_len, np.nan),)
        close = sc.close[sc.i - history_len : sc.i, sc.from_col : sc.to_col]

    # Optimize weights on the window and record the resulting Sharpe ratio
    best_sharpe_ratio, weights = find_weights_nb(sc, close, num_tests)
    srb_sharpe[sc.i] = best_sharpe_ratio

    # Update valuation prices and sort the order call sequence
    size_type = np.full(sc.group_len, SizeType.TargetPercent)
    direction = np.full(sc.group_len, Direction.LongOnly)
    temp_float_arr = np.empty(sc.group_len, dtype=np.float64)
    for k in range(sc.group_len):
        col = sc.from_col + k
        sc.last_val_price[col] = sc.close[sc.i, col]
    sort_call_seq_nb(sc, weights, size_type, direction, temp_float_arr)

    return (weights,)

def order_func_nb(oc, weights):
    # Rebalance the current column to its target weight
    col_i = oc.call_seq_now[oc.call_idx]
    return order_nb(
        weights[col_i],
        oc.close[oc.i, oc.col],
        size_type=SizeType.TargetPercent,
    )
This block sets the groundwork for our portfolio simulation. We specify how often rebalancing happens, how much historical data feeds each allocation decision, and how orders are created during simulation. The functions handle rebalance timing, slice out the relevant window of price data, optimize for the best risk-adjusted returns, and place trades that target those weights. With this setup, the strategy dynamically recalculates allocations throughout the backtest.
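Earlier we mentioned applying real trading costs. As a sketch of how that could look here, vectorbt’s order_nb accepts fees and slippage arguments, so a cost-aware variant of the order function might look like this. The cost rates are illustrative assumptions, not calibrated values.

def order_func_with_costs_nb(oc, weights):
    # Same target-percent order as order_func_nb, with assumed proportional costs
    col_i = oc.call_seq_now[oc.call_idx]
    return order_nb(
        weights[col_i],
        oc.close[oc.i, oc.col],
        size_type=SizeType.TargetPercent,
        fees=0.001,       # 10 bps commission (illustrative)
        slippage=0.0005,  # 5 bps slippage (illustrative)
    )

Pass order_func_with_costs_nb to the simulation in place of order_func_nb to include these costs in the backtest.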
Optimize portfolio weights and run backtest
We run an optimization to find weights that maximize risk-adjusted returns using past data, then simulate the portfolio and review performance.
def opt_weights(sc, close, num_tests):
    # Rebuild prices as a DataFrame and compute daily returns
    close = pd.DataFrame(close, columns=tickers)
    returns = close.pct_change().dropna()

    # Maximize the Sharpe ratio using CDaR as the risk measure
    port = rp.Portfolio(returns=returns)
    port.assets_stats(method_mu="hist", method_cov="hist")
    w = port.optimization(model="Classic", rm="CDaR", obj="Sharpe", hist=True)
    weights = np.ravel(w.to_numpy())

    # Score the optimal weights with the CDaR-based Sharpe ratio
    shp = rp.Sharpe(w, port.mu, cov=port.cov, returns=returns, rm="CDaR", alpha=0.05)
    return shp, weights

sharpe = np.full(data.shape, np.nan)
pf = vbt.Portfolio.from_order_func(
    data,
    order_func_nb,
    pre_sim_func_nb=pre_sim_func_nb,
    pre_sim_args=(30,),
    pre_segment_func_nb=pre_segment_func_nb,
    pre_segment_args=(opt_weights, 252 * 4, ann_factor, num_tests, sharpe),
    cash_sharing=True,
    group_by=True,
    use_numba=False,
    freq="D"
)

pf.plot_cum_returns()
We define how to select our portfolio weights by maximizing the Sharpe ratio computed with CDaR as the risk measure, which balances return against drawdown risk. Then we use a simulation engine to run our strategy, regularly rebalancing according to these optimal allocations. Finally, we visualize cumulative returns, print statistics, and compare results against a simple buy-and-hold approach. This gives us clear insight into how our adaptive strategy performed over many years, using real historical price data.
The result is a plot showing the cumulative returns of the strategy against buy-and-hold.
We can also print the strategy statistics.
pf.stats()
Let’s compare the Sharpe ratio and drawdown of the benchmark.
bm_returns = pf.benchmark_returns()
bm_returns_acc = bm_returns.vbt.returns(
    freq="1d",
    year_freq="252 days",
)
print(f"Benchmark sharpe: {bm_returns_acc.sharpe_ratio()}")
print(f"Benchmark drawdown: {bm_returns_acc.max_drawdown()}")
Looks like the Sharpe ratio of the strategy trails buy-and-hold, while its maximum drawdown is slightly smaller.
Your next steps
At this point, you've built and backtested a CDaR-optimized, Sharpe-maximizing portfolio of major US financial stocks. Next, tweak the tickers list to try other sectors or add/remove symbols. Adjust the rebalance frequency in pre_sim_args to see its impact. Change the risk measure in the opt_weights function to compare performance under different risk models; a sketch of these tweaks follows below.
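As a starting point, here is a sketch of two of those tweaks together: a CVaR variant of the weight function and a faster rebalance cadence. The opt_weights_cvar name and the 21-day cadence are illustrative choices, not recommendations.

def opt_weights_cvar(sc, close, num_tests):
    # Same optimization as opt_weights, but with CVaR as the risk measure
    close = pd.DataFrame(close, columns=tickers)
    returns = close.pct_change().dropna()
    port = rp.Portfolio(returns=returns)
    port.assets_stats(method_mu="hist", method_cov="hist")
    w = port.optimization(model="Classic", rm="CVaR", obj="Sharpe", hist=True)
    weights = np.ravel(w.to_numpy())
    shp = rp.Sharpe(w, port.mu, cov=port.cov, returns=returns, rm="CVaR", alpha=0.05)
    return shp, weights

sharpe_cvar = np.full(data.shape, np.nan)  # fresh array so the CDaR results are kept
pf_cvar = vbt.Portfolio.from_order_func(
    data,
    order_func_nb,
    pre_sim_func_nb=pre_sim_func_nb,
    pre_sim_args=(21,),  # rebalance every 21 trading days instead of 30
    pre_segment_func_nb=pre_segment_func_nb,
    pre_segment_args=(opt_weights_cvar, 252 * 4, ann_factor, num_tests, sharpe_cvar),
    cash_sharing=True,
    group_by=True,
    use_numba=False,
    freq="D"
)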
