Category Archives: PerformanceAnalytics

Aggregate portfolio contributions through time

The last CRAN release didn’t have much new functionality, but Ross Bennett and I have completely re-written the Return.portfolio function to fix some issues and make the calculations more transparent.  The function calculates the returns of a portfolio given asset returns, weights, and rebalancing periods – which, although not rocket science, requires some diligence.

Users of this function frequently want to aggregate contribution through time – but contribution for higher periodicity data can’t be directly accumulated into lower periodicities (e.g., using daily contributions to calculate monthly contributions).   So the function now also outputs values for the individual assets and the aggregated portfolio so that contributions can be calculated at different periodicities.  For example, contribution during a quarter can be calculated as the change in value of the position through those three months, divided by the original value of the portfolio. The function doesn’t do this directly, but it provides the value calculation so that it can be done.
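As a sketch of that value-based calculation (the object names and dates here are hypothetical; `res` is assumed to hold the list returned by a verbose call to Return.portfolio, which includes BOP.Value and EOP.Value components):

```r
library(PerformanceAnalytics)
# res <- Return.portfolio(R, verbose = TRUE)   # assumed verbose result

# One asset's contribution during a quarter: the change in its value over
# the three months, divided by total portfolio value at the quarter's start
v.start <- as.numeric(first(res$BOP.Value["2014-01", 1]))  # asset value, quarter start
v.end   <- as.numeric(last(res$EOP.Value["2014-03", 1]))   # asset value, quarter end
p.start <- sum(first(res$BOP.Value["2014-01"]))            # portfolio value, quarter start
(v.end - v.start) / p.start                                # the asset's Q1 contribution
```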

We’ve also added some other convenience features to that function.  If you do not specify weights, the function assumes an equal weight portfolio.  Alternatively, you can specify a vector or single-row matrix of weights that matches the length of the asset columns. In either case, if you don’t specify a rebalancing period, the weights will be applied at the beginning of the asset time series and no further rebalancing will take place. If a rebalancing period is specified (using the endpoints attribute of ‘days’, ‘weeks’, ‘months’, ‘quarters’, and ‘years’ from xts’ endpoints function), the portfolio will be rebalanced to the given weights at the interval specified.
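In code, those conventions look something like this (a sketch, assuming `x.R` is an xts object of monthly asset returns such as the one built in the example below):

```r
library(PerformanceAnalytics)

# Equal-weight, buy-and-hold: no weights or rebalancing specified
r.ew <- Return.portfolio(x.R)

# 60/40 weights applied once at the start of the series, then buy-and-hold
r.bh <- Return.portfolio(x.R, weights = c(0.6, 0.4))

# 60/40 weights, rebalanced back to target at every month end
r.mo <- Return.portfolio(x.R, weights = c(0.6, 0.4), rebalance_on = "months")
```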

That function can also do irregular rebalancing when passed a time series of weights. It uses the date index of the weights for xts-style subsetting of rebalancing periods, and treats those weights as “end-of-period” weights (which seems to be the most common use case).
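For example (a sketch with hypothetical rebalancing dates; the weights are treated as end-of-period weights on the dates given):

```r
library(PerformanceAnalytics)

# Rebalance on two arbitrary dates by passing an xts of weights
w.xts <- xts(matrix(c(0.6, 0.4,
                      0.5, 0.5), ncol = 2, byrow = TRUE),
    = as.Date(c("2008-12-31", "2010-06-30")))
r.irr <- Return.portfolio(x.R, weights = w.xts)
```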

When verbose=TRUE, Return.portfolio now returns a list of data and intermediary calculations.  Those should allow anyone to step through the specific calculations and see exactly how the numbers are generated.

Ross did a very nice vignette for the function (see vignette("portfolio_returns")), and as usual there’s a lot more detail in the documentation – take a look.

Here’s an example of a traditional 60/40 portfolio. We’ll look at the results of different rebalancing period assumptions, and then aggregate the monthly portfolio contributions to yearly contributions.

library(quantmod)
library(PerformanceAnalytics)

symbols = c(
  "SPY", # US equities, SP500
  "AGG"  # US bonds, Barclay Agg
)
getSymbols(symbols, from="1970-01-01")
# Merge the month-end adjusted closes into one xts object
x.P <- do.call(merge, lapply(symbols, function(x) {
  Cl(to.monthly(Ad(get(x)), drop.time = TRUE, indexAt = "endof"))
}))
colnames(x.P) = paste0(symbols, ".Adjusted")
x.R <- na.omit(Return.calculate(x.P))

#            SPY.Adjusted AGG.Adjusted
# 2003-10-31   0.05350714 -0.009464182
# 2003-11-28   0.01095923  0.003380861
# 2003-12-31   0.05035552  0.009815412
# 2004-01-30   0.01975363  0.004352241
# 2004-02-27   0.01360322  0.011411238
# 2004-03-31  -0.01331329  0.006855184
#            SPY.Adjusted AGG.Adjusted
# 2014-04-30  0.006931012  0.008151410
# 2014-05-30  0.023211141  0.011802974
# 2014-06-30  0.020650814 -0.000551116
# 2014-07-31 -0.013437564 -0.002481390
# 2014-08-29  0.039463463  0.011516492
# 2014-09-15 -0.008619401 -0.010747791

If we didn’t pass in any weights, the function would assume an equal-weight portfolio. We’ll specify a 60/40 split instead.

# Create a weights vector
w = c(.6,.4) # Traditional 60/40 Equity/Bond portfolio weights
# No rebalancing period specified, so buy and hold initial weights
result.norebal = Return.portfolio(x.R, weights=w)
#                           portfolio.returns
# Annualized Return                    0.0705
# Annualized Std Dev                   0.0880
# Annualized Sharpe (Rf=0%)            0.8008

If we don’t specify a rebalancing period, we get buy and hold returns. Instead, let’s rebalance every year.

# Rebalance annually back to 60/40 proportion
result.years = Return.portfolio(x.R, weights=w, rebalance_on="years")
#                           portfolio.returns
# Annualized Return                    0.0738
# Annualized Std Dev                   0.0861
# Annualized Sharpe (Rf=0%)            0.8565

Similarly, we might want to consider quarterly rebalancing. But this time we’ll collect all of the intermediary calculations, including position values. We get a list back this time.

# Rebalance quarterly; provide full calculations
result.quarters = Return.portfolio(x.R, weights=w,
                                   rebalance_on="quarters", verbose=TRUE)
#                           portfolio.returns
# Annualized Return                    0.0723
# Annualized Std Dev                   0.0875
# Annualized Sharpe (Rf=0%)            0.8254

That provides more detail, including the monthly contributions from each asset.

# We asked for a verbose result, so the function generates a list of 
# intermediary calculations, including asset contributions for each period:
# [1] "returns"      "contribution" "BOP.Weight"   "EOP.Weight"   
# [5] "BOP.Value"    "EOP.Value" 

# Examine the beginning-of-period weights; note the reweighting periods
#            SPY.Adjusted AGG.Adjusted
# 2014-01-31    0.6000000    0.4000000
# 2014-02-28    0.5876652    0.4123348
# 2014-03-31    0.5975060    0.4024940
# 2014-04-30    0.6000000    0.4000000
# 2014-05-30    0.5996912    0.4003088
# 2014-06-30    0.6023973    0.3976027
# 2014-07-31    0.6000000    0.4000000
# 2014-08-29    0.5973447    0.4026553
# 2014-09-30    0.6039059    0.3960941
# 2014-10-15    0.6000000    0.4000000

# Look at monthly contribution from each asset
#            SPY.Adjusted  AGG.Adjusted
# 2014-01-31 -0.021147406  0.0061514949
# 2014-02-28  0.026753095  0.0015515892
# 2014-03-31  0.004943173 -0.0006035523
# 2014-04-30  0.004178138  0.0033039234
# 2014-05-30  0.013920140  0.0046954857
# 2014-06-30  0.012434880 -0.0002195083
# 2014-07-31 -0.008069401 -0.0009942920
# 2014-08-29  0.023590440  0.0046081450
# 2014-09-30 -0.008343079 -0.0024215991
# 2014-10-15 -0.032250533  0.0067572530

Having the monthly contributions is nice, but what if we want to know what each asset contributed to the annual result of the portfolio? We get this question quite a bit (and it has prompted many attempts to “fix” the code – we appreciate that it isn’t as straightforward as it seems).

EDIT: Even knowing that, I got it wrong the first time… Based on the reference that Paolo points to in his comment below and some subsequent email conversation, I’ve replaced the last part of this post with the correct calculations.

For the contributions of individual assets, such as those of a particular asset class or manager, the multi-period contribution is neither the sum nor the geometric compounding of the single-period contributions. Because the weights of the individual assets change through time as transactions occur, the capital base for the asset changes.

Instead, the asset’s multi-period contribution is the sum of the asset’s dollar contributions from each period, as calculated from the wealth index of the total portfolio. Once contributions are expressed as a change in dollar value relative to the wealth index of the portfolio, asset contributions then sum to the returns of the total portfolio for the period.
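A tiny worked example in base R makes the point (two assets, two periods, hypothetical returns; no package functions needed):

```r
# Two assets, two periods; 60/40 initial weights, no rebalancing
rA <- c(0.10, 0.00); rB <- c(0.00, 0.10)
w0 <- c(0.6, 0.4)

# Period-1 contributions and portfolio return
c1 <- w0 * c(rA[1], rB[1])                    # 0.06, 0.00
r1 <- sum(c1)                                 # 0.06

# Weights drift before period 2 begins
w1 <- w0 * (1 + c(rA[1], rB[1])) / (1 + r1)
c2 <- w1 * c(rA[2], rB[2])
r2 <- sum(c2)

# Total portfolio return over both periods
(1 + r1) * (1 + r2) - 1                       # 0.10

# Naively summing single-period contributions does NOT reproduce it
sum(c1 + c2)                                  # 0.0977..., not 0.10

# Weighting each period's contributions by the lagged wealth index does:
wealth <- c(1, 1 + r1)
sum(c1 * wealth[1] + c2 * wealth[2])          # 0.10
```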

# Calculate weighted contributions
# cumulative returns lagged forward to represent beginning of the period portfolio value
lag.cum.ret <- na.fill(lag(cumprod(1+result.quarters$returns),1),1) 
# multiply by contributions to get weighted contributions
wgt.contrib = result.quarters$contribution * rep(lag.cum.ret, NCOL(result.quarters$contribution))

# Create end of year dates for xts timestamps
dates = c(seq(as.Date("2003/12/31"), tail(index(x.R),1), "years"), tail(index(x.R),1))

# Summarize weighted contributions by year
ann.wgt.contrib = apply.yearly(wgt.contrib, colSums)

# Normalize to the beginning of period value
p.ann.contrib = NULL
for(i in 2003:2014)
  p.ann.contrib = rbind(p.ann.contrib,
                        colSums(wgt.contrib[as.character(i)] /
                                rep(head(lag.cum.ret[as.character(i)], 1),
                                    NCOL(wgt.contrib))))
p.ann.contrib = as.xts(p.ann.contrib, = dates)
p.ann.contrib = cbind(p.ann.contrib, rowSums(p.ann.contrib))
colnames(p.ann.contrib) = c("SPY Contrib", "AGG Contrib", "Portfolio Return")
#            SPY Contrib  AGG Contrib Portfolio Return
# 2003-12-31  0.07116488  0.001458576       0.07262346
# 2004-12-31  0.06465335  0.015395509       0.08004886
# 2005-12-31  0.02927809  0.009055321       0.03833341
# 2006-12-31  0.09375945  0.016132091       0.10989154
# 2007-12-31  0.03113908  0.027212530       0.05835161
# 2008-12-31 -0.23405576  0.028503181      -0.20555258
# 2009-12-31  0.15850172  0.011712674       0.17021440
# 2010-12-31  0.09597257  0.025305730       0.12127830
# 2011-12-31  0.01689928  0.031037641       0.04793692
# 2012-12-31  0.09585370  0.015927463       0.11178116
# 2013-12-31  0.18533901 -0.008329840       0.17700917
# 2014-08-29  0.03670684  0.019700427       0.05640727

So that provides the annual contribution of each asset for each year. Let’s check the result – do the annual contributions for each instrument sum to the portfolio returns for the year?

# Calculate the annual return of the portfolio for each year between 2003 
# and current YTD
period.apply(result.quarters$returns,
             INDEX=endpoints(result.quarters$returns, "years"),
             FUN=Return.cumulative, geometric=TRUE)
#            portfolio.returns
# 2003-12-31        0.07262346
# 2004-12-31        0.08004886
# 2005-12-30        0.03833341
# 2006-12-29        0.10989154
# 2007-12-31        0.05835161
# 2008-12-31       -0.20555258
# 2009-12-31        0.17021440
# 2010-12-31        0.12127830
# 2011-12-30        0.04793692
# 2012-12-31        0.11178116
# 2013-12-31        0.17700917
# 2014-10-15        0.03793038 
# Yes, the results match!

So that’s an example of how one would go about aggregating return contributions from a higher periodicity (monthly) to a lower periodicity (yearly) within a portfolio.

Knowing that, I went ahead and drafted a function for aggregating contributions called to.period.contributions that’s in the sandbox on R-Forge. Once you’ve sourced the function into your environment, you can aggregate contributions as such:

to.period.contributions(result.quarters$contribution, "years")
#            SPY.Adjusted AGG.Adjusted Portfolio Return
# 2003-12-31   0.07116488  0.001458576       0.07262346
# 2004-12-31   0.06465335  0.015395509       0.08004886
# 2005-12-30   0.02927809  0.009055321       0.03833341
# 2006-12-29   0.09375945  0.016132091       0.10989154
# 2007-12-31   0.03113908  0.027212530       0.05835161
# 2008-12-31  -0.23405576  0.028503181      -0.20555258
# 2009-12-31   0.15850172  0.011712674       0.17021440
# 2010-12-31   0.09597257  0.025305730       0.12127830
# 2011-12-30   0.01689928  0.031037641       0.04793692
# 2012-12-31   0.09585370  0.015927463       0.11178116
# 2013-12-31   0.18533901 -0.008329840       0.17700917
# 2014-10-15   0.01455321  0.023377173       0.03793038

Along with that I created a few wrapper functions: to.weekly.contributions, to.monthly.contributions, to.quarterly.contributions, and to.yearly.contributions. Give those a shot and let me know if you see any issues. Thanks again to Paolo for the feedback!


PerformanceAnalytics update released to CRAN

Version number 1.4.3541 of PerformanceAnalytics was released on CRAN today.

If you’ve been following along, you’ll note that we’re altering our version numbering system.  From here on out, we’ll be using a “major.cran-release.r-forge-rev” form so that when issues are reported it will be easier for us to track where they may have been introduced.

Even though PerformanceAnalytics has been in development for almost a decade, we haven’t made significant changes to the interfaces of the functions – hence the major release number hasn’t changed from “1”.  I’ll warn you that we are working on revisions to many of the charts functions that might cause us to change some of those interfaces significantly (in ways that break backward compatibility), in which case we’ll increment the major release.  Hopefully we’ll be able to provide wrappers to avoid breaking much, but we’ll see.  That development is ongoing and there’s no deadline at the moment, so maybe next year. On the other hand, it’s going pretty well and generating a lot of excitement, so maybe sooner.

This is our 4th CRAN release after 1.0, so the minor number moves to 4.  We’ve been releasing the package to CRAN a couple of times a year with some regularity over the last seven years, although it’s slowed as the package has grown and demands from the CRAN maintainers have increased.

This release is tagged at rev. 3541 on R-Forge.  During the last year most of our development activity has been on other related packages, GSOC projects, and more speculative projects.  Little new functionality has found its way into this new release – this release is mostly bug fixes with a few new functions thrown in here and there. If you’re interested, you can follow along with package development by grazing through the sandbox directory on R-Forge. There’s quite a bit in there that is close but needs to be carried over the finish line.

We continue to welcome suggestions, contributions, and patches – whether for functionality or documentation.

GSOC 2013: IID Assumptions in Performance Measurement

Google Summer of Code for 2013 has been announced and organizations such as R are beginning to assemble ideas for student projects this summer. If you’re an interested student, there’s a list of project proposals on the R wiki. If you’re considering being a mentor, post a project idea on the site soon – project outlines end up being 1-2 pages of text, plus references – and they should be up on the wiki by mid-to-late March. Google will use the listed project outlines as part of their criteria for accepting the R project for another year of GSoC and in their preliminary budgeting of slots.

I’ve posted one project idea so far, one that would extend PerformanceAnalytics’ standard tools for analysis to better deal with various violations of a standard assumption that returns are IID (that is, each observation is drawn from an identical distribution and is independent of other observations).

Observable autocorrelation is one of those violations. A number of different approaches for addressing autocorrelation in financial data have been discussed in the literature. Various authors, such as Lo (2002) and Burghardt et al. (2012), have noted that the effects of autocorrelation can be huge, but are largely ignored in practice. Burghardt observes that the effects are particularly interesting when measuring drawdowns, a widely used performance measure that describes the performance path of an investment. Recently, Bailey and Lopez del Prado (2013) have developed a closed-form solution for estimating drawdown potential without having to assume IID cashflows.

There’s more detail at the project site, including a long list of references to get you thinking. I’d be glad to hear from you if you have any ideas, thoughts, or even code in this vein (or others).

The Paul Tol 21-color salute

You may or may not know that PerformanceAnalytics contains a number of specific color schemes designed for charting data in R (they aren’t documented well, but they show up in some of the chart examples). I’ve been collecting color palettes for years in search of good combinations of attractiveness, relative weight, and distinctiveness, helped along the way by great sites like ColorBrewer and packages like RColorBrewer.  I’ve assembled palettes that work for specific purposes, such as the color-focus palettes (e.g., redfocus is red plus a series of dark to light gray colors). Others, such as rich#equal, provide a palette for displaying data that all deserve equal treatment in the chart. Each of these palettes has been designed to create readable, comparable line and bar graphs with specific objectives outlined before each category below.

I use this approach rather than generating schemes on the fly for two reasons: it creates fewer dependencies on libraries that don’t need to be called dynamically, and it guarantees the color used for the n-th column of data.

Oh, and here’s a little utility function (that I thought I might have written – EDIT: I now know that I didn’t write it, since it was written by Achim Zeileis and is found in his colorspace package – but I have carried it around for quite a while) for displaying a palette:

# Function for plotting colors side-by-side
pal <- function(col, border = "light gray", ...){
  n <- length(col)
  plot(0, 0, type="n", xlim = c(0, 1), ylim = c(0, 1),
       axes = FALSE, xlab = "", ylab = "", ...)
  rect(0:(n-1)/n, 0, 1:n/n, 1, col = col, border = border)
}
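Once defined, you can eyeball any palette with it; here with base R’s rainbow() standing in for one of the package’s palettes:

```r
# Assumes pal() from above is in the environment
pal(rainbow(8))   # draws eight colored rectangles side by side
```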


A Heartfelt Thank You and the Resulting GSoC Project

PerformanceAnalytics has long enjoyed contributions from users who would like to see specific functionality included.

Diethelm Wuertz at ETHZ, who is the author and sponsor of all the various R/Metrics packages, is one of those contributors. I first met Diethelm when he hosted a conference on high-frequency data in the early 1990s (where we fretted about managing terabyte-sized databases), but it was his various R/Metrics packages that finally convinced me to use R. He was also keynote at our first R/Finance conference, where he demonstrated his talents in financial data visualization.

When I finally was able to attend his excellent conference in Meielisalp, he generously contributed a very large set of functions that he had been working on, drawn from the second edition of Bacon (2008).

It pains me that it has taken so long to thank him publicly for that contribution. It only makes it worse that so much of that contribution has only slowly leaked into PerformanceAnalytics over time. Of the more than 100 functions he contributed, more than 50 have been incorporated, integrated with, or overlap with existing functions.

But there is still a fair amount of work to do. I’ve recently been focused on some of the downside metrics, things like average drawdowns. Some of the upside corollaries to downside statistics are interesting as well – measures such as upside potential, upside variance, and upside risk.

To help move this effort along, I proposed a Google Summer of Code project for 2012. Students, let me know if you are interested or even if you have any questions.

Diethelm’s original contributions can be found on r-forge in svn.

Again, many thanks to Diethelm!