The post Looking Into The Ulcer Index appeared first on System Trader Success.

Many of the common metrics can be classified in ways that are similar to quantities we use to describe the world around us: temperature, speed, weight, voltage, etc. These classifications add context to what is being described based on how it is calculated and the information it contains.

In finance, one typical summary statistic is the annualized return of a strategy. To calculate this, all we need are the starting and ending values; what happened in between is irrelevant. Much like average speed simply uses the total time and distance traveled, annualized return smooths over any intermediate details. This is somewhat similar to a state variable in physics, such as temperature, entropy, or internal energy, which depends only on the initial and final states.

If it were as simple as that, the two strategies shown below would be equivalent, but even a novice investor would likely choose to have owned strategy A.[1]

Therefore, we look at metrics like annualized volatility, which incorporates the individual realized returns over a time period. We could call volatility a *path-dependent* metric, much like mechanical work is in thermodynamics: it is a quantity that is likely to change if your “route” changes. However, annualized volatility only depends on which returns were realized, not the order in which they came. This also applies to the Sharpe and Sortino ratios. To illustrate this concept, the following simulated paths both have the same realized volatility.

To differentiate between these two strategies using summary statistics, we must capture the sequence of the returns. Maximum drawdown does this by measuring the worst loss from peak to trough over the time period. Still, maximum drawdown lacks information about the length of the drawdown, which can have a substantial impact on investors’ perception of a strategy. In fact, Strategies B and C shown previously have the same maximum drawdown of 25%.[2]

Enter the Ulcer Index. It not only factors in the *severity* of the drawdowns but also their *duration*. It is calculated using the following formula:

UI = sqrt((R1^2 + R2^2 + … + RN^2) / N)

where N is the total number of data points and each Ri is the fractional retracement from the highest price seen thus far. Whereas the maximum drawdown is simply the largest Ri, and can only increase through time, the Ulcer Index encapsulates every drawdown into one summary statistic that adapts to new data as it is realized.[3] Using the Ulcer Index, we can finally distinguish between strategies that have the same annualized return, annualized volatility, and maximum drawdown: Strategies B and C have Ulcer Indices of 11.2% and 12.8%, respectively.
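As a concrete illustration, the calculation can be sketched in a few lines of Python. This is my own sketch, not code from the article, and the function names are mine:

```python
import math

def ulcer_index(prices):
    """Root-mean-square of the fractional retracements from the running peak."""
    peak = float("-inf")
    sum_sq = 0.0
    for p in prices:
        peak = max(peak, p)                  # highest price seen thus far
        retracement = (peak - p) / peak      # fractional drawdown, R_i
        sum_sq += retracement ** 2
    return math.sqrt(sum_sq / len(prices))

def max_drawdown(prices):
    """Largest peak-to-trough fractional loss, for comparison."""
    peak = float("-inf")
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst
```

Note that two equity curves with the same maximum drawdown but different drawdown durations produce different Ulcer Index values, which is exactly the property discussed above.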

As a case study, the following chart shows the return of a 60/40 portfolio of SPY and AGG rebalanced at the beginning of each year from 01/2004 to 12/2013. Along with the true realized path, I have included the path with the returns reversed and five paths with random permutations of the true returns.

The metrics for each path are shown in the table below:

Only the Ulcer Index can fully differentiate among these paths. Even in cases where the maximum drawdown is similar (e.g. the true path and Random 1), the Ulcer Index shows a sharp contrast between the strategies.

For a more concrete way of picturing the Ulcer Index, imagine driving a car along a 55 mph speed limit road with stoplights spaced every half mile. Traffic is moderately heavy and the lights are poorly timed. As you accelerate, the light down the road turns yellow and then red. Easing off the accelerator will increase the time until you get to that light, perhaps to the point where you won’t have to stop, thus reducing the amount of time spent waiting for the light to change and the subsequent acceleration to approach the speed limit again.

You continue down the road, anticipating the lights so that you do not brake unnecessarily or burn needless gas racing toward red lights. This not only reduces the variation in your speed (the volatility) but also the amount you have to slow down (the severity) and the time spent waiting at red lights (the duration). The smoother trip is likely to lead to less stress, not to mention less wear and tear on the car, which can cause further headaches.

Ultimately, evaluating a strategy involves more than simple performance metrics, since the methodology driving the strategy is key. But when comparing historical performance, it is helpful to have a toolbox equipped with implements able to measure performance on the basis of profitability and risk in ways that are amenable to our inherent, risk-averse inclinations.

— By Nathan Faber from Flirting With Models. Nathan Faber is an Associate in Newfound’s Product Development and Quantitative Strategies group. Newfound is a Boston-based registered investment advisor and quantitative asset manager focused on rule-based, outcome-oriented investment strategies and specializes in tactical asset and risk management.

[1] One exception is if you owned another strategy that had the correct characteristics relative to strategy B (negative correlation, positive return, and similar volatility) so that the overall return was even smoother than strategy A. Even so, these trends would not have any guarantee of continuing in the future.

[2] In simulations this is easy to do by reversing the order of the returns.

[3] Perhaps another interesting metric would be an exponentially weighted Ulcer Index that places more weight on more recent observations.


The post Trading The Equity Curve & Beyond appeared first on System Trader Success.

Some trading systems have prolonged periods of winning or losing trades: long winning streaks followed by extended drawdowns. Wouldn't it be nice if you could minimize those long drawdown periods?

Here is one tip that might help you do that. Try applying a simple moving average to your trading system's equity curve. Then use it as an indicator of when to stop and restart trading your system. This technique might change your trading system's performance for the better.

How to do this? Well, the moving average applied to your trading system's equity curve creates a smoothed version of that curve. You can now use this smoothed equity curve as a signal for when to stop or restart trading. For example, when the equity curve falls below the smoothed equity curve, you can stop trading your system. Why would you do this? Because your trading system is underperforming; it's losing money. Only after the equity curve starts to climb again should you begin taking trades once more. This technique is trading the equity curve: you're making trading decisions based upon your equity curve. In essence, the performance of your system is an indicator.

Trading the equity curve is like trading a basic moving average crossover system. When the fast moving average (your equity curve) crosses over the slower moving average (your smoothed equity curve) you go long (trade your system live). When the fast moving average crosses under the slower moving average you close your long trade (stop trading your live system).
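That crossover logic can be sketched as follows. This is a hypothetical Python illustration, not toolkit code; the function name and the warm-up handling (trade normally until enough history exists) are my assumptions:

```python
def equity_curve_filter(trade_pnls, ma_length=30):
    """For each simulated trade, decide whether the live system should trade,
    based on whether the equity curve is above its simple moving average.
    Returns one boolean per trade (True = trade live).  In practice the
    decision after trade i applies to the *next* live trade."""
    equity = []          # cumulative equity after each simulated trade
    decisions = []
    running = 0.0
    for pnl in trade_pnls:
        running += pnl
        equity.append(running)
        if len(equity) < ma_length:
            decisions.append(True)   # not enough history yet: keep trading
        else:
            sma = sum(equity[-ma_length:]) / ma_length
            decisions.append(equity[-1] > sma)
    return decisions
```

With a string of winners the curve stays above its average and the filter stays on; after a losing streak the curve crosses under and the filter switches off.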

In the image above the blue line is the equity curve of an automated trading system. The pink line is a 30-trade average of the equity curve. When the equity curve dips below the pink line, such as around trade number 60, you would stop trading the system. Once the equity curve rises above the pink line, around trade number 80, you would start trading.

It's a great idea, and with some systems this technique can work wonders. In essence, we are using the equity curve as a signal or filter for our trading system. In the simplest case, it's a switch telling us when to stop trading and when to resume. But you could also use this signal to reduce your risk instead of turning off the system.

First, you need to track all trades to generate the complete equity curve and moving average. This is done even if your live system has stopped trading. In other words, you will need to record the theoretical trades it would be taking. This means you will need two copies of your system running. One is dedicated to taking every trade in simulation mode. This simulated system tracks the theoretical equity curve and computes the smoothed equity curve. No real trades are taken by the simulated system; its job is to track the two equity curves.

The second system is dedicated to trading live. This live system will have the ability to trade or not trade based upon the results computed by the simulation system. Think of the simulated system as an indicator. It's always running collecting data and crunching the numbers. This information will then be used by the live system to tell it when to trade and when not to trade.

One method to do this would involve passing data between two charts in TradeStation. Both charts are trading the identical trading system. One trades live while the other trades only in simulation mode. The chart running in simulation mode acts as your indicator. This indicator chart then passes a simple variable to the live chart. The live chart then acts on the live market.

This type of setup produces a dynamic trading system that adjusts its trading behavior based upon the system's performance. It's a simple concept, but it's complex to build in TradeStation. Some solutions that I've seen are also not very flexible. Overall, solutions have proven to require complex programming skills and tedious setup to get working.

In short, building custom EasyLanguage code to trade the equity curve has been very difficult to do. In fact, it has been well outside the ability of most programmers. But not anymore.

There is a TradeStation product that can do all the heavy lifting for us: the Equity Curve Feedback Toolkit. This kit allows me to use simple EasyLanguage functions within my code to trade the equity curve. It's super simple and will only take a few minutes to set up. Let me show you.

I took an example trading system that appeared on System Trader Success called A Simple S&P System. I then added the Equity Curve Feedback function to my code. Next, I made a couple of minor adjustments. Once done, the Equity Curve Feedback function returns the simulated system's equity curve.

No need to use DLLs or other complicated setups! With this information, you can determine whether the equity curve is above or below the smoothed equity curve. In other words, you can now turn your live system off or on based upon the equity curve!

First, let's look at the results of the Simple S&P System without the equity curve feedback.

You can see by looking at the equity curve that the strategy got off to a bad start (see trades 20-40). Then the strategy had a nice run, followed by a huge drawdown around trade 200.

The graph below is the underwater equity on a weekly basis. You can see at the start of trading and at the most recent trades, about a 12% drawdown.

Finally, here is the performance report of the S&P Simple System.

There are some adjustments required to the original code, such as adding numeric variables and arrays. But at the heart of the issue is computing the equity curve of a simulated version of our strategy. That's difficult to do in EasyLanguage! But not anymore.

Below is the line of code which does all the magic! That one line is what computes the simulated trades of our system.

The next line of code is what enables or disables our live trading system. This function calculates the simulated equity curve of all trades, then compares this value to the moving average of all simulated trades. It returns true if the simulated equity curve is above the moving average; otherwise, it returns false.

I went ahead and added a 30-period moving average to the equity curve. Below are the results.

It's interesting to note the net profit is about the same. However, it generated this profit with fewer trades. That means many of the losing trades were eliminated. In fact, the average trade increased by about $50. Adding equity curve feedback reduced the drawdown by about half! Looking at the equity curve you can see the deep drawdown at the far-right side has been improved. Also, looking at the first few trades the equity curve looks much better.

You don't have to stop trading when the equity curve falls below its moving average. You could adjust your risk instead. That is, if your equity curve begins to fall, you can reduce the number of shares or contracts you trade. Or, when the equity curve is climbing, you could increase your risk by buying more shares or contracts. You could also start or stop trading based upon drawdown, or when the percentage of winners falls below a threshold. These are all possible with this kit.
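The risk-adjustment variant could be sketched like this. Again, a hypothetical illustration rather than toolkit code; the function name and the trade-half-size rule are arbitrary choices of mine:

```python
def position_size(base_contracts, equity, smoothed_equity):
    """Instead of switching the system off, scale risk down: trade half
    size while the equity curve sits below its moving average."""
    if equity > smoothed_equity:
        return base_contracts
    return max(1, base_contracts // 2)   # never drop below one contract
```

You could just as easily scale by drawdown depth or by the recent win rate; the toolkit's signal simply tells you which regime you are in.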

So, does trading the equity curve always work? The short answer is no. It works well on some trading systems, namely those that tend to have prolonged periods of drawdown. Yet other systems don't benefit from equity curve feedback, because their drawdowns are rather shallow and you end up hurting your equity more than anything. But like most things in the world of trading, you'll have to perform some testing. Test different moving averages, and test between halting all trading and reducing contract/share size. Remember, some systems do not benefit from this technique at all.

The Equity Curve Feedback Toolkit can also be used to create even more dynamic systems.

For example, you can simulate many different variations of a single trading strategy, where each simulated strategy has different input parameters. Instead of relying on a single set of input parameters giving a buy signal, you require two or more of the simulated strategies to give a buy signal. In effect, you want confirmation from more than one strategy before taking a signal. This type of model is based upon a **voting scheme**: when enough votes are counted, a trade is taken.

In another example, you have multiple competing strategies. These can simply be the same strategy with different inputs, or they can be completely different types of strategies (a mean-reversion strategy and a trend-following strategy, for example). Based upon equity curve feedback, you trade the best-performing strategy in real time.

These examples are types of **multi-agent strategies** that can help build more dynamic and/or robust systems. Such power was not widely available to the retail trader, but now it is.

Learn more about the Equity Curve Feedback Toolkit by clicking here.


The post Getting Started with Neural Networks for Algorithmic Trading appeared first on System Trader Success.

If you’re interested in using **artificial neural networks** (ANNs) for algorithmic trading, but don’t know where to start, then this article is for you. Normally, if you want to learn about neural networks, you need to be reasonably well versed in matrix and vector operations – the world of linear algebra. **This article is different.** I’ve attempted to provide a starting point that doesn’t involve any linear algebra and have deliberately left out all references to vectors and matrices. **If you’re not strong on linear algebra, but are curious about neural networks, then I think you’ll enjoy this introduction.** In addition, if you decide to take your study of neural networks further, when you do inevitably start using linear algebra, it will probably make a lot more sense, as you’ll have something of a head start.

The best place to start learning about neural networks is the **perceptron**. The perceptron is the simplest possible artificial neural network, consisting of just a single neuron and capable of learning a certain class of binary classification problems. Perceptrons are the perfect introduction to ANNs, and if you can understand how they work, the leap to more complex networks and their attendant issues will not be nearly as far. So we will explore their history, what they do, how they learn, and where they fail. We’ll build our own perceptron from scratch and train it to perform different classification tasks, which will provide insight into where they can perform well and where they are hopelessly outgunned. Lastly, we’ll explore one way we might apply a perceptron in a trading system.

The perceptron has a long history, dating back to at least the mid 1950s. Following its discovery, the New York Times ran an article that claimed that the perceptron was the basis of an artificial intelligence (AI) that would be able to walk, talk, see and even demonstrate consciousness. Soon after, this was proven to be hyperbole on a staggering scale, when the perceptron was shown to be wholly incapable of classifying certain types of problems. The disillusionment that followed essentially led to the first AI winter, and since then we have seen a repeating pattern of hyperbole followed by disappointment in relation to artificial intelligence.

Still, the perceptron remains a useful tool for some classification problems and is the perfect place to start if you’re interested in learning more about neural networks. Before we demonstrate it in a trading application, let’s find out a little more about it.

Algorithms modelled on biology are a fascinating area of computer science. Undoubtedly you’ve heard of the genetic algorithm, which is a powerful optimization tool modelled on evolutionary processes. Nature has been used as a model for other optimization algorithms, as well as the basis for various design innovations. In this same vein, ANNs attempt to learn relationships and patterns using a somewhat loose model of neurons in the brain. The perceptron is a model of a single neuron.

In an ANN, neurons receive a number of inputs, weight each of those inputs, sum the weighted inputs, and then transform that sum using a special function called an *activation function*, of which there are many possible types. The output of that activation function is then either used as the prediction (in a single-neuron model) or combined with the outputs of other neurons for further use in more complex models, which we’ll get to in another article.

Here’s a sketch of that process in an ANN consisting of a single neuron:

Here, x1, x2, etc. are the inputs. b is called the bias term; think of it like the intercept term in a linear model y = mx + b. w1, w2, etc. are the weights applied to each input. The neuron first sums the weighted inputs (and the bias term), represented by S in the sketch above. Then S is passed to the activation function, which simply transforms S in some way. The output of the activation function, z, is then the output of the neuron.
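That forward pass can be sketched in a few lines of Python. This is my own illustration of the sketch, not code from the article, and the names are mine:

```python
def neuron_output(inputs, weights, bias, activation):
    """Compute S = sum of weighted inputs plus bias, then z = activation(S)."""
    S = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(S)

def step(S):
    """The perceptron's step activation: fire (1) when S >= 0, otherwise 0."""
    return 1 if S >= 0 else 0
```

For example, with inputs (1, 2), weights (0.5, 1.0), and zero bias, S = 2.5, so the step activation fires.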

The idea behind ANNs is that by selecting good values for the weight parameters (and the bias), the ANN can model the relationships between the inputs and some target. In the sketch above, z is the ANN’s prediction of the target given the input variables.

In the sketch, we have a single neuron with four weights and a bias parameter to learn. It isn’t uncommon for modern neural networks to consist of *hundreds* of neurons across multiple *layers*, where the output of each neuron in one layer is input to all the neurons in the next layer. Such a *fully connected* network architecture can easily result in many thousands of weight parameters. This enables ANNs to approximate any arbitrary function, linear or nonlinear.

The perceptron consists of just a single neuron, like in our sketch above. This greatly simplifies the problem of learning the best weights, but it also has implications for the class of problems that a perceptron can solve.

The purpose of the activation function is to take the input signal (that’s the weighted sum of the inputs and the bias) and turn it into an output signal. There are many different activation functions that convert an input signal in a slightly different way, depending on the purpose of the neuron.

Recall that the perceptron is a binary classifier. That is, it predicts either one or zero, on or off, up or down, etc. It follows then that our activation function needs to convert the input signal (which can be any real-valued number) into either a one or a zero corresponding to the predicted class.

In biological terms, think of this activation function as *firing* (activating) the neuron (telling it to pass the signal on to the next neuron) when it returns 1, and doing nothing when it returns 0.

What sort of function accomplishes this? It’s called a step function, and its mathematical expression looks like this:

f(S) = 1 if S ≥ 0, and 0 otherwise

And when plotted, it looks like this:

This function then transforms any weighted sum of the inputs (S) and converts it into a binary output (either 1 or 0). The trick to making this useful is finding (learning) a set of weights, w, that lead to good predictions using this activation function.

We already know that the inputs to a neuron get multiplied by some weight value particular to each individual input. The sum of these weighted inputs is then transformed into an output via an activation function. In order to find the best values for our weights, we start by assigning them random values and then start feeding observations from our training data to the perceptron, one by one. Each output of the perceptron is compared with the actual target value for that observation, and, if the prediction was incorrect, the weights are adjusted so that the prediction would have been closer to the actual target. This is repeated until the weights converge.

In perceptron learning, the weight update function is simple: when a target is misclassified, we simply take the sign of the error and then add or subtract the inputs that led to the misclassification to or from the existing weights.

If the target was -1 and we predicted 1, the error is -1 - 1 = -2. We would then subtract each input value from the current weights (that is, wi = wi - xi). If the target was 1 and we predicted -1, the error is 1 - (-1) = 2, so we add the inputs to the current weights (that is, wi = wi + xi).

This has the effect of moving the classifier’s decision boundary (which we will see below) in the direction that would have helped it classify the last observation correctly. In this way, weights are gradually updated until they converge. Sometimes (in fact, often) we’ll need to iterate through each of our training observations more than once in order to get the weights to converge. Each sweep through the training data is called an *epoch*.
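Putting the update rule together with the epoch loop, a minimal trainer might look like this. This is my own Python sketch (the article’s implementation is in R), using the -1/+1 target encoding described above; the function name is mine:

```python
def train_perceptron(X, y, epochs=5):
    """Perceptron learning rule: on each misclassification, add or subtract
    the inputs (by the sign of the error) to the weights and bias.
    X is a list of feature tuples; y contains targets encoded as -1 or +1."""
    w = [0.0] * len(X[0])   # initialize weights to zero
    b = 0.0
    for _ in range(epochs):                 # one sweep through the data = one epoch
        for xi, target in zip(X, y):
            S = sum(wi * x for wi, x in zip(w, xi)) + b
            pred = 1 if S >= 0 else -1
            if pred != target:
                sign = 1 if target > pred else -1   # sign of the error
                w = [wi + sign * x for wi, x in zip(w, xi)]
                b += sign
    return w, b
```

On a linearly separable toy problem (logical AND encoded as -1/+1), a few epochs are enough for the learned weights to classify every observation correctly.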

Next, we’ll code our own perceptron learning algorithm from scratch using R. We’ll train it to classify a subset of the iris data set.

In the full iris data set, there are three species. However, perceptrons are for binary classification (that is, for distinguishing between two possible outcomes). Therefore, for the purpose of this exercise, we remove all observations of one of the species (here, *virginica*), and train a perceptron to distinguish between the remaining two. We also need to convert the species classification into a binary variable: here we use 1 for the first species, and -1 for the other. Further, there are four variables in addition to the species classification: petal length, petal width, sepal length and sepal width. For the purposes of illustration, we’ll train our perceptron using only petal length and width and drop the other two measurements. These data transformations result in the following plot of the remaining two species in the two-dimensional feature space of petal length and petal width:

The plot suggests that petal length and petal width are strong predictors of species – at least in our training data set. Can a perceptron learn to tell them apart?

Training our perceptron is simply a matter of initializing the weights (here we initialize them to zero) and then implementing the perceptron learning rule, which just updates the weights based on the error of each observation with the current weights. We do that in a for() loop which iterates over each observation, making a prediction based on the values of petal length and petal width of each observation, calculating the error of that prediction and then updating the weights accordingly.

In this example we perform five sweeps through the entire data set, that is, we train the perceptron for five epochs. At the end of each epoch, we calculate the total number of misclassified training observations, which we hope will decrease as training progresses. Here’s the code:

`# perceptron initial weights`

Here’s the plot of the error rate:

We can see that it took two epochs to train the perceptron to correctly classify the entire dataset. After the first epoch, the weights hadn’t been sufficiently updated. In fact, after epoch 1, the perceptron predicted the same class for every observation! Therefore it misclassified 50 out of the 100 observations (there are 50 observations of each species in the data set). However after two epochs, the perceptron was able to correctly classify the entire data set by learning appropriate weights.

Another, perhaps more intuitive, way to view the weights that the perceptron learns is in terms of its *decision boundary*. In geometric terms, for the two-dimensional feature space in this example, the decision boundary is a straight line separating the perceptron’s predictions. On one side of the line, the perceptron always predicts -1, and on the other, it always predicts 1.

We can derive the decision boundary from the perceptron’s activation function:

z = 1 if S ≥ 0, and 0 otherwise

where

S = w1x1 + w2x2 + b

The decision boundary is simply the line that defines the location of the step in the activation function. That step occurs at S = 0, so our decision boundary is given by

w1x1 + w2x2 + b = 0

Equivalently,

x2 = -(w1/w2)x1 - b/w2

which defines a straight line in x1, x2 feature space.
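Solving the boundary equation w1x1 + w2x2 + b = 0 for x2 gives the line’s slope and intercept; a small helper (my own, hypothetical) makes this explicit:

```python
def decision_boundary(w1, w2, b):
    """Slope and intercept of the line w1*x1 + w2*x2 + b = 0, solved for x2.
    Assumes w2 is nonzero (otherwise the boundary is a vertical line)."""
    return -w1 / w2, -b / w2
```

Any point on the returned line gives S = 0, i.e. it sits exactly on the step of the activation function.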

In our iris example, the perceptron learned the following decision boundary:

Here’s the complete code for training this perceptron and producing the plots shown above:

`### PERCEPTRON FROM SCRATCH ####`

**Congratulations! You just built and trained your first neural network.**

Let’s now ask our perceptron to learn a slightly more difficult problem. Using the same iris data set, this time we remove the *setosa* species and train a perceptron to classify *virginica* and *versicolor* on the basis of their petal lengths and petal widths. When we plot these species in their feature space, we get this:

This looks like a slightly more difficult problem, as this time the difference between the two classifications is not as clear-cut. Let’s see how our perceptron performs on this data set.

This time, we introduce the concept of the *learning rate*, which is important to understand if you decide to pursue neural networks beyond the perceptron. The learning rate controls the speed with which weights are adjusted during training. We simply scale the adjustment by the learning rate: a high learning rate means that weights are subject to bigger adjustments. Sometimes this is a good thing, for example when the weights are far from their optimal values. But sometimes this can cause the weights to oscillate back and forth between two high-error states without ever finding a better solution. In that case, a smaller learning rate is desirable, which can be thought of as fine-tuning of the weights.

Finding the best learning rate is largely a trial and error process, but a useful approach is to reduce the learning rate as training proceeds. In the example below, we do that by scaling the learning rate by the inverse of the epoch number.
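A single weight update under this decaying schedule might be sketched as follows. This is an illustration only, not the article’s R code; the function name and the 1-indexed epoch convention are my assumptions:

```python
def scaled_update(weights, inputs, error_sign, base_rate, epoch):
    """One perceptron update scaled by a decaying learning rate.
    lr = base_rate / epoch: epoch 1 gets the full rate, epoch 2 half, etc."""
    lr = base_rate / epoch
    return [w + lr * error_sign * x for w, x in zip(weights, inputs)]
```

Early epochs make large corrections while the weights are far from good values; later epochs make ever smaller refinements, which damps the oscillation described above.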

Here’s a plot of our error rate after training in this manner for 400 epochs:

You can see that training proceeds much less smoothly and takes a lot longer than last time, which is a consequence of the classification problem being more difficult. Also note that the error rate is never reduced to zero, that is, the perceptron is never able to perfectly classify this data set. Here’s a plot of the decision boundary, which demonstrates where the perceptron makes the wrong predictions:

Here’s the code for this perceptron:

`# load data`

In the first example above, we saw that our *versicolor* and *setosa* iris species could be perfectly separated by a straight line (the decision boundary) in their feature space. Such a classification problem is said to be *linearly separable* and (spoiler alert) is where perceptrons excel. In the second example, we saw that *versicolor* and *virginica* were *almost* linearly separable, and our perceptron did a reasonable job, but could never perfectly classify the whole data set. In this next example, we’ll see how they perform on a problem that isn’t linearly separable at all.

Using the same iris data set, this time we classify our iris species as either *versicolor* or *other* (that is, *setosa* and *virginica* get the same classification) on the basis of their petal lengths and petal widths. When we plot these species in their feature space, we get this:

This time, there is no straight line that can perfectly separate the two species. Let’s see how our perceptron performs now. Here’s the error rate over 400 epochs and the decision boundary:

We can see that the perceptron fails to distinguish between the two classes. This is typical of the performance of the perceptron on any problem that isn’t linearly separable. Hence my comment at the start of this unit (see footnote 2) that I’m skeptical that perceptrons can find practical application in trading. Maybe you can find a use case in trading, but even if not, they provide an excellent foundation for exploring more complex networks which *can* model more complex relationships.

The Zorro trading automation platform includes a flexible perceptron implementation. If you haven’t heard of Zorro, it is a fast, accurate, and powerful backtesting/execution platform that abstracts away a lot of tedious programming tasks so that the user can concentrate on efficient research. It uses a simple C-based scripting language that takes almost no time to learn if you already know C, and a week or two if you don’t (although of course mastery can take much longer). This makes it an excellent choice for independent traders and those getting started with algorithmic trading. The software sacrifices little for the abstraction that enables efficient research, but it isn’t open source, so experienced quant developers, or those with an abundance of spare time, might take issue with that aspect; it isn’t for everyone. Still, it’s a great choice for beginners and DIY traders who maintain a day job. If you want to learn to use Zorro, even if you’re not a programmer, we can help.

Zorro’s perceptron implementation allows us to define any features we think are pertinent and to specify any target we like, which Zorro automatically converts to a binary variable (by default, positive values are given one class, negative values the other). After training, Zorro’s perceptron predicts either a positive or negative value corresponding to the positive and negative classes, respectively.

Here’s the Zorro code for implementing a perceptron that tries to predict whether the 5-day price change in the EUR/USD exchange rate will be greater than 200 pips, based on recent returns and volatility, whose predictions are tested under a walk-forward framework:

`/* PERCEPTRON`

Zorro firstly outputs a trained perceptron for predicting long and short 5-day price moves greater than 200 pips for each walk-forward period, and then tests their out-of-sample predictions.

Here’s the walk-forward equity curve of our example perceptron trading strategy:

I find this result particularly interesting because I expected the perceptron to perform poorly on market data, which I find it hard to imagine falling into the linearly separable category. However, sometimes simplicity is not a bad thing, it seems.

I hope this article not only whetted your appetite for further exploration of neural networks, but also helped you understand the basic concepts without getting too hung up on the math.

I intended for this article to be an introduction to neural networks where the perceptron was to be nothing more than a learning aid. However, given the surprising walk-forward result from our simple trading model, I’m now going to experiment with this approach a little further. If this interests you too, some ideas you might consider include extending the backtest, experimenting with different signals and targets, testing the algorithm on other markets and of course considering data mining bias. I’d love to hear about your results in the comments.

Thanks for reading!

--by Kris Longmore from blog Robotwealth

The post Getting Started with Neural Networks for Algorithmic Trading appeared first on System Trader Success.

]]>The post Secret Weapon of Stock & ETF System Development appeared first on System Trader Success.

]]>This is the second article in a two-part series where I discuss the top three pitfalls when backtesting Stock & ETF trading systems. In the first article, The Top Three Pitfalls of Stock and ETF System Development, I highlighted the top three issues system developers face. In this article, I'm going to show you how I fixed these problems to produce precise and accurate historical backtest results.

As many of you know, I designed the ETF trading strategies for Tuttle Tactical Management, a firm that manages over $200 million in ETFs. I started working with Tuttle in 2007; one of the reasons I was hired as a consultant was my expertise in developing realistic equity backtests. Aware of the glaring issues in backtesting both stocks and ETFs with traditional trading platforms, the only real fix I could see was to build my own: a system development platform that addresses these issues.

**Behold! The Secret Weapon: TradersStudio**

The platform I developed was TradersStudio 1.0, back in 2003-2004. I improved the algorithm in 2005 and it has remained the same since. My algorithm requires three different types of data streams for each equity being traded. To generate realistic results, TradersStudio needs split-adjusted data. Many data vendors roll split and dividend adjustments into one series; this is not ideal because, over longer historical series, it compounds the precision errors in the split-adjusted data.

Next, TradersStudio uses dividend-only adjusted data. This simply means the dividends are subtracted from the series without any further adjustments. CSI Data is the only vendor that lets you produce this series because it, in effect, gives away its stock and ETF dividend database. Finally, TradersStudio needs the totally unadjusted data series. My algorithm then allows me to accurately backtest portfolios of stocks and ETFs with any money management strategy you like and get accurate results. You can see the splits and the dividends in the trade-by-trade reports, along with real prices.

TradersStudio is the only product on the market that can produce a table of stock splits and dividends when CSI stock data, or data from another vendor, can be output in the correct format. Let’s look at the poster child for split-and-dividend problems: Microsoft. This analysis is shown in the splits and dividends report.

This report shows the date of each split and dividend. The number of shares (288) was calculated by multiplying the stock split ratios (2.0 * 2.0 * 1.5 * 2.0 * …). From this report we can also see that Microsoft did not pay dividends until February 19, 2003. This is a valuable report because it allows us to audit our testing results and see what has happened. The same report is created when this type of analysis is performed on a portfolio of stocks: run on the S&P 500 or NASDAQ 100, for example, you would get the same report, ordered by date, for all stocks in the portfolio.
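The 288 factor is just the product of the split ratios. A quick Python sketch — note that the individual ratios listed below come from Microsoft's public split history and are an assumption on my part, since the article elides most of them:

```python
from functools import reduce

# Microsoft's split ratios (assumed from the public record; the article's
# report shows only the cumulative result):
# 2:1 in 1987 and 1990; 3:2 in 1991 and 1992; 2:1 in 1994, 1996, 1998, 1999, 2003
ratios = [2.0, 2.0, 1.5, 1.5, 2.0, 2.0, 2.0, 2.0, 2.0]

# Cumulative split factor: shares one original share has become.
split_factor = reduce(lambda a, b: a * b, ratios, 1.0)
print(split_factor)  # 288.0
```

The product is order-independent, so any ordering of the same ratios yields the same factor of 288.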

Consider now the buy-and-hold calculation. Since the split factor for Microsoft is 288, our formula is 288 * (final price – original split-adjusted price).

Since there are issues with “maxbars back”, we use the split-adjusted open of the first bar, 4/25/1986, which was $0.11111 (this is the “Maxbarsback + 2” bar). Our final closing price is $46.29, the “lastbar - 1” close on 7/29/2015.

288 * ($46.29 [final price] - $0.11111 [split-adjusted open of first bar]) = $13,299.52

In our buy-and-hold calculation, we also need to adjust for dividends. Since Microsoft had already split 288-for-1 by the time it paid any dividends and has not split during the dividend period, our calculation is easy. Simply add the dividends from the Splits and Dividends report at $10.28 per share and multiply that value by 288. This gives a total of $2,960.64. Add the two numbers together and we get $16,260.16, which exactly matches the buy-and-hold return on the summary report.
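The full buy-and-hold arithmetic, using the figures above, can be reproduced in a few lines of Python (variable names are my own):

```python
SPLIT_FACTOR = 288
first_open_adj = 0.11111     # split-adjusted open, 4/25/1986
last_close = 46.29           # close, 7/29/2015
dividends_per_share = 10.28  # summed from the Splits and Dividends report

# Price appreciation on one original share, scaled by the split factor.
price_gain = SPLIT_FACTOR * (last_close - first_open_adj)
# Dividends paid after the stock had fully split, so a flat multiply works.
dividend_gain = SPLIT_FACTOR * dividends_per_share
total = price_gain + dividend_gain
print(round(price_gain, 2), round(dividend_gain, 2), round(total, 2))
# 13299.52 2960.64 16260.16
```

Note the dividend leg is only this simple because all of Microsoft's splits preceded its dividends; otherwise each dividend would need to be scaled by the split factor in effect when it was paid.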

A TradersStudio stock session has two different Trade-by-Trade Reports. The standard report shows the entry and exit prices in split-adjusted values, but the P/L (profit/loss) on each trade is not the P/L you would calculate from split-adjusted data alone; it is the real P/L on the trade. The standard Trade-by-Trade Report is used so that the entry and exit prices match the prices you see on the chart, while the Trade-by-Trade Real Price Report gives you real-world results. First, let’s look at the standard report. The split-adjusted price of Microsoft is less than $0.12 on June 5, 1986, while the real price was $34!

The other report is the Trade-by-Trade Real Price Report which shows the real entry and exit prices. For example, on June 5, 1986, Microsoft was purchased at $34.25 and exited at $33.75 on June 6, 1986.

This demonstrates how buy-and-hold is calculated and how TradersStudio gives you actual entry and exit prices as well as a splits and dividends history. There is a problem inherent in this analysis, and it’s the reason the system appears to capture only a tiny fraction of buy-and-hold (just 376.99 points when buy-and-hold was over $16,000).

The answer is that we should be buying the number of shares that one original share has become, not just one share as in our first test. The size of each position should equal the current split factor.

To do this, we need to modify our simple system by adding a call to the split factor for sizing, plus a flag so that buy-and-hold still represents starting with one share. The code appears as follows.

'Simple ORB system to trade stocks, backtest adjusted for
'splits to create correct comparison to buy-and-hold
Sub QQQBreakOutStockTest(MULT)
    Dim AveTr
    Dim Nxtopen
    Dim NumLots
    NumLots = splitfactor
    BuyAndHoldSingle = True
    If BarNumber < LastBar Then
        Nxtopen = NextOpen(0)
    Else
        Nxtopen = 0
    End If
    If Close > Open Then
        Sell("SellBrk", NumLots, Nxtopen - MULT * TrueRange, Stop, Day)
    End If
    If Close < Open Then
        Buy("BuyBrk", NumLots, Nxtopen + MULT * TrueRange, Stop, Day)
    End If
End Sub

A new function called “splitfactor” is called. This number represents the number of shares that our original single share of stock has become after multiple splits. There is also a flag called “BuyAndHoldSingle”, which is set to true. This bases the buy-and-hold analysis on one original share rather than the original number of shares purchased. For example, if our analysis started with 100 shares, buy-and-hold would be calculated on 100 original shares if “BuyAndHoldSingle” were set to false. This means the number of Microsoft shares bought and sold on each signal changes according to the split schedule; before September 21, 1987, only one share would be purchased.

By making this significant change, the analysis is now correctly comparing apples to apples and we see how the QQQBreakout system has done.

Indeed, our original system, which looked like a joke at $376.99 versus over $16,000 for buy-and-hold, now looks more respectable at $8,651.23, a little more than half of buy-and-hold. I am not arguing that this is a good system, but you can see how ensuring that we are truly comparing apples to apples can change a system that looks dismal at first into something that actually makes sense.

When working with a stock database, calculations should be carried to four decimal places, including the period before decimalization, to increase accuracy. Some stock database vendors have kept their databases at only two decimal places even after decimalization, an oversight that destroys the accuracy of the data. Using CSI Data avoids this issue entirely, as they have done a good job of maintaining their data accurately. The purpose of this example was not to give out a great system, but to illustrate and explain the issues related to trading stocks with stock splits and dividends.

Learn more about this amazing platform by visiting TradersStudio.com

The post Secret Weapon of Stock & ETF System Development appeared first on System Trader Success.

]]>The post Randomly Pushing Buttons appeared first on System Trader Success.

]]>Before my current circumstances, and before I was a photographer (see above), I used to make music for a living. Specifically, weird-ass techno/electronic music that many people found difficult or annoying. One of the ways I would find sonic inspiration was to use audio software to generate random sounds. I would record this stream of noisy squawkiness, sift through a lot of garbage, and occasionally find a useful gem. I would take these little bits of useful audio and turn them into gritty, weird dance music.

It’s possible to find dedicated software that dives deeply into finding non-obvious, non-linear connections between “features” of price data. For example, we can ask: if today’s high in the price of oil is above its 3-day moving average, and the S&P 500’s close is below yesterday’s open, will gold go up tomorrow? The danger, of course, is that you might – no, you WILL – hit upon a great-looking system that just happens to look good over your test period but fails in real life. “Curve fitting” is a fact of life when developing trading systems, and you must take steps to reduce it.

Recently, I thought it would be fun to put together a rudimentary script that tests various combinations of the open, high, low, and close of recent days. It can’t be compared to software dedicated to that purpose, of course, but it lets us explore questions such as: if the open of two days ago is above the high of seven days ago, AND the close of three days ago is lower than today’s low, should I buy at the next open? Who knows! This little optimizer will tell me if there’s anything to it.
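As a rough sketch of what such a search space looks like: with 4 OHLC fields and 16 lookback days available on each side of a comparison, you get 64 × 64 = 4,096 ordered pairs — one plausible way (an assumption on my part, not necessarily the article's exact scheme) to arrive at the 4,096 permutations mentioned later for a single comparison. In Python:

```python
from itertools import product

# 4 price fields x 16 lookback days = 64 operands per side of a comparison.
FIELDS = ["O", "H", "L", "C"]
LAGS = range(16)

# Every ordered pair (field_a[lag_a] > field_b[lag_b]) of candidate rules.
rules = [(fa, la, fb, lb)
         for (fa, la), (fb, lb) in product(product(FIELDS, LAGS), repeat=2)]
print(len(rules))  # 4096
```

Each tuple is a candidate rule to score on historical data; adding a second comparison squares the space again, which is why the article warns about processing time.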

Now if you’re like me, you’re thinking what on earth can the comparison between prices 11 and 12 days ago have to do with current prices? Well… maybe something, maybe nothing. Let’s find out.

One thing I realized quickly is that using a 2-day moving average improved my testing. It seems to filter out some noise and improve the signal (or perhaps just lets me curve-fit more tightly?). You could add this as another parameter to test, but adding another dimension increases your processing time.

I call my rudimentary push-button optimizer the “Comparinator”. If you’ve ever watched the cartoon Phineas and Ferb, the semi-evil Dr. Doofenshmirtz invents all sorts of evil devices with names ending in “-inator”. Now you can evilly compare OHLC data in the comfort of your own secret lair.

“Holy crap-inator, Matt, can you get to the point already? Show us a graph or something!”

OK, here’s a graph. Do you like it?

The gray line is SPY buy-and-hold. The orange is a system that the Comparinator developed. The out-of-sample performance is quite good. Also, note that this system is long-only and seems to love the volatility of bear markets. It doesn’t like political shenanigans such as what happened in 2011, but it still does exceedingly well. Wait until you hear the trading details, which seem a little, well… random.

By the way, I developed this using a super-secret trading system development platform (“SSTSDP”) for which I’m an alpha tester. The good news is that I’ve slapped together some AmiBroker code that does the same thing.

Here’s the pseudo code for the entry (using the SSTSDP syntax). This was created to trade SPY. When the below code evaluates to ‘true’, go long at the open of the next day.

`ma(C[1],2)>ma(C[0],2) and ma(C[10],2)>ma(O[11],2)`

In plain English: when the 2-day moving average of the close of one day ago is greater than the 2-day moving average of the close of today, AND the 2-day moving average of the close 10 days ago is greater than the 2-day moving average of the open 11 days ago, go long at the open of the next day.

Clear as mud, right? Let’s see if we can simplify it.

`ma(C[1],2) > ma(C[0],2)`

The first half of that expression simplifies to: today’s close is below the close of two days ago. (Since ma(C[1],2) = (C[1]+C[2])/2 and ma(C[0],2) = (C[0]+C[1])/2, the shared C[1] term cancels, leaving C[2] > C[0].) That’s much easier to reason about. Just don’t forget to include the other part. I attempted to rationalize why that part of the equation improves results, but I’ve decided just to point out that it seems to work.

The exit is simple: when the first part of the entry condition is no longer true – i.e., when today’s close is NOT below the close of two days ago – exit at the open of the next day. Often this is a 24-hour hold, but the average is two days.
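The simplification claimed above can be checked numerically. Here is a small Python sketch; the `ma2` helper is my stand-in for the platform's 2-day moving average of a lagged series, an assumption about its indexing:

```python
import random

def ma2(series, lag):
    """2-day simple moving average of the close `lag` bars ago:
    the mean of series[-1-lag] and series[-2-lag]."""
    return (series[-1 - lag] + series[-2 - lag]) / 2.0

# ma(C[1],2) > ma(C[0],2)  <=>  (C[1]+C[2])/2 > (C[0]+C[1])/2  <=>  C[2] > C[0]
random.seed(1)
for _ in range(1000):
    closes = [random.uniform(90, 110) for _ in range(5)]
    lhs = ma2(closes, 1) > ma2(closes, 0)
    rhs = closes[-3] > closes[-1]   # close two days ago vs. today's close
    assert lhs == rhs
```

The shared middle term cancels on both sides of the inequality, so the two formulations agree on every random path tested.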

Here’s some AmiBroker code you can play with to come up with your own variations.

A suggestion for using the code: run it to optimize one comparison first (which requires 4,096 permutations), using whatever fitness metric you prefer. I chose the best Sharpe ratio with at least 200 trades over the period 2000-2010. Then make those values permanent. Next, uncomment the second section of the code for fine-tuning. Play around with the entries and exits: same-day close, next-day close, limit orders, etc.

Remember also to test on only a portion of your data, leaving some as out-of-sample data to verify that the system works. Even then, don’t start trading actively right away. Let your systems stew for a while, or trade with tiny sums.

This code is child’s play compared to a fully compiled, dedicated application. Use it as a springboard for your own ideas.

`#include_once "Formulas\Norgate Data\Norgate Data Functions.afl"`

— By Matt Haines from Throwing Good Money After Bad

The post Randomly Pushing Buttons appeared first on System Trader Success.

]]>The post Simulation: Beyond Backtesting appeared first on System Trader Success.

]]>And yet backtesting largely assumes that the future will resemble the past. We can, however, imagine non-repeating but predictable profit opportunities. Even setting those aside, if we can model the dynamics of the market accurately, we can predict new outcomes that cannot be extrapolated from the past.

The way this is accomplished is simulation. Simulation offers the powerful promise of letting us use historical market data under varying assumptions about how similar the future will be. Massive simulation is also poised to impact every aspect of our lives.

Imagine for a moment that you are a world-class MMA fighter or boxer competing against a similarly well-ranked opponent. What should your strategy be? In the past, you might have studied your opponent and intuited a strategy. If you were more sophisticated, you might even have used crude statistics, such as counting, to estimate the risk and probability of a given move working. But today it is surely possible to feed your moves into a computer with precise timing and force calculations, infer the same for your opponent from previous fight videos, and, using each fighter’s height, weight, and other statistics, model how well they could perform even moves that were never recorded. Once all the data is in the computer, you can run thousands or hundreds of thousands of fight simulations. The best-performing simulations will yield the best strategies, which may be non-intuitive and completely innovative. These can be used, with human cognition and consideration, as the basis for your game plan.

Now imagine how this would work for a trader. It is not just a matter of running thousands of simulations on past data: you must also infer how future traders will react to changing market conditions. This is the difficult part, because you need to know how the combination of variables will impact their behavior.

Even if that level of simulation is beyond the average developer’s capability, or can only provide rough approximations given the difficulty of modeling, it is still possible to start thinking along the lines of simulation to explore creative opportunity and risk management.

Some ideas for how you might do this:

- Use random and partially randomized entries and exits to try to find more universal or robust settings for your strategies.
- Create synthetic market data where you change the amount of volatility, trend, and mean reversion to see how it might impact your strategies.
- Create models of how traders might act in certain scenarios and look for situations that might offer predictive advantage.
- Use Monte Carlo analysis with randomized entries to come up with pessimistic capital requirements.
- Try to find optimal strategies for given market conditions.
- Build self-learning strategies with limited capacity for memory and try to find the optimal rules for trading.
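The second idea above — synthetic market data with tunable volatility, trend, and mean reversion — can be sketched in a few lines. The model below (geometric daily steps with a drifting anchor and a reversion pull) is one simple choice among many, not a prescription:

```python
import random

def synthetic_prices(n=252, drift=0.0003, vol=0.01, mean_rev=0.05, seed=42):
    """Generate a synthetic daily price series with tunable trend (drift),
    noise (vol), and mean reversion (pull back toward a slow anchor)."""
    random.seed(seed)
    price, anchor = 100.0, 100.0
    out = []
    for _ in range(n):
        shock = random.gauss(0.0, vol)
        pull = mean_rev * (anchor - price) / price   # reversion toward anchor
        price *= 1.0 + drift + pull + shock
        anchor += drift * anchor                     # anchor drifts with trend
        out.append(price)
    return out

# A trending series and a choppy, mean-reverting one from the same generator:
trending = synthetic_prices(drift=0.001, mean_rev=0.0)
choppy = synthetic_prices(drift=0.0, mean_rev=0.3)
```

Running the same strategy over many such series with different parameter mixes shows which regimes it depends on, which is exactly the stress test the bullet suggests.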

–by Curtis White from blog, Beyondbacktesting

The post Simulation: Beyond Backtesting appeared first on System Trader Success.

]]>The post Why I Prefer Trailing Stops appeared first on System Trader Success.

]]>There are five different types of strategy exits – opposite trade signal, stop loss, time exit, profit target, and trailing stop. I’d like to elaborate on the last two and how they relate to our topic. In my opinion, using a profit target runs counter to the wisdom cited above. That’s why, in all my systems, I use a trailing stop instead of a target order.

On the chart above I have marked four days of price action in the EUR/USD market during 2015. Over four trading sessions, the market gained more than 670 pips – a huge short-term move without any visible correction. Moves like this don’t happen every month, but when they occur I’d like to be on the right side of the market for as long as possible.

If I used a profit target of 100 pips, I would miss more than 80% of that move; even with 200 pips, I would miss 70%. The solution I found for myself is a trailing stop. I plot a trend-following indicator, such as a moving average or Parabolic SAR, or any other tool that works well during trending moves, and my stop loss order simply follows the indicator according to the settings and time frame I have chosen. For my long-term strategies I use a very wide trailing stop, and for my short-term systems a tight one. With this very simple tool, I can be confident that if the market moves a long way in one direction, I am very likely to catch a big part of it. How big depends entirely on the chosen settings, which are a function of the strategy and each trader’s personal preferences: aggressive traders use tight stops, while conservative traders prefer wide ones.
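The mechanics just described — a stop that follows a trend indicator — can be sketched simply. Here is a minimal Python illustration using a simple moving average as the trailing level (the lengths and prices are invented for the example):

```python
def trailing_stop_exit(closes, ma_len=10):
    """Hold a long position until price closes below its moving average;
    return the index of the exit bar (or None if never triggered)."""
    for i in range(ma_len, len(closes)):
        ma = sum(closes[i - ma_len:i]) / ma_len   # average of prior bars
        if closes[i] < ma:                        # the stop has been hit
            return i
    return None

# Rises for 20 bars, then breaks down: the stop lets the trend run
# and exits only once price falls back through the average.
prices = [100 + i for i in range(20)] + [119, 110, 105]
exit_bar = trailing_stop_exit(prices)
print(exit_bar)  # 21
```

A wider stop (longer `ma_len`, or a buffer below the average) exits later and gives the trend more room, which is the long-term versus short-term trade-off described above.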

It is also possible to combine a target and a trailing stop. Instead of exiting at your predetermined target, you can use it as the activation point of a new trailing stop, which, as I’ve said above, could be wide or tight. Once the target is hit, we begin moving our stop loss order as the price continues in the desired direction.

If the market advances a long way in our direction, we could use the second-target option presented above: once it is hit, we lock in more profit than initially. The reasoning here is that the price has already moved a lot, so the chances of the bull trend continuing are now much lower. This is very convenient for those who don’t like carrying large open profits, and is again a very good combination of both approaches.

Choosing a fixed target is hard and counterproductive because the market is always changing: a good 100-pip target today will be worthless in a wild, volatile market, where you could miss very big moves. The opposite also applies – a 200-pip target during quiet market conditions is a sure path to never reaching the target and missing profits again.

I like to think that if the market is willing to give me only 50 pips, I will gladly take them. If the market is exploding and could give 500 pips or more in just a few sessions, then I want to be ready to grab them. I don’t like to force the market to hit my targets. I need to be very flexible, because the market is constantly changing, and an approach that has produced excellent results over the past few months may stop being profitable.

As proof of the concept described above, I’d like to present an example of a simple Forex trading strategy designed for the EUR/USD pair. It is a short-term trend-following system based on a daily-bar volatility breakout pattern. I ran four backtests in which the only difference in the strategy’s inputs is the exit: one trailing stop option and three profit-target variations. The backtests cover 15 years of data, from 2001 to 2015. Here are the results:

As you can observe, the trailing stop produces the best gain and the lowest maximum drawdown; the bigger the target, the worse the results.

Below are the equity curves of all backtests:

As expected, the first curve, which represents the trailing stop option, is the best and the best-looking.

I hope I have added to your knowledge of how to stay in sync with current market conditions and be prepared to grab the big moves that occur from time to time. I wish you profitable trading!

— By Professional Trading Systems

The post Why I Prefer Trailing Stops appeared first on System Trader Success.

]]>The post The Top Three Pitfalls of Stock And ETF System Development appeared first on System Trader Success.

]]>I have found the same is not true for stock and ETF systems, even though both security types are more accessible to the general public. There seem to be several reasons for this. First, the standard split-adjusted data series, used by itself in backtesting, has severe limitations when testing a portfolio. I’ll detail this later in the article, but the summary is that you need unrealistic trading assumptions just to get a backtest that is only moderately distorted from what the real results would have been over a long window of 15-20 years. Certain trade types are off-limits with standard split-adjusted data in a portfolio; for example, you can’t exit just one of the stocks or ETFs on a protective stop and continue trading the others.

Second, many stock and ETF system developers do not understand how to test a system so that valid comparisons to buy-and-hold can be made. It is indeed hard to develop systems that outperform buy-and-hold on a return basis, depending on your testing window: excluding the 2008-2009 crash makes it very difficult, while including that period makes the comparison easier. Outperforming buy-and-hold is a somewhat myopic goal, however. One thing system designers can do is build a system that makes almost as much as buy-and-hold without as much risk – and in the trading world, reducing risk is always critical.

Please note that I am not saying mechanical trading systems cannot beat buy and hold. I am simply saying that you need a good system to beat buy and hold by a sizable margin over a long time period. In fact, I have designed many excellent strategies which greatly outperform buy and hold with a lot less risk. A later installment of this article series will focus on how to develop these strategies.

For now, let’s look at the beginning of how to develop stock and ETF strategies. Fundamental to designing these systems is understanding the concepts behind split-adjusted data and some other general issues relating to equity data.

**Split-Adjusted Data**

A major reason traders believe they cannot build mechanical trading systems that outperform “buy-and-hold” is that they do not understand the issues involved in preparing the data, and the effects that differences in the data can have on the results.

Most stock traders use what is called “split-adjusted” data, which is similar to “ratio-adjusted” data in the commodities world. The problem with this type of data is that the adjustment destroys the dollar returns, the historical daily range, and the original price levels. Its only advantage is that it correctly preserves percentage returns.

If the price of a stock moves too high or too low, a company’s management can split the stock to encourage trading; the vast majority of splits occur because the price has become too high. Consider the case where the price rallies to $50 per share and management splits 2-for-1. The company’s valuation is unchanged, but there are now two $25 shares on the books for every $50 share that previously existed. If the gap on the chart is not smoothed, false trades occur in testing, because the unadjusted chart looks as if the price dropped from $50 to $25. A protective stop at $40 might trigger in the backtested results, yet this is a false order, because the company’s valuation didn’t change with the drop from $50 to $25. “Split-adjusted” data is a way to solve this problem.

The split-adjusted data stream is produced by dividing the prices prior to a stock split by the factor of the stock split. For example, in our case above, the dividing factor would be 2 (for the 2 for 1 split). This sounds like a great solution, right? Not quite. This has the effect of cutting the previous daily ranges in half. Instead of one day being between $45 and $55 per share, that day now traded between $22.50 and $27.50. The problem really occurs when a stock has been split many times. When this occurs, the split-adjusted data can get ridiculous. An extreme example of this is Microsoft. The split-adjusted price is $0.11 per share if stock splits are handled all the way back to 1988. The real price per share at that time was about $34!
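The adjustment itself is mechanical. Here is a minimal Python sketch of the division step just described, using toy prices and a single 2-for-1 split:

```python
def split_adjust(prices, split_events):
    """Divide all prices before each split by the split ratio, so the
    series is continuous across splits.
    split_events: list of (index_of_first_post_split_bar, ratio)."""
    adj = list(prices)
    for idx, ratio in split_events:
        for i in range(idx):
            adj[i] /= ratio
    return adj

# $50 stock splits 2-for-1 at bar 3: the raw series shows a false 50% drop,
# while the adjusted series is smooth.
raw = [48.0, 50.0, 50.0, 25.0, 26.0]
adjusted = split_adjust(raw, [(3, 2.0)])
print(adjusted)  # [24.0, 25.0, 25.0, 25.0, 26.0]
```

Note how every pre-split value, including the daily range, is halved — exactly the distortion the article goes on to describe for heavily split stocks like Microsoft.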

Another problem has to do with the way the major markets operated in the past. Stocks were priced in fractions until quotes changed to the current decimal format, and many of the available public stock databases calculate to only two decimal places. When a stock like Microsoft has split so many times, a split-adjusted $0.01 move scales out to $3.40! To compensate, a minimum of four decimal places needs to be stored, and more would be better in many cases. Another problem with split-adjusted data is calculating commissions: the actual dollar values are meaningless, and backtest results can only be interpreted as percentage returns.

The core premise of split-adjusted data is that the percent return is correct. The next step in this logic is that always buying the same amount of each stock traded will make percentage return numbers also correct. However, what happens when you don’t want to buy the same dollar value of each stock? What if you want to use a percent risk model whereby position size is based upon how much risk is assumed in a given trade? For example, suppose you decide to risk 1% of your account on a given position. For a $100,000 account, the risk would be $1,000 on a given trade.

Disregarding split-adjusted data for the moment, presume that our system rules exit a long position at a 10-day low. If that low is $1 per share away, it is possible to buy 1,000 shares of that stock. With split-adjusted data this is not possible (remember the Microsoft price disparity). The problem is amplified when looking at a portfolio of stocks, where risk analysis must be performed on each one. Since portfolio-level analysis in backtesting software is a relatively new development, many of these issues have yet to be addressed.

A percent-risk money management strategy might require placing a large percentage of your account into one position. What if the example stock with the $1.00-per-share risk were a $100-per-share stock? In that case, 100% of the account would have to go into that position, even while we might be trying to trade a basket of 100 stocks – and we would still be within our system’s directives, because we are risking 1% of the account’s value on the trade. In summary, it is essential to know not just the split-adjusted price, for correct trade prices, but also the unadjusted price, so that entries and exits, dollar returns, percentage returns, and the amount of money committed to a position can all be calculated accurately.
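The percent-risk arithmetic in this example is easy to sketch. A small Python illustration (the function name and the round-down choice are my own):

```python
import math

def percent_risk_size(account, risk_pct, entry, stop):
    """Shares to buy so that (entry - stop) * shares risks risk_pct of the
    account. Only meaningful with real (unadjusted) prices."""
    dollar_risk = account * risk_pct
    per_share_risk = entry - stop
    return math.floor(dollar_risk / per_share_risk)

# $100,000 account, 1% risk, stop $1 below entry -> 1,000 shares...
shares = percent_risk_size(100_000, 0.01, 25.0, 24.0)
# ...but the same rule on a $100 stock commits the entire account:
exposure = percent_risk_size(100_000, 0.01, 100.0, 99.0) * 100.0
print(shares, exposure)  # 1000 100000.0
```

The sizing rule is satisfied in both cases, which is exactly why a capital-exposure cap is needed alongside the risk cap when trading a basket.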

**Business Survivorship**

Another issue in stock system testing is “business survivorship”. For example, how valid is a test on the current S&P 500 when results are overstated because stocks such as WorldCom and Enron are not included in the backtest? Survivorship is an issue many traders choose to ignore completely, yet it is one they will likely face in real life, because old data for many delisted stocks is not readily available. Whether you account for it in your testing is entirely up to you; however, you must realize it exists when evaluating a trading strategy.

**Dividends**

Dividends can create problems with split-adjusted data as well. This used to be mostly an issue for Dow 30 companies and utility stocks, but changes in tax treatment – options must now be expensed, while dividends receive better tax treatment – have made dividends more popular. The price of a stock drops on the day the dividend is assigned to the existing owner, which causes a downtick on the chart. When analyzing mature companies like those in the S&P 500, remember that dividends can account for as much as half the return of a buy-and-hold strategy; when dividend profits are reinvested in stocks and compounded, their effect on returns is very powerful.

Other dividend-related problems can cause major changes in system results even when dividend-adjusted data is used. Assume our system rules say to buy at the highest high of the last 12 months. The stock’s high during that time was $40, but it has paid a $2 dividend since that high. Most system developers would buy when the price exceeds $40, but since the dividend was subtracted after the high was made, the real adjusted 12-month high is $38. Buying at $38 instead of $40, across hundreds of trades over decades, would make the system’s results drastically different!
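The breakout-level correction described here is a one-line adjustment. A tiny Python sketch using the article's numbers:

```python
def dividend_adjusted_high(raw_high, dividends_since_high):
    """Breakout level on a dividend-adjusted series: the raw 12-month
    high minus the dividends paid since that high was set."""
    return raw_high - sum(dividends_since_high)

# Raw high $40, $2 paid since: the breakout really triggers at $38, not $40.
level = dividend_adjusted_high(40.0, [2.0])
print(level)  # 38.0
```

On a dividend-adjusted series the chart already reflects this level; it is systems computed from raw highs that enter two dollars late.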

Another issue to consider when trading stocks is the use of fundamental data. Most data vendors do not carry fundamental data, and those that do typically maintain only about one year of it in their databases, which makes it very difficult to backtest a strategy. That is only one of the problems with trading stocks on fundamentals: long-term histories of fundamental data do exist, but they are usually quite expensive.

As you can see, stock traders can tend to distrust mechanical systems because of how difficult it is to isolate these issues and obtain backtest results that would match real-world performance. Looked at objectively, these problems make it easy to lose faith in a system. After all, if the data is not clean, how can we realistically test any system and know how it will hold up with real money on the line? A trader needs a tremendous amount of faith in a system, and that comes only after extensive backtesting on clean data with the right backtesting platform.

**How Can We Solve These Issues in Backtesting ETF and Stock Systems?**

How we solve these issues will be covered in next week’s article. In it, I’m going to show you how to correct the problems we discussed and, in doing so, create systems that backtest accurately so you can become more successful in trading stocks and ETFs.

The post The Top Three Pitfalls of Stock And ETF System Development appeared first on System Trader Success.

For this entire article, the backtest runs from January 1, 2000, to December 31, 2016. I will deduct $5 in commissions and two ticks of slippage per round trip, trading one contract per signal on a $100,000 account. Profits will not be reinvested. The backtest will be conducted on a basket of index futures. The markets I will use are:

- E-mini S&P
- E-mini DOW
- E-Mini NASDAQ
- E-Mini Russell 2000
- E-Mini S&P MidCap 400

These are interesting results and seem to confirm that, for these stock index markets, this indicator is a decent predictor of market turning points. We have a profit of over $434K, which gives us a compounded annual rate of 10.36%. The profit factor is 1.33, and drawdown exceeded 18% only once. Let’s now compare it to another popular indicator used to locate potential turning points.

I created a simple strategy that opens long trades when the 2-period RSI crosses below 10 and sells short when the RSI crosses above 90. This is similar in concept to John’s Oscillator, as both strategies are always in the market, either long or short. Below are the results of this strategy.
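As a rough illustration of that signal logic (a sketch, not the author's TradeStation code), here is a minimal 2-period RSI using Wilder's smoothing, with the long/short thresholds described above; data handling is simplified to plain lists:

```python
# Minimal 2-period RSI (Wilder's smoothing) and the long/short
# threshold logic described in the article. Illustrative only.
def wilder_rsi(closes, period=2):
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed the averages with a simple mean, then apply Wilder's smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    rsi = []
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
        rsi.append(100.0 if avg_loss == 0 else
                   100.0 - 100.0 / (1 + avg_gain / avg_loss))
    return rsi  # one value per bar after the warm-up period

def signal(rsi_value):
    """Always-in-market logic: long below 10, short above 90, else hold."""
    if rsi_value < 10:
        return "long"
    if rsi_value > 90:
        return "short"
    return "hold"
```

On a steadily rising series the 2-period RSI pins near 100 (a short signal), and on a steadily falling series it pins near 0 (a long signal), which is exactly the mean-reversion behavior this strategy exploits.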

In this case, we can see the 2-period RSI underperform John’s Oscillator. Not only does it underperform in net profit, profit factor, Sharpe ratio, and average annual return, but the drawdown is also larger. We have a profit of over $235K, which is about $199K less than John’s Oscillator. The compounded annual rate is 7.37%, the profit factor is 1.24, and drawdown exceeds 20% many times, peaking at around 48%.
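As a sanity check on the compounded annual rates quoted for the two strategies, here is a quick sketch assuming the 17-year test window (January 2000 through December 2016) and the $100,000 starting account stated earlier:

```python
# Verify the quoted compounded annual rates from total profit.
# Assumes 17 full years and a $100,000 starting account.
def cagr(start, end, years):
    """Compound annual growth rate from start to end equity."""
    return (end / start) ** (1 / years) - 1

years = 17
start = 100_000

print(f"John's Oscillator: {cagr(start, start + 434_000, years):.2%}")  # 10.36%
print(f"2-period RSI:      {cagr(start, start + 235_000, years):.2%}")  # 7.37%
```

Both figures match the article's quoted rates, which confirms the rates are computed on final equity (starting capital plus accumulated, non-reinvested profits).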

John’s Oscillator does appear to pick turning points better than the 2-period RSI on the stock index markets. Using John’s Oscillator combined with these markets might just be a great place to start building a profitable trading system.

In a future article, I’m going to compare it to a few other indicators and then move to other markets such as currency futures, commodities, and bonds.

If you want to get personalized help on how to use advanced cycle and DSP technology in your trading, you’ll want to check out John’s Workshop.

- John’s Oscillator Strategy, Function and Indicator (TradeStation ELD)
- Workspace containing all markets tested (TradeStation TSW)
- The text for this strategy is available in the article, Predictive Indicators

The post Using this Indicator Makes $199K More Profit appeared first on System Trader Success.
