NumPy Strategies 0.1.1
- I want to make huge profits on the stock market with Markov chains
You think for a few seconds and say: “We can implement this easily with NumPy.” You split the User Story into Tasks, the first task being to determine the state transitions and define the Markov chain model.
A Markov chain is a system with at least two states that switches between them at random, where the probability of the next state depends only on the current state, not on the earlier history. I would like to define a Markov chain for a stock – AAPL (disclaimer: I own AAPL shares). Let’s say that we have the states flat (F), up (U) and down (D). We can determine the states from the end-of-day close prices.
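The sign of the day-over-day price difference maps naturally onto these three labels. A minimal sketch of that mapping (the `STATE_LABELS` dict and `label_state` helper are my own illustrative names, not from any library):

```python
# Map the sign of a close-to-close difference to a state label:
# -1 -> down (D), 0 -> flat (F), 1 -> up (U).
STATE_LABELS = {-1: 'D', 0: 'F', 1: 'U'}

def label_state(sign):
    """Return the state label for a sign value (-1, 0 or 1)."""
    return STATE_LABELS[sign]

print(label_state(1))   # prints U
print(label_state(-1))  # prints D
```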
1. Obtain 1 year of data
from datetime import date
from matplotlib.finance import quotes_historical_yahoo

today = date.today()
start = (today.year - 1, today.month, today.day)
quotes = quotes_historical_yahoo('AAPL', start, today)
2. Select the close price
We now have historical data from Yahoo Finance. The data is represented as a list of tuples, but we are only interested in the close price. We can select the close prices as follows.
close = [q[4] for q in quotes]
print(len(close))
The close price is the fifth number in each tuple. We should have a list of about 253 close prices now.
3. Determine the states
Finally, we can determine the states by subtracting the close prices of consecutive days with the NumPy diff function. The state is then given by the sign of each difference. The NumPy sign function returns -1 for a negative number, 1 for a positive number, and 0 for zero.
states = numpy.sign(numpy.diff(close))
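To see what diff and sign do together, here is a toy run on made-up prices (not real AAPL data):

```python
import numpy

# Made-up close prices to illustrate the transformation.
close = [100.0, 101.5, 101.5, 100.0, 102.0]

# diff gives the day-over-day changes: [1.5, 0.0, -1.5, 2.0]
# sign turns those changes into states.
states = numpy.sign(numpy.diff(close))
print(states.tolist())  # [1.0, 0.0, -1.0, 1.0]
```

So four prices in a row yield three transitions: up, flat, down, up.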
The nice thing about this NumPy code is that it needs hardly any for loops. I am willing to bet that it is much faster than equivalent “normal” Python code, so it would be nice to measure the difference. Let’s leave this as an exercise for the reader.
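For readers who want a head start on that exercise, here is a rough timing sketch with timeit. The synthetic price list and the loop-based sign computation are my own stand-ins, not from the post; real measurements would of course use the downloaded quotes.

```python
import random
import timeit

import numpy

# Synthetic stand-in for roughly one year of close prices (an
# assumption; the real data comes from the Yahoo download above).
close = [100 + random.random() for _ in range(253)]

def numpy_states():
    return numpy.sign(numpy.diff(close))

def python_states():
    # Plain-Python equivalent: sign of each day-over-day difference.
    states = []
    for i in range(1, len(close)):
        d = close[i] - close[i - 1]
        states.append((d > 0) - (d < 0))
    return states

# Both versions should agree on the states.
assert list(numpy_states()) == python_states()

print('numpy :', timeit.timeit(numpy_states, number=1000))
print('python:', timeit.timeit(python_states, number=1000))
```

The `(d > 0) - (d < 0)` trick is just an integer sign function; how large the speed gap turns out to be depends on the array size and your NumPy build.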
If you liked this post and are interested in NumPy check out NumPy Beginner’s Guide by yours truly. Last time I checked it was on the bestsellers list of Packt Publishing. This week a review of the book by Marcel Caraciolo generated a lot of buzz on the Internet. Please show your support for this review. I will be back next week with the continuation of the Markov chains story.