ElasticNetCV from scikit-learn combines the L1 penalty of the Lasso with the L2 penalty of Ridge regression, which makes it suitable for generalized linear models such as the one I will be discussing today. The CV stands for cross-validation, which is a tedious but necessary process.
Oh yeah, before I forget: this is one of those finance posts again. Sorry. To determine resistance and support levels we can use a linear combination of the previous period's close, high and low prices. This is technically not forecasting, but we can interpret it as such. If you look up "pivot point" on Wikipedia you will find that there are predetermined linear combinations that are commonly used. Instead I am going to go for a general linear autoregressive model with lag 1 – AR(1). The following function applies the algorithm with 10-fold cross-validation:
    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.cross_validation import KFold

    def predict_encv(x, y):
        encv = ElasticNetCV(fit_intercept=True)
        kf = KFold(len(y), n_folds=10)
        errs = []
        for train, test in kf:
            encv.fit(x[train], y[train])
            pred = encv.predict(x[test])
            errs.append(np.abs(pred - y[test]))
        # mean absolute error across all folds,
        # plus the prediction for the most recent day
        return (pred[-1], np.concatenate(errs).mean())
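For comparison with the AR(1) model, here is a small sketch of the classic floor-trader pivot points – one of the predetermined linear combinations the Wikipedia article describes. The function name and the sample prices are mine:

```python
def classic_pivots(high, low, close):
    # Classic pivot point: average of the previous period's high, low, close.
    pivot = (high + low + close) / 3.0
    r1 = 2 * pivot - low    # first resistance level
    s1 = 2 * pivot - high   # first support level
    return pivot, r1, s1

# Made-up sample prices for illustration:
pivot, r1, s1 = classic_pivots(532.75, 522.12, 530.0)
```

Unlike the AR(1) approach, the coefficients here are fixed by convention rather than fitted to the data.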
The preceding function also computes the mean absolute error, and it is called by the following functions, which print out resistance and support level estimates:
    def print_high(high, err, h):
        print "High", round(high, 2), "Yest", h[-1], \
            "change %", round(100 * (1 - h[-1] / high), 2)
        print "High + 1 err", high + err

    def print_low(low, err, l):
        print "Low", round(low, 2), "Yest", l[-1], \
            "change %", round(100 * (1 - l[-1] / low), 2)
        print "Low - 1 err", low - err

    def find_range(h, l, c):
        # features: yesterday's high, low, close; target: today's high or low
        x = np.vstack((h[:-1], l[:-1], c[:-1])).T
        y = h[1:]
        high, err = predict_encv(x, y)
        print_high(high, err, h)
        y = l[1:]
        low, err = predict_encv(x, y)
        print_low(low, err, l)
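The code above uses the pre-0.18 scikit-learn KFold signature. In case you are on a recent scikit-learn, here is a self-contained Python 3 sketch of the same pipeline with the modern sklearn.model_selection API, run on synthetic random-walk prices rather than real AAPL data (the synthetic series and all names are my own):

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold

# Synthetic random-walk prices stand in for real quotes.
rng = np.random.default_rng(42)
c = 500 + np.cumsum(rng.normal(0, 2, 60))   # close
h = c + np.abs(rng.normal(1, 0.5, 60))      # high, above close
l = c - np.abs(rng.normal(1, 0.5, 60))      # low, below close

def predict_encv(x, y, n_folds=10):
    encv = ElasticNetCV(fit_intercept=True)
    kf = KFold(n_splits=n_folds)
    errs = []
    for train, test in kf.split(x):
        encv.fit(x[train], y[train])
        pred = encv.predict(x[test])
        errs.append(np.abs(pred - y[test]))
    # prediction for the most recent day, mean absolute error over folds
    return pred[-1], np.concatenate(errs).mean()

# Lag-1 design matrix: day-t (high, low, close) predicts day-(t+1) levels.
x = np.vstack((h[:-1], l[:-1], c[:-1])).T
high, err_h = predict_encv(x, h[1:])
low, err_l = predict_encv(x, l[1:])
```

The same `x` matrix serves both regressions; only the target vector changes between the resistance (high) and support (low) estimates.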
I get the following output for AAPL:
High 530.11 Yest 532.75 change % -0.5
High + 1 err 535.185777118
Low 517.35 Yest 522.12 change % -0.92
Low - 1 err 511.622130581
I forgot the disclaimer yesterday. So here goes:
DISCLAIMER: Nothing in this blog should be viewed as a recommendation or advice. Even if it says so, because then it's probably just a joke. Use at your own risk. Any errors or mentions of persons/organizations/entities are purely coincidental. No, that is not the right word, but you know what I mean.