FanDuel NBA Projected Optimal Lineup

Since the milestone, we expanded our training set by collecting data from the database on the NBA's official website.

For each split, a random subset of features is first selected. Then random weights for each of those features are chosen, and the best split for that particular set of weights is found. The procedure is repeated k times and the best resulting split is picked out of those k trials.
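
To make the split search concrete, here is a minimal sketch of that procedure (the feature count, the candidate thresholds, and the purity score are illustrative choices, not the exact ones used in our implementation):

```python
import numpy as np

def random_weighted_split(X, y, n_features=3, k_trials=10, rng=None):
    """Pick the best of k randomized oblique splits.

    Each trial draws a random subset of features and random weights,
    projects the data onto that weighted combination, and scores candidate
    thresholds by how well a majority vote on each side matches the labels.
    Assumes y holds small non-negative integer class labels (e.g. 0/1).
    """
    rng = np.random.default_rng(rng)
    best = None
    for _ in range(k_trials):
        feats = rng.choice(X.shape[1], size=n_features, replace=False)
        w = rng.normal(size=n_features)            # random weights for the chosen features
        proj = X[:, feats] @ w                     # 1-D projection of every example
        for thr in np.quantile(proj, [0.25, 0.5, 0.75]):
            left, right = y[proj <= thr], y[proj > thr]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (np.sum(left == np.bincount(left).argmax())
                     + np.sum(right == np.bincount(right).argmax())) / len(y)
            if best is None or score > best[0]:
                best = (score, feats, w, thr)
    return best                                     # (score, feature indices, weights, threshold)
```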

The MultiBoost algorithm combines two commonly used techniques, boosting and bagging. The motivation for MultiBoost comes from the observation that boosting decreases the bias of the error, while bagging decreases its variance. Hence, combining the two techniques may decrease both and improve the overall performance of the algorithm. Based on empirical evidence, researchers claim that AdaBoost gains the most benefit from its first several iterations. As a result, it may be beneficial to let AdaBoost run for a pre-specified number of iterations, reset the weights, and then run it again; this procedure is repeated until some termination criterion is reached. For this algorithm, we use VR-trees as our base classifier.
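
A compact sketch of that weight-reset schedule, using a shallow scikit-learn decision tree as a stand-in for the VR-tree base classifier (labels are assumed to be in {-1, +1}, and the plain reset-to-uniform step is only an approximation of the published MultiBoost re-weighting):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier   # stand-in for the VR-tree base learner

def multiboost(X, y, n_rounds=50, reset_every=10):
    """AdaBoost with periodic weight resets, for NumPy arrays X and y with y in {-1, +1}.

    Every `reset_every` iterations the example weights are reset to uniform --
    the bagging-like step layered on top of boosting.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)
    committee = []                                   # list of (alpha, classifier) pairs
    for t in range(n_rounds):
        if t > 0 and t % reset_every == 0:
            w = np.full(n, 1.0 / n)                  # reset: start a fresh boosting run
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = w[pred != y].sum() / w.sum()
        if err == 0 or err >= 0.5:                   # degenerate learner: reset weights and move on
            w = np.full(n, 1.0 / n)
            continue
        alpha = 0.5 * np.log((1 - err) / err)
        committee.append((alpha, clf))
        w = w * np.exp(-alpha * y * pred)            # up-weight the examples this learner missed
        w = w / w.sum()
    return committee

def multiboost_predict(committee, X):
    votes = sum(alpha * clf.predict(X) for alpha, clf in committee)
    return np.sign(votes)
```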

Despite its simplicity, kNN is probably one of the most effective machine learning algorithms. However, one major shortcoming of regular kNN is the use of plain Euclidean distance instead of a more advanced metric learning technique: Euclidean distance gives every feature equal weight, which is rarely appropriate in large and noisy datasets.

After building the forest, we take some specified fraction of the features from each tree and use only those features to calculate the Euclidean distance.

In other words, the features collected from each tree will correspond to one kNN classifier. If we have built N trees, we will also have N kNN classifiers.

Each classifier votes, and the final prediction is the average of the classifiers' votes.
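
A rough sketch of this ensemble, using scikit-learn's random forest and kNN as stand-ins for our own implementations (the fraction of features kept per tree and the use of per-tree feature importances are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def forest_feature_knn(X_train, y_train, X_test, n_trees=25, frac=0.3, k=5):
    """Ensemble of kNN classifiers, one per tree in a random forest.

    Each kNN measures Euclidean distance only along the fraction of features
    its tree ranked highest, and the final prediction averages the votes
    (binary 0/1 labels assumed, so rounding the mean is a majority vote).
    """
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X_train, y_train)
    n_keep = max(1, int(frac * X_train.shape[1]))
    votes = []
    for tree in forest.estimators_:
        feats = np.argsort(tree.feature_importances_)[-n_keep:]   # top features for this tree
        knn = KNeighborsClassifier(n_neighbors=k).fit(X_train[:, feats], y_train)
        votes.append(knn.predict(X_test[:, feats]))
    return np.round(np.mean(votes, axis=0))          # average the 0/1 votes, then threshold
```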

AdaBoost (adaptive boosting) is a method for generating a strong classifier from a pool of weak classifiers. This is done by iteratively training weak classifiers on a weighted distribution of training examples (currently misclassified examples are weighted higher) and then weighting their contributions so that the weighted sum provides an optimal prediction of the target value. AdaBoost has proven to be a very strong method for classification problems, but it is rarely used for regression. We attempted to apply two variations of AdaBoost to regression. The weak learner we use is derived by choosing, on each iteration, a random subset of feature dimensions and performing least-squares regression along the selected dimensions only, using iterative batch gradient descent. We experimented with subset sizes of 5, 8 and 10 feature dimensions.
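
A sketch of that weak learner in isolation (the function names, learning rate, and iteration count are illustrative, and the features are assumed to be standardized):

```python
import numpy as np

def random_subspace_ls(X, y, n_dims=8, n_iters=200, lr=0.01, sample_weight=None, rng=None):
    """Weak learner: least-squares fit along a random subset of feature dimensions,
    trained with batch gradient descent on the (optionally weighted) squared error."""
    rng = np.random.default_rng(rng)
    dims = rng.choice(X.shape[1], size=n_dims, replace=False)
    Xs = np.hstack([X[:, dims], np.ones((len(X), 1))])       # add a bias column
    w = np.zeros(Xs.shape[1])
    sw = np.ones(len(y)) if sample_weight is None else sample_weight
    for _ in range(n_iters):
        grad = Xs.T @ (sw * (Xs @ w - y)) / sw.sum()          # gradient of weighted squared error
        w -= lr * grad
    return dims, w                                            # the fitted "model" is this pair

def subspace_predict(model, X):
    dims, w = model
    Xs = np.hstack([X[:, dims], np.ones((len(X), 1))])
    return Xs @ w
```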

AdaBoostRT iteratively recruits weak learners based on a distribution of training examples reflecting the strengths of the current committee: examples that are hard for the current committee to predict are weighted more heavily. We only accept weak learners that meet our standard by predicting with an error rate below a specified threshold. After a learner is accepted, its weight and the training distribution are updated.
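
The recruitment loop can be sketched as follows, plugging in a weak learner such as the random-subspace one above (the thresholds, the power applied to the error rate, and the helper names are illustrative assumptions, not the exact values from our runs):

```python
import numpy as np

def adaboost_rt(X, y, weak_fit, weak_predict, phi=0.1, accept_below=0.5,
                max_learners=30, max_rejects=10, power=2):
    """AdaBoostRT-style recruitment loop.

    `weak_fit(X, y, sample_weight)` and `weak_predict(model, X)` are whatever weak
    learner you plug in. An example counts as "missed" when its absolute relative
    error exceeds `phi`; a candidate is rejected when its weighted miss rate is not
    below `accept_below` (or is exactly zero, which would give it infinite weight).
    """
    n = len(y)
    D = np.full(n, 1.0 / n)
    committee, rejects = [], 0
    while len(committee) < max_learners and rejects < max_rejects:
        model = weak_fit(X, y, D)
        pred = weak_predict(model, X)
        miss = np.abs(pred - y) / np.maximum(np.abs(y), 1e-12) > phi
        eps = D[miss].sum()
        if eps >= accept_below or eps == 0:
            rejects += 1
            continue
        rejects = 0
        beta = eps ** power
        committee.append((np.log(1.0 / beta), model))
        D = np.where(miss, D, D * beta)              # down-weight the examples it got right
        D /= D.sum()
    return committee

def committee_predict(committee, X, weak_predict):
    weights = np.array([w for w, _ in committee])
    preds = np.array([weak_predict(m, X) for _, m in committee])
    return weights @ preds / weights.sum()           # weighted average of the weak predictions

# Example wiring with the random-subspace learner sketched earlier:
#   fit = lambda X, y, D: random_subspace_ls(X, y, sample_weight=D)
#   committee = adaboost_rt(X, y, fit, subspace_predict)
```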

We recruited weak learners into our AdaBoost model until either a set number of weak learners had been recruited or a set number of weak learners in a row had been rejected. We found that the weak learners themselves were strong, perhaps too strong to benefit much from boosting. We found our highest accuracy with the following runs. Basically, this shows that if we are less strict with weak learner recruitment (higher phi values), we can recruit more learners into the system, but we have to balance phi against the complexity of the weak learners (the number of feature dimensions) to find local optima.

While AdaBoostRT resulted in greater prediction success than AdaBoostR, it performs poorly when presented with data whose target values lie in a small range that includes 0, such as our difference-bet problem. Because its error criterion is relative to the magnitude of the target, targets near zero make almost any prediction count as a miss. When we attempted to train AdaBoostRT to predict difference bets, its predictions all approached 0, exhibiting this major limitation of its learning objective. We found that the Random Forest performs best on difference bets, while the Modified kNN performs best on combination bets.

Using these machine learning techniques, the return on investment from NBA betting could be 5 times the profit acquired by investing in top hedge funds! Improvements can be made to our work thus far by exploring different types and combinations of input features and different training set sizes.

Additionally, using a simpler weak learner for AdaBoostRT could produce improved results.

The majority classifier takes non-anomalous data and incorporates it within its calculations. This ensures that the results produced by the predictive modelling system are as valid as possible.

Ordinary least squares is a method that minimizes the sum of squared distances between observed and predicted values. The generalized linear model (GLM) is a flexible family of models that are unified under a single method.
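
For concreteness, here is ordinary least squares in a few lines of NumPy (the data is made up purely for illustration):

```python
import numpy as np

# Made-up data: one predictor, one response.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# OLS chooses the coefficients that minimize sum((y - X @ beta) ** 2).
X = np.column_stack([np.ones_like(x), x])            # intercept column plus the predictor
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope:", beta)                      # roughly 0.14 and 1.96 for this toy data
```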

Logistic regression is a notable special case of GLM. Other types of GLM include Poisson regression, gamma regression, and multinomial regression. Logistic regression differs from ordinary least squares (OLS) regression in that the dependent variable is binary in nature. This procedure has many applications. In biostatistics, the researcher may be interested in trying to model the probability of a patient being diagnosed with a certain type of cancer based on knowing, say, the incidence of that cancer in his or her family.

In business, the marketer may be interested in modelling the probability of an individual purchasing a product based on the price of that product.

Both of these are examples of a simple, binary logistic regression model. The model is "simple" in that each has only one independent, or predictor, variable, and it is "binary" in that the dependent variable can take on only one of two values (for example, purchased or did not purchase).
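
A minimal sketch of such a model, fit with scikit-learn on made-up price/purchase data (the numbers are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: product price (the single predictor) and whether the
# individual purchased (1) or did not purchase (0).
price = np.array([[5], [8], [10], [12], [15], [18], [20], [25]], dtype=float)
purchased = np.array([1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(price, purchased)
print(model.predict_proba([[11.0]])[:, 1])   # estimated purchase probability at a price of 11
```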

A generalized additive model is a smoothing method for multiple predictors that allows for non-parametric predictions. Robust regression includes a number of modelling approaches for handling high-leverage observations or violations of assumptions. Models can be parametric or non-parametric; semiparametric regression includes the proportional odds model and the Cox proportional hazards model, where the response is a rank.

Predictive models can either be used directly to estimate a response (output) given a defined set of characteristics (input), or indirectly to drive the choice of decision rules. Depending on the methodology employed for the prediction, it is often possible to derive a formula that may be used in spreadsheet software.

This has some advantages for end users or decision makers, the main one being familiarity with the software itself, hence a lower barrier to adoption.

Nomograms are a useful graphical representation of a predictive model. As with spreadsheet software, their use depends on the methodology chosen, but their advantage is the immediacy of computing predictions without the aid of a computer. Point estimate tables are one of the simplest forms of predictive tool: a combination of characteristics of interest is represented via a table or a graph, and the associated prediction is read off the y-axis or from the table itself.

CART and survival trees provide one of the most graphically intuitive ways to present predictions. However, their use is limited to methods built on this type of modelling approach, which can have several drawbacks. Score charts are tabular or graphical tools to represent either predictions or decision rules. A newer class of tools is represented by web-based applications.

With a Shiny app, a modeller can represent the predictive model in whatever way he or she chooses while allowing the user some control. A user can choose a combination of characteristics of interest via sliders or input boxes, and results can be generated, from graphs to confidence intervals to tables and various statistics of interest. However, these tools often require a server installation of RStudio.

Uplift modelling is a technique for modelling the change in probability caused by an action. Typically this is a marketing action such as an offer to buy a product, to use a product more or to re-sign a contract. For example, in a retention campaign you wish to predict the change in probability that a customer will remain a customer if they are contacted. A model of the change in probability allows the retention campaign to be targeted at those customers on whom the change in probability will be beneficial.

This allows the retention programme to avoid triggering unnecessary churn or customer attrition while not wasting money contacting people who would act anyway.
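
One common way to estimate that change in probability is a "two-model" approach: fit one response model on contacted customers and another on uncontacted customers, then take the difference of their predicted probabilities. A minimal sketch of the idea (not a prescription for any particular campaign; both responders and non-responders are assumed to appear in each group):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_model_uplift(X, treated, responded, X_new):
    """Uplift estimate = P(respond | contacted) - P(respond | not contacted)."""
    m_treat = LogisticRegression().fit(X[treated == 1], responded[treated == 1])
    m_ctrl = LogisticRegression().fit(X[treated == 0], responded[treated == 0])
    return m_treat.predict_proba(X_new)[:, 1] - m_ctrl.predict_proba(X_new)[:, 1]
```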

In archaeology, the development of quantitative methods and a greater availability of applicable data led to the growth of predictive modelling as a discipline, and substantial progress has since been made by major land managers worldwide. Generally, predictive modelling in archaeology involves establishing statistically valid causal or covariable relationships between the known locations of archaeological sites and natural proxies such as soil types, elevation, slope, vegetation, proximity to water, geology, geomorphology, and so on. Through analysis of these quantifiable attributes from land that has undergone archaeological survey, the "archaeological sensitivity" of unsurveyed areas can sometimes be anticipated based on the natural proxies in those areas. By using predictive modelling in their cultural resource management plans, land managers can make more informed decisions when planning activities that have the potential to require ground disturbance and subsequently affect archaeological sites.

Predictive modelling is used extensively in analytical customer relationship management and data mining to produce customer-level models that describe the likelihood that a customer will take a particular action.

The actions are usually sales, marketing and customer retention related. For example, a large consumer organisation such as a mobile telecommunications operator will have a set of predictive models for product cross-sell, product deep-sell (or upselling) and churn. It is also now more common for such an organisation to have a model of savability using an uplift model.

This predicts the likelihood that a customer can be saved at the end of a contract period (the change in churn probability), as opposed to the standard churn prediction model.

Predictive modelling is utilised in vehicle insurance to assign risk of incidents to policy holders from information obtained from policy holders. This is extensively employed in usage-based insurance solutions, where predictive models utilise telemetry-based data to build a model of predictive risk for claim likelihood.

Initially the hospital focused on patients with congestive heart failure, but the program has expanded to include patients with diabetes, acute myocardial infarction, and pneumonia.