Using a Scalable Feature Selection Approach for Big Data Regressions

Qingdong Cheng, Purdue University

Abstract

Logistic regression is a widely used statistical method in data analysis and machine learning. When the volume of data is large, training models with the traditional approach is time-consuming and can even be infeasible, so an efficient way to evaluate feature combinations and update learning models is crucial. With the approach proposed by Yang, Wang, Xu, and Zhang (2018), the data can be summarized by matrices small enough to be held in memory; these working sufficient statistics matrices can then be used to update logistic regression models. This study applies the working sufficient statistics approach to logistic regression to examine how the new method improves performance, comparing it against the traditional approach in Spark's machine learning package. The experiments showed that the working sufficient statistics method improved the performance of training logistic regression models when the input size was large.
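As an illustration of the general idea only (not the exact algorithm of Yang, Wang, Xu, and Zhang (2018)), the sketch below accumulates the small p-by-p and p-by-1 summaries needed for a Newton/IRLS update of a logistic regression model block by block, so that only small matrices are ever held in memory. All function names and the block layout are hypothetical; in a distributed setting the blocks would correspond to partitions of a Spark dataset.

# Minimal NumPy sketch: one Newton/IRLS step for logistic regression built
# from per-block summaries ("working sufficient statistics"). Hypothetical
# illustration of the idea, not the authors' implementation.
import numpy as np

def newton_step_from_blocks(blocks, beta):
    """Accumulate X'WX and X'(y - mu) over data blocks, then solve for the update."""
    p = beta.shape[0]
    xtwx = np.zeros((p, p))   # p x p working summary
    xtr = np.zeros(p)         # p x 1 working summary
    for X, y in blocks:       # each block is a small (n_b, p) slice of the data
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))      # predicted probabilities
        w = mu * (1.0 - mu)                  # IRLS weights
        xtwx += (X * w[:, None]).T @ X       # accumulate X' W X
        xtr += X.T @ (y - mu)                # accumulate the score vector
    return beta + np.linalg.solve(xtwx, xtr) # Newton update from the summaries

# Hypothetical usage with simulated blocks of data.
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5])

def make_block(n):
    X = rng.normal(size=(n, 3))
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
    return X, y

blocks = [make_block(1000) for _ in range(10)]
beta = np.zeros(3)
for _ in range(8):                 # a few Newton iterations
    beta = newton_step_from_blocks(blocks, beta)
print(beta)                        # estimates should be close to beta_true

Because each block contributes only a p-by-p matrix and a length-p vector, the per-block summaries can be computed where the data live and combined cheaply, which is the property the abstract attributes to the working sufficient statistics approach.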

Degree

M.Sc.

Advisors

Yang, Purdue University.

Subject Area

Artificial intelligence | Computer science | Marketing
