Binary classification with adiabatic quantum optimization
We study the problem of supervised binary classification from the perspective of deploying adiabatic quantum optimization in training. A vast body of prior academic work, consisting of both theoretical and numerical studies, has indicated that quantum technology promises to provide computational power that may be fundamentally superior to any classical computing method. Given the abundance of NP-hard optimization problems that naturally arise in learning, it is clear that machine learning can immensely benefit from such an optimization tool. We describe a series of increasingly complex designs that result in computationally hard training problems of a combinatorial nature. In return for accepting classical computational hardness, we retain theoretical properties such as maximal sparsity and robustness to label noise, which are otherwise sacrificed by convex methods for the sake of computational efficiency and sound theoretical footing. In order to be compatible with emerging quantum hardware technology, we formalize the training problem as quadratic unconstrained binary optimization (QUBO). Our initial investigations focus on a simple training formulation with non-convex regularization that conforms to the architecture of existing quantum hardware and makes frugal use of the limited number of available physical qubits. Next, we extend this baseline formulation to a scalable algorithm, QBoost, which is able to incrementally train large-scale classifiers on data sets of practical interest. Further, we derive another algorithm, TotalQBoost, as a theoretically motivated totally corrective boosting algorithm with cardinality penalization that also makes use of quantum optimization. Both QBoost and TotalQBoost perform explicit cardinality regularization, which is the only known way of achieving maximal sparsity in the trained classifiers.
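To make the QUBO training formulation concrete, the following sketch builds a small boosting-style selection problem in the spirit described above: binary weights over weak classifiers, a squared training error, and an explicit cardinality penalty. The exact objective and scaling are assumptions for illustration (a least-squares ensemble error plus a per-classifier penalty `lam`), not the dissertation's precise formulation; the brute-force solver stands in for the quantum annealer.

```python
import itertools
import numpy as np

def qboost_style_qubo(H, y, lam):
    """Build a QUBO matrix Q so that minimizing w^T Q w over w in {0,1}^N
    corresponds (up to a constant in y) to
        sum_s (sum_i w_i * H[s, i] / N - y_s)^2 + lam * ||w||_0.
    H: (S, N) array of weak-classifier outputs h_i(x_s) in {-1, +1}.
    y: (S,) array of labels in {-1, +1}.
    """
    S, N = H.shape
    Q = (H.T @ H) / N**2                 # pairwise couplings between classifiers
    linear = -2.0 * (H.T @ y) / N + lam  # label agreement + cardinality penalty
    Q[np.diag_indices(N)] += linear      # fold linear terms in, since w_i^2 = w_i
    return Q

def brute_force_minimize(Q):
    """Exhaustive stand-in for the quantum annealer (feasible only for small N)."""
    N = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=N),
               key=lambda w: np.array(w) @ Q @ np.array(w))
    return np.array(best)
```

For a toy instance where one weak classifier matches the labels exactly and a second does not, a small `lam` leads the minimizer to select only the accurate classifier, which is the sparsity-inducing behavior the cardinality penalty is meant to deliver.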
We apply QBoost and TotalQBoost to three different real-world computer vision problems and make use of a quantum processor for solving the sequence of discrete optimization problems generated by one of these applications. Finally, we study a learning formulation with convex regularization and a non-convex loss function, q-loss, specifically designed for robust supervised learning in the presence of label noise as it occurs in practice. For compatibility with quantum hardware, we derive the corresponding quadratic binary problem via variational approximation. For all proposed algorithms, we compare results on a variety of popular synthetic and natural data sets against a rich selection of existing rival learning formulations.
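The robustness property motivating q-loss can be illustrated with a bounded, non-convex loss of the kind the text describes: a squared hinge whose contribution is capped, so that a single mislabeled point cannot dominate the objective. The exact functional form below (a squared hinge clipped at (1 - q)^2, with `margin` the usual y * f(x)) is an assumed sketch for illustration, not necessarily the dissertation's precise definition.

```python
import numpy as np

def q_loss(margin, q):
    """Sketch of a bounded non-convex loss in the spirit of q-loss (assumed form):
    the squared hinge (max(0, 1 - m))^2, clipped at (1 - q)^2 so a grossly
    mislabeled point (large negative margin) incurs only a bounded penalty."""
    hinge_sq = np.maximum(0.0, 1.0 - margin) ** 2
    return np.minimum((1.0 - q) ** 2, hinge_sq)
```

Because the penalty saturates for margins below q, outliers with badly flipped labels stop pulling on the decision boundary; the price is non-convexity, which is what the variational approximation and the quadratic binary reformulation are meant to handle.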
Neven, Purdue University.