Abstract

This document reports on an application of artificial intelligence to achieve demand-based scheduling within the context of a network-computing infrastructure. The described AI sub-system uses tool-specific, run-time input to predict the resource-usage characteristics of runs. Instance-based learning with locally weighted polynomial regression is employed because of the need to simultaneously learn multiple polynomial concepts and the fact that knowledge is acquired incrementally in this domain. An innovative combination of a two-level knowledge base, and age and usage statistics are used to: a) detect inadequate and noisy feature-vectors, b) account for short-term variations in compute-server and network performance, and c) exploit temporal and spatial locality of runs. Modifications to the basic learning algorithm allow the approach to be computationally feasible for extended use and noise tolerant by selectively adding feature-vectors into the knowledge base and discarding feature-vectors that consistently result in inaccurate predictions, respectively. The learning system was tested on three semiconductor simulation tools during normal use of the Purdue University Network Computing Hub during Fall 1997, and on four synthetic data-sets. Results indicate that the described instance-based learning technique using locally weighted regression with a locally linear model works well for this domain.
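As a minimal sketch of the prediction step named in the abstract (locally weighted regression with a locally linear model over stored feature-vectors), the following illustrates how a resource-usage estimate could be formed for a new run. The kernel choice, bandwidth, ridge term, and variable names are illustrative assumptions, not the paper's actual implementation or parameters.

```python
# Sketch of locally weighted linear regression over a knowledge base of
# feature-vectors; kernel, bandwidth, and ridge term are assumptions.
import numpy as np

def lwr_predict(query, X, y, bandwidth=1.0, ridge=1e-6):
    """Predict the target at `query` with a locally linear model.

    X : (n, d) array of stored feature-vectors (the knowledge base)
    y : (n,)   array of observed resource usage (e.g., CPU seconds)
    """
    # Gaussian kernel weights: stored runs near the query dominate the fit.
    dists = np.linalg.norm(X - query, axis=1)
    w = np.exp(-(dists ** 2) / (2.0 * bandwidth ** 2))

    # Augment with a bias column and solve the weighted least-squares system.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    qa = np.append(query, 1.0)
    W = np.diag(w)
    A = Xa.T @ W @ Xa + ridge * np.eye(Xa.shape[1])
    b = Xa.T @ W @ y
    beta = np.linalg.solve(A, b)
    return qa @ beta

# Hypothetical usage: predict run time for a new tool-specific input vector.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 2))                     # stored feature-vectors
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 50)
print(lwr_predict(np.array([4.0, 2.0]), X, y, bandwidth=2.0))
```

Because the model is refit locally at each query, new feature-vectors can be added to the knowledge base incrementally without retraining a global model, which matches the incremental-acquisition setting described above.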

Date of this Version

April 1998
