Keeping unsuitable web content from young eyes is a challenge, given the wide-open environment of the internet. This research investigated selecting an artificial-intelligence platform to train and evaluate a classifier that decides whether a given image represents explicit material. Ten potential vendors were reviewed, and Google Cloud AutoML® was selected for training and verification testing. Unfortunately, it proved difficult to obtain a sufficiently large archive of approved images to complete the originally envisioned training and testing program. A modest image database was eventually secured, and the code was successfully tested on a small data set, although the results did not contain enough samples to establish the commercial-level reliability required for further testing.
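The sample-size limitation noted above can be made concrete with a quick statistical sketch. The figures below (45 correct out of 50 test images) are hypothetical and not taken from the study; the point is that a Wilson score interval over a small test set leaves wide uncertainty about a classifier's true accuracy, which is why a small data set cannot establish commercial-level reliability.

```python
import math

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score confidence interval for a classifier's true accuracy,
    given `correct` successes observed over `total` test samples."""
    if total == 0:
        return (0.0, 1.0)
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - margin, centre + margin)

# Hypothetical result: 45 of 50 test images classified correctly (90% observed).
low, high = wilson_interval(45, 50)
print(f"Observed accuracy 90.0%; 95% interval: {low:.1%} to {high:.1%}")
```

With only 50 samples, an observed 90% accuracy is statistically compatible with a true accuracy anywhere from roughly the high 70s to the mid 90s, far too loose a guarantee for a commercial content filter.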
"Artificial Intelligence in Cybersecurity,"
American Journal of Rising Scholar Activities: Vol. 1
Available at: https://docs.lib.purdue.edu/ajrsa/vol1/iss1/6