Logan Park

Machine Learning Foundations — Classification, Clustering, and CNN Analysis

Python · PyTorch · TensorFlow · Keras · ML · CNN · Classification · Clustering

A collection of applied machine learning projects spanning supervised learning, unsupervised learning, and neural network analysis — built across multiple Master's courses at ASU.

This work covers the core ML workflow: preprocessing, training, evaluation, and iteration across different algorithms and problem types.

Supervised Learning

Implemented a Naive Bayes classifier for handwritten digit recognition on the MNIST dataset, framed as binary classification of the digits 0 and 1. Separately, trained CNNs for image classification tasks, including handwritten digits and clothing images, using both custom-trained and pre-trained models.
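The Naive Bayes approach for binarized digit images can be sketched as a Bernoulli model: estimate, for each class, the probability that each pixel is on, then classify by comparing log-posteriors. This is a minimal NumPy illustration on synthetic binary data (the pixel patterns and sizes here are made up for the example, not the actual MNIST setup):

```python
import numpy as np

def train_bernoulli_nb(X, y, alpha=1.0):
    """Estimate per-class pixel-on probabilities with Laplace smoothing.

    X: (n_samples, n_pixels) binary array; y: labels in {0, 1}.
    """
    params, priors = {}, {}
    for c in (0, 1):
        Xc = X[y == c]
        # P(pixel = 1 | class = c), smoothed so no probability hits 0 or 1
        params[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        priors[c] = len(Xc) / len(X)
    return params, priors

def predict_bernoulli_nb(X, params, priors):
    """Pick the class with the higher log-posterior for each sample."""
    scores = []
    for c in (0, 1):
        p = params[c]
        # log P(x | c) summed over pixels (independence assumption) + log P(c)
        log_lik = X @ np.log(p) + (1 - X) @ np.log(1 - p)
        scores.append(log_lik + np.log(priors[c]))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Tiny synthetic stand-in for binarized 0-vs-1 images (4 pixels each)
rng = np.random.default_rng(0)
X0 = (rng.random((50, 4)) < [0.9, 0.8, 0.1, 0.2]).astype(int)  # "digit 0" pattern
X1 = (rng.random((50, 4)) < [0.1, 0.2, 0.9, 0.8]).astype(int)  # "digit 1" pattern
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

params, priors = train_bernoulli_nb(X, y)
accuracy = (predict_bernoulli_nb(X, params, priors) == y).mean()
```

The independence assumption across pixels is what makes the model "naive", and it is also why training reduces to simple per-pixel counting.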

Unsupervised Learning

Applied K-Means clustering to handwritten digit data, working through the mechanics of centroid initialization, convergence behavior, and cluster evaluation.
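The mechanics mentioned above, initialization, the assign/update loop, and the convergence check, fit in a few lines of NumPy. This sketch uses two synthetic 2-D blobs as a stand-in for digit feature vectors (the data and `seed` values are illustrative, not from the coursework):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain K-Means: random centroid init, then alternate assign/update
    until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iters):
        # Assignment step: nearest centroid by squared Euclidean distance
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its assigned points
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: centroids (and hence assignments) are stable
        centroids = new_centroids
    return centroids, labels

# Two well-separated blobs standing in for digit feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, size=(40, 2)),
               rng.normal(5.0, 0.5, size=(40, 2))])
centroids, labels = kmeans(X, k=2)
```

Because K-Means only converges to a local optimum, the choice of initial centroids matters; re-running with different seeds and keeping the lowest-inertia result is a common mitigation.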

CNN Hyperparameter Analysis

The most detailed experiment was a systematic analysis of how kernel size and feature map count affect CNN performance on MNIST. Starting from a baseline model (3x3 kernels, 6 and 16 feature maps) that achieved 98.3% test accuracy, I modified the architecture — increasing the kernel size to 5x5 and adjusting feature map counts — and measured the impact on accuracy and loss. Each configuration was run three times and the results averaged to smooth out run-to-run variance. The 5x5 kernel variant dropped to 97.8%, showing that larger receptive fields don't always improve performance on small-scale images.
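The experiments themselves ran in PyTorch, but the capacity side of the trade-off can be illustrated with plain arithmetic: a conv layer has one k x k filter per (input, output) channel pair plus one bias per output map. Applying that to a LeNet-style two-conv stack matching the baseline shapes above (only the conv layers are counted here; any fully connected layers are omitted):

```python
def conv_params(in_ch, out_ch, k):
    """Parameter count of one conv layer: a k x k filter per (in, out)
    channel pair, plus one bias per output feature map."""
    return out_ch * (in_ch * k * k + 1)

def two_layer_conv_params(k, maps1, maps2, in_ch=1):
    """Total conv weights in a two-conv stack (grayscale input)."""
    return conv_params(in_ch, maps1, k) + conv_params(maps1, maps2, k)

baseline = two_layer_conv_params(k=3, maps1=6, maps2=16)  # 3x3 kernels -> 940
variant = two_layer_conv_params(k=5, maps1=6, maps2=16)   # 5x5 kernels -> 2572
```

Moving from 3x3 to 5x5 kernels roughly 2.7x-es the conv parameter count while widening each unit's receptive field; on 28x28 images that extra capacity is not guaranteed to pay off, consistent with the accuracy drop observed.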

Tools: Python, PyTorch, TensorFlow, Keras, NumPy, pandas

The value of this work wasn't any single experiment — it was building fluency with the full modeling pipeline and developing intuition for when and why different approaches work.