Overview
Validating ceramic part geometry across production stages — spec, green-formed, and fired — previously required manual measurement of X-ray and optical images. This project automated that process end-to-end using deep learning, then surfaced the results in a researcher-facing web application.
What I Built
Dataset via Image Augmentation
Real labeled data for ceramic X-ray images was scarce. I applied image augmentation techniques — rotations, flips, elastic deformations, and synthetic noise — to expand a small set of annotated scans into a viable training dataset, making supervised segmentation feasible without additional labeling effort.
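The augmentations listed above can be sketched roughly as follows. This is a minimal illustration using NumPy and SciPy, not the project's actual pipeline; the parameter values (rotation range, displacement strength, noise level) are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter, rotate

def augment(image, rng):
    """Produce one augmented variant of a grayscale scan (H, W) in [0, 1]."""
    # Random rotation about the image centre (angle range is an assumption).
    img = rotate(image, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    # Random horizontal / vertical flips.
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    # Elastic deformation: warp by a smoothed random displacement field.
    alpha, sigma = 30.0, 4.0
    dx = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    y, x = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]), indexing="ij")
    img = map_coordinates(img, [y + dy, x + dx], order=1, mode="nearest")
    # Additive Gaussian noise, clipped back to the valid intensity range.
    return np.clip(img + rng.normal(0, 0.02, img.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
scan = np.zeros((64, 64)); scan[16:48, 16:48] = 1.0   # toy stand-in for an X-ray slice
variants = [augment(scan, rng) for _ in range(8)]      # one scan -> eight training samples
```

For segmentation training, the same geometric transform (rotation, flips, elastic warp — but not the noise) must also be applied to the annotation mask so image and label stay aligned.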
Semantic Segmentation Model
Trained deep learning models in PyTorch to perform pixel-level segmentation of embedded part features from X-ray computed tomography (XCT) images. The models identify internal structural features and part boundaries, enabling geometric comparisons across production stages (spec → green → fired).
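The shape of such a model can be sketched as a small encoder-decoder that maps an image to per-pixel class logits. This is an illustrative toy, not the architecture actually used; the channel widths and class count are assumptions.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for pixel-level segmentation of XCT slices.
    Illustrative only; the production architecture is not specified here."""
    def __init__(self, in_ch=1, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                         # downsample to H/2 x W/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),             # per-pixel class logits
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinySegNet()
x = torch.randn(2, 1, 64, 64)        # batch of single-channel XCT slices
logits = model(x)                    # (2, n_classes, 64, 64)
mask = logits.argmax(dim=1)          # per-pixel class labels, (2, 64, 64)
```

The argmax over the class dimension yields the per-pixel label map from which boundaries and internal features can be extracted for measurement.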
Pixel-Level Measurement App
Built a web application that runs the segmentation model on uploaded images and performs pixel-level geometric measurements directly in the browser. Researchers can compare a fired part against its green-formed state or CAD specification without leaving the app. This reduced data collection time by 95% compared to the previous manual measurement workflow.
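The measurement step can be illustrated by extracting physical dimensions from a predicted mask and comparing two production stages. Everything here is hypothetical: the helper, the masks, and the calibration factor are invented for the sketch, not taken from the app.

```python
import numpy as np

def part_extent_mm(mask, mm_per_px):
    """Bounding-box extent (height, width) in mm of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return (0.0, 0.0)
    return ((ys.max() - ys.min() + 1) * mm_per_px,
            (xs.max() - xs.min() + 1) * mm_per_px)

# Toy masks standing in for model output at two stages.
green = np.zeros((100, 100), bool); green[10:90, 20:80] = True
fired = np.zeros((100, 100), bool); fired[14:86, 23:77] = True   # part shrinks on firing
mm_per_px = 0.05                                                 # assumed pixel calibration

g_h, g_w = part_extent_mm(green, mm_per_px)   # 4.0 mm x 3.0 mm
f_h, f_w = part_extent_mm(fired, mm_per_px)   # 3.6 mm x 2.7 mm
shrink_pct = 100 * (1 - f_h / g_h)            # linear shrinkage along the height axis
```

A real comparison against a CAD spec would register the mask to the drawing's coordinate frame first, but the per-pixel arithmetic is the same.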