Tagged: Deep Learning

Projects

  • Brute Forcing Keypoints: BoVW vs CNN — This project investigates whether modern GPU acceleration (via cuML/RAPIDS on H100 architecture) can rehabilitate classical Bag-of-Visual-Words (BoVW) methods for image classification. By scaling codebook construction to 50 million keypoints on CIFAR-10, we benchmark GPU-accelerated BoVW pipelines against modernised CNN architectures (a LeNet-5 variant and VGG-16-BN). Our best BoVW configuration achieves 65.46% test accuracy, matching the shallow CNN but falling substantially short of VGG-16-BN at 83.90%. The results confirm that while modern hardware enables previously impractical scaling for classical methods, fundamental limitations of BoVW — particularly vector quantisation error and the absence of a spatial hierarchy — remain decisive when compared with deeper architectures.
  • PoSTACRED: Attention-GCN Relation Extraction — This project investigates injecting syntactic features into BERT entity markers and applying attention mechanisms over a graph convolutional network (GCN) for sentence-level relation extraction on TACRED.
  • Deep Learning, Sequencing Technologies & Polygenic Scores: Alzheimer’s Disease Risk Prediction and Classification Review — This work reviews traditional genome-wide association studies (GWAS) and weighted polygenic risk scores (PRS) as methods for predicting the onset of Alzheimer’s Disease (AD), then examines machine learning (ML) and deep learning (DL) approaches. The reviewed studies employ random forests, support vector machines, and various neural network architectures. We identify challenges that recur throughout the survey, including dataset diversity, model explainability, and regulatory compliance. The work concludes by cautiously proposing a multi-phase framework for clinical adoption of selected ML and DL methods into existing NHS genomic testing pipelines over a seven-year timeline, emphasising quality control, SHAP-based interpretability, and robust validation before any scaled deployment.
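
The BoVW pipeline in the first project above has two core steps: building a codebook by clustering keypoint descriptors, then vector-quantising each image's descriptors into a histogram of visual words. A minimal sketch of those two steps follows; the random toy descriptors, cluster count `k`, and use of scikit-learn's `KMeans` are illustrative assumptions, not the project's actual cuML/RAPIDS-on-H100 implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for SIFT-style keypoint descriptors: each image yields a
# variable number of 128-dimensional descriptors (values are random here).
rng = np.random.default_rng(0)
images = [rng.normal(size=(rng.integers(20, 40), 128)) for _ in range(10)]

# Step 1 — codebook construction: cluster all keypoints from the training
# set into k "visual words" (k=16 is an arbitrary illustrative choice).
k = 16
all_descriptors = np.vstack(images)
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_descriptors)

# Step 2 — vector quantisation: assign each descriptor to its nearest
# visual word and build a normalised word-count histogram per image.
# This hard assignment is the source of the quantisation error noted above.
def bovw_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

features = np.array([bovw_histogram(d) for d in images])
print(features.shape)  # one k-dimensional histogram per image
```

The resulting fixed-length histograms can then be fed to any standard classifier (e.g. a linear SVM); note that the histogram discards keypoint locations entirely, which is the "absence of spatial hierarchy" limitation the project highlights.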
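
The weighted polygenic risk score reviewed in the Alzheimer's project reduces, in its standard form, to a dosage-weighted sum of per-variant effect sizes: PRS_j = Σ_i β_i · g_ij, where β_i is the GWAS effect size for variant i and g_ij is individual j's risk-allele count. The sketch below illustrates this with made-up effect sizes and genotypes; it is not drawn from any dataset in the review.

```python
import numpy as np

# Hypothetical GWAS effect sizes (e.g. log odds ratios) for four variants.
betas = np.array([0.12, -0.05, 0.30, 0.08])

# Risk-allele dosages (0, 1, or 2 copies) for two individuals at those
# four variants — illustrative values only.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
])

# Weighted PRS: matrix-vector product gives one score per individual.
prs = genotypes @ betas
print(prs)
```

In practice the reviewed pipelines add steps this sketch omits, such as linkage-disequilibrium pruning of variants and p-value thresholding before the weighted sum.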
