From first lines of code to multi-agent reinforcement learning research. Every project taught me something new.
2022–2024: First steps into ML & Computer Science
"Every expert was once a beginner. This is where my story started: with curiosity, classic datasets, and a lot of Stack Overflow."
My first steps into data science: classic ML projects including the Titanic dataset, exploratory data analysis, and the fundamentals of Python for ML.
Content from introductory sessions at IndabaX, Africa's premier machine learning conference. My first exposure to the broader ML community.
A collection of web development practice projects: a blog website, a newsletter signup, and a to-do list application.
Neuromatch Academy project exploring feature visualization in neural networks. Understanding what CNNs "see" by visualizing learned features.
Solutions to Codeforces problems and algorithmic challenges. Building problem-solving skills through competitive programming.
Streamlit dashboard for Sudanese farmers with LAI, CAB, and FCOVER vegetation indices from Sentinel Hub satellite data.
Supermarket inventory system in Java: add, search, buy, and sell items, with stock tracking and reports.
SHA-512 hashing, AES-128 encryption, and the Vigenère cipher, implemented from scratch in Python. Includes an interactive demo!
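The Vigenère piece is the simplest of the three: each letter is shifted by the corresponding key letter, modulo 26. A minimal sketch of that idea (illustrative, not the repo's exact code):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    out, k = [], 0
    for ch in text.upper():
        if not ch.isalpha():
            out.append(ch)           # pass non-letters through unchanged
            continue
        shift = ord(key[k % len(key)].upper()) - ord("A")
        if decrypt:
            shift = -shift
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        k += 1                       # advance the key only on letters
    return "".join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))        # LXFOPVEFRNHR
print(vigenere("LXFOPVEFRNHR", "LEMON", True))  # ATTACKATDAWN
```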
Base64 encoder/decoder in x86-64 Assembly (NASM). Bit manipulation at the register level.
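The register-level trick is packing 3 bytes into a 24-bit word and slicing it into four 6-bit alphabet indices. Here is the same bit manipulation sketched in Python for readability (the project itself does this with NASM shifts and masks):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64_encode(data: bytes) -> str:
    out = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        word = int.from_bytes(chunk.ljust(3, b"\x00"), "big")    # 24-bit word
        for j in range(4):
            out.append(ALPHABET[(word >> (18 - 6 * j)) & 0x3F])  # 6-bit slices
        pad = 3 - len(chunk)
        if pad:                      # '=' padding for a short final chunk
            out[-pad:] = "=" * pad
    return "".join(out)

assert b64_encode(b"Man") == "TWFu"
assert b64_encode(b"Ma") == "TWE="
```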
3D cubes, mathematical curves (limaçon, cardioid, spirals), and OpenGL rendering from scratch in C++.
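The curves come from simple polar equations: the limaçon is r = b + a·cos θ, with the cardioid as the special case a = b. A quick NumPy sketch of the point generation (parameters are illustrative; the project does this in C++):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 500)
a, b = 1.0, 0.5                      # a == b would give a cardioid
r = b + a * np.cos(t)                # limacon in polar form
x, y = r * np.cos(t), r * np.sin(t)  # convert to Cartesian for rendering
```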
Complete web app with user auth, field drawing on maps, NDVI analysis, and per-user storage. Deployed on Hugging Face.
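NDVI itself is just a normalized band ratio, (NIR − RED) / (NIR + RED), in [−1, 1]. A sketch, assuming float band arrays (names are placeholders):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir, red = nir.astype(float), red.astype(float)
    denom = nir + red
    denom[denom == 0] = np.nan       # avoid division by zero on empty pixels
    return (nir - red) / denom       # higher values = denser vegetation
```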
Late 2024: AIMS South Africa Coursework
Intensive coursework in ML, CV, RL, and Bayesian methods
Active Learning experiments using Bayesian neural networks (BNNs) with MC Dropout. Comparing Random, Margin, and BALD acquisition strategies on MNIST and Dirty-MNIST.
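BALD scores a pool point by the mutual information between its predicted label and the model posterior, approximated here with MC Dropout samples. A minimal PyTorch sketch (assumes a classifier with dropout layers; names are illustrative):

```python
import torch

def bald_scores(model, x, n_samples: int = 20) -> torch.Tensor:
    model.train()                    # keep dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])  # (S, batch, classes)
    mean = probs.mean(0)
    entropy_of_mean = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return entropy_of_mean - mean_entropy   # high = most informative to label
```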
Deep Ensembles vs MC Dropout vs Deterministic models. Classification with rejection, OoD detection, and density-based uncertainty.
Query strategies (QBC, Entropy, Margin, Random) on the Forest Covertype Dataset with confidence intervals and learning curves.
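Margin and entropy sampling both reduce to a one-liner over the pool's predicted probabilities. Sketch (assuming `probs` is an (n_pool, n_classes) array):

```python
import numpy as np

def margin(probs: np.ndarray) -> np.ndarray:
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]   # small gap between top-2 = uncertain

def entropy(probs: np.ndarray) -> np.ndarray:
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Query the lowest-margin (or highest-entropy) points first:
# query_idx = np.argsort(margin(probs))[:batch_size]
```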
From SimpleCNN to BN_AsimNet: exploring model complexity, BatchNorm, Dropout, and data augmentation. Best: 77% accuracy.
VGG16 vs MobileNet comparison + saliency maps to visualize what the model "sees". Best: MobileNetV3 at 84.37%.
Fully convolutional autoencoder for removing Gaussian noise from LFWcrop face images. MSE: 0.0015.
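The shape of such a denoiser, as a minimal PyTorch sketch (layer sizes are illustrative, not the repo's exact architecture); training pairs are (clean + Gaussian noise) inputs against clean targets under MSE loss:

```python
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 64x64 -> 16x16 feature maps
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # mirror back to 64x64
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```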
ResNet50 + triplet loss on VeRi dataset. Semi-hard mining achieves 2x better mAP (36%) than random mining (17%).
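Semi-hard mining keeps negatives that are farther than the positive but still inside the margin, so gradients stay informative without collapsing. An illustrative (unvectorized) sketch of the selection rule:

```python
import torch
import torch.nn.functional as F

def semi_hard_triplet_loss(emb, labels, margin: float = 0.3):
    dist = torch.cdist(emb, emb)                 # pairwise L2 distances
    idx = torch.arange(len(emb), device=emb.device)
    losses = []
    for a in range(len(emb)):
        pos = (labels == labels[a]) & (idx != a)
        neg = labels != labels[a]
        for p in pos.nonzero().flatten():
            d_ap = dist[a, p]
            # semi-hard: d_ap < d_an < d_ap + margin
            mask = neg & (dist[a] > d_ap) & (dist[a] < d_ap + margin)
            if mask.any():
                losses.append(F.relu(d_ap - dist[a][mask].min() + margin))
    return torch.stack(losses).mean() if losses else emb.new_zeros(())
```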
Deep Q-Network with replay buffer, target network, and epsilon-greedy exploration. Solves CartPole with max reward (500).
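The heart of DQN is one TD update against a frozen target network. Sketch of that step (argument names are illustrative):

```python
import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma: float = 0.99):
    s, a, r, s2, done = batch                    # sampled from the replay buffer
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a)
    with torch.no_grad():                        # frozen target: no gradients
        target = r + gamma * target_net(s2).max(1).values * (1 - done)
    loss = F.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```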
Extending DQN to LunarLander's 8-dimensional state space with 4 discrete actions. From -380 to 250+ return in 700 episodes.
Policy gradient method learning directly from episodic returns. Stochastic policy vs DQN's deterministic approach.
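The REINFORCE-style update weights each action's log-probability by the discounted return that followed it. A minimal sketch (return normalization is the usual variance-reduction trick):

```python
import torch

def reinforce_loss(log_probs, rewards, gamma: float = 0.99):
    G, returns = 0.0, []
    for r in reversed(rewards):                  # returns-to-go, built backwards
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(returns[::-1])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()  # ascend expected return
```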
Policy gradient on the more challenging LunarLander: 5000+ episodes needed, compared with DQN's faster convergence.
Step-by-step implementation with backpropagation, gradient descent, and decision boundary visualization.
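The whole pipeline fits in a few NumPy lines; here is the gist on XOR (hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backprop: MSE + sigmoid
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)   # gradient descent
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)
```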
Decision stumps, the Gini index, and Random Forest overfitting. Constrained models (max_leaf=32) restore generalization.
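The Gini index behind those stumps is a two-liner: impurity is 1 − Σ p_k², zero for a pure node. Sketch:

```python
import numpy as np

def gini(labels: np.ndarray) -> float:
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()      # 0 = pure node, higher = more mixed

# A stump picks the split threshold minimizing the children's weighted Gini.
```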
Physics ML: Logistic regression from scratch to classify CERN calorimeter data (electrons vs hadrons).
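The from-scratch part is just the sigmoid plus the cross-entropy gradient. Sketch (the feature matrix X and binary labels y stand in for the calorimeter data):

```python
import numpy as np

def fit_logreg(X, y, lr: float = 0.1, steps: int = 2000):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
        w -= lr * X.T @ (p - y) / len(y)         # mean cross-entropy gradient
        b -= lr * (p - y).mean()
    return w, b
```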
2025: AIMS, InstaDeep, and cutting-edge MARL
"From student to researcher. AIMS South Africa, joining InstaDeep, and working on problems at the frontier of multi-agent RL."
Published paper: the largest AR-SW corpus (32K pairs), with fine-tuned Transformers achieving 30.9 BLEU. Tiny Papers @ ICLR 2023.
Collaborative filtering with Alternating Least Squares (ALS). User/movie latent factor embeddings + biases.
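Each ALS half-step fixes one factor matrix and solves a ridge regression for the other, so every update is a small linear solve. Simplified sketch without the bias terms (R is the user × item rating matrix; names are illustrative):

```python
import numpy as np

def als_step(R, U, V, reg: float = 0.1):
    k = U.shape[1]
    for u in range(R.shape[0]):                  # update user factors, V fixed
        rated = R[u] > 0
        A = V[rated].T @ V[rated] + reg * np.eye(k)
        U[u] = np.linalg.solve(A, V[rated].T @ R[u, rated])
    for i in range(R.shape[1]):                  # update item factors, U fixed
        rated = R[:, i] > 0
        A = U[rated].T @ U[rated] + reg * np.eye(k)
        V[i] = np.linalg.solve(A, U[rated].T @ R[rated, i])
    return U, V
```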
LLM agent that generates ML solutions from natural-language task descriptions. Beats 50% of Kaggle participants using tree-search optimization.
Scaling Inference-Time Compute for ML Engineering Agents: 30% medal rate with DeepSeek-32B, matching o4-mini.
Fine-grained credit assignment for RL training: token-level rewards instead of sequence-level ones, for lower variance and more stable training.
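The intuition in one toy snippet: a sequence-level reward multiplies every token's log-probability by the same scalar, while token-level credit gives each position its own signal (all tensors below are placeholder values, not the actual training setup):

```python
import torch

log_probs = torch.randn(2, 5)                    # (batch, tokens), placeholder
seq_reward = torch.tensor([1.0, -0.5])           # one scalar per sequence

# Sequence-level: every token inherits the same reward (high variance).
loss_seq = -(log_probs * seq_reward[:, None]).mean()

# Token-level: per-position rewards, e.g. from a learned credit model.
token_rewards = torch.randn(2, 5)                # placeholder per-token signal
loss_tok = -(log_probs * token_rewards).mean()
```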
2026: Current work and what's next
"The journey continues. Working on challenging problems, learning every day, building towards what's next."