Researcher · Engineer · PhD-track
Saheed Faremi.
Researcher of the brain. Engineer of the systems people rely on.
01 — About
A researcher who ships.
Portrait
forthcoming
I work at the intersection of cognitive neuroscience and production engineering.
By day I build software people rely on — multi-asset fintech (Curnance), HR for African mid-market employers (Etihuku), healthcare data infrastructure (HIS Core, predict-dx), and learning systems (Skills Hub, Moodle). The through-line is that infrastructure for under-served users — geographically, economically, or computationally — needs the same engineering rigour as infrastructure for everyone else, and tends to need it more.
My doctoral research turns from the systems people use to the people themselves — specifically, what the brain looks like when it's running. EEG microstates are quasi-stable scalp topographies that segment continuous EEG into a discrete temporal alphabet; I'm working on whether deep generative models — variational autoencoders and Gaussian-mixture VAEs — can learn a microstate segmentation that's more stable across sessions and more behaviourally predictive than classical clustering.
In 2022 I represented Eswatini at the UNESCO India-Africa Hackathon at Gautam Buddha University in Uttar Pradesh. Team Geeks_on_Fire — five countries, five people — won problem statement AGRI12: an AI-assisted voice contact centre that lets farmers without smartphones report issues by phone and receive guidance back in their language. Gold medals and a ₹3 lakh team prize.
Based in Eswatini. Travel for research. Open to collaboration.
Infrastructure for under-served users needs the same engineering rigour as infrastructure for everyone else — and tends to need it more.
02 — Research
EEG microstates with deep generative models.
EEG microstates are quasi-stable scalp topographies — typically four to seven canonical classes — that segment the continuous EEG signal into a discrete temporal alphabet. The classical approach applies modified k-means clustering to the topographies at global field power maxima. It works, but it depends on hard choices (the number of states, the reference electrode, the band-pass), and the resulting segmentation can be brittle across sessions.
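For a concrete reference point, here is a minimal NumPy sketch of that classical pipeline: extract the topographies at GFP peaks, then run polarity-invariant modified k-means over them. The array shapes, the default `n_states=4`, and the fixed iteration count are illustrative assumptions, not this project's actual configuration.

```python
import numpy as np

def gfp_peaks(eeg):
    """Indices of local maxima of the global field power.

    eeg: (n_channels, n_samples), average-referenced.
    GFP at each sample is the spatial standard deviation across channels.
    """
    gfp = eeg.std(axis=0)
    # local maxima: strictly greater than both neighbours
    return np.where((gfp[1:-1] > gfp[:-2]) & (gfp[1:-1] > gfp[2:]))[0] + 1

def modified_kmeans(maps, n_states=4, n_iter=100, seed=0):
    """Polarity-invariant modified k-means over GFP-peak maps.

    maps: (n_maps, n_channels), one topography per row.
    Returns (templates, labels); templates is (n_states, n_channels).
    """
    rng = np.random.default_rng(seed)
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    templates = maps[rng.choice(len(maps), size=n_states, replace=False)]
    for _ in range(n_iter):
        # assign each map to the template with the highest |correlation|;
        # the absolute value is what makes the clustering polarity-invariant
        labels = np.abs(maps @ templates.T).argmax(axis=1)
        for k in range(n_states):
            members = maps[labels == k]
            if len(members) == 0:
                continue  # degenerate cluster; keep the old template
            # polarity-invariant centroid: the first right singular vector
            # of the member maps (direction of maximum explained variance)
            _, _, vt = np.linalg.svd(members, full_matrices=False)
            templates[k] = vt[0]
    return templates, labels
```

Back-fitting the full recording then amounts to labelling every sample with the template of highest absolute spatial correlation, which is where the hard choices above (number of states, reference, band-pass) propagate into the segmentation.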
This project asks whether a learned latent geometry — via a variational autoencoder — produces a microstate alphabet that is more interpretable, more stable across sessions, and more predictive of behaviour than the classical pipeline.
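One way to make the cross-session stability claim measurable (an illustrative protocol, not a settled one): fit templates separately per session, match the two template sets with the Hungarian algorithm on absolute spatial correlation, and report the mean matched correlation. The sketch assumes average-referenced, unit-norm template rows.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def session_stability(templates_a, templates_b):
    """Mean absolute spatial correlation between optimally matched
    microstate templates from two sessions.

    templates_*: (n_states, n_channels) with average-referenced,
    unit-norm rows, so the dot product is the spatial correlation;
    polarity is ignored, as in microstate analysis.
    """
    corr = np.abs(templates_a @ templates_b.T)   # (K, K) similarity
    rows, cols = linear_sum_assignment(-corr)    # maximise total match
    return corr[rows, cols].mean()
```

The same score applies unchanged to VAE-derived templates, which makes the learned and classical alphabets directly comparable on the stability axis.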
Approach
- VAE. Single Gaussian latent prior; learn a continuous embedding of topography frames; segment by latent-space clustering or by direct decoder reconstruction error.
- GMM-VAE. Gaussian-mixture latent prior — one component per microstate class — so the segmentation falls out of the latent prior structure rather than a post-hoc clustering step (sketched after this list).
- Architecture search. Sweep over latent dim, regularisation, and decoder choices; compare reconstruction-vs-segmentation tradeoff curves across architectures.
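A minimal PyTorch sketch of the GMM-VAE from the list above: a learnable mixture prior whose per-component responsibilities label each topography frame directly. The layer sizes, latent dimension, default `n_states`, and single-sample ELBO are illustrative assumptions; setting `n_states=1` recovers the plain-VAE case from the first bullet.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GMMVAE(nn.Module):
    """VAE with a learnable Gaussian-mixture prior over the latent space.

    One mixture component per putative microstate class: a frame's label
    is the argmax of the component responsibilities, so segmentation
    falls out of the prior rather than a post-hoc clustering step.
    """
    def __init__(self, n_channels=64, latent_dim=8, n_states=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_channels, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_channels))
        # mixture prior: weights (as logits), means, diagonal log-variances
        self.pi_logits = nn.Parameter(torch.zeros(n_states))
        self.mu = nn.Parameter(torch.randn(n_states, latent_dim))
        self.logvar = nn.Parameter(torch.zeros(n_states, latent_dim))

    def _log_joint(self, z):
        """log pi_k + log N(z; mu_k, diag var_k), shape (batch, n_states)."""
        d = z.unsqueeze(1) - self.mu                          # (B, K, L)
        log_comp = -0.5 * (d.pow(2) / self.logvar.exp()
                           + self.logvar
                           + math.log(2 * math.pi)).sum(-1)   # (B, K)
        return F.log_softmax(self.pi_logits, dim=0) + log_comp

    def forward(self, x):
        """x: (batch, n_channels) topography frames. Returns (loss, labels)."""
        mu_q, logvar_q = self.enc(x).chunk(2, dim=-1)
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        recon = self.dec(z)
        log_joint = self._log_joint(z)
        log_pz = torch.logsumexp(log_joint, dim=1)            # log prior
        log_qz = -0.5 * ((z - mu_q).pow(2) / logvar_q.exp()
                         + logvar_q + math.log(2 * math.pi)).sum(-1)
        # single-sample ELBO; unit-variance Gaussian decoder up to constants
        elbo = -F.mse_loss(recon, x, reduction='none').sum(-1) + log_pz - log_qz
        # responsibilities share a normaliser per row, so the argmax of the
        # joint is the argmax responsibility; at inference, mu_q could be
        # used in place of the stochastic sample z
        labels = log_joint.argmax(dim=1)
        return -elbo.mean(), labels
```

The label is read off the same per-component log-density the ELBO already optimises, which is the structural difference from the plain VAE in the first bullet, where labels would come from clustering the latent codes or from decoder reconstruction error after training.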
diagrams forthcoming