\documentclass{human}
\usepackage{curiosity, quiet, forests}
\begin{document}

Ash.

research · security · nature

\begin{abstract}
Ash is a researcher and the CTO of UnrealizeX, where he is building the bridge between human imagination and rendered worlds. He trains large models and ships production systems at enterprise scale, and believes security and honesty are the hardest — and most worthwhile — parts of the craft. The patience you learn from anything that grows slowly is the same patience a safe system requires.
\end{abstract}

\section{About}

I am Ash — co-founder and CTO of UnrealizeX, and a quiet observer of the world that grows around me. My team and I are teaching machines to dream in three dimensions. When I am not at the desk, I am usually somewhere with more leaves than walls, watching how slower, older systems actually work.

My expertise is in security and in building systems for enterprises at scale — distributed training, hardened production infrastructure, and the careful, patient work of making things safe and honest. I have shipped research and engineering used by hundreds of thousands of people, and I care, perhaps more than is efficient, about building it the right way: with ethics, with principles, with a careful eye on what a system can and cannot be trusted to do.

If you are a maker — an artist, a writer, a builder of any kind — I would like to hear what you are working on. I think the next decade of tools belongs to the people who still know how to pay attention.

fig. 1 — self, as a study (a contemplative scientific portrait; botanical illustration with mathematical notation)

“The AI I care about doesn’t finish your work — it extends your reach, the way good boots let you walk farther into the woods.” — a note to self

\section{Research}

A handful of themes I have spent real time on — published at peer-reviewed venues, written up in long arXiv evenings, or quietly tested across many GPUs until the numbers said something true.

\subsection{Self-supervised representations}

A line of work on learning visual features without labels — layered pyramid architectures that read an image at many scales at once. Presented at a top machine learning venue; generalised well to segmentation and classification.

\keywords{ pyramids · ssl · vision }
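The idea of reading an image at many scales at once can be sketched in a few lines. This is a toy, with illustrative helper names (`avg_pool2`, `pyramid_descriptor`) that are not from the published architecture: repeatedly downsample, collect one statistic per level, and concatenate.

```python
def avg_pool2(img):
    """Downsample a 2D grid by 2x2 average pooling."""
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    return [
        [(img[i][j] + img[i][j + 1] + img[i + 1][j] + img[i + 1][j + 1]) / 4.0
         for j in range(0, w, 2)]
        for i in range(0, h, 2)
    ]

def pyramid_descriptor(img, levels=3):
    """Concatenate a per-level peak response into one feature vector."""
    feats = []
    for _ in range(levels):
        feats.append(max(v for row in img for v in row))
        img = avg_pool2(img)
    return feats

img = [[float((i + j) % 4) for j in range(8)] for i in range(8)]
print(pyramid_descriptor(img))  # coarser levels smooth the peaks away
```

The point the real work makes more carefully: each level of the pyramid sees a different granularity of the same scene, and the concatenation lets downstream tasks pick the scale they need.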
\subsection{Generative AI \& large language models}

Fine-tuning large language models on multi-modal content — video, document, image — with parameter-efficient techniques and distributed training across many GPUs. Retrieval-augmented reasoning over private corpora.

\keywords{ llm · lora · rag · multimodal }
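Retrieval-augmented reasoning reduces, at its smallest, to: score documents against the question, take the best, and ground the model's prompt in them. A minimal sketch with a toy overlap score (the corpus, helper names, and scoring are illustrative, not the production system):

```python
def score(query, doc):
    """Count word overlap between query and document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=1):
    """Return the top-k documents from a private corpus by overlap score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, corpus):
    """Ground the model's answer in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Invoices are archived under finance/2023.",
    "The training cluster uses eight GPU nodes.",
    "Holiday requests go through the HR portal.",
]
print(build_prompt("How many GPU nodes does the cluster use?", corpus))
```

In practice the overlap score is replaced by dense embeddings and the corpus by a vector index, but the shape of the loop is the same.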
\subsection{Reinforcement learning, transferred}

Cross-game transfer for Deep Q-Networks: if an agent has learned to see, it should not have to learn to see again. Reduced training wall-clock from weeks to hours on Atari-style environments.

\keywords{ rl · dqn · transfer }
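"It should not have to learn to see again" has a simple mechanical reading: copy the perception layers, re-initialise only the game-specific head. A sketch with hypothetical layer names (`conv*`, `q_head`), not the actual networks:

```python
def transfer(source_net, head_init=0.0):
    """Copy the convolutional 'seeing' layers; reset the Q-value head."""
    target = {}
    for name, weights in source_net.items():
        if name.startswith("conv"):
            target[name] = list(weights)               # transferred as-is
        else:
            target[name] = [head_init] * len(weights)  # re-initialised
    return target

# weights are toy scalars standing in for real tensors
pong_net = {"conv1": [0.2, -0.1], "conv2": [0.5], "q_head": [1.3, -2.0, 0.7]}
breakout_net = transfer(pong_net)
print(breakout_net)
```

The wall-clock win comes from the copied layers starting near a good optimum, so only the small head has to be trained from scratch on the new game.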
\subsection{Fairness \& the shape of bias}

A study of debiasing classifiers — whether the de-biased model we think we have trained is the one we actually deployed. Honest numbers about an honest problem.

\keywords{ fairness · audit · bias }
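The audit question, whether the model we think we trained is the one we deployed, can be posed as comparing one fairness statistic before and after deployment. A toy sketch using the demographic-parity gap, with illustrative numbers only:

```python
def positive_rate(preds):
    """Fraction of positive decisions in a list of 0/1 predictions."""
    return sum(preds) / len(preds)

def parity_gap(preds_a, preds_b):
    """Demographic-parity gap: |P(y=1 | group A) - P(y=1 | group B)|."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# the gap we measured on the model we *think* we trained ...
trained_gap = parity_gap([1, 1, 0, 1], [1, 0, 0, 1])
# ... versus the model actually serving traffic
deployed_gap = parity_gap([1, 1, 1, 1], [1, 0, 0, 0])
print(trained_gap, deployed_gap)
```

If the two gaps diverge, something between training and serving (quantisation, feature drift, a stale artefact) has undone the de-biasing, which is exactly the honest number worth reporting.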
\subsection{Watermarking foundation models}

A robust scheme for detecting AI-generated text and attributing it back to the model that produced it. Detection at 99.9\% confidence, stable under paraphrase.

\keywords{ provenance · llm · safety }
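One standard detection shape (a green-list test, shown here as an illustration, not the scheme above): a keyed partition marks half the vocabulary "green", watermarked text over-uses green tokens, and a one-sided z-test measures the excess over chance. A deterministic toy:

```python
import math

def is_green(token, key=7):
    """Toy keyed partition of the vocabulary into green/red halves."""
    return (sum(map(ord, token)) + key) % 2 == 0

def z_score(tokens, key=7):
    """How far the green-token count sits above the chance rate of 1/2."""
    n = len(tokens)
    greens = sum(is_green(t, key) for t in tokens)
    return (greens - n / 2) / math.sqrt(n / 4)

print(z_score(["ab"] * 20))      # all green under this toy hash: strong signal
print(z_score(["ab", "b"] * 10)) # half green, half red: no signal
```

Paraphrase robustness then amounts to the green-rate surviving token substitutions, which is where the real scheme does its actual work.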
\subsection{Video understanding}

Graph-structured spatio-temporal reasoning for video captioning — a model that watches and speaks in a slightly more human way. Custom CUDA kernels where they mattered.

\keywords{ video · gcn · cuda }
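The graph structure itself is the recoverable idea: nodes are (frame, object) pairs, with spatial edges inside a frame and temporal edges linking the same object across consecutive frames, then features flow along the edges. A toy sketch (node layout and the single averaging step are illustrative, not the model):

```python
def build_edges(num_frames, num_objects):
    """Spatial edges within a frame, temporal edges across adjacent frames."""
    edges = set()
    for f in range(num_frames):
        for a in range(num_objects):
            for b in range(a + 1, num_objects):
                edges.add(((f, a), (f, b)))          # spatial
            if f + 1 < num_frames:
                edges.add(((f, a), (f + 1, a)))      # temporal
    return edges

def message_pass(features, edges):
    """One round of neighbour averaging over the graph."""
    out = {}
    for node, value in features.items():
        nbrs = [b if a == node else a for (a, b) in edges if node in (a, b)]
        out[node] = (value + sum(features[n] for n in nbrs)) / (1 + len(nbrs))
    return out

features = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 5.0, (1, 1): 7.0}
edges = build_edges(2, 2)
out = message_pass(features, edges)
print(out)
```

Stacking such rounds is what lets a caption for frame ten be informed by what an object was doing in frame one.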

\section{Craft}

What I build, when I am building. The shapes change, but the practice is the same: carve away until only the essential remains.

“I’ve noticed the people I admire most are quieter than you’d expect them to be.” — from the notebook

fig. 2 — a notebook of hand-drawn equations and pressed leaves

\section{Say hello}

I write back to makers, researchers, and anyone with an interesting problem and a patient question. Shorter is usually better.

— Ash
\end{document}