I love machines and I love humans. I have spent my entire career trying to understand how they think, how they interact, and what happens when they shape each other.
I have been a computer geek my entire life, and for just as long I have been equally obsessed with how people think: not what they report about their own thinking, but the implicit signals buried in their language, behavior, and patterns, the things they reveal without knowing they are revealing them. Everything I have built professionally comes from the intersection of those two obsessions: taking messy, unstructured, chaotic, deeply ambiguous data about human and machine behavior and cutting through the gray areas with disciplined proxy measurement, statistical rigor, and a deep appreciation for how humans operate at the individual, group, and system levels.
That combination led me to clinical psychology, forensic psychiatric evaluation, and critical incident response, where I learned that what someone tells you directly is only a fraction of what their language and behavior are communicating. It led me to my doctoral research, where I studied the written communications of 100 mass murderers and used computational psycholinguistics to measure implicit power drives in their language: K-means clustering revealed two distinct behavioral typologies, and binary logistic regression confirmed that real-world attack behaviors predicted cluster membership. The perpetrators could not be interviewed or assessed directly, but the quantitative structure of their language revealed motivational states that predicted how they would act, and that insight has driven every role I have held since.
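The clustering-then-regression workflow described above can be sketched roughly as follows. This is a minimal illustration on synthetic data, assuming scikit-learn; the feature and behavior variables are hypothetical stand-ins, not the dissertation's actual measures.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical psycholinguistic features per subject (e.g. implicit
# power-drive scores derived from written communications).
n_subjects = 100
features = rng.normal(size=(n_subjects, 4))

# Step 1: discover behavioral typologies with K-means (k=2, matching
# the two-cluster solution described above).
X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: test whether observable real-world behaviors (binary
# indicators, synthetic here) predict cluster membership via binary
# logistic regression.
behaviors = rng.integers(0, 2, size=(n_subjects, 3))
model = LogisticRegression().fit(behaviors, clusters)

# A real analysis would use held-out data and significance tests;
# in-sample accuracy is shown only to complete the sketch.
print(f"in-sample accuracy: {model.score(behaviors, clusters):.2f}")
```

The design point is the direction of the inference: typologies are discovered unsupervised from language alone, and behavior is then tested as a predictor of those typologies, rather than the reverse.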
Language reveals implicit psychological states that predict behavioral outcomes, even when the subject cannot be directly assessed. The same methods that make this measurable in humans are the methods we need to apply to AI systems, because the problem is structurally identical: you cannot directly assess an LLM's internal states, but you can observe its language and behavioral outputs, extract implicit signals from them, classify those signals into meaningful typologies, and use the typologies to predict outcomes and build governance.
At TikTok, as Principal Data Scientist and Applied Researcher, I built measurement and detection systems processing over a billion records, including statistical safety frameworks for detecting semantic coercion, network analysis that shifted content moderation from chasing individual pieces of content to identifying systematic bad actors through their behavioral signatures, multi-signal child safety detection for CSAM networks, mass violence risk assessment, and livestreamed violence UX analysis. At Spokeo, as Director of Data Science, I built and scaled the entire AI function from the ground up: a 49-terabyte knowledge graph with autonomous entity resolution through graph ML and weak supervision, a 26-research-question behavioral evaluation program that treats AI product quality as a formal research investigation with named research questions, phase gates, and cross-project dependency contracts, and an FCRA compliance architecture where legal requirements are machine-enforced at the inference layer, because a user acknowledging a terms-of-service document does not prevent the violation from occurring when the output is generated.
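The "machine-enforced at the inference layer" idea can be sketched in a few lines: the serving path itself refuses to produce outputs for a non-permissible purpose, rather than relying on a clicked-through acknowledgment. Everything below is an illustrative stand-in, assuming a simplified FCRA-style permissible-purpose rule; the names, the purpose taxonomy, and the model call are hypothetical, not Spokeo's actual architecture.

```python
from dataclasses import dataclass

# Hypothetical permissible purposes under an FCRA-style rule; a real
# taxonomy and its enforcement logic are far more involved.
PERMISSIBLE_PURPOSES = {"credit", "tenancy", "employment_with_consent"}

@dataclass
class InferenceRequest:
    query: str
    declared_purpose: str

def run_model(query: str) -> str:
    """Placeholder for the actual model inference call."""
    return f"[model output for: {query}]"

def generate(req: InferenceRequest) -> str:
    """Serving-path wrapper: the compliance check runs at inference
    time, so a non-compliant request never reaches a model output."""
    if req.declared_purpose not in PERMISSIBLE_PURPOSES:
        # Enforced in the code path, not in a terms-of-service document.
        raise PermissionError(
            f"purpose {req.declared_purpose!r} is not permissible"
        )
    return run_model(req.query)
```

The design choice is locational: the rule lives in the code path that produces the output, so the violation is impossible rather than merely discouraged.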
Across all of it, the method has always been the same: observe language and behavior, extract the implicit signals, classify into typologies, predict outcomes, and build the governance and measurement infrastructure around what you find.
Everything I do centers on my love of machines, my love of humans, and my relentless mission to remove the ambiguity, gray areas, and mysteries that prevent us from truly understanding the behavioral, cognitive, and implicit signals that reveal states, traits, behaviors, and personalities, for machines and humans alike, and how those things act on each other. As AI and ML become more and more integrated into the systems that judge, measure, and make decisions for humans, and as humans learn to work in partnership with machines, I believe we have an obligation to understand how these interactions unfold. We need a pathway to using AI to improve human productivity, function, and wellness while staying genuinely alert to potential harms and the implicit signals of those harms. During this period in history, nothing will be more impactful than the increasing integration of AI and its effects on humans as individuals and on humanity as a whole.
The field right now is conceptualizing human-computer interaction and human-AI interaction primarily with tools borrowed from computer science and UX, and those tools are not wrong, but they are incomplete: they flatten the behavioral dynamics that behavioral scientists have spent decades learning to hold. The messy, looping, self-modifying, context-dependent interplay of human cognition and machine behavior does not reduce cleanly to metrics and A/B tests, and it requires the kind of measurement science that can work with ambiguity, implicit signals, and complex systems without pretending the complexity away. That measurement science already exists in behavioral science and psychometrics and psycholinguistics, and my work is about bringing it to the table in a form that is rigorous enough and scalable enough to operate at the level the field needs.
Concretely, that means rethinking the methods by which we judge LLM outputs and understand their impacts on human states, traits, behaviors, and cognitive processes, and vice versa. It means building roadmaps for AI adoption that are grounded in a real understanding of how human-agent partnerships work at a behavioral level, not just at a task-completion level. It means measurement systems that can detect potential harms through implicit signals before those harms become visible in outcomes, and frameworks for improving human productivity and function and wellness through AI that are built on genuine behavioral science rather than engineering intuitions about what humans want.
My Post-Asimovian Framework for AI Alignment proposes psycholinguistic typology discovery as the observable state variable for adaptive AI safety governance, formalizing the same implicit signal measurement methods from my dissertation as control theory for alignment, because the alignment pendulum between dangerous permissiveness and harmful restrictiveness cannot be resolved with static thresholds that treat every context the same way. I published an NLP analysis of Claude Sonnet's unsolicited clinical diagnostics showing they caused measurable user harm, because that is what it looks like when an AI system's behavioral outputs act on human psychological states without anyone measuring the interaction. I built a clinical AI system for autism assessment with a validity check that flags its own suspicious performance, because measurement systems that cannot question their own accuracy are not measurement systems. I designed a game where the core mechanic is red-teaming your own AI system's content filter through natural language paraphrasing, because I wanted to demonstrate that these problems are deep enough and interesting enough to be worth playing with, not just publishing about.
I call myself a robopsychologist because that is the most accurate description of what I do: I apply clinical psychological assessment methodology, psycholinguistic measurement, forensic behavioral profiling, and psychometric rigor to the problem of understanding and governing AI systems and the humans who interact with them. Every project I have done is a different angle on the same question, which is how we measure what thinking systems, both human and machine, are revealing through their language and behavior, and how we use that measurement to make the interaction between them better and safer and more honest, at a moment in history when getting this right matters more than almost anything else we could be working on.
There is a thing I have not mentioned yet, which is that my brain works the way it does in part because I have ADHD, and I have never once considered that a limitation. The cross-domain pattern recognition, the speed at which I generate work, the ability to see connections between forensic psychology and AI alignment and psycholinguistics and control theory: that is what a divergent brain looks like when you point it at problems worth solving.
It also means I need very specific conditions for deep work. This is the playlist I use during sustained focus periods, and I am sharing it because it is not what most people would consider good music, but it is precisely tuned for concentration in a divergent brain:
The short version: 140 to 150 BPM provides the arousal threshold an ADHD brain needs to stay engaged, bilateral panning (sound bouncing between ears) activates both hemispheres and creates a grounding effect similar to EMDR, and polyrhythmic complexity prevents the habituation that causes simple beats to become invisible. The long version is that I could not include a playlist on my about page without taxonomizing exactly why it works, so I wrote a full breakdown.
My side hustle during my doctorate was live club DJing, which is probably the least surprising thing on this page. When I am not profiling language patterns or building measurement systems, I am cutting things with giant lasers (mostly paper art), doing cloisonné badly, building furniture when the mood strikes, and becoming an expert at whatever I just found on YouTube.