Brooklyn, NY • AI & Media • Bloodhound Girl Dad
I'm a Brooklyn-based entrepreneur experimenting with new ideas at the intersection of AI & media.
I've been a management consultant and a television producer. Right now, I'm a tech founder.
Sometimes I wonder if I should just stick to a lane, but I really think anyone can do anything so long as they show up with grit and authentic curiosity.
I studied Philosophy and Mathematical Logic at Tufts. Specifically, I gravitated toward consciousness and the mathematical formalization of natural language, two topics now at the center of AI.
Oh also, I got to work with Daniel Dennett. Daniel Dennett!!
Below are two papers I'm particularly proud of. If you're a logic nerd, reach out and let's debate.
LLMs are surprisingly good at decoding implied meaning: tell one "She just got a raise" in response to "Those are expensive shoes" and it connects the dots. In my paper, I called these propositional implicatures and proposed a bracket notation 〈C & D〉 that separates the uttered claim from the implied one, allowing either to be independently negated. The harder problem, both for the paper and for modern NLP, is what I called instructive implicatures (sarcasm, irony, understatement), where unstated meaning doesn't add a proposition but transforms how to interpret one. The paper introduced a speculative "instructions operator" In(H), but honestly, I couldn't fully formalize it. NLP research has since run up against the same gap: context-dependent meaning transformation remains a frontier problem in language understanding.
∀x[(Ux ∨ Ix) → Px]
∀x(Px → Cx)
∀x(Cx → Bx)
∀x(Bx → Nx)
∴ ∀x[Ix → (Cx ∧ Bx ∧ Nx)]
Every uttered or implied claim is a proposition; all propositions are programmable; all programmable propositions are Boolean at base; all Boolean claims are negatable. Therefore implicatures are programmable, Boolean, and negatable.
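If it helps to see the bracket notation operationally, here's a toy sketch in Python. It isn't the paper's formalism, just an illustration of the one property that matters: the uttered claim and the implied claim are separate objects, so either can be negated on its own. All the names here are made up for the example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    """One claim, carrying its own polarity."""
    text: str
    negated: bool = False

    def negate(self) -> "Prop":
        # Flip this claim's polarity without touching anything else.
        return Prop(self.text, not self.negated)

    def __str__(self) -> str:
        return f"¬({self.text})" if self.negated else self.text

@dataclass(frozen=True)
class Implicature:
    """The 〈C & D〉 pair: what was said, and what was meant."""
    uttered: Prop  # C: the claim actually spoken
    implied: Prop  # D: the claim the hearer is meant to infer

    def __str__(self) -> str:
        return f"〈{self.uttered} & {self.implied}〉"

shoes = Implicature(
    uttered=Prop("she just got a raise"),
    implied=Prop("she can afford the shoes"),
)
print(shoes)  # 〈she just got a raise & she can afford the shoes〉
# Deny only the implied half; the uttered claim stands untouched:
print(Implicature(shoes.uttered, shoes.implied.negate()))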
Transformer attention mechanisms are, by design, access-consciousness machines: information made selectively available for downstream processing. The open question is whether that access ever constitutes experience. My paper argued it must, attacking Ned Block's influential distinction between access consciousness and phenomenal consciousness through a transitive chain: if every phenomenal state is reportable and vice versa, and every accessible state is reportable and vice versa, then the two kinds of consciousness can never come apart. If the Dennett view my paper endorses is correct, sufficiently rich internal states in AI systems might warrant moral consideration without a bright dividing line. The paper's reportability argument also bears directly on AI interpretability: if a system can't report its internal states to anyone, including itself, is it experiencing anything at all?
∀x[(Px → Rx) ∧ (Rx → Px)]
∀x[(Ax → Rx) ∧ (Rx → Ax)]
∴ ∀x[(Px → Ax) ∧ (Ax → Px)]
∴ ¬∃x[(Px ∧ ¬Ax) ∨ (¬Px ∧ Ax)]
All phenomenal states are reportable and vice versa; all accessible states are reportable and vice versa. By transitivity, phenomenal and accessible states always co-occur. There is no case where one exists without the other.
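The validity of the chain is easy to check mechanically. Below is a brute-force model check in Python, a toy I'm including for fun rather than anything from the paper. Because every formula quantifies over the same single variable, enumerating the eight truth assignments for one individual's P, R, and A settles the question.

from itertools import product

def argument_is_valid() -> bool:
    # P = phenomenal, R = reportable, A = accessible.
    for p, r, a in product([False, True], repeat=3):
        premises = (p == r) and (a == r)   # P and R co-occur; A and R co-occur
        conclusion = (p == a)              # therefore P and A co-occur
        if premises and not conclusion:
            return False                   # a countermodel would land here
    return True

print(argument_is_valid())  # True: no assignment satisfies the premises but not the conclusion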
In my free time, I love watching movies; I try to catch one in a theatre every week. I'm particularly into Hollywood noir, Grande Dame Guignol, and psychological thrillers. A few favorites:
9½ Weeks (1986)
Sunset Boulevard (1950)
Se7en (1995)
Fight Club (1999)
Buffalo '66 (1998)
Indecent Proposal (1993)
Fatal Attraction (1987)
Margin Call (2011)
Black Swan (2010)
Training Day (2001)
Mulholland Drive (2001)
The Talented Mr. Ripley (1999)