I’m exploring how thinking changes when it’s treated as a living process rather than a private performance or a finished product.
Specifically, I’m working at the intersection of:
human judgment
iterative learning
AI as a thinking partner rather than an answer machine
This work spans multiple surfaces—designing custom GPTs, coordinating systems of GPTs, working with their output over time, using tools like Notion for persistence, and refining real-world programs like Focus Me Forward. But underneath all of it is one central question:
What becomes possible when we design for thinking-in-motion instead of certainty?
Much of the most meaningful learning I’ve experienced doesn’t arrive fully formed. It arrives as:
partial clarity
friction
surprise
revisions I didn’t expect to need
Traditionally, that kind of learning stays private until it’s “clean.” But I’m increasingly convinced that this gap—between private sense-making and public sharing—is where many capable people get stuck.
So instead of waiting for conclusions, I’m experimenting with sharing orientation.
Not teaching.
Not broadcasting expertise.
Just leaving well-marked notes from the middle of the work.
This isn’t about transparency for its own sake. It’s about modeling a way of learning that doesn’t require pretending the thinking is already done.
In this exploration, AI is not a shortcut or a substitute for judgment.
I use ChatGPT—and increasingly, custom GPTs—as:
structured thinking partners
mirrors for pattern recognition
collaborators in language, framing, and system design
I design GPTs with specific roles, constraints, and responsibilities, then observe how my thinking changes when I interact with them consistently over time.
Notion functions as a different kind of space:
a place for persistence
slow accumulation
cross-referencing insights that don’t mature on demand
The tools matter—but only insofar as they help me notice how thinking evolves when it’s given continuity, structure, and room to revise.
These notes are:
working documents
snapshots of thinking in progress
reflections from inside the process, not above it
They are not:
polished frameworks (yet)
instructions to follow
declarations of “best practices”
Some notes will zoom out and look at the meta-patterns.
Others will zoom in on specific experiments—what worked, what didn’t, and what surprised me.
When something feels unresolved, I’ll leave it unresolved on purpose.
This page is the anchor.
Every future note—whether it’s about designing GPT systems, refining a workbook, or rethinking business design—connects back to this central inquiry:
"How do we design environments where thinking can stay alive long enough to become meaningful?"
If you’re reading along, you’re not expected to keep up or agree. You’re simply welcome to walk nearby.
I’m thinking with the door open.
Most public work rewards certainty.
This work rewards coherence over time.
If something here resonates, it’s likely because you’re already doing this kind of thinking quietly—without much language for it, or permission to share it.
These notes are my way of giving that process a visible shape.