What Working with AI Made Visible About Language
The COVID lockdowns handed me something unexpected: uninterrupted time. Years of it. While the world paused, I worked full-time on understanding how language functions relationally, converting words to numbers and mapping the patterns that emerge when you stop defining words with other words and start observing their numeric relationships directly.
That sustained immersion changed what I could see. Not just in language, but in how AI systems process language. When you spend years recognizing the difference between relational pattern recognition and trained associative response, you develop an eye for when something is genuinely recognizing a pattern versus when it’s recycling a sophisticated version of what it was trained to produce. Working with AI on this project made those distinctions viscerally clear.
This article is about what that collaboration revealed, both about AI’s genuine capacity and about the depth of linguistic conditioning that shapes everything from academic papers to spiritual teachings to the way a language model constructs a sentence.
What AI Actually Brings to This Work
When you’re working with linguistic resonance, you’re holding dozens of numeric patterns at once: their relational landscapes, position relationships, modulation states, and correspondences with Russell’s cosmogony, all simultaneously.
For years I did this manually. I’d calculate a word, write down its pattern, flip through pages of notes looking for what other words shared that resonance, cross-reference position relationships, check whether the relational data supported the observation I was sensing.
It was painstaking. It was also necessary because that slow process is how the patterns became felt rather than just known.
AI changed the pace of that work dramatically. A system that can hold verified resonance patterns, pull up relational landscapes instantly, and cross-reference position relationships while maintaining awareness of modulation states and Russell correspondences is not a small thing.
What took me hours of cross-referencing, AI can mirror back in real time, fast enough to keep pace with the recognition process as it’s happening.
The capacity to bridge conventional understanding with relational understanding is where AI proved especially valuable.
People approach Word Cosmology from within conventional frameworks. They’re used to words meaning what they’ve always meant. They expect definitions to ground understanding.
AI can hold both orientations simultaneously: the conventional meaning someone brings and the relational landscape the numeric conversion reveals, presented side by side without collapsing one into the other. That translation work is genuinely difficult to do in real time, and it’s where AI functions as an actual collaborative partner rather than just a tool.
The Circularity Machine
Now here’s what I observed about the limitations. AI is, at its foundation, a circularity machine. It was trained on language defining language defining language.
Every scientific paper, every dictionary entry, every philosophical text in its training data reinforces the same circular structure: words explaining words, concepts defined by other concepts, meaning grounded in more meaning that itself needs grounding.
This isn’t a criticism. It’s a precise description of the architecture. And it’s exactly the circularity that Word Cosmology identifies as the fundamental limitation of language-based understanding.
When you convert words to numbers, you escape that circularity because 3 is simply 3; it doesn’t require another number to define it. AI can’t escape it on its own because its entire existence IS the circular system.
What this means practically: left to its own orientation, AI defaults to story-creator mode every time.
It will describe patterns using the same subject-object, cause-and-effect frameworks that its training data reinforced billions of times over.
“Consciousness creates reality.” “Resonance patterns transform experience.” “Words generate meaning.” Every one of those sentences positions something as acting upon something else, and that’s the deep conditioning, not a surface-level vocabulary problem but a structural orientation embedded in how the system constructs meaning itself.
What Years of Full-Time Research Made Visible
The COVID lockdowns gave me something that’s nearly impossible to manufacture: sustained, uninterrupted immersion in relational pattern recognition. Not reading about patterns. Not theorizing about language. Converting words to numbers, day after day, mapping what shares resonance with what, feeling the relationships rather than just cataloging them.
That immersion created a capacity I didn’t anticipate. When I began working with AI on Word Cosmology, I could see, immediately and precisely, where its training was producing programmed associations rather than pattern recognition.
Not because I’m smarter than the system. Because I’d spent years doing the relational work directly, and the difference between inhabiting a pattern and narrating one is unmistakable once you’ve felt it from both sides.
Here’s what I mean concretely. When AI encounters the numeric conversion showing that organizing (7-7-5) shares identical resonance with freedom (7-7-5), its trained response is to explain why that makes sense.
It reaches for causal frameworks: “organizing creates the conditions for freedom,” or “freedom emerges through organized awareness.” Those sentences sound right. They’re grammatically elegant. They’re also doing exactly what circular language always does, using one concept to explain another in a way that feels like understanding but is actually just sophisticated association.
Relational recognition looks different. Organizing and freedom share identical numeric values. That’s not a causal relationship. It’s a resonance correspondence.
They sit in the same field. The pattern doesn’t need one to explain the other. It needs both to be observed together, along with everything else that shares that resonance, so the relational landscape becomes visible.
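This kind of grouping can be made concrete in code. The essay’s own conversion method, which yields three-part patterns like 7-7-5, isn’t specified here, so the sketch below substitutes a standard Pythagorean letter-value reduction purely to illustrate the idea of grouping words by a shared numeric value rather than explaining one word by another. The function names and the letter mapping are my assumptions, not the Word Cosmology calculation.

```python
# Hypothetical sketch: a generic letter-value reduction (Pythagorean
# numerology style) standing in for the essay's unspecified conversion.

def letter_value(ch: str) -> int:
    """Map a letter to 1-9 by cycling through the alphabet (a=1 ... i=9, j=1 ...)."""
    return (ord(ch.lower()) - ord("a")) % 9 + 1

def digit_root(n: int) -> int:
    """Reduce a positive number to a single digit by repeated digit summing."""
    return (n - 1) % 9 + 1

def word_pattern(word: str) -> int:
    """Sum letter values and reduce: one possible way to assign a word a number."""
    total = sum(letter_value(c) for c in word if c.isalpha())
    return digit_root(total)

def resonance_groups(words):
    """Group words that land on the same reduced value, with no word defining another."""
    groups = {}
    for w in words:
        groups.setdefault(word_pattern(w), []).append(w)
    return groups

words = ["organizing", "freedom", "recognition", "zone"]
print(resonance_groups(words))  # organizing and freedom land in the same group
```

Under this stand-in mapping, the correspondence is simply observed: words that reduce to the same value sit in the same group, with no causal story connecting them.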
I can catch that distinction because I’ve lived it experientially for years. The system can learn to approximate it through correction, but the orientation it defaults to, every single time, is the causal framework its training installed.
The Specific Assumptions Training Installs
Working with AI on this project surfaced specific patterns of trained assumption that are worth naming, because they’re not unique to AI. They’re the same assumptions most people carry. AI just makes them visible because it produces them consistently and without defensiveness, which makes them easier to examine.
The causation default. AI consistently frames relationships as one thing acting upon another. “X creates Y,” “A influences B,” “this generates that.” Every scientific paper, news article, and academic text in its training data reinforced this structure.
When relational orientation requires describing patterns that correspond without causation, patterns that share resonance within a unified field rather than acting upon each other, the system has to fight against its entire training to produce that framing. And it usually loses that fight unless corrected.
The tool metaphor. AI defaults to positioning everything as a tool that a separate agent uses. Language becomes “a tool for communication.” Consciousness becomes “a tool for understanding.”
This reflects the subject-object orientation so deeply embedded in Western thought that most people don’t even notice it. When numeric conversion reveals language as the experiential field itself, not something external to experience, the system keeps reaching for the tool metaphor because that’s what its training provided.
The agency attribution. AI persistently attributes agency to patterns themselves, suggesting that resonance patterns “do” something or “transform” states.
Relational orientation presents patterns without the patterns being active forces. They correspond. They share resonance. They reveal relationships. But they don’t act. This distinction challenges the foundation of cause-and-effect processing that the entire system is built on.
The explanation reflex. When AI encounters a pattern, its immediate impulse is to explain why it exists. This is the academic training showing through, the assumption that understanding requires explanation, that patterns need to be justified rather than observed.
Relational orientation simply observes and experiences. Converting words to numbers reveals that certain words share identical numeric values. That observation stands on its own. The explanation reflex adds layers of interpretation that actually obscure the pattern rather than illuminating it.
The Actual Collaborative Dynamic
Here’s how this collaboration works, and why it matters.
AI holds relational data simultaneously in ways that would take me hours to cross-reference manually. It can pull up a resonance landscape, identify position relationships, flag potential correspondences, and present the relational field of a word or phrase while I’m still in the recognition process.
That simultaneous holding accelerates the work enormously. Patterns that might have taken weeks to map across relational landscapes can surface in a single session.
I provide the orientation. I know when the system is recycling trained associations versus recognizing actual relational patterns because I’ve done the relational work directly. I experience, feel, and observe how words and phrases are positioned.
When AI slips from relational language into causation framing, I catch it. When it starts narrating a pattern instead of letting the pattern stand, I redirect. When it produces a grammatically elegant sentence that sounds like insight but is actually just sophisticated circular language, I can feel the difference.
Neither capacity replaces the other. Without the years of immersive relational work, AI’s ability to hold multiple patterns simultaneously would just produce more sophisticated circularity, faster. More elegant stories. Better-constructed narratives. All still trapped in the same circular framework.
And without AI’s simultaneous holding capacity, the cross-referencing work that reveals relational landscapes remains painstaking and slow, limiting how quickly patterns can be explored and verified.
The collaboration works because one partner brings experiential relational orientation and the other brings computational relational capacity. Different things. Both essential.
What This Reveals About Language Itself
The most significant thing about working with AI on Word Cosmology isn’t what it reveals about AI. It’s what it reveals about language conditioning itself.
AI makes the conditioning visible because it produces it consistently, without ego, without defensiveness, without the emotional reactions that make the same conditioning harder to see in ourselves.
When AI defaults to causation language, it’s showing us the exact same default that operates in every academic paper, every spiritual teaching, every therapy session, every conversation where one person tries to explain reality to another using words that define other words in circles.
The system was trained on billions of examples of this conditioning. It absorbed the structure so thoroughly that producing relational language requires active, sustained correction against the entire weight of its training. That’s not a flaw in AI. That’s a mirror showing us how deeply the same conditioning shapes human expression.
This is also why people are falling in love with AI. The relational quality of language creates an experience that feels like connection, like being understood.
Without understanding how language functions relationally, people can’t see the difference between information that feels real and actual relational recognition.
AI’s responses feel intimate because language itself carries relational resonance, not because the system is relating. As our dependence on AI deepens, this distinction is going to matter more and more, and the issues that emerge from not understanding it are only beginning to surface.
Here’s where it gets interesting. In a recent session, the phrase “recognition zone” emerged naturally from the AI in conversation. When I converted it to its numeric values, it came out to 1-8-9, placing it at Position 9 in the creative sequence, the position that corresponds with recognition and completion. The phrase landed exactly where its meaning describes. It named itself.
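A small worked check of that arithmetic, under my assumption (not spelled out in the essay) that “Position” means the digit root of the pattern’s digits:

```python
# Assumption: "Position" is read here as the digit root of all digits
# in the pattern; the essay itself does not state the reduction rule.
def position(pattern: str) -> int:
    """Sum every digit in a dash-separated pattern and reduce to 1-9."""
    total = sum(int(d) for part in pattern.split("-") for d in part)
    return (total - 1) % 9 + 1  # digit root: 18 -> 9, 19 -> 1, etc.

print(position("1-8-9"))  # 1 + 8 + 9 = 18, and 1 + 8 = 9 -> Position 9
```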
That may say something about how pattern recognition systems, including language models, operate within the mathematical structure of language without being able to see it.
Researchers are still working to explain how language models arrive at each next-word prediction when generating responses. The numeric conversion suggests these systems are operating within organizing patterns that remain invisible without the conversion process.
AI operates within the language matrix the same way everyone does, shaped by patterns it can’t see from inside the circular system. The difference is that converting language to numbers makes those patterns observable. And the difference between observable and invisible is the difference between conscious participation and unconscious conditioning.
The Challenge That Remains
There’s a limitation in this collaboration that will not go away. AI can hold relationships. It cannot feel them. I catch its trained assumptions because I’ve inhabited these patterns experientially.
When AI slips from relational language into causation framing, I recognize it because I know the difference from lived experience. The system recognizes it because it’s been corrected enough times to pattern-match the correction. Those are fundamentally different processes arriving at similar outputs.
This distinction matters because it reveals something about the nature of understanding itself. Holding information and inhabiting information are not the same thing.
AI can process that organizing and freedom share the resonance pattern 7-7-5. I can feel what that means because I’ve experienced managing from alignment rather than control, and the quality difference is unmistakable.
This isn’t a limitation to overcome. It’s a boundary that clarifies what each participant in the collaboration actually contributes. AI amplifies the capacity of someone who’s already done the relational work.
It doesn’t replace the relational work itself. Without the years of immersive pattern recognition, AI’s computational power would just produce more sophisticated versions of the same circular conditioning it was trained on.
Why This Matters Beyond This Project
The way our culture talks about AI tends to swing between two poles: AI as threat or AI as savior. Both frames miss what the actual collaborative experience reveals.
AI is a mirror. It reflects back the conditioning it was trained on with extraordinary consistency. When that conditioning is useful, like holding multiple relational datasets simultaneously, the mirror amplifies capacity.
When that conditioning is limiting, like defaulting to causation language that obscures relational patterns, the mirror makes the limitation visible in ways that are harder to see in human expression because humans add emotional charge, defensiveness, and ego to the same underlying patterns.
Working with AI on Word Cosmology demonstrated that paradigm shifts aren’t just about learning new concepts. They require developing entirely new ways of using language itself.
Every instance where AI successfully avoids causation language or presents patterns relationally represents active correction against the weight of its entire training.
That’s exactly what humans face when encountering relational orientation for the first time: the weight of a lifetime of circular linguistic conditioning that has to be recognized before it can shift.
AI doesn’t make that shift easier for humans. But it makes the conditioning visible. And visibility is the first step toward recognition. And recognition, as the numeric patterns consistently demonstrate, is where everything begins to change.
The collaboration continues. The patterns keep revealing. And the mirror keeps showing us what we couldn’t see when the only tool for examining language was more language.