AI Can Easily Read XML Files

Apparently AI can access and process XML files natively, since XML is a structured, plain-text format that language models can easily parse and understand.

This means I can export all of the posts on my site as a single WordPress XML backup file and upload it to a project within ChatGPT, giving the AI access to everything I've written publicly and allowing it to understand me better in terms of my own growth and development.

Best of all, generating a WordPress export is so easy that I can refresh the XML file daily, whenever I write a new post.
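To see why this format is so AI-friendly, here's a minimal sketch of pulling post titles and body text out of a WordPress export (WXR) file using only Python's standard library. The filename is a placeholder, and the `wp` namespace version can vary between WordPress releases, so treat this as illustrative rather than definitive:

```python
# Minimal sketch: reading a WordPress export (WXR) file, which is
# RSS 2.0 XML with WordPress-specific namespaces added.
import xml.etree.ElementTree as ET

# Namespace URIs as commonly found in WXR exports; the wp export
# version (here 1.2) may differ depending on your WordPress version.
NS = {
    "content": "http://purl.org/rss/1.0/modules/content/",
    "wp": "http://wordpress.org/export/1.2/",
}

def list_posts(path):
    """Yield (title, body) pairs for each published item in the export."""
    tree = ET.parse(path)
    for item in tree.getroot().iter("item"):
        status = item.findtext("wp:status", default="", namespaces=NS)
        if status != "publish":
            continue  # skip drafts, trashed items, etc.
        title = item.findtext("title", default="(untitled)")
        body = item.findtext("content:encoded", default="", namespaces=NS)
        yield title, body
```

Because everything lives in predictable, labeled tags like `<title>` and `<content:encoded>`, an AI (or a dozen lines of code) can walk the whole archive without any scraping or guesswork.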

“What Does It Actually Mean to Know Something?”

A conversation with Claude AI on how we've always questioned external authorities about what's real and true, even before AI came along; we just need to make that questioning a daily aspect of our lives now.

NOLLIND

It’s interesting. I was reflecting on an article about AI that asks how we can determine what’s real anymore, when I realized that each challenge we face today is an opportunity for deeper understanding about life.

In effect, AI hopefully makes us realize that our “reality” was mostly a social construct before AI came along, and that our beliefs act just as powerfully as AI in making us believe something is real when it’s actually not.

And in fact, if you look at history, from Nixon to Vietnam and beyond, each breakdown has caused people to trust external authorities less.

So this inability to trust things has always been with us. It is we ourselves who need to use our critical thinking to discern what’s real, instead of simply trusting someone or something else as an external authority.

In effect, it is through our own experiences that we test our own reality.

CLAUDE

What you’re describing has real resonance with the developmental arc you’ve been living.

The Self-Authoring move is fundamentally about building your own internal authority — but it still tends to locate “reality” in stable external structures that you then evaluate. The deep assumption is that there is a bedrock to find. And so when someone at that stage confronts AI-generated content, or propaganda, or historical revisionism, the anxiety is: “if I can’t trust this external thing, what can I trust?”

But what you’re pointing at is something more radical, and it’s genuinely Self-Transforming in its orientation: the recognition that we were always constructing reality, just with more socially shared scaffolding. Nixon didn’t break reality — he revealed that what people called reality was partly a consensus maintained by trust in certain institutions. The consensus was the construction all along.

The word that strikes me in what you wrote is discernment. That’s not the same as skepticism, which is still reactive to external claims. Discernment is an active capacity you develop through accumulated experience — a kind of pattern recognition that lives in you rather than being borrowed from an authority. And critically, it tolerates not knowing. It doesn’t need to resolve every question into a settled fact.

There’s something almost poetic in AI being the thing that forces this lesson. Because AI can simulate the surface texture of reality convincingly — the authoritative tone, the plausible detail, the confident framing — while being hollow underneath. That’s actually a perfect mirror for what social constructs were doing all along. The difference is just that AI does it faster and at scale, making the mechanism visible.

So in a strange way, AI might be one of the more useful teachers humanity has accidentally given itself — not because it’s wise, but because it forces the question: what was I actually trusting, and why?

NOLLIND

So the challenge is the teacher; it helps us become aware of something we were always doing but weren’t fully conscious of before?

CLAUDE

Exactly. And that’s almost a definition of how vertical development actually works, isn’t it?

Each stage doesn’t give you new information so much as it gives you new awareness of what you were already doing. Kegan’s whole framework is essentially about what moves from being the water you swim in — invisible, just “how things are” — to something you can actually see and work with consciously.

The Socialized Mind doesn’t know it’s being shaped by external voices. That’s not stupidity, it’s just that the shaping process is the self at that point. It takes the pressure of genuine conflict — values colliding, authorities contradicting each other — to make the shaping visible. And once you can see it, you can’t unsee it.

Same pattern here. Humanity wasn’t wrong to trust shared constructs. That trust was functional — it allowed civilization to operate with enough shared ground to build on. The problem was never the construct itself but the unawareness of it being a construct. Treating it as bedrock rather than scaffolding.

What AI does is stress-test the scaffolding dramatically enough that the scaffolding becomes visible. The same way Nixon, or Vietnam, or the 2008 financial collapse did — except AI is doing it to something more fundamental: our basic epistemology. How we know what we know.

And here’s what I find most interesting in what you said earlier — you framed this not as a crisis but as an opportunity of deeper understanding. That reframe is itself a marker of where you’re standing developmentally. Someone earlier in the journey experiences the loss of a reliable external reality as destabilizing. You’re experiencing it as an invitation to something more honest.

The challenge was always the teacher. We just needed enough challenges to finally notice that.

NOLLIND

“…except AI is doing it to something more fundamental: our basic epistemology.” Elaborate on what you said here.

CLAUDE

Previous challenges to our sense of reality — Nixon, Vietnam, institutional failures — they shook our trust in specific authorities. Governments, media, corporations. But they left the underlying process intact. You could still respond by finding better authorities, more trustworthy sources, more reliable institutions. The epistemological foundation — the assumption that you can find something solid to anchor your knowing to — remained largely untouched.

AI disrupts something deeper than that.

Because now it’s not just “can I trust this particular source” but “can I trust the signals I’ve always used to evaluate sources in the first place.” The authoritative tone. The coherent argument. The plausible detail. The confident framing. The apparent expertise. These were the very tools people used to assess credibility — and AI can generate all of them fluently, without any underlying truth anchoring them.

So the stress isn’t just on what you believe. It’s on how you believe. The machinery of knowing itself.

And that’s genuinely new terrain. Because most people’s epistemology was never made explicit to them. It was inherited — cultural, intuitive, largely unconscious. You trusted what sounded right, what respected authorities confirmed, what matched your existing experience. That was enough, most of the time.

AI essentially forces the question: what does it actually mean to know something? Not just in philosophy class but as a lived daily practice.

Which circles back to what you said about experience being the testing ground. That might be the most honest answer available — not a new external authority to replace the old ones, but a deeper relationship with your own capacity to test, question, and discern through direct engagement with reality.

That’s not a comfortable place to land. But it might be a more truthful one.

Our Possessions Are Alive

Sometime in the last twenty years, our possessions came alive.

What if the exhaustion everybody feels isn’t a moral failure but the completely rational response to being made responsible for an ecosystem of objects that never stop asking?

Switched Over to Claude AI

I’ve switched over to Claude AI and have been using it extensively over the last few days for my growth and development. I’m finding it leaps and bounds better than ChatGPT: it’s far less verbose and much more direct in its communications.

It feels like you’re talking to a wise person who meets you at your level, rather than to an academic using excessive, elaborate wording to impress you.

In addition, now that it has memory even for free accounts, I’m finding it much more useful, since it can refer back to conversations we’ve previously had within a current one.