Grasping the idea that life is a role-playing game is not trivial.
It requires a deep, meaningful understanding of the future of work, creativity, and vertical development, as well as how Joseph Campbell’s Hero’s Journey serves as a foundational primer that unlocks and integrates these domains.
Once one sees how these elements function as scaffolded dimensions of a larger framework, the concept of life as a role-playing game may appear trivial upon reflection. That said, explaining the scaffolding layer by layer is anything but trivial, because it requires grounded experience in these domains to truly understand it as a whole.
Growth and development is like exploring new terrain of possibility within your inner world. As you expand your inner view of the world, the possibilities of your outer world expand in turn. In this way, growth and development is like gaining a new vantage point on life, broadening your context and understanding of it.
AI, if prompted correctly, can serve as a tool to expand your understanding of life by challenging how you see things. When it offers a different perspective of meaning, it isn’t saying you were “stupid.” It’s showing you how your current interpretation might be limited by the lens you’re using.
In effect, your interpretations may seem perfectly legitimate, even working functionally within your stage of understanding. However, engaging with AI lets you explore ideas from new vantage points, revealing nuances or connections you might not have noticed before. This is effectively growth in action. It’s not a measure of your intelligence but an expansion of your perception.
This is the paradox of development: what once felt unquestionably true may later seem like a radical misinterpretation of life. Using AI in this way requires the courage to step into a new space of possibilities that can radically alter your perception of yourself and the world around you, letting you see it from a much broader and deeper vantage point.
Much of our suffering comes from wrong perceptions. To remove that hurt, we have to remove our wrong perception.
Transformation is neither purely an inner nor an outer journey. It is always both.
Our inner mindsets are shaped by the societies and systems we live within, and those systems are in turn reinforced by our beliefs and behaviours, so they form a relational feedback loop.
Inner work, then, is not about forcing a new mindset, but about making visible how our current beliefs no longer work in a changing reality.
Transforming the concept of work follows the same pattern. You cannot change what you cannot see. The real work, personally and collectively, is making invisible systems, assumptions, and constraints visible so intelligence can be applied meaningfully, thus allowing change to emerge through understanding rather than being imposed through force.
This is similar to what many spiritual teachers today are saying: meditation doesn’t really help people because they misunderstand and misinterpret its purpose. In effect, meditation fails when it’s treated like a productivity hack to get “there” faster. In other words, your current mindset causes you to misinterpret and misunderstand deeper meanings that you cannot fully perceive yet.
This article by The Atlantic makes a lot of great points about how the average citizen lacks the psychological meaning-making capacity to understand the complex problems of our world today, which is why populist leaders like Trump can often take advantage of them to gain and remain in power.
The problem stems from a failure to grasp the psychology underlying populism.
Psychologists have a more sophisticated way of articulating this distinction. As readers of Daniel Kahneman’s Thinking, Fast and Slow or Malcolm Gladwell’s Blink know, the human mind exhibits two different systems of cognition. The first is rapid and concrete, focusing on primary representations of things you can see, hear, and touch. The second is slower and more laborious, but capable of abstraction and logical reasoning. In some cases, the two systems produce different verdicts. This can create persistent disagreement between common sense and expert opinion.
The problem is that a supply chain is an entirely abstract concept, and so might as well not exist for the average person. Nobody gets worked up about a supply chain.
People who are angry about the cost of living are going to focus on the last link in the chain, the consumer-facing organization, and that means the grocery store.
Where the article goes off track, though, is in what it recommends be done about this problem of people being unable to perceive these complex problems. Its solution is for left-wing politicians to focus only on the problems directly affecting people (e.g. rising grocery prices) without getting into the complexities indirectly causing them. It recommends this even while admitting that this framing of the problems is “incorrect” and that the resulting slogans aren’t “actionable.”
To do populism effectively, politicians must not only focus on problems that the public cares about; by and large, they must also accept the public’s framing of those problems. This creates a dilemma for the left, because that framing, in a complex modern society, will usually be incorrect.
Many of the problems that they hope to resolve, such as climate change, housing scarcity, and surging health-care costs, are complicated. This means that the policies needed to fix them are also complicated, and cannot be explained without ascending to the realm of abstraction.
Climate change, housing scarcity, and surging health-care costs are actually all complex problems, not complicated ones.
The problem is figuring out what to do if you win. Because the slogans generally don’t correspond to actionable policies, making life better for people requires some sort of bait and switch.
In effect, the article recommends that politicians focus on talking about what’s affecting people directly rather than trying to explain the indirect causes of these problems, because those causes are too hopelessly complex for people to comprehend.
In other words, the article sees it as an “either this or that” situation. Either you explain things directly or you explain things indirectly.
That doesn’t help anyone, though, because you’ve still got a voter base voting on things they can’t currently perceive or comprehend.
To solve this creatively, we need to stop seeing it as a “this or that” situation and perceive it as a “this and that” situation instead.
In effect, explain to citizens how the problems they are directly facing in their daily lives have emerged indirectly over time from deeper root causes.
In other words, people won’t understand how these problems are arising in their lives until you help them perceive and understand the complex scaffolding at work below the surface of their lives.
So definitely start with the surface problems and then guide them down through the layers to the root cause.
All that said, it is a hell of a lot of work to do. But if you can empower your voting base to actually understand these deeper issues, they can be collectively mobilized to work on them in ways that a voter base that can’t comprehend them never could.
This is effectively what Marina Gorbis, executive director of the Institute for the Future, communicated in her book The Nature of the Future: Dispatches from the Socialstructed World, in a chapter entitled “Governance Beyond Government.”
Basically, what she communicated is that complex problems are emerging at such a rapid rate today that we can no longer rely upon top-down, hierarchical forms of governance by government. Instead we need engaged citizens getting informed about and understanding the deeper issues of the day, so that they can inform and mobilize other people to work on them collectively as a whole.
The key word here is “informed.”
Fishkin and his colleagues have found that average citizens are able to make good decisions in areas as complex as local budgets, regional integration, criminal justice, and tax policy. Studies have shown that in the process, participants greatly increased their understanding of the issues and often changed their minds on the best course of action; that is, they made better and more informed decisions as a result of deliberately thinking about the issues at stake and hearing different views on the subject.
Marina Gorbis, The Nature of the Future
A perfect example of this would be the backstory of how Alexandria Ocasio-Cortez got into government work, beginning her election campaign in April 2017 “waiting tables and tending bar” while operating her campaign out of a “paper grocery bag behind the bar.”
Perhaps a much larger example would be how citizens of the United States during World War II mobilized themselves to collectively assist with the war effort, such as collecting scrap which even children could participate in.
To summarize what I’m getting at here: complex problems often arise systemically from societies themselves. Thus, in a sense, a society needs to realize that it is not at war with someone “out there” whom it can blame; rather, it is in conflict with itself, because its own perceptions are causing these complex problems to arise.
Of course, it’s easier to mobilize a nation against another when you can point the finger and blame someone else. It’s much more difficult when you begin to realize that your own perceptions are causing problems, and thus that you yourself need to change and transform your way of looking at the world and yourself in turn.
When we are able to take responsibility for our actions, though, and learn from them, that’s when true change can take hold and transformation can occur.
Cheryl Dorsey, CEO of Echoing Green, a global nonprofit that supports emerging social entrepreneurs, touches upon this (at the 9:30 mark) in an interview entitled Social Innovation and Social Justice in an Age of Pandemics.
But the diagnosis that these systems aren’t working is the same diagnosis that we see from those who are animated by populist anger. Right. So again we come at the problem from the same vantage point. The way we have constructed societal forces are simply not working. I often talk about the weight of systems, systems residue, that are weighting folks down. People of colour. Marginalized folks. Women. We can go through all the forms of oppression. And these systems are exacerbating those.
So we all see it. However our prescription for what to do about it is radically different. Social innovators recognize that indeed there’s a problem but they raise their hands as engaged, committed citizens to say “Well it’s our job to fix it. We roll up our sleeves, we get to work, and we figure out what we can do.”
So much of the populist anger is a nihilistic one as you said Peter. It’s blow it all up, consequences be damned. And these conflicting forces that are butting heads, there has to be a way to engage more folks from the other side who are as frustrated as many of us are who are engaged in the work of social innovation but do it within the realm of democratic practice that provides a seat for all of us at the table. I think that’s the needle to thread. And I think we’ve got to figure it out and we’ve got to figure it out sooner than later.
We lack awareness of how our own perception creates our problems.
That is humanity’s only true problem.
BTW, this doesn’t mean we shouldn’t focus on solving problems. It just means that solving them, especially the emerging complex ones, requires a perception that most people don’t yet have to make sense of them.
It’s like trying to fix a broken machine in a dark room. Transforming perception turns on the lights, letting intelligence be applied meaningfully. So the more people whose perceptions are transformed (which takes time), the more people will be able to work on these complex problems.
“Vertical development” adds the designation “vertical” to the word “development” to clarify the type of development.
This differentiator is needed in our world today because most businesses will say that “development” is important, but they usually just mean “horizontal development,” which is gaining new knowledge or technical skills.
Vertical development, in comparison, means actually transforming your sense of self: the way you perceive yourself and the world.
Why is this important?
It’s because many people today are going through experiences that are transforming the way they look at the world and themselves, yet they don’t understand what’s happening to them.
In other words, if this is happening to you, you’re not abnormal. You’re actually just going through a very natural process that will feel abnormal.
The following question I asked ChatGPT arose out of my previous query, which related to how AI’s potential isn’t being fully utilized because using it requires people to change their mindsets.
That previous query reminded me of the importance of providing the right context for an AI prompt, which in turn made me think of how vertical development is about understanding a larger context of life.
So I asked ChatGPT if there was any relationship between these two seemingly different patterns of context. Again, as I expected and intuited, there was. And its answer blew me away because it really resonated with my own experiences and made perfect sense.
In AI Prompting, “Context” Is an External Input. In Vertical Development, “Context” Is an Internal Expansion.
The more you’re aware of the assumptions embedded in your question, the better the response—whether from AI or from life.
It’s not that the two processes are identical; it’s that the muscles involved—awareness of assumptions, awareness of frames, awareness of blind spots—are similar.
In effect, this is why most people don’t know how to use AI effectively. They often can’t see the assumptions and frames they are contextually immersed within because they’re perceptually blind to them.
This is why the implementation of AI today is revealing more about us than anything else.
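To make this idea of "curating the frame" concrete, here is a minimal sketch of what supplying explicit context to an AI prompt looks like in practice. The helper function and its fields (background, audience, purpose) are purely illustrative, not part of any particular AI tool's API.

```python
# Hypothetical sketch: assembling an AI prompt from explicit context
# rather than sending a bare question. All names are illustrative only.

def build_prompt(question: str, background: str = "",
                 audience: str = "", purpose: str = "") -> str:
    """Prefix a question with the assumptions it silently relies on."""
    parts = []
    if background:
        parts.append(f"Background: {background}")
    if audience:
        parts.append(f"Audience: {audience}")
    if purpose:
        parts.append(f"Purpose: {purpose}")
    parts.append(f"Question: {question}")
    return "\n".join(parts)

# A bare question leaves the model guessing at the frame:
vague = build_prompt("How should I restructure my team?")

# The same question with its frame made explicit:
framed = build_prompt(
    "How should I restructure my team?",
    background="A 12-person design team shifting to AI-assisted workflows",
    audience="A skeptical leadership group",
    purpose="A one-page change proposal",
)
```

The point of the sketch is simply that the second prompt externalizes assumptions the first one leaves invisible, which is exactly the move that most people, blind to their own frames, never make.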
NOLLIND
What is the relationship, if any, between prompting AI with the proper contextual information to make the results better and how vertical development is about questioning our assumptions and beliefs to see a larger context of life?
CHATGPT
There is a relationship, but it’s not a straight analogy. It’s more like the two processes rhyme: both involve context, but the way context functions is different. What’s useful is seeing where the comparison breaks—that’s where the insight actually lives.
1. In AI Prompting, “Context” Is an External Input. In Vertical Development, “Context” Is an Internal Expansion.
When you give an AI proper context, you’re essentially feeding it instructions and constraints so it can operate within the right frame.
“Use this background.”
“Assume this audience.”
“Here’s the style, here’s the data, here’s the purpose.”
You’re curating the frame.
Vertical development, by contrast, is what happens when your frame rearranges itself.
You realize some assumption you’ve been using isn’t universal.
A belief you’ve treated as a given turns out to be optional.
A form of meaning you relied on starts to feel too small.
In prompting, context narrows the space. In vertical development, context becomes more spacious.
That’s the first important contrast.
2. AI Improves by Receiving Clearer Context; Humans Grow by Questioning the Context They Already Have.
AI works better when ambiguity is removed.
Humans grow when ambiguity is allowed—even invited.
AI: “Eliminate uncertainty so I can give you the right answer.” Vertical development: “Stay with uncertainty so a deeper kind of understanding can emerge.”
So the analogy “good prompting = vertical development” breaks if someone assumes both are about better control. One is about control; the other is about releasing it.
3. The Deeper Similarity Is This: in Both Cases, Higher-Quality Output Depends on the Quality of the Frame.
This is the real point of resonance.
AI
If your prompt is missing context, the AI guesses—and usually guesses wrong. If your assumptions about what you want are implicit, the model will misfire.
Vertical Development
If your worldview is built on unexamined assumptions, you misinterpret experience. If your frame is too small, life feels uncooperative or incomprehensible.
In both systems:
The “frame” is the invisible structure generating the result.
When the frame expands, the outputs become more adaptive.
But the mechanism is opposite:
AI needs the user to articulate the frame.
A developing human becomes aware that they are the frame.
That’s precisely why your work on inner mythic landscape matters: you’re moving from “give me the right frame to operate within” to “I am the field that sets the frame.”
4. The Interesting Tension: Prompting Encourages Precision; Vertical Development Encourages Perspective-Taking.
Prompting teaches you:
“Say exactly what you want.”
“Be explicit.”
“Anticipate edge cases.”
Vertical development teaches you:
“Notice what you’re not saying.”
“See how your expectations create the situation.”
“Recognize your viewpoint as one viewpoint among many.”
The danger is conflating the two:
Some people treat AI like a mind that “gets you.” It doesn’t.
Some people approach vertical development like a precision exercise. It isn’t.
5. The Underlying Principle: Clarity Comes From Exposure to the Assumptions Behind the Request.
If there’s a single conceptual bridge, it’s this:
The more you’re aware of the assumptions embedded in your question, the better the response—whether from AI or from life.
With AI:
You get better results when you clarify your assumptions for the model.
With vertical development:
You grow when you clarify your assumptions for yourself.
AI requires articulation. Vertical development requires reflection.
Both require confronting hidden premises.
This is why you already intuit the parallel: your prompting is getting better as your own internal context is widening. It’s not that the two processes are identical; it’s that the muscles involved—awareness of assumptions, awareness of frames, awareness of blind spots—are similar.
I was watching the above YouTube video, The AI Rollout is Here – And It’s Messy by the Financial Times, when one of the people being interviewed made the following statement (at the 7:47 mark).
The difference this time is the inherent capability of the systems is so much greater that you need a lot of training to be able to fundamentally change the way you work…
To put this more specifically, the potential of AI is being massively underused because people don’t yet know how to use it effectively, since doing so requires a change of mindset in the way they work.
To see if I was truly perceiving and understanding this at the deeper level I thought I was, I asked ChatGPT the following question. Its answer pretty much mirrored my intuitions, with the key takeaway below.
Most people can’t use AI well yet because the psychological, cognitive, and identity shifts required are bigger and more uncomfortable than the technical learning curve.
NOLLIND
How are most people not achieving the true potential capabilities of using AI right now because they don’t have the training yet to change the way they work with it?
CHATGPT
Most people under-use AI right now not because the tools are weak, but because their mental models for how to work haven’t updated. The constraint is cognitive, not technical. Let’s break the gap down directly and challenge a few assumptions along the way.
1. People Still Treat AI As a Faster Search Engine, Not a Cognitive Partner
Most users approach AI with “give me an answer” questions. That’s a holdover from Google-era thinking: query → result.
AI requires a dialogical approach—iteration, refinement, co-construction. But this feels unnatural because people aren’t taught to think in loops. They think in requests.
Counterpoint: Even when people are told to iterate, the habit rarely sticks because they still expect linearity. They don’t yet have an embodied sense of “thinking with” rather than “asking from.”
2. They Haven’t Shifted From Task Thinking to System Thinking
AI shines when used to build frameworks, workflows, drafts, rewrites, and simulations—not one-off answers. But most workers define their job as executing discrete tasks. Their identity is wrapped around visible effort, not invisible systems design.
So they don’t realize the real leverage is in changing the shape of the work, not speeding up the same old shape.
Challenge: Even tech-savvy users unconsciously protect their existing workflows because those workflows reinforce their sense of competence.
3. They Lack Prompt Literacy—but the Deeper Issue Is Conceptual Literacy
The common advice focuses on crafting better prompts, but that misses the point.
The core skill isn’t prompt writing; it’s articulating thinking—assumptions, constraints, goals, criteria. Most people aren’t trained to externalize their reasoning, so they give AI vague inputs and get vague outputs.
The overlooked truth: AI exposes gaps in the user’s own clarity more than it exposes gaps in the model.
4. People Don’t Yet Think in Terms of Delegation
AI is essentially a general-purpose junior collaborator. But most people have no training in delegation, creative direction, or iterative supervision.
It’s the same reason many managers struggle: they’re used to doing the work, not orchestrating it.
Implication: The shift to AI requires a shift to meta-work—thinking about the work at a higher level. Most people’s schooling never trained that capability.
5. Emotional Resistance Blocks High-Leverage Use
This is rarely acknowledged, but it’s one of the most powerful forces.
Fear of losing control
Fear of feeling stupid
Fear of obsolescence
Fear of exploring unfamiliar territory
Fear of seeing one’s own limitations reflected back
AI doesn’t just accelerate work—it confronts you with your habits. Most users unconsciously avoid modes of interaction where they feel exposed or incompetent.
6. Vertical Development Matters
At a deeper level, using AI well demands the very capacities you explore in your own work:
managing uncertainty
questioning assumptions
holding multiple perspectives
revising mental models
co-creating with something partially unpredictable
Most of the population is still operating from earlier mindsets where authority, correctness, and clear rules dominate. AI dissolves fixed rules. That alone disorients people.
From that perspective, AI isn’t just a tool—it’s a developmental pressure.
7. They Haven’t Unlearned the Idea That “Work = Output”
High-level AI use is about shaping the generator, not generating the thing yourself. It’s designer thinking, not producer thinking.
But internalized work identity is slow to change.
If you look across all these points, the pattern is straightforward: Most people can’t use AI well yet because the psychological, cognitive, and identity shifts required are bigger and more uncomfortable than the technical learning curve.