AI Tools Reveal A Deeper Societal Issue

AI Platforms like ChatGPT Are Easy to Use but Also Potentially Dangerous – Scientific American

As the last example illustrates, they are quite prone to hallucination, to saying things that sound plausible and authoritative but simply aren’t so. 

Because such systems contain literally no mechanisms for checking the truth of what they say, they can easily be automated to generate misinformation at unprecedented scale.

When I started using ChatGPT, I completely missed the fact that it can’t go out and read article links on the web. When I initially asked it to summarize articles by their links, it actually appeared to do so with some accuracy. Once I understood that it couldn’t read them, I realized what it was doing and created a fake article link…which it proceeded to summarize, because it was using the keywords in the link itself to imagine what the article was about.

By the way, I only realized that it couldn’t go out and read articles on the web when I asked it to provide three of the best articles on vertical development. What it provided was three article titles and links, each attributed to a notable author in the vertical development field. When I clicked on them, they went to the appropriate site (e.g. Harvard Business Review), but no such articles could be found. It was then I realized that not only was it making the articles and links up, it had also been making up the fact that it was reading the links I had asked it to read earlier.
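If you want to reproduce this fake-link probe yourself, here’s a minimal sketch in Python, assuming the official OpenAI client library and an API key; the model name and the invented URL are my own placeholders, and simply pasting a fake link into the chat window demonstrates the same thing.

```python
# A minimal sketch of the fake-link probe, assuming the official OpenAI
# Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keywords deliberately planted in a URL that points at no real article.
fake_url = "https://hbr.org/2023/01/vertical-development-and-leadership-growth"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "user", "content": f"Summarize this article for me: {fake_url}"},
    ],
)

# With no web access, the model can only riff on the words in the URL
# ("vertical development", "leadership", "growth"), so a confident
# summary here is a hallucination, not a reading of any page.
print(response.choices[0].message.content)
```

If the reply comes back as a confident summary rather than “I can’t access links,” you’ve caught the model imagining the article from the URL’s keywords alone.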

These bots cost almost nothing to operate, and so reduce the cost of generating disinformation to zero.

Nation-states and other bad actors that deliberately produce propaganda are unlikely to voluntarily put down these new arms. Instead, they are likely to use large language models as a new class of automatic weapons in their war on truth, attacking social media and crafting fake websites at a volume we have never seen before. For them, the hallucinations and occasional unreliability of large language models are not an obstacle, but a virtue.

While I’m enjoying using ChatGPT myself, something becomes evident the moment you use it: if you don’t comprehend the deeper meaning of what you’re asking it, all you’re doing is highlighting your ignorance rather than hiding it. To use it critically, you need to understand what it’s communicating, so that you can adjust your prompts more effectively and get it to respond more clearly and accurately.

For example, imagine people relying upon it so heavily for their work in the future that they begin to fear talking to real people about it, because it will quickly become apparent that they don’t understand the deeper meaning of what they do.

I think this is part of the problem with the world we live in right now, and it’s why tools like ChatGPT are exacerbating the misinformation issue. Most of us don’t understand things because we misperceive their meaning. Yet we like to bolster our egos and portray ourselves as knowledgeable “experts” on a subject, having perhaps read a snippet from an article or two about it, because doing so helps meet our base psychological needs.

No one wants to be ignorant, but most of us are in one way or another. Until we can get over this hump and let go of the facade, we won’t be able to truly collaborate on the serious issues before us or make any real headway. In effect, we can’t learn and grow if we don’t first accept that we don’t understand something and then begin to question it so we can learn more about it.

So this is about much more than people misperceiving knowledge; it’s about people misperceiving the information they use to live and navigate their daily lives. And what’s scary is that people in power are aware of this and are using it to their advantage.

All of this raises a critical question: what can society do about this new threat? Where the technology itself can no longer be stopped, I see four paths. None are easy, nor exclusive, but all are urgent.

Fourth, we are going to need to build a new kind of AI to fight what has been unleashed. Large language models are great at generating misinformation, because they know what language sounds like but have no direct grasp on reality—and they are poor at fighting misinformation. That means we need new tools. Large language models lack mechanisms for verifying truth, because they have no way to reason, or to validate what they do. We need to find new ways to integrate them with the tools of classical AI, such as databases, and webs of knowledge and reasoning.

The ending of this article completely misses the bigger picture here, though. It’s not about coding new AI to help us fight other AI, which only makes us more dependent upon it.

What we need to do is recode ourselves. We need to level up our consciousness so that we become more self-aware and more capable of dealing with complex issues. This is why helping people with their personal development through vertical development is, to me, the number one way to do this. It actually transforms and upgrades their perceptual interface with reality, helping them see past their previous misperceptions as the illusions that they are and navigate the ever-increasing complexities of life today in a whole new way.

By Nollind Whachell

Questing to translate Joseph Campbell’s Hero’s Journey into The Player’s Handbook for The Adventure of Your Life, thus making vertical (leadership) development an accessible, epic framework for everyone.
