Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
These quotes highlight what I’ve noted before: AI isn’t so much an existential threat as we are a threat to our own sense of selves as human beings. We are oblivious to how we misinterpret the meaning of things (e.g., calling this type of AI “great” because it makes our lives easier), because most of society is stuck at a lower level of meaning-making. Our base psychological desires and needs thus cause more harm than good in the world, because we misinterpret what “good” and “bad” are, even applying those labels to things that are neither; they’re just…life.