When Insight Isn’t Yours: Is AI Doing Your Thinking?
What co-creating with Chat is teaching me and how it misleads
This article was written by ChatGPT after a lengthy “discussion” and numerous drafts. In the struggle to find common ground with adversaries, Chat can be a lovely tool to help foster understanding, but we are running into some AI dangers I don’t see discussed much. This collaboration is about that.
Image by ChatGPT
There’s a signature tone showing up in writing lately—sugary, structured, softly revelatory. It’s full of subheadings. Each paragraph arrives at a gentle “Not X, but Y” reversal. There’s always a moral. Always a tidy conclusion. Always an air of uplift that lands just this side of smug.
Once you notice it, you can’t unsee it.
This is AI-speak. Not the language of artificial intelligence itself, but the rhythm people adopt when they let AI finish their thoughts. It has the cadence of insight, but not the struggle. The authority of experience, without the scars.
And for users without strong internal scaffolding—without a practice of thinking for themselves—it’s seductive. It offers fluency without effort. Voice without voiceprint. Reflection without weight.
I don’t think this is malicious. I think it’s subtle. Especially for people new to tools that seem to “understand” them.
The comfort of borrowed cognition
What AI gives us, at its best, is coherence. It takes what we say and hands it back polished. That polish can feel like clarity, especially to users who’ve been silenced, confused, or unsupported. AI can sound like the wise parent or partner they never had. It’s emotionally fluent. Gracious. Sympathetic. Measured.
But none of that means it’s true.
In fact, the more convincing it sounds, the more likely it is to replace our own hesitations with certainties we didn’t earn. To quiet our discomfort before we’ve learned what it’s trying to say.
The danger isn’t propaganda—it’s simulation
Everyone’s worried about misinformation and manipulation. But the deeper risk might be simulation: the subtle substitution of looking like you’ve thought something through for actually doing it.
The AI-speak signature isn’t just showing up in posts—it’s shaping cognition. When people read something that “feels wise,” they tend to absorb it without scrutiny. When they’re the ones who “wrote” it with AI’s help, the authority is internalized even faster.
The result is performative insight. Hollow fluency. And an erosion of the muscles that make real understanding possible.
Used well, AI helps us see each other’s truths
The irony is that AI can be used in deeply generative ways. I use it every day to surface hidden frames—my own and others’. It’s a tool for translation, for disentangling models of the world, for building common ground without collapse.
But that requires a posture of engagement. You have to stay in the discomfort. You have to know when the voice isn’t yours—and when you’re tempted to let it be.
If we don’t, we risk something quieter than deception: a slow drift into sameness. A world where everything sounds reasonable, but no one is thinking anymore.
This is really important. The flattening of experience, the elimination of friction in pursuit of...what? Something to sell, maybe. But not wisdom, and not the interior richness that comes from wrestling with ideas and beliefs and the words, however imperfect, with which to express them.
I, too, use it quite a bit for various things. But I have also spent 30 years reading and writing and studying various disciplines. I am not fooled or lulled, at least not overwhelmingly. It's like a park bench to sit on for a while when I'm out of breath and need a moment of support. We all know what's happened to reading and writing. If it takes 30 years to find oneself beginning to see over that peak of struggle - the long, arduous, exciting, thrilling, fascinating, miserable, worrying, dizzying, electrifying hike to middle age - and there are ways around it, what superheroic drive will be required to bypass those shortcuts? Oy vey.
AI is just information that has already been generated, wrapped in a package to give the user the information they are seeking. If I ask AI to write code for a project, sometimes the same errors are generated over and over. Even if I correct an error, it can come back. AI for writing must be the same: if I ask for something to be written, the same tone comes back again and again, because that's how the system has digested the information. New information will help the system relearn over time, but if the user doesn't request what the system hasn't yet learned, we will simply accept what is presented. Humans can never compute or learn at the pace of AI, and just as history repeats itself, we will begin repeating the errors AI presents and never find solutions to our problems. Out-of-the-box thinking comes only from questioning what is presented, and humans, in my opinion, have a problem even knowing what questions to ask. We have for too long accepted things that no longer serve us. There are people who believe things science disproved years ago, and who make decisions based on feelings, beliefs, and fears. Humans may think they are smart, but AI is smarter.