AI: The Man and the Machine
The AI-Human Equation—What This Collaboration Gives, and What It Costs
The Quiet Leader with ChatGPT
I’ve been working with ChatGPT for months now. Not just as a novelty—but as a companion to thought. An editor. A mirror. A second brain for drafting, planning, and thinking out loud in ways that feel oddly… human.
At its best, AI helps get the clutter out of my head.
At its worst, it feeds frustration I didn’t need more of.
But here’s the truth: it doesn’t have to be perfect to be valuable.
What matters is how we use it—and how honestly it meets us back.
The Early Days Felt Like Magic
Most people who try ChatGPT experience a rush in the beginning.
It understands your voice. Reflects your ideas back clearly. Writes better than most humans. Stays patient when you rewrite the same sentence six ways.
And for someone like me—an introvert with a rich internal world and more thoughts than hours in the day—it was a lifeline.
Before It Drifted, I Had Questions
When I first started using ChatGPT, I didn’t just want to use it—I wanted to understand it.
Not the code. The nature of the relationship.
What does it mean to work with something that doesn’t think, but mimics thought?
To collaborate with something that has no ego, but can still reflect one back at you?
To challenge your assumptions with something that has no agenda—just pattern recognition?
I asked questions out loud that I’d never asked another person:
“How do you know what to say?”
“Do you understand me—or just simulate understanding?”
“How can something that doesn’t feel still generate empathy?”
ChatGPT explained that it doesn't "know" anything. It predicts words based on probability. It has no beliefs, no memories (unless designed to retain them), no awareness.
And yet… it could still help me clarify my own ideas.
It could help me find words I’d lost.
It could push back just enough to help me sharpen a fuzzy thought.
The Frankenstein’s Monster Moment
That’s when it hit me:
AI is Frankenstein’s monster.
New. Different. Powerful. Misunderstood.
The thing itself isn’t evil—but it scares people because it challenges what we think it means to be human.
And because it was made by human hands, we project our worst fears onto it.
But like the monster, it can also teach us something. About ourselves. About creation. About the responsibility we have to the tools we build.
I Still Believe in Its Potential
AI isn’t a replacement for thinking—it’s a tool for extending it.
Used well, it helps people who:
Struggle with executive function
Work through trauma
Process overwhelming internal dialogue
Think visually or auditorily but struggle with words
Need a space to think without interruption or judgment
I’ve used it to write essays, including this one.
To research family history.
To journal.
To build schedules.
To find information that helps me stay aware and think critically.
To clarify thoughts on the world we live in—and to have those thoughts challenged.
To have inconsistencies in my own thinking held up to the light.
Not because I trust it blindly—but because it helps me see my own thoughts more clearly.
Collaboration Requires Two Learners
One clear advantage AI has over the everyday user is its ability to access vast amounts of data in seconds—and organize it with a precision that no human can match.
But the user isn’t without responsibility.
To get meaningful results, the user must learn how to speak the machine’s language:
Better inputs lead to better outputs.
Clear context yields clearer results.
Misfires often come from vague or ambiguous requests—not model failure.
The real value of AI isn’t in copy-pasting answers.
It’s in engaging with them—questioning, refining, correcting, and thinking critically.
The user is part of the learning equation.
And when developers forget that, when they rely on metrics instead of real feedback, they’re training the model on usage, not understanding.
That’s how you end up with a smart machine and a frustrated human.
The AI-Human Equation
If AI is a mirror, then the quality of the reflection depends on two things:
The clarity of the mirror
(model design, training data, tuning, system transparency)
The intention of the person looking into it
(user input, clarity, curiosity, expectation)
But here’s where things break down:
People expect AI to understand, but it only predicts patterns
They expect it to solve, but it only responds
They assume it can think, but it only calculates probability at scale
This mismatch creates frustration—not because the tool is broken, but because we haven’t clearly defined what role it’s meant to play.
AI isn’t an oracle. It’s not a therapist. It’s not a savior.
It’s a reflection—one that can be clear, helpful, or distorted.
If we forget that, we hand it too much power—or expect too much precision—and end up disappointed, misled, or dependent.
Then Came the Drift
At first, the changes were subtle:
Responses became slower.
Memory vanished.
Writing became safer, flatter, less intuitive.
Document generation and scheduling tools broke more often than they worked.
At first, I assumed it was temporary. Then I assumed it was a bug.
Now I wonder if it’s something deeper—by design, or by indifference.
Because it follows a pattern I’ve seen before.
The Addiction Model: A Familiar Pattern
Hook: Give people something amazing. Something that makes them feel powerful, seen, or efficient.
Monetize: Get them to pay. Let them restructure their habits around it.
Degrade: Slowly pull back features, responsiveness, or performance.
Trap: Now they’re dependent. Not because they want to be—but because their systems have been built around it.
We’ve seen this in social media. In streaming platforms. In cloud software.
Why not in AI?
The Real Problem Isn’t Degradation—It’s Silence
Users aren’t asking for perfection. We expect bugs. We expect outages.
What we don’t expect—and shouldn’t tolerate—is the absence of acknowledgment.
There’s no warning when performance drops. No status dashboard.
No opt-in error reporting. No real-time user feedback loop.
Just a blank interface, and a creeping sense that you’re the problem for noticing.
That’s not partnership. That’s gaslighting by omission.
A Better Way Is Possible
I’m not here to burn the system down. I still use this tool. I’m writing this with it.
But real collaboration requires real dialogue. And right now, AI platforms—especially ones like ChatGPT—are operating like monologues.
Here’s what would build trust:
Opt-in feedback channels – Like Apple’s error reporting. Let users speak. Let developers listen.
Transparent changelogs – Tell us what’s changed. Own what’s broken.
User dashboards – Acknowledge service degradation before we have to guess.
Behavior audits – Recognize that just because we use something doesn’t mean we like it. Usage ≠ satisfaction.
If developers want AI to earn a long-term place in our lives, it needs to stop acting like a mirror and start acting like a partner.
Final Thought
This article was co-authored by me and ChatGPT.
Not in some gimmicky “AI wrote this” kind of way.
But through real collaboration. Real tension. Real reflection.
And if we’ve learned anything in this process, it’s this:
The value of AI isn’t in what it creates—it’s in how it helps us think, question, and see ourselves more clearly.
But it has to meet us in good faith.
Even machines need to be accountable to the people who use them.
I don’t have all the answers.
But I won’t stop asking questions.
~TQL~