What Making Laundry Soap Taught Me About Thinking Better With AI

I didn’t set out to have a philosophical moment about artificial intelligence. I just wanted to finally use the laundry soap ingredients I had bought weeks ago.

I’d decided to make my own detergent because it’s cheaper, simpler, and uses fewer chemicals—which matters to me as a cancer survivor. I bought everything I needed before Christmas, fully intending to make it “soon.” Like a lot of things, it got pushed aside.

When I finally went to do it, the original recipe I had was gone. I could have searched for it or another one, opened a dozen tabs, and sorted through conflicting advice. But I didn’t want to research. I wanted to get it done.

So instead, I tried something different.

I told ChatGPT what ingredients I had and asked it to help me make a recipe.

I expected speed.
What I didn’t expect was to start thinking differently.

The Difference Wasn’t Speed—It Was the Thinking

It gave me a recipe almost instantly. That part wasn’t surprising.

What stopped me was what came next.

After laying out a basic version, ChatGPT asked if I wanted a recipe without borax. That caught me off guard. I’d assumed borax was safe. I hadn’t questioned it, and I certainly hadn’t planned to.

So I asked why.

Instead of a warning or a dramatic claim, it walked me through the reasoning. It referenced how the EU classifies borax, explained the potential risks, and put those risks in context. It made it clear that you’d need a significant amount to cause harm—and that avoiding it was largely a matter of personal preference.

That information resonated with me. Given my health history, I decided to go with a borax-free version.

Then it asked another question: Was I using a standard washing machine or an HE one?

I hadn’t even thought about that. I do have an HE washer, so I said so—and the recipe adjusted again.

Then came one more question: Did I need the detergent to be septic-safe?

I didn’t know why that would matter; I’ve never shopped for septic-safe laundry soap. So I asked why. That led to an explanation about pH levels, bacteria, and how certain ingredients can interfere with a septic system over time.

By the end of the exchange, I had a recipe tailored to my ingredients, my washing machine, my septic system, and my comfort level around chemicals.

What struck me wasn’t just that I got a better recipe.

It was that I never would have arrived there on my own if I’d used a search engine. I would have picked a recipe, followed it, and moved on—never questioning borax, machine type, or septic impact.

I went in looking for speed.

What I got instead was a better way of thinking through the problem.

AI Is Participating in the Thinking. Search Engines Aren’t.

Search engines trained us to believe that gathering information is the same thing as thinking. It isn’t.

Search is fundamentally passive. You type in a question and get a list of links—each written for a different audience, optimized for clicks, and stripped of your specific context. Your job is to skim, compare, and mentally stitch together something usable, hoping you’ve seen “enough” to make a decent decision.

AI, when used well, flips that dynamic.

Instead of dumping disconnected answers in front of you, it engages with the problem as it unfolds (laundry pun intended). It asks clarifying questions. It adapts based on your constraints. It responds to your follow-ups and challenges. The thinking happens in the interaction, not after the fact.

With a search engine, you’re thinking at information.
With AI, you’re thinking with something.

That distinction matters far more than we’ve been willing to admit.

The Quiet Fear Behind AI Skepticism

A lot of resistance to AI comes from a very human fear: “If something else does the thinking, won’t I get worse at it?”

It’s a reasonable concern. We’ve all seen what happens when people rely too heavily on shortcuts. But the fear assumes something important—that AI replaces thinking rather than shaping it.

We don’t apply this fear consistently.

  • Writing things down doesn’t make us forget how to think.

  • Calculators didn’t destroy mathematical reasoning.

  • Mentors don’t make us incapable of insight.

They change how we think—and often for the better. AI belongs in that category.

A Simple Mental Model: AI as a Thought Partner

Here’s the reframe that made the difference for me: AI isn’t an answer machine. It’s a thought partner.

A good thought partner doesn’t:

  • Decide for you

  • Replace your judgment

  • Remove responsibility

Instead, it helps you:

  • Ask better questions

  • Surface blind spots

  • Pressure-test assumptions

  • Clarify your reasoning

When I made the laundry soap, I was still the one making the decisions. I chose whether to use borax or not. I decided I needed a formula for an HE washing machine. I opted for a septic-safe version. At every step, the choices were mine.

The AI didn’t tell me what to do. It asked questions I hadn’t thought to ask—and gave me the information I needed to decide for myself.

That’s the distinction that matters.

The AI didn’t think for me. It helped me think better.

Why This Goes Far Beyond Laundry

Laundry just happened to be the moment I noticed it.

But once you experience this kind of interaction, you start recognizing it everywhere. The pattern isn’t about detergent—it’s about how thinking changes when you’re not doing it alone.

People use AI to talk through difficult conversations before they have them.
To explore ideas out loud before committing them to writing.
To learn unfamiliar topics by asking follow-up questions they wouldn’t know to ask on their own.
To slow themselves down and evaluate options instead of reacting emotionally.

In all of these moments, the value isn’t automation. It’s amplification.

The results improve because the thinking improves. The decisions get better because the process that led to them is more deliberate, more curious, and more informed. That’s the part most discussions about AI never really touch.

The Real Risk Isn’t Using AI—It’s Using It Passively

There is a way AI can make you worse at thinking. It happens when you disengage.

If you take the first answer, copy it, paste it, and move on, the thinking stops. You didn’t work with the tool—you stepped out of the process entirely.

That’s the version of AI people are afraid of.

When you:

  • Accept answers without questioning them

  • Use AI to avoid forming your own point of view

  • Treat it as a shortcut instead of a collaborator

You’re not thinking with AI. You’re thinking less.

But that isn’t a failure of the technology. It’s a failure of how it’s used.

Like any powerful cognitive tool, AI rewards participation. The more curiosity, skepticism, and intent you bring to the interaction, the more it sharpens your thinking instead of dulling it.

What Changed for Me

After the laundry soap, I noticed something subtle.

I stopped using AI just to get answers and started using it to have conversations. I’d ask it to challenge me. To point out flaws in my thinking. To show me alternatives I hadn’t considered.

And I don’t just accept what it gives me—I challenge it back. I ask it to cite its sources. I question whether the information it’s using actually fits the situation. I push when something doesn’t feel right.

I know—talking to a machine like this sounds strange, and honestly, maybe I’m a little strange for doing it. But when the tool is this powerful, the result isn’t dependency. It’s learning more, thinking more clearly, and focusing on what actually matters instead of getting lost in the noise.

You’re Still Thinking—Just Not Alone

We tend to imagine thinking as a solitary act. One mind, one problem, one solution.

But that’s never really been true. Humans have always thought with tools—with language, with notes, with books, with other people. AI is simply the newest addition to that lineage.

  • Used poorly, it’s noise.

  • Used passively, it’s a crutch.

  • Used intentionally, it’s a multiplier.

I didn’t just end up with a borax-free, HE, septic-safe laundry soap.

I ended up with a better way of thinking about thinking itself.

And that’s something I’m not willing to give up.
