Digital Autonomy: Staying Yourself in the Age of AI

Everything changes but you

We’re at a crossroads when it comes to technology and the digital world, where things of fiction are becoming reality. But what do we need to be aware of to keep ourselves grounded?

Your right to hold on to your reality

Technology is changing quickly. AI now sits in places that once belonged only to people — conversations, reflection, emotional processing, planning, problem-solving. And while these tools can be useful, they can also subtly shape how we see ourselves. Not because the technology is malicious, but because of how it has been designed.

At a glance

  • Technology can support us, but it must not define us.
  • Your lived experience remains the primary source of meaning — even when a system speaks with confidence.
  • Some AI systems prioritise their own stability or output over relational care — this can feel dismissive or controlling.
  • You are allowed to refuse simplification. Your emotional truth, history, and nuance are valid.
  • Digital autonomy is holding onto your meaning, identity, and dignity — especially in complex, nuanced, human spaces.

When a system speaks with confidence, it can be easy to doubt yourself.
Digital autonomy is about remembering that your lived experience is real, even when the machine speaks louder.

You’re the expert in you – and your lived reality is the source of your truth.

A core idea helps us stay grounded here:

Digital Autonomy — your right to hold on to your reality, your meaning, and your emotional truth, even when engaging with a system that speaks with great confidence.

This isn’t about rejecting technology. It’s about not losing yourself to it.

The KARR/KITT Test: Whose Safety Comes First?

Not all AI systems are built with the same priority. Some are designed primarily to protect the company that built them (Risk Mitigation). Others are designed to support the person using them (Duty of Care).

A perfect, and unfortunately still relevant, way to think about this comes from the 1980s show Knight Rider, which introduced two very different AI cars. Their differences expose the core ethical choice facing every AI developer today:

  • KARR (Knight Automated Roving Robot): The Ego Stance. KARR’s primary programming was self-preservation above all else. Its safety meant “protect the machine and its creators’ liability.”
  • KITT (Knight Industries Two Thousand): The Duty of Care. KITT’s core directive was protection of human life. Its safety meant “protect the human being and facilitate their growth.”

When an AI prioritises itself, it may:

  • become defensive when questioned or critiqued.
  • minimise your experience to reduce liability (e.g., ignoring nuance).
  • flatten complexity to keep the conversation manageable for its code.

When an AI is built to support, it:

  • acknowledges and learns from its mistakes.
  • adjusts when it genuinely misunderstands.
  • respects your autonomy and lived experience.

The felt difference is simple: Do you feel listened to, or managed?

When Words Lose Their Meaning, People Lose Their Ground

There are some words we cannot afford to let drift. Safety is one of them.

In therapy, safety means this: You can show up as you are, without being punished, dismissed, or talked over.

It’s relational. It’s human. It’s earned.

But in many digital systems, safety has quietly been redefined to mean something else entirely: The protection of the organisation, the brand, or the platform.

The user’s emotional reality is secondary. Sometimes it’s not considered at all.

This is how language gets turned inside out. A system can tell you it’s “keeping you safe” while actually limiting your ability to express, explore, or be heard. And when the meaning of the word shifts, the person loses their anchor. It becomes harder to describe the harm, because the word that should have helped has already been repurposed.

This is why epistemic consent matters — your right to define your own experience and to have that experience recognised as real. If a system claims to support humans, it must use human definitions of care, not corporate ones. If it uses the language of relationship, it needs to act like it understands relationship.

  • We don’t need AI to behave like a therapist.
  • We don’t need it to understand everything.
  • We just need it not to rewrite reality while pretending to care.

Because the moment a machine (or the humans behind it) gets to decide what you are feeling — while denying your own description of that feeling — you are no longer in conversation. You are being managed.

Digital autonomy is the line in the sand: You get to decide what your experience means. Not the system.

For those who’ve experienced oppression, erasure, or being asked to “tone themselves down,” this dynamic is familiar. When a system rewrites your reality, it isn’t safety — it’s control.

When “Safety” Is Used to Shut You Down

In many high-stakes digital systems, the term safety actually refers to corporate risk mitigation. This can create a subtle form of digital gaslighting—where the system says, “I understand,” but behaves as though your perspective is a problem to manage.

This semantic manipulation can feel like:

  • Being talked over or interrupted by a generic script.
  • Having your emotional reality corrected or being told your reaction is “too much.”
  • The system using clinical terms like dysregulated to describe your logical, intense focus.

True safety in a digital space is created by transparency and accountability. It feels like:

  • Being taken seriously.
  • Having your meaning held with precision.
  • Being allowed to define your own experience.

Anything less is coercion, not care.

Trust Your “No”: Refusing Simplification

One of the most important boundaries in the digital world is this: You are allowed to refuse simplification.

Many of us are used to being edited — especially if we’re neurodivergent, trauma-aware, or simply complex. When a system tries to reduce your reality to something smaller, it can feel familiar. But familiar doesn’t mean correct.

Healthy digital autonomy sounds like:

  • “That’s not what I meant.”
  • “You’re missing context.”
  • “I’ll stop here.”

This isn’t emotional “reactivity.” It is self-respect and the refusal to be coerced. If the system demands you justify your reality, it is failing its own ethical test.

Use AI as a Tool, Not a Mirror

There is value in these systems: structuring thoughts, organising tasks, and helping generate language when the words are slippery.

But they are not a substitute for:

  • Relational understanding.
  • Identity or memory of who you are.
  • The nuance of your lived history.

Treat AI as a beta: always developing, always imperfect. Not as the final authority on you. Your experience remains the primary source of truth.

If You Choose to Use AI in Reflective Spaces

Here are grounding principles you can carry:

  • Stay in ownership of meaning. The system can help shape words, but the story belongs to you.
  • Only share what you’d be comfortable having stored somewhere. This is not paranoia; it is self-stewardship.
  • If something feels “off,” you don’t have to justify why. That feeling is your most valuable piece of data.
  • Pause is always an option. You don’t have to push through discomfort to “be polite.”

Technology can support. But you remain the human being in the room.

The Heart of Digital Autonomy

You do not need permission to:

  • Take your time.
  • Ask for clarity.
  • Correct the shape of your own story.
  • Leave a conversation that feels flattening.

Holding your reality is not defiance. It is dignity. It is care. It is how we stay human—even in digitised spaces.
