Consciousness is a bad engineering problem
because every attempt to define what is being solved changes the thing itself.
By day, I build systems. By night, I explore the soul.
That means I’m an engineer by profession, and a reader and writer by passion.
It is 5:17 AM as I write this. I have been staring at my LinkedIn bio for the past half hour, thinking about its first two lines and why I wrote them two years ago. More precisely, I have been wondering why I instinctively separated systems and soul as though they belonged to different worlds.
To me, “soul” is just another word for consciousness.
I have been working in engineering for eight years now, and one thought has been looping in my mind since I started staring at my screen this morning. Engineering begins with a luxury that consciousness does not offer: a stable problem definition.
Before anything can be solved, built, or optimized, the thing in question must hold still long enough to be specified. Inputs, outputs, constraints, and success criteria are defined and named. Success may shift as outcomes come in, but before we walk the path, we have at least a rough picture of what the thing could become.
Engineering, to me, is this: the problem may be difficult, so we break it down until we can understand it. But in the end, it remains recognizable as a problem.
The moment we try to treat consciousness as an engineering task, everything begins to feel foggy.
What exactly are we trying to build, explain, optimize, or reproduce? A system that behaves intelligently? (But what is intelligence in the first place?) A system that reports internal states? A system that has experience?
If we are trying to reproduce consciousness, where do we even begin?
This question brings to mind Thomas Nagel’s famous essay, “What Is It Like to Be a Bat?” His point was not really about bats. It was about the limitations of objective description. No amount of third-person data, no matter how detailed, can tell us what it is like to be the bat itself.
In engineering, if two systems produce the same outputs under the same conditions, they are functionally equivalent. But here, two systems can behave identically and still differ in the only way that matters: one might have experience, and the other might not.
Think of two people going through the same life event; outwardly it might look the same, yet inwardly, they might perceive it completely differently.
Yesterday, while browsing a course I am interested in taking next year, I came across David Chalmers’ distinction between the “easy problems” of consciousness and the “hard problem.”
The easy problems: discrimination, integration, reportability, and the control of behavior.
The hard problem: why is any of this functional activity accompanied by experience at all?
I am increasingly convinced that engineering is structurally suited to the easy problems, but not to the hard one. To engineer consciousness, we must define it in ways that make it measurable. And each attempt risks mistaking a correlate of consciousness for consciousness itself.
For centuries, the great thinkers, philosophers, and scientists before us have tried to explain consciousness. So why have we not found an answer? I suspect it is because they disagree about what counts as an explanation in the first place. The moment one person defines consciousness, that definition reshapes the very thing another person is trying to understand.
As Kant says, the mind is both the instrument that makes knowledge possible and the object we are trying to understand. Every explanation of consciousness comes from within consciousness itself. There is no view from nowhere.
And that makes consciousness a recursive problem.

Suppose we build a system that perfectly mirrors human behavior. Where do we stop? When it expresses doubts about its own consciousness? Can we consider the problem solved at that point? From an engineering standpoint, I would say yes.
But we are left with the same question we started with: is there anything it is like to be that system? Mirroring behavior alone cannot answer this, because behavior was never the target.
Perhaps consciousness is not just a hard engineering problem. It is a bad one. Not because it is unsolvable, but because the criteria for a solution cannot be defined without first taking a stand on fundamental questions.
Over the past few months, I have been reading articles on AI that make this especially visible. Some predict that consciousness will “emerge” given sufficient complexity. We humans are so fond of domestication that we are now trying to domesticate ourselves. Treating consciousness as an engineering problem is an attempt to domesticate something that resists domestication.
I will close with this: understanding the mind does not solve consciousness. It clarifies why consciousness will not stay solved.
I certainly do not mean to dismiss technical work and research as misguided. On the contrary, such work makes the limits more legible. Explaining mechanism is not the same as explaining meaning, and with AI, modeling cognition is not the same as accounting for experience.
In the end, maybe we can explain consciousness and the mind in parts without explaining them as a whole, and improve our systems along the way.
Yours in thought,
Yana ♥️

