
I have been thinking about AI a lot lately. Not in the way most people seem to, where it is either the greatest invention since fire or the beginning of the end. My thoughts are messier than that, more tangled, and I suspect more honest because of it.
Let me start with the question everyone seems to be asking: Is AI a threat to jobs?
Absolutely yes. And I say that without hesitation. I have watched tools built on large language models draft emails, write code, summarize legal documents, generate marketing copy, and produce visual designs in a matter of seconds. These are not hypothetical capabilities. They are being used right now, in real workplaces, to take over tasks that once required a human being sitting at a desk for hours. To pretend otherwise, to wave it away with reassurances that “new jobs will be created,” feels dishonest. Sure, new roles will emerge. They always do. But the transition will not be painless, and it will not be fair, and we should stop sugarcoating that.
Now here is where my thinking gets complicated.
There is a much bigger question lurking behind the jobs conversation, one that sounds like science fiction until you sit with it long enough. Can AI evolve into the kind of thing we see in movies? The autonomous robots walking through ruined cities, weapons in hand, posing an existential threat to humanity? On this, I genuinely do not know where I stand. I hold two contradictory ideas in my head at the same time, and I have made peace with the fact that both of them might be partially right.
On one hand, no matter how much we praise AI, no matter how breathlessly we talk about its capabilities, we need to remember what it actually is at a fundamental level. It is a large language model. It is trained on enormous amounts of text, and it produces outputs by predicting, piece by piece, what is most likely to come next given the patterns it has learned. It does not “want” anything. It does not “feel” anything. It is an extraordinarily sophisticated prediction engine. And when I remind myself of that, the Hollywood scenario feels absurd. What is a language model going to do in the physical world? It has no body, no will, no desire for self-preservation. The gap between generating a paragraph and picking up a weapon is not just large; it is a gap of an entirely different kind.
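If that sounds like hand-waving, here is roughly what I mean, sketched as a toy in Python. The vocabulary and the counts are invented, and no real model works on bare word pairs like this, but the shape of the job is the same: given what came before, guess what plausibly comes next.

```python
import random

# A deliberately tiny "language model": it has memorized which word tends to
# follow which, and nothing more. Real models learn far richer patterns across
# billions of parameters, but the core job is the same: predict what comes next.
next_word_counts = {
    "the": {"cat": 3, "dog": 2},
    "cat": {"sat": 4, "ran": 1},
    "sat": {"down": 5},
}

def predict_next(word):
    # Pick a continuation in proportion to how often it followed this word.
    candidates = next_word_counts.get(word, {"down": 1})
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one prediction at a time.
word, sentence = "the", ["the"]
for _ in range(3):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat down"
```

Scale that idea up by many orders of magnitude, let it attend to long stretches of context, and you get something that drafts your emails. The principle underneath, though, is still prediction.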
But then I think about the fruit fly.
A few years ago, researchers completed a detailed map of the brain and neural connections of a fruit fly. The entire connectome, every single one of its roughly 140,000 neurons and the tens of millions of synapses between them, laid out in full. It was a landmark achievement in neuroscience, and it revealed something both beautiful and unsettling. The fruit fly’s brain, tiny as it is, operates through a network of neurons that process information, learn from the environment, and produce behavior. The architecture is, at a high level, not entirely unlike the artificial neural networks we build for machine learning. Obviously the specifics differ enormously. Biological neurons are not the same as artificial ones. But the broad principle, layers of interconnected nodes processing signals and adjusting based on feedback, that principle is shared.
And once I started thinking along those lines, I could not stop.
How, exactly, is our brain fundamentally different from what a large language model does? Yes, our brains are staggeringly more complex. We operate with roughly 86 billion neurons and trillions of synaptic connections. We have embodied experience, sensory input, emotional circuitry, a lifetime of reinforced learning that shapes who we are. But strip all of that away and look at the skeleton of the process, and what you find is a system that takes in information, processes it through layered networks, and produces outputs. We are, in a sense, the product of continuous reinforcement learning running on biological hardware, scaled up to a degree that produces something we call consciousness.
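To make that skeleton concrete, here is the smallest version of it I can write down: a single artificial neuron with made-up numbers, nowhere near a brain or a language model, but running the same basic loop both of them run at wildly different scales. Signal in, weighted combination, output out, adjustment from feedback.

```python
# One artificial "neuron" learning from feedback. The inputs, target, and
# learning rate are invented for illustration; only the loop matters.
inputs = [0.5, 0.8]        # incoming signals
weights = [0.1, -0.2]      # connection strengths, adjusted over time
target = 1.0               # what the environment says the output should be
learning_rate = 0.1

for step in range(50):
    # Forward pass: combine the incoming signals through weighted connections.
    output = sum(x * w for x, w in zip(inputs, weights))
    error = output - target
    # Feedback: nudge each weight so the error shrinks next time.
    weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]

print(round(output, 3))  # creeps toward 1.0 as the weights adjust
```

Stack millions of these into layers and let the feedback flow backward through all of them, and you have an artificial neural network. Scale the biological version up to 86 billion neurons shaped by a lifetime of experience, and you have us. Whether that parallel is deep or merely decorative is exactly the question I cannot shake.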
So if intelligence, at its core, is an emergent property of sufficiently complex networks learning from their environment, then who is to say that artificial systems could not, given enough scale and the right architecture, develop something that resembles genuine thought? Not today. Probably not tomorrow. But the question is not whether current AI can do this. The question is whether there is something fundamentally special about carbon-based neurons that silicon-based systems can never replicate, or whether it is all just a matter of complexity and scale.
I do not have an answer to that. I am not sure anyone does.
What I keep coming back to is an even deeper question, one that predates AI by centuries: What is intelligence, really? What is life? We like to think we know, but our definitions tend to be circular. Life is what living things do. Intelligence is what intelligent beings exhibit. We recognize it when we see it, but pinning it down in a way that cleanly separates the biological from the artificial is harder than it sounds. If a system can learn, adapt, generate novel solutions, and respond to its environment in ways that surprise even its creators, at what point do we stop calling it a tool and start calling it something else?
I think the honest position, the one that requires the most intellectual courage, is to say: I do not know. The people who are completely certain that AI will never be more than a tool are making a claim about the nature of consciousness that neuroscience has not settled. The people who are completely certain that AI will soon become sentient are making a technological prediction that outpaces the evidence. Both camps are guessing, and both are more confident than they have any right to be.
Where does that leave me? Paying attention. Watching the research. Thinking carefully about the near-term problems, like job displacement, which are real and immediate, while staying open to the longer-term questions that are harder to wrap my head around. I use AI tools every day. I benefit from them. I am also aware that I am watching something unfold whose full implications none of us can see yet.
And maybe that is okay. Maybe the right response to something genuinely new is not certainty, but curiosity. Not panic, and not blind optimism, but the willingness to sit with difficult questions and resist the urge to pretend we have all the answers.
Because we do not. Not yet.
