The Decision Is Already Ours
I sit down most evenings, after the day job, and run a model against my own code. I read what it finds. Sometimes it surprises me. Sometimes the surprise is the kind that makes me wince. The model is doing the work a senior reviewer would do. On a laptop, in an apartment in Amsterdam, after dinner. For code that comes from anywhere.
The tool I am building is called quodeq. It is the closest I have come to doing the thing I have been arguing for since my twenties.
What I actually care about
I have a political science background and I work as a software engineer. The engineer in me wants to ship the model, see what it does, iterate. The political scientist in me asks who pays, who decides, who is left out of the conversation, and what happens to the people who never opted in.
quodeq is what those two halves agreed on.
The question I keep coming back to is who technology is actually for. Right now it has an answer. And it is not the answer I want.
Two answers
The CEO of Palantir has spent two years making a case in interviews and in a book. AI, he argues, is a strategic asset of the American state. The job of American technology companies is to make sure that state wins. Not to help humans, plural. To help one country and its allies.
He is not hiding anything. He is one of the most explicit voices in the industry on what AI is actually for. His answer is: a flag.
A handful of others, Anthropic among them, are pushing back from inside the industry. They are asking a harder question. What is this technology allowed to optimize for? I share most of their principles. From what they publish, I think their answer would land close to mine.
But alignment is not a product a company can ship to the rest of us. It is a question about what we, as a species, are trying to become. No company can answer a question that size on our behalf. It has to be answered by the people who write the code, the people who use the systems, and the people the systems get used on. By now, that is everyone.
Two private American companies. Two coherent answers. The species, or the nation. I prefer the first. But the first answer is still being given by one company, in one country, under one set of incentives. That is not a future we have built. It is a future we are being offered.
We can do better than accept the offer.
And then the rest of the world
Both of those voices are American. So is most of the conversation. The companies are American, the capital is American, and the interests being served are American.
It is not the only conversation.
China has a third coherent answer. State-directed where the American ones are private. And, in a twist that should make the closed American labs uncomfortable, it releases its model weights more openly than they do. The frontier of openly released models now runs through Hangzhou and Beijing as much as through San Francisco.
It is a third answer to the same question, given by people with their own history and their own reasons. Pretending it does not exist is a Western reflex I am tired of seeing.
And then there is Europe. I live here. I am writing this in Amsterdam.
The European answer to what AI is for has so far been to write rules about how it should not be used. The AI Act is a serious piece of regulation. On balance I am glad it exists. But regulation is not a vision. We have written a legal framework for technology built by other people. Mistral is doing real work and deserves credit. It is not enough.
Where is the European voice? Not the European regulation. The voice. We are a continent whose history includes both the achievements and the catastrophes of the twentieth century. We know what happens when technology gets aimed at the wrong target. We used to have something to say about public goods. We seem to have forgotten.
I do not have a satisfying answer. I have this piece, written as a Spanish expat.
Twelve years of trying
I did not figure this out from the sidelines.
In my twenties I wrote at United Explanations, in Spanish, about free software and digital sovereignty and the 15M movement. I had the conviction of a person who thinks explaining a thing is enough to move it. I worked on an app called Safegees during the refugee crisis. Some of what I made landed. Most of it did not.
At some point I lost a good amount of faith in what one person at a laptop could change. I went and did the day job for years. Android. Shipped code. Watched the industry's promises age. Kept reading.
What pulled me back is the part I should have understood earlier. The people who shape technology are not the people who write about it. They are the people who build with it. If you want AI pointed somewhere other than where the loudest voices are pointing it now, you have to pick up the tools and build.
quodeq reads source code and tells you where it is weak. It uses language models to do the reading. The same kind of models that everyone else is using to write the next online casino, the next crypto exit, the next pipeline that scrapes other people's work and sells it back to them.
It runs locally if you want it to, so your code does not leave your machine. The prompts are in a directory you can open and read. The license is MIT. Fork it, break it, replace what you do not trust, rewrite it from scratch.
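quodeq's internals are not shown here, but the shape of the idea is small enough to sketch: read a file, wrap it in a review prompt, and send it to a model served on your own machine. Everything below is illustrative, not quodeq's actual code. The Ollama-style localhost endpoint, the model name, and the prompt text are all assumptions.

```python
# Illustrative sketch of a local code review loop, assuming an
# Ollama-style server on localhost. Not quodeq's implementation.
import json
import urllib.request
from pathlib import Path

REVIEW_PROMPT = (
    "You are a senior reviewer. Point out weaknesses in this code: "
    "security issues, error handling gaps, and unclear logic.\n\n"
)

def build_request(path: str, model: str = "qwen2.5-coder") -> dict:
    """Build the JSON payload: the file's contents behind a review prompt."""
    source = Path(path).read_text(encoding="utf-8")
    return {
        "model": model,            # illustrative model name
        "prompt": REVIEW_PROMPT + source,
        "stream": False,           # ask for one complete response
    }

def review(path: str, url: str = "http://localhost:11434/api/generate") -> str:
    """POST the payload to a local model server; nothing leaves the machine."""
    payload = json.dumps(build_request(path)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(review("example.py"))
```

The point of the sketch is the boundary, not the plumbing: the request goes to localhost, so the code under review never leaves the machine.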
The defensive side matters too. The same models that write your unit tests are being used to write convincing typosquats. They are being used to probe the open-source supply chain at a speed human attackers cannot match. I wrote about that last month. The least anyone can do is run a model against their own code first, before somebody else does. Anthropic built a version of this for the Fortune 500 and the intelligence community and called it Project Glasswing. quodeq is the version you can run on your laptop, whether or not you have a security budget.
This is not the answer to the big question. It is one small thing pointed in what I believe is the right direction. Most of what comes next will be small. We are going to need a lot of it. Most of it will be built by people you have never heard of.
That is also the good news.
What I am asking
Use these tools. Build with them. But not for the next online casino. Not for the crypto broker that turns retail investors into exit liquidity. Not for the pipeline that scrapes other people's work and sells it back to them. Not for another engagement loop in a new coat of paint.
Build for things that move humans forward. Tools that help someone do their work better. Tools that protect the commons instead of strip-mining it. Tools you would still be willing to defend in ten years, after the round of hype that paid for them is over.
I am asking you. I am also asking myself. None of this is settled. The room is still open.
The thing that decides what AI becomes is not the companies, in the end. It is the millions of small choices made by people who can write code, who choose what to build, who pay attention to who gets hurt and who does not. That capacity is already inside us. The decision, if we want it, is already ours.
quodeq is how I am taking mine back. There will be others. I hope one of them is yours.