Introducing Quodeq
Static analysis tools catch what breaks the rules. They check syntax, enforce formatting, flag known patterns. That is useful. But it is not enough.
They do not tell you if your code is actually good. They do not evaluate whether your authentication flow has a logic flaw, whether your error handling will survive a real failure, or whether the thing you shipped last Friday is going to be a maintenance problem six months from now. The gap between what a linter catches and what a senior engineer would catch in a code review is enormous. That gap is where bugs, security vulnerabilities, and technical debt live.
A different approach
Quodeq uses language models to evaluate source code. Not pattern matching. Not rule engines. Models that can reason about structure, intent, and context the way a human reviewer would, but across an entire codebase at once.
It scores your code across six quality dimensions from ISO 25010: security, reliability, maintainability, performance, flexibility, and usability. Every finding maps to a CWE identifier, the same taxonomy used by NIST, OWASP, and compliance standards like PCI-DSS. This is not a proprietary scoring system. It is the international standard for what software quality means.
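To make the dimension-and-CWE mapping concrete, here is a minimal sketch of what a single finding might look like. The field names and schema are illustrative assumptions for this post, not Quodeq's actual data model; only the six ISO 25010 dimension names and the CWE taxonomy come from the text above.

```python
from dataclasses import dataclass

# The six ISO 25010 dimensions Quodeq scores (from the post above).
DIMENSIONS = {"security", "reliability", "maintainability",
              "performance", "flexibility", "usability"}

# Hypothetical shape of one finding; field names are illustrative,
# not Quodeq's real schema.
@dataclass
class Finding:
    file: str
    line: int
    dimension: str   # one of the six ISO 25010 dimensions
    cwe_id: str      # CWE identifier, e.g. "CWE-89" for SQL injection
    severity: str
    summary: str

finding = Finding(
    file="src/auth/login.py",
    line=42,
    dimension="security",
    cwe_id="CWE-89",
    severity="high",
    summary="User input concatenated into a SQL query",
)

assert finding.dimension in DIMENSIONS
```

Because every finding carries a CWE identifier, results can be cross-referenced against the same lists NIST, OWASP, and PCI-DSS auditors already use.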
You point it at a project. It comes back with scores, violations, and a fix plan.
How it works
Run quodeq eval . to analyze a project. Quodeq discovers your source files, detects languages and frameworks, and prioritizes what to evaluate first. Security-critical paths like authentication and API routes go first. Files with high git churn or many imports get priority. It runs up to five AI subagents in parallel, then aggregates everything into dimension-level scores.
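The prioritization described above can be sketched as a simple scoring heuristic. This is a sketch under stated assumptions: the weights, path hints, and function names are made up for illustration and are not Quodeq's actual algorithm.

```python
# Illustrative prioritization heuristic: security-critical paths first,
# then files with high git churn or many imports. All weights below are
# invented for this sketch, not taken from Quodeq.
SECURITY_HINTS = ("auth", "login", "api", "token", "session")

def priority(path: str, churn: int, import_count: int) -> float:
    score = 0.0
    if any(hint in path.lower() for hint in SECURITY_HINTS):
        score += 100          # security-critical paths jump the queue
    score += churn * 2        # frequently changed files are riskier
    score += import_count     # widely imported files matter more
    return score

files = [
    ("src/utils/strings.py", 1, 2),
    ("src/api/auth.py", 8, 5),
    ("src/models/user.py", 12, 9),
]
ranked = sorted(files, key=lambda f: priority(*f), reverse=True)
# src/api/auth.py ranks first: the security path hint plus its churn
# outweigh user.py's higher raw churn.
```

The point of a heuristic like this is budget: with five subagents running in parallel, the riskiest files get model attention first.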
Run quodeq dashboard to explore the results visually: scores, trends across runs, violations by severity, and AI-generated fix plans with file-level guidance. Everything is stored as JSON on your machine.
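Because results live as plain JSON on disk, a run's output might look something like the sketch below. The key names and structure are hypothetical, invented for illustration; Quodeq's actual schema may differ.

```python
import json

# Hypothetical results document; every key name here is illustrative.
run = {
    "run_id": "run-001",
    "scores": {"security": 71, "reliability": 84, "maintainability": 78,
               "performance": 90, "flexibility": 82, "usability": 88},
    "violations": [
        {"file": "src/api/auth.py", "line": 42, "severity": "high",
         "cwe": "CWE-89", "summary": "Unparameterized SQL query"},
    ],
}

# Local-only storage means plain JSON you can read, diff across runs,
# and check into version control yourself.
serialized = json.dumps(run, indent=2)
restored = json.loads(serialized)
assert restored["scores"]["security"] == 71
```

A readable on-disk format is what makes the trend view possible: comparing two runs is just comparing two JSON files.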
You choose how it runs. Cloud models like Claude, Gemini, or Codex give you faster, deeper analysis; a thorough evaluation of a 300-file project costs a few dollars. But Quodeq also runs entirely offline through Ollama with local models like Gemma. Your source code never leaves your machine. Speed or privacy. Your call.
Why open source
There is something ironic about proprietary code quality tools. They ask you to trust their assessment of your code, but you cannot assess theirs. You are supposed to believe the scores, follow the recommendations, and pay the subscription, all without being able to read a single line of evaluation logic.
Quodeq is MIT licensed. Every line is readable. Every evaluation rule is auditable. Fork it, modify it, build on top of it, use it commercially. No enterprise tier. No "contact sales." No freemium gate. No telemetry. No account. No Quodeq servers.
Code quality is too important to be a black box. If you are going to trust a tool to evaluate your work, you should be able to evaluate the tool.
We are just getting started. If this resonates, check the GitHub repository and give it a try.