Information without structure is noise.
We architect clarity.
The Foundry
We build.
Organizations bring us their most complex operational challenges — and we turn them into working systems. We design and build custom, production-grade solutions from first principles, tailored precisely to the use case: never off-the-shelf, always sovereign by design, auditable by default, and built to perform under real-world pressure.
Our builds span the full range of institutional need:
- learning infrastructures that embed knowledge directly into how teams operate and grow;
- policy-to-product translation — taking complex workflows, legal frameworks, and governance requirements and turning them into usable, scalable, safe digital interfaces;
- data intelligence systems that reconcile fragmented sources and surface clarity from complexity;
- early-warning and diagnostic tools that visualize cascading risks, track failure signals across domains in real time, and notify human decision-makers before crises compound;
- cutting-edge AI prototypes that push the boundary of what institutional technology can do.
In every build, AI is not the product. It is the infrastructure — harnessed deliberately, in service of people and purpose.
The Lab
We test.
Before any system goes live, it runs through the Lab. We prototype in controlled environments, stress-test assumptions, and validate architectures against adversarial conditions. The Lab is where ideas become evidence — where we discover what actually works under pressure, at scale, and in the hands of real users.
The Lab serves the entire DAETIS network. Contributors bring builds, hypotheses, and half-formed ideas; the Lab provides the environment to pressure-test them. It is a collaboration space and a sounding board, where builders, researchers, and practitioners reflect together before anything ships.
We evaluate emerging AI capabilities against the hard requirements of institutional deployment: reliability, explainability, data sovereignty, and resilience. We run sandboxed experiments with new architectures, agentic workflows, and multi-model configurations — separating genuine capability from hype before it reaches production.
The community here is builders first. They are also the first testers. Rigor before release. Every time.
The Think Tank
We think.
DAETIS is a hub for ideas at the intersection of governance, AI, and institutional design. We publish research, exchange working notes with policy experts and practitioners, and engage with the questions that precede every build: what should technology do, for whom, and under what constraints?
We look at governance failures and ask how better-designed systems could have changed the outcome. We examine how AI is reshaping power, accountability, and the integrity of democratic processes — from courtrooms and peacekeeping missions to election infrastructure and humanitarian operations. We develop frameworks that help institutions govern technology, rather than be governed by it.
The Think Tank brings together a deliberately diverse network: legal professionals examining the intersection of international law and algorithmic accountability; academics publishing at the frontier of AI governance and political theory; engineers, AI researchers, and computer scientists who build the systems under discussion; and professionals from multilateral institutions, peacekeeping operations, human rights bodies, and civil society organisations who understand what institutional failure actually looks like — and what it costs.
“We cannot patch our way to morality. In biological intelligence, moral stability is not installed; it is grown. It emerges from a long sequence of experiences in environments where reciprocity and trust are legible as stable equilibria.”
— The Civilizational AI
Built by DAETIS, Founders & Contributors
Creations by DAETIS, founders, and contributors. Each one is a proof of concept for what human-centred, governance-first technology looks like in production.
AEGIS
Every organization sits on more data than it can use. Reports, records, declarations, evaluations — scattered across formats, searched manually, compared by hand. AEGIS turns that disorder into structured intelligence. Ingest what you have, find what matters, and make decisions you can trace and defend. Modular, sovereign, air-gap ready — configurable for any domain where the data exists but the insight doesn't.
Research
Ideas that precede the builds. Papers, essays, and working notes on the political, philosophical, and technical questions shaping AI governance, democracy, and institutional design.
The Civilizational AI
From Objective Optimization to Political Development in AI
A paper arguing that the dominant discourse around AI alignment — framed as an engineering problem of aligning AI objectives with human preferences — systematically avoids the deeper political question: whose values, whose institutions, and whose conception of development should AI systems serve?
Drawing on political theory, international development, and the European constitutional tradition, this paper proposes a framework for “Civilizational AI” — artificial intelligence that operates within, and reinforces, democratic institutional structures rather than replacing them.
Working Papers
Ongoing investigations into AI governance frameworks, algorithmic accountability, and institutional knowledge design.
Coming Soon
Essays & Notes
Shorter-form reflections on technology policy, European digital sovereignty, and the philosophy of institutional intelligence.
Coming Soon
The Five Pillars of Order
The design principles that guide every DAETIS system. Durable, transparent, and grounded in operational reality for organizations operating in complex environments.
Sovereignty
Own Your Intelligence
Power should reside with those it serves — not with the infrastructure they depend on. Whether it is an institution protecting its decisions inside an air-gapped platform, a professional whose data never leaves their machine, or someone gaining the digital skills they need to navigate the coming transition — every DAETIS system is architected so that control remains with the user.
Immutability
Foundations That Hold
Good systems are built to last. Good values should not need to be re-decided every morning. From tamper-evident decision records and published research to rights frameworks that endure across political cycles — DAETIS builds on principles and architectures designed for permanence, because the things that matter most must be impossible to quietly rewrite.
Transparency
Intelligence You Can See
Trust requires honesty about what can and cannot be seen. DAETIS does not claim to open the black box of AI — but it does guarantee that users own their data, can trace how outputs were created, and have a clear view into how every product operates. We do not track, harvest, or sell user data. We do not monetize attention. The user is not the product — they are the person the product exists to serve.
Resilience
Systems and People That Hold
Resilience is not only uptime. It is a governance institution whose decision trail survives a leadership change. It is a professional whose operations continue when external services go dark. It is a student who finishes a curriculum despite intermittent connectivity. DAETIS builds resilience at every level — in code, in institutions, and in the human capability those systems are designed to extend.
Precision
Purposeful by Design
Every DAETIS tool exists for a defined purpose and does not pretend to be more than it is. No feature bloat, no scope inflation — only what is needed and nothing beyond. Precision in governance architecture, precision in rights assessment methodology, precision in pedagogy, precision in how agents receive their goals. Clarity is not a constraint; it is the design.
Our Network
Builders, thinkers, and researchers working at the intersection of AI, governance, democracy, and rule of law. Each person brings their work — the builds they ship, the ideas they publish, the questions they investigate.
David Mark
Founder & CEO, DAETIS
Builder-thinker. Two decades across the UN, OSCE, and European institutions. Designs knowledge infrastructures at the intersection of AI, governance, and human rights.
Omer Fisher
Contributor
Over 20 years at the intersection of human rights practice, policy, and AI. Former Head of Human Rights at OSCE ODIHR and Senior Human Rights Adviser in the UN system. Developer of Praxis Rights Lab.
Join the Network
We invite builders, thinkers, and researchers from international law, diplomacy, AI governance, and institutional design to bring their work to DAETIS.
Get in Touch →
Tell Us About the Outcomes You Need
Every organization faces unique challenges around data, governance, and decision-making. We’d like to understand yours. Reach out to start a conversation about the outcomes that matter most to your organization.