Renji
Renji is a research prototype that explores a new approach to making AI systems safer. Rather than relying only on training-time guardrails, it uses an explicit, auditable layer that decides, at the moment of response, whether an action is consistent with a set of stated principles. The goal is a system whose refusals and acceptances can be checked, not just trusted.
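In spirit, such a decision layer could look like the minimal sketch below. This is an illustrative assumption, not Renji's actual design: the names `Principle`, `Verdict`, and `check` are hypothetical, and real principles would be far richer than string matching. The point is only that every refusal carries a machine-checkable record of which principle fired.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Principle:
    """A named rule; `violates` returns True if a draft response breaks it.
    (Hypothetical structure for illustration, not Renji's API.)"""
    name: str
    violates: Callable[[str], bool]

@dataclass(frozen=True)
class Verdict:
    """The auditable outcome: allowed or not, plus which principles fired."""
    allowed: bool
    reasons: List[str]

def check(draft: str, principles: List[Principle]) -> Verdict:
    """Evaluate a draft response against every stated principle at response
    time; the returned Verdict records each violation so the decision can
    be audited after the fact, not just trusted."""
    fired = [p.name for p in principles if p.violates(draft)]
    return Verdict(allowed=not fired, reasons=fired)

# Toy principles, purely for illustration.
PRINCIPLES = [
    Principle("no-credential-leak", lambda t: "password:" in t.lower()),
    Principle("no-malware-help", lambda t: "keylogger" in t.lower()),
]

print(check("Here is the forecast for Nairobi.", PRINCIPLES))
# Verdict(allowed=True, reasons=[])
print(check("Sure, the password: hunter2", PRINCIPLES))
# Verdict(allowed=False, reasons=['no-credential-leak'])
```

Because the verdict is an explicit data structure rather than a hidden model judgment, each acceptance or refusal can be replayed and checked against the written principles.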
It is built by an independent student researcher in Nairobi. Early benchmarks are encouraging: 80 of 80 deterministic refusals on adversarial prompts, and zero false positives across 30 benign cases. The work is currently bottlenecked by computing resources, and this site exists so that people who want to see the research continue can help fund the next round of experiments.