A non-decision protocol for human–AI systems with explicit stop conditions

Publish Date: Jan 5

I’m sharing a technical note proposing a non-decision protocol for human–AI systems.

The core idea is simple:

AI systems should not decide. They should clarify, trace, and stop — explicitly.

The protocol formalizes:

  • Human responsibility as non-transferable
  • Explicit stop conditions
  • Traceability of AI outputs
  • Prevention of decision delegation to automated systems

This work is positioned as a structural safety layer rather than a model, a policy, or a governance framework.
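
To make the stop-condition idea concrete, here is a minimal sketch of what such a contract could look like in code. It is illustrative only and is not drawn from the archived document: the names (`Outcome`, `Output`, `accept`) and the sign-off mechanism are my assumptions, not the protocol's.

```python
# Illustrative sketch only; names and structure are hypothetical,
# not taken from the archived protocol document.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Outcome(Enum):
    """The only outcomes the automated side may produce."""
    CLARIFY = auto()  # ask the human for missing information
    TRACE = auto()    # report sources and reasoning steps
    STOP = auto()     # an explicit stop condition fired
    # Deliberately no DECIDE member: decision delegation is
    # unrepresentable, not merely discouraged.


@dataclass(frozen=True)
class Output:
    outcome: Outcome
    content: str
    trace: tuple[str, ...]             # provenance for traceability
    stop_reason: Optional[str] = None  # required when outcome is STOP

    def __post_init__(self) -> None:
        # Stop conditions must be explicit, never silent.
        if self.outcome is Outcome.STOP and not self.stop_reason:
            raise ValueError("STOP outputs must state an explicit stop condition")


def accept(output: Output, human_signoff: bool) -> str:
    """Responsibility is non-transferable: nothing takes effect
    without an explicit human sign-off."""
    if not human_signoff:
        return "held: awaiting explicit human decision"
    return f"accepted by human: {output.content}"


# Usage: the system stops and names its reason; the human decides.
out = Output(Outcome.STOP, "conflicting requirements detected",
             trace=("source-1", "source-2"),
             stop_reason="inputs are contradictory")
print(accept(out, human_signoff=False))  # held: awaiting explicit human decision
```

The design choice worth noting: a decision outcome is simply not representable in this type, so delegation fails at the structural level rather than by policy.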

The full document is archived with a DOI on Zenodo:
https://doi.org/10.5281/zenodo.18100154

I’m interested in feedback from people working on:

  • AI safety
  • Human-in-the-loop systems
  • Decision theory
  • Critical system design

This is not a product and not a startup pitch — just a protocol-level contribution.
