
yuer @yuer

About: Yuer — Independent AI Systems Architect Building the Expression-Driven Cognitive Architecture (EDCA OS): a deterministic, auditable execution layer for LLMs. Focused on: deterministic RAG & reproduc

Joined:
Nov 20, 2025

Articles: 57 total

When Emotion Becomes an Interrupt: How Distress-Framed Language Systematically Suppresses Reasoning in General-Purpose LLMs

A Medical-Safety Risk and the Proposal of “Logical Anchor Retention (LAR)”. As large language...

Jan 23

Stronger Models Don’t Make Agents Safer — They Make Them More Convincing

There is a persistent belief in AI engineering: If the model were smarter, agents wouldn’t fail like...

Jan 21

An Agent Is Not a Workflow (No Matter How Much It Pretends to Be)

One of the most common misunderstandings in modern AI systems is this: If an agent follows steps, it...

Jan 21

The Only Real Fix for Agents Running Wild Is Control by Design

Agents don’t fail because they are too dumb. They fail because they are allowed to act when they...

Jan 21

Why Natural Language Is a Terrible Tool for Process Control

Natural language is flexible by nature. That flexibility is exactly why it fails as a control...

Jan 21

Why Agents Feel Smarter Today (But Actually Aren’t)

Modern AI agents feel smarter than before. They follow steps. They ask fewer irrelevant...

Jan 21

Solving Character Consistency in Image Generation

In many image generation workflows, character consistency quietly breaks over time. A single image...

Jan 20

How AI Can Take Cross-Domain Projects — and Where Automation Breaks

Where do you draw the line between “let automation run” and “someone must explicitly decide to...

Jan 19

Automation Without Accountability Is Structurally Unsafe

Why responsibility cannot be delegated to systems Automation promises efficiency. Intelligence...

Jan 19

Authority, Boundaries, and Final Veto in AI Systems

Why controllability collapses without explicit power structures Most discussions about AI control...

Jan 19

Five Non-Negotiable Principles for Controllable AI Systems

Why execution legitimacy matters more than intelligence Modern AI discourse focuses obsessively on...

Jan 19

Why Most AI Systems Fail Before Execution Begins

A Position Paper on Control, Responsibility, and Rejection Modern AI systems rarely fail because...

Jan 19

When AI Can Finally Stop: What Becomes Possible After Control

Over the past few years, large language models have become undeniably powerful. They can reason,...

Jan 16

From Quant Operators to General Execution Primitives

Document Statement This article concludes the Rust Quant Operator foundational series. Its purpose...

Jan 14

Batch Is Not a Loop: Batch Consistency and Vectorized Execution Semantics

Why for-loops Are Not Batch Processing Document Statement This is not a Rust tutorial. This is the...

Jan 14

Time Is Not an Index: Time Semantics and Windowed State for Quant Operators in Rust

Why Quant Operators Need Explicit Time Semantics Document Statement This is not a Rust...

Jan 14

State Is Not a Variable: Defining State Semantics for Quant Operators in Rust

Document Statement This is not a Rust tutorial. This is the second article in the Rust Quant...

Jan 14

Semantic Field Risk Memo — On an Unmodeled High-Dimensional Risk in LLM-based Systems

Document Type: Risk Memo / Risk Statement Purpose: Risk disclosure, responsibility warning,...

Jan 14

When Indicators Are Not Functions: Defining Quant Operators in Rust

Document Statement This article is not a Rust tutorial, nor a trading strategy guide. It documents...

Jan 14

AI Engineering: Why the Environment Is the Most Ignored Long-Term Asset

From “It Runs” to “It’s Controllable”: The Real Maturity Line of AI Engineering Abstract...

Jan 13

A Deterministic PC Builder That Refuses to Guess — Powered by Algolia Agent Studio

This is a submission for the Algolia Agent Studio Challenge: Consumer-Facing Non-Conversational...

Jan 12

Alignment Protocol v3.0: Defining Legal Admission Semantics for AI-Controlled Systems

Alignment Protocol v3.0 is the first formal admission protocol defined under EDCA Admission...

Jan 12

EDCA Admission Protocols: Introducing an Explicit Admission Layer for AI Systems

Most AI systems today implicitly assume that: if an expression is received, it should be...

Jan 12

When Factor Libraries Meet Real-World Execution Constraints

Most factor libraries look reliable — at least at first. They are cleanly implemented,...

Jan 8

Building a Fail-Closed Investment Risk Gate with Yuer DSL

Why This Is an Eligibility Check, Not an AI Decision Model This article documents a...

Jan 6

LLMs Are Becoming an Explanation Layer, and Our Interaction Defaults Are Breaking Systems

LLMs Are Becoming an Explanation Layer And Our Interaction Defaults Are Breaking...

Jan 5

Why LLMs Break in Production (and Why It’s Not a Model Problem)

Why LLMs Break in Production (and Why It’s Not a Model Problem) If you’ve ever shipped an LLM-based...

Jan 5

Controllable AI Must Sit in the Control Plane — Otherwise It Shouldn’t Exist

Most discussions about enterprise AI are stuck on an outdated question: “Can AI safely enter the...

Jan 4

When Community AI Breaks, It’s Rarely the Model

It breaks because facts quietly fragment. When community AI systems fail, they rarely fail...

Jan 3

A Fail-Closed Gate for Rust AI Assistants

Stop AI from suggesting workarounds before it proves the rejection Most AI coding assistants follow...

Jan 2