Research · High credibility
Dark Factories

Stanford Law / CodeX

Published two days after StrongDM’s factory announcement, this Stanford Law analysis identifies the legal vacuum the dark factory creates.

Core Arguments

The blind spot problem: When the same underlying AI model architecture both writes code and evaluates it, author and evaluator share the same systematic failure modes. External scenario testing catches execution-level gaming, but not model-level blind spots common to both sides.
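The correlated-failure argument above can be sketched with a toy example (not from the Stanford analysis; the month-index convention is hypothetical): when generated code and its self-evaluation are derived from the same flawed assumption, self-review passes while an independent check fails.

```python
# Toy illustration: generator and evaluator share one wrong assumption
# ("months run 0-11"), while the real external convention is 1-12.
SHARED_SPEC_MAX_MONTH = 11  # flawed assumption baked into both sides

def generated_validator(month: int) -> bool:
    # "Agent-authored" code: validates against the shared spec.
    return 0 <= month <= SHARED_SPEC_MAX_MONTH

def self_evaluation() -> bool:
    # "Agent-authored" tests: derived from the same spec, so they
    # cannot detect the flaw they inherit.
    return (generated_validator(0)
            and generated_validator(11)
            and not generated_validator(12))

def independent_check() -> bool:
    # External oracle with ground truth: December is month 12.
    return generated_validator(12)

print(self_evaluation())    # self-review passes
print(independent_check())  # independent check exposes the shared blind spot
```

The point of the sketch: no amount of testing derived from the shared spec can surface the error, which is why isolation of the test harness alone does not close the gap.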

The liability gap: Legacy software contracts limit liability on the assumption that a human developer reviewed the code. In a dark factory, those same clauses now cover code no human touched. No legal precedent exists for what happens when agent-authored code causes harm.

The trust question: Enterprise software buyers expect human accountability. “An engineer reviewed this” is a statement of accountability. “The agent passed its own test suite” is not equivalent, even if the test suite is carefully isolated.

Why This Matters

StrongDM’s factory handles enterprise infrastructure access — controlling who can reach databases, servers, and Kubernetes clusters. The legal and accountability questions raised here apply with maximum force to exactly this use case.

This is not a technical critique of the approach. The technical architecture may be sound. The accountability architecture doesn’t exist yet.