When Government Algorithms Quietly Become Rules
Eli Talbert / May 12, 2026

Eli Talbert is a US Army Reserve officer on active duty and a Ph.D.-trained data scientist supporting US Special Operations Command. The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the Department of Defense, the US Army, or the US Government.
Federal agencies are increasingly requiring officials to evaluate claims and make enforcement decisions through mandatory digital systems. This shift is part of a broader, largely unexamined move toward governance through software, where the design of a tool can matter as much as the text of the rule it implements. When these systems shape how officials work through decisions, they also shape what enters the administrative record that courts rely on during judicial review.
The result is a form of invisible policymaking: design choices or architectural constraints that determine outcomes across cases without the public notice, comment, or transparency that US administrative law requires. In certain circumstances, such architecture may need to go through formal rulemaking.
The question is especially urgent now. The Social Security Administration is in the middle of its most ambitious digital transformation in decades, centralizing claims processing into national workflows and deploying new automated tools across the agency. Immigration and Customs Enforcement continues to expand its use of algorithmic tools.
As these systems proliferate, a basic governance question remains unresolved. When a mandatory digital system structures how officials reason through a case, who decides what reasoning is allowed? When a legally permissible analytical route is suppressed by system design, a reviewing court has no basis to ask why it is absent, and affected individuals cannot challenge the justification that was never asked of the official deciding their case. The constraint operates below the surface of formal rules, influencing outcomes without producing the kind of visible policy change that would ordinarily trigger judicial scrutiny or public notice.
How system design shapes the administrative record
The mechanism is straightforward. When a mandatory system gates available analytical routes, it shapes the administrative record that the courts review. Suppressed pathways do not appear in the record, leaving courts no basis to ask why they are absent.
The Social Security Administration's Electronic Claims Analysis Tool (eCAT) illustrates the problem. eCAT is mandatory for disability examiners at the initial determination stage. The program walks examiners through a five-step sequential evaluation using a series of screens and required fields. Examiners answer the system’s questions and generate a standardized Disability Determination Explanation (DDE), which becomes the agency's official explanation for the decision and a key part of the record reviewed by administrative law judges and federal courts.
The five-step framework itself is codified in regulation, and eCAT does not alter that standard. But the software shapes how the standard is applied in practice. It determines the order in which evidence is considered, which options are presented at different stages, and how examiners explain their reasoning. Thus, discretion operates within a predefined reasoning architecture.
The agency's own history shows that these design choices shape substantive outcomes. In Administrative Message 14056, issued in connection with eCAT 9.0, SSA modified the system to categorically prevent adjudicators from inadvertently using Medical-Vocational Rule 204.00 to direct a determination. The change was implemented through system configuration rather than amendment of the governing regulation. In effect, SSA adjusted how the regulatory standard functioned across cases by altering the tool's architecture.
What drives this effect is how these systems define which lines of reasoning are available to decision-makers, a dynamic that can be understood as "decision-space construction." A governing statute or regulation may allow adjudicators to reason through a case along different permissible paths, weighing different evidence or applying different interpretive approaches, even if no single path is required in every case. But when a system renders some of those options practically unreachable or difficult to use, it shapes how the governing standard functions in practice.
This shaping becomes legally significant because it structures the administrative record itself. The DDE produced through eCAT is what courts see. If a particular line of reasoning is not available within the system, it does not appear in the DDE and therefore does not appear in the record for judicial review.
A court would encounter a record in which certain reasoning had been eliminated, with no indication that architecture, rather than examiner judgment, accounted for its absence.
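The mechanism can be made concrete with a deliberately simplified sketch. The code below is hypothetical and does not reflect SSA's actual eCAT implementation; the pathway names, the configuration mechanism, and the `Determination` class are all illustrative assumptions. It shows only the structural point: a configuration change, rather than a rule change, can remove a reasoning pathway across every case, so the suppressed route never appears in the generated record.

```python
from dataclasses import dataclass, field

# Hypothetical reasoning pathways; the names are illustrative,
# not drawn from SSA regulations or from eCAT itself.
ALL_PATHWAYS = {"medical_listing", "vocational_grid", "functional_capacity"}

# A configuration change like this one suppresses a pathway in every
# case without amending any written rule.
ENABLED_PATHWAYS = ALL_PATHWAYS - {"vocational_grid"}

@dataclass
class Determination:
    claimant_id: str
    reasoning: list[str] = field(default_factory=list)

    def add_reasoning(self, pathway: str, explanation: str) -> None:
        # The tool rejects disabled pathways outright; no examiner
        # judgment is exercised, and no justification is ever requested.
        if pathway not in ENABLED_PATHWAYS:
            raise ValueError(f"pathway {pathway!r} is not available")
        self.reasoning.append(f"{pathway}: {explanation}")

    def record(self) -> str:
        # Only enabled pathways can ever appear in the official
        # explanation that a reviewing court sees.
        return "\n".join(self.reasoning)

det = Determination("claimant-001")
det.add_reasoning("functional_capacity", "claimant limited to sedentary work")
try:
    det.add_reasoning("vocational_grid", "grid rule directs a finding")
except ValueError:
    pass  # the route is unreachable; the record shows no trace of it
print(det.record())
```

On these assumptions, the resulting record contains only the enabled pathway, with nothing marking the absence of the disabled one, which is the asymmetry the article describes.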
When system design effectively becomes a rule
Under US law, agencies must usually provide public notice and an opportunity for comment before adopting rules with binding effect. Courts have treated a binding practical effect as central to this determination—examining whether an agency action constrains discretion across cases.
A binding effect need not take the form of an explicit command. It may arise when an agency makes it harder for decision-makers to reach a different permissible result. A mandatory system that structures available reasoning pathways, suppresses certain analytical approaches, and pushes agency explanations into standardized forms may constrain outcomes much like a generally applicable rule.
This does not mean every internal software tool must go through rulemaking. Federal law exempts rules of agency organization or procedure. The key question is not whether a system is digital, but what it does. When system design encodes substantive value judgments, it can shape outcomes in practice. This can happen by gating arguments, defining what counts as a complete justification, or shifting how standards are applied. In those cases, the system may function as a binding rule, regardless of whether the constraint appears in code or in written policy.
When system changes shift outcomes
This dynamic extends beyond disability adjudication. When Immigration and Customs Enforcement (ICE) modified its Risk Classification Assessment to remove the "release" recommendation option, it altered detention outcomes for thousands of individuals. One study estimated that the change reduced releases by half. But the modification operated through system configuration rather than formal rulemaking, raising questions about whether it functioned as a substantive policy change that required public notice. By eliminating one branch from the decision architecture, the agency narrowed the universe of permissible recommendations across cases.
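A stripped-down sketch illustrates the structural point. The thresholds, scores, and option names below are invented for illustration and do not reflect ICE's actual Risk Classification Assessment; the point is only that deleting one branch from the decision logic reroutes every case that would have reached it.

```python
def recommend(risk_score: float, options: list[str]) -> str:
    """Map a risk score to a recommendation, but only among the options
    the system's configuration makes available. Thresholds are
    illustrative assumptions, not ICE's actual logic."""
    if risk_score < 0.3 and "release" in options:
        return "release"
    if risk_score < 0.7 and "bond" in options:
        return "bond"
    return "detain"

BEFORE = ["release", "bond", "detain"]
AFTER = ["bond", "detain"]  # "release" removed by configuration, not rulemaking

scores = [0.1, 0.25, 0.5, 0.9]
print([recommend(s, BEFORE) for s in scores])  # low-risk cases can be released
print([recommend(s, AFTER) for s in scores])   # the same cases now get bond
```

Nothing in the scoring logic or the governing standard changed; removing one entry from the configuration shifted the outcome for every low-risk case.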
The ICE example is analytically stronger than the eCAT case because it eliminated an outcome option outright. The eCAT example is more novel. It concerns not the removal of an outcome, but how a system structures the reasoning options that lead to those outcomes. The question is whether design choices that limit certain ways of approaching a decision and that are built into the administrative record can, in practice, function like eliminating an outcome option.
In both contexts, the salient feature is not that software was used, but that architectural modification constrained legally relevant options across cases and shaped the reasoning appearing in the official record.
When decision systems cross the line into rulemaking
Courts have long recognized that agencies may develop policy through case-by-case decisions rather than rulemaking. That flexibility is most defensible where decision-makers retain meaningful discretion within the governing standard.
But the way these systems constrain reasoning operates differently. The constraint functions before any individual case is opened, applies uniformly across all cases, and cannot be adjusted through case-specific reasoning.
An agency might respond that tools like eCAT merely organize individualized determinations. But the constraint is different in kind: it is fixed in advance, applies uniformly, and cannot be adjusted by the examiner. If certain arguments or analytical routes cannot meaningfully enter the DDE, they cannot meaningfully enter judicial review. Nor does classifying these tools as procedural automatically resolve the issue: workflow organization is procedural, but suppressing legally permissible lines of reasoning is substantive.
This problem can be bounded in a workable way. It should apply where three features converge: first, mandatory use in decisions affecting benefits, liberty, or other legally protected interests; second, architectural structuring that limits legally permissible lines of reasoning across cases; and third, integration of that structure into the official record used for judicial review.
Minor interface changes, threshold adjustments, or workflow refinements that do not alter available reasoning options or materially affect outcome distributions would not satisfy this test. The record-integration condition remains the principal limiter: when architecture shapes what courts can see, its legal significance increases.
What agencies should do
Where these conditions are met, agencies should, at a minimum, disclose the core design features that structure how decisions are made: gating logic, sequencing constraints, and elements that shape the official record. Material modifications that substantially alter available reasoning pathways or significantly affect outcome distributions should be evaluated under existing notice-and-comment requirements.
This is not a demand for source-code transparency, nor a claim that all algorithmic tools require rulemaking. It is a recognition that agencies can embed binding constraints into the structure of adjudication—and that some of those constraints determine what enters the administrative record.
In practice, accountability depends on what these systems allow decision-makers and courts to see. If decision architecture determines what the record contains, the law must account for those structures. These systems shape what courts are able to see. They also determine what remains invisible to judicial scrutiny when architecture, rather than reasoned judgment, excludes it. As governance increasingly operates through digital systems, accountability requires that these design choices be visible to the public and to the courts charged with reviewing them.