Make AI measurable, not magical

  • Colorado’s AI law reset is today’s most important governance signal. Axios Denver reported that Colorado lawmakers are moving to scrap and replace the state’s first-in-the-nation AI law with a new framework focused on automated decision-making systems used for consequential decisions in areas like employment, housing, education, financial services, insurance, and health care. The bill would require consumer notice and correction rights for inaccurate personal data, and could allocate liability to either developers or deployers depending on the violation. For Orthogonal, this reinforces the compliance-architecture point: AI products increasingly need risk classification, user notice, correction workflows, and auditability built into the product rather than bolted on later; the first sketch after this list shows what that can look like at the data-model level. (axios.com)
  • The Big Law talent-pipeline story is moving from theory to an operating-model problem. Axios’ weekend piece argues that AI is beginning to wipe out parts of the entry-level work that historically trained junior lawyers, while firms increasingly restructure around AI workflows, client portals, and self-service systems. The real issue is not simply headcount; it is apprenticeship. If junior lawyers get fewer reps doing review, research, drafting, and issue spotting, firms will need a new training model for lawyers who are expected to supervise AI outputs without having learned the underlying judgment the old way. (axios.com)
  • Professional-services AI adoption is now past the novelty phase. Thomson Reuters’ 2026 AI in Professional Services Report says GenAI has become integral across legal, tax, accounting, risk, fraud, and government professions, with 40% of respondent organizations already using GenAI, up from 22% last year. But the same report flags a gap: many organizations are still not collecting AI ROI metrics or are unsure whether they are doing so. That matters because AI adoption without measurement becomes theater; AI adoption with metrics becomes operating leverage. The second sketch after this list shows the handful of numbers that turn one into the other. (thomsonreuters.com)
  • The hallucination conversation is maturing into a controls conversation. A new Thomson Reuters report on responsible AI use argues that legal professionals need to move from informal AI experimentation to structured, documented programs that define when and how AI tools are used. The useful framing is that hallucinations are not merely “model mistakes”; they are workflow failures unless the system includes verification, human review, and professional-grade controls. The third sketch after this list shows one shape such a control can take. (thomsonreuters.com)
  • Meta’s robotics acquisition is a reminder that AI is leaving the screen. TechCrunch reported that Meta acquired humanoid robotics startup Assured Robot Intelligence. This is not a legal AI story directly, but it matters to the broader AI strategy layer: the frontier is moving from text workflows to embodied systems, workplace automation, and physical-world agents. The near-term implication is less “build robots” and more “assume agentic systems will increasingly need permissions, logs, approvals, and accountability across real-world workflows.” (techcrunch.com)
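
A footnote on the Colorado item. Here is a minimal sketch, in Python, of what “built in rather than bolted on” can mean at the data-model level: a per-decision record that carries its own risk classification, consumer notice, correction requests, and audit trail. Every name and field below is an illustrative assumption for this newsletter, not language from the Colorado bill or anyone’s product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    """One consequential decision made with an AI system, kept for audit."""
    decision_id: str
    domain: str                      # e.g. "employment", "housing", "insurance"
    risk_tier: str                   # internal classification, e.g. "high"
    inputs_summary: dict             # personal data relied on, summarized for disclosure
    outcome: str
    notice_sent_at: Optional[datetime] = None
    correction_requests: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def send_consumer_notice(self, channel: str) -> None:
        # Record that the consumer was told an automated system was used.
        self.notice_sent_at = datetime.now(timezone.utc)
        self.audit_log.append(("notice_sent", channel, self.notice_sent_at))

    def request_correction(self, field_name: str, claimed_value: str) -> None:
        # Capture a consumer's claim that an input was inaccurate, so the
        # decision can be reviewed or re-run with corrected data.
        received_at = datetime.now(timezone.utc)
        self.correction_requests.append({
            "field": field_name,
            "claimed_value": claimed_value,
            "received_at": received_at,
            "status": "open",
        })
        self.audit_log.append(("correction_requested", field_name, received_at))
```

The point is not the specific fields; it is that notice, correction, and auditability live next to the decision itself, so compliance is a query rather than a reconstruction project.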
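
On the measurement gap, a hedged sketch of what “collecting AI ROI metrics” can mean in practice: log a few numbers per AI-assisted task and aggregate them. The metrics chosen here (minutes saved, acceptance rate, rework) are assumptions for illustration, not the Thomson Reuters report’s methodology.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AssistedTask:
    """One unit of work where GenAI was used, with the numbers needed for ROI."""
    task_type: str              # e.g. "contract_review"
    baseline_minutes: float     # typical time without AI, from historical data
    actual_minutes: float       # time taken with AI assistance
    output_accepted: bool       # did the reviewer accept the AI-assisted output?
    edits_required: int         # how much rework the output needed

def roi_snapshot(tasks: list[AssistedTask]) -> dict:
    """Aggregate the few metrics that turn adoption into something measurable."""
    if not tasks:
        return {}
    return {
        "tasks": len(tasks),
        "avg_minutes_saved": mean(t.baseline_minutes - t.actual_minutes for t in tasks),
        "acceptance_rate": sum(t.output_accepted for t in tasks) / len(tasks),
        "avg_edits_per_task": mean(t.edits_required for t in tasks),
    }
```

A snapshot like this, reviewed monthly, is the difference between “we rolled out AI” and “we know what it is doing for us.”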
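
And on hallucinations as workflow failures, a sketch of a release gate that refuses to pass AI output downstream until its citations are verified and a human reviewer signs off. Here `citation_checker` and `reviewer_approves` are placeholder callables, not any specific tool’s API.

```python
from dataclasses import dataclass

@dataclass
class DraftAnswer:
    text: str
    citations: list[str]    # sources the model claims to rely on

def release_to_client(draft: DraftAnswer, citation_checker, reviewer_approves) -> str:
    """Treat hallucination as a workflow problem: no verification, no release."""
    # Step 1: every cited source must check out before a human even looks at it.
    unverified = [c for c in draft.citations if not citation_checker(c)]
    if unverified:
        raise ValueError(f"Unverified citations, send back for rework: {unverified}")
    # Step 2: a human reviewer signs off on the substance.
    if not reviewer_approves(draft):
        raise ValueError("Human reviewer declined to sign off; do not release.")
    return draft.text
```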
