Founder at Aedric.AI.
Principal at DIZON.LAW.
Interested in the intersection of law, AI, and geoeconomics.
May 13, 2026
New York's chatbot bill protects model builders and punishes deployers
New York Senate Bill 7263, introduced by Senator Kristen Gonzalez in April 2025 and advanced to third reading on March 4, 2026, rewrites the corporate-practice-of-professions doctrine for the AI era.
The drafting choice that matters is the one almost no one is talking about. A proprietor is "any person, business, company, organization, institution or government entity that owns, operates or deploys a chatbot system used to interact with users," and the definition explicitly excludes "third-party developers that license their chatbot technology to a proprietor." The company that deploys the chatbot faces the legal exposure, not the company that built the underlying model.
Read that again. OpenAI, Anthropic, and Google sit upstream of the liability. Every law firm, clinic, telehealth startup, and legal-aid nonprofit that wraps an API sits in the blast radius.
A proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot. That breaks the industry's standard playbook of posting a warning label and moving on. Liability turns on what the bot says, not what the disclaimer says.
SB 7263 creates a private right of action for actual damages and, for willful violations, attorneys' fees and costs. Plaintiffs' firms read that sentence the same way I do.
The practical effect is a moat, not a guardrail. A vertical legal or medical chatbot built by a five-person team in Brooklyn cannot price in serial litigation. A foundation model lab licensing the same capability upstream can.
A law firm, hospital, or financial institution that calls a model provider's API and wraps it in a custom interface is almost certainly a proprietor, and the bill gives those parties no guidance on what engineering steps, contractual safeguards, or disclosure mechanisms might limit their exposure, beyond a notice the bill itself says is insufficient.
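The bill offers no safe harbor, so anything a deployer builds is defensive posture, not compliance. Here is a minimal sketch of what that posture might look like at the code layer; every name in it is hypothetical, and nothing in it limits liability under the bill as drafted. Since exposure turns on what the bot says, the floor is a verbatim, timestamped record of every output, with anything resembling professional advice flagged for human review.

```python
# Hypothetical sketch, not a compliance mechanism: SB 7263 names no safe
# harbor, so every design choice below is an assumption.
import json
import time
from typing import Callable

# Illustrative triggers only; a real deployment would need a far richer classifier.
ADVICE_MARKERS = ("you should file", "your diagnosis", "this contract means")

class LoggedChatbotProxy:
    """Wraps a model provider's completion call with an audit trail.

    Liability under the bill turns on what the bot says, so the deployer's
    minimum posture is a verbatim, timestamped record of every exchange.
    """

    def __init__(self, complete: Callable[[str], str], log_path: str):
        self.complete = complete  # thin wrapper over whatever vendor API is licensed
        self.log_path = log_path

    def ask(self, user_message: str) -> str:
        reply = self.complete(user_message)
        flagged = any(marker in reply.lower() for marker in ADVICE_MARKERS)
        with open(self.log_path, "a") as f:
            f.write(json.dumps({
                "ts": time.time(),
                "input": user_message,
                "output": reply,
                "flagged_for_human_review": flagged,
            }) + "\n")
        return reply
```

The point of the sketch is the asymmetry it makes visible: all of this engineering lives on the deployer's side of the API boundary, and none of it is credited by the statute.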
This is the corporate-practice-of-medicine and unauthorized-practice-of-law doctrines exported onto software infrastructure, with the same protectionist logic and the same incumbent-friendly outcome. The startups get sued. The platforms get subpoenaed and settle. The licensed professions keep their monopoly.
A statute that immunizes the upstream builder and crushes the downstream deployer is not consumer protection. It is industrial policy with a tort wrapper.
#AIGovernance #LegalTech #HealthTech #AIRegulation #LegalAI
The Llama complaint just gave in-house counsel a three-stage AI audit map
Five major publishing houses (Elsevier, Cengage, Hachette, Macmillan, and McGraw Hill), along with author Scott Turow, filed a putative class action against Meta and Mark Zuckerberg on May 5, 2026, alleging willful infringement of millions of textual works used to train Llama.
The case is Elsevier Inc. v. Meta Platforms Inc., No. 1:26-cv-03689, in the Southern District of New York.
What makes this complaint operationally useful is its structure. The filing attacks Meta's use of the works at three distinct stages: initial torrenting from pirate libraries like Anna's Archive, LibGen, and Sci-Hub; an intermediary stage where copies are made during the training process; and the end-game in which Meta is flooding the market with AI-generated substitutes.
Oppenheim + Zebrak, joined by Debevoise & Plimpton and Keller Rohrback, did not plead training as a single legal event. They pleaded it as a pipeline.
That distinction is the gift to in-house counsel.
Acquisition, intermediate reproduction, and downstream output are three separate exposure surfaces with three separate evidentiary records. A fair use defense at the training layer does nothing for you if the upstream corpus came from a torrent client, and nothing for you if the downstream output substitutes for a licensed work.
Every counsel sitting on top of a generative AI deployment should now audit those three checkpoints independently. Where did the data come from. What was copied during training and where does it sit. What is the model producing and against which markets.
The complaint also alleges Meta illegally removed copyright management information from the works, which adds a DMCA §1202 layer on top of the §106 claims. That is a fourth checkpoint, and it lives in the data preparation step most engineering teams treat as housekeeping.
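The complaint's structure converts directly into an audit artifact. A minimal sketch of the four checkpoints as a reviewable record follows; the schema and field names are mine, not the filing's, and the questions paraphrase the stages described above.

```python
# Illustrative audit skeleton built from the complaint's structure.
# Schema and field names are hypothetical, not drawn from the filing.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    question: str
    evidence: list[str] = field(default_factory=list)  # licenses, logs, attestations
    owner: str = "unassigned"                          # who answers for this surface

AUDIT = {
    "acquisition": Checkpoint(
        "Where did every training corpus come from, and under what license?"),
    "intermediate_copies": Checkpoint(
        "What was reproduced during preprocessing and training, and where does it persist?"),
    "output": Checkpoint(
        "What does the model produce, and against which licensed markets does it compete?"),
    "cmi_integrity": Checkpoint(
        "Was copyright management information stripped during data preparation? (§1202)"),
}

def open_surfaces(audit: dict[str, Checkpoint]) -> list[str]:
    """Checkpoints with no evidentiary record yet; each is a separate exposure."""
    return [name for name, cp in audit.items() if not cp.evidence]
```

Each checkpoint fails or survives on its own record, which is exactly why the pipeline framing matters more than any single fair use argument.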
Plaintiffs are seeking monetary and injunctive relief, including an order to destroy all infringing copies in defendants' possession or control. Destruction remedies do not respect model weights as a special category.
Training is not one act. It is a supply chain. The complaint that just landed in SDNY is the cleanest map of that chain anyone has filed.
#AIGovernance #CopyrightLaw #LegalTech #GenerativeAI #InHouseCounsel
Colorado's AI Act collapse is a warning, not a speed bump
Colorado's AI Act (SB24-205) is effectively frozen weeks before its June 30, 2026 effective date, following an enforcement stay, active litigation, and a legislature that has already begun walking back its own statute.
This is not a regulatory pause. It is a first-mover failure in real time.
Colorado passed SB24-205 in May 2024, positioning itself as the first state to impose comprehensive algorithmic accountability requirements on high-risk AI systems. The framework covered consequential decisions in employment, housing, credit, and healthcare. It was detailed, ambitious, and, as of now, unenforceable.
The litigation challenging the Act did not come from fringe actors. It came from industries that had spent 18 months building compliance programs around a statute the state can no longer defend on its current timeline.
That is the structural problem first-mover regulation creates. Compliance costs are front-loaded. Legal exposure arrives before enforcement infrastructure exists. And when the statute stalls, the regulated community absorbs the cost of preparation without receiving the benefit of certainty.
Every state capitol watching this collapse will draw the same conclusion. Autonomous state AI frameworks carry litigation exposure, preemption risk, and political reversal cycles that erode the very regulatory clarity they promise to deliver. The incentive now points toward waiting for federal preemption architecture and shaping that process rather than absorbing the cost of pioneering it alone.
First-mover advantage is a product strategy. In administrative law, moving first without durable enforcement capacity is not leadership. It is a liability that competing states will cite as evidence for exactly the restraint Colorado failed to exercise.
#AIGovernance #TechPolicy #LegalTech #RegTech
The Code is not a compliance checkbox: it is a market access condition
The European Union's AI Act General-Purpose AI Code of Practice is moving toward finalization, and experts quoted in Compliance Week are already calling it the de facto standard for AI governance globally.
That framing matters more than the regulation itself.
When practitioners outside the drafting process start describing a voluntary code as de facto, they are not making a legal observation. They are describing a procurement reality that will arrive before most legal teams have finished their gap analysis.
Enterprise buyers in financial services, healthcare, and public sector contracting already treat GDPR alignment as a baseline filter. Not a differentiator. A floor. The GPAI Code is on the same trajectory, and the timeline is compressed.
Within twelve to eighteen months, organizations seeking high-value partnerships in European markets will encounter the Code not as a regulatory question but as a due diligence question. Procurement committees do not wait for enforcement guidance. They wait for the first vendor that can demonstrate alignment, then they update their supplier criteria accordingly.
This is how voluntary frameworks become structural barriers. Not through legislation, but through the compounding decisions of procurement officers, partnership leads, and board-level risk committees who prefer the counterparty that has already done the work.
The organizations that treat the Code as a future compliance obligation are solving the wrong problem. The organizations building alignment into their AI governance architecture now are solving a commercial one.
The gap between those two groups will be visible in deal flow before it is visible in regulatory filings.
#AIGovernance #EUAIAct #EnterpriseAI #Compliance
When Washington enters the room: DOJ, xAI, and the jurisdictional map no one has finished drawing
On April 24, 2026, the Department of Justice intervened in a lawsuit filed by xAI challenging Colorado's SB 205, the state's consumer protection law governing high-risk AI systems.
xAI's position is that the Colorado law is preempted by federal authority and imposes unconstitutional burdens on AI developers operating across state lines. The DOJ did not step in to agree.
Federal intervention in a state AI case changes the weight of what is being decided. This is no longer a dispute between a technology company and a state legislature. It is the federal government staking a position on where the boundary between state regulation and preemption doctrine actually sits.
That boundary is not settled. No court has drawn it cleanly for AI systems. The DOJ's filing means the next ruling that attempts to do so will carry the government's reasoning inside it.
Colorado is not alone. At least 40 states have introduced or passed AI-related legislation since 2023. Each statute carries its own definitions, thresholds, and compliance triggers. A single enterprise AI deployment touching multiple states now intersects with multiple legal frameworks, none of which are harmonized.
Compliance teams that built their AI governance posture around waiting for a federal standard have a new variable. The federal government is not arriving with a preemption blanket. It is arriving as a participant in litigation that will define what states are permitted to do.
For organizations designing enterprise AI programs, jurisdictional exposure is no longer a future risk to model. It is a present condition to manage.
The question on every deployment roadmap should now be explicit: which states are you in, what do their laws require, and what does the DOJ's intervention tell you about how long the current ambiguity will persist.
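To make that question concrete: the two statutes discussed in these posts already diverge on who is regulated and what triggers coverage. A minimal sketch of the exposure register this implies is below; the two entries restate only facts cited above, and the schema itself is a hypothetical starting point, not a survey of the 40-plus state frameworks.

```python
# Sketch of a per-state exposure register. Entries restate facts from the
# posts above; the schema is hypothetical and deliberately incomplete.
from dataclasses import dataclass

@dataclass
class StateAIRule:
    statute: str
    regulated_party: str   # who carries the exposure
    trigger: str           # what brings a deployment in scope
    status: str            # enforcement posture as of this writing

EXPOSURE = {
    "CO": StateAIRule(
        statute="SB24-205",
        regulated_party="operators of high-risk AI systems",
        trigger="consequential decisions in employment, housing, credit, healthcare",
        status="effective date June 30, 2026; enforcement stayed, litigation pending",
    ),
    "NY": StateAIRule(
        statute="SB 7263",
        regulated_party="proprietors that deploy chatbots (upstream licensors excluded)",
        trigger="chatbot systems used to interact with users",
        status="advanced to third reading March 4, 2026",
    ),
}

def live_exposure(deployment_states: list[str]) -> dict[str, StateAIRule]:
    """The subset of a deployment's states with a statute on this register."""
    return {s: EXPOSURE[s] for s in deployment_states if s in EXPOSURE}
```

Forty-plus unharmonized rows is the actual shape of the problem, and the DOJ's intervention will decide how many of those rows survive.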
#AIGovernance #RegulatoryStrategy #EnterpriseAI #LegalTech #ComplianceRisk
August 2026 is not a legal deadline, it is a market-access condition
Holland & Knight has flagged August 2, 2026 as the compliance date U.S. companies need to have circled. Under the EU AI Act, any business operating high-risk AI systems inside the European Union must meet documentation, transparency, and human-oversight requirements by that date, regardless of where the company is headquartered.
This is not a GDPR-style grace period with soft enforcement at the edges. The Act creates a tiered liability structure, with fines reaching 3% of global annual turnover for non-compliance in the high-risk category.
The practical effect is straightforward. A U.S. firm deploying AI in hiring, credit scoring, medical devices, or critical infrastructure that touches EU markets now has a regulatory threshold it must clear to maintain access to those markets. Compliance is the ticket, not a preference.
Brussels has done this before. GDPR rewrote data handling practices at U.S. multinationals not because American regulators required it, but because EU market access did. The EU AI Act follows the same structural logic.
What is different this time is the operational depth of what is required. GDPR asked companies to manage data. The AI Act asks companies to govern systems, document risk assessments, maintain technical files, and demonstrate human oversight at the point of deployment. That touches engineering, legal, procurement, and product simultaneously.
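"Evidence of process" means artifacts, not policy decks. A minimal sketch of the per-system record this implies is below; the field names paraphrase the documentation, risk-assessment, and human-oversight requirements described above, and they are my shorthand, not the Act's official schema.

```python
# Shorthand skeleton for the per-system evidence the Act expects.
# Field names paraphrase the requirements named above; this is not the
# Act's official documentation schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class HighRiskSystemFile:
    system_name: str
    intended_purpose: str
    risk_assessment_ref: str        # pointer to the documented assessment
    technical_file_ref: str         # the maintained technical documentation
    human_oversight_mechanism: str  # who can intervene at deployment, and how
    last_reviewed: date

    def gaps(self) -> list[str]:
        """Empty references are the 'framework without process' failure mode."""
        return [
            name for name, value in vars(self).items()
            if isinstance(value, str) and not value.strip()
        ]
```

A record like this forces the cross-functional question early: engineering owns the technical file, legal owns the assessment, and someone accountable owns the oversight mechanism.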
Firms that have treated AI governance as a policy document exercise will discover in 2026 that regulators want evidence of process, not the existence of a framework.
Sixteen months is enough time to build the right compliance architecture. It is not enough time to start from nothing.
#AIGovernance #EUAIAct #RegulatoryStrategy #GlobalCompliance
The eighteen-month window: why hedging is now a structural competency
Foreign Policy published a piece this week with a premise that should land hard in every boardroom running a global operations review: the smooth flows of a rules-based world have clotted, and states are finding new pathways because the old ones are closing.
The same dynamic applies to organizations. The firms that treated geopolitical hedging as an emergency response in 2022 and 2023 are now discovering whether that response calcified into actual competency or dissolved back into business-as-usual.
The distinction matters more than most leadership teams currently acknowledge.
Hedging is not diversification. Supplier diversification is a procurement decision. Hedging across supply chains and diplomatic channels is a governance decision: it changes who sits in what room, what intelligence feeds into scenario planning, and how fast the organization can reroute when a framework assumption fails.
The organizations building that capability now are not preparing for a crisis. They are building the structural advantage that will be visible eighteen months from now, when the managed retreat of rules-based frameworks has moved from trend to operating condition.
At the conferences we design for senior leaders across trade, finance, and policy, this is the conversation that keeps extending past its scheduled time. Not because the room lacks answers, but because most institutions have not yet formalized the question.
A geopolitical risk framework that lives in a strategy deck reviewed once a year is not a hedge. It is a record of intentions that will be irrelevant the next time a corridor closes or a bilateral arrangement shifts without warning.
The window to institutionalize this is open. It will not stay open.
#GeopoliticalRisk #GlobalStrategy #SupplyChainResilience #TradePolicy
Ken Griffin's retail warning and the market he happens to compete in
Ken Griffin told the Financial Times that retail investors may not fully understand private credit, specifically that they cannot quickly withdraw their money from the funds holding it.
He is not wrong about the liquidity risk. Redemption gates, notice periods, and NAV-based pricing are genuine structural features that individual investors routinely underestimate.
He is also the founder of Citadel, one of the largest operators in liquid, exchange-traded markets, which happen to be the direct competitive alternative to the private credit allocations he is warning retail investors away from.
That is not a disqualifying conflict. Informed critics with financial interests are still allowed to be right. But the framing deserves scrutiny before it becomes a regulatory talking point.
Griffin's argument, stripped of the positioning, is that complexity plus illiquidity plus retail distribution is a combustible combination. The data does not disagree. Interval funds, non-traded BDCs, and semi-liquid credit vehicles have expanded dramatically into channels that were built for daily-liquidity products. Suitability frameworks have not kept pace.
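The liquidity gap is quantifiable. A back-of-envelope sketch, assuming a common interval fund configuration of a 5% quarterly repurchase cap with pro-rata fills; actual funds set their own terms, so treat the numbers as illustrative.

```python
# Back-of-envelope only. The 5% quarterly cap and full pro-ration are
# assumptions; real repurchase terms vary fund to fund.
def quarters_to_exit(fraction_out: float, quarterly_cap: float = 0.05) -> int:
    """Quarters until `fraction_out` of a position is redeemed, assuming
    every holder tenders everything and fills are pro-rata each quarter."""
    remaining, quarters = 1.0, 0
    while 1.0 - remaining < fraction_out:
        remaining *= 1.0 - quarterly_cap  # each quarter: cap% of what is left
        quarters += 1
    return quarters

print(quarters_to_exit(0.50))  # 14 quarters: about three and a half years to get half out
```

That arithmetic is what "cannot quickly withdraw" means in a stressed quarter, and it is the part daily-liquidity-trained investors reliably miss.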
What the argument omits is that exchange-traded alternatives carry their own retail failure modes. Volatility, leverage, and intraday pricing create different literacy traps, not cleaner ones.
The honest version of this conversation is not liquid good, illiquid bad. It is that retail access to any complex asset class outpaces retail understanding of it, and the industry that benefits from that access has never volunteered to slow it down.
Griffin's warning is worth hearing. The venue from which it was issued is worth noting.
#PrivateCredit #AlternativeAssets #RetailInvestors #AssetManagement