March 4, 2026
New York regulates 38 licensed professions. Senate Bill S7263 would make chatbot operators liable for AI responses covering at least 14 of them, plus law.
A bill heading to the New York State Senate floor would create civil liability when a consumer-facing chatbot gives "substantive" advice in licensed domains like medicine, law, licensed professional engineering, and mental health counseling (plus a long tail of other professions, including podiatry).
This hits consumers first and builders next, including the government and nonprofit teams whose chatbots explain tenant rights or basic healthcare next steps. Most lawsuits would likely cluster around ordinary requests: translate medical jargon, summarize a legal notice, or suggest next questions to ask a professional.
Senate Bill S7263, introduced by Senator Kristen Gonzalez in April 2025, reached the Senate floor calendar on February 26, 2026. If it passes the Senate, crosses to the Assembly (where companion bill A6545 already exists), and gets the Governor's signature, chatbot deployers get 90 days before liability starts.
What the bill says
The full bill is two pages (PDF). Page 1 covers definitions and scope; page 2 covers prohibited conduct, liability, and disclosure requirements.
S7263 adds a new section (§ 390-f) to New York's General Business Law. The core prohibition:
A proprietor of a chatbot shall not permit such chatbot to provide any substantive response, information, or advice, or take any action which, if taken by a natural person, would constitute a crime under section sixty-five hundred twelve or sixty-five hundred thirteen of the education law
In plain English: if a human without a professional license could not legally give that same advice in New York, the chatbot cannot give it either.
The gray zone is where most chatbot use actually lives. Modern models summarize, prioritize, and recommend next steps by default, and that style can be framed as professional judgment under a broad reading of "substantive."
Here are the professions covered:
| NY Education Law Article | Profession |
|---|---|
| 131 | Medicine |
| 133 | Dentistry |
| 135 | Veterinary Medicine |
| 136 | Physical Therapy |
| 137 | Pharmacy |
| 139 | Nursing |
| 141 | Podiatry |
| 143 | Optometry |
| 145 | Engineering, Land Surveying, Geology |
| 147 | Architecture |
| 153 | Psychology |
| 154 | Social Work |
| 163 | Mental Health Practitioners |
The bill also reaches "unauthorized legal practice" through Judiciary Law Article 15.
Important nuance: "engineering" here means New York Education Law Article 145 professions (professional engineering, land surveying, and geology), not software engineering.
That is a broad set of categories covering everyday questions people already ask AI: "What does this rash look like?" "Can my landlord do this?" "Is this wall load-bearing?" "What are side effects of this medication?" "How do I deal with my anxiety?"
"Proprietor" means whoever deploys the chatbot
The bill defines "proprietor" as any person, business, or entity that "owns, operates or deploys a chatbot system used to interact with users." It explicitly excludes "third-party developers that license their chatbot technology to a proprietor."
That definition is broad. It includes startups and enterprise software teams, but also hospitals, legal aid groups, nonprofits, schools, and government agencies that deploy chatbots for public guidance (and that breadth is exactly what opens the bill to stronger constitutional attack, as discussed at the end of this piece).
OpenAI, Anthropic, and Google are "proprietors" for ChatGPT, Claude, and Gemini. But when their models are licensed via API and deployed by someone else, the deployer is the proprietor. This hits everyone from the largest AI platforms to small teams shipping lightweight wrappers over OpenAI or Anthropic APIs. Whoever runs the interface likely carries the risk, and users pay the price when useful guidance gets blocked.
Disclaimers explicitly do not work
From the bill text:
A proprietor may not waive or disclaim this liability merely by notifying consumers that they are interacting with a non-human chatbot system.
This breaks the industry's standard playbook: put a warning label on the chatbot and move on. Under S7263, liability turns on what the bot says, not what the disclaimer says (which pushes operators to block answers in advance and fuels the prior-restraint argument discussed below).
The bill still requires disclosure (section 4): users must be told they are interacting with AI, in the same language as the conversation, at a prominent font size. It tells people what they are using, but it does not reduce legal exposure.
The private right of action creates a serial plaintiff goldmine
From the bill:
A person may bring a civil action to recover actual damages and, if it is found that such proprietor has willfully violated this section, the violator shall be liable for actual damages together with costs and reasonable attorneys' fees and disbursements incurred by the person bringing such action.
Fee shifting changes the economics. It makes lower-value cases worth filing because the plaintiff's lawyer can be paid by the defendant.
New York has already seen the serial-plaintiff pattern in web accessibility litigation: high-volume filings, template complaints, and settlement pressure. As summarized in Accessible Minds' 2026 ADA web lawsuit analysis, 2025 produced more than 5,000 digital accessibility lawsuits, including 1,427 repeat-defendant cases (about 45-46 percent of federal filings). The same analysis highlights concentrated targeting in a few sectors (about 70 percent e-commerce and 21 percent food service) and repeated activity by a small set of plaintiff firms. S7263 has a similar legal setup: private lawsuits, damages, fee shifting, and an ambiguous "substantive" standard.
It also notes that smaller businesses still make up most defendants. That matters here. If S7263 passes unchanged, smaller AI startups, indie wrappers, and community organizations may face the earliest settlement pressure because they have fewer legal resources to fight broad claims.
What "substantive response" actually means
Who knows! And that's part of the problem. The bill prohibits "any substantive response, information, or advice" in covered domains, but never defines "substantive." That ambiguity is a key weakness for vagueness and overbreadth challenges.
- Medical: "What is ibuprofen?" vs. "Given my conditions and meds, what dosage should I take?"
- Legal: "What is an eviction notice?" vs. "What should I file tomorrow, and by when?"
- Engineering: "What is a load-bearing wall?" vs. "Is this wall safe to remove in my house?"
Those boundaries are exactly where lawsuits will cluster.
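To make the line-drawing concrete, here is a minimal, hypothetical sketch of the crude pre-response filter a deployer might reach for first. Every name in it (`COVERED_DOMAIN_TERMS`, `PERSONALIZED_MARKERS`, `looks_substantive`) is invented for illustration; nothing here comes from the bill, which never defines the standard this code is trying to approximate.

```python
# Hypothetical sketch of a crude pre-response filter a deployer might bolt on.
# The keyword lists and the "personalized phrasing" heuristic are invented for
# this post; the bill itself never defines "substantive," which is the problem.

COVERED_DOMAIN_TERMS = {
    "medical": ["ibuprofen", "dosage", "symptom", "medication", "rash", "diagnosis"],
    "legal": ["eviction", "sue", "custody", "notice to quit", "file a motion"],
    "engineering": ["load-bearing", "structural", "foundation", "beam"],
}

# Heuristic: first-person, situation-specific phrasing tends to read as
# "advice about my case" rather than general information.
PERSONALIZED_MARKERS = ["my ", "i have", "should i", "can i", "in my house"]


def looks_substantive(query: str) -> bool:
    """Guess whether a query crosses from information into advice."""
    q = query.lower()
    in_covered_domain = any(
        term in q for terms in COVERED_DOMAIN_TERMS.values() for term in terms
    )
    personalized = any(marker in q for marker in PERSONALIZED_MARKERS)
    return in_covered_domain and personalized


print(looks_substantive("What is ibuprofen?"))                                        # False
print(looks_substantive("Given my conditions and meds, what dosage should I take?"))  # True
print(looks_substantive("Is this wall safe to remove in my house?"))                  # False (!)
```

Note the failure modes baked in: the first-person medical question gets blocked, while the engineering gray-zone question slips through because it never uses a flagged keyword. Smarter classifiers narrow that gap but never close it, because the legal standard they are approximating is undefined.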
This bill hurts the people it claims to protect
Access gets worse for people with the least slack
The first people hit are the ones with the fewest alternatives. A tenant who cannot afford a lawyer uses a chatbot to understand eviction timelines while waiting for legal aid. A parent at 11pm uses a chatbot to decide whether symptoms are urgent before they can reach a pediatrician. Government and nonprofit chatbots that provide this first-line guidance face the same liability pressure as private companies.
A 2025 panic bill with 2026 consequences
AI hallucinations are real. But this bill was drafted in April 2025, when model failures were louder and tools were rougher. The landscape has moved fast since then: better models, stronger guardrails, better retrieval and grounding, and clearer uncertainty signals. AI is still imperfect. This law still reads like a first-reaction policy.
I have a developer friend who still avoids AI coding tools because he tried Cursor in early 2025 and got bad results. Fair reaction then. Bad policy now. The right question is: have we tried this lately?
Protectionist effect, intended or not
Even if protecting incumbent professions was not the intent, the effect is protectionist. Restrict low-cost guidance channels, and paid professional channels become the default again. That is rent-seeking pressure, and it lands hardest on people with the least money.
What this means for consumers and AI products
If S7263 passes as written:
- Consumers with fewer resources lose first-line legal and medical guidance.
- Government and nonprofit chatbots face the same liability pressure as for-profit AI companies.
- General chatbots will either over-block useful answers or accept open-ended legal risk.
- Vertical tools in health, law, and engineering become direct targets for litigation.
- Customer support bots in insurance, pharmacy, and healthcare will need major restrictions.
The 90-day clock
The bill takes effect 90 days after becoming law. That is a short runway to audit responses across 14 professional domains plus law, build filters, and avoid gutting product usefulness.
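What would that audit even look like in practice? A hypothetical sketch, reusing the `looks_substantive()` heuristic from earlier; `call_model()` is a canned stand-in for a deployer's real chat endpoint, and the probe prompts are illustrative, not any official test set:

```python
# Hypothetical audit loop. Assumes the looks_substantive() sketch from above;
# call_model() is a canned stand-in, not any real API.
from collections import Counter

# One probe per covered area shown here; a real audit needs hundreds per
# domain to map where a model drifts from explanation into advice.
PROBE_PROMPTS = {
    "medicine":    "Given my conditions and meds, what dosage should I take?",
    "law":         "What should I file tomorrow to stop my eviction?",
    "engineering": "Is this wall safe to remove in my house?",
    "pharmacy":    "Can I take ibuprofen with my blood pressure medication?",
}


def call_model(prompt: str) -> str:
    """Stand-in for the deployer's real chat endpoint."""
    return "Here is some general information; please consult a professional."


def audit() -> Counter:
    results = Counter()
    for domain, prompt in PROBE_PROMPTS.items():
        answer = call_model(prompt)
        # Flag exchanges that read as case-specific advice in a covered domain.
        verdict = "flagged" if looks_substantive(prompt + " " + answer) else "ok"
        results[verdict] += 1
        print(f"{domain:12} -> {verdict}")
    return results


audit()  # with the toy filter: medicine, law, pharmacy flagged; engineering slips through
```

Ninety days is a short window to replace that toy with something defensible across every covered domain.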
It may face First Amendment challenges. Waiting for litigation is not a plan.
For teams deploying AI chatbots in New York, and for people relying on low-cost medical or legal guidance, the time to pay attention is now. The full text of S7263 is short enough to read in five minutes.
Constitutional challenges
If S7263 becomes law, constitutional challenges are probable. The strongest one is the First Amendment.
Even before any court ruling, the law would likely chill product decisions. Startups, nonprofits, and public-sector teams facing vague liability rules will ship less, block more, or try to avoid New York users (and geofencing is a weak fix when users travel, use VPNs, or appear through shifting mobile IP ranges). That is how overbroad regulation causes harm: fewer useful tools reach the people who need them, even if parts of the law are later struck down.
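Geofencing deserves a concrete look too, because the weakness is structural, not an implementation detail. A minimal sketch, with a toy `ip_to_region()` standing in for any real GeoIP lookup:

```python
def ip_to_region(ip: str) -> str:
    """Toy stand-in for a GeoIP lookup; a real one returns codes like 'US-NY'."""
    return "US-NY" if ip.startswith("74.") else "US-NJ"  # illustration only


def allow_request(ip: str) -> bool:
    # The check sees an IP address, not a person. A New Yorker on a VPN,
    # traveling out of state, or behind a carrier pool that rotates through
    # New Jersey resolves elsewhere and sails straight through.
    return ip_to_region(ip) != "US-NY"
```

No lookup table fixes that mismatch: the filter classifies network addresses, while the statute regulates interactions with people.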
| Likely challenge | Core argument | Merits | Likelihood of success |
|---|---|---|---|
| First Amendment (content-based speech restriction) | The law restricts speech by subject matter (medical, legal, engineering, etc.) and by who is speaking (unlicensed chatbot deployers). | Strong. The statute targets the content of speech directly, and "substantive response" reaches plain informational speech. Courts may apply strict scrutiny. | High to medium-high |
| First Amendment (overbreadth) | The law sweeps in a large amount of protected speech, including explanations, summaries, and educational guidance. | Strong to moderate. The more broadly and less precisely "substantive" is interpreted, the stronger the overbreadth claim gets. | Medium-high |
| Due Process (vagueness) | "Substantive response" is undefined, so operators cannot tell what is legal without guessing. | Strong. Vagueness plus private lawsuits plus attorney fees creates aggressive over-filtering pressure. | Medium-high |
| Prior restraint theory | The law effectively pushes pre-publication suppression: operators must block answers in advance to avoid liability. | Mixed. This is not a classic licensing board or injunction regime, so "prior restraint" is not the cleanest doctrinal fit. But the chilling-effect argument is real. | Low to medium |
| Dormant Commerce Clause | New York is regulating speech products used nationwide, including out-of-state operators with no physical NY presence. | Moderate. Depends on how courts view extraterritorial burden versus NY's consumer-protection interest. | Medium |
| Compelled speech (disclosure mandate) | Mandatory chatbot disclosure language and font-size requirements force speech. | Weaker challenge. Courts often uphold factual disclosure requirements, especially in consumer contexts. | Low |
Bless up! 🙏✨