New York Enacts Comprehensive AI Safety Framework, Following California's Lead
New York has officially enacted sweeping AI safety regulations through the RAISE Act, becoming the second state to establish substantial oversight of artificial intelligence development. Governor Kathy Hochul signed the legislation into law after state lawmakers first passed it in June, though not before extended negotiations with the technology industry shaped the final version.
What the RAISE Act Requires
The legislation imposes robust transparency requirements on large AI developers. Companies must publicly disclose their safety protocols and notify state authorities of any safety incident within a strict 72-hour reporting window. The framework also establishes a dedicated oversight body within the Department of Financial Services to monitor AI development activities across the state.
Non-compliance carries serious consequences. Organizations that fail to submit required safety reports, or that provide false information, face fines of up to $1 million, rising to $3 million for repeat violations. This enforcement mechanism reflects policymakers’ commitment to ensuring accountability throughout the AI development lifecycle.
Building on State-Level Momentum
New York’s regulatory approach mirrors similar legislation signed by California Governor Gavin Newsom in September. Both states have established themselves as leaders in AI governance, particularly as federal regulation remains underdeveloped. Hochul emphasized this alignment during her announcement, stating: “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public.”
State Senator Andrew Gounardes, one of the bill’s principal sponsors, highlighted the significance of passing the legislation despite industry opposition: “The technology sector attempted to weaken our bill, but we remained committed and secured passage of the strongest AI safety law in the nation.”
Industry Response and Division
The legislation has generated mixed reactions across the technology sector. Major AI companies OpenAI and Anthropic publicly expressed support for the bill while simultaneously calling for complementary federal regulations. Anthropic’s head of external affairs, Sarah Heck, stated: “The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them.”
However, not all industry players have embraced this approach. The split reflects an ongoing tension between those advocating proactive safety measures and those concerned about the costs of regulatory compliance.
Federal Policy Landscape
This state-level action comes against the backdrop of shifting federal dynamics. Recent executive orders have directed federal agencies to challenge existing state AI laws, representing an attempt to limit states’ regulatory authority. This tension between federal and state authority over AI governance is expected to generate significant legal disputes and policy debates in the coming months.
The passage of New York’s RAISE Act signals that major states are willing to establish their own AI safety frameworks regardless of federal direction, creating a patchwork of state-level regulations that companies must navigate.