SF4509 (Legislative Session 94 (2025-2026))
Artificial intelligence safety and disclosure requirements establishment (RAISE Act)
AI Generated Summary
Purpose
This bill creates a new framework in Minnesota to regulate artificial intelligence (AI) for safety and accountability. It defines key terms related to AI and lays out requirements for developers to test, disclose, and manage AI systems before and during deployment, with enforcement and remedies for violations.
Main Provisions
Key definitions:
- Artificial intelligence: a machine-based system that makes predictions, recommendations, or decisions and uses machine and human inputs to perceive environments, model information, and propose actions.
- Artificial intelligence model: the part of a system that uses AI technology and machine-learning methods to produce outputs from inputs.
- Critical harm: death or serious physical or mental injury to two or more people, or at least $1,000,000 in damage to rights or property, caused or materially enabled by an AI model.
- Developer: a person who has trained at least one AI model.
- Safety and security protocol: a documented plan describing protections and procedures to reduce risk of critical harm, cybersecurity protections, testing procedures to evaluate risk and potential misuse, and designated senior personnel responsible for compliance.
- Safety incident: a known event or evidence of increased risk of critical harm, such as autonomous AI actions, theft or misuse of model data/weights, or unauthorized access.
Before deploying an AI model:
- Implement a written safety and security protocol.
- Keep an unredacted copy of the protocol (including updates) for the entire deployment period plus five years afterward.
- Publish a redacted version of the protocol for public view and send the redacted copy to the state Attorney General (AG).
- Allow the AG access to the redacted protocol as required by federal law.
- Record and retain, for the deployment period plus five years, information about the tests and test results used to assess the model, in enough detail that third parties could replicate the testing.
- Implement safeguards to reduce risk of critical harm.
- Designate senior personnel responsible for ensuring compliance.
Safety and security protocol details:
- Includes protections and procedures to reduce risk.
- Describes cybersecurity protections to prevent unauthorized access or misuse.
- Describes testing to evaluate risk, potential evasion, and potential misuse to create new models with higher risk.
- Describes how the developer or third party will comply with the statute.
- Designates senior personnel for accountability.
Prohibition on deployment:
- A developer must not deploy an AI model if doing so creates an unreasonable risk of critical harm.
Annual review and modifications:
- Developers must annually review the safety protocol to account for changes in AI capabilities and industry best practices.
- If the protocol is materially modified, the developer must publish the updated protocol in the same manner as the initial publication.
Safety incident disclosure:
- Developers must disclose each safety incident to the AG within 72 hours of learning of the incident, or within 72 hours of having enough facts to reasonably believe an incident occurred.
- Disclosures must include the date of the incident, why it qualifies as a safety incident, and a plain-language description.
False or misleading statements:
- Developers may not knowingly make false or materially misleading statements or omissions in documents related to this act.
Transparency and testing requirements:
- The act emphasizes transparency in safety protocols and requires documentation that enables third parties to replicate testing efforts.
Enforcement and Remedies
Attorney General enforcement:
- The AG may bring a civil action for violations of the safety and transparency provisions, with civil penalties of up to $10,000,000 for a first violation and up to $30,000,000 for any subsequent violation, plus injunctive or declaratory relief.
Private right of action:
- A person injured by a violation may sue to recover damages, costs, and disbursements (including reasonable attorney fees) and may seek other equitable relief as determined by the court.
Significant Changes to Minnesota Law
- Establishes a new statutory framework (often referred to as the RAISE Act) focused on artificial intelligence safety, transparency, and accountability.
- Creates mandatory pre-deployment safety protocols, mandatory disclosure to the AG (including redacted public versions and AG access), and detailed recordkeeping and testing requirements.
- Introduces annual reviews and material-modification publishing requirements for safety protocols.
- Mandates timely safety-incident reporting to the AG and prohibits deployment if there is an unreasonable risk of critical harm.
- Establishes both a strong enforcement regime (AG civil penalties) and a private right of action for individuals harmed by violations.
Relevant Terms: artificial intelligence, artificial intelligence model, critical harm, safety incident, developer, safety and security protocol, transparency requirements, before deploying, testing and replication, redacted safety protocol, unredacted safety protocol, attorney general, civil penalties, injunctive relief, private right of action, unreasonable risk of critical harm, annual review, material modification, cybersecurity protections, replication of testing, safe deployment period (deployment period plus five years)
Bill text versions
- Introduction (PDF)
Actions
| Date | Chamber | Where | Type | Name | Committee Name |
|---|---|---|---|---|---|
| March 16, 2026 | Senate | | Action | Introduction and first reading | |
| March 16, 2026 | Senate | | Action | Referred to | Commerce and Consumer Protection |
Citations
[
{
"analysis": {
"added": [
"Adds new sections 325M.40 to 325M.42 implementing the Responsible Artificial Intelligence Safety and Education Act within chapter 325M."
],
"removed": [],
"summary": "This bill proposes adding a new artificial intelligence safety and education framework within Minnesota Statutes chapter 325M (the RAISE Act), with new sections 325M.40 to 325M.42 establishing safety, transparency, enforcement, and civil remedies for AI models.",
"modified": []
},
"citation": "325M",
"subdivision": ""
}
]