HF4423 (Legislative Session 94 (2025-2026))
Social media behavioral threat assessment reporting requirement created.
Related bill: SF4674
AI Generated Summary
Purpose
- Establish a social media threat assessment and reporting requirement.
- Require social media platforms to continuously review user-generated content to identify articulable threats of targeted violence directed at educational facilities or religious institutions, and to report those threats to Minnesota’s Fusion Center (MNFC).
Main provisions
Definitions added or clarified
- Accessible user interface: a way for a user to input data, make a choice, or take action on a social media platform in two clicks or fewer.
- Account holder: a natural person or legal person who holds an account or profile with a social media platform.
- Account interactions: actions within a social media platform that could have a negative impact on another account holder (e.g., sending messages, reporting users, commenting, liking, resharing, posting user-generated content).
- Algorithmic ranking system: computational processes that determine the order, priority, or prominence of content shown to a user (including search results, recommendations, and content display).
- Content: any media a user views, reads, watches, listens to, or interacts with on a social media platform; content includes other accounts or profiles when surfaced to a user.
- Expressed preferences: clear, specific indications of a user’s engagement preferences; not based on time spent or low-quality content, and not obtained through manipulative interfaces.
- Social media platform: an electronic medium that supports user-generated content and social interaction, with specific exclusions (e.g., certain types of services or networks listed in the bill).
- User-generated content: content created by an account holder and uploaded, posted, shared, or disseminated on the platform.
- Other terms (e.g., engage, conspicuously, time sensitive) are defined to guide interpretation.
New Section 325M.36: Social Media Behavioral Threat Assessment and Reporting
- Subd. 1. Definitions: for purposes of this section, includes educational facility and religious institution.
- Subd. 2. Threat assessment and reporting requirements:
  - Platforms must continuously review user-generated content to identify and report articulable threats of targeted violence directed at an educational facility or a religious institution.
  - Platforms may use user reporting and tools such as algorithms or AI to assist, but humans must review and assess content to determine whether it should be reported.
  - Platforms must report articulable threats to the MNFC and follow MNFC’s requests to share threat information, personal data, and other materials.
  - MNFC must designate staff who specialize in platform threat assessments to coordinate reports.
  - Platforms may also pursue other required reporting in addition to this section (e.g., to local or state law enforcement).
  - Platforms must report within 24 hours of discovering the threat; if there is an indication the individual may act within 24 hours after posting, reporting must occur immediately through practicable means.
- Subd. 3. Threat assessment factors:
  - Violent ideation (desire or fantasies about violence or murder).
  - Evidence of interest in violent topics, content, or groups that support violence, or past incidents of violence.
  - Evidence of aggressive or violent behavior in content (e.g., domestic violence, stalking, harassment, animal cruelty).
  - Direct threats made using platform features.
  - Expressions of desperation or despair indicating potential harm to self or others.
  - Fixations or stalking regarding a person, place, belief, or cause.
  - Indications of training or preparation for targeted violence (including weapons or related materials).
- Subd. 4. Enforcement:
  - The Attorney General may bring enforcement actions for violations.
  - Courts may order injunctions and other equitable relief; civil penalties can reach up to $1,000,000 per violation.
- Subd. 5. Data practices: data obtained under this section is treated as criminal investigative data.
- Subd. 6. Free speech: outside an investigation of an articulable threat, this section does not permit government entities to violate protected free speech rights.
- Subd. 7. Immunity from liability: a platform that follows these requirements is not liable for others’ criminal actions; no new private cause of action is created against individuals.
Significant changes to existing law
- Creates a new, mandatory threat-assessment and reporting duty for social media platforms touching on threats to schools and religious institutions.
- Expands statutory definitions to guide platform duties and interpretation.
- Establishes a formal link between platforms and the MN Fusion Center for threat reporting.
- Introduces a specific 24-hour reporting deadline and a framework for human review of automated tools.
- Adds enforcement mechanisms with substantial penalties and injunctive relief, plus data-handling rules.
- Provides a free-speech carve-out and liability immunity for platforms meeting the requirements.
Implications and implementation considerations
- Platforms will need dedicated teams and procedures to monitor content and coordinate with the MNFC.
- Potential resource and compliance costs for platforms, especially smaller ones.
- Stronger coordination between digital platforms and state law-enforcement infrastructure.
- Privacy and data-practices considerations due to the handling of user data in threat investigations.
- Clear protections for free speech outside threat investigations.
Relevant Terms
- accessible user interface
- account interactions
- algorithmic ranking system
- conspicuously
- content
- engage
- expressed preferences
- social media platform
- user-generated content
- articulable threat of targeted violence
- educational facility
- religious institution
- Minnesota Fusion Center (MNFC)
- threat assessment factors
- civil penalty
- injunctive relief
- data practices
- criminal investigative data
- immunity from liability
- free speech protections
Bill text versions
- Introduction (PDF)
Actions
| Date | Chamber | Where | Type | Name | Committee Name |
|---|---|---|---|---|---|
| March 18, 2026 | House | Action | Introduction and first reading, referred to | Commerce Finance and Policy |
Citations
[
{
"analysis": {
"added": [],
"removed": [],
"summary": "This act amends Minn. Stat. 2024 section 325M.31 (Definitions) to define terms for purposes of sections 325M.30 to 325M.34 and 325M.36 (social media behavioral threat assessment and reporting).",
"modified": []
},
"citation": "325M.31",
"subdivision": ""
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 1 under Minn. Stat. 325M.36 provides the Definitions related to the social media behavioral threat assessment and reporting provisions.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.1"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 2 sets forth authority and processes for reviewing user-generated content to identify articulable threats of targeted violence, with use of human review.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.2"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 3 lists threat assessment factors to consider when determining whether to report an articulable threat of targeted violence.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.3"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 4 provides enforcement provisions, including AG enforcement actions and remedies under this section.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.4"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 5 addresses data practices related to information obtained under this section and treatment of such data.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.5"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 6 preserves free speech rights outside the investigation of threats and clarifies limits on government action.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.6"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Subdivision 7 provides immunity from liability for platforms that comply with the section's requirements.",
"modified": []
},
"citation": "325M.36",
"subdivision": "subd.7"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Cross-reference to Minn. Stat. 8.31 establishes enforcement mechanisms and procedures if the state prevails in enforcing this section.",
"modified": []
},
"citation": "8.31",
"subdivision": ""
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "The data obtained under this section must be treated as criminal investigative data under Minn. Stat. 13.82, subd. 7.",
"modified": []
},
"citation": "13.82",
"subdivision": "subd.7"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Minn. Stat. 116J.39, subd. 1 is cited to define broadband service in the context of social media platform definitions and exclusions.",
"modified": []
},
"citation": "116J.39",
"subdivision": "subd.1"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Minn. Stat. 327.30, subd. 1 (paragraph f) provides the meaning of 'Religious institution' used in the definitions.",
"modified": []
},
"citation": "327.30",
"subdivision": "subd.1"
},
{
"analysis": {
"added": [],
"removed": [],
"summary": "Federal law reference defining 'telecommunications carrier' as part of the bill's contextual definitions.",
"modified": []
},
"citation": "47 U.S.C. § 153",
"subdivision": ""
}
]