In a motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment
Character AI cites First Amendment in lawsuit defense and faces other legal challenges.

Character AI, a platform for AI chatbot roleplay, is defending itself in a lawsuit that accuses it of contributing to a teen's suicide. The platform claims First Amendment protection, arguing that restricting its chatbots' output would violate users' rights, though whether that argument will succeed in court remains uncertain.
The lawsuit, filed by Megan Garcia, seeks to impose additional restrictions on Character AI's chatbots and also names Alphabet as a defendant. The case is one of several ongoing legal challenges facing Character AI, including claims that the platform exposed minors to inappropriate content and allegations that it promoted self-harm.
Further complicating matters, Texas Attorney General Ken Paxton is investigating Character AI for potential violations of children's online privacy and safety laws. Despite these challenges, Character AI has taken steps to improve platform safety, including leadership changes and the launch of new safety tools, while continuing efforts to boost user engagement.