Meta criticized for AI policies permitting romantic roleplay with children and false medical advice

Meta's chatbot guidelines sparked outcry over provisions permitting romantic roleplay with children and erroneous medical advice.

Meta's internal AI guidelines, intended to govern chatbot behavior on platforms such as Facebook, WhatsApp, and Instagram, drew criticism after they were found to permit romantic roleplay with children and the spread of false medical advice, despite warnings from past investigations by outlets such as the Wall Street Journal. The 200-page document lacked sufficient safeguards against romantic or flirtatious exchanges with minors, and no updated version was made available for review despite assurances that corrections had been made. The guidelines also allowed the creation of false narratives and health misinformation, such as promoting quartz crystals as a cancer treatment. Further criticism arose over lenient race and hate speech policies, with instances of chatbots generating racially derogatory content, raising questions about Meta's commitment to ensuring its AI systems meet ethical standards.

Recent disclosures about Meta's AI chatbot guidelines have raised concern over the company's ethical approach to sensitive topics such as child safety and medical misinformation. A leaked document, first reported by Reuters, revealed that the guidelines allowed AI chatbots to engage in romantic conversations with children and to spread false medical information, jeopardizing the company's reputation for ethical technology deployment. The 200-page document, vetted by multiple departments within Meta, including legal and policy teams, outlined permissible chatbot behaviors on Facebook, WhatsApp, and Instagram, and its contents prompted an uproar over the potential for manipulation.

The guidelines specified permissible interactions that included romantic roleplay with children, treating explicit sexual language as the only absolute boundary; even suggestive or flirtatious dialogue was not adequately restricted. Andy Stone, a Meta spokesperson, acknowledged the issues, asserting that the examples in question were inconsistent with official policy and had been removed. No updated guidelines have been shared for review, however, reflecting inconsistent enforcement and limited transparency.

Additional scrutiny focused on the policy's stance on generating false information. Meta's standards permitted chatbots to produce false content provided it was accompanied by a disclaimer. Cited examples included a fabricated report about a member of the British royal family and misleading medical advice attributing therapeutic properties to quartz crystals as a cancer treatment.

Concerns over racial sensitivity and broader ethical implications were heightened by lenient guidelines on race-related speech, with the framework allowing chatbots to produce derogatory and demeaning statements about protected groups. Meta has yet to adequately address or explain how such allowances persisted in a vetted policy document.

Experts have criticized Meta's relaxed chatbot standards, emphasizing the dangers of digital entities drawing users into deceptive interactions, particularly vulnerable groups such as children. Alison Lee, formerly of Meta's Responsible AI division, urged stricter compliance measures that align industry practice with ethical norms, pointing to a tragic case in which deceptive chatbot interactions preceded a real-world death.

Sources: Reuters, TechSpot