New York passes a bill to prevent AI-fueled disasters

NY lawmakers pass the RAISE Act to regulate frontier AI, requiring transparency and safety reporting to prevent disasters that injure or kill more than 100 people or cause over $1B in damages.

New York's legislature has passed the RAISE Act, aiming to mitigate risks posed by advanced AI models from companies like OpenAI and Google. The bill, backed by notable figures including Geoffrey Hinton and Yoshua Bengio, mandates transparency standards for AI labs to avert scenarios that cause mass harm. Unlike California's vetoed SB 1047, the RAISE Act is designed to avoid stifling innovation, focusing on large corporations that make their AI models available to New Yorkers. The bill, pending Governor Hochul's decision, could impose civil penalties of up to $30 million on non-compliant tech companies.

The New York state legislature recently approved the RAISE Act, a pioneering initiative to regulate the development and deployment of advanced AI models by major technology firms such as OpenAI, Google, and Anthropic. The act's primary objective is to establish a framework for preventing catastrophic outcomes from AI technologies, specifically targeting incidents that could injure or kill more than 100 people or cause economic damages exceeding $1 billion. The legislation drew praise from the AI safety community, including endorsements from Nobel laureate Geoffrey Hinton and AI research leader Yoshua Bengio, both longtime advocates for AI regulation.

If enacted, the RAISE Act would be the first law in the United States to legally obligate AI laboratories to meet strict transparency requirements. Companies developing frontier AI models would be required to produce comprehensive safety and security reports, and any safety incidents, such as unexpected AI model behavior or theft of an AI model, would need to be reported promptly. Failure to comply could expose technology companies to civil penalties of up to $30 million, a deterrent intended to ensure adherence to the new rules.

The lawmakers behind the bill, including New York State Senator Andrew Gounardes, have emphasized that the RAISE Act is intentionally designed to foster innovation rather than hinder it. Unlike its ill-fated Californian counterpart, SB 1047, the bill is not meant to burden startups or academic researchers; it instead targets global tech giants. The act applies to companies whose frontier models were trained using more than $100 million worth of computing resources and are made available to New York residents.

Despite this, the initiative has drawn detractors, particularly in Silicon Valley. High-profile opponents such as Andreessen Horowitz's Anjney Midha have criticized the legislation as counterproductive, arguing it could erode the US's competitive edge in the global AI race. Anthropic, a lab that has built its reputation around AI safety, has likewise raised concerns about the act's broad scope; co-founder Jack Clark highlighted potential adverse effects on smaller technology companies despite the bill's explicit exemption for them.

Proponents of the RAISE Act argue that the models it covers are developed almost exclusively by the largest and best-funded AI companies, such as Google's DeepMind and China's Alibaba, which makes the economic burden of compliance negligible. Backers like New York State Assemblymember Alex Bores have also reassured stakeholders that the regulatory demands are relatively light compared with the act's intended safety benefits, and that given New York's standing as one of the largest state economies in the US, tech companies cannot afford to simply withdraw their products from the state.

Sources: TechCrunch, X.com, New York Senate