What does ‘open source AI’ mean, anyway?

The debate over ‘open source AI’ centers on whether models like Meta’s Llama can truly be considered open source, given their unusual licensing terms and the role of training data.

The definition of ‘open source AI’ is contentious, in large part because AI models do not fit traditional open-source criteria. Meta’s AI models illustrate the problem: the company calls them open source, yet their licenses carry significant restrictions. The Open Source Initiative is working to define ‘open source AI’ through a deliberate, consultative process that acknowledges fundamental differences between software and AI.

The disagreement is rooted in the fact that AI models do not map neatly onto existing open-source definitions, which were written for software source code rather than models and training data. The Open Source Initiative (OSI), led by executive director Stefano Maffulli, is working to produce a clear definition, gathering input through conferences, workshops, and reports to refine its approach.

Meta’s Llama models are a focal point of the debate. Although Meta describes them as open source, their license imposes restrictions, such as requiring large-scale commercial users to obtain a special license from Meta. This has drawn criticism from industry experts and prompted the OSI to reexamine what ‘open source’ should mean in the context of AI.

The OSI’s current draft of an Open Source AI Definition aims to set clear criteria that an AI system must meet, while recognizing the unique challenges around data and model reproducibility. It emphasizes transparency, accessibility, and the ability to study and replicate an AI system as key requirements. The final version, expected to be confirmed at an upcoming conference, seeks to establish a standard for what counts as open source in AI, with the understanding that the definition may evolve as the technology advances.