The Open Source Initiative has published (news article here) its definition of “open source AI,” and it’s terrible. It allows for secret training data and mechanisms. It allows for development to be done in secret. Since for a neural network, the training data is the source code—it’s how the model gets programmed—the definition makes no sense.

And it’s confusing; most “open source” AI models—like Llama—are open source in name only. But the OSI seems to have been co-opted by industry players that want both corporate secrecy and the “open source” label. (Here’s one rebuttal to the definition.)

This is worth fighting for. We need a public AI option, and open source—real open source—is a necessary component of that.

But while open source should mean open source, there are some partially open models that need some sort of definition. There is a big research field of privacy-preserving, federated methods of ML model training and I think that is a good thing. And OSI has a point here:

Why do you allow the exclusion of some training data?

Because we want Open Source AI to exist also in fields where data cannot be legally shared, for example medical AI. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information, like decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.

How about we call this “open weights” and not open source?
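For readers unfamiliar with the federated training mentioned above, here is a minimal sketch of one round of federated averaging (FedAvg), the core idea behind many privacy-preserving training schemes: each participant updates the model locally on data that never leaves its site, and only model weights are shared with a coordinating server. The function names and the three-client example are illustrative, not any particular library's API.

```python
# Sketch of one round of federated averaging (FedAvg).
# Assumption: weights and gradients are plain lists of floats; a real
# system would use tensors and many local steps per round.

def local_update(weights, gradient, lr=0.1):
    """One local gradient step; the client's raw data never leaves its site."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights):
    """Server averages the clients' weights; it never sees training data."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Hypothetical example: three hospitals each refine a shared model locally,
# then the server aggregates their updated weights.
global_weights = [0.0, 0.0]
client_gradients = [[1.0, 2.0], [3.0, 0.0], [2.0, 1.0]]

updated = [local_update(global_weights, g) for g in client_gradients]
new_global = federated_average(updated)
print(new_global)  # averaged weights, approximately [-0.2, -0.1]
```

The point for the open-source debate: in this setup the weights can be published while the training data legally cannot, which is exactly the "open weights" situation, not open source.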
