The argument that AI models should not be regulated, only their applications, rests on a flawed premise: it would effectively exempt AI models from liability, an exemption that is unprecedented for a product of this significance.
For instance, Andrew Ng's stance that AI, as a foundational technology, should not be regulated overlooks the fact that foundational technologies often face heavy regulation. Electricity is a foundational technology, and in many jurisdictions it is regulated as a monopoly under a wide array of rules. Suggesting that only the applications that use electricity should be regulated would be nonsensical.
Open Source advocates often argue that they deserve special treatment but fall short of addressing safety concerns. Regulating the applications of Open Source AI models might seem logical, but identifying the creators of malicious applications built on these models can be nearly impossible, rendering such regulation ineffective.
It's concerning to see "experts" overlook these fundamental issues and not focus more on ensuring AI safety. A comprehensive approach that includes both models and applications is essential for making AI a force for good in the world.
Two other issues that open source advocates consistently refuse to address are:
- How do they ensure bad actors can't use AI at scale for harmful purposes?
- Why give frontier AI to geopolitical rivals for free, enabling them to surpass the US technologically at almost no cost?