Does Zuckerberg’s AI Strategy Put Humanity at Risk?
Facebook’s former motto, ‘Move fast and break things,’ raises a crucial question: is Mark Zuckerberg now applying the same approach to AI systems that could affect all of humanity?
Meta’s open-source AI model Llama became publicly available through a leak on March 3, 2023.
That release ignited a race among AI researchers to build and release ever more capable AI models that anyone can modify.
Who cares?
We all should.
AI developers train large models like ChatGPT and Llama on much of the world’s recorded history, good and evil alike.
Closed-model developers like OpenAI, Anthropic, and Google DeepMind release AI models with guardrails that block harmful uses. The guardrails are imperfect, but because these developers control their models, they can patch vulnerabilities as they are found.
Open-source developers like Zuckerberg’s Meta and France’s Mistral also release models with guardrails, but they hand users the keys (the “model weights”), allowing anyone to modify the models and use them for almost anything. Once the models are released, they can’t be patched or regulated.
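To make “handing over the keys” concrete, here is a minimal sketch, assuming the Hugging Face transformers library and Mistral’s publicly posted 7B weights, of what open-weight access means in practice: anyone can download the full model and start retraining it, no approval required.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library
# and Mistral's openly published Mistral-7B weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # openly published model weights

# Downloading the weights requires no approval, license key, or API access.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# From here the user fully controls the parameters: they can fine-tune
# the model on any dataset, including one chosen to strip out guardrails,
# and the original developer has no way to patch copies already in the wild.
```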
Recently, the most powerful open-source model available, built by Mistral, was cracked by a single user and posted on Hugging Face, a major model-hosting site. The user removed guardrails that would keep the…