The tech industry’s biggest companies have spent this year warning that the development of AI technology is beyond their wildest expectations and that they need to limit who has access to it.
Mark Zuckerberg is doubling down on a different tack: He’s giving it away.
Mr. Zuckerberg, Meta’s chief executive, said on Tuesday that he intends to provide the code behind the company’s latest and most advanced artificial intelligence technology to developers and software enthusiasts around the world free of charge.
The decision, similar to one Meta made in February, could help the company keep pace with rivals like Google and Microsoft, which have moved more quickly to integrate generative AI, the technology behind OpenAI’s popular ChatGPT chatbot, into their products.
“When software is open, more people can scan it to identify potential problems and fix them,” Mr. Zuckerberg said in a post on his personal Facebook page.
The latest version of Meta’s AI was built with 40 percent more data than the version the company released just a few months ago and is believed to be considerably more powerful. Meta is also providing a detailed road map that shows how developers can work with the vast amount of data it has collected.
Researchers worry that generative AI could increase the volume of disinformation and spam on the internet, posing dangers that even some of its creators do not fully understand.
Meta holds firm to the belief that allowing all kinds of programmers to tinker with technology is the best way to improve it. Until recently, most AI researchers agreed. But in the past year, companies like Google, Microsoft, and OpenAI, a San Francisco startup, have placed limits on who can access their latest technology and placed controls on what can be done with it.
The companies say they are limiting access because of safety concerns, but critics say they are also trying to stifle competition. Meta argues that it is in everyone’s interest to share what it is working on.
“Meta has historically been a huge proponent of open platforms, and it has really worked for us as a company,” Ahmad Al-Dahle, vice president of generative AI at Meta, said in an interview.
The move makes the software “open source,” meaning the underlying computer code can be freely copied, modified, and reused. The technology, called LLaMA 2, provides everything anyone needs to build online chatbots like ChatGPT. LLaMA 2 will be released under a commercial license, which means developers can build their own businesses using Meta’s underlying AI to power them, all free of charge.
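In practice, working with the released model could look something like the sketch below. It assumes the weights are obtained through the Hugging Face Hub after accepting Meta’s license; the specific model name and the use of the transformers library are illustrative choices, not part of Meta’s announcement.

```python
# A minimal sketch: generating text with a released LLaMA 2 model via
# the Hugging Face "transformers" library. Assumes Meta's license has
# been accepted and access to the weights granted on the Hub; the
# model name below (the 7B chat variant) is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Why might open source software be easier to audit?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```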
With LLaMA 2 open source, Meta can take advantage of improvements made by programmers outside the company while, its executives hope, spurring AI experimentation.
Meta’s open source approach is nothing new. Companies often open source technologies in an effort to catch up with rivals. Fifteen years ago, Google open sourced its Android mobile operating system to better compete with Apple’s iPhone. While the iPhone had an early lead, Android eventually became the dominant software used in smartphones.
But researchers argue that someone could deploy Meta’s AI without the safeguards that tech giants like Google and Microsoft use to suppress toxic content. Newly released open source models could, for example, be used to flood the internet with even more spam, financial fraud, and disinformation.
LLaMA 2, short for Large Language Model Meta AI, is what scientists call a large language model, or LLM. Chatbots like ChatGPT and Google Bard are built with large language models.
These models are systems that learn skills by analyzing vast amounts of digital text, including Wikipedia articles, books, online forum conversations, and chat logs. By identifying patterns in that text, the systems learn to generate text of their own, including term papers, poetry, and computer code. They can even hold a conversation.
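The core idea, predicting what comes next from patterns in training text, can be shown with a toy example. The sketch below learns word-to-word patterns from a single sentence and generates new text from them; real large language models use neural networks trained on billions of documents, but the next-word principle is the same.

```python
# Toy illustration of pattern-based text generation: record which word
# follows which in the training text, then sample from those patterns.
import random
from collections import defaultdict

corpus = "the model reads text and the model learns patterns in the text".split()

# Learn: count which words follow each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate: repeatedly sample a plausible next word.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))
```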
Meta executives argue that their strategy is not as risky as many believe. They say that people can already produce large amounts of disinformation and hate speech without artificial intelligence, and that such toxic material can be tightly restricted on Meta’s own social networks, such as Facebook. And they contend that releasing the technology will ultimately strengthen the ability of Meta and other companies to fight back against abuses of the software.
Mr. Al-Dahle said Meta put LLaMA 2 through additional “red team” testing before its release, a term for probing software for potential misuse and finding ways to protect against it. The company will also release a responsible-use guide containing best practices and guidelines for developers who want to build programs using the code.
But these tests and guidelines apply to only one of the models Meta is releasing, which will be trained and fine-tuned in a way that contains guardrails and inhibits misuse. Developers will also be able to use the code to create chatbots and programs without those guardrails, a move that skeptics see as a risk.
In February, Meta released the first version of LLaMA to academics, government researchers, and others. The company also allowed academics to download LLaMA models after they had been trained on massive amounts of digital text, a process scientists call releasing the “weights.”
It was a notable move because analyzing all that digital data requires enormous computing and financial resources. With the weights, the numerical values a model learns during training, anyone can build a chatbot far more cheaply and easily than starting from scratch.
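Those weights are, in the end, just very large arrays of learned numbers. A brief sketch of inspecting a downloaded checkpoint with PyTorch makes the idea concrete; the file name here is hypothetical.

```python
# Sketch: a released checkpoint is a dictionary mapping layer names to
# tensors of learned numbers. The file name is a placeholder.
import torch

state = torch.load("llama_weights.pth", map_location="cpu")
for name, tensor in list(state.items())[:3]:
    print(name, tuple(tensor.shape))  # e.g. layer name and its dimensions
```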
Many in the tech industry believe Meta set a dangerous precedent: after the company shared its AI technology with a small group of academics in February, one of the researchers leaked the technology onto the public internet.
In a recent opinion article in The Financial Times, Nick Clegg, Meta’s president of global affairs, argued that “it is not sustainable to keep foundational technology in the hands of a few large companies,” and that companies that have released open source software have historically benefited strategically as well.
“I look forward to seeing what you all build!” Mr. Zuckerberg said in his post.