The ethical pros and cons of Meta’s new Llama 3 open-source AI model

April 20, 2024

Experts say that while open source could accelerate innovation, it could also make deepfakes easier.

By Chris Morris

Meta has a brand-new Llama to show off. On Thursday, the social media giant announced Llama 3, the next version of its open-source model for the Meta AI assistant, which it hopes will make its chatbot the leading artificial intelligence technology.

Putting aside the question of whether this latest large language model (LLM) changes Meta’s positioning within the broader AI arms race, there’s a bigger issue at play here: The advances of the open-source Llama 3 raise major questions about the safety of democratizing AI this early in the technology’s development.

Experts say there are both pros and cons: Open source could accelerate innovation, but it could also make deepfakes and more troubling misuses easier to produce. It’s a thorny, nebulous area. Here’s a look at some of the factors to consider with open source.

What are the advantages of an open-source LLM?

An open-source LLM encourages transparency and could increase public trust in the technology, experts tell Fast Company. When AI companies use a closed architecture, questions of sourcing and bias go unanswered. (OpenAI discovered this when it introduced its Sora AI video-creation tool and CTO Mira Murati clumsily dodged questions about how it was trained.)

Open sourcing also lets researchers and the community explore new opportunities. In a best-case scenario, it can increase productivity and yield solutions that make the model’s responses more valuable to users. (Meta’s not the only company open sourcing its model. Google’s Gemma is also part of the open-source ecosystem.)

“With open-source LLM, organizations will have more capabilities to develop and deploy AI-powered solutions,” says Rajiv Garg, a professor at Emory University’s Goizueta Business School. “Llama 3 is a solid model that will reduce the entry barrier significantly for most organizations—especially the ones that want to fine-tune these models with their internal data.”
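For a sense of what that lowered entry barrier looks like in practice, here is a minimal sketch of fine-tuning Llama 3 on internal data. It assumes the Hugging Face transformers, peft, and datasets libraries (one common stack, not necessarily what any given organization uses), and a hypothetical internal.jsonl file of {"text": ...} records; the model weights themselves are gated behind Meta’s license.

```python
# A minimal fine-tuning sketch, assuming the Hugging Face stack.
# "internal.jsonl" is a hypothetical file of {"text": ...} records.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

MODEL = "meta-llama/Meta-Llama-3-8B"  # gated repo; requires license acceptance

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 defines no pad token

model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

# LoRA freezes the base weights and trains small adapter matrices instead,
# which is what makes fine-tuning an 8B-parameter model affordable for
# most organizations.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length",
                    max_length=512)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict next token
    return out

data = load_dataset("json", data_files="internal.jsonl")["train"]
data = data.map(tokenize, batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama3-internal",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
).train()
```

Because only the small adapter weights are trained, a setup along these lines can run on a single GPU, and the resulting model stays in-house with the organization’s internal data.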

What are the potential downsides of an open-source LLM?

The internet is full of bad actors—and they’re eager to put new technology to illegitimate uses and cause mayhem.

“Open-source models are susceptible to malicious attacks or data breaches, potentially compromising user data and privacy,” says Dmitry Shapiro, founder and CEO of MindStudio. “[They] can be exploited for harmful purposes, such as spreading misinformation or propaganda.”

Support and maintenance of open-source models can also be an issue, as can quality control. Unless Meta keeps strict oversight of the LLM, it could give inconsistent answers, which could frustrate users and exacerbate existing societal issues.

The worst-case scenario? It’s about as bad as you would think.

“Unintended consequences could arise where open-source models are utilized for malicious purposes, such as generating deepfakes or propaganda,” Shapiro says. “Additionally, there’s the risk of uncontrolled proliferation, where open-source models are employed without adequate consideration for ethical or social implications.”

Does the use of open-source technology increase the need for AI regulation?

Experts are split on this. Shapiro argues that open-source models facilitate transparency, which makes regulation more straightforward. Garg, however, says that without regulation, the doors are open for any sort of application. Therefore, he says, guidelines on creating responsible AI solutions are necessary.

Either way, AI developers are largely in favor of the government setting rules for AI. OpenAI CEO Sam Altman testified before a Senate subcommittee last year, calling on the government to regulate the AI industry, including his company. The following month, Altman and some of the AI field’s biggest names, including Microsoft’s chief scientific and technology officers, signed a statement warning that the technology could be an existential risk. 

What sort of potential liability would Meta and other companies that release open-source models face?

No one knows the real answer to that, but there are plenty of risks, including data privacy violations, defamation and libel, and reputational damage. If the LLM is misused or malfunctions, Meta or other open-source companies could be held liable, especially if they have failed to comply with regulations.

But that’s all theoretical at the moment. And the many moving parts of an AI system increase the confusion.

“Liability is a difficult topic when it comes to foundational models because there are many interconnected components, from training data, model setup, and applications built atop of it, often all involving different parties,” says Ben James, founder and CEO of Atlas Design. “The liability of a foundational model is tough to nail down if somebody then builds on top of it, and someone else on top of them.”


ABOUT THE AUTHOR

Chris Morris is a veteran journalist with more than 30 years of experience. Learn more at chrismorrisjournalist.com. 

