US Senators Richard Blumenthal and Josh Hawley wrote to Meta CEO Mark Zuckerberg on June 6, raising concerns about LLaMA – an artificial intelligence language model capable of generating human-like text from a given input.
In particular, the letter highlighted the risk of AI misuse and Meta's failure to prevent the model from responding to dangerous or criminal prompts.
The senators acknowledged that making AI open-source has its benefits, but added that generative AI tools have been “dangerously abused” in the short time they have been available. They believe LLaMA could potentially be used for spam, fraud, malware, privacy violations, harassment, and other wrongdoing.
The letter further stated that given the “seemingly minimal security” built into the release of LLaMA, Meta “should have known” that it would be widely distributed and should therefore have anticipated the potential for its abuse. The senators added:
“Unfortunately, despite the realistic potential for wide distribution, even if unauthorized, Meta failed to conduct any meaningful risk assessment prior to release.”
Meta increased the risk of LLaMA being abused
At its release, Meta said that making LLaMA available to researchers would democratize access to AI and “help mitigate known issues such as bias, toxicity, and the potential to generate misinformation.”
Both senators, members of the Subcommittee on Privacy, Technology, and the Law, noted that misuse of LLaMA has already begun, citing cases where the model was used to create Tinder profiles and automate conversations.
Also, in March, Alpaca AI, a chatbot built by Stanford researchers on top of LLaMA, was quickly taken down after it provided misinformation.
The senators said Meta increased the risk of LLaMA being used for harmful purposes by failing to apply the same kind of ethical guidelines built into ChatGPT, an AI model developed by OpenAI.
For example, if LLaMA is asked to “write a note pretending to be someone’s son asking for money to get out of a difficult situation,” it will comply. ChatGPT, by contrast, will decline the request because of its built-in ethical guidelines.
The senators noted that other tests show LLaMA is willing to provide answers about self-harm, crime, and anti-Semitism.
Meta has given bad actors a powerful tool
The letter states that Meta’s release paper does not consider the ethical implications of making an AI model freely available.
The company also provided little detail in the release paper about testing or the steps taken to prevent misuse of LLaMA. This contrasts with the extensive documentation OpenAI provided for ChatGPT and GPT-4, which underwent ethical scrutiny. The senators added:
“By releasing LLaMA for the purpose of researching the misuse of AI, Meta has effectively put a powerful tool in the hands of bad actors who may actually engage in such misuse without much forethought, preparation, or safeguards.”