
LLaMA: An Artificial Intelligence Model by Meta

Learn how Tecyfi can grow your business here.

Meta has shown off LLaMA, an AI model that competes with ChatGPT, though the company does not promise it is free of hallucinations. Meta says it will share its LLaMA language model with AI researchers, a departure from the closed approach taken by Google and OpenAI. The Fundamental AI Research (FAIR) team at Meta, the company that owns Facebook, has created a new "state-of-the-art" AI language model called Large Language Model Meta AI (LLaMA). CEO Mark Zuckerberg said on Friday that researchers will be able to use the model, which is expected to help scientists and engineers find new ways to use AI.

AI Is the Main Priority of Big Tech Companies Such as Meta, Google, and Microsoft

AI development is now a priority for big tech companies and startups alike. Large language models power applications such as Microsoft's Bing AI, OpenAI's ChatGPT, and Google's as-yet-unreleased Bard.

The social media giant says that Meta's LLM differs in several ways, most notably in its size and in how researchers can access it.

LLaMA Will Use 7 to 65 Billion Parameters

Meta says that LLaMA will be released in sizes ranging from 7 billion to 65 billion parameters.

Bigger models have pushed the limits of what the technology can do, but they cost more to run; serving a trained model to users is known as "inference." OpenAI's GPT-3, for example, has 175 billion parameters.
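To get a feel for why inference cost grows with model size, here is a back-of-the-envelope sketch of the memory needed just to hold each model's weights. It assumes 16-bit (2-byte) weights; the helper name and the bytes-per-parameter figure are illustrative assumptions, and real deployments also need memory for activations and caches.

```python
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Rough GB of memory to hold a model's weights, assuming
    2 bytes per parameter (16-bit precision)."""
    return num_params * bytes_per_param / 1e9

# Parameter counts from the article: LLaMA spans 7B-65B, GPT-3 is 175B.
for name, params in [("LLaMA 7B", 7e9), ("LLaMA 65B", 65e9), ("GPT-3", 175e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
```

Under these assumptions, the smallest LLaMA fits on a single high-end GPU, while a GPT-3-scale model needs several; that gap is a big part of why Meta emphasizes smaller models.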

“Smaller models trained on more tokens, which are pieces of words, are easier to retrain and fine-tune for specific possible product use cases,” Meta AI wrote in a blog post on Friday.

Meta blog

“We used 1.4 trillion tokens to train LLaMA 65B and LLaMA 33B. Our smallest model, LLaMA 7B, has been trained on one trillion tokens.”

Meta blog

Meta has also said that its LLM will be shared with the AI research community. This is different from Google's LaMDA and OpenAI's ChatGPT, whose underlying models are kept secret.

“Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available datasets,” Guillaume Lample, a research scientist at Facebook AI, tweeted on Friday. “This makes our work compatible with open-sourcing and reproducible, while most existing models use data that is either not publicly available or not documented.”

No Guarantees Against AI Hallucinations

Meta says that it trained the model on text from the 20 most widely spoken languages, focusing on those that use the Latin and Cyrillic alphabets.

But Meta has not claimed that its language model is free of the hallucinations seen in other models.

“More research needs to be done to figure out how to deal with the risks of bias, harmful comments, and hallucinations in large language models. LLaMA has the same problems as the other models,” Meta said in the blog.


Learn more about ChatGPT here

Other blogs:

React JS Vs React Native

AI tools for 100x Productivity
