As the AI race heats up, Mark Zuckerberg talks about Meta’s new large language model

Mark Zuckerberg, Meta’s CEO, said on Friday that a new large language model has been trained and will be given to researchers.

The LLaMA model is meant to help scientists and engineers find uses for AI, like answering questions and summarizing documents.

Meta’s new model, built by its Fundamental AI Research (FAIR) team, arrives at a time when both big tech companies and well-funded startups are racing to show off advances in artificial intelligence and build them into commercial products.

Applications like OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard, which hasn’t been released yet, are based on large language models.

Zuckerberg wrote in his post that LLM technology could one day be used to solve math problems or do scientific research.

“LLMs have shown a lot of promise in generating text, having conversations, summarizing written material, and doing more complicated tasks like solving math theorems or predicting protein structures,” Zuckerberg wrote on Friday.

Meta’s paper includes examples of what the system can do.

Meta says that its LLM is different from other models in a number of ways.

First, Meta says the model will be available in several sizes, ranging from 7 billion to 65 billion parameters. In recent years, researchers have improved capabilities by building ever-larger models, but larger models are also more expensive to run, a stage known as “inference.”

One example is OpenAI’s GPT-3, which has 175 billion parameters.
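To see why parameter count matters for running costs, here is a back-of-the-envelope sketch of the memory needed just to hold each model’s weights. It assumes 16-bit (2-byte) weights, a common choice for inference; the figures are illustrative, not numbers from Meta’s paper.

```python
# Rough memory footprint of model weights alone, assuming
# 2 bytes per parameter (16-bit precision). Real deployments
# also need memory for activations and caches, so actual
# requirements are higher.
def weight_memory_gb(num_params: int, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

for name, params in [("LLaMA-7B", 7_000_000_000),
                     ("LLaMA-65B", 65_000_000_000),
                     ("GPT-3", 175_000_000_000)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB")
# LLaMA-7B: ~14 GB, LLaMA-65B: ~130 GB, GPT-3: ~350 GB
```

At this scale, the smallest model fits on a single consumer GPU while the largest requires a multi-GPU server, which is why offering a range of sizes matters to researchers.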

Meta also said it will make its models available to researchers, who can apply for access. By contrast, the models underlying Google’s LaMDA and OpenAI’s ChatGPT are not available to the public.

“Meta is committed to this open model of research, and we’ll give the AI research community access to our new model,” Zuckerberg wrote.
