
Meta expects recommendations “orders of magnitude” larger than GPT-4. Why?

Meta made a remarkable claim today in an announcement intended to clarify its content recommendation algorithms: it is preparing for behavior analysis systems “orders of magnitude” larger than ChatGPT and GPT-4. Is that really necessary?

Meta occasionally explains some of its algorithms as a show of transparency. Sometimes this is enlightening, and sometimes it just raises more questions. This one is a bit of both.

The social and advertising network posted an overview of its AI models along with “system cards” explaining how AI is used in a given context or app. For example, even though the two overlap visually, it’s important to know whether a video is about roller hockey or roller derby before recommending it.

Meta has been a leader in multimodal AI research, which uses data from multiple modalities to better understand content.

Though we often hear about how these models are used internally to improve “relevance” (targeting), few are released publicly. (Some researchers have access.)

As the company describes how it is building out its computation resources, it mentions this intriguing fact:

Our recommendation models can have tens of trillions of parameters—orders of magnitude more than even the largest language models—to deeply understand and model people’s preferences.

Meta told me that these tens-of-trillions-parameter models are, for now, theoretical. “We believe our recommendation models have the potential to reach tens of trillions of parameters,” the company clarified. It’s like saying your burgers “can” have 16-ounce patties while admitting they’re still quarter-pounders. But Meta also says it wants to “ensure that these very large models can be trained and deployed efficiently at scale.”
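For a rough sense of what “tens of trillions of parameters” means in practice, here is my own back-of-the-envelope arithmetic (not Meta’s numbers): just storing the weights of such a model is a multi-terabyte proposition.

```python
# Back-of-the-envelope memory footprint for a 10-trillion-parameter model.
# Assumptions (mine, not Meta's): dense weights stored as 16-bit floats.
params = 10 * 10**12          # 10 trillion parameters (low end of the claim)
bytes_per_param = 2           # fp16 / bf16: 2 bytes per parameter
total_bytes = params * bytes_per_param
print(total_bytes / 10**12)   # terabytes needed just to hold the weights
```

That works out to roughly 20 TB for the weights alone. At 80 GB of memory per accelerator, that means sharding across at least 250 devices before storing a single gradient or optimizer state, which is exactly why Meta frames this as an infrastructure problem.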

But would a company build out expensive infrastructure for software it doesn’t intend to create or use? That seems unlikely. Meta would neither confirm nor deny that it is actively pursuing models of this size, but a tens-of-trillions-parameter model sounds less like an aspiration and more like something already in the works.

“Understand and model people’s preferences” is, of course, a polite way of saying user behavior analysis. Your actual preferences could probably be captured in a 100-word plaintext list, so it’s hard to fathom why a model needs to be this large and complex; then again, it has to handle recommendations for a couple billion users at once.

The problem space is genuinely huge: billions of pieces of content, each with its own metadata, and no doubt complex vectors establishing that Patagonia fans also tend to donate to the World Wildlife Fund, buy increasingly expensive bird feeders, and so on. So a model trained on all this data may well be large. But “orders of magnitude larger” than the biggest language models, which are trained on nearly every written work available?
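Mechanically, those “complex vectors” are embeddings: users and items mapped into a shared space where a dot product measures affinity. Here is a toy sketch with invented three-dimensional vectors (real systems learn hundreds of dimensions from behavior data; every value below is made up for illustration):

```python
# Toy embedding-affinity sketch. The axes and values are invented purely
# for illustration; real models learn them from user behavior data.
def affinity(user, item):
    """Dot product: higher means the model predicts stronger interest."""
    return sum(u * i for u, i in zip(user, item))

# Imaginary axes: (outdoors, conservation, luxury)
patagonia_fan   = [0.9, 0.8, 0.2]
wwf_donation_ad = [0.3, 0.9, 0.1]
sports_car_ad   = [0.1, 0.0, 0.9]

print(affinity(patagonia_fan, wwf_donation_ad))  # high score: good match
print(affinity(patagonia_fan, sports_car_ad))    # low score: poor match
```

The model never stores a rule like “Patagonia fans donate to the WWF”; that correlation simply falls out of nearby vectors, which is part of why these systems are so hard to audit.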

There is no reliable parameter count for GPT-4, and AI leaders have argued that parameters are a reductive measure of performance anyway. But GPT-3, the model family behind ChatGPT, sits around 175 billion parameters, and GPT-4 is believed to be larger than that while falling well short of the wild 100-trillion claims that have circulated. Even if Meta is exaggerating a little, this is scary big.
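It’s worth checking the phrase “orders of magnitude” against those figures. Taking ~175 billion as the baseline and the low end of “tens of trillions” as the claim:

```python
import math

baseline = 175 * 10**9   # ~GPT-3-scale parameter count
claimed = 10 * 10**12    # low end of "tens of trillions"
ratio = claimed / baseline
print(round(ratio))                 # roughly 57x larger
print(round(math.log10(ratio), 2))  # about 1.76 orders of magnitude
```

So even granting the claim, the low end is about 57 times a GPT-3-class model: nearly two orders of magnitude, and “orders,” plural, only barely.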

Imagine a massive AI model. Every action you take on Meta’s platforms goes in, and what comes out is a prediction of what you’ll do or like next. It’s eerie.

Of course, others do this too. TikTok pioneered this kind of algorithmic tracking and recommendation, building its social media empire on an addictive feed of “relevant” content designed to keep you scrolling until your eyes hurt. Its competitors are openly envious.

With its stated goal of creating the biggest model on the block and passages like this, Meta is clearly trying to blind advertisers with science:

These systems understand people’s behavior preferences utilizing very large-scale attention models, graph neural networks, few-shot learning, and other techniques. Recent key innovations include a novel hierarchical deep neural retrieval architecture, which allowed us to significantly outperform various state-of-the-art baselines without regressing inference latency; and a new ensemble architecture that leverages heterogeneous interaction modules to better model factors relevant to people’s interests.

That paragraph isn’t written for researchers, and users certainly don’t care about it. But imagine an advertiser who is starting to question whether Instagram ads are worth the money. The technical jargon is meant to impress them: to convince them that Meta is a leader in AI research and that AI truly “understands” people’s interests and preferences.
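For what it’s worth, the retrieval part of that word salad does describe something concrete: score a user against a catalog of candidate embeddings and keep the top handful. A minimal toy sketch (my own illustration, not Meta’s architecture, with invented two-dimensional embeddings):

```python
import heapq

def top_k(user_vec, catalog, k=2):
    """Return the k item ids whose embeddings score highest for this user."""
    def score(item_vec):
        # Dot product as a stand-in for a learned relevance score.
        return sum(u * v for u, v in zip(user_vec, item_vec))
    return heapq.nlargest(k, catalog, key=lambda item: score(catalog[item]))

# Invented 2-D embeddings, illustration only.
catalog = {
    "roller_hockey_clip": [0.9, 0.1],
    "roller_derby_clip":  [0.2, 0.9],
    "cooking_clip":       [0.1, 0.2],
}
user = [0.8, 0.3]  # this user leans toward hockey-ish content
print(top_k(user, catalog))  # → ['roller_hockey_clip', 'roller_derby_clip']
```

Real systems run this over billions of items using approximate nearest-neighbor indexes; the “hierarchical retrieval architecture” Meta mentions is presumably a way of making that search tractable without blowing up inference latency.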

“More than 20% of content in a person’s Facebook and Instagram feeds is now recommended by AI from people, groups, or accounts they don’t follow,” Meta boasts. Just what everyone asked for! So there. Great AI.

This also points to the dilemma facing Meta, Google, and the other companies whose core business is selling ads through increasingly precise targeting: the value of that targeting must be constantly reaffirmed, even as users revolt and advertising multiplies and insinuates itself into every corner.

Meta has never once asked me which brands or hobbies I like. It would rather watch over my shoulder as I browse the web for a new raincoat, then pass it off as advanced artificial intelligence when it serves me raincoat ads the next day. The latter approach may work better than simply asking, but how much better? The entire web has been built on the premise of precision ad targeting, and now the latest technology is being rolled out to shore up a newer, more skeptical wave of marketing spend.

Of course you need a ten-trillion-parameter model to predict what people like. How else could you justify spending a billion dollars training it?
