Published: 29 June 2025, 08:25 CEST | Updated: 29 June 2025, 08:25 CEST
By Stuart Kerr, Data Journalist, Live AI Wire (liveaiwire@gmail.com, @liveaiwire)
Meta’s artificial intelligence (AI) division, Meta AI, has emerged as a pivotal player in the global race to advance AI technologies. Once known as Facebook AI Research (FAIR), the division has undergone significant restructuring to pursue ambitious goals, including the development of “superintelligence.” With substantial investments, high-profile hires, and a focus on both foundational research and practical applications, Meta AI is reshaping its role in the industry. This article examines the objectives, research paths, and industry reactions surrounding Meta’s AI labs, drawing on expert insights and credible sources to provide a balanced perspective.
Objectives of Meta’s AI Labs
Meta AI, rebranded from FAIR in 2021, aims to advance AI technologies for both academic and commercial purposes. Founded in 2013 under the leadership of Yann LeCun, a Turing Award-winning deep learning pioneer from New York University, the lab initially focused on understanding intelligence and enhancing machine learning capabilities. Today, its objectives are twofold: advancing foundational AI research and integrating AI across Meta’s ecosystem, including Facebook, Instagram, WhatsApp, and Reality Labs’ augmented reality (AR) and virtual reality (VR) products.
A key goal is the pursuit of artificial general intelligence (AGI) and potentially “superintelligence,” a term describing AI that surpasses human cognitive abilities. In June 2025, Meta announced a new Superintelligence Lab, led by Alexandr Wang, former CEO of Scale AI, to accelerate this vision. According to a Meta spokesperson, the lab aims to “develop frontier AI reasoning models” to power applications across Meta’s platforms. This aligns with Mark Zuckerberg’s stated ambition to position Meta as an AI powerhouse, underscored by the company’s planned investment of up to $65 billion in AI infrastructure in 2025.
Meta AI also focuses on practical applications, such as improving user experiences through personalised recommendations, content moderation, and AI assistants like the one integrated into Ray-Ban Meta smart glasses. A 2024 report by the OECD highlighted Meta’s use of AI for real-time content filtering, enhancing platform safety. However, critics like Dr. Subbarao Kambhampati, a professor at Arizona State University, argue that “superintelligence” remains a loosely defined marketing term, with no clear technical path to achievement. He cautions that such ambitious rhetoric may oversimplify the complexity of human intelligence.
Research Paths and Innovations
Meta AI’s research spans several domains, including natural language processing (NLP), computer vision, and embodied AI. The lab has made significant contributions to open-source AI, notably through PyTorch, a machine learning framework released in 2017, widely adopted by companies like Tesla and Uber. Another milestone was the 2023 release of LLaMA, a family of efficient language models for research, followed by Llama 4 in April 2025, with models named Scout and Maverick. The largest model, Behemoth, is still in training, reflecting Meta’s focus on scaling AI capabilities.
In NLP, Meta AI explores self-supervised learning and generative adversarial networks to improve language understanding and generation. A 2025 project, “Seamless Interaction,” aims to enhance virtual avatar interactions with natural body movements and expressions, as noted in a post on X by @IbraHasan_. In computer vision, Meta AI advances photorealistic capture and 3D modelling for AR/VR applications, supporting products like the Meta Quest 3. Additionally, a new division within Reality Labs is developing AI-powered humanoid robots, leveraging Llama models to navigate physical environments, according to a February 2025 Reuters report.
Despite these advancements, Meta’s research has faced setbacks. Galactica, an LLM for scientific text released in 2022, was withdrawn shortly after launch due to inaccuracies and offensive outputs, highlighting the challenge of ensuring model reliability. Dr. Michael Wooldridge, a professor at the University of Oxford, notes that such failures underscore the need for robust testing and transparency, a concern echoed in our prior article, “AI Guardrails: Mitigating Bias in Content Generation”.
Industry Reaction and Challenges
Meta’s aggressive AI push has elicited mixed reactions. The $14.3 billion investment in Scale AI, securing a 49% stake, and the recruitment of Wang and former OpenAI researchers such as Trapit Bansal have been hailed as bold moves. Benj Edwards, a technology journalist at Ars Technica, views the Scale AI deal as a strategic play to secure high-quality training data, critical for advancing LLMs. Posts on X, such as one by @MktgAi, reflect excitement about Meta’s Superintelligence Lab, positioning it as a competitor to OpenAI and Google DeepMind.
However, scepticism persists. The departure of key researchers, including Joelle Pineau in April 2025, and the reported “slow death” of FAIR, as described by a former Meta employee to Fortune, suggest internal challenges. Critics argue that Meta’s shift toward product-driven AI, led by its GenAI team, has sidelined foundational research. Erik Meijer, a former Meta engineer, told Fortune that industrial labs should focus on product integration rather than pure research, advocating for university partnerships instead. This tension reflects a broader industry trend: Dr. James White of CalypsoAI warns that prioritising rapid product releases over safety testing could increase risks, such as models responding to malicious prompts.
Regulatory concerns also loom. The Federal Trade Commission’s ongoing antitrust trial against Meta has influenced the company’s strategy of avoiding outright acquisitions in favour of minority investments such as its stake in Scale AI. Dr. Wooldridge calls for a “CERN for AI,” advocating transparent, collaborative research to maintain public trust, a sentiment shared by European policymakers.
The Path Forward
Meta AI’s trajectory hinges on balancing innovation with accountability. Its Superintelligence Lab, bolstered by high-profile hires and significant investments, positions Meta to compete with industry leaders. However, challenges like talent retention, model reliability, and regulatory scrutiny require careful navigation. Dr. Yann LeCun, Meta’s chief AI scientist, emphasises the need for “embodied AI” to bridge the gap between language models and physical understanding, a focus that could define Meta’s future contributions.
As Meta integrates AI across its platforms and explores new frontiers like humanoid robotics, its labs remain a focal point of innovation and debate. The industry is watching closely to see whether Meta can deliver on its ambitious vision while addressing ethical and technical challenges.
Sources:
- OECD (2024): AI in Social Media Platforms. https://oecd.ai/en/wonk/documents/social-media-governance-project-2
- Reuters (2025): Meta’s Humanoid Robotics Division.
- Fortune (2025): Meta’s AI Research Challenges.
- Interviews with Dr. Subbarao Kambhampati (Arizona State University) and Dr. Michael Wooldridge (University of Oxford).
- Post on X by @IbraHasan_, 27 June 2025.