Meta has come under renewed pressure over privacy after reports that contractors in Kenya reviewed highly sensitive footage captured by users of its Ray-Ban AI smart glasses.

The issue emerged from an investigation by Swedish newspapers, which said contractors tasked with improving Meta’s AI systems had access to deeply personal recordings. According to the reports, the material included people undressing, using the bathroom, watching pornography and filming sexual activity.
Some of the footage was said to have been captured unintentionally. One contractor claimed that users often did not realise the glasses were still recording after setting them down, leading to private moments being stored and later reviewed by human annotators.
The revelations have raised wider concerns about how much users understand when they activate the glasses' AI features. According to the reports, customers cannot fully use the AI functions without agreeing to send data to Meta's servers. Critics argue that while Meta's terms mention that content may be reviewed by humans, the disclosure is not made clear enough for the average user.
One annotator quoted in the reports said many people would likely avoid using the glasses if they fully understood the extent of the data collection and review process.
The controversy comes as Meta's smart glasses business records rapid growth. The company sold about 7 million pairs of Ray-Ban smart glasses in 2025 alone, a sharp increase from the roughly 2 million units sold across 2023 and 2024 combined. The figures show that Meta's wearable AI products are moving into the mainstream, even as scrutiny over privacy intensifies.
The reports have added to a growing debate over the trade-off between convenience and surveillance in consumer AI devices. Smart glasses are promoted as a hands-free way to capture moments, ask questions and interact with AI assistants in real time. But privacy campaigners say the latest allegations expose the risks of always-on devices that can record far more than users, or the people around them, may realise.
Meta has maintained that content shared with its AI systems may be used to improve its products. However, the latest claims are likely to intensify calls for stronger safeguards, clearer consent rules and tighter oversight of how personal footage is handled.
For Meta, the challenge now goes beyond technology. As sales climb and AI wearables gain ground, the company may have to convince users that its products can be trusted inside the most private spaces of everyday life.
