Meta has confirmed that any images captured through its Ray-Ban Meta glasses can also be used to train its AI models. The company initially dodged the question and then responded to TechCrunch, saying that while photos and videos taken on the Ray-Ban Meta glasses can’t be used for training unless they’re shared with AI, once Meta AI is asked to analyze them, those images fall under different policies and can be used for AI training.
Your data “may be used to improve it”
In an email to TechCrunch, Meta’s policy communications manager Emil Vazquez explained that images and videos shared with Meta AI in the U.S. and Canada “may be used to improve it,” as stated in the company’s privacy policy. That means whenever you ask the AI to analyze your surroundings, you pass data to Meta that it can use to improve its AI models.
The reveal is especially worrying given the new, easy-to-use AI features rolled out with the Ray-Ban Meta glasses. The AI can now analyze real-time streams, such as looking through a closet to suggest what to wear, but those images are also sent to Meta to train its AI models.
But as users begin to interact with these smart glasses, they may not realize that they’re also giving Meta access to personal spaces, loved ones, or sensitive data for its AI development. The only way to avoid this is not to use Meta’s multimodal AI features at all. Meta says that interactions with the AI feature can be used to train models, but this isn’t always indicated in the user interface.
The privacy concerns trailing smart glasses echo those that once surrounded Google Glass, but now with AI at their core. As Meta pushes its AI-powered wearables, the question becomes: How far are users willing to go, knowingly or unknowingly, to feed the next generation of AI models?
Featured image credit: Meta