The emergence of glasses with integrated artificial intelligence (AI), such as Ray-Ban Meta, is transforming our technological and social habits by offering advanced features such as voice-based virtual assistants, discreet audio and video recording, cloud connectivity, and facial recognition. However, alongside their advantages, these devices present significant legal challenges. It is essential to analyse the risks and obligations relating to privacy, personal data protection, criminal liability, labour rights, and the protection of fundamental rights under the applicable legislation and prevailing legal doctrine.

Processing of personal data

Any capture, recording, or dissemination of images or sounds of identifiable individuals carried out using smart glasses constitutes processing of personal data and is therefore subject to the General Data Protection Regulation (GDPR) and to Organic Law 3/2018 on the Protection of Personal Data and the Guarantee of Digital Rights (LOPDGDD). Both the images and the sounds collected through these devices qualify as personal data, regardless of the method or technology used.

This entails the following obligations for data controllers:

  • (i). To ensure that processing complies with the principles of lawfulness, transparency, data minimisation, accuracy, purpose limitation, and storage limitation.
  • (ii). To provide clear and visible information to data subjects. In the case of traditional video surveillance, signage and information notices placed in a visible location are required. Smart glasses such as the Ray-Ban Meta may breach this obligation if the recording indicator light is not sufficiently visible, allowing recordings to be made without the data subject's knowledge or consent, which may constitute both an administrative infringement and a criminal offence.

Where the functions of the glasses include automated facial recognition systems or other biometric elements, the legal regime is significantly more stringent. The processing of biometric data requires:

  • (i). A reinforced and exceptional legal basis (as a rule, mere consent is insufficient, particularly in professional contexts).
  • (ii). Carrying out a Data Protection Impact Assessment (DPIA).
  • (iii). Justification of necessity and proportionality, and the adoption of additional transparency safeguards.

The large-scale use of biometrics or facial recognition in public spaces is, save for extremely limited exceptions, considered incompatible with current legislation and is expressly prohibited by the European Artificial Intelligence Regulation (AI Act) where it is aimed at emotional inference, classification based on specially protected data, or the creation of large-scale facial recognition databases.

Consent, transparency, and limitation of use

Processing will be lawful only if it is based on a valid legal ground: explicit consent, performance of a contract, compliance with a legal obligation, protection of vital interests, public interest, or a duly balanced legitimate interest. In contexts where consent is unfeasible or vitiated (for example, employment relationships or situations of imbalance), an alternative valid and justified legal basis must be identified.

Users of smart glasses must be made aware not only of the recordings themselves but also of any cloud storage, disclosure to third parties, profiling, or potential use of the data for algorithm and AI training. The absence of clear and transparent policies regarding the data lifecycle, the purposes of processing, and data subject rights constitutes a direct breach of the transparency obligations imposed by the GDPR and the LOPDGDD.

In particular, the processing of metadata (location, usage habits, social interactions) also requires the provision of specific information, the possibility of objecting, and the ability to exercise data subject rights (access, rectification, erasure, portability, and objection).

Security measures and liability

Cloud-based data storage increases the risk of security breaches and unauthorised access. Manufacturers and users are required, under current regulations, to implement appropriate technical and organisational measures (encryption, access control, auditing, cybersecurity safeguards) to prevent unlawful access to and use of personal data. The absence or inadequacy of such measures may give rise to administrative or civil liability toward affected individuals, as well as severe sanctions imposed by the supervisory authority.
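
By way of illustration only, the following minimal Python sketch shows one way in which captured data could be encrypted on the device before any cloud transmission. The Python cryptography library, the inline key generation, and the stand-in data are assumptions made purely for this example and do not describe any manufacturer's actual implementation.

    # Illustrative sketch: encrypt captured data client-side before upload.
    from cryptography.fernet import Fernet

    # In a real deployment the key would come from a secure key-management
    # system; generating it inline is solely for the sake of the example.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    recording = b"raw audio/video bytes captured by the device"  # stand-in data

    ciphertext = cipher.encrypt(recording)          # authenticated symmetric encryption
    assert cipher.decrypt(ciphertext) == recording  # round trip; only the
                                                    # ciphertext should ever leave the device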

Criminal law aspects

The recording, use, or dissemination of images or sounds of third parties through technical devices, without their consent and with minimal or no perceptible warning, may constitute the criminal offence of unlawful discovery and disclosure of secrets (Article 197 of the Spanish Criminal Code). Liability is aggravated where the data concern minors or vulnerable persons, or where the conduct is carried out for profit. The mere transmission or disclosure of unlawfully obtained recordings may constitute a separate and independent offence.

Professional context 

The use of smart glasses in the workplace entails an additional obligation to respect employees' privacy and personal intimacy, and to limit monitoring or video surveillance to justified cases, with clear information provided to employees and their representatives. Recording in rest areas is prohibited, as is any form of permanent or invasive monitoring, or monitoring lacking a sufficient legal basis. Where biometric data are collected for attendance control, consent is presumed to be vitiated, and less intrusive alternative measures must be sought.

New legal obligations

The European Artificial Intelligence Regulation (AI Act) entered into force in August 2024 and will become fully applicable as of August 2026. This regulation imposes:

  • (i). The principle of “AI literacy”: providers and deployers of AI systems must have the minimum training and understanding necessary to ensure their safe use.
  • (ii). Risk assessments prior to deployment and analysis of whether the device constitutes a “high-risk” system (e.g., biometric systems or automated decision-making affecting individuals).
  • (iii). The prohibition of certain AI practices, such as emotional inference or biometric categorisation for purposes unrelated to duly justified health or security objectives.
  • (iv). Enhanced obligations regarding transparency, user information, and human oversight.

Failure to comply with these obligations may result in the withdrawal of products from the market and the imposition of substantial fines.

Recommendations and best practices

In light of the identified risks and the applicable regulatory framework, the following best practices are recommended:

  • Privacy by design: manufacturers must incorporate mechanisms from the outset that guarantee privacy, minimise the collection of unnecessary information, and restrict access.
  • Clear information and visible warnings: effective informational mechanisms must be in place to warn third parties of potential recording.
  • Explicit consent for recordings and biometrics: such processing should be carried out only in exceptional, fully informed contexts.
  • Impact assessments and consultation with the supervisory authority: DPIAs must be conducted prior to deployment in high-risk scenarios.
  • Enhanced security measures for storage and transmission: security must be a priority, particularly given the inherent vulnerabilities of cloud storage and the possibility of remote access.
  • Respect for limits in sensitive and professional settings: avoid the use of these tools in private, sensitive, or rest areas, and always consult employees' legal representatives before implementing monitoring or control technologies.
  • Regular review and updating of privacy and security policies: manufacturers and users must stay up to date with rapid regulatory developments, particularly concerning the AI Act and relevant case law.

Shameem Hanif Truszkowska

Vilá Abogados

For more information, please contact va@vila.es

2nd of January 2026