
Q&A on augmenting reality - discrimination, sensitive information, free speech and use by children


This Q&A was first published on LexisPSL on 27 October 2016 as part of its Augmented Reality series. The views expressed are those of the authors and do not necessarily reflect those of either Fieldfisher or LexisNexis.

Chris Eastham, senior associate, and Marta Dunphy-Moriel, associate, at Fieldfisher LLP give their views on some of the legal issues which may arise in relation to confidential information and the use of AR technology.

Could the unregulated use of AR have intended or unintended discriminatory consequences?

Generally speaking, technology isn't inherently good or bad – it's the way it is used that has a positive or negative impact. One difficulty with AR is that discrimination may arise accidentally, if the device gives access to information that would be inappropriate to consider when taking a particular decision.

For example, facial recognition software coupled with AR could display information about a person being interviewed which the interviewer would not otherwise have had readily available – at least without actively looking for it. Social media information, for example, might be displayed revealing relationship status, sexuality, religious views, political affiliations, or what the interviewee was doing the night before.

Some of this information would, if used, make the decision-making process discriminatory, and so the interviewer wouldn't normally seek it out. However, if the information is displayed automatically, then even the most careful interviewer might be swayed by unconscious bias. Furthermore, not everything you might read on the internet is true, and so it would be risky for decision-makers to give weight to information from unverified sources.

This does not necessarily mean that the use of AR has to be regulated, as we already have a regulatory framework to deal with discrimination. Users will, however, need to consider how their use of AR sits within the existing framework. Perhaps those seeking to avoid bias would elect not to receive information via AR unless it was both relevant and from a verified source.

If AR technology accelerates the adoption of wearable technology, could this pose threats in terms of protecting confidential information, particularly where such wearable technology makes recording equipment less obvious or covert?

In short, yes. If wearable technology incorporating recording or transmitting apparatus is used inappropriately, then it could pose a threat to the confidentiality of information, as well as to the privacy of the individual. We believe, however, that these kinds of risks can be mitigated if the correct operational and technical measures are taken to protect the information.

Wearers of technology will need to be made aware of the practical risks. As an example, don't wear your internet-connected glasses when you go to the bathroom! Social norms will no doubt develop as the technology becomes more widespread, but to assist that we would suggest that organisations include provision for AR in their information security policies.

Once information is captured by AR technology, it will likely be the design and implementation of the device and supporting infrastructure, together with the policies and procedures of the service provider, that dictate how secure (or otherwise) the information is. Providers should be considering privacy and information security from the outset when designing their systems and processes, ensuring (for example) that personal data is not used except for specific purposes of which the user has been fully informed and to which they have agreed. They will need to decide, at an early stage, what information the device will collect, and for what purposes. Providers will then need to ensure that they are transparent in their dealings with the user, and give clear information as to what will be done with the data and who will process it. Users will also need to have choices as to their individual privacy preferences.

Providers will need to be careful, because there is the potential to capture personal information about individuals who are not the user, and who will therefore not have had any opportunity to consent to the processing - this might include location data about a person that the user passes in the street.

What is the relationship between AR, free speech, and public interest in information?

In our view, the relationship between the information shown via AR and free speech is broadly the same as any other material available over the public internet.

One difference is that material on the internet is usually something the user has to seek out, whereas one of the advantages of AR is that it can automatically display information based on context. This might cause issues if content is displayed on the basis of a contextual trigger which is unsuitable for, or offensive to, the user: for example, an advertisement for an adult store triggered by GPS data and shown to a user who turns out to be a child. Clearly the law relating to advertising will apply in that example. Providers will need to consider this at the design stage and include safeguards (such as a content filter).
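By way of illustration, the minimal sketch below shows the kind of content filter we have in mind: contextually triggered content is checked against the user's profile before it is displayed. The data model, rating labels and age threshold are our own assumptions for the purpose of the example, not features of any particular AR platform.

    # Illustrative sketch only: a simple content filter for contextually
    # triggered AR content. Ratings, triggers and the age threshold are
    # assumptions made for this example.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ARContent:
        title: str
        rating: str    # e.g. "general" or "adult"
        trigger: str   # contextual trigger, e.g. "gps:adult_store_nearby"

    @dataclass
    class UserProfile:
        age: Optional[int]   # None where the age is unknown
        filter_adult: bool   # the user's own content preference

    def may_display(content: ARContent, user: UserProfile) -> bool:
        """Return True only if contextually triggered content is suitable for this user."""
        if content.rating == "adult":
            # Fail safe: if the age is unknown, assume the user may be a child.
            if user.age is None or user.age < 18:
                return False
            if user.filter_adult:
                return False
        return True

    # The GPS-triggered advertisement from the example above is suppressed for a child.
    advert = ARContent("Adult store promotion", "adult", "gps:adult_store_nearby")
    print(may_display(advert, UserProfile(age=12, filter_adult=False)))   # False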

We are not aware of any AR-specific cases regarding claims of public interest in the disclosure of images captured by AR. Cases in which the Courts seek to strike a balance between privacy, public interest and free speech are, however, not uncommon. A recent example is the release of Virgin Trains' CCTV footage to the press and social media in response to a statement Jeremy Corbyn had made about overcrowding on trains. We don't think that the method of image recording affects the underlying principles of law.

As with any other available technology, users should be given adequate information and choices to manage their privacy and the content to which they are exposed, in accordance with the current legal framework.

AR is likely to be particularly appealing to younger audiences. Are there any specific privacy considerations when designing applications that are likely to be used by children?

From a privacy standpoint, the first question designers should bear in mind is whether the application is intended for use by children or, as is common, is not so intended but is likely to be used by them. Children are becoming tech-savvy at an earlier age than previous generations, with many being more IT-literate than their parents. Whilst it falls on parents to ensure that any child safety settings provided are used, app designers should be proactive in considering privacy and child safety from the outset. In particular, they should include safeguards to detect and prevent (or stop) any unintended processing, and should be able to demonstrate that adequate steps were taken.

It is worth noting that under current data protection law (in most jurisdictions), individuals are considered to be children if under 13 years of age. Under the new European General Data Protection Regulation, this threshold rises to 16 years by default (although member states may set a lower age, provided it is not below 13). As a result, more of the users accessing applications will be considered children.

Where an application is designed for children (or to collect children's data), providers must ensure that everything about the application is designed putting the child's interests first. Establishing the recommended minimum age is vital (as it is when designing toys more generally) in order to ensure that the language and content are appropriate - this includes making all language used in the application (and the privacy language in particular) easy for children to understand. All children's data should be treated as sensitive, and adequate security measures will need to be implemented.

The key takeaways for application designers are:

  • be aware of your risk group – ask yourself whether your application is likely to appeal to tech-savvy under-16s;
  • make it clear to users if your application is not suitable for children;
  • implement suitable mechanisms to limit access or obtain parental consent (a minimal sketch of such a check follows this list); and
  • maintain internal protocols to handle situations arising where a child uses the application without parental consent.
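To illustrate the third point, the short sketch below shows one way an age gate and parental consent check might work in practice. The age threshold and the consent flow are assumptions made for the example; the appropriate values will depend on the jurisdiction and on how parental consent is obtained and verified.

    # Illustrative sketch only: an age gate that routes under-age users into a
    # parental consent flow before any personal data is collected. The threshold
    # is an assumption; the applicable age varies by jurisdiction (13 under many
    # current regimes, up to 16 under the GDPR).
    DIGITAL_CONSENT_AGE = 16   # assumed threshold; configure per jurisdiction

    def on_signup(declared_age: int, has_parental_consent: bool) -> str:
        """Decide how registration should proceed based on the declared age."""
        if declared_age >= DIGITAL_CONSENT_AGE:
            return "proceed"                   # the user can consent for themselves
        if has_parental_consent:
            return "proceed_with_consent"      # verified parental consent is held
        return "request_parental_consent"      # hold processing until consent is given

    # A 12-year-old without parental consent is held at the consent step.
    print(on_signup(12, has_parental_consent=False))   # request_parental_consent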

Remember that steps taken before the event, and those taken in dealing with any issue which arises, are relevant not only to compliance, but also to mitigating reputational risk.
