Whilst the efforts of Elon Musk’s company Neuralink to connect human brains to computers have been well publicised, it’s less well known that Synchron may be overtaking it on the path to the marketplace.
Synchron, a company that originated at the University of Melbourne, has attracted investment from Bill Gates and Jeff Bezos. It is perhaps even less widely known that Australian mining magnate Gina Rinehart has invested in the brain-scanning company Omniscient Neurotechnology, or that the neural interface company CTRL-labs has been acquired by Meta.
If we also take into account the rise of a host of smaller startups in Australia and overseas, it seems there might be more commercial interest in neurotechnologies than is widely understood. But what are these technologies, and should lawyers care about them?
The need to pay attention
Neurotechnologies are devices that monitor the brains or nervous systems of people or animals, and may act on them to influence neural activity. Some neurotechnologies are implanted in the brain; others interact with the nervous system through a headset or another external device. They are used in research and in therapeutic contexts, in computer gaming, and sometimes in the workplace, to monitor attention – and they may even be used to fly drones. There is considerable military interest in neurotechnology; in the USA, the Defense Advanced Research Projects Agency (DARPA) has been investing in this technology for many years. Significantly, the US seems to have become nervous about the export of this technology owing to its military significance, and is considering export controls.
The subject of export controls brings us into the realm of the lawyer, and leads us to ask: what are the implications of neurotechnology for the law, and the legal profession?
A recent report published by the Law Society of England and Wales, written by one of the authors of this piece, argues that lawyers, regulators and parliaments should be paying attention to neurotechnology. There are many signs that they already are. The UK’s Information Commissioner has just released a report on the topic; the President of the Australian Human Rights Commission recently said in a published speech that neurotechnology may soon be on the Commission’s agenda; and Australian firms such as Baker McKenzie and Herbert Smith Freehills have joined with the Law Society of NSW and the University of Sydney’s Law School in hosting neurotech events.
Of course, neurotechnology faces some very stiff competition from other emerging technologies when it comes to attracting the attention of lawyers. Much of the recent discussion about tech in the context of law and the legal profession has focused on the implications of generative AI – particularly the implications of ChatGPT and other chatbots – for the law and for the way lawyers work. Products such as DALL-E, Stable Diffusion and other AI image generators are also giving rise to interesting copyright issues.
Reading the brain – how private is that?
Neurotechnology increasingly incorporates AI, and generative AI has several implications for lawyers in the neurotechnology context.
Before examining the connection between neurotechnology and recent developments in generative AI, we use the work of Professor Jack Gallant of UC Berkeley as an example of the application of neurotech to research, and of the legal implications of this form of technology. Gallant is a neuroscientist whose research has involved using fMRI scanning machines to monitor the brains of people as they watch movies. For some time, he has been able to use this brain-monitoring neurotechnology to perform the astonishing task of creating hazy reconstructions of the movies his subjects are watching, using only their neural activity as decoded by sophisticated algorithms. In other words, you can show a person a movie, monitor their brain and, from that brain data alone, watch on a screen an approximation of what they are seeing.
Of course, there seems to be a privacy concern here. Should the contents of one’s mind be private, and unavailable for access by other people?
These are some of the kinds of concerns that motivated Chile to amend its Constitution at the end of 2021. The following year an attempt at more substantial and comprehensive constitutional change failed, but the neurotech-inspired amendments remained in place. This makes Chile the only country in the world whose Constitution refers to the protection of brain activity and the information derived from it. At the time of writing, Chile’s Neuroprotection Bill, which addresses privacy together with other legal implications of neurotechnology, is also making its way through the legislative process, demonstrating the nation’s commitment to regulating this emerging technology.
Privacy concerns also seem to be ramping up in other countries as a result of developments in generative AI. In March this year, a team of researchers from Japan demonstrated that generative AI, and Stable Diffusion in particular, can increase the precision with which mental images are decoded. Progress is also being made on converting thoughts into text. Recent work at the University of Texas has used neurotech to decode the neural activity associated with words that have been heard (or even just imagined) and translate it into text.
More rudimentary forms of brain reading are currently being marketed for purposes that many would find troubling. For example, the US company Brainwave Science is marketing a product that it says will “transform your interrogations”. Its headset is said to “monitor a suspect’s brainwave responses under examination and reveal whether they recogni[s]e and have knowledge of critical information”.
‘The US company Brainwave Science is marketing a product that it says will “transform your interrogations”. Its headset is said to “monitor a suspect’s brainwave responses under examination”.’
Of course, lawyers will recognise that this is a matter for concern. One might ask whether monitoring someone’s brain in this way during an interrogation infringes their human rights. Questions of this kind have prompted the formation of bodies like the advocacy-focused Neurorights Foundation in New York, and the group of international scholars (one of the present authors is a member) known as the Minding Rights Network, which aims to explore how international and domestic systems of law should respond to neurotechnologies. These questions are being asked globally: organisations such as the OECD, the Council of Europe and UNESCO have published reports addressing the human rights dimensions of neurotechnology, and the United Nations Human Rights Council has requested a report. It is not surprising that the Australian Human Rights Commission is now also interested.
Whilst the applications of neurotechnology described here provoke concern, it is important to be aware that by far the most common current applications are highly beneficial to their users.
Therapeutic applications of neurotechnology – and beyond
There is a growing list of neurological and psychiatric conditions for which neurotechnology can provide a benefit. Brain implants have been used to treat Parkinson’s disease for quite some time, as have cochlear implants to address hearing loss. The company Cochlear is of course a longstanding Australian success story, and its work encompasses neurotechnology, as its devices provide direct stimulation to the auditory nerve.
More recent innovations include the FDA-approved NeuroPace device, which provides relief for some of those whose epilepsy does not respond adequately to medication. The device works by monitoring the brain constantly, most of the time without intervening. When, however, its machine-learning algorithm detects the neural precursors of an epileptic fit, it stimulates the brain to avert the fit, and then returns to its passive observational state. One thing to note here is that the device does not operate in the same way as medication: it acts on an as-needed basis, instead of flooding the body constantly with chemicals.
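In software terms, this monitor-and-intervene cycle can be pictured as a simple control loop. The sketch below is purely illustrative: the simulated signal, the variance-based risk score and the threshold are invented stand-ins, not NeuroPace’s actual detection algorithm.

```python
import random

RISK_THRESHOLD = 0.9  # illustrative trigger level, not a clinical value

def read_neural_window(samples=256):
    """Stand-in for sampling a short window of neural activity."""
    return [random.gauss(0.0, 1.0) for _ in range(samples)]

def seizure_risk(window):
    """Stand-in for the device's trained detector.

    A real device applies a machine-learning model to features of the
    recorded signal; here, signal variance serves as a toy proxy.
    """
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    return min(1.0, variance / 4.0)

def stimulate():
    """Stand-in for delivering a brief counteracting pulse."""
    print("precursor detected: stimulation delivered")

def control_loop(cycles=100):
    """Monitor constantly; intervene only when a precursor is detected."""
    for _ in range(cycles):
        window = read_neural_window()
        if seizure_risk(window) > RISK_THRESHOLD:
            stimulate()
        # otherwise: remain in the passive observational state

control_loop()
```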
Of course, such devices need approval from the medical regulator – the TGA in Australia, the FDA in the US, and the MHRA in the UK, for example. Therapeutic neurotech companies need guidance on the path through the approval system, which may well be provided by lawyers working in the life sciences. However, the fact that devices like NeuroPace use a machine-learning approach adds an AI dimension. Given that large law firms may have some lawyers working in life sciences and others working in technology, some neurotechnologies may test the boundaries of these divisions and require increasing interdisciplinarity.
Regulatory bodies will probably need to expand their scope of regulation to accommodate the range of neurotechnology products. Whilst some products are marketed as addressing medical conditions like epilepsy, others are not, and may be non-invasive and, say, focused on stimulating the brain to increase the capacity to pay attention. Would this type of device need the oversight of the medical regulator if therapy is not its aim? A report to the UK Cabinet from the Regulatory Horizons Council at the end of 2022 suggested that it should. According to this report, all devices that modulate brain or neural activity, including non-invasive devices, should have oversight from the medical regulator even if not marketed as having a therapeutic purpose.
All devices that modulate brain or neural activity, including non-invasive devices, should have oversight from the medical regulator even if not marketed as having a therapeutic purpose.
One of the most remarkable applications of neurotechnology is its use in locked-in syndrome, a condition in which a person cannot use their muscles to perform bodily acts such as talking, walking or picking things up. Brain-computer interfaces (BCIs) of the type produced by Synchron can detect the neural activity connected with a user’s mental acts and translate it into commands that control a device such as a computer. This enables a user with locked-in syndrome to compose text or control an external device such as a wheelchair.
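In software terms, the translation step can be pictured as a small decode-and-dispatch loop. Everything in the sketch below is a hypothetical stand-in: a real BCI such as Synchron’s relies on trained decoders over recorded neural signals, not the random stub used here.

```python
import random

# Map decoded intentions to device commands. The vocabulary is invented
# for illustration; real systems define their own command sets.
COMMANDS = {
    "select": "click",
    "next": "move_cursor_right",
    "previous": "move_cursor_left",
    "rest": None,  # no intention decoded: do nothing
}

def decode_intention(neural_window):
    """Stand-in for a trained classifier over a window of neural data."""
    return random.choice(list(COMMANDS))

def bci_step():
    neural_window = [random.gauss(0.0, 1.0) for _ in range(128)]  # fake signal
    intention = decode_intention(neural_window)
    command = COMMANDS[intention]
    if command is not None:
        print(f"decoded '{intention}' -> issuing '{command}'")

for _ in range(5):
    bci_step()
```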
Whilst Synchron’s device is still in the trial stage, it, like devices from other companies, holds great promise; the restoration of some autonomy to a person who is locked in seems nothing short of miraculous.
DARPA is also giving attention to BCIs for soldiers. Future battlefields could conceivably involve soldiers with augmented reality glasses controlling drone swarms by way of mental acts. It is already possible to control a drone via an external headset that reads the neural activity connected with a user’s intentions for its flight. In fact, brain-drone racing competitions have already been held at some US universities.
But what if a person were to fly a drone at another person, with the intention of injuring or killing them? This would bring neurotechnology into the realm of criminal law.
It is worth noting that, at least for serious crimes, the prosecution must prove two things beyond reasonable doubt: the actus reus and the mens rea. What aspect of conduct would constitute the actus reus in relation to brain-drone murder? What would constitute the neurobionic criminal act? Is this a mental act? Or should the person be thought of as having jiggled their motor cortex to cause the drone to dive into the victim? The idea of a mental act constituting the actus reus seems to collapse the mens rea/actus reus distinction somewhat, and the idea of brain-jiggling as the conduct constituting the actus reus seems strange, to say the least. Neurotechnology may well perplex criminal lawyers.
Consumer law
Thus far we have considered neurotechnology in the legal areas of export control, privacy, human rights, the regulation of therapeutic devices, and criminal law. A further area where neurotechnology might challenge the law is consumer law.
Neurotechnology has very significant commercial appeal. It also has huge potential to undermine the efficiency of economies – and, by extension, our welfare – if misused. Efficient economies are driven by competition. Competition requires consumers to make free and informed choices about the goods and services they wish to consume. These consumer choices combine to drive businesses to produce goods and services that respond to consumers’ needs or desires, at a price that consumers are prepared to pay.
Because neurotechnology works by reading neural activity from our brains, the data it collects can be used to manipulate the choices we make at a computer. In using neurotechnology, we may unconsciously reveal a wealth of personal information that can be exploited to sell us more goods and services we may not really want or need, at grossly inflated prices. Take, for example, gaming via a BCI. A BCI could, whether through an implanted or, more commonly, a wearable device, monitor the consumer’s neural activity and translate this data into outputs that allow the consumer to participate in the game. The information recorded might include how the consumer responds to a variety of stimuli during the game. In a game that offers “in-app” purchases, this information could allow the developer not just to construct apps that consumers are likely to buy, but also to identify, with some precision, which consumers to target in the promotion of those apps and how best to go about this to maximise sales (and profit).
It is not hard to imagine a scenario in which consumers are offered an in-app purchase at the very moment they are most engaged by a game and are detected by the device to be in a heightened emotional state. When fully immersive BCI games become available, it is likely to be even more difficult for consumers to resist this kind of marketing while they are absorbed in an artificial world of the developer’s making.
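The triggering logic described here would be trivial to implement once the decoded signals exist. The sketch below is a hypothetical illustration of that concern; the signal names and thresholds are invented, not drawn from any actual product.

```python
from dataclasses import dataclass

@dataclass
class NeuralSnapshot:
    engagement: float  # 0.0-1.0, decoded attention/immersion level
    arousal: float     # 0.0-1.0, decoded emotional intensity

def should_present_offer(s: NeuralSnapshot,
                         engagement_floor: float = 0.8,
                         arousal_floor: float = 0.7) -> bool:
    """Trigger the in-app offer only when both decoded signals peak."""
    return s.engagement >= engagement_floor and s.arousal >= arousal_floor

# Example: a player mid-boss-fight whose decoded signals are elevated.
if should_present_offer(NeuralSnapshot(engagement=0.92, arousal=0.85)):
    print("present limited-time in-app purchase offer")
```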
Similarly, it is not hard to imagine that consumers’ personal information, acquired using BCI devices, could be sold to other businesses for marketing purposes. Consumers may struggle to exercise free choice in the face of microtargeted advertising that draws on deeply personal information recorded without their knowledge while they were playing what seemed like a harmless game.
And it is entirely possible that one day BCI devices will advance beyond merely reading our brains to stimulating them in the consumer context. In this way businesses might further interfere with consumer choice, reducing consumer resistance to purchases.
Existing consumer law may not adequately shield the consumer from such conduct. Laws in Australia, the UK, the US, Canada, and Singapore provide consumers with general protection against unconscionable or unfair commercial practices.
It is entirely possible that one day brain-computer interface devices will advance beyond merely reading our brains to stimulating them in the consumer context.
In principle, the conduct described in this article could be caught by these laws. The difficulty here is that consumers may be unaware of what is occurring and, even if they are aware to some degree, may be unlikely to have the evidence to prove it.
A further difficulty is that conduct which arises spontaneously from the technology, rather than by conscious human design, is unlikely to be found to be unconscionable or to be an (unfair) practice. How could this be? As mentioned above, neurotechnology is increasingly being paired with AI. Some AI does not work from computer code that is readily explicable, instead working more organically in a “black box”. That is, AI can be unpredictable and unfathomable. To expand on the example raised earlier, a BCI game could persuade the consumer to make an in-app purchase that the consumer does not really want or need, at a price well above that paid by other consumers, on terms and conditions that are not explained to the consumer.
If a person were to engage in conduct of this type, they would undoubtedly fall foul of consumer laws, but the position may be murkier if the conduct is generated by black box AI. Other issues for dissatisfied consumers may arise from the difficulty of identifying the person responsible when the technical system has been developed not by an individual but by a mix of humans and other legal entities, all working in separate areas of responsibility.
It is also unclear whether the law extends to saving consumers, or others, from bad bargains made in negotiations with another person who possesses vastly superior knowledge and ability, courtesy of neurotech – a “transhuman”. A paper published in the UK by the Ministry of Defence notes that “[I]n terms of augmentation, brain interfaces could: enhance concentration and memory function … or even allow new skills or knowledge to simply be ‘downloaded’.”
Neurotech may one day significantly enhance the cognitive abilities of some users. Disparities in knowledge and ability are not uncommon in a free market, but neurotech may create vast disparities between those who have and those who do not have the technology – disparities so vast, in fact, that they produce a whole class of people who cannot adequately fend for themselves in the commercial environment. The law has long given children who enter contracts special treatment, in recognition of the fact that their immaturity and inexperience make them vulnerable to exploitation. There is no such protection for people who, through their inability to access or use neurotech, are effectively rendered childlike in their dealings with transhumans.
All of this suggests that new laws may be required to regulate the use of the new technology in consumer and commercial transactions. Specific protections in respect of certain commercial activities are nothing new. In a largely bygone era, sales were once made by travelling salespeople at the door of a consumer’s home. Lawmakers recognised that consumers, cornered by salespeople unexpectedly in their home, were especially vulnerable to high-pressure selling techniques. Laws were passed creating a regime where the timing and manner of door-to-door sales were strictly controlled. Those laws continue to apply today.
Neurotech may create vast disparities between those who have and those who do not have the technology – disparities so vast, in fact, that they produce a whole class of people who cannot adequately fend for themselves in the commercial environment.
Lawmakers now need to turn their attention to the ways in which consumers may be cornered electronically, through neurotech and AI, into a sale. Laws could be made controlling how neurotech and AI are used in the selling of goods and services.
Advertisements for the sale of neurotech devices to consumers can also create a significant problem for consumer law. In most jurisdictions, false or misleading claims about goods and services are prohibited. In principle, sellers of neurotech devices cannot lawfully make false or misleading claims about the efficacy of their merchandise. But, as neurotech commentators Wexler and Reiner point out, the technology is so novel that proving the falsity or misleading nature of such claims may be difficult, or even impossible.
Neurotechnology and the future of the legal profession
In the future, neurotechnology may have the capacity to transform legal practice – for example, by supporting lawyers to manage the rigours of their work. Legal practice is notoriously hazardous to mental health, and around the globe lawyers are reporting higher levels of anxiety and depression than the general community. Changes to improve wellbeing in the profession have been called for, but change has been slow. Until such changes are in place, lawyers could benefit from the therapeutic applications of neurotechnology in alleviating depression and anxiety.
Increased pressure on lawyers?
But there is a possibility that neurotechnology may ramp up the pressures on lawyers rather than alleviating them. The Law Society of England and Wales report mentioned earlier raised the possibility of a transition, for some legal practitioners, from one source of stress to one even more challenging: from the billable hour to the billable unit of attention.
The concept of the billable unit of attention may seem far-fetched, but consider this: as neurotechnological devices increase in their capacity to monitor attention, it is not hard to imagine a transition from a safety orientation to a productivity orientation. It may well be a good thing to fit air traffic controllers and heavy goods vehicle drivers with headsets that monitor their brains and alert them if their attention starts to fade, warning of the risk of an accident. However, using the same parameters, it is not hard to imagine the development of devices for monitoring when lawyers are or are not paying attention to their files.
If the capacity to monitor attention then becomes widely known, some clients may expect to pay for legal services only when their lawyer is fully attentive to their files. Billable hours may just become too crude a billing measure when billable units of attention are an option.
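Mechanically, a billable unit of attention would be simple to compute from such a headset’s output. The sketch below is speculative: the decoded attention scores, the sampling interval and the billing threshold are all invented for illustration.

```python
ATTENTION_FLOOR = 0.7  # illustrative "fully attentive" threshold
SAMPLE_SECONDS = 60    # assume one decoded score per minute of work

def billable_seconds(attention_scores):
    """Bill only the sampled intervals spent above the threshold."""
    return sum(SAMPLE_SECONDS for s in attention_scores if s >= ATTENTION_FLOOR)

# An hour of work during which attention faded in the middle.
scores = [0.9] * 20 + [0.5] * 15 + [0.85] * 25
print(f"billable: {billable_seconds(scores) // 60} of 60 minutes")
```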
It is not hard to imagine the development of devices for monitoring when lawyers are or are not paying attention to their files.
With this in mind, issues around the right to mental privacy could well apply to the working lives of lawyers as well as to the lives of their clients. In a working environment where neurotechnology impacts on their mental privacy, lawyers may well need therapeutic neurotech to support their mental health.
Peering further into the future, what if lawyers had the capacity to enhance their cognitive powers? Increased powers of recall and capacity to pay attention could well confer an advantage on the legal workplaces implementing these applications of neurotechnology. And, as some law firms start to cognitively enhance their own workforces, this might put pressure on those lagging behind in the adoption of neurotech to get with the program and upgrade the cognitive capacity of their own staff. Then, as artificially intelligent competitors start to emerge, the cyborgisation of lawyers might well become an increasingly attractive option.
Neurotechnological advances are creating a lot for lawyers to think about. And who knows? In order to get their head around this view of the future, they may need to subject themselves to a neurotechnological upgrade.