To the Editor — Emerging neurotechnologies raise important governance questions related to, for example, dual use, brain data privacy, and manipulation of personal autonomy. Although many public sector research initiatives have implemented measures to address these issues, similar systematic measures in the private sector have yet to emerge. This gap is critical, as neurotech innovation today is largely driven by a set of companies that are subject to growing public scrutiny1,2,3,4,5. Here we detail lessons, emerging practices and open questions for responsible innovation in the private sector that are the result of three years of policy deliberations that began with a 2018 conference in Shanghai convened by the Organization for Economic Co-operation and Development (OECD) and led to the release of the “OECD Recommendation on Responsible Innovation in Neurotechnology” last year6. The principles therein cover opportunities and challenges for better innovation practices in company settings—including the use of ethics advisory boards, company-level principles, and ethics-by-design approaches—with broad relevance beyond neurotech to digital medicine and corporate R&D activities in today’s era of ‘tech-lash’. We argue that it is time for a radical shift in the conversation about governance of emerging neurotech: effective governance must focus on the private sector as a central actor early on—before trajectories are locked in and scaling takes off—and requires a new set of policy perspectives and collaborative tools to do so. These tools must complement existing efforts in public-sector research ethics, post hoc product regulation and corporate social responsibility. They must also reflect the growing recognition that we cannot rely on industry self-regulation alone to steer innovation activity in socially desirable directions.

Missing the mark with ethics and responsible innovation initiatives?

Emerging neurotechnologies, such as brain–computer interfaces (BCIs) or digital phenotyping for mental health monitoring, hold considerable promise for health and well-being, but also raise important ethical, social, and governance questions (Box 1). These questions include concerns about brain data privacy, runaway human enhancement, individual autonomy, vulnerability to political or economic manipulation, direct-to-consumer (DTC) marketing of devices with variable, if any, effectiveness, dual use, do-it-yourself (DIY) neurotech and neurohacking, and new forms of inequality7,8,9,10,11. Although public-sector research has been quick to implement targeted programs to tackle these concerns (for example, the ‘Ethics and Society’ strand of the Human Brain Project12,13), the private sector has thus far paid comparatively little systematic attention to them3.

This gap between public and private sector efforts to foster responsible innovation practices is critical insofar as, across OECD countries, over 70% of all R&D is performed by the private sector. Moreover, the implications of many recent innovations have become fully visible to society only once scaled up by companies, which has increasingly placed some of the most successful technology firms in the crosshairs of regulators and a public tech-lash. Facebook, for instance, has been subject to a barrage of inquiries about free speech and content moderation, data protection, and the effects of surveillance capitalism and echo chambers on democracy. Clearview AI promises greater public safety through scalable, app-based facial-recognition techniques, but has been criticized for enabling intrusive and potentially authoritarian uses. Digital platforms that have begun to transform entire service sectors, such as Uber or Airbnb, have also raised concerns about new inequalities, from undermining labor protections to driving real estate speculation. From a policy perspective, these big tech examples raise the question of whether ‘responsible innovation’ efforts focused on public-sector research simply miss the mark. The same holds true for emerging neurotechnologies.

Traditional technology governance is increasingly insufficient

For emerging neurotech, traditional means of governance, including institutionalized research ethics, post hoc regulation and market mechanisms, are ill-equipped to capture the ways in which these technologies could reshape our societies, especially in terms of long-term consequences. The potential uses of non-invasive BCIs in the workplace, for example, are raising new controversies about labor protection and employee surveillance14. Likewise, there is debate as to whether research into certain types of BCI should be banned because of dual-use applications (for example, covert manipulation of personality), thereby forgoing potential civil-use benefits (for example, the restoration of sensorimotor functions after spinal cord injury)10,15. Challenges may even rise to the judicial and constitutional level: the landmark federal case US v. Semrau for the first time considered, though ultimately rejected, brain scans as a source of lie-detection evidence16. The Chilean Senate is considering a constitutional amendment that, if approved, would be the first to legally codify ‘neurorights’ protecting the mental integrity and privacy of its citizens17. This reflects wider debates about the need for new human rights in the age of rapidly evolving neuroscience and neurotech18.

Neurotech, like many other innovation domains, is also subject to a patchwork of national and regional regulations that creates considerable uncertainty. National attempts to govern emerging technology are frequently seen as ineffective or even detrimental to innovative economies, prompting concerns that companies and technologies may simply move across borders. Developing new international treaties, however, is notoriously difficult, and intergovernmental organizations often rely on soft law, such as the OECD recommendation6,19. Within single jurisdictions, too, there is ample regulatory complexity, as neurotech straddles sectors, applications and regulatory domains. BCIs, digital phenotyping apps and psychopharmacology fall under different regulatory regimes spanning health, safety, trade and drug regulation, each governed by different governmental agencies and jurisdictions.

Recognizing these challenges, policymakers have increasingly turned to upstream governance approaches, that is, early interventions during the research process, to complement traditional post hoc regulation. In public-sector research, approaches such as anticipatory governance20 and responsible research and innovation (RRI)21 have gained credibility. Ethical, Legal and Social Implications (ELSI) instruments piloted by the Human Genome Project22, such as focus group research, citizen juries, ethical review through institutional review boards, or stage-gate processes, have been taken up by the Human Brain Project and the BRAIN Initiative, among others23.

Although ELSI and RRI frameworks have successfully penetrated large parts of public research on neurotech, similar systematic frameworks are lacking in the private sector. Companies tend to sit in a blind spot between early-stage research ethics and post hoc regulatory responses that focus primarily on safety and efficacy, monopoly power or liability. Approaches found in RRI or ELSI programs are neither mandatory nor easily applicable in corporate settings. For one, the broader social consequences of technological change, including new forms of inequality, vulnerability or risk, are hard to capture in company metrics, incentive structures and shareholder value logics. For another, the field is largely driven by startup dynamics, which afford little time for extended deliberation or dedicated organizational resources. The entrepreneurial mindset of moving fast, breaking things, scaling up and worrying about consequences later24 is at odds with traditional governance mechanisms such as ethics board reviews and public consultations during product development. This need for speed and scale can lead to unintended consequences as well as overpromising. For example, Lumosity, a company providing a brain-training app, was fined $2 million by the US Federal Trade Commission in 2016 for deceptive claims that its products enhanced concentration and decreased cognitive impairment in patients with Alzheimer disease.

In the absence of proven approaches, governments are increasingly embracing experimental strategies to tackle governance challenges or test applications. The US Food and Drug Administration is piloting precertification programs partly to get a grip on, among other things, emerging mobile applications for mental health that are increasingly marketed DTC. The city of Reno has embarked on a local experiment to offer app-based mental health services to its residents through the company Talkspace, to help alleviate the devastating mental health effects of the COVID-19 pandemic. This took place despite recent controversy around the privacy practices of such apps, reflecting a common ‘hands-off’ approach by local jurisdictions towards responsible innovation.

What companies should do

The current lack of systematic responsibility frameworks does not mean that embedded upstream governance options for emerging technologies cannot be implemented in the private sector. Our three-year dialogue process revealed that a range of neurotech companies are actively seeking guidance and developing their own toolkits to reconcile structural constraints with the apparent need for greater public oversight. What is more, many leading neurotech companies have a strong interest in publicly demonstrating responsibility and integrity, recognizing that the entire nascent sector can be harmed by a single irresponsible actor in the field. Below, we list several emerging practices and principles that can help ensure better governance of neurotechnology innovation in corporate settings.

Enable responsibility review and diverse perspectives as part of the R&D process

One example of a company that appointed an ELSI Advisory Board early in its history is Mindstrong, which develops apps to predict mental illness relapses from patients’ smartphone interactions. The board brought together engineers, ethicists, social scientists and people living with mental health issues to actively shape development of the technology. It was instrumental, for example, in the decision to switch from collecting text or global positioning system (GPS) data, which users considered an intrusion on their privacy, to content-free and less readily identifiable signals from the smartphone, such as keyboard interaction patterns25 (see the illustrative sketch below). This diversity-oriented advisory board strategy is broadly consistent with the recent surge in corporate hires from the humanities and social sciences to inject critical and socially inclusive perspectives into innovation processes. Getting these structures right and sustaining them in a corporate environment is not trivial, however: Google famously had to dissolve its AI Ethics Council just one week after its much-anticipated launch, following considerable internal and external backlash about its composition.
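As a purely illustrative aside, the short Python sketch below is our own hypothetical example, not Mindstrong’s implementation; the names KeyEvent and interaction_features are invented here. It shows how keyboard interactions might be reduced to content-free timing statistics, capturing typing rhythm while recording neither the keys pressed nor any message content.

```python
# Hypothetical sketch only -- NOT Mindstrong's implementation. Illustrates the idea of
# deriving content-free keyboard-interaction features (timing metadata) instead of
# logging text or GPS data.
from dataclasses import dataclass
from statistics import mean, stdev
from typing import List


@dataclass
class KeyEvent:
    timestamp_ms: float  # when a key was pressed; which key it was is deliberately not recorded


def interaction_features(events: List[KeyEvent]) -> dict:
    """Summarize typing rhythm without capturing any message content."""
    if len(events) < 3:
        return {}
    gaps = [b.timestamp_ms - a.timestamp_ms for a, b in zip(events, events[1:])]
    return {
        "mean_interkey_ms": mean(gaps),
        "sd_interkey_ms": stdev(gaps),
        "n_keystrokes": len(events),
    }


# Example: three keystrokes yield only aggregate timing statistics.
print(interaction_features([KeyEvent(0.0), KeyEvent(180.0), KeyEvent(420.0)]))
```

The point of such a design is that only aggregate timing summaries leave the device, which are far less readily identifiable than raw text or GPS traces.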

Develop robust responsibility principles as part of a startup’s mission

One of us (D.B.) has developed a code of responsibility for his neurotech startup Aifred, which applies deep-learning algorithms to enhance individualized psychiatric treatment. In this ‘meticulous transparency’ framework, all machine-learning projects must be reviewed by the clinical and machine-learning teams with respect to their intended outcome, the target population, the representativeness of the available data, interpretability metrics, and monitoring for adverse effects of the model26. The framework helped resolve concrete design dilemmas, such as whether a predictive algorithm should output binary labels (‘at risk’ versus ‘not at risk’ of suicide); the company decided that the system was best designed as a warning tool available only to clinicians and producing probabilistic, rather than binary, outputs. This, in turn, affected the way the machine-learning analyses were conceptualized, an example of responsibility-driven design. The focus on responsibility as part of a concrete, embedded code differs from the rather high-level, non-committal ethics guidelines for artificial intelligence and other technologies that corporate giants have released by the dozen.
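To make that design distinction concrete, the following minimal Python sketch is again our own hypothetical illustration rather than Aifred’s actual system; the names RiskEstimate and clinician_warning and the 0.2 review threshold are invented here. It shows how a clinician-facing warning can surface a probability instead of a binary ‘at risk’/‘not at risk’ label.

```python
# Hypothetical sketch only -- NOT Aifred's actual system. Illustrates surfacing a
# probability to clinicians rather than a binary 'at risk' / 'not at risk' verdict.
from dataclasses import dataclass


@dataclass
class RiskEstimate:
    probability: float  # calibrated probability from an upstream model (assumed, not shown here)


def clinician_warning(estimate: RiskEstimate, review_threshold: float = 0.2) -> str:
    """Frame model output as a graded prompt for clinical judgment, not a verdict."""
    pct = round(100 * estimate.probability)
    if estimate.probability >= review_threshold:
        return (f"Estimated risk {pct}%, above the review threshold; "
                "consider prioritizing follow-up. Decision support only, not a diagnosis.")
    return f"Estimated risk {pct}%; continue routine monitoring."


# Example: a 34% estimate prompts review but leaves the judgment to the clinician.
print(clinician_warning(RiskEstimate(probability=0.34)))
```

Framing the output this way keeps the final judgment with the clinician and avoids the false certainty of a hard binary classification, which is the trade-off the framework was designed to surface.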

Embrace collectively legitimated ethics-by-design approaches

Standard-setting bodies like the Institute of Electrical and Electronics Engineers (IEEE) are increasingly targeting the engineering phase of product development to address social values and standardize certain critical features from the beginning, including in the fields of neurotech and artificial intelligence27. Upstream ethics-by-design approaches aim to hardwire values into downstream developments. Given their consequences, however, these choices need to be opened up to collective deliberation and be subject to some form of political legitimation. Through bodies like the IEEE, public- and private-sector actors can work together to collectively define product standards and codify responsible design choices that embody shared commitments around values such as privacy and transparency.

Mobilize tech transfer as a critical juncture for social impact

Many universities are adjusting their tech transfer rules to better reflect social priorities, emphasizing, for example, inclusiveness in benefit-sharing and requirements to institutionalize certain values and accountability structures. Historically, the incentive structures of technology transfer offices have tended to reward maximizing revenue, the number of startups, or the scope of corporate sponsorship, with little attention to ethical and social deliberation. With an impressive list of signatories, the “Nine Points to Consider” code of good practice in university technology transfer provides a model for how to leverage technology transfer for more responsible innovation practices, including in neurotech25.

Pressure investors to select for responsible technology development approaches

Shareholders are increasingly stepping up to inject responsibility considerations into company strategy. In 2018, two major Wall Street investors pressured Apple to take steps to combat iPhone addiction among children, which contributed to the introduction of the Screen Time feature. In neurotech, some companies are actively seeking out investors who match their values. Yet, as the Shanghai OECD conference revealed, the number of venture capital investors foregrounding responsible innovation concerns is limited, despite considerable interest among startups in working with specialized investors who know and acknowledge the ethical and social challenges of their technologies. This opens up an opportunity for a new subset of investment instruments or venture capital niches dedicated to responsible innovation practices, similar to the recent surge in sustainable investment and ‘green bond’ portfolios that target environmental or climate-related projects28. Such developments could be further supported by new standards or certifications for responsible investment in tech startups.

Rethink corporate social responsibility approaches

Traditional corporate social responsibility (CSR) typically addresses the protection of workers, local communities and the environment through self-governance tools. However, CSR has largely ignored innovation as a key arena for social impact and responsible business conduct, as is evident in today’s controversies surrounding ‘big tech’. In most neurotech companies, CSR approaches do not help solve the aforementioned ethical, social and governance dilemmas. Likewise, engineering ethics frameworks tend to remain outside the purview of CSR29. Targeting the next generation of innovators, a growing number of universities offer resources for students, entrepreneurs and startups to consider responsibility and risks to sustainability as part of business model development; examples include Arizona State University’s Risk Innovation Nexus and the Technical University of Munich’s Master of Arts program “Responsibility in Science, Engineering, and Technology,” in which one of us (S.P.) is involved. Incorporating responsible innovation into engineering education and nascent business models can create added value, for example by gauging long-term societal implications or engaging early with potential future concerns or regulations in the making. In the long run, the disconnect between CSR and corporate R&D raises serious questions about the adequacy of traditional CSR approaches in an era in which business models center on innovation and disruption.

Finding the right balance

The past 10 years have brought into sharp relief not only the seemingly unregulated spaces in which innovative companies can rapidly grow from small startups into powerful global forces, but also the difficulty of exerting regulatory scrutiny in real time through traditional governance approaches. The burgeoning field of neurotech is no exception. Yet, cognizant of the fallout from recent controversies surrounding ‘big tech’, many neurotech companies are actively looking for guidance on how to increase the social robustness and sustainability of their emerging products and services.

There is, of course, reason to be skeptical that companies alone will ensure socially responsible technology trajectories. Industry self-regulation has regularly failed to deliver the promised results and has instead stoked critiques of tokenism and greenwashing. A similar effect can arguably be observed in the current wave of ‘ethics washing’ (the practice of implementing superficial ethics mechanisms or principles in response to public pressure while purposefully side-stepping more fundamental issues), as most recently observed in the controversies surrounding Facebook’s Oversight Board. Many large tech companies have come to accept asking for forgiveness, in the form of fines or legal settlements, simply as a cost of doing business. Thus, the promise of more responsible innovation by way of industry self-governance can only supplement, rather than supplant, public oversight.

However, there is ample evidence that government regulation alone will not suffice, as traditional policy instruments are increasingly at a disadvantage in today’s innovation landscape. This is why dedicated forums and spaces that can mobilize alliances of companies, policymakers, academics and citizens are needed to raise the bar for responsible innovation and to co-develop new mechanisms of business self-governance, including the ones discussed above, alongside new government regulation. International organizations such as the OECD or IEEE are uniquely positioned to speak to differences in national regulations and are already playing key roles in fostering the necessary dialogs3,6,27. Universities, too, can mobilize their educational and entrepreneurial ecosystems to heighten sensitivity to responsibility concerns and to foster policy dialog while companies are still in the startup stage. What is more, experimental ‘living lab’ and ‘sandbox’ approaches could be used to co-develop new regulations and foster public debate about novel technologies, not just to create pro-business innovation environments through lower regulatory standards, as is currently the case in many such settings30. Neurotech companies, with obvious social and ethical challenges on the horizon, have a chance to set an example for the entire tech industry.