Monday, October 2, 2023

Navigating the Risks and Benefits of AI: Lessons from Nanotechnology


Two decades ago, nanotechnology was at roughly the stage of development where artificial intelligence is today. The two technologies differ substantially in their details, but their shared quest for responsible and beneficial development presents intriguing parallels. Notably, nanotechnology, which operates at the atomic and molecular scale, confronted its own existential concerns, typified by the "gray goo" scenario.

Yet as AI-based technologies with transformative potential continue to proliferate and capture attention, people working in artificial intelligence appear to be overlooking the lessons learned in the field of nanotechnology.

As scholars who study the future of innovation, we have examined these parallels closely, and our insights appear in a recent commentary in the journal Nature Nanotechnology. The commentary also underscores the importance of engaging a diverse community of experts and stakeholders to ensure AI's long-term success.

Optimism and Anxiety in Nanotechnology

In the late 1990s and the early 2000s, nanotechnology experienced a notable shift from being a radical and somewhat marginal idea to achieving mainstream recognition. Governments worldwide, including the United States, substantially increased their financial commitment to what was heralded as "the next industrial revolution." Distinguished experts within government circles presented compelling arguments, as exemplified in a foundational report from the U.S. National Science and Technology Council, asserting that the ability to "manipulate matter at the atomic level" held the potential to bring about beneficial transformations in economies, environmental sustainability, and quality of life.

However, a challenge emerged. In the wake of public resistance to genetically modified crops, and drawing on experience from recombinant DNA research and the Human Genome Project, people in the nanotechnology community began to worry that, if handled poorly, nanotechnology could face a similar wave of opposition.


These concerns were well-founded. In nanotechnology's early days, nonprofit organizations such as the ETC Group and Friends of the Earth vigorously challenged claims about the technology's safety, the likelihood of minimal adverse consequences and the trustworthiness of its developers' expertise. The period saw public protests against nanotechnology and, alarmingly, a bombing campaign by environmental extremists that targeted nanotechnology researchers.

Much like the contemporary concerns surrounding AI, the emergence of nanotechnology brought forth anxieties about its impact on employment, as a new wave of skills and automation disrupted established career trajectories. Anticipating some of the present-day AI apprehensions, fears regarding existential risks also began to surface. One notable concern involved the potential of self-replicating "nanobots" converting all matter on Earth into replicas of themselves, leading to a worldwide phenomenon often referred to as "gray goo." This particular scenario was prominently featured in an article by Bill Joy, co-founder of Sun Microsystems, published in Wired magazine.

However, many of the potential hazards of nanotechnology were far from theoretical. Much as attention today is turning to the more immediate risks posed by AI, the early 2000s saw a concerted effort to address tangible challenges to the safe and responsible development of nanotechnology. These included possible health and environmental impacts, ethical and social issues, questions of regulation and governance, and a growing need for public and stakeholder engagement.

The result was a deeply complex landscape for nanotechnology development, one that promised remarkable advances yet was shadowed by uncertainty and the risk of losing public trust if anything went wrong.

How Nanotechnology Achieved Success

One of us, Andrew Maynard, played a leading role in addressing the potential risks of nanotechnology in the early 2000s, working as a researcher, co-chairing the interagency Nanotechnology Environmental and Health Implications working group and serving as chief science adviser to the Woodrow Wilson International Center for Scholars' Project on Emerging Nanotechnologies.

At the time, working toward responsible nanotechnology development felt like a relentless effort to keep pace with ever-emerging challenges in health, the environment, society and governance. For each solution we devised, a fresh problem seemed to surface.

However, by actively engaging a broad spectrum of experts and stakeholders, including many who were not well versed in nanotechnology but brought invaluable perspectives and insights, the field produced initiatives that laid the groundwork for nanotechnology to flourish. These included multi-stakeholder partnerships, consensus standards and efforts led by international bodies such as the Organization for Economic Cooperation and Development.

As a result, many of the technologies society depends on today are built on advances in nanoscale science and engineering, and some advances in artificial intelligence themselves rely on nanotechnology-derived hardware.

In the United States, much of this cooperative effort was coordinated by the cross-agency National Nanotechnology Initiative. In the early 2000s, the initiative brought together government representatives from many sectors to build a deeper understanding of nanotechnology's potential and pitfalls. It also convened a broad and diverse assembly of scholars, researchers, developers, practitioners, educators, activists, policymakers and other stakeholders, who worked together to chart strategies for ensuring that nanoscale technologies delivered social and economic benefits.

In 2003, the enactment of the 21st Century Nanotechnology Research and Development Act solidified the government's dedication to involving a diverse spectrum of stakeholders. Subsequently, a burgeoning array of federally funded initiatives, such as the Center for Nanotechnology and Society at Arizona State University (where one of our team members served on the board of visitors), reinforced the core principle of extensive engagement concerning emerging advanced technologies.

Involvement Restricted to Domain Experts

These and analogous efforts around the world played a pivotal role in ensuring the constructive and accountable development of nanotechnology. Despite similar aspirations for artificial intelligence, however, AI development to date has been far more exclusive: the White House has emphasized consultations with the CEOs of AI companies, and Senate hearings have drawn predominantly on technical specialists.

Drawing on the lessons of nanotechnology, we believe this approach is a mistake. While members of the public, policymakers and experts outside the AI domain may not grasp every intricate facet of the technology, they are often quite capable of understanding its implications. More important, they bring a diversity of expertise and perspectives that is indispensable to the successful development of an advanced technology such as AI.

That is why, in our commentary for Nature Nanotechnology, we advocate a strategy informed by nanotechnology's experience: engage early and often with experts and stakeholders, even those who lack deep knowledge of AI's technical intricacies and scientific underpinnings, because they bring knowledge and insights that are indispensable to steering the technology toward success.


Time Is Running Short

Artificial intelligence has the potential to be the most revolutionary technology in recent memory. If developed wisely, it has the capacity to bring about positive transformations in the lives of billions. However, this outcome will only materialize if society applies the insights gleaned from previous transitions driven by advanced technologies, such as the one catalyzed by nanotechnology.

As in the early stages of nanotechnology's development, addressing AI's challenges now is imperative. The opening phases of an advanced technology transition shape its trajectory for decades to come, and given the pace of AI's advance, this critical window of opportunity is closing fast.

Nor is artificial intelligence the only concern; it is just one of many transformative emerging technologies. Quantum technologies, advanced genetic manipulation, neurotechnologies and others are advancing rapidly. Unless society learns from its past experiences and navigates these coming transitions adeptly, it stands to miss out on the benefits they promise while risking more harm than good.

Written by Andrew Maynard, Arizona State University, and Sean Dudley, Arizona State University.
