
After super-powerful chatbots such as ChatGPT-4 started becoming widely available this year, school administrators around the world moved to ban the technology from classroom education. Nearly half a dozen US districts blocked access to ChatGPT and other multimodal large language models (MLLMs) on school devices and networks, and some Australian schools turned to pen-and-paper exams after students were caught using chatbots to write essays.

Teacher resistance reached its peak when ChatGPT-4 was released in March 2023. Developed by San Francisco-based OpenAI, this generative AI can write poetry and songs, and it scored in the 90th percentile on the US bar exam. MLLMs can process images as well as text, and they answer queries by looking for patterns in online data.

When asked why Seattle schools had moved to restrict ChatGPT-4 from district-owned devices, a spokesperson for the district, Tim Robinson, responded: "Generative AI makes it possible to produce non-original work, and the school district requires original work and thought from students."

However, confronted with AI's seemingly inevitable growth, many schools are now reversing course, albeit carefully. "There's still a fear that students will use large language models as shortcuts instead of practicing to become better writers," says Tamara Tate, a project scientist at the University of California, Irvine's Digital Learning Lab. She adds that if AI is here to stay then students might be better served by educational strategies that promote creative uses of the technology. "These tools can provide students with in-the-moment learning partners on a huge range of topics."

In the view of Tate and other experts, MLLMs have several positive educational roles to play, including encouraging students to evaluate answers rather than automatically accepting them. Careful thought is needed to ensure that these potential upsides are realized, however, and to mitigate any potential downsides. How might AI-assisted education unfold?

Classroom gains and losses

Proponents of the educational uses of generative AI point to several advantages. For one thing, ChatGPT-4 has an extraordinary command of proper sentence structure, which Tate says could be especially useful for non-native speakers seeking insight into how to correctly incorporate words and phrases in real-world settings.

Xiaoming Zhai, a visiting professor who studies applications for machine learning in science education at the University of Georgia in Athens, believes that teachers also stand to benefit from using models like ChatGPT as teaching aids. The models can generate personalized lesson plans and other resources geared to the needs of individual students while assisting with grading and other mundane tasks. In Zhai's view, that capability frees time so that teachers can provide students with more one-on-one feedback. By efficiently automating basic tasks like searching out relevant literature and materials and summarizing content, the models allow students and teachers alike to "focus more on creative thinking".

Creative thinking will help people get the most from MLLMs. "Large language models are like search engines: garbage in, garbage out," Tate wrote in a recent preprint paper.

Teachers can help their students develop expert prompting and search optimization strategies to generate the most helpful content. "To use the technology effectively, students need to double down on the work of revision," Tate says. "ChatGPT-4 can generate a fluent first-draft response, but not a lot of deep content. The responses can be vague and often wrong."

While researching this article, we asked ChatGPT-4 to tell us, in its own words, why it would be a helpful tool for education. Seconds later, the model provided a detailed answer in which it claimed it had access to vast amounts of knowledge and could respond instantly to questions in multiple languages at any time. But the model was also candid about its limitations, pointing out that if ChatGPT-4 doesn't understand the nuances of a particular question, then it might deliver incomplete or erroneous information that could be problematic for students who rely solely on the model for answers.

Because MLLMs may fail to support their claims with reasons or evidence, teachers have an opportunity to demonstrate the need for critical reasoning. "Students need to think about who said what and why in a given response," Tate says.

Lea Bishop, a law professor at Indiana University's Robert H. McKinney School of Law in Indianapolis, agrees that potential inaccuracies will require students to scrutinize the model's output. "You have to develop the habit of questioning everything you see," she says. "That means asking probing follow-up questions and triangulating with other sources of knowledge to see what matches up. I need you to show me that you're better than the computer."

Dealing with cheating and secrecy

Some experts worry that, for less motivated students, these sorts of models provide a tempting source of ready-made content that diminishes critical-thinking skills. The predecessors to ChatGPT-4 proved themselves capable of generating essays and responses to short-answer and multiple-choice exam questions. "We already have a lot of problems with students who feel that learning equates to searching, copying and pasting," says Paulo Blikstein, an associate professor of communications, media, and learning technologies at Columbia University in New York. "With AI, we have an even greater risk that some will take the shortest and easiest path, and incorporate those heuristics and methods as a default mode."

Teachers can try to flag AI-generated content with software packages called output detectors. But these packages have questionable reliability, and in July 2023, OpenAI discontinued its own output detector, citing concerns over low accuracy. Experts worry that models like ChatGPT-4 will increasingly put teachers into the unwanted role of having to police students who break rules on AI-generated content.

Such concerns are valid, and contributed to the initial negative responses. Blikstein says early school restrictions may be seen as a "knee-jerk reaction against something that is still very hard to understand".

And although these bans are gradually being lifted, ChatGPT is not yet in the clear: its workings remain opaque, even to the experts. Between its inputs and outputs are billions of 'black-box' computations. ChatGPT is said to be OpenAI's most secretive release yet. The company hasn't disclosed anything about how the model was trained, and proprietary systems developed by competing companies are now driving an AI 'arms race' — advancing at mind-boggling speed.

Defining core skills

Does the rise of MLLMs mean writing itself will go the way of older skills, in much the same way that basic mathematical competence was rendered nearly obsolete by calculators? Experts offer a range of opinions. Taking a bullish stance, Bishop argues that functional writing skills such as spelling, grammar, and knowledge of how to organize a standard essay "will be totally obsolete two years from now". Others see a need for caution. "Without practice writing their own content, it will be hard for students to predict where and how writing mistakes are made — and then spot them in AI-generated content," Tate says.

In Blikstein's view, this grey area underscores a need to proceed slowly. "The stakes are high with language," he says, adding that generative AI can be a powerful partner for enhancing — not replacing — a student's cognition. But important questions remain. "For instance, we don't have a good model for authorship in the area of AI-generated content," he says. "The text appears out of the ether, and we have no idea where it came from." For accomplished professionals, using AI to boost writing skills may not pose much of a problem. "But that's not true for younger people who don't understand the craft of writing to begin with," he adds.

Blikstein also worries that AI might perpetuate educational inequities. Wealthier school districts have resources to apply the technology with an emphasis on human interaction and project-based learning, while poorer schools might move increasingly towards automation to save money. "If you settle for something cheap, it can take over your whole system," he says. "Then five years later, it's the new normal."

Ultimately, AI could offer an evolution in educational norms that sends educators back to basics. "We have to identify the core competencies that we want our students to have," says Zhai. "How are we going to incorporate models like ChatGPT into the learning process? We are preparing future citizens, and if AI will be available, then we need to think about how we build competence in education so that students can be successful."


Mobile robots can dance around a stage, perform graceful acrobatics and even lift heavy objects. But if you watched them strut their stuff for an hour or two, you would see the robots grind to a halt. Like humans, mobile robots eventually exhaust the energy that they carry, and need a recharge.

This problem is specific to mobile robots. Robots anchored to a factory floor can do heavy work all day and all night because they can draw inexhaustible energy from the electric grid. Mobility gives robots more flexibility, but at the cost of needing to recharge their energy sources — in most cases, some form of battery.

The compact nature of smartphones can fool us into thinking batteries are featherweight objects. That illusion arises because modern electronics need only a trickle of energy to send signals or process data. But transporting robots or people around, or lifting a heavy load, takes much more energy. If you pick up a cordless tool, you will feel that it outweighs a corded one. An electric car that can travel for five hours (around 500 kilometres, the distance from Paris to Amsterdam) at motorway speed between recharges needs batteries that account for one-third or more of the total vehicle weight.

Mobile robots on legs, however, can't tolerate such massive batteries. Boston Dynamics, a robotics company in Waltham, Massachusetts, sells a four-legged, dog-sized robot called Spot that weighs about 32 kg — one-eighth of which is batteries. But the company states that Spot has a typical run-time of only 90 minutes. Humanoid robots that have been developed to walk with heavy loads have the same limitations. Atlas, the company's 1.5-metre, 89-kilogram humanoid demonstrator with two arms and two legs, can do gymnastics and lift heavy objects. But the company does not say how long it can run before it needs a recharge. For mobile robots to be more capable workers, their batteries will need greater energy density — that is, they will need to pack more watt-hours of energy into fewer kilograms of mass. "Energy density is still quite far from the power we need for robotics," says Ravinder Dahiya, an electrical engineer specializing in robotics at Northeastern University in Boston, Massachusetts.
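The figures quoted for Spot are enough for a rough feel of the problem. The sketch below converts them into an implied average power draw; the 250 Wh/kg pack-level energy density is an illustrative assumption, not a figure from Boston Dynamics.

```python
# Rough estimate of Spot's average power draw from the figures in the text:
# a 32 kg robot, one-eighth battery by mass, 90-minute run-time.
# The 250 Wh/kg pack-level energy density is an assumed, illustrative value.

def average_power_watts(robot_mass_kg, battery_fraction, wh_per_kg, runtime_h):
    """Average power = stored energy / run-time."""
    battery_mass_kg = robot_mass_kg * battery_fraction   # 4 kg of battery
    stored_energy_wh = battery_mass_kg * wh_per_kg       # 1000 Wh under the assumption
    return stored_energy_wh / runtime_h

power = average_power_watts(32, 1 / 8, 250, 1.5)
print(f"{power:.0f} W")  # about 667 W of continuous draw
```

Under these assumptions the robot burns through energy at roughly the rate of a household microwave oven, which makes the 90-minute limit unsurprising.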

How serious the energy-density problem is depends on the robot's size and structure, its function and how much energy it needs. Robots that walk can navigate stairs, the interiors of buildings and rough terrain better than wheeled robots can — but they can't carry as large a battery pack. Sustained flight requires even more energy, making battery weight a serious limit for anything much bigger than insect size.

The limits of lithium

Batteries have come a long way since the Italian physicist Alessandro Volta invented the earliest version of this technology in 1800. Today, the state-of-the-art power source is the lithium-ion battery, invented in the 1970s by chemist Stanley Whittingham, and now widely used in phones, laptops, tools and electric vehicles. The technology earned Whittingham, now at the State University of New York at Binghamton, a share of the 2019 Nobel Prize in Chemistry.

Batteries don't generate energy; they store energy produced by chemical reactions that yield positive ions and electrons. In a charged cell, those reactions are held in check until the battery's two ends — the anode and the cathode — are connected by a conductor, completing the circuit. Electrons then flow as a current from the anode, deliver electrical power to an externally connected load — such as an electric motor — and return to the cathode, while the positive ions travel through the battery's interior to meet them. When the chemicals are used up, the battery must either be replaced or recharged by passing a current through it in the opposite direction, to reverse the reaction.

A worker checks rows of rectangular lithium batteries at a factory that produces them for Xinwangda Electric Vehicle Battery Company in Nanjing, China. Credit: AFP/Getty Images

A major advantage of batteries is that they deliver energy directly as electricity, whereas fossil fuels have to be burned to generate heat that drives an electrical generator. This avoids carbon emissions on the spot, although total emissions depend on how the original energy was produced. However, robots must carry the energy they use, and batteries weigh more and occupy more space than fossil fuels holding the same amount of energy. An electric car, for example, needs a battery pack much larger and heavier than a fuel tank.

When lithium-ion batteries reached the market in 1991, they provided 80 watt-hours of electrical energy per kilogram of battery weight1 — the best of any battery then available. A one-kilogram battery could power a (then standard) 60-watt incandescent bulb for one hour and 20 minutes. Now, typical commercial lithium-ion batteries carry three times more energy per kilogram. But even such energy-packed batteries are too hefty for a walking robot to lug around.
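The run-time quoted above follows directly from dividing stored energy by load power:

```python
# Check the 1991 figure: a 1 kg battery holding 80 Wh of energy,
# driving a then-standard 60 W incandescent bulb.
energy_wh = 80.0   # energy in 1 kg of early lithium-ion cells
load_w = 60.0      # bulb power

runtime_minutes = energy_wh / load_w * 60
print(runtime_minutes)  # 80.0 -> one hour and 20 minutes
```

Tripling the energy density, as modern cells have done, triples the run-time of the same one-kilogram pack.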

Chemistry quest

Lithium-ion batteries are running out of steam. The chemistry has "less and less room for improvement", says Richard Schmuch, a chemist at the Fraunhofer Research Institution for Battery Cell Production in Münster, Germany. Lithium itself is rare and expensive. The same is true for cobalt, another crucial element, which can make up as much as 20% of the weight of the cathode in lithium-ion batteries for electric vehicles. Extracting both elements requires large amounts of energy and water. Moreover, the mining of cobalt has been linked to the exploitation of workers.

Another concern is optimizing batteries to meet the needs of robotics. "The lithium-ion battery is quite versatile," says Schmuch. "You can adjust it for different types of operating condition," from smartphones to cars to robots. Yet it can't do everything cost-effectively and well. He expects new types of battery will be needed to serve the emerging demands of robotics as well as other applications.

After more than 30 years of development, lithium-ion batteries are considered to be a mature technology. Still, efforts continue to improve these complex electrochemical systems. They are assembled in units called cells, which are packaged together to provide a desired electrical output. Each cell contains — in addition to the anode and cathode — an electrolyte through which ions can move, a separator to prevent short circuits and electrical terminals that connect to other cells in the packaged battery. Extensive research has gone into the composition of each part to achieve high energy density, charging and discharging rates, reliability and longevity. Among the important successes of this technology are batteries that can be recharged as many as 6,000 times.

One attempt to enhance the performance of lithium entails making the cathode from nanostructured sulfur–graphite composites rather than from the standard metal oxides. Such lithium–sulfur batteries offer the potential of lower costs and higher energy density. These batteries have yet to be commercialized successfully, however; their use might be limited to specialized applications, such as aviation, for which minimizing battery weight is crucial to get off the ground. Battery features could be tailored by adjusting design details, such as the type of nanostructure used and how the ions and electrons flow through the battery. But what many developers want is new battery chemistries designed to meet a variety of needs (see 'Packing in the power').

[Figure: bar chart comparing the energy density of six types of battery. Source: adapted from Fig. 1 of ref. 1]

That might entail stepping back from lithium's biggest attraction: it is the lightest of all metals, with an atomic weight of seven. Yet, although lithium is essential to the battery's energy storage and release, other materials make up most of the battery's mass. Including the packaging, only about 1% of the weight of a lithium-ion battery is lithium (most of it in the cathode). The cathode also contains larger amounts of four other metals: cobalt, nickel, aluminium and manganese. Several problems with lithium and cobalt have led to serious interest in sodium-ion batteries.

Like lithium, sodium is an alkali metal, and the chemistry of the two is so similar that researchers have pursued sodium-ion batteries as a way around the problems with lithium. One important advantage of the sodium-ion design is the ready availability of sodium in seawater and salt deposits, which avoids the supply-chain problems arising from the cost and scarcity of lithium. Sodium-ion batteries are a bit heavier per kilowatt-hour of energy — sodium's atomic weight is 23, more than triple that of lithium. Still, lower material costs are expected to make sodium-ion batteries significantly cheaper. An even bigger benefit of switching to sodium would come from reducing or eliminating the need for cobalt in the cathode, something that has already been demonstrated in several prototype designs.

"Sodium ions are definitely gaining traction," says Schmuch, citing development efforts in Germany, where Fraunhofer is working with industry, and in China, where Contemporary Amperex Technology (CATL) in Ningde — the world's leading manufacturer of lithium-ion batteries for electric vehicles — rolled out the first generation of its sodium-ion battery in 2021. This April, Chery Automobile in Wuhu, China, announced plans to install CATL sodium-ion batteries in its cars. Also in April, CATL said it had developed a new electric-vehicle battery with an energy density of 500 watt-hours per kilogram. This battery employs a different technology, which CATL has not identified.

Solid-state solutions

Another way to change battery chemistry is to change the state of the electrolytes, replacing the conductive liquids used in lithium-ion batteries with conductive solids. Advocates think such solid-state batteries offer the best prospects for preventing the potentially deadly fires seen with lithium batteries, as well as for improving energy density and reducing costs.

The fire hazard comes from filamentary deposits of metallic lithium called dendrites, which grow in the battery's electrolyte. Lithium ions in the electrolyte are deposited as metal filaments that spread like plant roots. The metallic lithium is conductive, and as the dendrites spread they can short-circuit the battery and ignite fires. Sodium is much less prone to dendrite formation, and developers think that this quality makes sodium-ion batteries significantly safer than lithium-ion batteries. Sodium-ion batteries would also cost less and, in the long term, could potentially offer higher energy density.

A wide range of solid-state batteries are in development. Lithium is still a popular material because of its light weight, high energy density and rechargeability. But some researchers are exploring other metals with the hope of avoiding the known problems of lithium.

Mohammad Asadi (left) with his colleague Andrés Ruiz Belmonte, testing lithium-air batteries in their laboratory at the Illinois Institute of Technology in Chicago. Credit: Courtesy of Illinois Institute of Technology

Switching one type of lithium battery in particular from liquid to solid-state electrolytes has led to a big advance in efficiency. The beneficiary is the lithium-air battery, which produces power from the oxidation of lithium atoms by oxygen from the air. As with sodium-ion batteries, energy density was not the main goal for many people working on solid-state batteries. "The reason for developing a solid-state electrolyte was to make the lithium-air battery safer and to make recharging cycles more stable," says Mohammad Asadi, a chemical engineer at the Illinois Institute of Technology in Chicago.

However, when Asadi and his colleagues from Argonne National Laboratory in Lemont, Illinois, built an experimental solid-state lithium-air battery2, they were surprised to discover that the technology brought significant benefits in energy density as well. The device transferred four electrons per reaction, rather than the one or two that lithium-air batteries normally manage. As a result, Asadi says, the solid electrolyte "helps us store three to four times more energy per unit weight" than is possible with conventional lithium-ion batteries.

In fact, their new solid electrolyte changed the chemistry between oxygen and lithium. In standard lithium-air batteries, oxygen molecules from the air react with lithium atoms to produce one of two compounds. The reaction between one lithium atom and one oxygen molecule produces lithium superoxide (LiO2), which yields one electron. The reaction between two lithium atoms and one oxygen molecule produces lithium peroxide (Li2O2), which yields two electrons.

Asadi's team made its solid electrolyte by combining nanoparticles containing lithium, germanium, phosphorus and sulfur (Li10GeP2S12) with a polymer. In this structure, four lithium atoms can combine with one oxygen molecule to yield two units of lithium oxide (Li2O) and four electrons. That's hard to do because it requires splitting the oxygen molecule (O2) into single oxygen atoms. The test cell, only the size of a coin, was a proof of concept. According to Asadi, this prototype shows that it will be possible to attain a specific energy of one kilowatt-hour per kilogram — higher than is possible with today's lithium-ion technology.
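One way to see why the four-electron pathway pays off is to compare the theoretical specific capacity, n·F/M, of the three discharge products. The calculation below uses standard atomic masses and the Faraday constant; it is a back-of-envelope illustration, not a set of figures from the paper.

```python
# Theoretical specific capacity of a discharge product: n electrons per
# formula unit, times the Faraday constant, divided by molar mass.
# Atomic masses (Li 6.94, O 16.00) are standard values; this comparison
# is an illustration, not data from ref. 2.

F_MAH_PER_MOL = 96485 / 3.6  # Faraday constant, converted from C/mol to mAh/mol

def specific_capacity(n_electrons, molar_mass_g):
    """Theoretical capacity in mAh per gram of discharge product."""
    return n_electrons * F_MAH_PER_MOL / molar_mass_g

# LiO2: 1 electron; Li2O2: 2 electrons; Li2O: 2 electrons per formula unit
# (the four electrons per O2 molecule come from forming two units of Li2O).
for name, n, mass in [("LiO2", 1, 6.94 + 32.00),
                      ("Li2O2", 2, 2 * 6.94 + 32.00),
                      ("Li2O", 2, 2 * 6.94 + 16.00)]:
    print(f"{name}: {specific_capacity(n, mass):.0f} mAh/g")
```

Per gram of product, the Li2O route stores roughly two and a half times as much charge as the one-electron LiO2 route, consistent in spirit with the three-to-four-fold energy gain Asadi describes once cell-level factors are included.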

Power-studded structures

Increasing the energy density of batteries makes it possible for these power packs to weigh less, which in turn would allow mobile robots of a given size to do more work. But battery design for such robots involves much more than chemistry. One way to lighten the load would be to have smaller batteries serve as structural elements of a robot — not just storing energy but also becoming parts of its torso and legs to help it walk and balance. The idea comes from biology. Our bones are not just structural support — they also contain bone marrow that produces blood cells. "Multifunctionality is critical" when you're building robots that move around, says Nicholas Kotov, a chemical engineer at the University of Michigan in Ann Arbor.

"Robots are biomimetic, and the smaller the robot, the more biological concepts would need to be there," Kotov says. He and other roboticists call two-legged robots humanoid because they walk upright and have similar body mechanics to people. "We want to keep robots light and consuming as small an amount of energy as possible," Kotov says. "If a battery just sits there and does nothing else [but provide power], it is not enough."

Kotov's group is particularly interested in drones, in which, he says, "every gram counts, and if the battery can serve multiple functions, we can have more functional space". His team is now working on structural batteries for military drones, although not much of that work has been disclosed yet. Military laboratories have also worked on humanoid robots for missions such as working inside radiation zones and checking for insurgents hiding inside buildings in combat zones.

Some materials used for battery energy storage are particularly well-suited for also being structural elements. For example, Kotov says, "zinc is a very good case for structural batteries". It is inexpensive, stores energy well and the metal is stable in air. His lab demonstrated a biomorphic zinc battery that could store 72 times more energy than a lithium battery of the same volume3. However, trade-offs are inevitable. Zinc batteries have limited rechargeability, so they would be best kept stashed away for infrequent use.

Another promising multifunctional material is aluminium, which, last year, showed rapid rechargeability over hundreds of cycles at temperatures up to just above the boiling point of water — and without forming aluminium dendrites4. The researchers project a cost of less than one-sixth that of lithium-ion batteries with a similar energy capacity.

Kotov is also developing aramid fibres to provide structural strength for battery casings and internal battery structures3. These fibres have a fortunate combination of features including strength, flexibility and hardness that makes them useful for protective shielding. One particularly helpful attribute is their ability to block dendrites from growing between the electrodes. Moreover, aramid offers an environmental advantage — it can be made from recycled Kevlar, a strong, lightweight material, and when the batteries are worn out, the fibres can be recycled for further uses.

Energy beyond batteries

By 2030, Dahiya expects the development of energy sources for mobile robotics to broaden well beyond batteries. Some of these concepts have their roots in biology.

One example is equipping robots that operate at remote sites with energy harvesters, which collect energy from the local environment to top up their stored energy5. Robots can collect energy in the form of radio waves or sunlight, or from a thermal gradient. Energy harvesting is not as efficient as heat pumps or wireless chargers, but it can operate in any suitable environment without special charging equipment. And there have been demonstrations of tiny bacteria-driven microbial fuel cells, or 'biobatteries', that could harvest material from the local biota to provide supplemental power6.

A paper-thin wearable biobattery that draws power from the metabolism of bacteria found in sweat. Credit: Seokheun Choi/SUNY, Binghamton

Another concept borrowed from biology is distributing energy in various ways through the robot's body rather than concentrating it in a single battery backpack, as used on some experimental humanoid robots5. Humans have three types of energy storage: triglycerides in fat cells, glycogen clusters around muscles and ATP that's produced by mitochondria. Those systems evolved to serve different energy needs. "Humans and animals require fast energy and slow energy," says Kotov. They need the fast energy to sprint, as well as slow energy to walk for many kilometres.

Robots and drones likewise have different needs at different times. A humanoid robot needs fast energy to lift a heavy load or run up stairs, and slower energy to patrol a field or a car park. Batteries are fine for a steady walk or jog, but not for a sprint. This gap has led to an interest in equipping robots with a different type of device — a supercapacitor — that delivers electrical energy much faster. Instead of using chemistry to store energy, a supercapacitor stores an electrical charge that it collects over a period of time from an electrical circuit. When a burst of energy is needed, the system discharges the stored electrons extremely quickly. Supercapacitors are used in regenerative braking systems in vehicles and can withstand many more charge–discharge cycles than can batteries7. In the future, they could give mobile robots a quick start — or a quick stop.
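The trade-off can be made concrete with the standard capacitor energy formula, E = ½CV². The 100-farad, 2.7-volt cell and the discharge times below are hypothetical values chosen for illustration, not any specific product.

```python
# A supercapacitor stores energy as charge rather than chemistry:
# E = 1/2 * C * V^2. The 100 F, 2.7 V cell and the discharge times are
# hypothetical, illustrative values.

def stored_energy_j(capacitance_f, voltage_v):
    """Energy stored in a capacitor, in joules."""
    return 0.5 * capacitance_f * voltage_v ** 2

energy_j = stored_energy_j(100, 2.7)   # 364.5 J, roughly 0.1 Wh
burst_power_w = energy_j / 5           # released over 5 seconds: ~73 W
trickle_power_w = energy_j / 3600      # released over an hour: ~0.1 W
print(f"{energy_j:.1f} J, {burst_power_w:.1f} W burst, {trickle_power_w:.2f} W trickle")
```

The same modest store of energy yields wildly different power depending on how fast it is released — which is exactly the sense in which supercapacitors trade capacity for speed.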

The lithium-ion generation of batteries put smartphones in our pockets and electric cars on our roads. Researchers are now in the early stages of developing a generation of portable energy sources that will be lighter, more efficient and potentially cheaper.

Yet costs might still be troublesome, cautions materials chemist Donald Sadoway at the Massachusetts Institute of Technology in Cambridge. Sadoway, who focuses on energy-storage technologies, sees little interest in "new battery chemistries whose price-to-performance ratio is less favourable than that of today's lithium-ion". It is unclear, he says, whether "the commercial opportunity is large enough to attract the investment needed for the requisite research and development to invent the new technology and to bring it to market".

The prospects look bright to materials scientist Shirley Meng, who is chief scientist at the Argonne Collaborative Center for Energy Storage Science at Argonne National Laboratory. She says using oxygen from the air as a cathode is "the ultimate dream of battery scientists" because it offers high energy with light weight. "Good progress has been made" on the lithium-air battery, on which Argonne collaborated, but she says "we still face a lot of challenges in understanding and overcoming the limiting factors in enabling air cathodes".

Meng predicts that the sodium battery, free of elements such as lithium, nickel and cobalt, "will find its niche to shine" because of its high ratio of performance to cost. Solid-state batteries "offer the possibility of achieving the highest volumetric energy density in robotic applications [in which] space is limited", she says. They also offer unique flexibility in packaging and can operate at extreme temperatures, which is important for some special-purpose robots. Meng is optimistic that developers "will offer a wide variety of battery solutions to different types of robot application", with the potential "to unlock applications that were previously not possible".

doi: https://doi.org/10.1038/d41586-023-02170-y

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with the financial support of third parties.

References

  1. Duffner, F., Kronemeyer, N., Tübke, J., Leker, J., Winter, M. & Schmuch, R. Nature Energy 6, 123–134 (2021).
  2. Kondori, A. et al. Science 379, 499–505 (2023).
  3. Wang, M. et al. Sci. Robot. 5, eaba1912 (2020).
  4. Pang, Q. et al. Nature 608, 704–711 (2022).
  5. Mukherjee, R., Ganguly, P. & Dahiya, R. Adv. Intell. Syst. 5, 2100036 (2023).
  6. Gao, Y., Mohammadifar, M. & Choi, S. Adv. Mater. Technol. 4, 1900079 (2019).
  7. Partridge, J. & Ibrahim Abouelamaimen, D. Energies 12, 2683 (2019).

This article was authored by Liam Drew and originally published in Nature.

Markus Möllmann-Bohle's left cheek hides a secret that has changed his life. Under the skin, nestled among the nerve fibres that allow him to feel and move his face, are a miniature radio receiver and six tiny electrodes. "I'm a cyborg," he says, with a chuckle.

This electronic device lies dormant much of the time. But, when Mรถllmann-Bohle feels pressure starting to gather around his left eye, he retrieves a black plastic wand about the size of a mobile phone, pushes a button and fixes it against his face in a home-made sling. The remote vibrates for a moment, then launches high-frequency radio waves into his cheek.

In response, the implant fires a sequence of electrical pulses into a bundle of nerve cells called the sphenopalatine ganglion. By disrupting these neurons, the device spares 57-year-old Mรถllmann-Bohle the worst of the agonizing cluster headaches that have plagued him for decades. He uses the implant several times a day. โ€œI need this device to live a good life,โ€ he says.

Cluster headaches are rare, but extraordinarily painful. People are typically affected for life and treatment options are very limited. Mรถllmann-Bohle experienced his first in 1987 at the age of 22. For decades, he managed sporadic headaches with a mix of painkillers and migraine medication. But in 2006, his condition became chronic, and he would be struck with as many as eight hour-long cluster headaches every day. โ€œI was forced to succumb to the pain again and again,โ€ he says. โ€œI was kept from living my life.โ€


Möllmann-Bohle, ever more reliant on painkillers and now also taking antidepressants, was hospitalized numerous times. During one of these stays, however, he heard about an electronic implant that some people had started using to control their cluster headaches.

Developed by the start-up Autonomic Technologies (known as ATI) in San Francisco, California, the device had passed a series of placebo-controlled clinical trials with flying colours. “It worked remarkably well,” says Arne May, a neurologist at the University of Hamburg in Germany who led some of those trials on behalf of the start-up. In most people, stimulation reduced the pain of an attack, made attacks less frequent, or both1. Side effects were rare. In February 2012, while US trials continued, the device received the CE mark that allowed the company to market it across Europe.

Möllmann-Bohle contacted May, and travelled from his home near Düsseldorf, Germany, to meet him. Filled with hope that the device might alleviate his suffering, Möllmann-Bohle underwent surgery to have it fitted in 2013.

“Once the stimulator was working well, it felt like a rebirth” — Markus Möllmann-Bohle

The implant was a revelation. After the pattern and strength of the stimulation had been tailored to Möllmann-Bohle’s needs, around an hour’s use five or six times a day was enough to prevent attacks from becoming debilitating. “I was reborn,” he says.

But by the end of 2019, ATI had collapsed. The company’s closure left Möllmann-Bohle and more than 700 other people alone with a complex implanted medical device. People using the stimulator, and their physicians, could no longer access the proprietary software needed to recalibrate the device and maintain its effectiveness. Möllmann-Bohle and his fellow users now faced the prospect of the battery in the hand-held remote wearing out, robbing them of the relief that they had found. “I was left standing in the rain,” Möllmann-Bohle says.

Cochlear implants that give a sense of hearing to the user are an established form of neurotechnology. 
Credit: Zephyr/Science Photo Library

A systemic problem

Hundreds of thousands of people benefit from implanted neurotechnology every day. Among the most common devices are spinal-cord stimulators, first commercialized in 1968, which help to ease chronic pain. Cochlear implants that provide a sense of hearing, and deep-brain stimulation (DBS) systems that quell the debilitating tremor of Parkinson’s disease, are also established therapies.

Encouraged by these successes, and buoyed by advances in computing and engineering, researchers are trying to develop ever more sophisticated devices for numerous other neurological and psychiatric conditions. Rather than simply stimulating the brain, spinal cord or peripheral nerves, some devices now monitor and respond to neural activity.

For example, in 2013, the US Food and Drug Administration approved a closed-loop system for people with epilepsy. The device detects signs of neural activity that could indicate a seizure and stimulates the brain to suppress it. Some researchers are aiming to treat depression by creating analogous devices that can track signals related to mood. And systems that allow people who have quadriplegia to control computers and prosthetic limbs using only their thoughts are also in development and attracting substantial funding.

The market for neurotechnology is predicted to expand by around 75% by 2026, to US$17.1 billion. But as commercial investment grows, so too do the instances of neurotechnology companies giving up on products or going out of business, abandoning the people who have come to depend on their device.

Electrodes implanted in the brain can help to control the tremors associated with Parkinson’s disease. Credit: Zephyr/Science Photo Library

Retinal implants by the company Second Sight were fitted in hundreds of people before the firm ended support for the product. Credit: Ringo Chiu/ZUMA Press/Alamy

Shortly after the demise of ATI, a company called Nuvectra, based in Plano, Texas, filed for bankruptcy in 2019. Its device — a new kind of spinal-cord stimulator for chronic pain — had been implanted in at least 3,000 people. In 2020, artificial-vision company Second Sight, in Sylmar, California, laid off most of its workforce, ending support for the 350 or so people who were using its much-heralded retinal implant to see. And in June, another manufacturer of spinal-cord stimulators — Stimwave, in Pompano Beach, Florida — filed for bankruptcy. The firm has been bought by a credit-management company and is now embroiled in a legal battle with its former chief executive. Thousands of people with the stimulator, and their physicians, are looking on in the hope that the company will continue to operate.

When the makers of implanted devices go under, the implants themselves are typically left in place โ€” surgery to remove them is often too expensive or risky, or simply deemed unnecessary. But without ongoing technical support from the manufacturer, it is only a matter of time before the programming needs to be adjusted or a snagged wire or depleted battery renders the implant unusable.

People are then left searching for another way to manage their condition, but with the added difficulty of a non-functional implant that can be an obstacle both to medical imaging and to future implants. For some people, including Möllmann-Bohle, no clear alternative exists.

“It’s a systemic problem,” says Jennifer French, executive director of Neurotech Network, a patient advocacy and support organization in St. Petersburg, Florida. “It goes all the way back to clinical trials, and I don’t think it’s received enough attention.”

As money pours into the neurotechnology sector, implant recipients, physicians, biomedical engineers and medical ethicists are all calling for action to protect people with neural implants. “Unfortunately, with that kind of investment come failures,” says Gabriel Lázaro-Muñoz, an ethicist specializing in neurotechnology at Harvard Medical School in Boston, Massachusetts. “We need to figure out a way to minimize the harms that patients will endure because of these failures.”

Markus Möllmann-Bohle has replaced the battery in the hand-held portion of his device several times. Credit: Nyani Quarmyne/Panos Pictures for Nature

Left to their own devices

When Möllmann-Bohle had the ATI-made neurostimulator implanted to help with his cluster headaches, he agreed to participate in a five-year post-approval trial aimed at refining the device. He diligently provided ATI with data from his device and answered questionnaires about his progress. Every few months, he made an 800-kilometre round trip to Hamburg to be assessed.

But four years in, the company running the trial on behalf of ATI called Möllmann-Bohle to tell him it was over. Rumours spread that the firm was in trouble, before a letter from May confirmed his fears — ATI had gone out of business.

Timothy White, another recipient of the company’s stimulator who took part in the post-approval trial, also heard of ATI’s closure second-hand.

Now head of clinical affairs for a medical-device company based near Frankfurt, White credits the device with allowing him to complete his medical training. Indeed, ATI had seized on the eloquent medical student’s enthusiasm for its technology and asked him to speak at conferences and to investors.

Yet even White heard about the company’s collapse only when he contacted May with concerns that his remote control might be under-performing.

“That was really rough for me,” says White. “I was asking myself, what’s going to happen if I lose my remote control, if it breaks down, or the battery dies. But no one really had answers.”


When an implant manufacturer disappears, what happens to the people using its devices varies hugely.

In some cases, there will be alternatives available. When Nuvectra folded, for example, users of its spinal-cord stimulator who feared a resurgence of their chronic pain could turn to similar devices offered by more established companies.

Even this best-case scenario puts considerable strain on the people using the implants, many of whom are already vulnerable, says anaesthesiologist Anjum Bux. He estimates that around 70 people received the Nuvectra device at his pain-management clinics in Kentucky.

Replacing obsolete implants of this kind requires surgery that would otherwise have been unnecessary and takes weeks to recover from. And at around US$40,000 for the surgery and replacement device, it’s also costly — although Bux says that, in his experience, insurance providers have picked up the tab.

A greater challenge arises when no ready replacement is available. The ATI-made stimulator that Möllmann-Bohle and White have was the first of its kind. When the manufacturer closed its doors, there was no other implant on the market that they could use to manage their cluster headaches.

Left to fend for themselves, White and Möllmann-Bohle each leant on their own professional expertise. White drew on his medical training and found a drug, developed for treating migraines, that suppresses his headaches. But he must take triple the recommended dose, and he worries about potential long-term side effects.

Möllmann-Bohle, meanwhile, turned to skills he had developed as an electrical engineer. In the past three years, he has repaired a faulty charging port on the hand-held portion of his device and replaced its inbuilt battery several times. This battery was never intended to be accessible to the user, and it turned out to be unusual. Möllmann-Bohle scoured the Internet and eventually found suitable replacements made by a firm in the United States. When he returned for more, however, he learnt that the company had stopped making them. His most recent replacement came from a Chinese company that custom-made what he needed.

His tinkering brought him into conflict with his insurers, who initially advised him not to tamper with the device, but eventually agreed to foot the bill for the replacement parts after he convinced them that he was suitably qualified. “They put really big obstacles in my way, or at least they tried to,” Möllmann-Bohle says. But although his repairs have been successful so far, he knows that he does not have the tools or skills to fix everything that could go wrong.

Markus Möllmann-Bohle has relied on his engineering expertise to keep his device functioning. Credit: Nyani Quarmyne/Panos Pictures for Nature
“How would I be able to manage in the future without the company?” — Markus Möllmann-Bohle

Although maintaining the device has been tough, Möllmann-Bohle cannot see an alternative. “There is still no medication reliable enough to help me live a pain-free life without the device,” he says.

He and White are now placing much of their hope in the potential revival of ATI’s stimulator technology. In late 2020, a company now called Realeve, based in Effingham, Illinois, announced that it had acquired the patents for the device. The new company intends to market an essentially identical successor device in both the United States and Europe. In April 2021, Realeve attained FDA breakthrough status, which is intended to speed up access to medical devices in the United States.

Möllmann-Bohle and White both approached Realeve earlier this year, and corresponded directly with then-chief executive Jon Snyder to ask for assistance with their implants. So far, they have received none. In an e-mail to Nature in July, Snyder said: “Since we do not have FDA or CE mark approval yet, we are unable to market the therapy and provide support. However, we have investigated the options of providing support via compassionate use approvals in various markets.”

Möllmann-Bohle desperately wants this support to materialize. “He [Snyder] assured me that he and his staff are working on providing replacement parts,” he says. There have been changes at Realeve in recent months, with Snyder departing and a consulting firm taking temporary control of the business. But interim chief executive Peter Donato says that the company has now gained approval in Denmark to distribute replacement devices and software to existing users. He hopes that deliveries can begin in the latter half of 2023, and says that the company is also in talks with three other European countries. For Möllmann-Bohle and others in Germany, the wait goes on. “This new start has been in the making for years now,” he says.

“This new start has been in the making for years now. I’m hopeful, but I’m also a realist.”

Jennifer French, an advocate for people who receive implanted neurotechnology, says the issue of abandoned implants has not received enough attention. Credit: TEDxCLE

A commitment to care

Examples of makers supporting implanted neurotechnology when profits fail to materialize are few and far between. French can therefore consider herself one of the lucky ones.

As well as being a prominent advocate for neurotechnology, she has been using an implanted device to help her move for more than 20 years — even though the life-changing technology never became the foundation of a viable business.

In 1999, two years after a snowboarding accident left her unable to move her legs, French enrolled in a clinical trial of an electrical implant system designed by Ronald Triolo, a biomedical engineer at Case Western Reserve University in Cleveland, Ohio.

Over seven and a half hours, surgeons placed 16 electrodes in her body, each of which could stimulate a nerve that runs to her leg muscles. These electrodes were connected to an implanted pulse generator, which is wirelessly powered and controlled by an external unit.

Initially, the implant allowed French to stand and move herself between her wheelchair and a bed or a car. Over time, more electrodes and controllers have been added. Now she can stand and step, and pedal a stationary bike. “I use it on a daily basis for exercise, for standing, for function,” she says.

Jennifer French was given an implanted device that helps her to stand as part of a clinical trial. Credit: Advanced Platform Technology Center
โ€Within that instant, what you knew before then has completely changedโ€ โ€” Jennifer French
Hunter Peckham designed a device to restore hand and arm movement. Credit: Robert Pearce/Fairfax Media/Getty

Although the device was not commercially available at the time French joined the trial, Triolo expected it wouldn’t be long before it was — a similar system developed at Case Western for restoring functional hand and arm movement, known as Freehand, had been brought to market by a local start-up in 1997.

But this did not come to pass. Despite the difference it has made to French’s life, the device she uses has never been commercialized. The company that had acquired the rights to the Freehand system shuttered in 2001, and no other company picked up the device. Freehand’s developer, biomedical engineer Hunter Peckham, also at Case Western, attributes the start-up’s failure to impatient investors. “The uptake was not as fast as they would have liked,” he says.

Around 350 people with Freehand devices, as well as French and her fellow participants in Triolo’s lower-body implant trial, could have lost access to the technology that had become an integral part of their lives. But Peckham and Triolo refused to let this happen.

“We understood that if there was something that they were benefiting from, if you took that away, that would be another loss for them — when they had had such a devastating loss before,” Peckham says.

Using old and dwindling stocks of components — including items that the university had acquired after the demise of the Freehand manufacturer — and tapping into money from academic grants, the researchers continue to support as many people with these devices as they can.

Over two decades, the Freehand devices have been repaired as they gradually failed, and funding for a succession of further fixed-term clinical trials has allowed Triolo to continue to support French and her fellow research participants. He has even been able to offer them upgrades over time. French’s system has failed four times, leaving her unable to stand and acutely aware of her reliance on the technology. Every time, the Case Western team has provided the surgery and parts required to restore her movement.

“Someone is dedicating their body to our research. We have an obligation to maintain their systems for as long as they want to use them.”

“We’ve invested in her, and she continues to invest her time and effort in advancing our science” — Ronald Triolo

French knows her situation is precarious and that it rests on Triolo continuing to attract funding. “I live every day with the fact that this technology might go away,” she says. But she takes heart in what she sees as the researchers’ unwavering commitment to her.

“Our world view,” Triolo says, “is someone is dedicating their body to help advance our research, and we have an obligation to them to maintain their systems for as long as they want to use them.”

Protection from failure

Konstantin Slavin is a neurosurgeon at the University of Illinois College of Medicine in Chicago who contributed to clinical trials of ATI’s cluster-headache device and has implanted the spinal-cord stimulator made by Nuvectra. He thinks that anyone given an implanted device as part of routine clinical care should be able to count on ongoing support. “You expect them to receive essentially lifelong care from the device manufacturer,” he says.

He is not alone in this view — every device user, physician and engineer interviewed by Nature thinks that people need to be better protected from the failure of device makers.


“Long-term support on the commercial side would be a competitive advantage” — Ronald Triolo

One proposal is that neurotechnology companies should ensure that money is available to support the people using their devices in the event of the company’s closure. How this would best be achieved is uncertain. Suggestions include the company setting up a partner non-profit organization to manage funds set aside for this eventuality; putting money in an escrow account; being obliged to take out an insurance policy that would support users; paying into a government-supported safety net; or ensuring that the people using the devices are high-priority creditors during bankruptcy proceedings.

Currently, there is little sign that device makers are taking this kind of action. Asked in July whether Realeve had plans in place to protect people should its business go the same way as ATI’s, Snyder, then chief executive, replied: “There is always the risk that a company may stop operating, but our focus is to be successful in our effort to deliver the Realeve Pulsante therapy to patients”.

Realeve’s interim chief executive, Donato, thinks that it will take legislation to convince companies’ investors or shareholders to take on the expense of a safety net. “Unless, and until, the governments force it on us,” he says, “I’m not sure companies will do it on their own.” But Triolo is optimistic that manufacturers might think differently if the jeopardy faced by device users becomes more widely known, and physicians and prospective patients start to favour companies that do have a safety net in place. “If that is what it takes to have a competitive advantage, maybe that’ll be enlightening for our friends on the commercial side of things,” Triolo says.

Indeed, the failures of various neurotechnology start-ups over the past few years are already causing the surgeons responsible for implanting the devices to be cautious.

Robert Levy, a neurosurgeon in Boca Raton, Florida, and a former president of the International Neuromodulation Society, was particularly burnt by the demise of Nuvectra. He had been sufficiently impressed by its technology to become chairman of the company’s medical advisory board in August 2016. But in 2019, around five months before Nuvectra filed for bankruptcy, he cut ties after what he and others formerly associated with the firm saw as the company sidelining the needs of people using the implant in its attempt to stay afloat. “All of us who had any association with the company at that time expressed our severe dissatisfaction with such a move, which we felt was unethical,” Levy says.

“Making patients the victims of bad business practices or a bankruptcy is horrible for them, horrible for the field, and grossly unethical.”

“I learned that they completely abandoned the patients, and frankly I was horrified” — Robert Levy

From now on, Levy requires any new company that asks him to implant its product to send him a letter guaranteeing support for the people who have the surgery, should something happen to the business. “If they should not supply such a letter, they’re not going to be included in my practice,” he says.

He plans to write an editorial arguing for this approach in the journal Neuromodulation, of which he is editor-in-chief, to raise awareness further and put pressure on neurotechnology companies. “Patients are suffering terribly,” he says. “Making them the victims of bad business practices or a bankruptcy is horrible for patients, horrible for the field and grossly unethical.”

Momentum is also building behind another way to protect people with implants: technical standardization. The electrodes, connectors, programmable circuits and power supplies used in implanted neurotechnology are often proprietary or otherwise difficult to source, as Möllmann-Bohle discovered when looking for replacement parts for his stimulator. If components were common across devices, one manufacturer might be able to step in and offer spares when another goes under.

A 2021 survey of surgeons who implant neurostimulators showed that 86% backed standardization of the connectors used by these devices2. Such a move would not be without precedent, says retired neurosurgeon and medical-device engineer Richard North, formerly at Johns Hopkins University School of Medicine in Baltimore, Maryland, and president of the Institute of Neuromodulation in Chicago, who led the survey. Cardiac pacemakers have included standardized elements since the early 1990s, when manufacturers voluntarily agreed to ensure that any company’s power supply could fuel a pacemaker from any other company. Many of those same companies are now the biggest names in spinal-cord stimulators and DBS systems.

Parts from cardiac pacemakers have been standardized since the 1990s. Credit: Louise Oligny/BSIP/Alamy


North now co-chairs a Connector Standards Committee for the North American Neuromodulation Society, of which the Institute of Neuromodulation is a part, that is promoting the idea. Although the industry has not raced to embrace further standardization, he thinks it is only a matter of time. “It’s inevitable that there will be standardization, and I think the companies involved recognize that too,” he says. As well as making replacement components easier to come by, North thinks that standardization would boost innovation by encouraging companies to develop components that can be used with a wide range of existing systems.

Peckham hopes that the neurotechnology field can go even further — he wants devices to be made open source. Under the auspices of the Institute for Functional Restoration, a non-profit organization that he and his colleagues at Case Western established in 2013, Peckham plans to make the design specifications and supporting documentation of new implantable technologies developed by his team freely available. “Then people can just cut and paste,” he says.

This marks a major departure from the proprietary nature of most current devices. Peckham hopes that others will build on the technology, and potentially even adapt it for new indications. The benefits for the people using these devices are at the centre of his thinking. “It starts with a commitment to the patients, to the people who can benefit from this,” he says.

It is exactly that sort of commitment that people such as Möllmann-Bohle, White and French want to see — and to which they think they are entitled. A raft of new companies is developing ever more sophisticated neurological implants with the power to transform people’s lives. Should any fail, it is the people using the devices, and their physicians, who will be most affected, says Triolo.

The recent run of commercial casualties demonstrates the human cost of abandoning neurotechnology. “It’s impossible,” Triolo says, “for people not to know that this is becoming a bigger and bigger issue.”

References

  1. Schoenen, J. et al. Cephalalgia 33, 816–830 (2013).
  2. North, R. B. et al. Neuromodulation 24, 1299–1306 (2021).

Author: Liam Drew

Design: Chris Ryan

Video: Josh Birt, Colin Kelly, Adam Levy

Original photography: Nyani Quarmyne

Audio: Adam Levy

Multimedia editors: Adam Levy, Dan Fox

Photo editors: Jessica Hallett, Madeline Hutchinson

Translation: Shaya Zarrin

Subeditor: Jenny McCarthy

Project manager: Rebecca Jones

Editor: Richard Hodson

This article was authored by Neil Savage and originally published in Nature

Inspiration can come from anywhere. For Radhika Nagpal, it came from her honeymoon.

Nagpal was snorkelling in the Bahamas when she was approached by a school of colourful striped fish, moving as one. “They come straight at you and check you out and then move off,” says Nagpal, now a mechanical engineer at Princeton University in New Jersey. “I was like, ‘Wow, that is a collective behaviour that I’ve never seen.’”

Her mind returned to those curious fish years later, when she was pondering ways to build swarms of robots that could coordinate their behaviour in challenging environments. The result is a school of robotic fish — called Bluebots — that can coordinate their activity with their fellows1.

Nagpal’s school is small: just ten fish with limited abilities. The fish are equipped with blue LEDs so that their comrades can spot them underwater. Simple rules in their programming, such as swimming to the left when they see another Bluebot, enable them to synchronize their movement. But Nagpal hopes eventually to build larger collectives with more complex behaviours.
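
Nagpal’s actual Bluebot controller is more sophisticated than this, but the flavour of such a simple local rule can be sketched in a few lines of Python. All names and parameter values here are illustrative assumptions, not Bluebot specifications:

```python
import math

# Illustrative sketch of a "veer left when you see a neighbour" rule of the
# kind described above; NOT the published Bluebot controller. Each robot is
# an (x, y, heading) tuple, with the heading in radians.

TURN_LEFT = 0.1      # fixed left turn per step (assumed value)
VIEW_RADIUS = 5.0    # how far away a neighbour's LED is visible (assumed)

def step(robots, speed=0.5):
    """Advance every robot one time step using only local information."""
    new = []
    for i, (x, y, h) in enumerate(robots):
        sees_neighbour = any(
            math.hypot(x - ox, y - oy) <= VIEW_RADIUS
            for j, (ox, oy, _) in enumerate(robots) if j != i
        )
        if sees_neighbour:
            h += TURN_LEFT          # the simple local rule
        new.append((x + speed * math.cos(h), y + speed * math.sin(h), h))
    return new

# Two robots within sight of each other both veer left in unison.
robots = step([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

Because every robot applies the same rule to the same local cues, their headings shift together, which is the essence of the synchronized movement the Bluebots display.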

Such robotic schools could be tasked with locating coral reefs and recording data on them, helping researchers to study the reefs’ health over time. Just as living fish in a school might engage in different behaviours simultaneously — some mating, some caring for young, others finding food — but suddenly move as one when a predator approaches, robotic fish would have to perform individual tasks while signalling to each other when it’s time to do something different.

These Bluebots, modelled after schools of fish, can synchronize their movement with each other. Credit: Berlinger, F. et al. Sci. Robot. 6, eabd8668 (2021)

“The majority of what my lab really looks at is the coordination techniques — what kinds of algorithms have evolved in nature to make systems work well together?” she says.

Many roboticists are looking to biology for inspiration in robot design, particularly in the area of locomotion. Although big industrial robots in vehicle factories, for instance, remain anchored in place, other robots will be more useful if they can move through the world, performing different tasks and coordinating their behaviour.

Some robots can already move on wheels, but wheeled robots cannot climb stairs and are stymied by rough or shifting terrain, such as sand or gravel. By borrowing movement strategies from nature — walking, crawling, swimming, slithering, flying or leaping — robots could gain new functionality. They might perform search-and-rescue operations after an earthquake, or explore caves that are too small or unstable for people to venture into. They could carry out underwater inspections of ships and bridges. And unmanned aerial vehicles (UAVs) could fly more efficiently and better handle turbulence.

“The basic idea is looking to nature to see how things can potentially be done differently, how we can improve our automated systems,” says Michael Tolley, a mechanical engineer who heads the Bioinspired Robotics and Design Lab at the University of California, San Diego.

See Spot run

Perhaps the most obvious strategy for robotic motion is walking, and legged robots do exist. Spot, a low-slung, four-legged machine that looks like a headless yellow dog, can climb uphill and navigate stairs. Its developer, Boston Dynamics in Waltham, Massachusetts, markets the US$74,500 device for mobile inspection of factories, construction sites and hazardous environments. A similar-looking robot, the Mini Cheetah, has been developed at the Massachusetts Institute of Technology (MIT) in Cambridge. “More than 90% of land animals are quadruped,” says Sangbae Kim, a mechanical engineer at MIT who helped to design the Mini Cheetah. “So a natural place to look at is the quadrupedal world. And the cheetah is a king of that world in terms of the speed.”

The Mini Cheetah can already perform backflips, and it runs as fast as 3.9 metres per second — about one-tenth as fast as an actual cheetah, but speedy for a robot. Now Kim is developing control software that he hopes will allow the robot to move smoothly across varying surfaces. This is challenging because the rules for how best to move a limb vary depending on the friction and hardness of the surface. Currently, moving from grass to concrete, or running up a gravelly hill, can cause the robot to stumble. “It runs really ugly and awkward,” Kim says. “It doesn’t fall, but it’s not efficient.”

Nevertheless, quadruped robots are one of the better options for negotiating difficult terrain, says J. Sean Humbert, a mechanical engineer who directs the Bio-Inspired Perception and Robotics Laboratory at the University of Colorado, Boulder. Last year, his group took part in the US Defense Advanced Research Projects Agency's Subterranean Challenge, in which robots were tasked with navigating tunnels, caves and urban settings to find particular targets; the team took third place, winning $500,000. "The robots that ended up doing really well across the teams were the legged robots," Humbert says. But faced with a sandy, uphill, rocky landscape, these robots struggled. "Even our Spot robot tipped over and slid around," he says.

Feel the strain

One possible solution, Humbert says, is to endow robots with animals' innate ability to sense and respond to mechanosensory information, such as pressure, strain or vibration. He's been taking that approach with flying machines by embedding strain sensors in the wings of fixed-wing UAVs, as well as in the arms of quadrotor drones, which rely on spinning blades to fly and hover.

The work grew out of studies of honey bees. When Humbert placed bees in a wind tunnel and hit them with sudden gusts of air, their flight would be momentarily disturbed. After a quick change in the pattern of their wing beats, they would right themselves. Honey bees beat their wings 251 times per second, and the animals could make these corrections in just 15 to 20 beats — about 0.08 seconds. "Our conclusion was that [that] had to be mechanosensory information," Humbert says. "Vision is just not fast enough to correct the spins that we're seeing." If a drone could similarly sense a disturbance and automatically correct for it that rapidly, he says, it would be much less likely to crash or be knocked off course.
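The reported reaction time follows directly from the wing-beat rate; a quick arithmetic check (the helper function is illustrative):

```python
# Check the reported bee reaction time: at 251 wing beats per second,
# 15-20 beats correspond to roughly 0.06-0.08 seconds.
BEAT_RATE_HZ = 251  # honey-bee wing beats per second, as reported

def beats_to_seconds(n_beats, rate_hz=BEAT_RATE_HZ):
    """Convert a count of wing beats into elapsed time in seconds."""
    return n_beats / rate_hz

low = beats_to_seconds(15)   # ~0.060 s
high = beats_to_seconds(20)  # ~0.080 s
print(f"{low:.3f}-{high:.3f} s")
```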

A bee covered in pollen is seen hovering next to a red dahlia flower at right
Some researchers are turning to bees as inspiration for robots that can respond to mechanosensory information.Credit: Sumiko Scott/Getty

Fish also respond to mechanosensory stimuli, using a system of sensory organs known as the lateral line. The structure consists of hundreds of tiny sensors spread along the head, trunk and tail fin, and it enables fish to sense changes in the motion and pressure of water caused by obstacles, such as rocks and other animals. "Fish are sensing all of that and are using that, as well as vision, to position themselves relative to each other," Nagpal says. No comparable underwater pressure sensor exists, but her team hopes to develop one to improve the Bluebots' navigation.

In San Diego, Tolley is exploring robots built from polymers or other pliable materials that can more safely interact with humans or squeeze through tight spaces. Squishy, pliable robots could have more flexible motion than hard robots with only a few joints, but getting them to walk on soft legs is a challenge.

Tolley designed a robot with four soft legs, each divided into three chambers2. Pressurized air first enters one chamber, then moves to the next. This movement causes the legs to bend, then relax. By alternately activating opposing pairs of legs, the robot trundles along like a turtle. And because it does not need electronic controls, its design could be useful even in the presence of electromagnetic interference.
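The alternating-pair sequence can be sketched as a simple schedule (an illustrative toy, not the authors' pneumatic controller; the leg names are invented, while the pairing and three-chamber cycle follow the description above):

```python
from itertools import cycle

# Two diagonal leg pairs take turns; within each turn, pressurized air
# cycles through the leg's three chambers, bending then relaxing the pair.
LEG_PAIRS = (("front-left", "back-right"), ("front-right", "back-left"))
CHAMBERS_PER_LEG = 3

def gait_schedule(n_steps):
    """Return a list of (active_pair, pressurized_chamber) tuples."""
    pairs = cycle(LEG_PAIRS)
    schedule = []
    for _ in range(n_steps):
        pair = next(pairs)
        for chamber in range(CHAMBERS_PER_LEG):
            schedule.append((pair, chamber))
    return schedule

steps = gait_schedule(2)  # one full turtle-like stride: each pair acts once
```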

Hard or soft, one issue robots struggle with is falling over. If a multimillion-dollar robot trips over a rock on Mars, an entire mission could be jeopardized. Some researchers are looking to insects for solutions, particularly click beetles, which can jump up to 20 times their body length without using their legs3.

Clip showing a quadruped robot with soft tube-like legs walking forwards (L) compared with a turtle walking forwards (R)
The gait of this soft-legged robot, propelled by pressurized air, resembles that of a turtle.Credit: Left: David Baillot/UCSD. Drotman et al., Sci. Robot. 6. eaay2627 (2021); right: Voshadhi/Getty

Click beetles use a muscle to compress soft tissue, building up energy; a latch system holds the compressed tissue in place. When the animal releases the latch, producing its characteristic clicking sound, the tissue expands rapidly and the beetle is launched into the air, accelerating at about 530 times the force of gravity. (By comparison, a rider on a roller coaster typically experiences about four times the force of gravity.) If a robot could do that, it would have a mechanism for righting itself after tipping over, says Aimy Wissa, a mechanical and aerospace engineer who runs the Bio-inspired Adaptive Morphology Lab at Princeton.
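The reported numbers are roughly self-consistent under textbook projectile kinematics. In this sketch the launch distance and body length are assumed values chosen for illustration, not measurements from the study:

```python
import math

G = 9.81                # gravitational acceleration, m/s^2
PEAK_ACCEL = 530 * G    # reported peak acceleration (~530 g), m/s^2
LAUNCH_DISTANCE = 5e-4  # assumed distance over which thrust acts (~0.5 mm)
BODY_LENGTH = 0.012     # assumed click-beetle body length (~12 mm)

# v^2 = 2*a*d gives the launch speed; h = v^2 / (2g) the ballistic height.
launch_speed = math.sqrt(2 * PEAK_ACCEL * LAUNCH_DISTANCE)
jump_height = launch_speed ** 2 / (2 * G)
print(round(jump_height / BODY_LENGTH))
```

With these assumed inputs the estimate lands near the reported ~20 body lengths; the true figure depends on the actual latch-release stroke and body size.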

Even more interesting, Wissa says, is that the beetle can perform this manoeuvre four or five times in rapid succession, without suffering any apparent damage. She's trying to develop models that explain how the energy is rapidly dissipated without harming the insect, which could prove useful in applications involving rapid acceleration and deceleration, such as bulletproof vests. Other creatures also store and release energy to trigger rapid motion, including fruit-fly larvae and Venus flytraps (Dionaea muscipula), and understanding how they do so could lead to more-responsive artificial muscles, Tolley says.

Totally legless

In some places, such as narrow underground passages or on unstable surfaces, legs could require too much space or be too unstable to propel a robot. Howie Choset, a computer scientist at the Robotics Institute of Carnegie Mellon University in Pittsburgh, Pennsylvania, builds snake-like robots with 16 joints that provide a range of motion that could drive everything from surgical instruments wending through the body to reconnaissance robots exploring archaeological sites.

In one early project, Choset took his robo-snakes to the Red Sea, where ancient Egyptians had dug caves to store boats that they'd built for trade with the Land of Punt, thought to be located in modern Somalia. The caves were no longer safe for human explorers, but snake robots seemed well suited to the task — until they didn't. "The truth is, we got stuck," Choset says. "We couldn't go up and down the sandy inclines."

To work out how a real snake would approach the problem, Choset looked to sidewinders, snakes that move by thrusting their bodies sideways in an S-shaped curve, gliding easily over sand4. Because sand is granular, it can behave as either a liquid or a solid, depending on how much force is applied. Choset found that sidewinders can exert the right amount of pushing force so that the sand remains solid underneath them and supports their bodies. "It wasn't until we started looking at the real snakes, the sidewinders, and how they moved on sandy terrains that we were able to understand how to make our robot work on sandy terrains," he says.

A snake-like robot rears its front camera while curled in sand
This robot, inspired by sidewinding snakes, moves by twisting in an S-shaped curve.Credit: Carnegie Mellon Univ.

As for Wissa, she's trying to build robots that can both swim and fly, using an animal that can do both as inspiration: flying fish5. These creatures use their pelvic fins to skim across the water's surface and then launch into the air, where they can glide up to 400 metres.

Flying fish, Wissa explains, are "actually very good gliders". But when they drop back to the water, they don't submerge. "They actually just dip their caudal fin and they flap it vigorously, and then they can take off again," Wissa says. "You can think of it as a taxiing manoeuvre." She hopes to learn enough about this behaviour to develop a robot that can move through both air and water using the same propulsion mechanisms. "We're very good as engineers in designing things for a single function," Wissa says. "Where nature really can teach us a lot of lessons is this concept of multi-functionality."

For another type of multi-functional locomotion, Wissa focuses on grasshoppers, which can jump and then open their wings to glide. She hopes to understand what makes them such good gliders. Many other insects rely on high-frequency flapping to fly. Perhaps, she says, it has to do with their wing shape.

A parrot banking mid-flight against a black background
Birds have covert feathers that improve their control over how air flow interacts with their wings. By understanding these feathers, scientists could improve the flight of aerial vehicles.Credit: Barbara Brady-Smith/Tetra/Getty

Wissa also seeks inspiration from birds. She's used aerodynamic testing and structural modelling to investigate covert feathers — small, stiff feathers that overlap other feathers on a bird's wings and tail6. When a bird tries to land in windy conditions, the covert feathers on the wings deploy, either passively in response to air flow or actively under control of a tendon. The covert feathers alter the shape of the wing and give the bird finer control over its interaction with air flow, and don't require as much energy as flapping the whole wing. By learning to understand the physics of these feathers, Wissa hopes to improve the flight of a UAV.

A two-way street

Biology has informed robotics, but the engineering involved can also provide insights into animal kinesiology. "We didn't start by looking at biology," Choset says. Instead, he mathematically modelled the fundamental principles of the motion he was interested in. "And in doing so, something kind of magical happened — we started coming up with ways to explain how biology works. So, is it robot-inspired biology or biologically inspired robots?"

Other engineers have had similar experiences. Nagpal is collaborating with ichthyologist George Lauder at Harvard University in Cambridge to model the hydrodynamics of schooling, to see whether the formation provides living fish with an energy benefit. And designs that make drones fly in a more energy-efficient way might help to explain how birds and insects have evolved to do something similar. Wissa hopes her work, in addition to building flying, swimming robots, will lead to a greater understanding of flying fish. "We're using this model to actually test hypotheses about nature, about why some species of flying fish have enlarged pelvic fins while others don't," Wissa says.

But despite the links between biology and engineering, don't expect bio-inspired robots to ultimately look like the creatures that influenced them. Wissa says that, although many first attempts at mimicking biology resemble the original biological forms, scientists' ultimate aim is to understand the principles behind how the systems operate, and then adapt those to different structures and materials. "We're just copying the physics and the rules for how things work," she says, "and then making engineering systems that serve the same function."

doi: https://doi.org/10.1038/d41586-022-03014-x

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with the financial support of third parties. About this content.

References

  1. Berlinger, F., Gauci, M. & Nagpal, R. Sci. Robot. 6, eabd8668 (2021).
  2. Drotman, D., Jadhav, S., Sharp, D., Chan, C. & Tolley, M. T. Sci. Robot. 6, eaay2627 (2021).
  3. Bolmin, O. et al. Proc. Natl Acad. Sci. USA 118, e2014569118 (2021).
  4. Gong, C., Hatton, R. L. & Choset, H. In 2012 IEEE International Conference on Robotics and Automation 4222–4227 (2012).
  5. Saro-Cortes, V. et al. Integr. Comp. Biol. https://doi.org/10.1093/icb/icac101 (2022).
  6. Duan, C. & Wissa, A. Bioinspir. Biomim. 16, 046020 (2021).


This article was authored by Anthony King and originally published in Nature

Cancer drugs usually take a scattergun approach. Chemotherapies inevitably hit healthy bystander cells while blasting tumours, sparking a slew of side effects. It is also a big ask for an anticancer drug to find and destroy an entire tumour — some are difficult to reach, or hard to penetrate once located.

A long-dreamed-of alternative is to inject a battalion of tiny robots into a person with cancer. These miniature machines could navigate directly to a tumour and smartly deploy a therapeutic payload right where it is needed. "It is very difficult for drugs to penetrate through biological barriers, such as the blood–brain barrier or mucus of the gut, but a microrobot can do that," says Wei Gao, a medical engineer at the California Institute of Technology in Pasadena.


Among his inspirations is the 1966 film Fantastic Voyage, in which a miniaturized submarine goes on a mission to remove a blood clot in a scientist's brain, piloted through the bloodstream by a similarly shrunken crew. Although most of the film remains firmly in the realm of science fiction, progress on miniature medical machines in the past ten years has seen experiments move into animals for the first time.

There are now numerous micrometre- and nanometre-scale robots that can propel themselves through biological media, such as the matrix between cells and the contents of the gastrointestinal tract. Some are moved and steered by outside forces, such as magnetic fields and ultrasound. Others are driven by onboard chemical engines, and some are even built on top of bacteria and human cells to take advantage of those cells' inbuilt ability to get around. Whatever the source of propulsion, it is hoped that these tiny robots will be able to deliver therapies to places that a drug alone might not be able to reach, such as into the centre of solid tumours. However, even as those working on medical nano- and microrobots begin to collaborate more closely with clinicians, it is clear that the technology still has a long way to go on its fantastic journey towards the clinic.

Film still of a tiny spaceship flying through the inside of a human body
In the 1966 film Fantastic Voyage, a miniaturized medical team goes on a mission to remove a blood clot in a scientist's brain.Contributor: Collection Christophel/Alamy Stock Photo

Poetry in motion

One of the key challenges for a robot operating inside the human body is getting around. In Fantastic Voyage, the crew uses blood vessels to move through the body. However, it is here that reality must immediately diverge from fiction. "I love the movie," says roboticist Bradley Nelson, gesturing to a copy of it in his office at the Swiss Federal Institute of Technology (ETH) Zurich in Switzerland. "But the physics are terrible." Tiny robots would have severe difficulty swimming against the flow of blood, he says. Instead, they will initially be administered locally, then move towards their targets over short distances.

When it comes to design, size matters. "Propulsion through biological media becomes a lot easier as you get smaller, as below a micron bots slip between the network of macromolecules," says Peer Fischer, a robotics researcher at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. Bots are therefore typically no more than 1–2 micrometres across. However, most do not fall below 300 nanometres. Below that size, it becomes more challenging to detect and track them in biological media, as well as more difficult to generate sufficient force to move them.

Scientists have several choices for how to get their bots moving. Some opt to provide power externally. For instance, in 2009, Fischer — who was working at Harvard University in Cambridge, Massachusetts, at the time, alongside fellow nanoroboticist Ambarish Ghosh — devised a glass propeller, just 1–2 micrometres in length, that could be rotated by a magnetic field1. This allowed the structure to move through water, and by adjusting the magnetic field, it could be steered with micrometre precision. In a 2018 study2, Fischer launched a swarm of micropropellers into a pig's eye in vitro, and had them travel over centimetre distances through the gel-like vitreous humour into the retina — a rare demonstration of propulsion through real tissue. The swarm was able to slip through the network of biopolymers within the vitreous humour thanks in part to a silicone oil and fluorocarbon coating applied to each propeller. The coating, inspired by the slippery surface that the carnivorous pitcher plant Nepenthes uses to catch insects, minimized interactions between the micropropellers and biopolymers.

Extreme close-up of a nanopropeller
An electron microscope image of a glass nanopropeller.Credit: Conny Miksch, MPI-IS

Another way to provide propulsion from outside the body is to use ultrasound. One group placed magnetic cores inside the membranes of red blood cells3, which also carried photoreactive compounds and oxygen. The cells' distinctive biconcave shape and greater density than other blood components allowed them to be propelled using ultrasonic energy, with an external magnetic field acting on the metallic core to provide steering. Once the bots are in position, light can excite the photosensitive compound, which transfers energy to the oxygen and generates reactive oxygen species to damage cancer cells.

This hijacking of cells is proving to have therapeutic merits in other research projects. Some of the most promising strategies aimed at treating solid tumours involve human cells and single-celled organisms jazzed up with synthetic parts. In Germany, a group led by Oliver Schmidt, a nanoscientist at Chemnitz University of Technology, has designed a biohybrid robot based on sperm cells4. These are some of the fastest motile cells, capable of hitting speeds of 5 millimetres per minute, Schmidt says. The hope is that these powerful swimmers can be harnessed to deliver drugs to tumours in the female reproductive tract, guided by magnetic fields. Already, it has been shown that they can be magnetically guided to a model tumour in a dish.

Credit: Leibniz IFW, Dresden

"We could load anticancer drugs efficiently into the head of the sperm, into the DNA," says Schmidt. "Then the sperm can fuse with other cells when it pushes against them." At the Chinese University of Hong Kong, meanwhile, nanoroboticist Li Zhang led the creation of microswimmers from Spirulina microalgae cloaked in the mineral magnetite. The team then tracked a swarm of them inside rodent stomachs using magnetic resonance imaging5. The biohybrids were shown to selectively target cancer cells. They also gradually degrade, reducing unwanted toxicity.

Another way to get micro- and nanobots moving is to fit them with a chemical engine: a catalyst drives a chemical reaction, creating a gradient on one side of the machine to generate propulsion. Samuel Sánchez, a chemist at the Institute for Bioengineering of Catalonia in Barcelona, Spain, is developing nanomotors driven by chemical reactions for use in treating bladder cancer. Some early devices relied on hydrogen peroxide as a fuel. Its breakdown, promoted by platinum, generated water and oxygen gas bubbles for propulsion. But hydrogen peroxide is toxic to cells even in minuscule amounts, so Sánchez has transitioned towards safer materials. His latest nanomotors are made up of honeycombed silica nanoparticles, tiny gold particles and the enzyme urease6. These 300–400-nm bots are driven forwards by the chemical breakdown of urea in the bladder into carbon dioxide and ammonia, and have been tested in the bladders of mice. "We can now move them and see them inside a living system," says Sánchez.
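The fuel chemistry here is the urease-catalysed hydrolysis of urea; the balanced reaction (standard enzyme chemistry, not taken from the paper) is:

```latex
\mathrm{CO(NH_2)_2} + \mathrm{H_2O}
  \xrightarrow{\text{urease}} \mathrm{CO_2} + 2\,\mathrm{NH_3}
```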

Breaking through

A standard treatment for bladder cancer is surgery, followed by immunotherapy in the form of an infusion of a weakened strain of Mycobacterium bovis bacteria into the bladder, to prevent recurrence. The bacterium activates the person's immune system, and is also the basis of the BCG vaccine for tuberculosis. "The clinicians tell us that this is one of the few things that has not changed over the past 60 years," says Sánchez. There is a need to improve on BCG in oncology, according to his collaborator, urologic oncologist Antoni Vilaseca at the Hospital Clinic of Barcelona. Current treatments reduce recurrences and progression, "but we have not improved survival", Vilaseca says. "Our patients are still dying."

The nanobot approach that Sánchez is trying promises precision delivery. He plans to insert his bots into the bladder (or intravenously), to motor towards the cancer with their cargo of therapeutic agents to target cancer cells, using abundant urea as a fuel. He might use a magnetic field for guidance, if needed, but a more straightforward replacement of BCG with bots that do not require external control, perhaps using an antibody to bind a tumour marker, would please clinicians most. "If we can deliver our treatment to the tumour cells only, then we can reduce side effects and increase activity," says Vilaseca.

Close-up of urease-powered nanomotors
An optical microscopy video showing a swarm of urease-powered nanomotors swimming in urea solution.Credit: Samuel Sánchez Ordóñez

Not all cancers can be reached by swimming through liquid, however. Natural physiological barriers can block efficient drug delivery. The gut wall, for example, allows absorption of nutrients into the bloodstream, and offers an avenue for getting therapies into bodies. "The gastrointestinal tract is the gateway to our body," says Joseph Wang, a nanoengineer at the University of California, San Diego. However, a combination of cells, microbes and mucus stops many particles from accessing the rest of the body. To deliver some therapies, simply being in the intestine isn't enough — they also need to be able to burrow through its defences to reach the bloodstream, and a nanomachine could help with this.

In 2015, Wang and his colleagues, including Gao, reported the first self-propelled robot in vivo, inside a mouse stomach7. Their zinc-based nanomotor dissolved in the harsh stomach acids, producing hydrogen bubbles that rocketed the robot forwards. In the lower gastrointestinal tract, they instead use magnesium. "Magnesium reacts with water to give a hydrogen bubble," says Wang. In either case, the metal micromotors are encapsulated in a coating that dissolves at the right location, freeing the micromotor to propel the bot into the mucous wall.

Some bacteria have already worked out their own ways to sneak through the gut wall. Helicobacter pylori, which causes inflammation in the stomach, excretes urease enzymes to generate ammonia and liquefy the thick mucus that lines the stomach wall. Fischer envisages future micro- and nanorobots borrowing this approach to deliver drugs through the gut.


Solid tumours are another difficult place to deliver a drug. As these malignancies develop, a ravenous hunger for oxygen promotes an outside surface covered with blood vessels, while an oxygen-deprived core builds up within. Low oxygen levels force cells deep inside to switch to anaerobic metabolism and churn out lactic acid, creating acidic conditions. As the oxygen gradient builds, the tumour becomes increasingly difficult to penetrate. Nanoparticle drugs lack a force with which to muscle through a tumour's fortifications, and typically less than 2% of them will make it inside8. Proponents of nanomachines think that they can do better.

Sylvain Martel, a nanoroboticist at Montreal Polytechnic in Canada, is trying to break into solid tumours using bacteria that naturally contain a chain of magnetic iron-oxide nanocrystals. In nature, these Magnetococcus species seek regions that have low oxygen. Martel has engineered such a bacterium to target active cancer cells deep inside tumours8. "We guide them with a magnetic field towards the tumour," explains Martel, taking advantage of the magnetic crystals that the bacteria typically use like a compass for orientation. The precise locations of low-oxygen regions are uncertain even with imaging, but once these bacteria reach the right location, their autonomous capability kicks in and they motor towards low-oxygen regions. In a mouse, more than half the bacteria injected close to tumour grafts broke into this tumour region, each laden with dozens of drug-loaded liposomes. Martel cautions, however, that there is still some way to go before the technology is proven safe and effective for treating people with cancer.

In the Netherlands, chemist Daniela Wilson at Radboud University in Nijmegen and colleagues have developed enzyme-driven nanomotors powered by DNA that might similarly be able to autonomously home in on tumour cells9. The motors navigate towards areas that are richer in DNA, such as tumour cells undergoing apoptosis. "We want to create systems that are able to sense gradients by different endogenous fuels in the body," Wilson says, suggesting that the higher levels of lactic acid or glucose typically found in tumours could also be used for targeting. Once in place, the autonomous bots seem to be picked up by cells more easily than passive particles are — perhaps because the bots push against cells.

Sylvain Martel and his colleagues review information on a bank of computer screens
Nanoroboticist Sylvain Martel (middle) discusses a new computer interface with two members of his team.Credit: Caroline Perron

Fiction versus reality

Inspirational though Fantastic Voyage might have been for many working in the field of medical nanorobotics, there are some who think the film has become a burden. "People think of this as science fiction, which excites people, but on the other hand they don't take it so seriously," says Martel. Fischer is similarly jaded by movie-inspired hype. "People sometimes write very liberally as if nanobots for cancer treatment are almost here," he says. "But this is not even in clinical trials right now."

Nonetheless, advances in the past ten years have raised expectations of what is possible with current technology. "There's nothing more fun than building a machine and watching it move. It's a blast," says Nelson. But having something wiggling under a microscope no longer has the same draw, without medical context. "You start thinking, 'how could this benefit society?'" he says.


With this in mind, many researchers creating nanorobots for medical purposes are working more closely with clinicians than ever before. "You find a lot of young doctors who are really interested in what the new technologies can do," Nelson says. Neurologist Philipp Gruber, who works with stroke patients at Aarau Cantonal Hospital in Switzerland, began a collaboration with Nelson two years ago after contacting ETH Zurich. The pair share an ambition to use steerable microbots to dissolve clots in people's brains after ischaemic stroke — either mechanically, or by delivering a drug. "Brad knows everything about engineering," says Gruber, "but we can advise about the problems we face in the clinic and the limitations of current treatment options."

Sánchez tells a similar story: while he began talking to physicians around a decade ago, their interest has warmed considerably since his experiments in animals began three to four years ago. "We are still in the lab, but at least we are working with human cells and human organoids, which is a step forward," says his collaborator Vilaseca.

As these seedlings of clinical collaborations take root, it is likely that oncology applications will be the earliest movers — particularly those that resemble current treatments, such as infusing microbots instead of BCG into cancerous bladders. But even these therapeutic uses are probably at least 7–10 years away. In the nearer term, there might be simpler tasks that nanobots can be used to accomplish, according to those who follow the field closely.

For example, Martin Pumera, a nanoroboticist at the University of Chemistry and Technology in Prague, is interested in improving dental care by landing nanobots beneath titanium tooth implants10. The tiny gap between the metal implants and gum tissue is an ideal niche for bacterial biofilms to form, triggering infection and inflammation. When this happens, the implant must often be removed, the area cleaned, and a new implant installed — an expensive and painful procedure. He is collaborating with dental surgeon Karel Klíma at Charles University in Prague.

Another problem the two are tackling is oral bacteria gaining access to tissue during surgery of the jaws and face. "A biofilm can establish very quickly, and that can mean removing titanium plates and screws after surgery, even before a fracture heals," says Klíma. A titanium oxide robot could be administered to implants using a syringe, then activated chemically or with light to generate active oxygen species to kill the bacteria. Examples a few micrometres in length have so far been constructed, but much smaller bots — only a few hundred nanometres in length — are the ultimate aim.

Clearly, this is a long way from parachuting bots into hard-to-reach tumours deep inside a person. But the rising tide of in vivo experiments and the increasing involvement of clinicians suggests that microrobots might just be leaving port on their long journey towards the clinic.

doi: https://doi.org/10.1038/d41586-022-00859-0


References

  1. Ghosh, A. & Fischer, P. Nano Lett. 9, 2243–2245 (2009).
  2. Wu, Z. et al. Sci. Adv. 4, eaat4388 (2018).
  3. Gao, C. et al. ACS Appl. Mater. Interfaces 11, 23392–23400 (2019).
  4. Xu, H. et al. ACS Nano 12, 327–337 (2018).
  5. Yan, X. et al. Sci. Robot. 2, eaaq1155 (2017).
  6. Hortelao, A. C. et al. Sci. Robot. 6, eabd2823 (2021).
  7. Gao, W. et al. ACS Nano 9, 117–123 (2015).
  8. Felfoul, O. et al. Nature Nanotechnol. 11, 941–947 (2016).
  9. Ye, Y. et al. Nano Lett. 21, 8086–8094 (2021).
  10. Villa, K. et al. Cell Rep. Phys. Sci. 1, 100181 (2020).

This article was authored by Neil Savage and originally published in Nature

Bing Liu was road-testing a self-driving car when suddenly something went wrong. The vehicle had been operating smoothly until it reached a T-junction and refused to move. Liu and the car's other occupants were baffled. The road they were on was deserted, with no pedestrians or other cars in sight. "We looked around, we noticed nothing in the front, or in the back. I mean, there was nothing," says Liu, a computer engineer at the University of Illinois Chicago.

Stumped, the engineers took over control of the vehicle and drove back to the laboratory to review the trip. They worked out that the car had been stopped by a pebble in the road. It wasn't something a person would even notice, but when it showed up on the car's sensors it registered as an unknown object — something the artificial intelligence (AI) system driving the car had not encountered before.


The problem wasn't with the AI algorithm as such — it performed as intended, stopping short of the unknown object to be on the safe side. The issue was that once the AI had finished its training, using simulations to develop a model that told it the differences between a clear road and an obstacle, it could learn nothing more. When it encountered something that had not been part of its training data, such as the pebble or even a dark spot on the road, the AI did not know how to react. People can build on what they've learnt and adapt as their environment changes; most AI systems are locked into what they already know.

In the real world, of course, unexpected situations inevitably arise. Therefore, Liu argues that any system aiming to perform learnt tasks outside a lab needs to be capable of on-the-job learning โ€” supplementing the model itโ€™s already developed with new data that it encounters. The car could, for instance, detect another car driving through a dark patch on the road with no problem, and decide to imitate it, learning in the process that a wet bit of road was not a problem. In the case of the pebble, it could use a voice interface to ask the carโ€™s occupant what to do. If the rider said it was safe to continue, it could drive on, and it could then call on that answer for its next pebble encounter. โ€œIf the system can continually learn, this problem is easily solved,โ€ Liu says.

Car-technology company Cruise has based its first autonomous vehicle on a Chevrolet Bolt, made by automotive manufacturer General Motors. Credit: Smith Collection/Gado/Getty

Such continual learning, also known as lifelong learning, is the next step in the evolution of AI. Much AI relies on neural networks, which take data and pass them through a series of computational units, known as artificial neurons, that perform small mathematical functions on the data. Eventually the network develops a statistical model of the data that it can then match to new inputs. Researchers, who have based these neural networks on the operation of the human brain, are looking to humans again for inspiration on how to make AI systems that can keep learning as they encounter new information. Some groups are trying to make computer neurons more complex so they're more like neurons in living organisms. Others are imitating the growth of new neurons in humans so machines can react to fresh experiences. And some are simulating dream states to overcome a problem of forgetfulness.

Lifelong learning is necessary not only for self-driving cars, but for any intelligent system that has to deal with surprises, such as chatbots, which are expected to answer questions about a product or service, and robots that can roam freely and interact with humans. "Pretty much any instance where you deploy AI in the future, you would see the need for lifelong learning," says Dhireesha Kudithipudi, a computer scientist who directs the MATRIX AI Consortium for Human Well-Being at the University of Texas at San Antonio.

Continual learning will be necessary if AI is to truly live up to its name. โ€œAI, to date, is really not intelligent,โ€ says Hava Siegelmann, a computer scientist at the University of Massachusetts Amherst who created the Lifelong Learning Machines research-funding initiative for the US Defense Advanced Research Projects Agency. โ€œIf itโ€™s a neural network, you train it in advance, you give it a data set and thatโ€™s all. It does not have the ability to improve with time.โ€

Model making

In the past decade, computers have become adept at tasks such as classifying cats or tumours in images, identifying sentiment in written language, and winning at chess. Researchers might, for instance, feed the computer photos that have been labelled by humans as containing cats. The computer receives the photos, which it interprets as numerical descriptions of pixels with various colour and brightness values, and runs them through layers of artificial neurons. Each neuron has a randomly chosen weight, a value by which it multiplies the value of the input data. The computer runs the input data through the layers of neurons and checks the output data against validation data to see how accurate the results are. It then repeats the process, altering the weights in each iteration until the output reaches a high accuracy. The process produces a statistical model of the values and the placement of pixels that define a cat. The network can then analyse a new photo and decide whether it matches the model โ€” that is, whether thereโ€™s a cat in the picture. But that cat model, once developed, is pretty much set in stone.
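The training loop described above can be sketched in a few lines of Python. This is a toy illustration only: invented data, a single artificial neuron rather than a deep network, but it shows the same cycle of multiplying inputs by randomly chosen weights, checking the output against labels, and altering the weights each iteration until accuracy is high.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "images": 4-pixel brightness vectors in [0, 1).
# Label 1 ("cat") if the first two pixels are jointly bright.
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = rng.normal(size=4)   # randomly chosen starting weights
b = 0.0
lr = 0.5

def predict(X, w, b):
    # weighted sum of inputs squashed to a 0-1 "cat score"
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(2000):                      # repeat the process...
    p = predict(X, w, b)
    w -= lr * X.T @ (p - y) / len(y)       # ...altering the weights
    b -= lr * np.mean(p - y)               # in each iteration

accuracy = np.mean((predict(X, w, b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Real systems stack many layers and millions of such weights, but the loop is the same shape.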

One way to get the computer to learn to identify many objects would be to develop lots of models. You could train one neural network to recognize cats and another to recognize dogs. That would require two data sets, one for each animal, and would double the time and computing power needed to develop each model. But suppose you wanted the computer to distinguish between pictures of cats and dogs. You would have to train a third network, either using all the original data or comparing the two existing models. Add other animals into the mix and yet more models must be developed.

Training and storing more models requires greater resources, and this can quickly become a problem. Training a neural network can take reams of data and weeks of time. For instance, an AI system called GPT-3, which learnt to produce text that sounds as if it was written by a human, required almost 15 days of training on 10,000 high-end computer processors1. The ImageNet data set, which is often used to train neural networks in object recognition, contains more than 14 million images. Depending on the subset of the total number of images that is used, it can take from a few minutes to more than a day and a half to download. Any machine that has to spend days re-learning a task each time it encounters new information will essentially grind to a halt.

Training some neural networks can require the power of a supercomputer. Credit: CasarsaGuru/Getty

One system that could make the generation of multiple models more efficient is Self-Net2, created by Rolando Estrada, a computer scientist at Georgia State University in Atlanta, and his students Jaya Mandivarapu and Blake Camp. Self-Net compresses the models, to prevent a system with a lot of different animal models from growing too unwieldy.

The system uses an autoencoder, a separate neural network that learns which parameters โ€” such as clusters of pixels in the case of image-recognition tasks โ€” the original neural network focused on when building its model. One layer of neurons in the middle of the autoencoder forces the machine to pick a tiny subset of the most important weights of the model. There might be 10,000 numerical values going into the model and another 10,000 coming out, but in the middle layer the autoencoder reduces that to just 10 numbers. So the system has to find the ten weights that will allow it to get the most accurate output, Estrada says.
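The bottleneck idea can be illustrated with a linear autoencoder, whose optimal compression is mathematically equivalent to a truncated singular-value decomposition; the sketch below uses that shortcut. The stand-in "weights", their sizes and the bottleneck width are all invented and do not reflect the actual Self-Net system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "model weights": 200 samples of 50 values that really vary
# along only 10 hidden directions, plus a little noise.
hidden = rng.normal(size=(200, 10))
mixing = rng.normal(size=(10, 50))
W = hidden @ mixing + 0.01 * rng.normal(size=(200, 50))

# A linear autoencoder with a 10-unit bottleneck has the same optimal
# solution as a rank-10 SVD, so use that directly.
U, s, Vt = np.linalg.svd(W - W.mean(0), full_matrices=False)
codes = U[:, :10] * s[:10]            # the 10 numbers kept per sample
recon = codes @ Vt[:10] + W.mean(0)   # decode back to all 50 values

err = np.linalg.norm(W - recon) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.4f}")
```

As with compressing a TIFF to a JPEG, a small loss of fidelity buys a large saving in storage.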


The process is similar to compressing a large TIFF image file down to a smaller JPEG, he says; thereโ€™s a small loss of fidelity, but what is left is good enough. The system tosses out most of the original input data, and then saves the ten best weights. It can then use those to perform the same cat-identification task with almost the same accuracy, without having to store enormous amounts of data.

To streamline the creation of models, computer scientists often use pre-training. Models that are trained to perform similar tasks have to learn similar parameters, at least in the early stages. Any neural network learning to recognize objects in images, for instance, first needs to learn to identify diagonal and vertical lines. Thereโ€™s no need to start from scratch each time, so newer models can be pre-trained with the weights that already recognize those basic features. To make models that can recognize cows or pigs or kangaroos, Estrada can pre-train other neural networks with the parameters from his autoencoder. Because all animals share some of the same facial features, even if the details of size or shape are different, such pre-training allows new models to be generated more efficiently.
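In code, pre-training amounts to copying the early-layer weights of an existing model into a new one, leaving only the later, task-specific layer to be learnt from scratch. The minimal sketch below is purely structural; the model names and layer sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two tiny two-layer models; all sizes are arbitrary.
cat_net = {"early": rng.normal(size=(8, 16)),   # basic features: lines, edges
           "late":  rng.normal(size=(16, 2))}   # task-specific: cat / not cat
# ...imagine cat_net has been fully trained on cat images...

# Pre-training: the new "cow" model inherits the early-layer weights,
# and only its late, task-specific layer starts from scratch.
cow_net = {"early": cat_net["early"].copy(),
           "late":  rng.normal(size=(16, 2))}

reused = cow_net["early"].size
fresh = cow_net["late"].size
print(f"weights reused: {reused}, trained from scratch: {fresh}")
```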

The system is not a perfect way to get networks to learn on the job, Estrada says. A human still has to tell the machine when to switch tasks; for example, when to start looking for horses instead of cows. That requires a human to stay in the loop, and it might not always be obvious to a person that it's time for the machine to do something different. But Estrada hopes to automate task switching, so that the computer learns to identify characteristics of the input data, uses them to decide which model to apply, and keeps operating without interruption.

Hava Siegelmann is a computer scientist at the University of Massachusetts Amherst.

Out with the old

It might seem that the obvious course is not to make multiple models but rather to grow a network. Instead of developing two networks for recognizing cats and horses respectively, for instance, it might appear easier to teach the cat-savvy network to also recognize horses. This approach, however, forces AI designers to confront one of the main issues in lifelong learning, a phenomenon known as catastrophic forgetting. A network trained to recognize cats will develop a set of weights across its artificial neurons that are specific to that task. If it is then asked to start identifying horses, it will start readjusting the weights to make it more accurate for horses. The model will no longer contain the right weights for cats, causing it to essentially forget what a cat looks like. โ€œThe memory is in the weights. When you train it with new information, you write on the same weights,โ€ says Siegelmann. โ€œYou can have a billion examples of a car driving, and now you teach it 200 examples related to some accident that you donโ€™t want to happen, and it may know these 200 cases and forget the billion.โ€
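Catastrophic forgetting is easy to reproduce with a toy model. In the sketch below (invented data, a single logistic "neuron", two deliberately contradictory tasks), the same weights are trained on task A and then retrained on task B; measuring task-A accuracy before and after shows the old skill being written over.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w, steps=3000, lr=0.5):
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# Two invented tasks that demand opposite weight settings.
X = rng.normal(size=(300, 5))
y_task_a = (X[:, 0] > 0).astype(float)   # task A: decided by feature 0
y_task_b = (X[:, 0] < 0).astype(float)   # task B: the reverse rule

w = train(X, y_task_a, np.zeros(5))
acc_a_before = accuracy(X, y_task_a, w)

w = train(X, y_task_b, w)                # retrain the SAME weights
acc_a_after = accuracy(X, y_task_a, w)

print(f"task-A accuracy before: {acc_a_before:.2f}, after: {acc_a_after:.2f}")
```

The memory is in the weights, so retraining on task B leaves nothing of task A behind.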

One method of overcoming catastrophic forgetting uses replay โ€” that is, taking data from a previously learnt task and interweaving them with new training data. This approach, however, runs head-on into the resource problem. โ€œReplay mechanisms are very memory hungry and computationally hungry, so we do not have models that can solve these problems in a resource-efficient way,โ€ Kudithipudi says. There might also be reasons not to store data, such as concerns about privacy or security, or because they belong to someone unwilling to share them indefinitely.
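The replay idea can be sketched with a toy linear-regression model and two invented tasks: training on task B erases the weight that task A relied on, but interweaving stored task-A examples with the new data preserves much of the old skill. Because this tiny model genuinely cannot fit both tasks perfectly at once, replay yields a compromise rather than a perfect fix, and keeping the old examples around is exactly the memory cost Kudithipudi describes.

```python
import numpy as np

rng = np.random.default_rng(4)

def train(X, y, w, steps=2000, lr=0.1):
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def task_a_error(w):
    return np.mean((X_a @ w - y_a) ** 2)

# Task A's output depends on feature 0; task B's on feature 1.
X_a = rng.normal(size=(200, 2)); y_a = 2.0 * X_a[:, 0]
X_b = rng.normal(size=(200, 2)); y_b = -1.0 * X_b[:, 1]

# Sequential training: learning task B drives the task-A weight to zero.
w = train(X_b, y_b, train(X_a, y_a, np.zeros(2)))
err_sequential = task_a_error(w)

# Replay: interweave stored task-A examples with the task-B data.
X_mix = np.vstack([X_b, X_a])
y_mix = np.concatenate([y_b, y_a])
w = train(X_mix, y_mix, train(X_a, y_a, np.zeros(2)))
err_replay = task_a_error(w)

print(f"task-A error without replay: {err_sequential:.2f}, "
      f"with replay: {err_replay:.2f}")
```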

Siegelmann says replay is roughly analogous to what the human brain does when it dreams. Many neuroscientists think that the brain consolidates memories and learns things by replaying experiences during sleep. Similarly, replay in neural networks can reinforce weights that might otherwise be overwritten. But the brain doesnโ€™t actually review a moment-by-moment rerun of its experiences, Siegelmann says. Rather, it reduces those experiences to a handful of characteristic features and patterns โ€” a process known as abstraction โ€” and replays just those parts. Her brain-inspired replay tries to do something similar; instead of reviewing mountains of stored data, it selects certain facets of what it has learnt to replay. Each layer in a neural network, Siegelmann says, moves the learning to a higher level of abstraction, from the specific input data in the bottom layer to mathematical relationships in the data at higher layers. In this way, the system sorts specific examples of objects into classes. She lets the network select the most important of the abstractions in the top couple of layers and replay those. This technique keeps the learnt weights reasonably stable โ€” although not perfectly so โ€” without having to store any previously used data at all.

Computer scientist Dhireesha Kudithipudi (right) and her student Nicholas Soures discuss factors that affect continual learning. Credit: Tej Pandit

Because such brain-inspired replay focuses on the most salient points that the network has learnt, the network can find associations between new and old data more easily. The method also helps the network to distinguish between pieces of data that it might not have separated easily before โ€” finding the differences between a pair of identical twins, for example. If youโ€™re down to only a handful of parameters in each set, instead of millions, itโ€™s easier to spot the similarities. โ€œNow, when we replay one with the other, we start looking at the differences,โ€ Siegelmann says. โ€œIt forces you to find the separation, the contrast, the associations.โ€

Focusing on high-level abstractions rather than specifics is useful for continual learning because it allows the computer to make comparisons and draw analogies between different scenarios. For example, if your self-driving car has to work out how to handle driving on ice in Massachusetts, Siegelmann says, it might use data that it has about driving on ice in Michigan. Those examples wonโ€™t exactly match the new conditions, because theyโ€™re from different roads. But the car also has knowledge about driving on snow in Massachusetts, where it is familiar with the roads. So if the car can identify only the most important differences and similarities between snow and ice, Massachusetts and Michigan, instead of getting bogged down in minor details, it might come up with a solution to the specific, new situation of driving on ice in Massachusetts.

A modular approach

Looking at how the brain handles these issues can inspire ideas, even if they don't replicate what's going on biologically. To meet the need for a neural network that can learn new tasks without overwriting old ones, scientists take a cue from neurogenesis — the process by which neurons are formed in the brain. A machine can't grow parts the way a body can, but computer scientists can replicate new neurons in software by generating connections in parts of the system. Although the mature neurons have learnt to react to only certain data inputs, these 'baby neurons' can respond to all the input. "They can react to new samples that are fed into the model," Kudithipudi says. In other words, they can learn from new information while the already-trained neurons retain what they've learnt.
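A structural sketch of that growth step (sizes and data invented): freeze the weight columns of the mature units and append freshly initialised, trainable "baby" columns, so that growing the layer leaves the old responses untouched.

```python
import numpy as np

rng = np.random.default_rng(5)

# A trained hidden layer whose weights are now fixed ("mature neurons").
mature = rng.normal(size=(4, 8))

# Growth step: append freshly initialised, trainable columns ("baby neurons").
baby = 0.1 * rng.normal(size=(4, 4))
grown = np.hstack([mature, baby])

def forward(x, W):
    return np.maximum(0, x @ W)   # ReLU activations of the hidden units

x = rng.normal(size=(1, 4))       # one new input sample
out = forward(x, grown)

# The mature units respond exactly as before the growth step; gradient
# updates would be applied to the baby columns only.
print(out.shape)
```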

Adding more neurons is just one way to enable a system to learn new things. Estrada has come up with another approach, based on the fact that a neural network is only a loose approximation of a human brain. "We call the nodes in a neural network 'neurons'. But if you see what they're actually doing, they're basically computing a weighted sum. It's an incredibly simplified view of real, biological neurons, which perform all sorts of complex nonlinear signal processing."

In an effort to mimic some of the complicated behaviours of real neurons more successfully, Estrada and his students developed what he calls deep artificial neurons (DANs)3. A DAN is a small neural network that is treated as a single neuron in a larger neural network.


DANs can be trained for one particular task โ€” for instance, Estrada might develop one for identifying handwritten numbers. The model in the DAN is then fixed, so it canโ€™t be changed and will always provide the same output to other neurons in the still-trainable network layers surrounding it. That larger network can go on to learn a related task, such as identifying numbers written by someone else โ€” but the original model is not forgotten. โ€œYou end up with this general-purpose module that you can reuse for similar tasks in the future,โ€ Estrada says. โ€œThese modules allow the system to learn to perform the new tasks in a similar way to the old tasks, so that the features are more compatible with each other over time. So that means that the features are more stable and it forgets less.โ€
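The DAN arrangement can be sketched as a small frozen network exposed as a single unit inside a larger, still-trainable one. This is an illustrative reconstruction from the description above, not the authors' code; every size and name is invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def relu(z):
    return np.maximum(0, z)

class DAN:
    """A small neural network treated as a single, frozen 'neuron'."""
    def __init__(self):
        self.w1 = rng.normal(size=(3, 5))
        self.w2 = rng.normal(size=(5, 1))
    def __call__(self, x):
        # Always the same mapping: these weights are never retrained.
        return relu(relu(x @ self.w1) @ self.w2)

dan = DAN()
outer_w = rng.normal(size=(1, 2))   # the surrounding, trainable layer

x = rng.normal(size=(4, 3))         # a batch of four inputs
hidden = dan(x)                     # the fixed module's output, shape (4, 1)
y = hidden @ outer_w                # gradients would update outer_w only
print(y.shape)
```

Because the DAN's output is always the same for a given input, the surrounding layers learn features that stay compatible with it over time.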

So far, Estrada and his colleagues have shown that this technique works on fairly simple tasks, such as number recognition. But theyโ€™re trying to adapt it to more challenging problems, including learning how to play old video games such as Space Invaders. โ€œAnd then, if thatโ€™s successful, we could use it for more sophisticated things,โ€ says Estrada. It might, for instance, prove useful in autonomous drones, which are sent out with basic programming but have to adapt to new data in the environment, and will have to do any on-the-fly learning within tight power and processing constraints.

Thereโ€™s a long way to go before AI can function as people do, dealing with an endless variety of ever-changing scenarios. But if computer scientists can develop the techniques to allow machines to make the continual adaptations that living creatures are capable of, it could go a long way towards making AI systems more versatile, more accurate and more recognizably intelligent.

doi: https://doi.org/10.1038/d41586-022-01962-y

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with the financial support of third parties.

References

  1. Patterson, D. et al. Preprint at https://arxiv.org/abs/2104.10350 (2021).
  2. Mandivarapu, J. K., Camp, B. & Estrada, R. Front. Artif. Intell. 3, 19 (2020).
  3. Camp, B., Mandivarapu, J. K. & Estrada, R. Preprint at https://arxiv.org/abs/2011.07035 (2020).

This article was authored by Marcus Woo and originally published in Nature

Fork in hand, a robot arm skewers a strawberry from above and delivers it to Tyler Schrenkโ€™s mouth. Sitting in his wheelchair, Schrenk nudges his neck forward to take a bite. Next, the arm goes for a slice of banana, then a carrot. Each motion it performs by itself, on Schrenkโ€™s spoken command.

For Schrenk, who became paralysed from the neck down after a diving accident in 2012, such a device would make a huge difference in his daily life if it were in his home. โ€œGetting used to someone else feeding me was one of the strangest things I had to transition to,โ€ he says. โ€œIt would definitely help with my well-being and my mental health.โ€

His home is already fitted with voice-activated power switches and door openers, enabling him to be independent for about 10 hours a day without a caregiver. "I've been able to figure most of this out," he says. "But feeding on my own is not something I can do." Which is why he wanted to test the feeding robot, dubbed ADA (short for assistive dexterous arm).

Cameras located above the fork enable ADA to see what to pick up. But knowing how forcefully to stick a fork into a soft banana or a crunchy carrot, and how tightly to grip the utensil, requires a sense that humans take for granted: "Touch is key," says Tapomayukh Bhattacharjee, a roboticist at Cornell University in Ithaca, New York, who led the design of ADA while at the University of Washington in Seattle. The robot's two fingers are equipped with sensors that measure the sideways (or shear) force when holding the fork1. The system is just one example of a growing effort to endow robots with a sense of touch.


โ€œThe really important things involve manipulation, involve the robot reaching out and changing something about the world,โ€ says Ted Adelson, a computer-vision specialist at the Massachusetts Institute of Technology (MIT) in Cambridge. Only with tactile feedback can a robot adjust its grip to handle objects of different sizes, shapes and textures. With touch, robots can help people with limited mobility, pick up soft objects such as fruit, handle hazardous materials and even assist in surgery. Tactile sensing also has the potential to improve prosthetics, help people to literally stay in touch from afar, and even has a part to play in fulfilling the fantasy of the all-purpose household robot that will take care of the laundry and dishes. โ€œIf we want robots in our home to help us out, then weโ€™d want them to be able to use their hands,โ€ Adelson says. โ€œAnd if youโ€™re using your hands, you really need a sense of touch.โ€

With this goal in mind, and buoyed by advances in machine learning, researchers around the world are developing myriad tactile sensors, from finger-shaped devices to electronic skins. The idea isnโ€™t new, says Veronica Santos, a roboticist at the University of California, Los Angeles. But advances in hardware, computational power and algorithmic knowhow have energized the field. โ€œThere is a new sense of excitement about tactile sensing and how to integrate it with robots,โ€ Santos says.

Feel by sight

One of the most promising sensors relies on well-established technology: cameras. Todayโ€™s cameras are inexpensive yet powerful, and combined with sophisticated computer vision algorithms, theyโ€™ve led to a variety of tactile sensors. Different designs use slightly different techniques, but they all interpret touch by visually capturing how a material deforms on contact.

ADA uses a popular camera-based sensor called GelSight, the first prototype of which was designed by Adelson and his team more than a decade ago2. A light and a camera sit behind a piece of soft rubbery material, which deforms when something presses against it. The camera then captures the deformation with super-human sensitivity, discerning bumps as small as one micrometre. GelSight can also estimate forces, including shear forces, by tracking the motion of a pattern of dots printed on the rubbery material as it deforms2.
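Once dots have been matched between frames, the tracking idea reduces to simple geometry. In this invented example, a uniform sideways drag of the dot pattern is recovered as the mean displacement; real GelSight processing is far more sophisticated, and all the numbers below are made up.

```python
import numpy as np

rng = np.random.default_rng(7)

# Printed dot pattern on the gel: (x, y) image positions of 50 dots.
dots_rest = rng.uniform(0, 100, size=(50, 2))

# A sideways load drags every dot roughly the same way (values invented);
# pressing also adds small local jitter.
true_shear = np.array([1.8, -0.6])        # pixels of sideways drag
dots_loaded = dots_rest + true_shear + 0.05 * rng.normal(size=(50, 2))

# With dot correspondence known (here, by construction), the shear
# estimate is simply the mean displacement between the two frames.
estimated_shear = (dots_loaded - dots_rest).mean(axis=0)
print(estimated_shear)
```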

GelSight is not the first or the only camera-based sensor (ADA was tested with another one, called FingerVision). However, its relatively simple and easy-to-manufacture design has so far set it apart, says Roberto Calandra, a research scientist at Meta AI (formerly Facebook AI) in Menlo Park, California, who has collaborated with Adelson. In 2011, Adelson co-founded a company, also called GelSight, based on the technology he had developed. The firm, which is based in Waltham, Massachusetts, has focused its efforts on industries such as aerospace, using the sensor technology to inspect for cracks and defects on surfaces.

GelSight, a camera-based sensor, can be used for 3D analysis of aeroplane fuselages (left). The composite images it produces (right) show cracks and defects. Credit: GelSight

One of the latest camera-based sensors is called Insight, documented this year by Huanbo Sun, Katherine Kuchenbecker and Georg Martius at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany3. The finger-like device consists of a soft, opaque, tent-like dome held up with thin struts, hiding a camera inside.

Itโ€™s not as sensitive as GelSight, but it offers other advantages. GelSight is limited to sensing contact on a small, flat patch, whereas Insight detects touch all around its finger in 3D, Kuchenbecker says. Insightโ€™s silicone surface is also easier to fabricate, and it determines forces more precisely. Kuchenbecker says that Insightโ€™s bumpy interior surface makes forces easier to see, and unlike GelSightโ€™s method of first determining the geometry of the deformed rubber surface and then calculating the forces involved, Insight determines forces directly from how light hits its camera. Kuchenbecker thinks this makes Insight a better option for a robot that needs to grab and manipulate objects; Insight was designed to form the tips of a three-digit robot gripper called TriFinger.

Skin solutions

Camera-based sensors are not perfect. For example, they cannot sense invisible forces, such as the magnitude of tension in a taut rope or wire. A camera's frame-rate might also not be quick enough to capture fleeting sensations, such as a slipping grip, Santos says. And squeezing a relatively bulky camera-based sensor into a robot finger or hand, which might already be crowded with other sensors or actuators (the components that allow the hand to move), can also pose a challenge.

This is one reason other researchers are designing flat and flexible devices that can wrap around a robot appendage. Zhenan Bao, a chemical engineer at Stanford University in California, is designing skins that incorporate flexible electronics and replicate the bodyโ€™s ability to sense touch. In 2018, for example, her group created a skin that detects the direction of shear forces by mimicking the bumpy structure of a below-surface layer of human skin called the spinosum4.

Zhenan Bao is a chemical engineer at Stanford University in California. Credit: Bao Lab

When a gentle touch presses the outer layer of human skin against the dome-like bumps of the spinosum, receptors in the bumps feel the pressure. A firmer touch activates deeper-lying receptors found below the bumps, distinguishing a hard touch from a soft one. And a sideways force is felt as pressure pushing on the side of the bumps.

Baoโ€™s electronic skin similarly features a bumpy structure that senses the intensity and direction of forces. Each one-millimetre bump is covered with 25 capacitors, which store electrical energy and act as individual sensors. When the layers are pressed together, the amount of stored energy changes. Because the sensors are so small, Bao says, a patch of electronic skin can pack in a lot of them, enabling the skin to sense forces accurately and aiding a robot to perform complex manipulations of an object.

To test the skin, the researchers attached a patch to the fingertip of a rubber glove worn by a robot hand. The hand could pat the top of a raspberry and pick up a ping-pong ball without crushing either.

Zhenan Bao and her group at Stanford University in California have created electronic skin that can interact with delicate objects such as raspberries. Credit: Bao Lab

Although other electronic skins might not be as sensor-dense, they tend to be easier to fabricate. In 2020, Benjamin Tee, a former student of Bao who now leads his own laboratory at the National University of Singapore, developed a sponge-like polymer that can sense shear forces5. Moreover, similar to human skin, it is self-healing: after being torn or cut, it fuses back together when heated and stays stretchy, which is useful for dealing with wear and tear.

The material, dubbed AiFoam, is embedded with flexible copper wire electrodes, roughly emulating how nerves are distributed in human skin. When touched, the foam deforms and the electrodes squeeze together, which changes the electrical current travelling through it. This allows both the strength and direction of forces to be measured. AiFoam can even sense a personโ€™s presence just before they make contact โ€” when their finger comes within a few centimetres, it lowers the electric field between the foamโ€™s electrodes.

AiFoam is a sponge-like polymer that can sense shear forces and self-heal. Credit: National University of Singapore

Last November, researchers at Meta AI and Carnegie Mellon University in Pittsburgh, Pennsylvania, announced a touch-sensitive skin comprising a rubbery material embedded with magnetic particles6. When the skin, dubbed ReSkin, deforms, the particles move along with it, changing the magnetic field. It is designed to be easily replaced — it can be peeled off and a fresh skin installed without requiring complex recalibration — and 100 sensors can be produced for less than US$6.

Rather than being universal tools, different skins and sensors will probably lend themselves to particular purposes. Bhattacharjee and his colleagues, for example, have created a stretchable sleeve that fits over a robot arm and is useful for sensing incidental contact between a robotic arm and its environment7. The sheet is made from layered fabric that detects changes in electrical resistance when pressure is applied to it. It canโ€™t detect shear forces, but it can cover a broad area and wrap around a robotโ€™s joints.

Bhattacharjee is using the sleeve to identify not just when a robotic arm comes into contact with something as it moves through a cluttered environment, but also what it bumps up against. If a helper robot in a home brushed against a curtain while reaching for an object, it might be fine for it to continue, but contact with a fragile wine glass would require evasive action.

Other approaches use air to provide a sense of touch. Some robots use suction grippers to pick up and move objects in warehouses or in the oceans. In these cases, Hannah Stuart, a mechanical engineer at the University of California, Berkeley, is hoping that measuring suction airflow can provide tactile feedback to a robot. Her group has shown that the rate of airflow can determine the strength of the suction gripperโ€™s hold and even the roughness of the surface it is suckered on to8. And underwater, it can reveal how an object moves while being held by a suction-aided robot hand9.

Processing feelings

Todayโ€™s tactile technologies are diverse, Kuchenbecker says. โ€œThere are multiple feasible options, and people can build on the work of others,โ€ she says. But designing and building sensors is only the start. Researchers then have to integrate them into a robot, which must then work out how to use a sensorโ€™s information to execute a task. โ€œThatโ€™s actually going to be the hardest part,โ€ Adelson says.

For electronic skins that contain a multitude of sensors, processing and analysing data from them all would be computationally and energy intensive. To handle so many data, researchers such as Bao are taking inspiration from the human nervous system, which processes a constant flood of signals with ease. Computer scientists have been trying to mimic the nervous system with neuromorphic computers for more than 30 years. But Baoโ€™s goal is to combine a neuromorphic approach with a flexible skin that could integrate with the body seamlessly โ€” for example, on a bionic arm.


Unlike other tactile sensors, Bao's skins deliver sensory signals as electrical pulses, like those in biological nerves. Information is stored not in the intensity of the pulses, which can wane as a signal travels, but in their frequency. As a result, the signal loses little information as the distance it travels increases, she explains.
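The robustness of frequency coding can be seen in a toy encoder and decoder: the reading depends only on how often pulses arrive, so shrinking their amplitude en route changes nothing. All rates, gains and units below are invented for illustration.

```python
import numpy as np

def encode(pressure, duration=1.0, base_rate=5.0, gain=40.0):
    """Fire pulses at a rate proportional to pressure (units invented)."""
    rate = base_rate + gain * pressure       # pulses per second
    n = int(rate * duration)
    return np.linspace(0.0, duration, n, endpoint=False)  # pulse times

def decode(pulse_times, duration=1.0, base_rate=5.0, gain=40.0):
    """Recover pressure from how often pulses arrive, not how tall they are."""
    rate = len(pulse_times) / duration
    return (rate - base_rate) / gain

pulses = encode(0.5)
# Attenuation along the wire shrinks pulse heights but not their count,
# so the decoded reading is unchanged.
print(decode(pulses))
```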

Pulses from multiple sensors would meet at devices called synaptic transistors, which combine the signals into a pattern of pulses โ€” similar to what happens when nerves meet at synaptic junctions. Then, instead of processing signals from every sensor, a machine-learning algorithm needs only to analyse the signals from several synaptic junctions, learning whether those patterns correspond to, say, the fuzz of a sweater or the grip of a ball.

In 2018, Baoโ€™s lab built this capability into a simple, flexible, artificial nerve system that could identify Braille characters10. When attached to a cockroachโ€™s leg, the device could stimulate the insectโ€™s nerves โ€” demonstrating the potential for a prosthetic device that could integrate with a living creatureโ€™s nervous system.

Ultimately, to make sense of sensor data, a robot must rely on machine learning. Conventionally, processing a sensorโ€™s raw data was tedious and difficult, Calandra says. To understand the raw data and convert them into physically meaningful numbers such as force, roboticists had to calibrate and characterize the sensor. With machine learning, roboticists can skip these laborious steps. The algorithms enable a computer to sift through a huge amount of raw data and identify meaningful patterns by itself. These patterns โ€” which can represent a sufficiently tight grip or a rough texture โ€” can be learnt from training data or from computer simulations of its intended task, and then applied in real-life scenarios.
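A toy version of that pattern-finding step: instead of hand-calibrating a sensor into physical units, fit a simple learned rule to raw traces and let it discover for itself that signal variance separates rough from smooth surfaces. The traces here are synthetic and the nearest-centroid rule is a stand-in for the far richer models roboticists actually use.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic raw traces: rough surfaces produce jittery pressure signals,
# smooth ones steady signals (all numbers invented).
def trace(rough):
    noise = 0.5 if rough else 0.05
    return 1.0 + noise * rng.normal(size=64)

X = np.array([trace(r) for r in [True] * 40 + [False] * 40])
y = np.array([1] * 40 + [0] * 40)   # 1 = rough, 0 = smooth

# No hand calibration: a learned rule (nearest centroid on a summary of
# the raw trace) discovers that variance separates the two classes.
feats = X.std(axis=1)
centroids = np.array([feats[y == c].mean() for c in (0, 1)])

def classify(raw):
    return int(np.argmin(np.abs(raw.std() - centroids)))

print(classify(trace(True)), classify(trace(False)))
```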

"We've really just begun to explore artificial intelligence for touch sensing," Calandra says. "We are nowhere near the maturity of other fields like computer vision or natural language processing." Computer-vision data are based on a two-dimensional array of pixels, an approach that computer scientists have exploited to develop better algorithms, he says. But researchers still don't fully know what a comparable structure might be for tactile data. Understanding the structure of those data, and learning how to take advantage of them to create better algorithms, will be one of the biggest challenges of the next decade.

Barrier removal

The boom in machine learning and the variety of emerging hardware bodes well for the future of tactile sensing. But the plethora of technologies is also a challenge, researchers say. Because so many labs have their own prototype hardware, software and even data formats, scientists have a difficult time comparing devices and building on one another's work. And if roboticists want to incorporate touch sensing into their work for the first time, they would have to build their own sensors from scratch — an often expensive task, and not necessarily in their area of expertise.
This is why, last November, GelSight and Meta AI announced a partnership to manufacture a camera-based fingertip-like sensor called DIGIT. With a listed price of $300, the device is designed to be a standard, relatively cheap, off-the-shelf sensor that can be used in any robot. "It definitely helps the robotics community, because the community has been hindered by the high cost of hardware," Santos says.

Depending on the task, however, you don't always need such advanced hardware. In a paper published in 2019, a group at MIT led by Subramanian Sundaram built sensors by sandwiching together a few layers of material whose electrical resistance changes under pressure [11]. These sensors were then incorporated into gloves, at a total material cost of just $10. When aided by machine learning, even a tool as simple as this can help roboticists to better understand the nuances of grip, Sundaram says.
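A sketch of how such a resistive layer is typically read out (generic voltage-divider arithmetic, not the MIT group's actual circuit): the layer's resistance drops under pressure, and a fixed resistor converts that change into a measurable voltage.

```python
# Hypothetical readout of a force-sensitive layer in a voltage divider:
# Vout = Vcc * R_fixed / (R_fixed + R_sensor), so as pressure lowers the
# layer's resistance, the output voltage rises.
VCC = 3.3            # supply voltage (volts)
R_FIXED = 10_000.0   # fixed divider resistor (ohms)

def sensor_resistance(v_out):
    """Invert the divider equation to recover the layer's resistance."""
    return R_FIXED * (VCC - v_out) / v_out

def is_pressed(v_out, threshold_ohms=20_000.0):
    """Lower resistance means more pressure on the layer."""
    return sensor_resistance(v_out) < threshold_ohms

light_touch = is_pressed(0.3)   # high resistance: barely touched
firm_press = is_pressed(2.5)    # low resistance: pressed hard
```

A microcontroller's analogue input reads Vout directly, which is what keeps the per-sensor cost so low; the machine learning then operates on arrays of such readings.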

Not every roboticist is a machine-learning specialist, either. To aid with this, Meta AI has released open source software for researchers to use. "My hope is by open-sourcing this ecosystem, we're lowering the entry bar for new researchers who want to approach the problem," Calandra says. "This is really the beginning."

Although grip and dexterity continue to be a focus of robotics, that's not all tactile sensing is useful for. A soft, slithering robot might need to feel its way around to navigate rubble as part of search-and-rescue operations, for instance. Or a robot might simply need to feel a pat on the back: Kuchenbecker and her student Alexis Block have built a robot with torque sensors in its arms, and a pressure sensor and microphone inside a soft, inflatable body, that can give a comfortable and pleasant hug and then release when you let go. That kind of human-like touch is essential to many robots that will interact with people, including prosthetics, domestic helpers and remote avatars. These are the areas in which tactile sensing might be most important, Santos says. "It's really going to be the human–robot interaction that's going to drive it."

Alexis Block, a postdoc at the University of California, Los Angeles, experiences a hug from a HuggieBot, a robot she helped to create that can feel when someone pats or squeezes it. Credit: Alexis E. Block

So far, robotic touch is confined mainly to research labs. "There's a need for it, but the market isn't quite there," Santos says. But some of those who have been given a taste of what might be achievable are already impressed. Schrenk's tests of ADA, the feeding robot, provided a tantalizing glimpse of independence. "It was just really cool," he says. "It was a look into the future for what might be possible for me."

doi: https://doi.org/10.1038/d41586-022-01401-y

This article is part of Nature Outlook: Robotics and artificial intelligence, an editorially independent supplement produced with the financial support of third parties.

References

  1. Song, H., Bhattacharjee, T. & Srinivasa, S. S. 2019 International Conference on Robotics and Automation 8367–8373 (IEEE, 2019).
  2. Yuan, W., Dong, S. & Adelson, E. H. Sensors 17, 2762 (2017).
  3. Sun, H., Kuchenbecker, K. J. & Martius, G. Nature Mach. Intell. 4, 135–145 (2022).
  4. Boutry, C. M. et al. Sci. Robot. 3, aau6914 (2018).
  5. Guo, H. et al. Nature Commun. 11, 5747 (2020).
  6. Bhirangi, R., Hellebrekers, T., Majidi, C. & Gupta, A. Preprint at http://arxiv.org/abs/2111.00071 (2021).
  7. Wade, J., Bhattacharjee, T., Williams, R. D. & Kemp, C. C. Robot. Auton. Syst. 96, 1–14 (2017).
  8. Huh, T. M. et al. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems 1786–1793 (IEEE, 2021).
  9. Nadeau, P., Abbott, M., Melville, D. & Stuart, H. S. 2020 IEEE International Conference on Robotics and Automation 3701–3707 (IEEE, 2020).
  10. Kim, Y. et al. Science 360, 998–1003 (2018).
  11. Sundaram, S. et al. Nature 569, 698–702 (2019).