Google Unveils AI for Predicting Behavior of Human Molecules

The system, AlphaFold3, could accelerate efforts to understand the human body and fight disease.

Artificial intelligence is giving machines the power to generate videos, write computer code and even carry on a conversation. It is also accelerating efforts to understand the human body and fight disease.

On Wednesday, Google DeepMind, the tech giant's central artificial intelligence lab, and Isomorphic Labs, a sister company, unveiled a more powerful version of AlphaFold, an artificial intelligence technology that helps scientists understand the behavior of the microscopic mechanisms that drive the cells in the human body.

An early version of AlphaFold, released in 2020, solved a puzzle that had bedeviled scientists for more than 50 years. It was called "the protein folding problem."

Proteins are the microscopic molecules that drive the behavior of all living things. These molecules begin as strings of chemical compounds before twisting and folding into three-dimensional shapes that define how they interact with other microscopic mechanisms in the body.

Biologists spent years or even decades trying to pinpoint the shape of individual proteins. Then AlphaFold came along. When a scientist fed this technology a string of amino acids that make up a protein, it could predict the three-dimensional shape within minutes.

When DeepMind publicly released AlphaFold a year later, biologists began using it to accelerate drug discovery. Researchers at the University of California, San Francisco, used the technology as they worked to understand the coronavirus and prepare for similar pandemics. Others used it as they struggled to find remedies for malaria and Parkinson's disease.

The hope is that this kind of technology will significantly streamline the creation of new drugs and vaccines.

A segment of a video from Google DeepMind demonstrating the new AlphaFold3 technology. (Video by Google DeepMind)

"It tells us a lot more about how the machines of the cell interact," said John Jumper, a Google DeepMind researcher. "It tells us how this should work and what happens when we get sick."

The new version of AlphaFold — AlphaFold3 — extends the technology beyond protein folding. In addition to predicting the shapes of proteins, it can predict the behavior of other microscopic biological mechanisms, including DNA, where the body stores genetic information, and RNA, which transfers information from DNA to proteins.

"Biology is a dynamic system. You need to understand the interactions between different molecules and structures," said Demis Hassabis, Google DeepMind's chief executive and the founder of Isomorphic Labs, which Google also owns. "This is a step in that direction."

Demis Hassabis, Google DeepMind's chief executive and the founder of Isomorphic Labs. (Taylor Hill/Getty Images)

The company is offering a website where scientists can use AlphaFold3. Other labs, most notably one at the University of Washington, offer similar technology. In a paper released on Tuesday in the scientific journal Nature, Dr. Jumper and his fellow researchers show that AlphaFold3 achieves a level of accuracy well beyond the state of the art.

The technology could "save months of experimental work and enable research that was previously impossible," said Deniz Kavi, a co-founder and the chief executive of Tamarind Bio, a start-up that builds technology for accelerating drug discovery. "This represents tremendous promise."
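The workflow the researchers describe has a simple contract: an amino-acid sequence goes in, and predicted three-dimensional coordinates with confidence estimates come out. The sketch below illustrates only that contract; `predict_structure` is a hypothetical stand-in, not the real AlphaFold3 interface, which DeepMind offers through a web server rather than a public code library.

```python
# Illustrative sketch only: `predict_structure` is a hypothetical stand-in for a
# structure-prediction model, not the real AlphaFold3 API.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StructurePrediction:
    # One (x, y, z) coordinate per residue's alpha-carbon, in angstroms.
    ca_coordinates: List[Tuple[float, float, float]]
    # Per-residue confidence scores (0-100); higher means more reliable.
    plddt: List[float]

def predict_structure(sequence: str) -> StructurePrediction:
    """Hypothetical stand-in: a real model would fold the chain; this one just
    places residues along a straight line so the example runs end to end."""
    coords = [(3.8 * i, 0.0, 0.0) for i in range(len(sequence))]  # ~3.8 A spacing
    scores = [50.0] * len(sequence)
    return StructurePrediction(ca_coordinates=coords, plddt=scores)

# A scientist's workflow in outline: paste a sequence, read back a shape.
sequence = "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF"  # short fragment, for illustration
prediction = predict_structure(sequence)
print(len(prediction.ca_coordinates), "residues placed")
```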


A.I. Turns Its Artistry to Creating New Human Proteins

Inspired by digital art generators like DALL-E, biologists are building artificial intelligences that can fight cancer, flu and Covid.

An example of an animated diffusion model of A.I.-generated proteins. (Video by Ian C. Haydon/University of Washington Institute for Protein Design)

Last spring, an artificial intelligence lab called OpenAI unveiled technology that lets you create digital images simply by describing what you want to see. Called DALL-E, it sparked a wave of similar tools with names like Midjourney and Stable Diffusion. Promising to speed the work of digital artists, this new breed of artificial intelligence captured the imagination of both the public and the pundits — and threatened to generate new levels of online disinformation.

Social media is now teeming with the surprisingly conceptual, shockingly detailed, often photorealistic images generated by DALL-E and other tools. "Photo of a teddy bear riding a skateboard in Times Square." "Cute corgi in a house made out of sushi." "Jeflon Zuckergates."

But when some scientists consider this technology, they see more than just a way of creating fake photos. They see a path to a new cancer treatment or a new flu vaccine or a new pill that helps you digest gluten.

Using many of the same techniques that underpin DALL-E and other art generators, these scientists are generating blueprints for new proteins — tiny biological mechanisms that can change the way our bodies behave.

Our bodies naturally produce about 20,000 proteins, which handle everything from digesting food to moving oxygen through the bloodstream. Now, researchers are working to create proteins that are not found in nature, hoping to improve our ability to fight disease and do things that our bodies cannot on their own.

David Baker, the director of the Institute for Protein Design at the University of Washington, has been working to build artisanal proteins for more than 30 years. By 2017, he and his team had shown this was possible. But they did not anticipate how the rise of new A.I. technologies would suddenly accelerate this work, shrinking the time needed to generate new blueprints from years down to weeks.

"What we need are new proteins that can solve modern-day problems, like cancer and viral pandemics," Dr. Baker said. "We can't wait for evolution." He added, "Now, we can design these proteins much faster, and with much higher success rates, and create much more sophisticated molecules that can help solve these problems."

David Baker of the University of Washington. (Evan McGlinn for The New York Times)

Last year, Dr. Baker and his fellow researchers published a pair of papers in the journal Science describing how various A.I. techniques could accelerate protein design. But these papers have already been eclipsed by a newer one that draws on the techniques that drive tools like DALL-E, showing how new proteins can be generated from scratch much like digital photos.

"One of the most powerful things about this technology is that, like DALL-E, it does what you tell it to do," said Nate Bennett, one of the researchers working in the University of Washington lab. "From a single prompt, it can generate an endless number of designs."

To generate images, DALL-E relies on what artificial intelligence researchers call a neural network, a mathematical system loosely modeled on the network of neurons in the brain. This is the same technology that recognizes the commands you bark into your smartphone, enables self-driving cars to identify (and avoid) pedestrians and translates languages on services like Skype.

A neural network learns skills by analyzing vast amounts of digital data. By pinpointing patterns in thousands of corgi photos, for instance, it can learn to recognize a corgi. With DALL-E, researchers built a neural network that looked for patterns as it analyzed millions of digital images and the text captions that described what each of these images depicted. In this way, it learned to recognize the links between the images and the words.

When you describe an image for DALL-E, a neural network generates a set of key features that this image may include. One feature might be the curve of a teddy bear's ear. Another might be the line at the edge of a skateboard. Then, a second neural network — called a diffusion model — generates the pixels needed to realize these features.

The diffusion model is trained on a series of images in which noise — imperfection — is gradually added to a photograph until it becomes a sea of random pixels. As it analyzes these images, the model learns to run this process in reverse. When you feed it random pixels, it removes the noise, transforming these pixels into a coherent image. (A toy sketch of this noising-and-denoising loop appears at the end of this article.)

At the University of Washington, other academic labs and new start-ups, researchers are using similar techniques in their effort to create new proteins.

Proteins begin as strings of chemical compounds, which then twist and fold into three-dimensional shapes that define how they behave. In recent years, artificial intelligence labs like DeepMind, owned by Alphabet, the same parent company as Google, have shown that neural networks can accurately guess the three-dimensional shape of any protein in the body based just on the smaller compounds it contains — an enormous scientific advance.

Now, researchers like Dr. Baker are taking another step, using these systems to generate blueprints for entirely new proteins that do not exist in nature. The goal is to create proteins that take on very specific shapes; a particular shape can serve a particular task, such as fighting the virus that causes Covid.

Much as DALL-E leverages the relationship between captions and photographs, similar systems can leverage the relationship between a description of what the protein can do and the shape it adopts. Researchers can provide a rough outline for the protein they want, then a diffusion model can generate its three-dimensional shape.

A protein diffusion model doing unconditional generation, converting noise into plausible structures. (Video by Namrata Anand)

Namrata Anand, a former Stanford University researcher, is now building a company in generative A.I. protein design. (Herve Philippe/TerrificShot Photography)

"With DALL-E, you can ask for an image of a panda eating a shoot of bamboo," said Namrata Anand, a former Stanford University researcher who is also an entrepreneur building a company in this area of research. "Equivalently, protein engineers can ask for a protein that binds to another in a particular way — or some other design constraint — and the generative model can build it."

The difference is that the human eye can instantly judge the fidelity of a DALL-E image. It cannot do the same with a protein structure. After artificial intelligence technologies produce these protein blueprints, scientists must still take them into a wet lab — where experiments can be done with real chemical compounds — and make sure they do what they are supposed to do.

For this reason, some experts say that the latest artificial intelligence technologies should be taken with a grain of salt. "Making a new structure is just a game," said Frances Arnold, a Nobel laureate who is a professor specializing in protein engineering at the California Institute of Technology. "What really matters is: What can that structure actually do?"

But for many researchers, these new techniques are not just accelerating the creation of new protein candidates for the wet lab. They provide a way of exploring innovations that researchers could not previously explore on their own.

"What's exciting isn't just that they are creative and explore unexpected possibilities, but that they are creative while satisfying certain design objectives or constraints," said Jue Wang, a researcher at the University of Washington. "This saves you from needing to check every possible protein in the universe."

Often, artificially intelligent machines are developed to perform skills that come naturally to humans, like piecing together images, writing text or playing board games. Protein-designing bots pose a more profound question, Dr. Wang said: "What can machines do that humans can't do at all?"
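The noising-and-denoising loop described above can be shown in miniature. The example below is a toy, not the University of Washington system or DALL-E: the "image" is a short vector, the forward process blends it toward Gaussian noise step by step, and a stand-in `denoise_step` simply nudges samples back toward the training example to mark where a learned network would slot in.

```python
# Toy sketch of a diffusion process, for illustration only. A real model
# (an image decoder or a protein-structure diffusion model) would replace
# the `denoise_step` stand-in with a trained neural network.

import numpy as np

rng = np.random.default_rng(0)

# Our "image": a tiny vector standing in for pixels (or protein coordinates).
clean = np.array([1.0, -2.0, 0.5, 3.0])

def add_noise(x, step, total_steps):
    """Forward process: blend the signal toward pure Gaussian noise."""
    alpha = 1.0 - step / total_steps          # how much of the signal survives
    return alpha * x + (1.0 - alpha) * rng.normal(size=x.shape)

def denoise_step(x, target):
    """Stand-in for the learned reverse step: a trained network would predict
    and remove the noise; here we just move a little toward the target."""
    return x + 0.2 * (target - x)

total_steps = 50

# Training data for a real model: progressively noisier versions of the sample.
noisy_versions = [add_noise(clean, s, total_steps) for s in range(total_steps)]

# Generation: start from pure noise and repeatedly apply the reverse step.
x = rng.normal(size=clean.shape)
for _ in range(total_steps):
    x = denoise_step(x, clean)

print("started from noise, ended near:", np.round(x, 2))
```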


A.I. Predicts the Shape of Nearly Every Protein Known to Science

DeepMind has expanded its database of microscopic biological mechanisms, hoping to accelerate research into all living things.

In 2020, an artificial intelligence lab called DeepMind unveiled technology that could predict the shape of proteins — the microscopic mechanisms that drive the behavior of the human body and all other living things.

A year later, the lab shared the tool, called AlphaFold, with scientists and released predicted shapes for more than 350,000 proteins, including all proteins expressed by the human genome. It immediately shifted the course of biological research. If scientists can identify the shapes of proteins, they can accelerate the ability to understand diseases, create new medicines and otherwise probe the mysteries of life on Earth.

Now, DeepMind has released predictions for nearly every protein known to science. On Thursday, the London-based lab, owned by the same parent company as Google, said it had added more than 200 million predictions to an online database freely available to scientists across the globe.

With this new release, the scientists behind DeepMind hope to speed up research into more obscure organisms and spark a new field called metaproteomics.

"Scientists can now explore this entire database and look for patterns — correlations between species and evolutionary patterns that might not have been evident until now," Demis Hassabis, the chief executive of DeepMind, said in a phone interview.

Proteins begin as strings of chemical compounds, then twist and fold into three-dimensional shapes that define how these molecules bind to others. If scientists can pinpoint the shape of a particular protein, they can decipher how it operates.

This knowledge is often a vital part of the fight against illness and disease. For instance, bacteria resist antibiotics by expressing certain proteins. If scientists can understand how these proteins operate, they can begin to counter antibiotic resistance.

Previously, pinpointing the shape of a protein required extensive experimentation involving X-rays, microscopes and other tools on a lab bench. Now, given the string of chemical compounds that make up a protein, AlphaFold can predict its shape.

The technology is not perfect. But it can predict the shape of a protein with an accuracy that rivals physical experiments about 63 percent of the time, according to independent benchmark tests. With a prediction in hand, scientists can verify its accuracy relatively quickly.

Kliment Verba, a researcher at the University of California, San Francisco, who uses the technology to understand the coronavirus and to prepare for similar pandemics, said the technology had "supercharged" this work, often saving months of experimentation time. Others have used the tool as they struggle to fight gastroenteritis, malaria and Parkinson's disease.

The technology has also accelerated research beyond the human body, including an effort to improve the health of honeybees. DeepMind's expanded database can help an even larger community of scientists reap similar benefits.

Like Dr. Hassabis, Dr. Verba believes the database will provide new ways of understanding how proteins behave across species. He also sees it as a way of educating a new generation of scientists. Not all researchers are versed in this kind of structural biology; a database of all known proteins lowers the bar to entry. "It can bring structural biology to the masses," Dr. Verba said.
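In practice, the database described here works as a lookup: each protein, identified by its accession code, maps to a downloadable predicted structure. A minimal sketch of fetching one entry is below; the host, URL pattern and file version are assumptions based on how the public AlphaFold Protein Structure Database has been organized and may have changed, so check the database's documentation before relying on them.

```python
# Sketch of pulling one predicted structure from the public database.
# Assumption: the database serves files named AF-<UniProt accession>-F1-model_v<N>.pdb
# from alphafold.ebi.ac.uk; adjust if the naming scheme or version has changed.

import urllib.request

def fetch_prediction(uniprot_accession: str, version: int = 4) -> str:
    """Download the predicted structure for one protein as PDB-format text."""
    url = (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_accession}-F1-model_v{version}.pdb"
    )
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

# Example: P69905 is the UniProt accession for human hemoglobin subunit alpha.
# pdb_text = fetch_prediction("P69905")
# print(pdb_text.splitlines()[0])  # header line of the predicted model
```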


A.I. Predicts the Shapes of Molecules to Come

DeepMind has given 3-D structure to 350,000 proteins, including every one made by humans, promising a boon for medicine and drug design.

For some years now, John McGeehan, a biologist and the director of the Center for Enzyme Innovation in Portsmouth, England, has been searching for a molecule that could break down the 150 million tons of soda bottles and other plastic waste strewn across the globe.

Working with researchers on both sides of the Atlantic, he has found a few good options. But his task is that of the most demanding locksmith: to pinpoint the chemical compounds that on their own will twist and fold into the microscopic shape that can fit perfectly into the molecules of a plastic bottle and split them apart, like a key opening a door.

Determining the exact chemical contents of any given enzyme is a fairly simple challenge these days. But identifying its three-dimensional shape can involve years of biochemical experimentation. So last fall, after reading that an artificial intelligence lab in London called DeepMind had built a system that automatically predicts the shapes of enzymes and other proteins, Dr. McGeehan asked the lab if it could help with his project.

Toward the end of one workweek, he sent DeepMind a list of seven enzymes. The following Monday, the lab returned shapes for all seven. "This moved us a year ahead of where we were, if not two," Dr. McGeehan said.

Now, any biochemist can speed their work in much the same way. On Thursday, DeepMind released the predicted shapes of more than 350,000 proteins — the microscopic mechanisms that drive the behavior of bacteria, viruses, the human body and all other living things. This new database includes the three-dimensional structures for all proteins expressed by the human genome, as well as those for proteins that appear in 20 other organisms, including the mouse, the fruit fly and the E. coli bacterium.

This vast and detailed biological map — which provides roughly 250,000 shapes that were previously unknown — may accelerate the ability to understand diseases, develop new medicines and repurpose existing drugs. It may also lead to new kinds of biological tools, like an enzyme that efficiently breaks down plastic bottles and converts them into materials that are easily reused and recycled.

"This can take you ahead in time — influence the way you are thinking about problems and help solve them faster," said Gira Bhabha, an assistant professor in the department of cell biology at New York University. "Whether you study neuroscience or immunology — whatever your field of biology — this can be useful."

Rich Evans, a DeepMind research scientist, at work on the project at the company's London office. (DeepMind)

This new knowledge is its own sort of key: If scientists can determine the shape of a protein, they can determine how other molecules will bind to it. This might reveal, say, how bacteria resist antibiotics — and how to counter that resistance. Bacteria resist antibiotics by expressing certain proteins; if scientists were able to identify the shapes of these proteins, they could develop new antibiotics or new medicines that suppress them.

In the past, pinpointing the shape of a protein required months, years or even decades of trial-and-error experiments involving X-rays, microscopes and other tools on the lab bench. But DeepMind can significantly shrink the timeline with its A.I. technology, known as AlphaFold.

When Dr. McGeehan sent DeepMind his list of seven enzymes, he told the lab that he had already identified shapes for two of them, but he did not say which two. This was a way of testing how well the system worked; AlphaFold passed the test, correctly predicting both shapes.

It was even more remarkable, Dr. McGeehan said, that the predictions arrived within days. He later learned that AlphaFold had in fact completed the task in just a few hours.

AlphaFold predicts protein structures using what is called a neural network, a mathematical system that can learn tasks by analyzing vast amounts of data — in this case, thousands of known proteins and their physical shapes — and extrapolating into the unknown.

This is the same technology that identifies the commands you bark into your smartphone, recognizes faces in the photos you post to Facebook and translates one language into another on Google Translate and other services. But many experts believe AlphaFold is one of the technology's most powerful applications.

"It shows that A.I. can do useful things amid the complexity of the real world," said Jack Clark, one of the authors of the A.I. Index, an effort to track the progress of artificial intelligence technology across the globe.

As Dr. McGeehan discovered, it can be remarkably accurate. AlphaFold can predict the shape of a protein with an accuracy that rivals physical experiments about 63 percent of the time, according to independent benchmark tests that compare its predictions to known protein structures. Most experts had assumed that a technology this powerful was still years away.

"I thought it would take another 10 years," said Randy Read, a professor at the University of Cambridge. "This was a complete change."

But the system's accuracy does vary, so some of the predictions in DeepMind's database will be less useful than others. Each prediction in the database comes with a "confidence score" indicating how accurate it is likely to be. DeepMind researchers estimate that the system provides a "good" prediction about 95 percent of the time.

A protein expressed by the E. coli bacterium. Researchers are using A.I. to understand how pathogens like E. coli and salmonella develop resistance to antibiotics, and to find ways of countering it. (DeepMind)

As a result, the system cannot completely replace physical experiments. It is used alongside work on the lab bench, helping scientists determine which experiments they should run and filling the gaps when experiments are unsuccessful. Using AlphaFold, researchers at the University of Colorado Boulder recently identified a protein structure they had struggled to pin down for more than a decade.

DeepMind has opted to freely share its database of protein structures rather than sell access, with the hope of spurring progress across the biological sciences. "We are interested in maximum impact," said Demis Hassabis, chief executive and co-founder of DeepMind, which is owned by the same parent company as Google but operates more like a research lab than a commercial business.

Some scientists have compared DeepMind's new database to the Human Genome Project. Completed in 2003, the Human Genome Project provided a map of all human genes. Now, DeepMind has provided a map of the roughly 20,000 proteins expressed by the human genome — another step toward understanding how our bodies work and how we can respond when things go wrong.

The hope is also that the technology will continue to evolve. A lab at the University of Washington has built a similar system called RoseTTAFold, and like DeepMind, it has openly shared the computer code that drives its system. Anyone can use the technology, and anyone can work to improve it.

Even before DeepMind began openly sharing its technology and data, AlphaFold was feeding a wide range of projects. University of Colorado researchers are using the technology to understand how bacteria like E. coli and salmonella develop a resistance to antibiotics, and to develop ways of combating this resistance. At the University of California, San Francisco, researchers have used the tool to improve their understanding of the coronavirus.

The coronavirus wreaks havoc on the body through 26 different proteins. With help from AlphaFold, the researchers have improved their understanding of one key protein and are hoping the technology can help increase their understanding of the other 25.

If this comes too late to have an impact on the current pandemic, it could help in preparing for the next one. "A better understanding of these proteins will help us not only target this virus but other viruses," said Kliment Verba, one of the researchers in San Francisco.

The possibilities are myriad. After DeepMind gave Dr. McGeehan shapes for seven enzymes that could potentially rid the world of plastic waste, he sent the lab a list of 93 more. "They're working on these now," he said.
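The "confidence score" attached to each prediction is reported per residue, and in the released model files it is conventionally stored where a PDB file keeps its B-factor column. The sketch below reads those values from a downloaded model and summarizes how much of the protein was predicted confidently; the fixed-width column positions and the convention of storing pLDDT in the B-factor field are assumptions about the file format, so verify them against the database documentation.

```python
# Sketch: summarize per-residue confidence from an AlphaFold-style PDB file.
# Assumption: the per-residue confidence (pLDDT, on a 0-100 scale) is stored
# in the B-factor column of ATOM records, as the public database has done.

def residue_confidences(pdb_path: str) -> dict[int, float]:
    """Map residue number -> confidence, reading alpha-carbon (CA) atoms only."""
    scores = {}
    with open(pdb_path) as handle:
        for line in handle:
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                residue_number = int(line[22:26])
                b_factor = float(line[60:66])   # pLDDT lives here, by convention
                scores[residue_number] = b_factor
    return scores

def fraction_confident(scores: dict[int, float], threshold: float = 70.0) -> float:
    """Share of residues at or above a 'confident' pLDDT threshold."""
    return sum(s >= threshold for s in scores.values()) / max(len(scores), 1)

# Usage, assuming a model file downloaded from the database:
# scores = residue_confidences("AF-P69905-F1-model_v4.pdb")
# print(f"{fraction_confident(scores):.0%} of residues predicted confidently")
```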


Slowly, Robo-Surgeons Are Moving Toward the Operating Room

Real scalpels, artificial intelligence — what could go wrong?

Sitting on a stool several feet from a long-armed robot, Dr. Danyal Fer wrapped his fingers around two metal handles near his chest.

As he moved the handles — up and down, left and right — the robot mimicked each small motion with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot's tiny claws did much the same. This is how surgeons like Dr. Fer have long used robots when operating on patients. They can remove a prostate from a patient while sitting at a computer console across the room.

But after this brief demonstration, Dr. Fer and his fellow researchers at the University of California, Berkeley, showed how they hope to advance the state of the art. Dr. Fer let go of the handles, and a new kind of computer software took over. As he and the other researchers looked on, the robot started to move entirely on its own.

With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and gingerly hooked it onto a new peg. Then the robot did the same with several more rings, completing the task as quickly as it had when guided by Dr. Fer.

The training exercise was originally designed for humans; moving the rings from peg to peg is how surgeons learn to operate robots like the one in Berkeley. Now, an automated robot performing the test can match or even exceed a human in dexterity, precision and speed, according to a new research paper from the Berkeley team.

The project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating.

Dr. Danyal Fer, a surgeon and researcher, has long used robots while operating on patients. (Sarahbeth Maney for The New York Times)

"It is an exciting time," said Russell Taylor, a professor at Johns Hopkins University and former IBM researcher known in the academic world as the father of robotic surgery. "It is where I hoped we would be 20 years ago."

The aim is not to remove surgeons from the operating room but to ease their load and perhaps even raise success rates — where there is room for improvement — by automating particular phases of surgery.

Robots can already exceed human accuracy on some surgical tasks, like placing a pin into a bone (a particularly risky task during knee and hip replacements). The hope is that automated robots can bring greater accuracy to other tasks, like incisions or suturing, and reduce the risks that come with overworked surgeons.

During a recent phone call, Greg Hager, a computer scientist at Johns Hopkins, said that surgical automation would progress much like the Autopilot software that was guiding his Tesla down the New Jersey Turnpike as he spoke. The car was driving on its own, he said, but his wife still had her hands on the wheel, should anything go wrong. And she would take over when it was time to exit the highway.

"We can't automate the whole process, at least not without human oversight," he said. "But we can start to build automation tools that make the life of a surgeon a little bit easier."

Five years ago, researchers with the Children's National Health System in Washington, D.C., designed a robot that could automatically suture the intestines of a pig during surgery. It was a notable step toward the kind of future envisioned by Dr. Hager. But it came with an asterisk: The researchers had implanted tiny markers in the pig's intestines that emitted a near-infrared light and helped guide the robot's movements.

Scientists believe neural networks will eventually help surgical robots perform operations on their own. (Sarahbeth Maney for The New York Times)

The method is far from practical, as the markers are not easily implanted or removed. But in recent years, artificial intelligence researchers have significantly improved the power of computer vision, which could allow robots to perform surgical tasks on their own, without such markers.

The change is driven by what are called neural networks, mathematical systems that can learn skills by analyzing vast amounts of data. By analyzing thousands of cat photos, for instance, a neural network can learn to recognize a cat. In much the same way, a neural network can learn from images captured by surgical robots.

Surgical robots are equipped with cameras that record three-dimensional video of each operation. The video streams into a viewfinder that surgeons peer into while guiding the operation, watching from the robot's point of view.

But afterward, these images also provide a detailed road map showing how surgeries are performed. They can help new surgeons understand how to use these robots, and they can help train robots to handle tasks on their own. By analyzing images that show how a surgeon guides the robot, a neural network can learn the same skills.

This is how the Berkeley researchers have been working to automate their robot, which is based on the da Vinci Surgical System, a two-armed machine that helps surgeons perform more than a million procedures a year. Dr. Fer and his colleagues collect images of the robot moving the plastic rings while under human control. Then their system learns from these images, pinpointing the best ways of grabbing the rings, passing them between claws and moving them to new pegs.

But this process came with its own asterisk. When the system told the robot where to move, the robot often missed the spot by millimeters. Over months and years of use, the many metal cables inside the robot's twin arms have stretched and bent in small ways, so its movements were not as precise as they needed to be.

Human operators could compensate for this shift, unconsciously. But the automated system could not. This is often the problem with automated technology: It struggles to deal with change and uncertainty. Autonomous vehicles are still far from widespread use because they aren't yet nimble enough to handle all the chaos of the everyday world.

From left: At the University of California, Berkeley, Ken Goldberg, an engineering professor; Samuel Paradis, a master's student; Brijen Thananjeyan, a doctoral candidate; and Dr. Minho Hwang watched as the da Vinci Research Kit conducted the peg transfer. (Sarahbeth Maney for The New York Times)

The Berkeley team decided to build a new neural network that analyzed the robot's mistakes and learned how much precision it was losing with each passing day. "It learns how the robot's joints evolve over time," said Brijen Thananjeyan, a doctoral student on the team. Once the automated system could account for this change, the robot could grab and move the plastic rings, matching the performance of human operators.

Other labs are trying different approaches. Axel Krieger, a Johns Hopkins researcher who was part of the pig-suturing project in 2016, is working to automate a new kind of robotic arm, one with fewer moving parts that behaves more consistently than the kind of robot used by the Berkeley team. Researchers at the Worcester Polytechnic Institute are developing ways for machines to carefully guide surgeons' hands as they perform particular tasks, like inserting a needle for a cancer biopsy or burning into the brain to remove a tumor.

"It is like a car where the lane-following is autonomous but you still control the gas and the brake," said Greg Fischer, one of the Worcester researchers.

Many obstacles lie ahead, scientists note. Moving plastic pegs is one thing; cutting, moving and suturing flesh is another. "What happens when the camera angle changes?" said Ann Majewicz Fey, an associate professor at the University of Texas, Austin. "What happens when smoke gets in the way?"

For the foreseeable future, automation will be something that works alongside surgeons rather than replaces them. But even that could have profound effects, Dr. Fer said. For instance, doctors could perform surgery across distances far greater than the width of the operating room — from miles or more away, perhaps, helping wounded soldiers on distant battlefields.

The signal lag is too great to make that possible currently. But if a robot could handle at least some of the tasks on its own, long-distance surgery could become viable, Dr. Fer said: "You could send a high-level plan and then the robot could carry it out."

The same technology would be essential to remote surgery across even longer distances. "When we start operating on people on the moon," he said, "surgeons will need entirely new tools."
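The correction Mr. Thananjeyan describes amounts to learning a mapping from where the controller tells the arm to go to where it actually ends up, then inverting that error before sending commands. The sketch below is a simplified stand-in for the team's neural network: it assumes the drift is just a per-axis scale and offset, fits that with least squares on simulated calibration data, and applies the inverse when issuing a new command.

```python
# Sketch: compensate for a robot arm's positioning drift. The Berkeley team
# trained a neural network for this; a per-axis linear fit (scale and offset)
# stands in here, which is only a simplifying assumption about the error.

import numpy as np

rng = np.random.default_rng(1)

# Calibration data: where the arm was commanded to go vs. where it arrived (mm).
commanded = rng.uniform(-50.0, 50.0, size=(200, 3))
true_scale = np.array([1.02, 0.98, 1.01])      # simulated cable stretch per axis
true_offset = np.array([0.8, -1.2, 0.4])       # simulated constant drift per axis
observed = commanded * true_scale + true_offset + rng.normal(0.0, 0.05, size=commanded.shape)

# Fit observed = commanded * scale + offset for each axis by least squares.
scale = np.empty(3)
offset = np.empty(3)
for axis in range(3):
    design = np.column_stack([commanded[:, axis], np.ones(len(commanded))])
    solution, *_ = np.linalg.lstsq(design, observed[:, axis], rcond=None)
    scale[axis], offset[axis] = solution

def corrected_command(target: np.ndarray) -> np.ndarray:
    """Invert the fitted error so the arm lands on the intended target."""
    return (target - offset) / scale

target = np.array([10.0, -20.0, 5.0])
landing = corrected_command(target) * true_scale + true_offset   # simulate the arm
print("residual error (mm):", np.round(landing - target, 3))
```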
