Molecular assembly line to design, test drug compounds streamlined

Researchers from North Carolina State University have found a way to fine-tune the molecular assembly line that creates antibiotics via engineered biosynthesis. The work could allow scientists to improve existing antibiotics as well as design new drug candidates quickly and efficiently.
Bacteria — such as E. coli — harness biosynthesis to create molecules that are difficult to make artificially.
“We already use bacteria to make a number of drugs for us,” says Edward Kalkreuter, former graduate student at NC State and lead author of a paper describing the research. “But we also want to make alterations to these compounds; for example, there’s a lot of drug resistance to erythromycin. Being able to make molecules with similar activity but improved efficacy against resistance is the general goal.”
Picture an automobile assembly line: each stop along the line features a robot that chooses a particular piece of the car and adds it to the whole. Now substitute erythromycin for the car, and an acyltransferase (AT) — an enzyme — as the robot at the stations along the assembly line. Each AT “robot” will select a chemical block, or extender unit, to add to the molecule. At each station the AT robot has 430 amino acids, or residues, which help it select which extender unit to add.
“Different types of extender units impact the activity of the molecule,” says Gavin Williams, professor of chemistry, LORD Corporation Distinguished Scholar at NC State and corresponding author of the research. “Identifying the residues that affect extender unit selection is one way to create molecules with the activity we want.”
The team used molecular dynamics simulations to examine AT residues and identified 10 residues that significantly affect extender unit selection. They then performed mass spectrometry and in vitro testing on AT enzymes in which these residues had been changed, to confirm that their activity had also changed. The results supported the computer simulations’ predictions.
“These simulations predict what parts of the enzyme we can change by showing how the enzyme moves over time,” says Kalkreuter. “Generally, people look at static, nonmoving structures of enzymes. That makes it hard to predict what they do, because enzymes aren’t static in nature. Prior to this work, very few residues were thought or known to affect extender unit selection.”
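To give a flavor of how simulation-guided residue selection can work, the sketch below ranks residues of a hypothetical AT domain by combining per-residue flexibility (root-mean-square fluctuation over a trajectory) with proximity to an assumed extender-unit binding site. It is a minimal illustration with placeholder coordinates, not the authors’ actual analysis pipeline; only the roughly 430-residue count comes from the article, everything else is assumed.

```python
# Toy sketch (not the authors' pipeline): rank residues of an AT domain by how
# flexible they are across a molecular dynamics trajectory and how close they
# sit to the extender-unit binding site. Coordinates are random placeholders;
# in practice they would come from an MD engine such as GROMACS or AMBER.
import numpy as np

n_frames, n_residues = 500, 430          # ~430 residues per AT domain (see article)
rng = np.random.default_rng(0)

# (frames, residues, xyz) C-alpha coordinates -- placeholder data
traj = rng.normal(size=(n_frames, n_residues, 3))
site = np.zeros(3)                       # assumed location of the extender-unit site

# Per-residue root-mean-square fluctuation about the mean structure
mean_pos = traj.mean(axis=0)
rmsf = np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))

# Mean distance of each residue to the assumed binding site
dist = np.linalg.norm(mean_pos - site, axis=1)

# Simple composite score: flexible AND close to the site
score = rmsf / (1.0 + dist)
top10 = np.argsort(score)[::-1][:10]
print("Candidate residues for mutagenesis:", top10 + 1)   # 1-based residue numbers
```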
Williams adds that manipulating residues allows for much greater precision in reprogramming the biosynthetic assembly line.
“Previously, researchers who wanted to change an antibiotic’s structure would simply swap out the entire AT enzyme,” Williams says. “That’s the equivalent of removing an entire robot from the assembly line. By focusing on the residues, we’re merely replacing the fingers on that arm — like reprogramming a workstation rather than removing it. It allows for much greater precision.
“Using these computational simulations to figure out which residues to replace is another tool in the toolbox for researchers who use bacteria to biosynthesize drugs.”
Story Source:
Materials provided by North Carolina State University. Original written by Tracey Peake. Note: Content may be edited for style and length.

Read more →

Machine learning can help slow down future pandemics

Artificial intelligence could be one of the keys for limiting the spread of infection in future pandemics. In a new study, researchers at the University of Gothenburg have investigated how machine learning can be used to find effective testing methods during epidemic outbreaks, thereby helping to better control the outbreaks.
In the study, the researchers developed a method to improve testing strategies during epidemic outbreaks, using relatively limited information to predict which individuals offer the best potential for testing.
“This can be a first step towards society gaining better control of future major outbreaks and reducing the need to shut down society,” says Laura Natali, a doctoral student in physics at the University of Gothenburg and the lead author of the published study.
Machine learning is a type of artificial intelligence and can be described as a mathematical model where computers are trained to learn to see connections and solve problems using different data sets. The researchers used machine learning in a simulation of an epidemic outbreak, where information about the first confirmed cases was used to estimate infections in the rest of the population. Data about the infected individual’s network of contacts and other information was used: who they have been in close contact with, where and for how long.
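As a rough illustration of this idea (not the authors’ model, which is described in the published study), the sketch below trains a simple classifier on two hypothetical contact-network features and uses it to decide who should be tested next under a limited test budget.

```python
# Illustrative sketch: score untested individuals by how likely they are to be
# infected, using simple contact-network features, and spend a limited number
# of tests on the highest-scoring people. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
# Hypothetical features per person: contacts with confirmed cases, contact minutes
contacts_with_cases = rng.poisson(1.0, n)
contact_minutes = rng.exponential(60.0, n)
X = np.column_stack([contacts_with_cases, contact_minutes])

# Hypothetical outcomes from the first wave of confirmed tests (True = infected)
risk = 0.05 + 0.15 * contacts_with_cases + 0.002 * contact_minutes
y = rng.random(n) < np.clip(risk, 0, 1)

model = LogisticRegression().fit(X, y)

# Rank everyone not yet tested and allocate the remaining test budget
p_infected = model.predict_proba(X)[:, 1]
test_budget = 50
to_test = np.argsort(p_infected)[::-1][:test_budget]
print("Next individuals to test:", to_test[:10])
```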
“In the study, the outbreak can quickly be brought under control when the method is used, while random testing leads to uncontrolled spread of the outbreak with many more infected individuals. Under real world conditions, information can be added, such as demographic data, age and health-related conditions, which can improve the method’s effectiveness even more. The same method can also be used to prevent reinfections in the population if immunity after the disease is only temporary.”
She emphasises that the study is a simulation and that testing with real data is needed to improve the method even more. Therefore, it is too early to use it in the ongoing coronavirus pandemic. At the same time, she sees the research as a first step in being able to implement more targeted initiatives to reduce the spread of infections, since the machine learning-based testing strategy automatically adapts to the specific characteristics of diseases. As an example, she mentions the potential to easily predict if a specific age group should be tested or if a limited geographic area is a risk zone, such as a school, a community or a specific neighbourhood.
“When a large outbreak has begun, it is important to quickly and effectively identify infectious individuals. In random testing, there is a significant risk of failing to achieve this, but with a more goal-oriented testing strategy we can find more infected individuals and thereby also gain the necessary information to decrease the spread of infection. We show that machine learning can be used to develop this type of testing strategy,” she says.
There are few previous studies that have examined how machine learning can be used in cases of pandemics, particularly with a clear focus on finding the best testing strategies.
“We show that it is possible to use relatively simple and limited information to make predictions of who would be most beneficial to test. This allows better use of available testing resources.”
Story Source:
Materials provided by University of Gothenburg. Note: Content may be edited for style and length.

Read more →

No batteries? No sweat! Wearable biofuel cells now produce electricity from lactate

Wearable electronic devices and biosensors are great tools for health monitoring, but it has been difficult to find convenient power sources for them. Now, a group of scientists has successfully developed and tested a wearable biofuel cell array that generates electric power from the lactate in the wearer’s sweat, opening doors to electronic health monitoring powered by nothing but bodily fluids.
It cannot be denied that, over the past few decades, the miniaturization of electronic devices has taken huge strides. Today, alongside pocket-size smartphones that could put old desktop computers to shame and a plethora of options for wireless connectivity, one particular type of device has been steadily advancing: wearable biosensors. These tiny devices are generally meant to be worn directly on the skin in order to measure specific biosignals and, by sending measurements wirelessly to smartphones or computers, keep track of the user’s health.
Although materials scientists have developed many types of flexible circuits and electrodes for wearable devices, it has been challenging to find an appropriate power source for wearable biosensors. Traditional button batteries, like those used in wrist watches and pocket calculators, are too thick and bulky, whereas thinner batteries would pose capacity and even safety issues. But what if we were the power sources of wearable devices ourselves?
A team of scientists led by Associate Professor Isao Shitanda from Tokyo University of Science, Japan, is exploring efficient ways of using sweat as the sole source of power for wearable electronics. In their most recent study, published in the Journal of Power Sources, they present a novel design for a biofuel cell array that uses a chemical in sweat, lactate, to generate enough power to drive a biosensor and wireless communication devices for a short time. The study was carried out in collaboration with Dr. Seiya Tsujimura from the University of Tsukuba, Dr. Tsutomu Mikawa from RIKEN, and Dr. Hiroyuki Matsui from Yamagata University, all in Japan.
Their new biofuel cell array looks like a paper bandage that can be worn, for example, on the arm or forearm. It essentially consists of a water-repellent paper substrate onto which multiple biofuel cells are laid out in series and in parallel; the number of cells depends on the output voltage and power required. In each cell, electrochemical reactions between lactate and an enzyme present in the electrodes produce an electric current, which flows to a general current collector made from a conducting carbon paste.
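To see how the series/parallel layout determines the cell count, here is a back-of-the-envelope calculation; the per-cell voltage and power figures are illustrative assumptions only, not values reported by the researchers.

```python
# Back-of-the-envelope sketch: how many biofuel cells to wire in series and in
# parallel to reach a target voltage and power. Per-cell figures are assumed.
import math

cell_voltage = 0.6       # volts per cell under load (assumed)
cell_power   = 0.15e-3   # watts per cell (assumed)

target_voltage = 3.6     # volts needed by the sensor plus Bluetooth radio
target_power   = 4.0e-3  # watts

n_series = math.ceil(target_voltage / cell_voltage)       # cells per string
string_power = n_series * cell_power                       # power of one string
n_parallel = math.ceil(target_power / string_power)        # strings in parallel

print(f"{n_series} cells in series x {n_parallel} strings in parallel "
      f"= {n_series * n_parallel} cells total")
```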
This is not the first lactate-based biofuel cell, but some key differences make the new design stand out from existing ones. One is that the entire device can be fabricated via screen printing, a technique well suited to cost-effective mass production. This was made possible by careful selection of materials and an ingenious layout. For example, whereas similar previous cells used silver wires as conducting paths, the present biofuel cells employ porous carbon ink. Another advantage is the way in which lactate is delivered to the cells. Paper layers collect sweat and transport it to all cells simultaneously through the capillary effect — the same effect by which water quickly travels through a napkin when it comes into contact with a water puddle.
These advantages make the biofuel cell arrays exhibit an unprecedented ability to deliver power to electronic circuits, as Dr. Shitanda remarks: “In our experiments, our paper-based biofuel cells could generate a voltage of 3.66 V and an output power of 4.3 mW. To the best of our knowledge, this power is significantly higher than that of previously reported lactate biofuel cells.” To demonstrate their applicability for wearable biosensors and general electronic devices, the team fabricated a self-driven lactate biosensor that could not only power itself using lactate and measure the lactate concentration in sweat, but also communicate the measured values in real-time to a smartphone via a low-power Bluetooth device.
As explained in a previous study also led by Dr. Shitanda, lactate is an important biomarker that reflects the intensity of physical exercise in real-time, which is relevant in the training of athletes and rehabilitation patients. However, the proposed biofuel cell arrays can power not only wearable lactate biosensors, but also other types of wearable electronics. “We managed to drive a commercially available activity meter for 1.5 hours using one drop of artificial sweat and our biofuel cells,” explains Dr. Shitanda, “and we expect they should be capable of powering all sorts of devices, such as smart watches and other commonplace portable gadgets.”
Hopefully, with further developments in wearable biofuel cells, powering portable electronics and biosensors will be no sweat!
Story Source:
Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

Read more →

Common drug could be used to prevent certain skin cancers

New data published by researchers at The Ohio State University Comprehensive Cancer Center — Arthur G. James Cancer Hospital and Richard J. Solove Research Institute (OSUCCC — James) suggests that an oral drug currently used in the clinical setting to treat neuromuscular diseases could also help prevent a common form of skin cancer caused by damage from ultraviolet-B (UVB) radiation from the sun.
While this data was gathered from preclinical studies, senior author Sujit Basu, MD, PhD, says preliminary results in animal models are very promising and worthy of immediate further investigation through phase I human studies.
Basu and his colleagues reported their initial findings online ahead of print April 12 in Cancer Prevention Research, a journal of the American Association for Cancer Research.
According to the American Cancer Society, more than 5.4 million basal and squamous cell skin cancers are diagnosed annually in the United States. The disease typically recurs throughout a person’s lifetime, and advanced disease can lead to physical disfiguration. These cancers are linked to the sun’s damaging rays, and despite increased public awareness on sun safety precautions, rates of the disease have been increasing for many years.
Previous peer-reviewed, published studies have shown that dopamine receptors play a role in the development of cancerous tumors; however, their role in precancerous lesions is unknown.
In this new study, OSUCCC — James researchers report data showing that the neurotransmitter/neurohormone dopamine, by activating its D2 receptors, can stop the development and progression of certain UVB-induced precancerous squamous skin cancers. Researchers also describe the molecular sequence of events that leads to cancer suppression.
“Cancer control experts have been stressing the importance of reducing exposure to the sun and practicing sun-safe habits for many years, but scientific data shows us that cumulative damage of UV rays ultimately leads to skin cancer for many people. Finding better ways to prevent these cancers from developing is critical to reduce the global burden of this disease,” says Basu, a researcher with the OSUCCC — James Translational Therapeutics Research Program and a professor of pathology at The Ohio State University College of Medicine.
“Our study suggests that a commonly used drug that activates specific dopamine receptors could help reduce squamous cell skin cancer recurrence and possibly even prevent the disease entirely. This is especially exciting because this is a drug that is already readily used in clinical settings and is relatively inexpensive. We are excited to continue momentum in this area of research,” adds Basu.
The OSUCCC — James is working on plans to begin further testing in a phase I experimental clinical trial in the coming months.
Story Source:
Materials provided by Ohio State University Wexner Medical Center. Note: Content may be edited for style and length.

Read more →

Genetic predisposition to schizophrenia may increase risk of psychosis from cannabis use

It has long been known that cannabis users develop psychosis more often than non-users, but what is still not fully clear is whether cannabis actually causes psychosis and, if so, who is most at risk. A new study published in Translational Psychiatry by researchers at the Centre for Addiction and Mental Health (CAMH) and King’s College London helps shed light on both questions. The research shows that while cannabis users had higher rates of psychotic experiences than non-users across the board, the difference was especially pronounced among those with a high genetic predisposition to schizophrenia.
“These results are significant because they’re the first evidence we’ve seen that people genetically prone to psychosis might be disproportionately affected by cannabis,” said lead author Dr. Michael Wainberg, Scientist at the Krembil Centre for Neuroinformatics at CAMH. “And because genetic risk scoring is still in its early days, the true influence of genetics on the cannabis-psychosis relationship may be even greater than what we found here.”
Using data from the UK Biobank, a large-scale biomedical database containing participants’ in-depth genetic and health information, the authors analyzed the relationship between genetics, cannabis use and psychotic experiences across more than 100,000 people. Each person reported their frequency of past cannabis use, and whether they had ever had various types of psychotic experiences, such as auditory or visual hallucinations. The researchers also scored each person’s genetic risk for schizophrenia, by looking at which of their DNA mutations were more common among schizophrenia patients than among the general population.
Overall, people who had used cannabis were 50 per cent more likely to report psychotic experiences than people who had not. However, this increase was not uniform across the study group: among the fifth of participants with the highest genetic risk scores for schizophrenia, it was 60 per cent, and among the fifth with the lowest scores, it was only 40 per cent. In other words, people genetically predisposed to schizophrenia were at disproportionately higher risk for psychotic experiences if they also had a history of cannabis use.
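The stratified comparison behind those numbers is conceptually simple; the sketch below reproduces the idea on simulated data (simulated only to illustrate the computation, not the study’s actual UK Biobank data or its statistical model).

```python
# Illustration of the stratified analysis described above, on simulated data:
# compare rates of psychotic experiences in cannabis users vs. non-users within
# each polygenic-risk-score (PRS) quintile.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
prs = rng.normal(size=n)                      # polygenic risk score for schizophrenia
cannabis = rng.random(n) < 0.2                # ever used cannabis (simulated)

# Simulated outcome: baseline risk rises with PRS; cannabis multiplies it,
# more so at high PRS (roughly matching the 40%-60% range in the article).
quintile = np.digitize(prs, np.quantile(prs, [0.2, 0.4, 0.6, 0.8]))
cannabis_multiplier = 1.4 + 0.05 * quintile
p = 0.05 * np.exp(0.2 * prs) * np.where(cannabis, cannabis_multiplier, 1.0)
psychotic = rng.random(n) < np.clip(p, 0, 1)

for q in range(5):
    m = quintile == q
    rr = psychotic[m & cannabis].mean() / psychotic[m & ~cannabis].mean()
    print(f"PRS quintile {q + 1}: users report psychotic experiences "
          f"{(rr - 1) * 100:.0f}% more often than non-users")
```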
Notably, because much less is known about the genetics of schizophrenia in non-white populations, the study’s analysis was limited to self-reported white participants. “This study, while limited in scope, is an important step forward in understanding how cannabis use and genetics may interact to influence psychosis risk,” added senior author Dr. Shreejoy Tripathy, Independent Scientist at the Krembil Centre for Neuroinformatics, who supervised the study. “The more we know about the connection between cannabis and psychosis, the more we can inform the public about the potential risks of using this substance. This research offers a window into a future where genetics can help empower individuals to make more informed decisions about drug use.”
Story Source:
Materials provided by Centre for Addiction and Mental Health. Note: Content may be edited for style and length.

Read more →

Powered prosthetic ankles can restore a wide range of functions for amputees

A recent case study from North Carolina State University and the University of North Carolina at Chapel Hill demonstrates that, with training, neural control of a powered prosthetic ankle can restore a wide range of abilities, including standing on very challenging surfaces and squatting. The researchers are currently working with a larger group of study participants to see how broadly applicable the findings may be.
“This case study shows that it is possible to use these neural control technologies, in which devices respond to electrical signals from a patient’s muscles, to help patients using robotic prosthetic ankles move more naturally and intuitively,” says Helen Huang, corresponding author of the study. Huang is the Jackson Family Distinguished Professor in the Joint Department of Biomedical Engineering at NC State and UNC.
“This work demonstrates that these technologies can give patients the ability to do more than we previously thought possible,” says Aaron Fleming, first author of the study and a Ph.D. candidate in the joint biomedical engineering department.
Most of the existing research on robotic prosthetic ankles has focused solely on walking using autonomous control. Autonomous control, in this context, means that while the person wearing the prosthesis decides whether to walk or stand still, the fine movements involved in those activities happen automatically — rather than because of anything the wearer is doing.
Huang, Fleming and their collaborators wanted to know what would happen if an amputee, working with a physical therapist, trained with a neurally controlled powered prosthetic ankle on activities that are challenging with typical prostheses. Would it be possible for amputees to regain a fuller range of control in the many daily motions that people make with their ankles in addition to walking?
The powered prosthesis in this study reads electrical signals from two residual calf muscles. Those calf muscles are responsible for controlling ankle motion. The prosthetic technology uses a control paradigm developed by the researchers to convert electrical signals from those muscles into commands that control the movement of the prosthesis.
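The general flavor of such myoelectric control can be sketched as follows. This is a generic proportional controller with made-up signals and gains, shown only to illustrate the signal path from muscle activity to an ankle command; it is not the specific control paradigm developed by the researchers.

```python
# Generic proportional myoelectric control sketch (not the authors' paradigm):
# rectify and low-pass filter EMG from two residual calf muscles, then map the
# net activation to an ankle torque command. Signals and gains are made up.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                    # EMG sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)

drive = np.sin(2 * np.pi * 0.5 * t)            # pretend the user alternates push-off/lift
emg_plantar = rng.normal(0, 0.2, t.size) * (1 + np.clip(drive, 0, None))
emg_dorsi   = rng.normal(0, 0.2, t.size) * (1 + np.clip(-drive, 0, None))

def envelope(x, cutoff_hz=5.0):
    """Rectify and low-pass filter raw EMG to get a smooth activation envelope."""
    b, a = butter(2, cutoff_hz, btype="low", fs=fs)
    return filtfilt(b, a, np.abs(x))

net_activation = envelope(emg_plantar) - envelope(emg_dorsi)
gain_nm = 40.0                                 # Nm per unit net activation (assumed)
ankle_torque_cmd = gain_nm * net_activation    # + plantarflexion, - dorsiflexion
print(f"Peak commanded torque: {np.abs(ankle_torque_cmd).max():.1f} Nm")
```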

Read more →

Smell you later: Exposure to smells in early infancy can modulate adult behavior

The smells that newborn mice are exposed to (or “imprint” on, to use the academic term) affect many social behaviors later in life, but how this happens is still a mystery. Scientists from Japan have now discovered the molecules necessary for imprinting. Their new study sheds light on decision-making processes and on neurodevelopmental disorders such as autism spectrum disorders. It also proposes more effective use of oxytocin therapy for such disorders at an early age.
Imprinting is a popularly known phenomenon, wherein certain animals and birds become fixated on the sights and smells they encounter immediately after being born. In ducklings, this can be the first moving object, usually the mother duck. In migrating fish like salmon and trout, it is the smells they knew as neonates that guide them back to their home river as adults. How does this happen?
Exposure to environmental input during a critical period early in life is important for forming sensory maps and neural circuits in the brain. In mammals, early exposure to environmental inputs, as in the case of imprinting, is known to affect perception and social behavior later in life. Visual imprinting has been widely studied, but the neurological workings of smell-based or “olfactory” imprinting remain a mystery.
To find out more, scientists from Japan, including Drs. Nobuko Inoue, Hirofumi Nishizumi, and Hitoshi Sakano at University of Fukui and Drs. Kazutaka Mogi and Takefumi Kikusui at Azabu University, worked on understanding the mechanism of olfactory imprinting during the critical period in mice. Their study, published in eLife, offers fascinating results. “We discovered three molecules involved in this process,” reports Dr. Nishizumi, “Semaphorin 7A (Sema7A), a signaling molecule produced in olfactory sensory neurons, Plexin C1 (PlxnC1), a receptor for Sema7A expressed in the dendrites of mitral/tufted cells, and oxytocin, a brain peptide known as love hormone.”
During the critical period, when a newborn mouse pup is exposed to an odor, the signaling molecule Sema7A initiates the imprinting response to the odor by interacting with the receptor PlxnC1. Because this receptor is localized in the dendrites only during the first week after birth, it sets the narrow time limit of the critical period. The hormone oxytocin, released in nursed infants, imparts a positive quality to the odor memory.
It was previously known that male mice normally show strong curiosity toward unfamiliar mouse scents of both sexes. Blocking Sema7A signaling during the critical period results in the mice not responding in their usual manner; instead, they display an avoidance response to the stranger mice.
An interesting result of this study is the conflicting response to aversive odors. Say a pup is exposed to an innately aversive odor during the critical period; this imprinted odor will later induce a positive response, overriding the innate aversive response. Here, the hard-wired innate circuit and the imprinted memory circuit are in competition, and the imprinting circuit wins. To resolve this conflict, the brain must have a detailed mechanism for crosstalk between the positive and negative responses, which opens a variety of research questions in the human context.
So, what do these results say about the human brain?
First, the results of the study open many research questions about the functioning of the human brain and behavior. Can we find a critical period in humans, like the one in the mouse olfactory system, possibly in other sensory systems? And just as the mouse brain chooses imprinted memory over an innate response, do humans follow similar decision-making processes?
Second, this study also suggests that improper sensory inputs may cause neurodevelopmental disorders, such as autism spectrum disorders (ASD) and attachment disorders (AD). Oxytocin is widely used for treating ASD symptoms in adults. However, Dr. Nishizumi says, “our study indicates that oxytocin treatment in early neonates is more effective at improving impaired social behavior than treatment after the critical period. Thus, oxytocin treatment of infants may be helpful in preventing ASD and AD, which may open a new therapeutic procedure for neurodevelopmental disorders.”
This study adds valuable new insights to our understanding of decision making and the competition between innate and learned responses in humans, and reveals new research paths in the neuroscience of all types of imprinting.
Story Source:
Materials provided by University of Fukui. Note: Content may be edited for style and length.

Read more →

Study suggests new advice for medics treating high blood pressure

New research led by a professor at NUI Galway is set to change how doctors treat some patients with high blood pressure — a condition that affects more than one in four men and one in five women.
The study by researchers at NUI Galway, Johns Hopkins University and Harvard Medical School found no evidence that diastolic blood pressure — the bottom reading on a blood pressure test — can be harmful to patients when reduced to levels that were previously considered to be too low.
Lead researcher Bill McEvoy, Professor of Preventive Cardiology at NUI Galway and a Consultant Cardiologist at University Hospital Galway, said the findings have the potential to immediately influence the clinical care of patients.
Professor McEvoy said: “We now have detailed research based on genetics that provides doctors with much-needed clarity on how to treat patients who have a pattern of high systolic values — the top reading for blood pressure — but low values for the diastolic, or bottom, reading.
“This type of blood pressure pattern is often seen in older adults. Old studies using less reliable research methods suggested that the risk for a heart attack began to increase when diastolic blood pressure was below 70 or above 90. Therefore, it was presumed there was a sweet-spot for the diastolic reading.”
High blood pressure is a major cause of premature death worldwide, with more than 1 billion people having the condition. It is linked with brain, kidney and other diseases, but it is best known as a risk factor for heart attack. More recently, high blood pressure has emerged as one of the major underlying conditions that increase the risk of poor outcomes for people who become infected with Covid-19.
Professor McEvoy and the international research team analysed genetic and survival data from more than 47,000 patients worldwide. The study, published in the medical journal Circulation, showed that there appears to be no lower limit of normal for diastolic blood pressure and no evidence in this genetic analysis that diastolic blood pressure can be too low. There was no genetic evidence of increased risk of heart disease when a patient’s diastolic blood pressure reading was as low as 50. The authors also confirmed that values of the top, systolic, blood pressure reading above 120 increased the risk of heart disease and stroke, and that blood pressure medications reduce both systolic and diastolic values.
Professor McEvoy added: “Because doctors often focus on keeping the bottom blood pressure reading in the 70-90 range, they may have been undertreating some adults with persistently high systolic blood pressure.
“The findings of this study free up doctors to treat the systolic value when it is elevated and to not worry about the diastolic blood pressure falling too low.
“My advice now to GPs is to treat their patients with high blood pressure to a systolic level of between 100-130mmHg, where possible and without side effects, and to not worry about the diastolic blood pressure value.” Dr Joe Gallagher, Irish College of General Practitioners’ Lead, National Heart Programme, said: “This data helps remove uncertainty about how to treat people who have an elevated systolic blood pressure but low diastolic blood pressure. This is a common clinical problem which causes much debate. It will help shape clinical practice internationally and shows the importance of Irish researchers in clinical research.”
The research team used new techniques that draw on genetic information, which is not subject to the biases of earlier observational studies. They assessed data from 47,407 patients in five groups with a median age of 60.
Story Source:
Materials provided by National University of Ireland Galway. Note: Content may be edited for style and length.

Read more →

Combining mask wearing, social distancing suppresses COVID-19 virus spread

Studies show wearing masks and social distancing can contain the spread of the COVID-19 virus, but their combined effectiveness is not precisely known.
In the journal Chaos, from AIP Publishing, researchers at New York University and Politecnico di Torino in Italy developed a network model to study the effects of these two measures on the spread of airborne diseases like COVID-19. The model shows viral outbreaks can be prevented if at least 60% of a population complies with both measures.
“Neither social distancing nor mask wearing alone is likely sufficient to halt the spread of COVID-19, unless almost the entire population adheres to the single measure,” author Maurizio Porfiri said. “But if a significant fraction of the population adheres to both measures, viral spreading can be prevented without mass vaccination.”
A network model encompasses nodes, or data points, and edges, or links between nodes. Such models are used in applications ranging from marketing to tracking bird migration. In the researchers’ model, based on a susceptible, exposed, infected, or removed (recovered or deceased) framework, each node represents a person’s health status. The edges represent potential contacts between pairs of individuals.
The model accounts for activity variability, meaning a few highly active nodes are responsible for much of the network’s contacts. This mirrors the validated assumption that most people have few interactions and only a few interact with many others. Scenarios involving social distancing without mask wearing and vice versa were also tested by setting up the measures as separate variables.
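A drastically simplified version of such a model is sketched below: an activity-driven susceptible-exposed-infected-removed simulation in which 60% of nodes both wear masks (lower per-contact transmission) and socially distance (lower activity). All parameters are assumptions chosen for illustration; they are not the calibrated values from the study.

```python
# Minimal activity-driven SEIR sketch with a compliant fraction of the
# population. Assumed parameters only; not the authors' calibrated model.
import numpy as np

rng = np.random.default_rng(4)
N, steps = 2000, 150
S, E, I, R = 0, 1, 2, 3

activity = rng.pareto(2.5, N) + 0.05            # heavy-tailed: few very active nodes
compliant = rng.random(N) < 0.60                # fraction adhering to both measures

beta, sigma, gamma = 0.5, 0.2, 0.1              # transmission, E->I, I->R rates (assumed)
mask_factor, distancing_factor = 0.3, 0.3       # reductions for compliant nodes (assumed)

state = np.full(N, S)
state[rng.choice(N, 10, replace=False)] = I     # seed infections

for _ in range(steps):
    act = np.clip(activity * np.where(compliant, distancing_factor, 1.0), 0, 1)
    for i in np.where(rng.random(N) < act)[0]:  # nodes active this step
        for j in rng.choice(N, 3):              # a few random contacts each
            si, sj = state[i], state[j]
            if (si == S and sj == I) or (si == I and sj == S):
                p = beta
                if compliant[i]: p *= mask_factor
                if compliant[j]: p *= mask_factor
                if rng.random() < p:
                    state[i if si == S else j] = E
    state[(state == E) & (rng.random(N) < sigma)] = I   # incubation ends
    state[(state == I) & (rng.random(N) < gamma)] = R   # recovery/removal

print(f"Fraction ever infected: {100 * np.mean(state != S):.1f}%")
```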
The model drew on cellphone mobility data and Facebook surveys obtained from the Institute for Health Metrics and Evaluation at the University of Washington. The data showed people who wear masks are also those who tend to reduce their mobility. Based on this premise, nodes were split into individuals who regularly wear masks and socially distance and those whose behavior remains largely unchanged by an epidemic or pandemic.
Using data collected by The New York Times to gauge the model’s effectiveness, the researchers analyzed the cumulative cases per capita in all 50 states and the District of Columbia from July 14, 2020, when the Centers for Disease Control and Prevention officially recommended mask wearing, through Dec. 10.
In addition to showing the effects of combining mask wearing and social distancing, the model shows the critical need for widespread adherence to public health measures.
“U.S. states suffering the most from the largest number of infections last fall were also those where people complied less with public health guidelines, thereby falling well above the epidemic threshold predicted by our model,” Porfiri said.
Story Source:
Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

Read more →

Childbirth versus pelvic floor stability

Evolutionary anthropologists from the University of Vienna and colleagues now present evidence, published in PNAS, for a different explanation of why the human birth canal has not evolved to be wider: a larger bony pelvic canal is disadvantageous for the pelvic floor’s ability to support the fetus and the inner organs and predisposes one to incontinence.
The human pelvis is simultaneously subject to obstetric selection, favoring a more spacious birth canal, and an opposing selective force that favors a smaller pelvic canal. Previous work by scientists from the University of Vienna has already led to a relatively good understanding of this evolutionary “trade-off” and how it results in the high rates of obstructed labor in modern humans. However, it has remained unclear what the advantage of a narrow birth canal is, given its disadvantage for childbirth. It has long been thought that a smaller birth canal is advantageous for bipedal locomotor performance. A different, less prominent explanation is that it enhances pelvic floor functionality. The muscles of the human pelvic floor play a vital role in supporting our inner organs and a heavy fetus, and in maintaining continence. A larger pelvic canal would increase the downward deformation of the pelvic floor, increasing the risk of pelvic floor disorders, such as pelvic organ prolapse and incontinence. However, this “pelvic floor hypothesis” has been challenging to prove.
A team of evolutionary anthropologists and engineers from the University of Vienna, the Konrad Lorenz Institute for Evolution and Cognition Research, and the University of Texas at Austin (USA) used a new approach to test this hypothesis. The researchers, led by Katya Stansfield and Nicole Grunstra from the Department of Evolutionary Biology, built a finite element model of the human pelvic floor spanning a range of surface areas and thicknesses and simulated its deformation in response to pressure. “Finite Element analysis allowed us to isolate the effect of pelvic floor geometry by controlling for other risk factors, such as age, number of births, and tissue weakness,” says Stansfield. This approach also enabled the team to model pelvic floor size across a broader range of variation than can be observed in the human population, “because natural selection may prevent the occurrence of such ‘extreme’ sizes precisely because of the disadvantages for pelvic floor functionality,” explains Grunstra.
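The intuition behind the geometry effect can be seen even in a much simpler analytical proxy than the authors’ finite element model: classical thin-plate theory for a clamped circular plate under uniform pressure, in which the centre deflection grows with the fourth power of the radius and falls with the cube of the thickness. The sketch below uses assumed material values and reports deflections relative to a reference geometry, since only the scaling is meant to be illustrative.

```python
# Analytical toy proxy (not the authors' finite element model): centre
# deflection of a clamped circular plate under uniform pressure,
#   w_max = q * a^4 / (64 * D),  with  D = E * h^3 / (12 * (1 - nu^2)),
# so deflection grows with radius^4 (area squared) and falls with thickness^3.
# Material values are illustrative assumptions; only relative scaling matters.
E, nu, q = 30e3, 0.49, 4e3      # Young's modulus [Pa], Poisson ratio, pressure [Pa]

def centre_deflection(radius_m, thickness_m):
    D = E * thickness_m ** 3 / (12 * (1 - nu ** 2))   # flexural rigidity
    return q * radius_m ** 4 / (64 * D)

reference = centre_deflection(0.045, 0.007)           # "typical" geometry (assumed)
for radius in (0.040, 0.045, 0.050, 0.055):           # smaller to larger pelvic floor
    for thickness in (0.005, 0.007, 0.010):           # thinner to thicker floor
        w = centre_deflection(radius, thickness)
        print(f"radius {radius*100:.1f} cm, thickness {thickness*1000:.0f} mm: "
              f"{w / reference:.2f}x reference deflection")
```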
As predicted by the pelvic floor hypothesis, larger pelvic floors deformed disproportionately more than smaller pelvic floors. “Our results support the notion that smaller pelvic floors — and thus smaller birth canals — are biomechanically advantageous for organ and fetal support despite their disadvantage for childbirth,” says Stansfield.
The researchers also found that thicker pelvic floors were more resistant to bending and stretching, which partly compensated for the increase in pelvic floor deformation as a result of increased surface area. So why did natural selection not result in a larger birth canal that eases childbirth, along with a disproportionately thicker pelvic floor that compensates for the extra deformation? “We found that thicker pelvic floors require quite a bit higher intra-abdominal pressures in order to undergo stretching, which is actually necessary during childbirth,” says Grunstra. The pressures generated by women in labor are among the highest recorded intra-abdominal pressures and they may be difficult to increase further. “Being unable to push the baby through a resistant pelvic floor would equally complicate childbirth, and so we think we have identified a second evolutionary trade-off, this time in the thickness of the pelvic floor,” concludes Grunstra. “Both the size of the birth canal and the thickness of the pelvic floor appear to be evolutionary ‘compromises’ enforced by multiple opposing selective pressures,” says co-author Philipp Mitteroecker.
Story Source:
Materials provided by University of Vienna. Note: Content may be edited for style and length.

Read more →