NHS AI test spots tiny cancers missed by doctors

By Zoe Kleinman, Technology editor

An AI tool tested by an NHS hospital trust successfully identified tiny signs of breast cancer in 11 women which had been missed by human doctors.

The tool, called Mia, was piloted alongside NHS clinicians and analysed the mammograms of more than 10,000 women. Most of them were cancer-free, but it successfully flagged all of those with symptoms, as well as an extra 11 that the doctors did not identify.

At their earliest stages, cancers can be extremely small and hard to spot, but, depending on their type, they can grow and spread rapidly. The BBC saw Mia in action at NHS Grampian, where we were shown tumours that were practically invisible to the human eye.

Barbara was one of the 11 patients whose cancer was flagged by Mia but had not been spotted by the hospital radiologists who studied her scan. Because her 6mm tumour was caught so early, she had an operation but needed only five days of radiotherapy. Breast cancer patients whose tumours are smaller than 15mm when discovered have a 90% survival rate over the following five years.

Barbara said she was pleased her treatment was much less invasive than that of her sister and mother, who had previously also battled the disease. She told me she met a relative who expressed sympathy that she had "the Big C". "I said, 'it's not a big C, it's a very little one'," she said.

Without the AI tool's assistance, Barbara's cancer would potentially not have been spotted until her next routine mammogram three years later. She had not experienced any noticeable symptoms.

Because it works instantly, tools like Mia also have the potential to cut the waiting time for results from 14 days to three, claims its developer, Kheiron.

None of the cases in the trial was analysed by Mia alone; each also had a human review. Currently, two radiologists look at each scan, but the hope is that one of them could one day be replaced by the tool, effectively halving each pair's workload.

Of the 10,889 women who participated in the trial, only 81 did not want the AI tool to review their scans, said Dr Gerald Lip, clinical director of breast screening in the north east of Scotland and the doctor who led the project.

AI tools are generally pretty good at spotting the symptoms of a specific disease if they are trained on enough data to enable those symptoms to be identified. This means feeding the programme as many different anonymised images of the symptoms as possible, from as diverse a range of people as possible. Getting hold of this data can be difficult because of patient confidentiality and privacy concerns.
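Kheiron has not published Mia's internals, so purely as an illustration, here is a minimal sketch of the kind of supervised training described above: fine-tuning a stock image classifier on anonymised, labelled scans. The dataset path, folder layout and hyperparameters are all hypothetical, and a real clinical system would involve far more, from data curation to regulatory review.

```python
# Illustrative only, NOT Kheiron's method: fine-tune a generic image
# classifier to flag suspicious mammograms, assuming a hypothetical
# dataset laid out as mammograms/train/0_normal/*.png and
# mammograms/train/1_cancer/*.png (ImageFolder labels them 0 and 1).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scans are greyscale; the backbone expects 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("mammograms/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V2")  # start from generic image features
model.fc = nn.Linear(model.fc.in_features, 1)     # single "suspicious" logit

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for images, targets in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images).squeeze(1), targets.float())
        loss.backward()
        optimiser.step()
```

The article's point about diverse data maps directly onto `train_set`: the model can only learn to recognise what its labelled images actually contain.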
Sarah Kerruish, chief strategy officer of Kheiron Medical, said it took six years to build and train Mia, which runs on cloud computing power from Microsoft, and that it was trained on "millions" of mammograms from "women all over the world".

"I think the most important thing I've learned is that when you're developing AI for a healthcare situation, you have to build in inclusivity from day one," she said.

Breast cancer doctors look at around 5,000 breast scans per year on average, and can view 100 in a single session.

"There is an element of fatigue," said Dr Lip. "You get disruptions, someone's coming in, someone's chatting in the background. There are lots of things that can probably throw you off your regular routine as well. And on those days when you have been distracted, you go, 'how on earth did I miss that?' It does happen."

I asked him whether he was worried that tools like Mia might one day take away his job altogether. He said he believed the tech could one day free him up to spend more time with patients.

"I see Mia as a friend and an augmentation to my practice," Dr Lip said.

Mia isn't perfect. It had no access to patient history, so, for example, it would flag cysts which had already been identified by previous scans and designated harmless.

Also, because of current health regulation, the machine-learning element of the AI tool was disabled: it could not learn on the job and evolve during its use. Every time it was updated, it had to undergo a new review.

The Mia trial is just one early test, by one product, in one location. The University of Aberdeen independently validated the research, but the results of the evaluation have not yet been peer-reviewed.

The Royal College of Radiologists says the tech has potential. "These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care," said Dr Katharine Halliday, president of the Royal College of Radiologists.

Dr Julie Sharp, head of health information at Cancer Research UK, said the increasing number of cancer cases diagnosed each year meant technological innovation would be "vital" to help improve NHS services and reduce pressure on staff. "More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients," she added.

There are other healthcare-related AI trials going on around the UK, including one by a firm called Presymptom Health, which is analysing blood samples for signs of sepsis before symptoms emerge, but many are still at an early stage, without published results.


The abortion clues that can hide on your phone

After the US Supreme Court overturned citizens' constitutional right to abortion, there has been concern about data protection, particularly in the 13 states which have already moved to make ending a pregnancy illegal. But what sort of data might incriminate someone, how could the authorities get hold of it, and what are the tech firms doing?

Digital traces

Gina Neff, professor of technology and society at the University of Oxford, tweeted the day after the ruling: "Right now, and I mean this instant, delete every digital trace of any menstrual tracking." Her message has so far received more than 200,000 likes and been retweeted 54,000 times.

Period trackers are used to help women predict when their next period is likely to be, and are often used either to try to prevent pregnancy or to try to conceive.

Like a number of other high-profile apps, Natural Cycles, which is billed as a digital form of contraception, insisted last month that all the data it stored was "safe and will be protected". However, on Monday it told the BBC it is working on "creating a completely anonymous experience for users". "The goal is to make it so that no-one, not even Natural Cycles, can identify the user," it said.

That sounds like it is considering encryption. Speaking of which, what about messaging services, that confidential exchange between two close friends which feels so private at the time?

Security experts and privacy campaigners generally prefer end-to-end encrypted messaging services, such as WhatsApp and Signal, for discussing sensitive issues (Telegram is not encrypted by default, although it can be). The firms which run them cannot see the content of the messages themselves, and do not receive or store them in readable form; only the sender's and recipient's devices are able to decode them.
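For the technically curious, that "only the endpoints can decrypt" property rests on public-key cryptography. The toy sketch below uses the PyNaCl library; real messengers layer the far more elaborate Signal protocol (key ratcheting, forward secrecy) on top of the same basic idea, and the names and message here are purely illustrative.

```python
# Toy end-to-end encryption with PyNaCl: a relaying server sees only
# ciphertext, because private keys never leave the two devices.
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"an entirely illustrative message")

# Whoever carries `ciphertext` between them cannot read it; only Bob's
# private key, paired with Alice's public key, can open it.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b'an entirely illustrative message'
```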
Can my device be seized?

However, this is only useful if those devices are not themselves seized or unlocked by somebody else. Generally in the US, the police need a warrant to search an electronic device such as a phone or laptop, just as they would to search a house. Broadly speaking, the protection here comes under the Fourth and Fifth Amendments.

There are some exceptions, though. Digital rights group the Electronic Frontier Foundation (EFF) says US police have a right to search without a warrant if they "have probable cause to believe there is incriminating evidence in the house, or on an electronic device that is under immediate threat of destruction".

Under the Fifth Amendment, which protects an individual's right not to incriminate themselves, a person can refuse to unlock a device even if it is taken, but the reality is blurry, according to various lawyers. "Courts have reached conflicting conclusions as to whether and when the compelled decryption of a password- or biometric-identifier-protected device runs afoul of the Fifth Amendment," the Congressional Research Service wrote in a 2020 report.

The power of subpoena

And even if the device itself is not seized, a subpoena from the authorities to the tech firms, asking for an individual's data, is a powerful tool.

Giants like Google and Apple not only run back-up and cloud services for their customers using their own storage, but also collect their own separate user data, including internet activity and location. Google says that even after something has been deleted by a user and is therefore no longer visible to them, such as a browser history, some of it may still be retained "to comply with legal or regulatory requirements".

If these firms receive an official demand, they can challenge it, but the pressure is on them to comply. In 2021, the New York Times reported that in the first six months of 2020, Apple challenged only 4% of requests for customer account data, and generally complied with 80-85% of them. According to Google's transparency report, it supplied "some data" in 82% of the requests for information it received in the first six months of 2021. Of almost 51,000 cases, 20,701 were subpoenas and 25,077 were search warrants.

Tech firms tight-lipped

Is this the time for tech firms to reconsider their data practices? Last month, a number of senior members of the US Congress, including Elizabeth Warren and Bernie Sanders, signed an open letter to Google asking it to collect and store less data about its users, including location information, out of concern that it could be used to bring about abortion prosecutions. "No law requires Google to collect and keep records of its customers' every movement," they wrote.

So far, the tech firms have not commented on whether they plan to make any changes to the way in which they collect and manage customer data in light of the ruling. The BBC has asked them for this information.

What many large US firms, including Facebook owner Meta as well as Disney and Amazon, have said is that they will fund expenses for employees who have to travel to another state for medical care which is not available where they are, including abortion.

There is some concern that people who live in a state where abortion is banned, but travel out of state to have one, may face prosecution when they return. It is unclear whether this could happen; it is not routinely the case with other laws which vary from state to state, such as those on gambling.

Dr Stephanie Hare, author of the book Technology is Not Neutral, says that while the companies' commitment is "a welcomed first step", it is not enough. "That's only going to help a very small amount of people, assuming some of them want to share this information with their employer in the first place," she said. "What we need to know is what these firms are going to do to limit data collection on all users, and how they can prevent user data from being used against them in their healthcare choices."

So how can you protect your data if you are worried? The EFF has published a privacy guide which includes this advice:

- run a separate browser, phone number and email address for reproductive matters
- minimise location services
- when deleting data, make sure the deleted folder is also emptied

As for researching abortion online, Prof Alan Woodward, from the University of Surrey, believes it is unlikely that law enforcement will speculatively begin to seek out this sort of personal data.
"They're not likely to be going after people who are thinking about having an abortion," he said. "But if they are gathering evidence after the event, if they have arrested someone, that evidence could then include browser history, emails and messages."


Google AI tool can help patients identify skin conditions

Google has unveiled a tool that uses artificial intelligence to help spot skin, hair and nail conditions, based on images uploaded by patients.

A trial of the "dermatology assist tool", unveiled at the tech giant's annual developer conference, Google IO, should launch later this year, the firm said. The app has been awarded a CE mark for use as a medical tool in Europe, and a cancer expert said such AI advances could enable doctors to provide more tailored treatment to patients.

The AI can recognise 288 skin conditions, but is not designed to be a substitute for medical diagnosis and treatment, the firm said. It has taken three years to develop and has been trained on a dataset of 65,000 images of diagnosed conditions, as well as millions of images showing marks people were concerned about and thousands of pictures of healthy skin, in all shades and tones. As well as using images, the app requires patients to answer a series of questions online.

It is based on previous tools developed by Google for learning to spot the symptoms of certain cancers and tuberculosis. Currently, none of these tools is approved as an alternative to human diagnosis.

Google says there are some 10 billion searches for skin, hair and nail issues on its search engine every year.

Dermatology Assist has not yet been given clearance by the US Food and Drug Administration (FDA), but a similar machine-learning model built by British firm Optellum was recently approved by the FDA for use as an assistant in the diagnosis of lung cancer.

Professor Tim Underwood, head of cancer sciences at the University of Southampton, said such tools had the potential to provide more tailored treatments. "The application of AI, both in cancer and in other areas of medicine, informs the conversation around what the diagnosis might be and what treatment to offer to an individual," he said.

This is not the first AI in healthcare, but it is significant for putting the tool in the hands of the public rather than doctors. Google presents the AI as better than searching for the information yourself, rather than as a substitute for medical advice. Whether people use it like that is another matter; we already know the internet is a source of both medical panic and false reassurance. How people might use the AI has fed into a design that aims to prioritise safety.

Medical tools like this, even those with an AI at the helm, have to strike a balance: do you focus on catching everyone who has a disease, or on ruling out those who are healthy, to avoid unnecessary worry or treatment? One always comes at the cost of the other. The doctors and developers involved told me the AI has been optimised to avoid missing "alarming or scary" conditions such as skin cancer. The flip side is that some people will be advised to get something checked that will turn out to be benign.
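In practice, that balance is usually set by the decision threshold applied to a model's output score. A small illustration with made-up numbers shows the trade-off the developers describe: lowering the threshold catches every true case (higher sensitivity) at the price of flagging some healthy people (lower specificity).

```python
# Hypothetical model scores and true labels; a real model's outputs
# would replace these. 1 = disease present, 0 = healthy.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.25, 0.15, 0.10, 0.05, 0.03]
labels = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

def sensitivity_specificity(threshold):
    """Sensitivity = true cases caught; specificity = healthy people cleared."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

for threshold in (0.5, 0.2):
    sens, spec = sensitivity_specificity(threshold)
    print(f"threshold={threshold}: sensitivity={sens:.2f}, specificity={spec:.2f}")

# Lowering the threshold from 0.5 to 0.2 catches every true case here
# (sensitivity 0.60 -> 1.00) but starts flagging healthy people too
# (specificity 1.00 -> 0.80): the "check out something benign" flip side.
```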
