Friday, June 7, 2024: Israel's biggest hospital, Sheba Medical Center, launches a new 'City of Health' with medical AI and biotech at the forefront.
Critics ask whether medical AI can always diagnose medical issues correctly, and they raise concerns about the new biotech medicine responsible for the dangerous mRNA technology.
Can medical AI be trusted to diagnose a sick person safely and to give treatment recommendations? Of particular concern is the race to install AI, or so-called smart systems, in Israel's clinics and hospitals.
In concerning news, Sheba Medical Center, Israel's largest medical center, announced the deployment of a new accelerated, AI-powered cancer-diagnostics research platform for use in patient diagnosis, treatment and outcomes. Critics, however, have described many real potential dangers of applying artificial intelligence (AI) to the routine care of patients diagnosed with cancer.
Examples of these dangers include algorithm autonomy, avoidance of humanoid interfaces, perceived information-asymmetry gaps, obfuscated decision-making rationale, data absenteeism, technology tachyphylaxis, the uncanny valley, corrective justice, the epistemology of AI, and the need to establish an iterative and inclusive process.
AI will never estimate the value that a particular patient places on spending one more afternoon with a loved one, walking their dog, listening to a beautiful symphony, enjoying an intelligent conversation or an inspiring poem, laughing with an old friend, or seeing the smile of a happy child.
Truly personalized oncology will continue to rely on helping each patient make decisions that they judge to be most consistent with their own preferences, goals, values, and nature. As Hippocrates said, “It is more important to know what sort of person has a disease than what sort of a disease a person has.”
Data-related concerns, and the human biases that seep into algorithms during the development and post-deployment phases, affect performance in real-world settings and limit the utility and safety of AI technology in oncology clinics.
AI algorithms are only as good as the data and assumptions they are fed. Biased representation of patient populations and medical scenarios in the training data sets can lead to overfitting and inaccurate generalization by AI tools in the real world. Dataset shift, the divergence of real-world data distributions from the training set, can cause AI performance to drift over time and make its output less reliable.
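To make the dataset-shift point concrete, here is a minimal sketch, assuming a Python monitoring script and a synthetic lab-value feature (the feature, the distributions and the significance threshold are all illustrative, not any hospital's actual pipeline), of how a two-sample Kolmogorov-Smirnov test can flag when live patient data has drifted away from the training distribution:

```python
# Minimal sketch: detecting dataset shift with a two-sample
# Kolmogorov-Smirnov test. The feature, distributions and threshold
# are illustrative assumptions, not a real clinical pipeline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical lab value: training data drawn from one population...
train_feature = rng.normal(loc=5.0, scale=1.0, size=5000)
# ...while real-world patients have drifted to a different distribution.
live_feature = rng.normal(loc=5.6, scale=1.3, size=1000)

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Dataset shift detected (KS={statistic:.3f}, p={p_value:.2e}); "
          "model outputs may no longer be reliable.")
```

In practice such a check would run per feature and over rolling time windows; the point is only that drift is detectable, not that detecting it makes the system safe.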
A number of scientists, philosophers and technology experts at international academic institutions have published an article analyzing the dangers of AI. The authors, who come from academic institutions and technology bodies in the US, Australia and Madrid, published their findings in the Journal of Artificial Intelligence Research.
“It is noted that, due to this technological progress, we are experiencing a resurgence in the discussion of Artificial Intelligence as a potential disaster for humanity.”
The most obvious risk for Sheba Hospital is that AI systems will sometimes be wrong, and that patient injury or other health-care problems may result. If an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan, or allocates a hospital bed to one patient over another because it wrongly predicted which patient would benefit more, the patient could be injured.
Of course, many injuries already occur due to medical error in the health-care system, even without the involvement of AI. AI errors are potentially different for at least two reasons. First, patients and providers may react differently to injuries caused by software than to those caused by human error. Second, if AI systems become widespread, an underlying problem in one AI system might injure thousands of patients, rather than the limited number of patients injured by any single provider's error.
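The scale argument can be illustrated with a small simulation, offered only as a sketch under stated assumptions (the 2% human error rate, the 5% affected subgroup and the patient count are invented for illustration): independent human errors scatter across patients, while a single flaw in one shared model fails every affected patient in the same way at the same time:

```python
# Minimal sketch of the scale argument: independent human errors
# versus one correlated flaw in a shared model. All rates and counts
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_patients = 10_000

# Human clinicians: each patient faces an independent 2% error risk.
human_errors = rng.random(n_patients) < 0.02

# Shared AI model: a single bug mishandles every patient in one
# subgroup (say, 5% of patients with an unusual lab-unit encoding).
subgroup = rng.random(n_patients) < 0.05
ai_errors = subgroup  # the same flaw repeats for all of them

print(f"Independent human errors: {human_errors.sum()} patients, scattered")
print(f"One systemic model flaw:  {ai_errors.sum()} patients, "
      "all failing in the same way at once")
```

The human errors are uncorrelated and tend to be caught case by case; the model's errors are perfectly correlated, so one undiscovered flaw harms the whole subgroup before anyone notices a pattern.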
Another set of risks arises around privacy. The need for large datasets creates incentives for developers to collect such data from many patients. Some patients may worry that this collection violates their privacy, and lawsuits have been filed over data-sharing between large health systems and AI developers. AI can implicate privacy in another way: it can predict private information about a patient even though the algorithm was never given that information. (Indeed, this is often the goal of health-care AI.)
For instance, an AI system might identify that a person has Parkinson's disease from the trembling of a computer mouse, even if the person had never revealed that information to anyone else (or did not know it themselves). Patients might consider this a violation of their privacy, especially if the AI system's inference were available to third parties such as banks or life-insurance companies.
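As a hedged illustration of that kind of inference, the sketch below uses synthetic cursor data, an assumed 100 Hz sampling rate, and the roughly 4-6 Hz band often cited for Parkinsonian resting tremor; it estimates the dominant frequency of mouse jitter with an FFT and flags the tremor band. None of this reflects any real product, only the principle that a sensitive trait can be inferred from behavior alone:

```python
# Minimal sketch of inferring a sensitive trait from behavior:
# estimate the dominant frequency of cursor jitter via FFT and flag
# the 4-6 Hz band often cited for Parkinsonian resting tremor.
# The synthetic signal, sampling rate and band are assumptions.
import numpy as np

SAMPLE_RATE_HZ = 100  # assumed mouse-position sampling rate
t = np.arange(0, 10, 1 / SAMPLE_RATE_HZ)

# Synthetic cursor x-positions: slow drift plus a 5 Hz tremor plus noise.
positions = 0.2 * t + 0.5 * np.sin(2 * np.pi * 5.0 * t)
positions += np.random.default_rng(2).normal(scale=0.05, size=t.size)

# Remove the slow drift, then find the dominant frequency.
detrended = positions - np.polyval(np.polyfit(t, positions, 1), t)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(detrended.size, d=1 / SAMPLE_RATE_HZ)
dominant = freqs[spectrum[1:].argmax() + 1]  # skip the DC bin

if 4.0 <= dominant <= 6.0:
    print(f"Dominant jitter at {dominant:.1f} Hz: consistent with a "
          "tremor the user never disclosed.")
```

The person never typed a diagnosis anywhere; the signal alone is enough, which is exactly why such inferences raise privacy concerns when shared with third parties.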
AI is simply unable to provide the human interaction that is so important to people in need. What we are in fact witnessing at Sheba Hospital is dehumanization and the cold care of AI. Leaving aside the big issue of putting qualified humans out of work, humans have social and emotional needs that are not met by handing them over to AI, especially in a crisis situation.
Martin Blackham
Israel First TV Program
www.israelfirst.org