Sunday, March 3, 2024: Israel continues to push the globalist agenda of smart hospitals despite the dangers involved.
Hadassah Medical Centre is listed as one of the world's best smart hospitals; however, both doctors and medical staff warn of a dark side to smart hospitals: the loss of human intervention.
Hadassah Medical Organization (HMO) was named one of the world’s leading hospitals in oncology, cardiology and smart technologies, according to Newsweek’s 2024 global rankings. Newsweek ranks the world's leading smart hospitals, which, according to the ranking, employ the most advanced technologies in the provision of medical care, making use of, among other things, advanced virtual and imaging technologies, the latest digital systems, and decision-making based on artificial intelligence and robotics.
No mention was made in the Israel National News article of the resulting unemployment due to automation, which could mean that many administrative roles are made redundant.
A key part of the Hadassah smart hospital is Artificial Intelligence (AI). Of particular concern is the lack of empathy in patient and doctor interaction when AI is involved. Especially for the elderly and the most vulnerable patients, relying on technology as the interface of care can cause confusion and frustration, and can result in treatment plans not being properly understood, or in patient non-compliance.
Can Medical AI be trusted to safely diagnose a sick person and give treatment recommendations?
Smart hospitals depend on AI, and the most obvious risk is that AI systems will sometimes be wrong, with patient injury or other health-care problems as a result.
However, a number of scientists, philosophers and technology experts at international educational institutions have published an article analyzing the dangers of AI. The authors of the study, who come from academic institutions and technological bodies in the US, Australia and Madrid, published their findings in the Journal of Artificial Intelligence Research.
“It is noted that due to this technological progress, we are experiencing a resurgence in the discussion of Artificial Intelligence as a potential disaster for humanity.”
The most obvious risk for Hadassah Hospital is that AI systems will sometimes be wrong, and that patient injury or other health-care problems may result. If an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan or allocates a hospital bed to one patient over another because it predicted wrongly which patient would benefit more, the patient could be injured.
Of course, many injuries occur due to medical error in the health-care system today, even without the involvement of AI. AI errors are potentially different for at least two reasons. First, patients and providers may react differently to injuries resulting from software than from human error.
Second, if AI systems become widespread, an underlying problem in one AI system might result in injuries to thousands of patients—rather than the limited number of patients injured by any single provider’s error.
Another set of risks arises around privacy. The requirement for large datasets creates incentives for developers to collect such data from many patients. Some patients may be concerned that this collection violates their privacy, and lawsuits have been filed based on data-sharing between large health systems and AI developers. AI could implicate privacy in another way: AI can predict private information about patients even though the algorithm never received that information. (Indeed, this is often the goal of health-care AI.)
For instance, an AI system might be able to identify that a person has Parkinson’s disease based on the trembling of a computer mouse, even if the person had never revealed that information to anyone else (or did not even know it). Patients might consider this a violation of their privacy, especially if the AI system’s inference were available to third parties, such as banks or life insurance companies.
There are also risks involving bias and inequality in health-care AI. AI systems learn from the data on which they are trained, and they can incorporate biases from those data. For example, speech-recognition AI systems used to transcribe notes may perform worse when the provider is of a race or gender underrepresented in the training data.
AI is simply not able to provide the human interaction that is so important to people in need. What we are in fact witnessing at Hadassah Medical Centre is the de-humanization and cold care of AI. Leaving aside the big issue of putting qualified humans out of work, humans have various social and emotional needs which are not met by AI, especially in a crisis situation.
Personal care is the most important thing, and this does not seem the right environment for AI, to say the least. The fact is that here in Israel we are witnessing the relentless push by global elites to replace humans with AI.
Martin Blackham, Israel First TV Program, www.israelfirst.org