Problems Of Using Artificial Intelligence In The Field Of Medicine: Socio-Philosophical Analysis

Abstract

Today, medicine is one of the most strategic and promising areas for the effective implementation of artificial intelligence (AI) systems. The article presents an overview of AI systems in specific sectors of medicine in terms of their potential advantages and disadvantages and their ethical and social impact. Without a deeper understanding of the ethical and social consequences of the use of AI, new technologies can harm the very people they are designed to help. Having analysed the current state of research and its implementation in practical medical activity, we have identified the main directions of AI use in the field of medicine, as well as the tasks for which these systems are employed. Our research makes clear that the enormous opportunities opening up in the provision of vital services give rise to several ethical problems, ranging from the interests of government structures and economic and social questions to issues of the education and upbringing of medical personnel, whose activities are now inextricably linked with the use of AI. This actualizes society's most important requirements for the development of AI: its safety, its efficiency, and its promotion of human well-being and the goals of human development.

Keywords: Artificial intelligence systems, artificial intelligence, ethics, healthcare, robotics, telemedicine

Introduction

Recently, the number of developments using AI in medicine has been growing, since these systems make it possible to work with huge amounts of data, to analyse and process them, to help in making the most accurate diagnosis, to select a treatment regimen, and to take into account the characteristics of each patient. The use of AI in medicine helps specialists detect various diseases at earlier stages and also contributes to the emergence and introduction of fundamentally new drugs into the practice of treatment. However, it should be noted that, along with significantly simplifying and optimizing the work of medical workers, the use of AI in medicine gives rise to several significant problems whose resolution requires special attention and careful, responsible use of these systems by specialists.

A recent systematic mapping study by Indian scientists reviewed the current state of research on the use of AI, covering 2421 scientific papers published between 2013 and 2019 (Mehta et al., 2019). As follows from the article, the branches of medicine leading in the number of publications on this topic are oncology (352), neurology (271) and cardiology (215). The most detailed classification of artificial intelligence systems in medicine and healthcare, with descriptions of specific examples of their use and an assessment of their availability, is presented in the work of Gómez-González et al. (2020), "Artificial Intelligence in medicine and healthcare".

Telemedicine (the remote provision of medical services and the interaction of medical workers with each other using telecommunication technologies) has recently gained particular relevance. Continuous monitoring of patients' conditions, including through AI-equipped wearable devices that together make up a "home hospital", erodes the boundary between the medical institution and places of residence and work. Systems of this kind are already taking medicine beyond the clinic into the space of a person's daily life (Bryzgalina, 2019). There are apps on the market that use AI for personalized health assessments and home care tips. For example, Alder Hey Children's Hospital in Liverpool is working with IBM Watson to create a "cognitive hospital" that includes an application to facilitate the interaction of medical staff with patients (Alder..., 2017).
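
To make the idea of a "home hospital" more concrete, the following minimal sketch shows threshold-based monitoring of wearable-style vital signs. It is purely illustrative: the fields, thresholds and readings are hypothetical and are not drawn from any of the systems mentioned above.

```python
# A minimal sketch of threshold-based "home hospital" monitoring.
# All fields, thresholds and readings are hypothetical illustrations,
# not taken from the article or from any real product.
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    heart_rate: int   # beats per minute
    spo2: float       # blood oxygen saturation, %

def check_reading(r: VitalReading) -> list[str]:
    """Return alert messages for readings outside illustrative safe ranges."""
    alerts = []
    if not 40 <= r.heart_rate <= 130:
        alerts.append(f"{r.patient_id}: heart rate {r.heart_rate} bpm out of range")
    if r.spo2 < 92.0:
        alerts.append(f"{r.patient_id}: SpO2 {r.spo2}% below threshold")
    return alerts

# Example: one simulated reading that should trigger both alerts.
for msg in check_reading(VitalReading("patient-001", heart_rate=35, spo2=88.5)):
    print(msg)
```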

In oncology, AI methods are most often used in assessing the genetic risks of developing various types of cancer, analysing the immune status of patients with non-lymphoblastic leukaemia, analysing medical images and aetiology, predicting long-term treatment results, performing differential diagnosis of cancer, and analysing laboratory blood parameters in expert and clinical oncology (Esteva et al., 2017). AI can detect and report the moment when cancer cells develop resistance to antitumour drugs. This ability helps in the development of new drugs, in predicting their tolerability, and in optimizing chemotherapy regimens.

In cardiology, AI is most widely used in hemodynamics, electrocardiography and echocardiography. For example, the Ultromics system, tested at the John Radcliffe Hospital in Oxford, analyses echocardiography images of heart movement patterns to diagnose coronary artery disease (Next-generation..., 2021).

Recently, Dawes and colleagues published an algorithm for analysing three-dimensional patterns of systolic heart movement, which allowed AI to predict the results of therapeutic interventions in patients with pulmonary hypertension with high accuracy. The medical data of 250 patients were used for the study; the program tracked more than 30,000 points in their hearts at the moments of contraction of the heart muscle (Dawes et al., 2017). This made it possible to create a virtual three-dimensional model of the heart for each patient and to determine exactly which of its features can provoke early death or right ventricular failure. AI is also often used at the post-processing stage to classify images of the heart and to obtain the highest-quality images, which makes it possible to avoid invasive coronary angiography and to reduce the radiation dose during CT. In MRI, AI makes it possible to automatically segment the resulting image.
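
As an illustration of the general approach behind such studies (and emphatically not a reconstruction of Dawes and colleagues' actual pipeline), the sketch below trains a classifier on synthetic "motion feature" vectors for 250 patients and evaluates it by cross-validation; every number and feature in it is invented for demonstration.

```python
# A minimal, synthetic sketch of outcome prediction from per-patient
# motion features. Not the authors' pipeline: data and signal are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_motion_features = 250, 100  # the real study tracked ~30,000 points;
                                          # we use 100 synthetic features for brevity

X = rng.normal(size=(n_patients, n_motion_features))   # motion descriptors
risk = X[:, :5].sum(axis=1)                            # synthetic risk signal
y = (risk + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)  # outcome label

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```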

AI is also used to predict the development of mental disorders and makes it possible to monitor treatment effectively. In the screening of neurological conditions, AI tools are being developed that analyse speech patterns to predict psychotic episodes, as well as to identify and track the symptoms of pathological neurological conditions such as Parkinson's disease (Bedi et al., 2015).

It is expected that, over time, AI can be successfully used in the development of new drugs. In December 2020, DeepMind announced that its AlphaFold system had solved the so-called "protein folding problem": the system was able to reliably predict the three-dimensional shape of a protein (Metz, 2020). AI is also used to check the compatibility of drugs and to analyse the genetic code (Alekseeva, 2017).

So, the possibilities of using AI are great: medicine gets a chance at automated, fast, rational diagnosis and treatment. At the same time, several problems that hitherto seemed hypothetical have become quite real.

Problem Statement

Today, there is no generally accepted definition of AI. In a broad sense, the term means an ensemble of rationally logical, formalized rules developed and encoded by humans, which organize processes that allow imitating intellectual structures, producing and reproducing purposeful actions, and carrying out subsequent coding and making instrumental decisions independently of the person (Rezaev & Tregubova, 2019, p. 186).

It is customary to distinguish universal AI (strong AI, general AI) from narrow AI (weak AI, applied AI). Universal AI is a hypothetical AI capable of solving any intellectual task; discussion of the possibility of implementing it remains outside the scope of our work. A narrow AI is a system designed to solve a specific intellectual task (for example, an image recognition system). Currently, most AI systems are narrow, because they are only able to perform certain tasks or solve pre-defined problems. Modern medical AI systems are considered "weak AI" because they can successfully perform only "narrow" therapeutic tasks and cannot "interpret" context or "generate" the most characteristic human traits (creativity, emotions).

As the main subject of the study, we identified the following issues: the problem of the adequate use of AI, the problem of excessive trust in AI, the problem of the transformation of the doctor-patient relationship when AI is used, the problem of the lack of adequate feedback, the protection of privacy (the problem of confidentiality), and the problem of the availability of AI technologies.

Research Questions

Huge successes in the introduction of AI into medicine are accompanied by the actualization of several ethical problems. The most significant and dangerous problem today seems to us to be the commercialization of AI and its use for political purposes. Who will own and control AI technologies? By whom will they be trained? Who will finance projects related to the introduction of AI, and to what extent will these organizations and individuals be interested in the well-being of the population, rather than only in obtaining possible benefits and profits or in satisfying the interests of a narrow circle of people? This is a short list of the primary questions facing us.

There is a risk that user data may be transferred to companies that use AI technologies to market goods and services or to create products based on forecasts, and that such data will be used, for example, by insurance firms or large technology companies. For example, in May 2017 Google announced a strategic partnership with the University of Chicago and UChicago Medicine in the USA (Wood, 2017). The goal of the partnership was to develop new machine learning tools for predicting medical events (for example, emergency hospitalization). To achieve this goal, the university provided Google with access to hundreds of thousands of "de-identified" patient medical records. One of the university's patients, Matt Dinerstein, filed a class-action lawsuit against the university and Google in June 2019 on behalf of all patients whose confidential information was disclosed (Shachar et al., 2019). Thus, it became obvious that AI technologies predicting the appearance of various diseases can generate excessive stigmatization of individuals and expose people to aggressive marketing by pharmaceutical companies and other commercial medical services.
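
One standard way to make the re-identification risk of "de-identified" records concrete is a k-anonymity check: if some combination of quasi-identifiers is unique to a single record, that patient is potentially re-identifiable. The toy sketch below is our own illustration, with hypothetical columns and records, not data from the case described above.

```python
# A minimal sketch of a k-anonymity check, one standard way to quantify
# re-identification risk in "de-identified" records. The columns and toy
# records are hypothetical, not from any real dataset.
from collections import Counter

records = [
    {"zip": "60637", "age_band": "30-39", "sex": "F"},
    {"zip": "60637", "age_band": "30-39", "sex": "F"},
    {"zip": "60615", "age_band": "70-79", "sex": "M"},  # unique combination
]
quasi_identifiers = ("zip", "age_band", "sex")

def k_anonymity(rows, keys):
    """k = size of the smallest group sharing the same quasi-identifier values."""
    groups = Counter(tuple(r[k] for k in keys) for r in rows)
    return min(groups.values())

k = k_anonymity(records, quasi_identifiers)
print(f"k = {k}")  # k = 1: at least one record is uniquely identifiable
```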

During the COVID-19 pandemic, data collection activities and the creation of digital identities capable of storing and transmitting such information intensified. In China, a QR-code system grew out of a digital payment platform and expanded enormously, creating a digital identity infrastructure for storing data on citizens' health. The system was created by Alipay, a platform for mobile and online payments (Mozur et al., 2020). It is obvious that, in the event of a transfer of the collected data to state institutions, there is a risk of it turning into a form of additional control over people's actions and even of punitive measures being taken by these institutions against individuals. Thus, at the beginning of 2021, the Singapore government admitted that data obtained from the COVID-19 contact-tracing application (TraceTogether) could be accessed "for criminal investigation purposes", despite previous assurances that this would not be allowed (Illmer, 2021).

Today, there is a risk of overestimating the benefits of using AI, raising expectations of its potential capabilities and, as a result, introducing untested products and services that have not been thoroughly evaluated for safety and effectiveness. For example, digital technologies developed in the early stages of the COVID-19 pandemic did not always meet objective performance standards that would justify their use (Gasser et al., 2020). AI technologies were introduced as a response to the pandemic without sufficient evidence and clinical trials. Thus the imperceptible transfer of decision-making to AI continues, while its consequences and risks have not been calculated at all.

Some AI technologies, if not applied carefully, can exacerbate existing inequalities in healthcare, including those related to ethnicity, socioeconomic status and age. Collecting data can be difficult because of language barriers, and mistrust can lead people to provide incorrect or incomplete information.

One of the main problems in the implementation of automated systems is that the system does not provide the necessary feedback. The need for adequate feedback stems from the increased autonomy of modern systems. It is extremely important for specialists using automated systems to be able to obtain information about changes in their status and behaviour at any time. To avoid undesirable emergencies, an operator working with such a system should be informed about the basic principles and features of its functioning, know the algorithms of his actions in the event of such situations, understand the input-output relationship, and so on. Otherwise, he loses the ability to control the state and behaviour of the system. Possession of this information and the necessary skills, on the contrary, allows a person to predict the behaviour of automated systems, to assess the need to monitor certain parameters, and to build an algorithm of action in case of system errors. It is especially problematic today to understand systems that process millions of inputs, whose forecasts can be very accurate precisely because they operate on such large amounts of information. It is precisely because of the incomprehensibility and non-interpretability of the principles of operation of such systems, and because of the lack of necessary feedback, that today, despite their high accuracy, they resemble a "black box" more than human assistants in solving complex tasks and problems.
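
There are techniques that partially open the "black box" and could serve as one form of such feedback. The sketch below shows permutation importance, a generic model-inspection method; it is our illustrative example on synthetic data, not a technique prescribed by the systems discussed here, and the feature names are hypothetical.

```python
# A minimal sketch of one common way to give users feedback about a
# "black box": permutation importance. Feature names are hypothetical;
# the data are synthetic, not clinical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["age", "blood_pressure", "glucose", "noise"]
X = rng.normal(size=(300, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # depends on two features only

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report which inputs actually drive the model's output.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda p: -p[1]):
    print(f"{name:>15}: {score:.3f}")
```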

Thus, automated systems often assume control over the management of processes in critical situations, when a person is not ready for adequate actions or for various reasons is not able to take them. In these cases, at first the system, without notification or signs of failure, begins to curtail the execution of the specified operations and then, if the specialist cannot react properly, unexpectedly ceases all activity. When the system cannot cope with the difficulties encountered or is about to take extreme actions, it must hand control back to the user, notifying him step by step. This will allow him to detect, localize and, possibly, promptly eliminate the problems that have arisen. Of course, we must not forget the so-called phenomenon of "inert knowledge", about which Nadine Sarter and David Woods wrote. We are talking about those cases when operators, even possessing the necessary knowledge about the operation of the system, could not apply this knowledge in practice. The key to solving this problem is training. Of course, standard, habitual training is extremely important, in which some limited time is given to master the necessary knowledge and skills. But an equally important component of specialist training is the opportunity to study in the real conditions of the practical use of automated systems. The principles of such systems' operation are extremely difficult to understand, so it is almost impossible to master them within a fixed training interval (Sarter & Woods, 1995). Sarter and Woods also noted that feedback mechanisms in the operation of automated systems should be designed so as not to cause unnecessary panic in the operator. They advise against intrusive voice alerts, excessive numbers of false alarms, and alerts that are too quiet to be noticed. The development of adequate feedback therefore requires a thorough study of human perceptual capabilities.

One such automated program is used in the diagnosis of oncological diseases. Based on the analysis of a data array, it can indicate possible lesions in the patient's body, thereby preventing the disease from escaping the doctor's field of view. However, hints can also have the opposite effect, when a doctor, relying solely on the results of an automated system, may miss the focus of the disease. In a study conducted in 2013, a group of London specialists argued that computer tools can worsen important decisions in the diagnosis of cancer (Povyakalo et al., 2013). They analysed 180 mammograms, which were interpreted by 50 specialists with and without a computer. The results showed that the computer-aided detection system helped less competent specialists but interfered with more competent ones. This may be explained by the fact that a person's attention is most focused when there are unusual and alarming signals. The more reliable automated systems become, the more a person is inclined to trust them, the lazier he becomes, and the weaker his desire to show independence and curiosity. Parasuraman, professor at George Mason University in Fairfax, director of programs on Human Factors and Applied Cognition in the Department of Psychology and director of the Center for Excellence in Neuroergonomics (CENTEC), wrote that highly reliable automated systems have drawbacks of their own, such as automation complacency, the phenomenon of "acquired carelessness" (Parasuraman & Manzey, 2010). In turn, this may lead to the omission of an abnormal condition or a systemic malfunction in the patient's body.

It is even possible that AI could turn into a "superintelligence" with which there is no way to negotiate, and the only way out would be to turn it off.

The problem of excessive trust was highlighted as another significant factor leading to operators' inability to detect system failures or undesirable system behaviour. Excessive trust can be characterized as the development of a false sense of security and the transfer of responsibility to a system that is reliable but may well get out of control. A person ceases to doubt the autonomous system, which projects the image of a very reliable, competent and highly intelligent partner whose work seemingly requires no constant monitoring.

Trust in automated systems is closely related to the issue of responsibility. In this context, it is appropriate to recall the accidents associated with the radiation therapy devices Therac-25 (1982) and Sagitar-35 (1991), whose use exposed patients to excess doses of radiation during medical procedures (Leveson, 1995). The investigation of these incidents, which caused significant harm to patients' health, identified as one of the main causes a software design lacking adequate feedback: the software had no self-checking procedures and no ability to correct erroneous inputs in emergencies. It turns out that AI-controlled systems may exhibit errors, for example, ones introduced during their creation. At the same time, it is not always clear why the system chose this or that solution. The group of experts conducting the investigation also pointed to the problem of excessive trust, i.e., operators' overconfidence in the reliable and correct operation of the system. Reliability, in the end, came to be identified with safety, which in turn gave rise to excessive self-confidence among personnel working with automated systems using AI. However, the ability to take responsibility for one's actions, today as always, is an essential feature of the human being alone. Consequently, the priority becomes the task of providing the human being not simply with reliable equipment, but with equipment that embeds a human orientation: the priority of human interests, predictability, accountability and reliability in use. R. Belyaletdinov wrote that the individual should not become a hostage of an already predetermined future, but an active agent of change, responsible for his work (Belyaletdinov et al., 2014).

The introduction of AI systems has also significantly changed the nature of doctor-patient interaction. Between them there is now a third party, an intermediary in the form of a technical device. AI systems in modern medicine are assigned a huge share of medical duties, which creates the problem of excessively strict standardization of treatment procedures and reduces the role of the specialist himself and the importance of doctors' clinical thinking (Vvedenskaya, 2020). Applications are designed to facilitate the doctor's work: they take in new data and issue treatment recommendations, invoices and other documents. But over time the doctor begins to rely more and more on computerized solutions and to trust his own experience and knowledge less and less. Here AI substitutes for the specialist and also contributes to an increase in his earnings, since in these systems the cost of each procedure is added to the total bill. An economic interest is formed in the doctor, which is gradually able to displace the priority of humanistic incentives.

The introduction of AI systems into medicine has, among other things, given rise to a phenomenon that Professor Timothy Hoff called "deskilling". Deskilling is characterized by a decrease in the competence of specialists and in the quality of medical knowledge, as well as by an impersonal and stereotypical attitude towards patients. Hoff, based on the results of his research, concluded that the standardized method of electronic recommendations worsens the understanding of the patient's condition and undermines the doctor's ability to make informed decisions about diagnosis and treatment (Hoff, 2011). The problem of deskilling is closely related to the subsequent loss of work and the loss of the doctor's identity as a specialist. If the system starts doing most of the work for the doctor, the specialist will gradually turn into a technician who can only service the machine. But we must not forget or ignore the fact that only a human doctor is capable of understanding, genuine compassion, empathy and a responsible attitude to the health and well-being of the patient. The machine is alien to the intuitive search for a solution to a problem and to the manifestation of empathy and social sensitivity; it cannot negotiate and overcome differences, convince and take care of other people, all of which is so necessary and important when it comes to human life and health.

Purpose of the Study

The purpose of our research is a socio-philosophical analysis of the current situation of the use of AI in medicine. To this end, we identified the main directions of the introduction of AI into medical practice and the socio-ethical problems this gives rise to, and searched for possible ways to eliminate them. Along with the possibilities, attention must be paid to the attendant threats, because, possessing autonomy, AI can put a person in a hopeless situation. In other words, the study aims to identify and analyse the problems and dangers that may arise when AI is used in medicine and healthcare: the problems of commercialization, the availability and confidentiality of information, the appearance of system errors, and the transformation of the relationship between doctor and patient. As AI develops, the degree of its influence increases, compressing the subjective world of the person and encroaching on life itself. But no logical AI scheme can dominate, because human reality is not exhausted by such schemes. Therefore, the authors believe that society and man should determine how to solve the problems that may arise with the use of AI in medicine long before they arise in reality.

Research Methods

The use of AI is studied by various fields of science: medicine, the philosophy of science and technology, bioethics, economics, and law. The early stage of AI development was characterized by a narrowly disciplinary approach, with fairly clear methodological and ethical aspects and practical steps for the development of technologies (for example, the Turing machine). But the modern specialization of AI and its latest results have revealed a noticeable discrepancy between theoretical research in the philosophy and methodology of artificial intelligence and the practical application of AI. Such a discrepancy requires, in our opinion, the development of special interdisciplinary programs and the resolution of the acute issues that arise. The approaches discussed in this article can help realize the potential of artificial intelligence while addressing the problems of its use in medicine. We relied on a human-oriented approach instead of a technology-oriented one. The authors also share the views of the leading experts in the field of automation, Parasuraman and Manzey (2010), who are concerned about the changes that the introduction of automation brings to labour and to the worker himself. They argue that subordination to automated systems and a biased (overly trusting) attitude towards them, which increases with increasing "reliability", generate profound changes in the essence of the person as a working being and cut him off from the very process of cognition and, therefore, from a deeper understanding of the world.

Findings

Modern medicine is transforming before our eyes. Automation plays an ever greater role in the provision of medical services. The role of the person in such a system is also changing, and these changes are often negative. How can this be avoided? What principles should be followed by those who develop and use automated systems with AI for medical purposes?

In our opinion, the focus today, as always, should be the person, not the technology. A person should not be turned into maintenance or technical staff, a kind of appendage to an automated system. The latter should be perceived as a team player: certainly necessary, but neither dominant nor autonomous. In other words, the process of designing, creating, training and applying AI systems needs, first of all, a humanistic component, whose fundamental principles were formulated by Billings (1991).

The first principle is transparency. A person should always be involved and informed about the current actions, states and behaviour of the system. In other words, automation should be transparent and observable for developers, healthcare professionals, patients, users and regulators. Observability, or transparency, here means interaction between a human user who knows where, how and when to look for the necessary information and a system that structures the available data to ensure the user's successful work. To this end, before AI technologies are deployed, all the necessary background information on the goals and features of the future system should be announced and published, making possible a public discussion of its capabilities and the prospects for its practical application. A clear and transparent specification of the tasks the system can perform, and of the conditions under which it can achieve the desired performance, should be available to the public. Developers should be responsible for ensuring that the AI system they create is capable of solving the assigned tasks, and that the project they propose will be implemented in accordance with the stated goals and by properly trained specialists. We would like to emphasize that responsibility also entails the application of so-called "human guarantees": systems for the evaluation, by patients and clinicians, of the development process, the system's results and the consequences of introducing AI technologies. In other words, the entire process of creating and using automated systems should be carried out under the vigilant supervision of the public and its authorized governing bodies, and be fully accountable to them.
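
One possible way to record such a "clear and transparent specification" in machine-readable form, in the spirit of published "model cards" for machine learning systems, is sketched below; all names and values are hypothetical illustrations, not a standard mandated by Billings or by any regulator.

```python
# A minimal sketch of a machine-readable transparency specification,
# loosely in the spirit of "model cards". All field values are
# hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class SystemSpecification:
    name: str
    intended_use: str
    excluded_uses: list[str]
    evaluated_population: str
    reported_performance: dict[str, float]
    responsible_party: str

spec = SystemSpecification(
    name="example-mammography-triage",
    intended_use="Second reader for screening mammograms",
    excluded_uses=["sole reader", "diagnosis without clinician review"],
    evaluated_population="screening cohort, ages 45-74",
    reported_performance={"sensitivity": 0.87, "specificity": 0.91},
    responsible_party="named clinical deployment team",
)
print(spec.intended_use)
```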

Such an approach largely reveals the essence of the next most important principle of the human-oriented approach: the principle of autonomy. In the field of medicine, autonomy means a person's ability to control the healthcare system and the methods of solving medical problems that are proposed for implementation and use in practice.

The introduction of telemedicine has undoubtedly ensured the maximum availability of high-tech medical care. Nevertheless, one should not forget that the use of this technology also carries possible risks. It is not always possible to fully replace a doctor with his digital counterpart. As Leongard (2018), an American futurologist, wrote, "to be human means to possess qualities that (shortly) we cannot calculate, measure, algorithmically determine, imitate and fully understand. What makes us human has no mathematical basis and cannot be reduced to a purely chemical and biological nature" (p. 63).

It is important for the patient to interact with a living person, a doctor, and to exchange with him not only information but also energy and emotions. The intuitive component, the ability to sense the patient's psychological state, is very important; it helps doctors in the diagnosis and treatment of diseases. A doctor's formation of the patient's positive emotional attitude to the measures taken for his recovery can also have a beneficial effect on the course of treatment.

The next principle is the promotion of people's well-being and of the goals of society's development. Artificial intelligence technologies should cause neither moral nor physical harm to people. Developers of artificial intelligence technologies should be guided in their activities by the regulatory requirements for safety, accuracy and efficiency formulated at the legislative level in any modern state. They are obliged to ensure that the use of AI and computing systems does not harm the individual or society as a whole. We are primarily concerned with the "humanistic ideology ... considering a person as a supreme goal, an end in itself and value" (Oreshnikov & Shkerina, 2017, p. 8), which should lie at the very core of AI.

Equally important is the principle of ensuring inclusiveness and fairness in the use of AI technologies. The latter should be used as widely, rationally and fairly as possible in society, and access to them should be open and independent of age, gender, income level, race, ethnicity, sexual orientation, abilities or any other characteristics of people. AI must not become yet another factor in social life exacerbating existing inequality, contributing to discrimination, or fostering unfair, biased attitudes towards those social groups and strata that, for various reasons, are already marginalized, infringed in their rights and potentially deprived of access to modern medical services. It is also unacceptable, in our opinion, for AI technologies to become grounds for unequal, unfair relations between service providers and patients, or between the companies and governments that create and deploy artificial intelligence technologies and those that only use or rely on them.

Finally, we note the principle of responsiveness, which requires designers, developers and users of AI-based systems to evaluate the applications they use systematically and transparently. This will determine whether these systems can adequately perform their functions and meet the expectations and requirements imposed on them by various social groups. Responsiveness also requires that AI technologies be designed in such a way that, on the one hand, the environmental consequences of their use are minimized, in compliance with global requirements to reduce the anthropogenic impact on the environment, ecosystems and the Earth's climate, and, on the other hand, their energy efficiency, ergonomics and ease of use are increased.

Conclusion

In conclusion, we emphasize once again the twofold and contradictory influence of AI technologies on the life of modern society. These technologies are increasingly used in various fields: in finance, manufacturing, services, education and, finally, medicine. They certainly open up new opportunities in the provision of vital medical services; they help to synthesize new medicines and diagnose diseases. But at the same time, new AI technologies have a kind of anaesthetic, narcotic effect on people: people imperceptibly find themselves in the role of assistants in the service of automated systems, which inevitably erodes the individual's personality and self-identification. Yet even the most advanced technologies should not displace the human being. He is still the centre of being and the subject of the historical process, setting goals, shaping and directing the development of technologies, and using their capabilities for the benefit of society and the solution of its many problems. Therefore, a person today must realize his greatly increased responsibility when deciding on the introduction of AI technologies into the practice of social life. A person should be aware of the consequences of their use and of the transformative impact they can have on the relationship between doctor and patient, on their status, and on their rights and obligations.

References

  • Alder Hey Children’s NHS Foundation Trust (2017). Welcome to Alder Hey – the UK’s first cognitive hospital. https://alderhey.nhs.uk/login

  • Alekseeva, A. (2017). Iskusstvennyy intellekt v meditsine. XXI vek [Artificial intelligence in medicine. XXI century]. https://22century.ru/popular-science-publications/artificial-intelligence-in-medicine

  • Bedi, G., Carrillo, F., Cecchi, G. A., Fernández Slezak, D., Sigman, M., Mota, N. B., Ribeiro, S., Javitt, D. C., Copelli, M., & Corcoran, C. (2015). Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia, 1, 15030.

  • Belyaletdinov, R. R., Grebenshchikova, Ye. G., Kiyashchenko, L. P., Popova, O. V., Tishchenko, P. D., & Yudin, B. G. (2014). Sotsiogumanitarnoye obespecheniye proyektov personalizirovannoy meditsiny: filosofskiy aspekt [Socio-humanitarian support of personalized medicine projects: a philosophical aspect. Philosophy and Modernity]. Filosofiya i sovremennost, (4), 12–23.

  • Billings, C. E. (1991). Human-Centered Aircraft Automation: A Concept and Guidelines. (NASA Technical Memorandum 103885). Moffett Field, CA: NASA-Ames Research Center. https://archive.org/details/nasa_techdoc_19910022821

  • Bryzgalina, Ye. V. (2019). Meditsina v optike iskusstvennogo intellekta: filosofskiy kontekst budushchego [Medicine in the optics of artificial intelligence: the philosophical context of the future]. Chelovek, 30(6), 54-71.

  • Dawes, T., de Marvao, A., & Shi, W. (2017). Machine learning of three-dimensional right ventricular motion enables outcome prediction in pulmonary hypertension: a cardiac MR imaging study. Radiology, 283(2), 381-390.

  • Esteva, A., Kuprel, B., & Novoa, R. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542, 115-118.

  • Gasser, U., Ienca, M., Scheibner, J., Sleigh, J., & Vayena, E. (2020). Digital tools against COVID-19: Taxonomy, ethical challenges, and navigation aid. The Lancet Digital Health, 2(8), E425-E434.

  • Gómez-González, E., Gomez, E., Márquez-Rivas, J., Guerrero-Claro, M., Fernández-Lizaranzu, I., Relimpio-López, M. I., Dorado, M. E., Mayorga-Buiza, M. J., Izquierdo-Ayuso, G., & Capitán-Morales, L. (2020). Artificial Intelligence in Medicine and Healthcare: a review and classification of current and near-future applications and their ethical and social impact. https://arxiv.org/ftp/arxiv/papers/2001/2001.09778.pdf

  • Hoff, T. (2011). Deskilling and adaptation among primary care physicians using two work innovations. Health Care Management Review, 36(4), 338-348.

  • Illmer, A. (2021). Singapore reveals COVID privacy data available to police. BBC News. https://www.bbc.com/news/world-asia-55541001

  • Leongard, G. (2018). Tekhnologii protiv cheloveka [Technology versus people]. AST Publishing House.

  • Leveson, N. (1995). Medical Devices: The Therac-25. Addison-Wesley. http://sunnyday.mit.edu/papers/therac.pdf

  • Mehta, N., Pandit, A., & Shukla, S. (2019). Transforming healthcare with big data analytics and artificial intelligence: A systematic mapping study. Journal of Biomedical Informatics, 100, 103311.

  • Metz, C. (2020). London AI lab claims breakthrough that could accelerate drug discovery. The New York Times. https://nyti.ms/2VfKkvA

  • Mozur, P., Zhong, R., & Krolik, A. (2020). In coronavirus fight, China gives citizens a color code, with red flags. The New York Times. https://news.abs-cbn.com/business/03/02/20/in-coronavirus-fight-china-gives-citizens-a-color-code-with-red-flags

  • Next-generation Echocardiography is here (2021). http://www.ultromics.com/technology/

  • Oreshnikov, I. M., & Shkerina, T. I. (2017). Filosofskiye razmyshleniya o probleme iskusstvennogo intellekta [Philosophical reflections on the problem of artificial intelligence. History and pedagogy of natural sciences]. Istoriya i pedagogika yestestvoznaniya, (4). https://cyberleninka.ru/article/n/filosofskie-razmyshleniya-o-probleme-iskusstvennogo-intellekta

  • Parasuraman, R., & Manzey, D. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381-410.

  • Povyakalo, A. A., Alberdi, E., Strigini, L., & Ayton, P. (2013). How to Discriminate between Computer-Aided and Computer-Hindered Decisions: A Case Study in Mammography. Medical Decision Making, 33(1), 98-107.

  • Rezaev, A. V., & Tregubova, N. D. (2019). Iskusstvennyy intellekt i iskusstvennaya sotsial'nost': novyye yavleniya i problemy dlya razvitiya meditsinskikh nauk [Artificial intelligence and artificial sociality: new phenomena and problems for the development of medical sciences]. Epistemology & Philosophy of Science, (4). https://cyberleninka.ru/article/n/iskusstvennyy-intellekt-i-iskusstvennaya-sotsialnost-novye-yavleniya-i-problemy-dlya-razvitiya-meditsinskih-nauk

  • Sarter, N. B., & Woods, D. D. (1995). Autonomy, Authority, and Observability: The Evolution of Critical Automation Properties and Their Impact on Man-Machine Coordination and Cooperation. 6th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Man-Machine Systems. Cambridge, MA. https://billhiggins.medium.com/notes-on-sarter-and-woods-autonomy-authority-and-observability-properties-of-advanced-7d4cd444272f

  • Shachar, C., Gerke, S., & Minssen, T. (2019). Is data sharing caring enough about patient privacy? Part I: The background. Cambridge (MA): Bill of Health, Harvard Law, Petrie Flom Center. https://blog.petrieflom.law.harvard.edu/2019/07/26/is-data-sharing-caring-enough-about-patient-privacy-part-i-the-background/

  • Vvedenskaya, Ye. V. (2020). Eticheskiye problemy tsifrovizatsii i robotizatsii v meditsine [Ethical problems of digitalization and robotization in medicine]. Filosofskiye nauki, 63(2), 104-122.

  • Wood, M. (2017). UChicago Medicine collaborates with Google to use machine learning for better health care. At the Forefront: U Chicago Medicine. https://www.uchicagomedicine.org/forefront/research-and-discoveries-articles/uchicago-medicine-collaborates-with-google-to-use-machine-learning-for-better-health-care
