
Artificial intelligence is rapidly reshaping Europe’s health-care systems, offering faster diagnoses, improved patient services, and relief for overburdened medical staff. But a new report from the World Health Organization (WHO) warns that the region is deploying these tools without the safeguards needed to protect patients, health workers, and public trust.
The WHO analyzed 50 countries across Europe and Central Asia and found major gaps in regulation, funding, and strategic planning for AI in the health sector.
AI Use Is Expanding Rapidly—But Oversight Isn’t
AI tools are now widespread in hospitals and clinics across the region. According to the report:
- Half of the countries use AI chatbots to assist patients.
- 32 countries use AI for diagnostics, especially in imaging and disease detection.
- Many nations are exploring AI for pathology, mental health support, patient screening, data analysis, administration, and workforce planning.
Examples include:
- Spain, which is testing AI to improve early detection of serious diseases.
- Finland, where AI tools are being used to train the health-care workforce.
- Estonia, which uses AI for large-scale data analysis.
Despite this momentum, oversight is lagging far behind.
Only Four Countries Have a Health-Care AI Strategy
Although 26 countries have identified priority areas for AI in health care, only 14 have allocated funding to support AI projects.
More concerning, only four countries—Andorra, Finland, Slovakia, and Sweden—have developed a national strategy specifically focused on AI in the health sector.
The WHO warns that this leaves most nations without clear guidance on safety, ethics, and long-term planning.
“AI’s Promise Won’t Be Realised Without Protection,” WHO Says
Dr. Hans Kluge, WHO Regional Director for Europe, said the rapid expansion of AI must be matched with strong safeguards.
“AI is on the verge of revolutionising health care, but its promise will only be realised if people and patients remain at the centre of every decision.”
—Dr. Hans Kluge
He warned that without solid governance, Europe risks deepening health inequalities and undermining public confidence.
Data Bias, Safety Risks, and Legal Gaps Raise Red Flags
AI systems depend on massive datasets, which can be:
- Biased
- Incomplete
- Flawed
These weaknesses can produce dangerous medical errors, including missed diagnoses or incorrect treatment recommendations.
The WHO notes that many countries do not yet have laws that clarify:
- Who is responsible for AI-driven medical mistakes
- How patients can seek redress
- How data should be protected during AI training and deployment
Health Workers Are Unsure How to Use AI
The lack of clear standards and training is also creating hesitation among health-care workers.
Dr. David Novillo Ortiz, who leads WHO’s work on AI and digital health in Europe, said frontline staff need confidence in the systems they use.
“European countries must ensure AI systems are tested for safety, fairness, and real-world effectiveness before they reach patients.”
—Dr. David Novillo Ortiz
Without established rules, many doctors and nurses may resist using AI tools—even when they could improve outcomes.
WHO’s Recommendations for Safer Health-Care AI
To close the gap, the WHO calls on European governments to:
- Align AI development with public health goals
- Strengthen data privacy and legal protections
- Train health-care workers to use AI responsibly
- Improve transparency around how AI tools are used
- Set clear rules on accountability for AI-driven decisions
- Invest in national AI strategies and long-term funding
The organization emphasizes that AI can transform health care, but only if countries build systems that are safe, fair, and centered on patient rights.
