
The Secretary-General of the International Telecommunication Union (ITU), Houlin Zhao, opened the third "AI for Good Summit" in Geneva in May 2019 with the plea, "let's turn data revolution into a development revolution". This post examines health-related themes at the summit, and the complexities arising from an AI-enabled 'development revolution'.

In his opening address, Zhao highlighted the health benefits of AI, especially in the fields of breast cancer, skin disease, and vision loss. He and others spoke of the rapid spread of internet accessibility, with 'now over half the world' using the internet, through an apparent 28 billion devices. Speakers also agreed that the accelerating pace of AI development and use is outstripping policy development and methods to control and protect everyone's data.

The ITU-sponsored AI for Good initiative is framed around achieving the Sustainable Development Goals (SDGs). ITU has an AI Repository, to which projects can be uploaded to demonstrate commitment to 'good' and to nominate which SDGs they address. This technocratic engagement with the SDGs is reductive: it co-opts the SDGs to promote the AI projects, rather than demonstrating that the projects might reduce inequalities through the SDGs' underlying human rights principles and leave 'no one behind'.

It was therefore interesting to observe that few of the presentations throughout the four days actually mentioned the SDGs. The focus remained more squarely on the positive outcomes that would result from uptake of the AI-driven technologies, but with an openness about many of the challenging issues that developers and project implementers are facing.

Building trust and transparency were themes repeated throughout all sessions; for many, this is a pragmatic response to the need to make more data available – data being the fundamental component needed to drive AI. In the health sessions, there was occasionally genuine incredulity that patients can be reluctant to share their data when anonymity is promised. Such AI advocates urged those in health sectors to promote AI as a public good. But this is a contestable framing of AI when most of its products are owned by the private sector, and there is no certainty as to how patient data will be used, who will be granted access to its benefits, and to whom profits will be distributed. Patients in the UK have already experienced breaches of their data confidentiality and privacy when the Royal Free partnered with Google's DeepMind, as Amy Dickens and Linsey McGoey have written.

Yet public-private partnerships (PPPs) abound in the AI-based 'development revolution'. Industry and government representatives spoke of the partnerships as so obviously symbiotic that there was no space to discuss the arrangements. Some countries, namely the Bahamas, Tunisia, and Zimbabwe, had MPs in attendance to lure developers, believing that AI would give them a chance to 'catch up'. Zimbabwe's ICT minister stated, "We are open for business". An MP from Tanzania outlined the policy and infrastructure development the country has undertaken to prepare for AI.

With the summit now in its third year, some speakers suggested there was a greater openness about the risks associated with AI. Wendell Wallach of Yale University's Interdisciplinary Centre for Bioethics claimed he was the only speaker at the first summit to talk about risks and unintended consequences. At this third summit, by contrast, not only did most panels discuss risks and challenges, there was a full workshop on unintended consequences. Wallach attributed some of this shift to Brexit, the Trump election, and the associated general awareness of cyber insecurities; other presenters have, over the past three years, experienced unanticipated problems and consequences as their projects have been implemented.

The discussions on risks and challenges remained firmly focused on issues that fall under the purview of civil and political rights: privacy, data ownership, and security. Even when speakers discussed PPPs, the risks raised were around partners' expectations not being met; panellists did not address the risks of a future in which there is private ownership of the fundamental tools necessary for society to function – our economic, social and cultural rights.

However, Thomas Wiegand, chair of the ITU-WHO focus group on AI in health, called for research into what happens after AI is deployed, to identify its real impact. He suggested, for example, that an iPhone app that detects skin lesions could result in half the population of Germany turning up at their GPs' clinics, thus overloading the health system. But there are real challenges in assessing the impact of one tool within a whole system; if an AI app performs a diagnostic function, then patient improvement depends on much more than a correct diagnosis – a quality, accessible treatment plan is also needed.

This is illustrative of a systems approach to health – something that can easily be overlooked by AI developers. For example, several keynote speakers praised AI company Baidu's ocular fundus screening tool, which is presented as a way to overcome the shortage of ophthalmologists in most low-resource settings and to provide rapid diagnosis of eye disease. But there is little point in getting a diagnosis at all if there is no follow-on treatment available; the very shortage of ophthalmologists means that much of the diagnosed ocular disease cannot be treated.

One of the largest threats to health rights arises from the ownership of the fundamental building blocks of health technology: will the owners ensure the benefits of these new technologies are equally accessible to all? The biggest players in the Big Data/AI research domain include Google, Facebook, Microsoft, Apple and IBM, and although Microsoft was represented at the summit, questions about ownership of data and AI tools were not raised.

Hadas Bitran of Microsoft Israel outlined Microsoft's interest in AI for health, covering DNA mapping and genome sequencing; the use of 'ambient technology' to capture the clinical records arising from patient consultations; and healthcare bots. Microsoft partners with 168,000 healthcare organisations in 140 countries. This gives the company an enormous amount of patient data from which to build its AI technologies – and from which it must necessarily return a profit. But issues around the responsibilities Microsoft has to the individuals, hospitals, and countries from whom it mined all this information, and the price at which it would sell the healthcare tools back, again did not feature in the panels.

Alvin Kabwama Leonard of Cognitive AI Tech Ltd discussed AI technology made in partnership with Google's DeepMind to reduce maternal mortality through the use of rapid urine test results. Partnership and data ownership details were not provided. This urine test initiative returns to the issue raised above about focusing on only one aspect of a health system. It is correct that urine tests can detect raised protein levels and that, with good management, early detection of pre-eclampsia can save a woman's life. But there are multiple issues to consider before embarking on deployment of an AI (or any other) programme to provide urine tests. From a right-to-health perspective, these include:

  • an understanding of the whole health system including the availability of trained health workers to test and treat women, especially those most at risk of maternal mortality;
  • knowing the main causes of maternal mortality so that finite resources can be allocated to the most appropriate care – and that may not be AI programmes;
  • knowledge of the supply chain for drugs so treatment is on hand for women who need it; and
  • clarity around contracts and partnerships with good and transparent governance, and awareness of the financial implications both in short and long term, especially as PPPs have a history of being far more costly to governments than ever anticipated.

While the summit was vocal about the risks inherent in AI, these were mainly technological risks that threaten privacy and security. These are of course high-stakes risks, and when Eleanore Pauwels of the United Nations University Centre for Policy Research discussed the threats arising from AI converging with cyber attacks – harnessing the internet of genomes, simulating fake biological data, precision biology attacks, data poisoning and weaponisation – there was no doubting her concern that such events could completely destroy government institutions, including hospitals. But these are technological threats, and they may be containable with technological solutions.

By contrast, handing public data over to the private sector, so that it makes and owns the tools that enable societies and institutions everywhere to function, is an irreversible step towards abrogating state responsibility and ceding power to the private sector. This is a continuation of the neoliberal agenda, under which inequalities have grown worldwide for the past 40 years. While AI was shown in so many presentations at this summit to be developing tools that are 'good', a human rights perspective must surely ask: 'good for whom?'