Ensuring Accountability in AI Systems

“There are certain things in the world that once you break you cannot bring back. So, we need to worry about them.”

Arthit Suriyawongkul

Many steps in Arthit Suriyawongkul’s career were taken after events that happened “by accident”, he says. But, with a background in computer science as well as anthropology, he has been building knowledge by connecting the dots between humans and technologies from different perspectives. Today, his research focuses on Artificial Intelligence (AI) accountability.

While there are some high-risk areas associated with AI that “we should avoid” and “prohibit” for the time being, because “we do not have enough scientific knowledge” on what its long-term consequences will be for humans, “there are other areas in which we actually have enough knowledge to deal with it, to control it”, he explains.

“For those areas [we can deal with], we have some standards and we need to make sure that people who operate AI systems follow those standards. And that is the job of auditors”, he says, though admitting that the establishment of such standards at the regulatory level by individual countries is still “in the very beginning” and much auditing is done “case by case”.

Arthit Suriyawongkul first joined UNU Macau as a Visiting Research Fellow, and continues to collaborate with the Institute as a researcher – he is currently working on the “Cyber Resilience among Women CSOs and Women Human Rights Defenders in Southeast Asia” project with UN Women.

As a PhD Candidate at the Science Foundation Ireland Centre for Research Training in Digitally-Enhanced Reality (d-real), based at Trinity College Dublin, Ireland, he is working on automations that can improve the work of human AI auditors – work that is usually linked to the safety aspects of AI, as he explains. One of the reasons for working on these automations is the gap between the speed at which AI systems are being developed and the speed at which human auditors can assess them all.

He gives the example of decisions by public sector bodies. Justice systems include appeal instances to which people can turn if they do not agree with a court’s decision. Work at both levels proceeds at a similar speed, he notes, since in both cases it is performed by humans. However, if AI is used to generate court decisions at the first instance level – while still allowing people to appeal to instances where decisions are made by human judges, because we need to ensure human oversight somewhere in the system – the work speeds will differ and the situation will become unbalanced.

Arthit Suriyawongkul illustrates the idea with potential figures: “If with AI we would be able to make verdicts for 10,000 cases a day, the capacity of the appeal judges would still be around five cases a day. So, we would actually have a huge gap in checks and balances in the system. This means the level of accountability inside the justice system would be lower.”

So, such a system, he points out, is not necessarily “useless”, but does require “some upgrade” to ensure accountability.

“But there are certain things in the world that once you break you cannot bring back. So, we need to worry about them, to be careful about them”, the researcher points out.

Environmental impact assessment mechanisms, he notes as an example, can introduce precautionary measures to avoid consequences such as the destruction of a rainforest, because “a small harm may lead to a harm that is irreversible”. Similarly, he says, “there is a fear that, in some very high-risk areas, AI may create the same kind of effect”.

One such high-risk area is “subliminal AI”, a set of techniques with the potential to manipulate the human mind and alter a person’s behaviour. “If we, as a human society, do not actually have enough scientific knowledge yet of what will happen if we allow AI to do certain activities at a large scale, the current precautionary measure [available] is to prohibit it”, he says.

“So, if an AI auditor, after his/her assessment, feels [a certain use or tool of AI] is just too dangerous and we do not have enough knowledge about it, let’s not go further.”

From software to anthropology, and back

Originally from Thailand, Arthit Suriyawongkul used to love drawing as a child, and dreamed of being an architect. But playing video games in his high school years eventually made him develop an interest in computer science. “I wanted to be a programmer.”

After he finished his undergraduate studies in information technology, he got a job “in a huge company, at that time”, working in a team that was localising software into the Thai language, as well as its algorithms and counting systems – like collation order, calendar, time system or currency – “to fix the needs of local users”.
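To make concrete what such localisation touches, here is a minimal Python sketch – a hypothetical illustration, not the actual system he worked on – of locale-aware collation and currency formatting. It assumes a Thai locale such as th_TH.UTF-8 is installed on the machine; if it is not, setlocale raises locale.Error.

import locale

# Hypothetical illustration: switch to a Thai locale (assumed installed).
locale.setlocale(locale.LC_ALL, "th_TH.UTF-8")

# Collation: sort Thai words in local dictionary order rather than by
# raw Unicode code points.
words = ["ขนม", "ไก่", "กา"]
print(sorted(words, key=locale.strxfrm))

# Currency: format an amount with the local symbol and digit grouping.
print(locale.currency(1234.5, grouping=True))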

This first “accident”, or experience of getting “more involved with languages and cultures”, made him start reflecting on the linkages between software engineering or computer science and “the societal, cultural settings” of the users, and on this “other set of knowledge we should be aware of”.

An eager learner, he moved on to pursue a master’s in Cognitive Science and Natural Language.

The next “accident”, he says, was that, after completing his master’s, he had a chance to work in academic research, this time on “automatic summarisation”. “It was about allowing more people to get access to information”, as abstracts or summaries “actually help people decide whether they would like to invest their time to read further books or papers or articles”.

Around that time, he spent many of his free hours editing the Thai Wikipedia. “People were expecting the Internet to bring a lot of knowledge to them, but Internet connection alone could not ensure that. You needed content. So, me and some other volunteers started to translate content.”

This experience developed over a period of political tension in his home country and new regulations regarding the online space. He and others became aware that part of the online content was being censored by authorities. In 2008, he and his colleagues co-founded the Foundation for Internet and Civic Culture (also known as Thai Netizen Network), an NGO promoting civil rights in the digital environment.

Working closely with some journalists, activists and lawyers, the computer scientist became increasingly aware that “the Internet is not only about computers connecting to other computers”: “Behind the screen, there are users; and they are humans.” So, he moved on to study for a second master’s, this time in Anthropology. “It felt like a natural step for me in order to understand the human culture.”

So, after encountering “language” as a potential “barrier” in the relationship between digital technology and its users, and “time” as a second one, Arthit Suriyawongkul identified “Internet censorship” as a third obstacle.

Looking at the world today, he admits that not only has his understanding of the issues changed over time, but so have society and even Internet infrastructures. Nowadays, for many people in some countries, he says, “Facebook is the Internet” – they go online mainly through their mobile phones and “they know Facebook before they know websites”. Another example he gives is the landscape of e-mail service providers, which, “globally”, is now concentrated in a handful of companies.

At the same time, he adds, web encryption was not as widespread back then as it is today. “It’s actually getting harder for local ISPs [Internet Service Providers] to censor the Internet”, but the problem now, he points out, is that “because of the concentration of services in very few big platforms, it is actually easier for those platforms to censor the Internet”. Today, he notes, search engines and social media giants “have more power in terms of information control than a lot of governments”.

And while legal instruments – such as constitutions – are able to hold a government accountable in its use of power, a similar mechanism is lacking for these private companies, the researcher notes. He adds that there is now a debate on “digital constitutionalism”, which deals with the need for a framework that ensures checks and balances and protects fundamental rights in the digital society. “It is a challenge.”

Though Arthit Suriyawongkul says his interests change over time, a main theme has always been “allowing people to get access to knowledge and fulfil their curiosity, their autonomy”.

As he loves to “find answers”, he feels “privileged” that, as a researcher, he can work on the problems he is interested in. Although he enjoys the individual reflection process very much, he highlights the importance of talking with co-workers, even if remotely. “Colleagues”, he says, are the one thing he cannot work without.
