23 Feb 2021 23:00

IDFI: AI Systems in Georgian Public Sector at an Early Stage


The introduction of artificial intelligence systems in the Georgian public sector is at an early stage of development, although the private sector already offers many successful examples of the technology's use, such as remote verification systems, automatic document identification systems, communication automation programs, and many other tools.
“This sector has become particularly profitable in the face of the pandemic due to increased demand for remote services. Therefore, the potential for large-scale use of artificial intelligence in terms of increasing efficiency and cost-effectiveness in various processes has already become evident. However, there are also risks that can arise as a result of systemic misuse, technical glitches, and mismanagement of personal data,” reads the recent report of the Institute for Development and Freedom of Information (IDFI) on Artificial Intelligence: International Tendencies and Georgia - Legislation and Practice. 
Artificial intelligence is not just another electronic assistance tool. According to the authors of the report, “it substantially increases the governing capacity of the state, thereby increasing the temptation for its illicit use. This risk is particularly high in developing democracies. The study revealed that law enforcement agencies are the only area in the public sector where the process of introducing artificial intelligence is stable, which serves as an indirect indication of the imminent nature of these risks.”
The study highlights the absence, in the target agencies, of normative acts regulating artificial intelligence systems and of documents defining ethical norms. For the public sector to make the most of these technologies, access to information and transparency regarding the systems are critically important, alongside technological readiness: the public must be informed about how these systems function, risks of bias must be excluded, and external observers must be able to discuss possible shortcomings so that the systems can earn a high degree of trust. The study has shown that information on the use of artificial intelligence is so scarce that it is difficult not only to control its use but also to exercise the right to a fair trial.
The public sector has two major roles to play in the development of artificial intelligence, and it, therefore, faces a dual challenge, reads the report. “Firstly, the public sector should promote the formation of a national ecosystem for national startups and industry aimed at the exploitation of AI, attract investors and donors, use AI applications in different sectors, and achieve socio-economic growth and prosperity through artificial intelligence. 
Simultaneously, for the development of artificial intelligence, the government should create a regulatory framework that balances and reduces the threats, risks, and challenges associated with artificial intelligence systems; one that provides effective mechanisms for enforcing the adopted legal and ethical standards. It is also important to outline procedures for auditing the operations of artificial systems, to define responsibilities, and to make the results of such inspections available to the public. It is important to take appropriate steps in this direction from the very beginning of the introduction of artificial intelligence”.
IDFI requested information about the IT systems, software, and artificial intelligence systems created and used by the 54 public institutions identified at the initial stage. Responses were received from only 36 agencies, and only 12 of these provided information on the software they used, while some informed IDFI that they did not use artificial intelligence systems.
A significant number of the target agencies did not respond to the request for public information at all, while most of the responses received were limited to a statement that the agency did not use artificial intelligence. Such responses have, on several occasions, raised the suspicion that these agencies took advantage of the ambiguity of the term "artificial intelligence" to avoid disclosing information.