
AI and the Future of Society: Thinking About AI Means Thinking About the Society You Want to Live In

by Yasir Aslam

In recent years, advances in AI research have greatly changed our lives, the way we work, and even the shape of society itself. Automation and prediction by AI have made the world more convenient and prosperous, but at the same time, social problems such as fake news, surveillance society, and discrimination caused by AI have been pointed out.

Against this backdrop, discussions on the ethical, legal, and social aspects of AI have become active in Japan and other countries, and guidelines for the development and use of AI are being drawn up one after another. Companies, too, are expected to practice AI governance, and some are formulating their own AI usage policies and guidelines.

Table of Contents

  • Issues are “data preparation” and “over- and underestimation of AI technology itself”
  • Think about the “vision” of the society you want to live in | Discussions that ensure diversity
  • AI Governance Ecosystem|A platform that can coordinate the supply chain is necessary
  • Thoughts put into publication|So that not only companies but also individuals can raise their voices
  • In conclusion

Issues are “data preparation” and “over- and underestimation of AI technology itself”

Mr. Ema specializes in science, technology and society studies, leads discussions and research in Japan, and is active internationally on AI governance, including serving as a committee member at international conferences.

We asked Mr. Ema about the current state of AI utilization in modern society.

ーーMr. Ema, what inspired you to become involved in this field of AI?

Mr. Ema: Since my undergraduate days, I have majored in a field called science and technology studies, or science, technology and society (STS), with a particular research focus on information technology and social issues. STS is a field that not only conducts academic research on science, technology, and society, but also engages in social activities such as science communication and science and technology policy.

In April 2012, after receiving my doctorate, I joined the Hakubi Center at Kyoto University. In January 2014, the cover-illustration controversy (*) arose at the Japanese Society for Artificial Intelligence (JSAI). While discussing the problem with people at my university who were involved with JSAI, I realized that it was not an issue closed within information technology; rather, it stemmed from a lack of contact with researchers in fields such as philosophy, ethics, and the social sciences. That realization led us to start discussions across disciplines.

 

(*)
The Japanese Society for Artificial Intelligence (JSAI) was criticized over the cover illustration of the January 2014 issue of its academic journal.

 

ーーWhen considering ethical and social issues, how do you see the current state of AI utilization?

Mr. Ema: Recently, DX (digital transformation) has become a buzzword in Japan, and I think interest in AI utilization is increasing, but my impression is that the underlying data problems have not been solved before we even get to AI.

There seem to be many problems: bias in data and algorithms, the lack of data in the first place, the absence of systems for sharing information, and the lack of mechanisms to protect privacy and security when data is shared.
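To make the bias problem mentioned above concrete, here is a minimal sketch of one simple, commonly used fairness check, the "demographic parity difference": the gap in positive-outcome rates between two groups. This is a generic illustration with invented data, not part of Mr. Ema's discussion or any specific framework.

```python
# Minimal sketch of one notion of algorithmic bias: the gap in
# positive-outcome rates (e.g. loan approvals) between two groups.
# All data below is invented purely for illustration.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved (75%)
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2 of 8 approved (25%)

gap = demographic_parity_diff(group_a, group_b)
print(f"demographic parity difference: {gap:.2f}")  # → 0.50
```

A gap of zero would mean both groups receive positive outcomes at the same rate; real audits use many such metrics, each capturing a different, sometimes conflicting, notion of fairness.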

Also, I think the image of what role AI is expected to play, and what role it should play, is not well conveyed, so the technology ends up being both overestimated and underestimated.

Specifically, some people think AI becomes omnipotent once it is introduced, but without appropriate data there are things AI simply cannot do. Conversely, there is also the problem of people being too afraid to use it at all, imagining it can do anything a human can.

Beyond the technology itself, unless we properly design interfaces through which people can interact with it, even highly accurate technology may not be accepted or used. In that sense, awareness and understanding of what AI technology can and cannot do are still scattered, so I think there are quite a few areas where social implementation is not progressing.

Think about the “vision” of the society you want to live in | Discussions that ensure diversity

AI, which is also applied to solving social issues, is expected to contribute to realizing ESG goals and the SDGs. There is demand for uses of AI technology that consider environmental issues, the aging society, and future generations.

ーーHow do you think about the relationship between ESG, SDGs, and AI governance?

Mr. Ema: This is one of the messages of the book I published this time. In its opening pages, a map introduces the values, such as freedom, rights, and stability, that each country or region emphasizes.

Discussing AI is also discussing society and the values that society seeks, so it is highly compatible with the goals set by the SDGs. The concepts of Responsible AI and Sustainable AI have also been advocated. It is important to select and pursue such a vision, but the challenge is how to put the philosophy into practice. This is because values such as responsibility, sustainability, and fairness have different answers depending on who is discussing them and in what context. To add one more point, the items listed in the SDGs are themselves visions that contain trade-offs.

Also, which values are emphasized changes with the times. Our predecessors kept raising their voices about issues of race and gender until these became recognized social issues. What matters is how to derive a solution suited to the times, and whether you can explain your own vision convincingly.

Furthermore, I think it is also very important to question whether the vision you stand by is itself biased by the situation and context you are in, and to look closely at who is creating that vision.

ーーPrinciples for AI utilization are being created all over the world, especially in developed countries. What kind of discussions are held in creating the vision? Also, are there any problems during the discussion?

Mr. Ema: The “Social Principles of Human-Centered AI” published by the Japanese Cabinet Office advocate values that can be shared with the principles of other countries, regions, and international organizations. However, compared with principles from Europe, the United States, and elsewhere, there is a problem: the group that created the principles lacked gender balance and diversity. This is despite the fact that “diversity and inclusiveness” is an underlying tenet of the Principles. In other words, the idea does not match the practice. It is the same structure as events promoting environmental awareness that hand out stacks of paper materials and plastic bottles, social gatherings where food loss occurs, or discussions about an inclusive society held in a building that wheelchairs cannot enter. There is an unconscious bias that we do not notice until someone points out such inconsistencies and contradictions.

What is discussed is important, but it is also necessary to question from what position a vision is being articulated. For example, does the vision lean toward developed countries? We need to consider whether the ethics of AI are being created by developed countries alone, or whether they also incorporate the discussions of people in the so-called Global South, such as Africa, the Middle East, and South America.

Currently, when the AI efforts of each country and region are evaluated at international conferences, evaluators look not only at the written content, such as whether “diversity is advocated,” but also check whether diversity is ensured among the people who discuss and create the principles. The emphasis is on process rather than results.

AI Governance Ecosystem|A platform that can coordinate the supply chain is necessary

ーーIt is sometimes said that Japan focuses on “defensive” governance and lacks “offensive” governance. Mr. Ema, what do you think about AI governance in Japanese companies?

Mr. Ema: I think “governance” is generally understood as the framework of an organization such as a company, that is, corporate governance.

Currently, the big companies actively working on AI governance and ethical issues, such as GAFA, are platformers, but they are also huge B2C (business-to-consumer) companies. As a result, they can communicate with consumers fairly directly, and when problems or changes occur, they can respond relatively quickly. From the consumer’s point of view, it is easy to see which company provides the service or system, and therefore easy to identify where responsibility lies.

On the other hand, considering Japan’s industrial structure, there are many B2B (business-to-business) companies (see the image below). Not only small and medium-sized businesses and startups, but even large corporations these days earn their main profits from B2B rather than B2C. As a result, supply chains become very long, spanning the companies that build AI systems, service providers, vendors, and end users. For example, when a user contacts a service provider about a problem, the response may amount to, “We don’t understand how this system works either, so we’ll just restore it to its original state.” There is also the problem that AI itself is a black box: it is difficult to understand why it produces certain results, which makes quality assurance extremely difficult.

These two problems overlap, and there are cases where no single company can identify the cause or take responsibility when something goes wrong with an AI system or service. This is a problem Japanese companies cannot avoid thinking about.

This figure (image below) is a bird’s-eye view of the “AI Governance Ecosystem” created by Mr. Matsumoto of Deloitte, who served as vice chair of the “AI Governance and its Evaluation” study group of the Japan Deep Learning Association. The “AI governance ecosystem” is a proposal that AI governance must be considered as an ecosystem, in cooperation with the various bodies that shape governance, such as external institutions, monitoring organizations, insurance, and audits. On the study group’s website, topics related to each item in the figure are summarized in both Japanese and English, so please take a look.

Discussions on the standardization of AI governance have also begun. IT governance builds on corporate governance, but there are also arguments that cross-organizational governance will become important in the future. I find this an interesting international trend.

ーーPlease briefly explain the risk chain model, based on your work so far.

Mr. Ema: In June 2020, the University of Tokyo Institute for Future Initiatives published a policy proposal on the risk chain model. The website now has a how-to guide and some examples. The study group is run as a joint research project between the University of Tokyo and Deloitte Tohmatsu Risk Service Co., Ltd., but we also collaborate with various other companies and organizations.

The original problem was how to put into practice principles such as transparency, fairness, and trust in AI. I believe that sharing, visualizing, and accumulating not only the results of discussions but also the process behind them is one way to ensure transparency. Ultimately, the goal is to build a database of various cases that anyone can refer to. Having worked across different fields, I see creating a framework that enables such discussion as one of my missions.
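As a purely hypothetical sketch of what one entry in such a shared case database might look like: the structure and field names below are my own assumptions for illustration, not part of the actual risk chain model or its guide.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical record for one AI risk case. The field names are invented
# for illustration; the point is capturing the process, not just the result.
@dataclass
class RiskCase:
    system: str                 # e.g. "recruitment AI"
    risk: str                   # the risk under discussion
    stakeholders: list = field(default_factory=list)
    mitigation: str = ""        # agreed countermeasure, if any
    discussion_notes: str = ""  # how the conclusion was reached

case = RiskCase(
    system="recruitment AI",
    risk="training data reflects past hiring bias",
    stakeholders=["developer", "vendor", "HR department"],
    mitigation="audit training data before deployment",
)

# Serializing to JSON makes a case easy to share, compare, and accumulate.
print(json.dumps(asdict(case), indent=2))
```

Recording who was involved and what was discussed, alongside the outcome, is what would let later readers compare similar cases rather than reinventing each discussion.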

For example, when a vendor and the team building an AI system discuss things together, the builders need to explain not only what has changed, but also which aspects they are paying attention to in creating the system. We need a common platform for that.

Various organizations have already published collections of best practices, with examples of AI being used well in practice as well as failures. However, since it is difficult to discuss, say, chatbot problems and medical-AI problems in the same category, it would be good to have a framework or platform for comparing and examining similar cases. That is what started the discussion.

ーーHow would you like to develop the discussion of the risk chain model in the future?

Mr. Ema: We have created a risk chain model guide, which explains how to use the model and how to incorporate it into management and discussion.

This guide is not the end of the story; we plan to use it to gather feedback and accumulate more and more cases. By doing so, risks can be consulted in advance and used as a reference for what needs to be discussed with developers at each stage. In July 2021, we held an online event applying the risk chain model to “recruitment AI” as an example. A report and summary video are being prepared and will be posted on the website from autumn.

ーーSo as recognition spreads, a framework will emerge that can be used in the practical phase.

Mr. Ema: That’s right. However, it should be noted that the existence of a framework does not guarantee the fairness or safety of AI. It is just one way of thinking; the risks surrounding AI change from moment to moment, and judgment criteria differ depending on each person’s values.

Since this is not a consulting project but a research project conducted at a university, I would like to spread it as a tool that everyone can use.

Thoughts put into publication|So that not only companies but also individuals can raise their voices

In May 2021, Mr. Ema published “AI and Society through Pictures and Diagrams: How to Relate to Technology that Opens Up the Future” (Gijutsu Hyoronsha).

It uses many illustrations and explains everything from the basics of AI technology to the social issues surrounding AI in a way that beginners can easily understand.

ーーPlease tell us about the thoughts you put into the book. Who would you like to read it?

Mr. Ema: Due to the revision of the national curriculum guidelines, a new subject called “Information” will be added to the standardized university entrance exam from 2025. People involved in education, and students themselves, will have more opportunities to come into contact with AI, so I hope the book will be used as supplementary teaching material. However, since it also introduces the AI governance ecosystem and the risk chain model I have discussed here, it is suitable not only for junior high and high school students but also as an introduction for people of various backgrounds, such as those involved in AI system sales, legal affairs, and public relations at companies.

As I wrote in Chapter 6 of the book, the current situation is that companies will not pay for environmentally friendly systems, or systems that consider fairness, unless customers and consumers demand them. That is why it is important for consumers and citizens to raise their voices and create movements. There are things AI can and cannot do, so I would like both companies and individuals to notice these problems, shed their unconscious biases, and be able to raise questions.

In conclusion

AI governance is likely to be discussed even more actively as the social implementation of AI progresses. How to involve as many people as possible and create a world where no one is discriminated against by AI is a challenge for Japan and for the world as a whole, as IT advances rapidly.

