We know that predictive models developed by artificial-intelligence (AI) and machine-learning (ML) algorithms are built from data. And, because we know how this data is used to build AI-based models, the main target of AI ethics is addressing how AI models become biased through the quality and the quantity of the data used to train them.
This first part of a two-part series discusses the nonengineering applications of AI and ML and how human biases such as racism and sexism can be incorporated into AI models through the inclusion of biased data during the training of the algorithms. Because engineering applications of AI and ML are used to model physical phenomena, Part 2 of the series will discuss how AI ethics can determine and clarify how the human biases of traditional engineers—assumptions, interpretations, simplifications, and preconceived notions—can be revealed in the engineering applications of AI and ML.
Introduction
The reason nuclear weapons have not destroyed our planet (at least so far) has much to do with worldwide treaties and agreements on how to handle nuclear bombs. In the same vein, a similar set of worldwide treaties and agreements about AI should eventually be made by politicians around the world. One of the main reasons many individuals worry about how AI will affect our world in the next few decades concerns the governments of several countries. Some governments are using this technology for their own objectives, objectives that are a function of their views, beliefs, and understanding of democracy and of their intention to become a world leader based on how AI can serve them. The ethics of AI lately has become an important topic that must be understood by individuals who already are, or are becoming, interested in AI and ML algorithms.
Since the mid-2000s, when most people were first exposed to AI-based image recognition, voice recognition, facial recognition, object recognition, and autonomous vehicles, interest in AI and ML has increased significantly. As a new science and technology, AI and ML will change a great deal in the 21st century. AI has become one of the more interesting technologies with which people, companies, and academia are becoming involved on a regular basis.
For example, banks recently have started using AI and ML models to make the first steps in decisions about granting loans. Human resources departments of large companies also are using AI and ML models to help decide whom to hire. From an engineering point of view, some operating petroleum companies have been interested in using AI to develop fact-based reservoir-simulation models.
Banks use AI models to minimize the number of applicants whose characteristics they must evaluate in detail, while companies use AI-based models to evaluate the large number of applicants who have applied for employment and then significantly reduce the number of applicants on whom the human-resources professionals must concentrate. Petroleum companies' objective in using AI-based reservoir simulation is to enhance their oil and gas production. The way AI and ML have been used by banks and companies to make loans or to hire people has made the ethics of AI an incredibly important topic to understand. The same is true for petroleum companies regarding the engineering application of AI and ML.
The ethics of AI is important to engineers and scientists who have become enthusiastic about using this technology to solve engineering-related problems. While AI ethics in engineering may not have much to do with politics (at least in this article), it is affected to a large extent by (a) a lack of scientific understanding of AI, (b) a lack of success in realistic problem solving through engineering applications of AI, and (c) the incorporation of traditional engineering biases (e.g., assumptions, interpretations, simplifications, and preconceived notions) into AI-based models of physical phenomena.
Currently, some people and companies who claim to use engineering applications of this technology, after failing to build an AI-based model that is free of human biases, include a large amount of human bias so that they can solve problems using ML algorithms. Human biases in engineering have much to do with how mathematical equations are built to solve physics-based problems.
Data: The Foundation of AI-based Modeling
AI uses ML algorithms to develop tools and models to accomplish its objectives. The development of AI-based models has a lot to do with data. The quality and quantity of the data are major factors in how the AI-based model will behave. As was mentioned in the previous section, banks have started using AI and ML models to make the first step in decisions about granting loans. The AI models usually are developed using historical data provided by the loan applicants along with previous loan-payment results. The proportion of positive and negative loan payments, as well as the input data from the loan applicants such as gender, ethnicity, credit, location, and income, will determine the quality of the AI-based model developed for the loan decisions. Such models also can include certain characteristics determined by the bank management.
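As a minimal sketch of how such historical loan data shapes a model, the toy example below "trains" on invented records in which one applicant group was historically approved far more often than another. The group labels, records, and 50% approval threshold are all assumptions made up for this illustration, not a real bank model:

```python
# Toy illustration: a "model" that memorizes historical approval rates
# per applicant group. The data is invented; group "A" was approved far
# more often than group "B" in the historical decisions.
from collections import defaultdict

history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(rates, group):
    """Approve only if the group's historical approval rate exceeds 50%."""
    return rates[group] > 0.5

rates = train(history)
print(predict(rates, "A"))  # True:  approved purely because of group history
print(predict(rates, "B"))  # False: otherwise-identical applicant rejected
```

A real model would use many more features, but the mechanism is the same: nothing in the algorithm corrects for the bias already present in the historical outcomes.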
The same general approach is applicable to the AI models used by human resources departments of large companies to make the decision about whom to hire. Such models also are developed using existing data from multiple companies about the applicants, as well as the quality of the employees who have been hired in the past. Other applications of AI that make AI ethics highly important include face recognition, face detection, face clustering, face capture, and face matching. Such technologies are used by mobile phones, security, police, and airports.
In the engineering application of AI, the characteristics of the data used for model development, including its quality and quantity, affect the quality of the AI-based models. The engineering application of AI and ML is the use of actual measurements and actual physics-based data to model physics, rather than the use of mathematical equations to build models of physical phenomena. Traditionally, over the past few centuries, modeling physics at any given time reflected engineers' and scientists' understanding of the physical phenomena being modeled. As scientists' understanding of a physical phenomenon evolves, so do the characteristics of the mathematical equations used to model that phenomenon.
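To illustrate the data-driven approach in the simplest possible terms, the sketch below fits a straight line directly to a handful of measurements instead of starting from an assumed governing equation. The variable names (pressure, flow rate) and all values are hypothetical and chosen only for illustration:

```python
# Hypothetical measurements: the relationship is learned from data,
# not imposed by a predefined physics equation.
pressure = [100.0, 200.0, 300.0, 400.0]   # invented input measurements
flow_rate = [10.2, 19.8, 30.1, 39.9]      # invented measured responses

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to the measurements."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

a, b = fit_line(pressure, flow_rate)
print(f"flow_rate = {a:.4f} * pressure + {b:.4f}")
```

The contrast with traditional modeling is that here the measurements alone determine the model; whatever systematic error or bias the data carries is carried straight into the fitted relationship.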
AI Ethics Addresses the Bias in AI-Based Modeling
The quality and quantity of the data used to build an AI-based model determine whether any biases have been incorporated into the model. The objective of AI ethics is to examine the quality and quantity of the data used to build the model and to identify whether any bias has been (intentionally or unintentionally) incorporated into the model through the data.
The way AI and ML have been used by banks and companies to make loans or to hire has made the ethics of AI an incredibly important topic. The same is true of the engineering applications of AI and ML. As long as realistic, nontraditional, statistics-based ML algorithms are incorporated, the quality of the AI-based model is based purely on the quality and the quantity of the data. The data used to develop the AI-based model, therefore, completely controls the essence of the model that is developed and used to make decisions.
As this technology moved forward and started solving more problems, scientists became interested in learning more about how AI and ML work. The main characteristic of AI and ML that emerged is their use of data to arrive at the required solutions and to make decisions. Because data is the main source of AI-based model development, it became increasingly important to learn (a) where the data comes from and what its main source is and (b) to what extent the data includes all the required information (even if only implicitly) from which AI and ML can extract patterns, trends, and information.
Almost a decade of research and study was necessary before it became clear, through examining actual applications of this technology, that AI and ML have the potential to be political (1, 2), racist (1, 2), and sexist (1, 2). This has to do with the type of data used to build the AI and ML models. In other words, creating a biased AI and ML model that does what you want it to do is quite possible. This is completely tied to the data used to train and build the model. This is how AI ethics addresses the engineering application of AI when traditional engineers intentionally, or unintentionally, modify the quality of AI-based models so that they generate what the engineers believe is the right answer rather than modeling the physical phenomena based on reality, facts, and actual measurements.
The Massachusetts Institute of Technology has published articles regarding the biases that can occur when using AI and ML. Some of these articles clearly state: “Three new studies propose ways to make algorithms better at identifying people in different demographic groups. But, without regulation, that won’t curb the technology’s potential for abuse.” And, “This is how AI bias really happens—and why it’s so hard to fix. Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.”
Another interesting article mentions that, “There are two main ways that bias shows up in training data: Either the data you collect is unrepresentative of reality or it reflects existing prejudices. The first case might occur, for example, if a deep-learning algorithm is fed more photos of light-skinned faces than dark-skinned faces. The resulting face recognition system would inevitably be worse at recognizing darker-skinned faces. The second case is precisely what happened when Amazon discovered that its internal recruiting tool was dismissing female candidates. Because it was trained on historical hiring decisions, which favored men over women, it learned to do the same.” What this article describes is the result of research conducted into how bias can be included in a model. This is very important for both engineering and nonengineering applications of AI and ML.
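The second failure mode described in that quote can be sketched in a few lines. The toy résumé scorer below is trained on invented historical hiring decisions that disfavor résumés containing the word "womens" (echoing the reported Amazon case); the keywords, records, and scoring rule are all hypothetical and serve only to show how a weight learned from biased history reproduces that bias:

```python
# Toy scorer: weight each résumé keyword by (times it appeared on hired
# résumés) minus (times it appeared on rejected ones). Data is invented.
from collections import Counter

history = [
    (["chess", "captain"], True),
    (["rugby", "captain"], True),
    (["womens", "chess", "captain"], False),
    (["womens", "rugby", "captain"], False),
]

def train(records):
    """Learn a weight per keyword from historical hiring outcomes."""
    weights = Counter()
    for words, hired in records:
        for word in set(words):
            weights[word] += 1 if hired else -1
    return weights

def score(weights, words):
    """Sum the learned weights of a résumé's keywords."""
    return sum(weights[word] for word in set(words))

weights = train(history)
print(score(weights, ["chess", "captain"]))            # 0
print(score(weights, ["womens", "chess", "captain"]))  # -2, solely for "womens"
```

No one programmed the penalty explicitly; the algorithm simply found the pattern that best separated past hires from past rejections, and in this data that pattern is the biased one.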
Research on the fundamentals of AI and ML algorithms has revealed clearly that this technology is incredibly powerful for discovering patterns in the data used to train and develop models, for making predictions, and for helping make decisions. Because what AI and ML algorithms do is all about data, clearly, as long as the data provided to the AI and ML algorithms is generated on the basis of biases, interpretations, and assumptions, the models and workflows that this technology develops become representative of those biases, interpretations, and assumptions.
Watch for Part 2, coming soon.