Human autonomy: using technology to achieve our goals
An analysis of autonomy and how this influences our consideration towards AI.
This is part 5 of a 5-part series on Ethical AI. If you missed part 1, Ethical AI Standards: A Process Approach, click here.
Freedom can and should be viewed as an intrinsic good. It is also true that there are some freedoms we willingly give up, and with good reason. These two claims do not conflict, and the concept of autonomy will show us which freedoms we must preserve and which we can do without. To that end, we will give an analysis of autonomy and see how it shapes our thinking about AI.
As individuals, we each have a unique set of preferences. In economic theory, we model such preferences mathematically via utility functions. The concept is usually applied to goods, that is, the consumption of goods and so on, but nothing restricts preferences to goods alone: we can envision, and construct, utility functions over matters of taste, life goals, and so on (e.g., the utility of education as an intrinsic good, the utility of time spent with friends, etc.).
The point, then, is to see what barriers, if any, stand in the way of satisfying our preferences. In classical economics, agents are bound by a certain amount of wealth: they have budget constraints and so can only purchase so many goods. The choices an agent may make thus depend on their budget; it is the limiting factor. We can inspect the limiting factors across other sets of preferences, and what we find are the things that limit our autonomy: e.g., the lack of women's suffrage is a barrier to any woman who wants a political voice; it strips her of her autonomy. Inverting our view from barriers to what they constrain, we arrive at our definition: autonomy is the ability to satisfy our preferences. Since our preferences range across numerous categories, our autonomy is multifaceted; it may excel in one area and be deficient in another (e.g., money satisfies material preferences, the ability to contemplate satisfies mental preferences, and so on).
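The budget-constraint picture above can be sketched in a few lines of Python. Everything here is invented for illustration: the prices, the budget, and the toy utility function are assumptions, not a real economic model. The agent simply enumerates the affordable bundles and picks the one it prefers most.

```python
from itertools import product

# Hypothetical prices and budget, for illustration only.
prices = {"books": 15, "coffee": 5}
budget = 60

def utility(books, coffee):
    """A toy utility function with diminishing returns to each good."""
    return books ** 0.5 + 0.5 * coffee ** 0.5

# The budget constraint defines the set of choices actually available.
affordable = [
    (b, c)
    for b, c in product(range(5), range(13))
    if b * prices["books"] + c * prices["coffee"] <= budget
]

# Within that set, the agent satisfies its preferences as best it can.
best = max(affordable, key=lambda bc: utility(*bc))
print(best)  # the preferred affordable bundle of (books, coffee)
```

Relaxing the budget enlarges `affordable`, and with it the agent's ability to satisfy its preferences, which is exactly the sense in which loosening a constraint extends autonomy.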
Calculation is a worthwhile endeavor in satisfying our preferences: sometimes we have only a vague idea of what we want and do not know the optimal way to get there. A good example is course selection for a university degree. One might be able to list one's preferences (e.g., "I prefer morning classes over evening classes," or "I'd rather have longer days of class if it meant fewer days at the university"), but one must then figure out, from the available options, which configuration of courses satisfies those preferences best. Having more information available therefore helps us satisfy our preferences, and ultimately means more autonomy.
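The course-selection example above is, at bottom, a small search problem, and can be sketched as one. The course sections and the scoring weights below are made up for illustration; the point is only that stated preferences ("prefer mornings," "prefer fewer days on campus") become a score, and calculation finds the configuration that satisfies them best.

```python
from itertools import combinations

# Hypothetical course sections: (course name, time of day, day of week).
sections = [
    ("Calc I", "morning", "Mon"),
    ("Calc I", "evening", "Wed"),
    ("Chem I", "morning", "Mon"),
    ("Chem I", "evening", "Tue"),
    ("History", "morning", "Tue"),
    ("History", "morning", "Mon"),
]

def score(schedule):
    """Score a schedule against two stated preferences:
    prefer morning classes, and prefer fewer distinct days on campus."""
    mornings = sum(1 for _, time, _ in schedule if time == "morning")
    days_on_campus = len({day for _, _, day in schedule})
    return mornings - days_on_campus  # illustrative weighting

def valid(schedule):
    """A valid schedule covers all three courses, one section each."""
    return len({name for name, _, _ in schedule}) == 3

# Exhaustively calculate the configuration that best satisfies the preferences.
best = max((s for s in combinations(sections, 3) if valid(s)), key=score)
```

Here the winning schedule packs three morning sections into a single day, which is precisely what the two stated preferences jointly demand.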
Some of the preferences we hold are ultimately contradictory, and one must win out (e.g., wanting to smoke a cigarette right this moment while also wanting to quit smoking; the two desires cannot both be satisfied at once, so a choice must be made). Ultimately, then, we must strive to satisfy our most noble preferences: the most moral, grand, and fulfilling desires we can have. As humans, we struggle to achieve these noble goals (see the smoking example above), and when we can be helped in any way, we ought to take that help to extend our autonomy. Paradoxically, this means giving up some autonomy along the way (e.g., allowing a friend to tear up your cigarettes means you can no longer satisfy your desire to smoke at this moment: a price paid now for the greater, ongoing reward later). Luckily, the foresight of what to give up seems within reach. We are able to reason and develop meta-preferences, that is, preferences about our preferences, and so, with determination and full use of our advantages, we may set up our systems and technologies to coax our autonomy in the proper direction.
Let’s now apply our analysis to see how AI can help us with our overarching goals. Firstly, calculation is obviously an area where AI naturally excels. Consider nugget’s scoring system, and we easily see how AI can be used to calculate options and derive optimal shortlists and suggestions.
This is not the end, however, because the reasons behind the calculations matter. This point is in fact related to, and covered by, transparency concerns, but we are starting to see how the concepts connect and build on each other. The increasingly transparent nature of AI gives new information to users and thus adds to our autonomy: if we not only have an ordered shortlist generated by nugget’s screening engine but also have benchmarks for each relevant area we are interested in, then we are at once in a better position to pick candidates.

And it does not stop there. The classical notion of autonomy concerns freedom of choice, so it is advisable that AI programs accommodate changes or decisions the AI itself did not pick. The aforementioned shortlist, then, is customizable after the fact: the choice remains ours, and our autonomy is fulfilled.

Consider, too, that some of our inclinations are best left behind. We all carry biases; perhaps you think certain educational institutions produce the best and brightest individuals. You may have your reasons (e.g., such-and-such university is #1 for biochemical engineering, so all biochemical engineers ought to be hired from that university), but they may nevertheless undermine your overall goal (e.g., having the best biochemical engineers working for you; nowhere is it necessary that they come from any particular school if their skills and disposition satisfy your requirements). It is therefore of considerable benefit to have AI set toward our highest goals, that is, the goals which are not subject to such error. Sometimes, however, our highest goals are unattainable via AI alone, and putting the onus on AI would be a mistake (imagine a medical AI program given the goal "remove all suffering in the world"; this is simply untenable).
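The shortlist idea can be made concrete with a sketch. To be clear, nugget's actual engine is not public, so nothing here describes it; the candidates, criteria, and weights are all invented. The sketch shows only the two properties argued for above: the ranking exposes its per-criterion benchmarks (transparency), and the human can still reorder the result afterwards (preserved choice).

```python
# Illustrative sketch only -- not nugget's actual engine.
# Hypothetical weighted criteria; exposing these is the "transparency" part.
criteria = {"skills": 0.6, "experience": 0.4}

# Hypothetical candidates with per-criterion benchmark scores.
candidates = {
    "Avery": {"skills": 0.9, "experience": 0.5},
    "Blake": {"skills": 0.6, "experience": 0.9},
    "Casey": {"skills": 0.4, "experience": 0.4},
}

def total(scores):
    """Weighted sum over the published criteria."""
    return sum(criteria[c] * scores[c] for c in criteria)

# AI-generated shortlist, best first, with the reasons attached.
shortlist = sorted(candidates, key=lambda name: total(candidates[name]), reverse=True)
for name in shortlist:
    print(name, round(total(candidates[name]), 2), candidates[name])

# The human keeps the final say: the list is customizable after the fact.
shortlist.remove("Blake")
shortlist.insert(0, "Blake")
```

Because each candidate's per-criterion benchmarks are printed alongside the total, the user can see why the ordering came out as it did, and because the final reordering is an ordinary list edit, the AI's ranking remains a suggestion rather than a decision.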
Therefore, goals constructed in line with the overall goal, i.e., instrumental goals, are perfect for AI to handle (return to the medical AI program and limit its scope to determining which disease a patient has; the success of such a program would save resources in diagnosing patients, bring faster processing, and, overall, reduce the amount of suffering in the world).
Instrumental goals must therefore not be conflated with potentially contradictory goals (e.g., setting up an AI to search specifically for talent schooled at such-and-such a university, as opposed to searching for the skills that make the best biochemical engineer). Again, a correct ontological basis determines proper instrumental goals: if one knows the skills biochemical engineering actually involves, one can set up instrumental goals that serve the ultimate goal.
So, AI ought to strive to let humans make decisions; a forceful AI is a dangerous AI. The only thing we should let AI do without our interference is remove our worst tendencies: letting the AI calculate objectively, free of our contradictory goals, keeps the process on the right track. At the same time, our ability to peer into the AI’s considerations gives us more knowledge, and thus lets us make better decisions.
This concludes our 5-part series on Ethical AI. It was written with the process in mind: AI is developing, not developed, and will undergo continuous improvement and change. The ideas presented in this series aim to aid that process; adherence to these guidelines will lessen pitfalls and strengthen the justification and development of AI. New ethical considerations will be added as more problems become apparent, and with determination we will stay ahead of the curve and continue to derive useful warnings and advice. AI development marches on.