Event

Peter Railton (University of Michigan)

Thursday, January 18, 2018, 15:30
Room Z330, UdeM, Pavillon Claire McNicoll, CA

Title: Moral Learning and Artificial Intelligence.

Coffee and cookies from 3 p.m.

Traditional approaches to moral development have emphasized the explicit teaching of norms, e.g., via parental instruction; the acquisition of behavioral dispositions by “social learning”, e.g., via infant imitation and modeling of observed behaviors; or progression through a fixed set of developmental “stages”. But what if we understood moral learning as closer to causal learning and the development of commonsense physics?

Developmental evidence suggests that infants early on begin to model their physical environment and its possibilities (Gopnik & Schulz, 2004), using observation but receiving very limited explicit instruction or external reinforcement. Similarly, there is evidence that infants early on begin learning a kind of commonsense psychology that enables them to model others’ behavior in terms of intentional states, once again using observation but very limited explicit instruction or external reinforcement (Wellman, 2014). These internal models enable infants to interact reasonably successfully with their physical and social environment even if they are unable to articulate the causal or psychological principles involved: the knowledge underlying these capacities is generalizable despite being implicit, and so is spoken of as intuitive.

Internal models are not limited to causal and predictive information, however, but also appear to encode evaluative information, including evaluation of possible actions or third-party social interactions for such features as helpfulness, harm, knowledgeability, and trustworthiness (Hamlin et al., 2011; Doebel & Koenig, 2013). When combined with an implicit capacity to empathically simulate the mental states of others, these evaluative capacities can underwrite a kind of intuitive learning of commonsense morality. Such learning occurs without much explicit instruction in moral principles, yet with a capacity to generalize and with some degree of moral autonomy: by age 3-4, children will resist conforming to imposed rules that involve harm or unfairness toward others (Turiel, 2002).

To be genuinely intelligent, artificial systems will need to possess the kinds of intuitive knowledge involved in commonsense physics and psychology. And to be both autonomous and trustworthy, artificial systems will need to be able to evaluate situations, actions, and agents in terms of such categories of commonsense morality as helpfulness, harm, knowledgeability, and trustworthiness. Deep learning approaches suggest how intuitive knowledge of the kind involved in predictive learning might be acquired and represented without being “programmed in” or explicitly taught. How might further developments of these approaches make possible the acquisition of intuitive evaluative knowledge of the kind involved in commonsense epistemic or moral assessment?

