Timothy Lesaca, MD
Public discussion about artificial intelligence has largely framed the moral stakes of the technology in terms of decision-making authority. The dominant questions are familiar: Can artificial intelligence make ethical decisions? Should it be allowed to do so? What happens if machines replace human judgment? These concerns, while not frivolous, are misdirected. They presuppose that morality is primarily a matter of choosing correctly in discrete moments, and that the central ethical risk of AI lies in erroneous or unprincipled choice.
This framing misses something more fundamental. The most consequential moral effects of artificial intelligence do not arise when systems make decisions instead of us, but when they quietly reshape the conditions under which human beings experience themselves as deciding at all.
Artificial intelligence rarely confronts clinicians, professionals, or ordinary individuals with dramatic ethical dilemmas. More often, it recommends. It ranks. It predicts. It frames options in ways that make certain choices appear obvious, safe, or normatively neutral. The influence is subtle and typically benign. Indeed, the value of these systems lies precisely in their ability to reduce uncertainty, smooth friction, and minimize error. Yet it is this very smoothness that warrants ethical attention.
This chapter advances a simple but demanding claim: artificial intelligence does not remove morality; it removes involvement. The moral risk is not that human beings will begin to behave badly, but that they will gradually experience themselves as less personally engaged in the judgments that shape their professional actions and moral identities.
From the outside, such a world may appear orderly, efficient, and even just. Outcomes may improve. Errors may decrease. But from the inside, moral life may grow thinner—less emotionally inhabited, less narratively owned, less formative of character. This thinning is not a failure of virtue or willpower. It is a predictable consequence of how human beings acquire moral habits.
To understand why this matters, we must shift our attention away from rules, principles, and decision outputs, and toward moral formation. Moral character is not something most people deliberately construct. It is something they acquire, gradually and often unintentionally, through repeated patterns of action, affect, and responsibility. The philosopher who understood this most clearly was David Hume.
Modern ethical discourse often assumes that moral life is fundamentally cognitive—that individuals deliberate, apply principles, and select the right course of action. This picture owes much to rationalist traditions, particularly Kantian moral philosophy, in which ethical agency is identified with rule-governed choice. While such accounts have undeniable normative force, they offer a limited description of how moral life is actually lived.
Hume’s moral philosophy begins from a different psychological premise. Human beings, he argued, are not primarily moral reasoners but creatures of custom. Our approvals and disapprovals arise first as sentiments—felt responses of sympathy, discomfort, admiration, or aversion—and only later are they rationalized into principles. Moral judgment, on this view, is learned in much the same way as language: through immersion, imitation, correction, and repetition rather than through explicit instruction.
In A Treatise of Human Nature, Hume famously denied that reason alone could motivate moral action, insisting instead that moral distinctions are “derived from a moral sense” grounded in feeling and habit. This was not a debunking of morality, but an attempt to describe its actual psychological foundations. We become moral agents not by mastering abstract rules, but by being shaped through experience—by encountering the consequences of our actions, feeling their emotional weight, and gradually internalizing patterns of response that come to feel natural.
Several implications of this view are crucial for understanding the ethical significance of artificial intelligence.
First, moral character develops through practice, not intention. Individuals do not wake up one day and decide to become more responsible, more compassionate, or more morally attentive. These qualities emerge slowly, through repeated involvement in situations that demand judgment, expose uncertainty, and attach consequences to choice.
Second, emotional friction is not incidental to moral development; it is essential to it. Feelings of hesitation, anxiety, regret, or pride are not merely psychological byproducts of ethical decision-making. They are the mechanisms through which moral dispositions are formed. To act under conditions of uncertainty, knowing that one will bear responsibility for the outcome, is to participate in the formation of one’s moral self.
Third, what is rarely practiced becomes effortful and fragile. Habits strengthen through repetition; capacities weaken through disuse. If moral judgment becomes something one exercises only occasionally—when overriding a default or challenging a recommendation—then moral judgment itself becomes less confident, less intuitive, and less central to one’s sense of agency.
From a Humean perspective, the moral significance of artificial intelligence does not depend on whether machines can reason ethically. It depends on whether their use alters the habits through which human moral capacities are exercised and sustained.
Technologies shape human behavior not only by enabling or constraining action, but by reconfiguring the background conditions under which habits form. This insight is not new. Scholars of technology have long observed that tools alter cognition and skill even when they function flawlessly. The calculator reshapes numerical intuition; GPS systems alter spatial reasoning; checklists reorganize professional attention. In each case, capacities are not eliminated but redistributed, becoming supportive rather than primary.
Artificial intelligence differs from earlier tools in several ethically relevant respects. First, it operates continuously rather than episodically. Second, it embeds itself within justificatory structures, offering not merely outputs but reasons—rankings, probabilities, confidence scores—that preemptively rationalize action. Third, it tends to present its guidance as descriptive rather than normative, framing recommendations as reflections of what “usually works” rather than as directives that demand moral endorsement.
These features produce a subtle but important shift in the phenomenology of choice. When a clinician follows a recommendation generated by an algorithmic system, the experience is rarely one of obedience. More often, it is experienced as deferring to evidence, aligning with best practice, or avoiding unnecessary risk. The moral texture of the decision changes. Responsibility does not disappear, but it becomes diffused—shared with the system, the data, the institution.
Over time, this diffusion can invert the structure of moral deliberation. Instead of beginning with the question “What is the right thing to do?”, the agent begins with “Is there a reason not to follow this recommendation?” Moral judgment becomes an override function rather than a generative one. It is exercised primarily in exceptional cases, when something feels sufficiently wrong to warrant deviation.
From a Humean standpoint, this inversion is ethically significant because habits form around defaults. What feels normal becomes morally invisible. What requires justification becomes effortful and emotionally costly. If moral agency is repeatedly experienced as deviation rather than authorship, then moral confidence gradually erodes—not through corruption, but through disuse.
The individual does not become immoral. Rather, they become something closer to a relay than an author: a conduit through which decisions pass, increasingly justified by external systems rather than internal judgment. This shift is often welcomed, particularly in high-stakes environments such as medicine, where the burden of responsibility can be heavy. Yet it is precisely this relief that carries a moral cost.
Most ethical evaluations of artificial intelligence focus on first-order effects: accuracy, bias, safety, liability, and outcomes. These concerns are necessary, particularly in clinical contexts where harm can be concrete and immediate. Yet first-order effects tell us little about how repeated interaction with AI systems reshapes the moral psychology of the user. For that, we must attend to what can be called second-order moral effects—changes not in what decisions are made, but in how moral agency is experienced and practiced over time.
Recommendation systems are especially important in this regard. Unlike automated decision-makers, they preserve the appearance of human control. The clinician remains “in the loop.” Choice is formally intact. Yet the structure of deliberation has changed. Options arrive pre-ranked, pre-validated, and pre-justified by reference to population-level data, institutional norms, or probabilistic success.
This matters because moral judgment is not exercised in a vacuum. It is shaped by framing, default settings, and the emotional economy of choice. When one option is presented as standard, safe, or evidence-aligned, deviation carries an implicit moral burden. The clinician who overrides a recommendation must not only decide differently, but also justify that difference—to colleagues, to institutions, and often to themselves.
Over time, moral judgment migrates toward the margins. It becomes something one does against a system rather than through oneself. The capacity for judgment remains, but its role is altered. It is no longer the primary site of moral authorship, but a corrective mechanism activated under special conditions.
This is not a matter of laziness or moral weakness. It is a predictable response to environments that reward conformity to optimized pathways and penalize idiosyncratic judgment. Hume would recognize this immediately. Human beings adapt their sentiments to what is repeatedly reinforced. When alignment with algorithmic recommendation feels safe and deviation feels risky, moral confidence will naturally attach to the former.
The ethical concern, then, is not that clinicians will stop caring about right and wrong, but that moral agency will be increasingly experienced as optional rather than constitutive. Judgment becomes episodic. Responsibility becomes shared. The emotional weight of decision-making lightens. And with that lightening comes a subtle loss: fewer opportunities for moral formation.
Clinical medicine provides a particularly vivid context in which to observe these dynamics, precisely because it has long been committed to standardization, evidence-based practice, and risk reduction. These commitments are ethically justified and professionally indispensable. The question is not whether clinical decision support should exist, but how it reshapes the experience of responsibility for those who rely upon it.
Consider the increasing use of risk stratification tools, predictive models, and algorithmic treatment pathways. These systems rarely issue commands. Instead, they offer probabilities: likelihood of deterioration, risk of readmission, predicted response to treatment. The clinician remains responsible for the final decision. Yet the moral phenomenology of that decision has changed.
When outcomes are favorable, the system recedes into the background. When outcomes are unfavorable, explanation often flows outward: the model, the score, the guideline. This outward orientation is not dishonest. It reflects a genuine redistribution of epistemic authority. But it also subtly alters the clinician’s relationship to their own judgment. The decision feels less authored, less owned, less narratively integrated into one’s professional identity.
In mental health care, these effects may be even more pronounced. Diagnostic aids, symptom checklists, and treatment algorithms can be enormously helpful, particularly for consistency and access. Yet therapeutic judgment has always involved irreducible uncertainty, interpersonal attunement, and moral risk. To decide how to intervene in another person’s inner life is not merely a technical act; it is a deeply moral one.
When AI-supported tools reduce that uncertainty—or at least appear to—some of the emotional burden of decision-making is relieved. This relief is not trivial. It can protect against anxiety and burnout. But it can also attenuate the very experiences through which clinicians develop moral confidence, humility, and practical wisdom. If one rarely feels the full weight of uncertainty, one rarely practices bearing it.
This is what might be called moral deskilling: not the loss of ethical knowledge, but the gradual weakening of moral facility through underuse. Just as physical examination skills decline when replaced by imaging, moral judgment can atrophy when repeatedly deferred. The capacity remains, but it no longer feels central or reliable.
Importantly, moral deskilling does not manifest as ethical failure. It manifests as detachment. Decisions are made competently, outcomes are acceptable, but the clinician feels less personally implicated. Responsibility becomes procedural rather than lived.
Any discussion of responsibility in clinical contexts must proceed carefully. Medicine has rightly sought to move away from cultures of blame that punish individual clinicians for systemic failures. Artificial intelligence often appears as an ally in this effort, distributing responsibility across systems and reducing individual exposure to error.
Yet responsibility and blame are not identical. Blame concerns punishment and moral condemnation. Responsibility concerns ownership, authorship, and moral participation. It is possible—and increasingly common—to reduce blame while also reducing experienced responsibility.
From a moral developmental perspective, this distinction matters. Human beings do not become morally mature through punishment alone, but neither do they mature in its absence. Moral development requires being implicated—feeling that one’s judgment mattered, that one could have acted otherwise, and that the outcome is, in some meaningful sense, one’s own.
Artificial intelligence complicates this experience by offering a form of ethics-by-proxy. Decisions are justified not by reference to one’s own moral reasoning, but by alignment with external systems that carry institutional legitimacy. This can be protective, but it can also distance the agent from the moral meaning of their actions.
The result is a paradox: responsibility is formally retained but experientially thinned. Clinicians remain accountable, yet feel less like moral authors of their decisions. Over time, this can erode moral confidence and deepen a sense of professional alienation—not because work is too hard, but because it is no longer fully inhabited.
Moral development requires what might be called moral friction: moments of uncertainty, hesitation, emotional discomfort, and risk. These experiences are often treated as inefficiencies to be eliminated. In many contexts, that impulse is justified. Unnecessary suffering should be reduced where possible. Yet not all friction is gratuitous. Some is formative.
Children do not learn responsibility by being insulated from consequence. They learn it by feeling embarrassment, pride, guilt, and repair. These experiences are not optional features of moral life; they are its developmental engine. Adults are no different. Moral character continues to form throughout professional life, shaped by repeated encounters with uncertainty and consequence.
Artificial intelligence excels at reducing friction. It smooths decision pathways, minimizes hesitation, and offers reassurance. But in doing so, it may also remove opportunities for moral practice. A clinician who rarely experiences uncertainty does not become wise. A professional who seldom feels the weight of consequence does not develop a deeper sense of responsibility. These qualities do not emerge automatically from good systems. They emerge from lived engagement.
This is why the ethical concern surrounding AI cannot be resolved by appeals to outcome optimization alone. Moral life is not only about what happens, but about who we become in the process of acting.
One way to understand the long-term moral impact of artificial intelligence is through the concept of narrative identity. Human beings make sense of themselves through stories: accounts of what they have done, why they acted, and what those actions mean. Moral responsibility is partly narrative. It involves being able to say, “This is what I decided, and this is why.”
When decisions are externally justified—by algorithms, scores, or protocols—narrative ownership becomes harder to sustain. Actions are explained, but not fully integrated. The moral self-story fragments. Over time, this can produce what might be called moral thinness: a life in which moral agency is present but lightly inhabited.
Moral thinness does not look like vice. It looks like disengagement. From the outside, everything functions. From the inside, something essential feels absent: the sense that one’s judgment truly matters.
This may help explain why technological efficiency can coexist with professional burnout. Burnout is often attributed to workload, but it may also reflect moral detachment—the erosion of meaning that occurs when individuals no longer experience themselves as authors of their actions.
The ethical tradeoff posed by artificial intelligence is not between right and wrong, nor between humans and machines. It is between participation and passivity. A world can be efficient, fair, and well-regulated while still producing individuals who feel oddly peripheral to what happens through them.
Hume would not counsel the rejection of technology or a return to some imagined moral past. He would remind us that human beings are shaped by what they repeatedly do without thinking. If moral judgment becomes something we exercise only rarely, we should not be surprised when it begins to feel unfamiliar.
The moral question of AI, then, is both simple and difficult: what aspects of moral life must remain lived, even when delegation is easier? This is not a question that can be answered by policy alone. It requires attentiveness to habit, formation, and the subtle ways in which technologies shape who we become.
Artificial intelligence does not remove morality. But it can make morality thinner—less something we live inside of, more something that happens around us. Preserving moral life in the age of AI will require not only better systems, but deliberate efforts to protect human involvement, even when efficiency tempts us to let it go.
Speculation has an uneasy status in academic writing. It risks projection, exaggeration, or nostalgia. Yet moral philosophy has always depended, at least in part, on retrospective imagination: the capacity to ask not only what we are doing, but how it may one day appear when its formative effects are fully visible. When the consequences of a technological shift are primarily developmental rather than catastrophic, such imagination is not optional. It is necessary.
If, fifty years from now, clinicians and scholars were to look back on the early decades of artificial intelligence in professional life, what might they say they experienced? What lessons might have emerged—not about system performance, but about human moral agency?
First, it is unlikely that the story will be one of moral collapse. Artificial intelligence will not have rendered clinicians immoral, nor will it have stripped medicine of ethical aspiration. On the contrary, many first-order moral goods will almost certainly have improved. Diagnostic accuracy will have increased. Certain inequities will have narrowed. Some forms of preventable harm will have decreased. From a purely outcome-oriented perspective, the era may be judged a success.
Yet alongside these gains, something more ambiguous will likely have been felt. Many professionals will recall a gradual change in the texture of moral life rather than its content. Decisions will have become faster, smoother, and more defensible. At the same time, they will often have felt less personally authored. The emotional intensity of judgment—its anxiety, its doubt, its lingering sense of responsibility—will have softened.
Clinicians may describe a professional life that was less haunted by regret but also less shaped by it. Fewer sleepless nights, perhaps—but also fewer moments of moral reckoning that permanently altered how one practiced. The experience of responsibility will not have disappeared, but it will have felt increasingly procedural: something discharged through adherence rather than inhabited through deliberation.
Many will remember that they did not choose this shift. It occurred quietly, through adoption, normalization, and repetition. Moral involvement faded not because anyone rejected it, but because it was no longer consistently required.
If the period is examined honestly, one lesson will likely stand out: moral agency cannot be preserved by intention alone. Good values, explicit commitments, and ethical codes will have proven insufficient to sustain moral involvement when everyday practice no longer demanded it.
We will likely have learned—perhaps belatedly—that responsibility is not merely a legal or institutional designation but a phenomenological experience. It must be felt in order to be formative. When systems absorbed uncertainty and redistributed authorship, they did not merely protect clinicians; they reshaped them.
Another lesson may concern the limits of delegation. Early optimism will have assumed that judgment could be offloaded without altering the judge. Experience will have shown otherwise. Delegation always trains the delegator. What is repeatedly handed over does not remain intact in reserve; it weakens, changes, or recedes.
We may also have learned that moral development does not scale automatically with system quality. Even near-perfect systems will not generate wisdom, courage, or responsibility as byproducts. These traits will still have required practice—and where practice was absent, formation stalled.
Among the successes, we will rightly note that artificial intelligence helped reduce unnecessary suffering. It will have supported clinicians under immense cognitive and emotional strain. It will have prevented some harms that previously arose from overload, bias, or inconsistency. In many domains, it will have allowed moral attention to be redirected toward cases where it mattered most.
We may also judge it a success that overt blame cultures diminished. Shared responsibility, when thoughtfully implemented, will have protected individuals from being morally crushed by systemic failures they could not control. In this respect, AI will have functioned as a moral buffer—absorbing risk that once fell too heavily on individual shoulders.
These achievements should not be minimized. They represent genuine ethical progress.
Yet certain failures may only become visible when viewed across decades.
One such failure may be the underestimation of second-order effects. Ethical oversight will have focused heavily on fairness, bias, and safety while paying comparatively little attention to how repeated reliance reshaped moral confidence, judgment, and narrative ownership. Moral deskilling, where it occurred, will not have announced itself as a problem. It will have been experienced as ease.
Another failure may lie in design philosophy. Systems will have been built to minimize friction rather than to preserve formative engagement. Override will have been treated as exceptional rather than ordinary. Articulation of reasons will often have been optional rather than required. In doing so, technologies will have optimized outcomes while impoverishing practice.
Finally, we may recognize a failure of language. We will have lacked adequate terms to describe what was being lost. Because morality was not disappearing, only thinning, the loss escaped clear articulation. Without a vocabulary for diminished involvement, the change went largely unquestioned.
Looking back, a more mature ethical assessment will likely conclude that the central challenge of artificial intelligence was never whether machines could act ethically, but whether human beings would continue to experience themselves as moral agents in the full sense—as authors rather than overseers, participants rather than managers.
The most successful systems, in retrospect, may not be those that fully optimized decision-making, but those that deliberately preserved zones of human judgment, uncertainty, and responsibility. Not because humans were superior decision-makers, but because moral formation required continued involvement.
If this lesson is learned, it may still shape the future. Technologies can be designed not only to support correct action, but to sustain moral practice—to require articulation, invite reflection, and normalize ownership rather than merely compliance.
From a Humean perspective, this would represent genuine ethical wisdom: an acknowledgment that human beings become what they repeatedly do without thinking, and that moral life must therefore be protected at the level of habit, not aspiration.
Artificial intelligence will not have ended morality. But it will have taught us—perhaps more clearly than any philosophical argument—that morality cannot survive as a spectator sport. It must be lived, practiced, and felt, even when doing so is no longer strictly necessary.
Hume, D. (1739/1978). A Treatise of Human Nature. Oxford University Press.
Hume, D. (1751/1998). An Enquiry Concerning the Principles of Morals. Oxford University Press.
Aristotle. (c. 350 BCE/2009). Nicomachean Ethics (W. D. Ross & L. Brown, Trans.). Oxford University Press.
Annas, J. (2011). Intelligent Virtue. Oxford University Press.
Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over Machine. Free Press.
Gigerenzer, G. (2014). Risk Savvy. Viking.
Mol, A. (2008). The Logic of Care. Routledge.
Montgomery, K. (2006). How Doctors Think. Oxford University Press.
Sullins, J. P. (2012). “Ethics and Artificial Intelligence.” Philosophy & Technology, 25(2), 169–179.
Verbeek, P.-P. (2011). Moralizing Technology. University of Chicago Press.
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Char, D. S., Shah, N. H., & Magnus, D. (2018). “Implementing Machine Learning in Health Care—Addressing Ethical Challenges.” New England Journal of Medicine, 378, 981–983.