RSF Author David G. Robinson Discusses His New Book: Voices in the Code

February 24, 2023

David G. Robinson is the author of the RSF book Voices in the Code: A Story About People, Their Values, and the Algorithm They Made, which tells the story of how the transplant community created the algorithm that decides who receives each donated kidney. In a new interview with the foundation, Robinson discusses the development of the current Kidney Allocation System and tackles moral questions about algorithmic decision-making. The interview has been edited for length and clarity.

David G. Robinson is a visiting scholar at the Social Science Matrix at the University of California, Berkeley, and a member of the faculty at Apple University. From 2018 to 2021, he developed Voices in the Code as a Visiting Scientist at Cornell’s AI Policy and Practice Project. Earlier, Robinson co-founded and led Upturn, an NGO that partners with civil rights organizations to advance equity and justice in the design, governance, and use of digital technology. 

Q. What motivated you to write Voices in the Code? What are algorithms? Are they limited to software, and why is it important to study them?

We’ve got software making lots of choices about people’s lives. Choices about who gets to go to which schools, choices about whether someone’s going to go to jail, choices about who gets a job, who receives government benefits, or how much or what kind of help people are going to get. And we’ve had some real struggles to govern those systems effectively. We’ve had some examples where decisions are made badly, where policymakers seem to be struggling to figure out how technology works and to get it to do what we want it to do, or even figure out what we want it to do. So, one of the things that I’ve been studying is the question of how we govern software that’s making important decisions in our lives. 

‘Algorithm’ is one of those words that has both a technical and a popular meaning. Technically, it’s a set of rules for solving a problem. For example, in grade school, if you did long division with pencil and paper in math class, one could say that’s an algorithm. And that’s true. But people usually trot out the word algorithm when they’re thinking about a specific context, which is an algorithm on a computer, in software. And typically, if we’re thinking about governing it in some careful way, which is what I study, it’s an algorithm making a decision that we think is important. It’s not just deciding how the pop song you’re listening to is going to sound, or arranging the pixels of a video I’m watching into the right grid. It’s making some decision about somebody’s life or something that really matters. And we’re particularly eager to try and get that right.

I’ve been studying these kinds of problems for many years. I’ve been thinking hard for a long time and working on contexts in which we are particularly eager to have software do the right thing because it’s deciding something important in people’s lives. For a number of years in Washington, D.C., I worked alongside civil rights organizations on the full gamut of classic civil rights issues – criminal legal system issues, housing, who gets healthcare, or how educational opportunities are distributed. And we see software increasingly being used to make those decisions. And if you’re an advocate, you probably didn’t get into the civil rights world by being a computer nerd. But now, because technology is in the middle of deciding and shaping people’s lives, in order to be effective as an advocate, you have to understand how this technology works. And so that’s the space that I was working in for many years: technical expertise in civil rights advocacy. My training is in law, and my co-founder at Upturn, an NGO in Washington, D.C. that works on civil rights and technology issues, is a computer scientist. 

I was teaching a law seminar at Georgetown Law called “Governing Automated Decisions” where we were going through all of these different contexts where this software was being used and the governance around it just didn’t seem to be up to the job. Wrong decisions were being made about people, and the people using these systems, like judges and other public officials, didn’t understand how the systems worked – that a system might be giving them biased advice, or that there were errors in it they weren’t aware of. And a couple of students came to me and said, “You know, Professor, we’ve studied all these cases in which things are being done badly. But we did some research, and we think we found an example of a case where things are being done relatively well.” An example where some of the governance processes that are recommended have actually been tried in practice. And those processes have been used for years in the context of organ allocation. So, who gets a donated, transplantable kidney when it becomes available, when there are 100,000 Americans waiting for an organ? It’s really a software process. It turns out that there are a lot of problems with that process, but there are also a number of important things that are done right in that context. I think it’s a really valuable learning opportunity.

So, when people talk today about algorithms or big data or AI in the policy conversation, all of those words are big umbrellas. What they really mean is software making important decisions that we need to get right. And the people who are working on this organ transplant algorithm don’t think of themselves as doing AI ethics. They don’t look in the mirror and say, “We’re doing algorithm design” or “We’re doing big data and ethics.” But, I argue in the book that’s what they’re doing. What they are doing is really figuring out how to govern software ethically. And so even though it might not say AI ethics on the tin of what they’re doing, I think we’ve got a lot to learn from them. And the book tries to do that, to bring this experience into conversation with the broader debate about how to make software.

Q. A central theme of the book is a shared moral burden. Can you talk about that in the context of algorithms? How can algorithms act as moral anesthetics for high-stakes decision-making?

Let’s zoom in for a second on the context of kidney allocation, which is what I study in the book. So, an organ becomes available and there are all these people waiting. One often hears it described as a waiting list, as though it were a first come, first served queue. But that’s not exactly the correct picture. In fact, it’s a matching process where lots of different factors need to be considered. So, when an organ becomes available, who’s nearby and who’s ready to get that organ? Who’s got the right blood type? Who’s got a compatible immune system? All of these things matter. So, there are medical factors and there are logistical factors. And last but not least, there are moral factors; we can’t give the organ to everyone. In fact, each organ can go to only one person. So, there’s a variety of ways in which potential recipients could be prioritized. Do you give it to the young and maximize benefit? Or do you give everyone an equal chance, which might mean that it’s not efficient in some sense, but it’s fairer? Well, that’s not a technical decision, it’s really a moral decision. And everybody who’s impacted by this system ought to have some voice in it. If we think about why we even bother with democracy at all, the basic idea is that people who are impacted by a rule that we all have to live under should have a voice in setting that rule. There are edge cases – children, people with profound disabilities, temporary members of a community – who might not have the same voice as others. But the basic idea is that if it’s having an important impact on your life, you ought to have a voice in setting the rule. And when it comes to these technological contexts where software is making important decisions, we often see that’s not happening.
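
To make concrete the structure described here – medical and logistical screening first, then a morally loaded prioritization – the following is a minimal illustrative sketch in Python. Every field name, compatibility rule, and threshold is a hypothetical simplification, not the real system’s logic.

```python
# A minimal sketch of the matching idea described above. All names and
# rules here are hypothetical simplifications; the real Kidney Allocation
# System applies far more detailed medical, logistical, and policy criteria.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    blood_type: str              # medical factor
    crossmatch_compatible: bool  # medical factor (immune compatibility)
    miles_from_donor: float      # logistical factor
    priority_score: float        # moral/policy factor, defined elsewhere

def rank_candidates(candidates, donor_blood_type, max_distance=500.0):
    """Screen out medically or logistically ineligible candidates,
    then order the rest by the policy-defined priority score."""
    eligible = [
        c for c in candidates
        if c.blood_type == donor_blood_type  # simplified compatibility check
        and c.crossmatch_compatible
        and c.miles_from_donor <= max_distance
    ]
    # The moral choice lives in how priority_score is defined,
    # not in this mechanical sort.
    return sorted(eligible, key=lambda c: c.priority_score, reverse=True)
```

The screening steps are largely technical; the contested questions Robinson describes are packed into how a priority score gets defined.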

Instead, the numbers are being used as a kind of anesthesia. Basically, people don’t want to stare the moral crisis in the face: we can’t save everyone whose life needs saving. In the case of kidney transplants, people can be sustained on dialysis if they don’t get a kidney. But dialysis is pretty grim. Twenty percent of people die within their first year on dialysis. So, getting people off dialysis means, on average, saving their lives. But we can’t do it for everyone, so there’s this very powerful impulse to come up with some neutral pretext for making a hard moral choice. Instead of saying, “We’re going to give this to this person because we flipped a coin – there’s no fair way, but we have to give it to someone,” or, “Well, we think this person’s life is worth more,” we’re going to say something much more neutral and quantitative, like, “This person had a higher allocation score.” In fact, Congress says that organs should be given out on “medical criteria,” and then there’s a whole process of unpacking what that means. But the truth is that medical criteria can’t get you all the way to an answer about who should get each organ, because many more people could benefit from a given organ than can receive it.

And so, the inescapable truth is there’s a moral decision-making piece that has to happen. So how do you do that? Well, one way is to come up with numbers that act as a kind of neutral excuse, or what I call a moral anesthetic. In science and technology studies, people often observe that numbers seem objective and neutral, and that they often hide these hard moral choices. And the typical observation people make about this is that it’s bad – that it alienates us from the moral substance of what we’re really doing as a society to one another. And there’s, of course, a grain of truth in that. At the same time, though, I would say that we do need to make these choices, even though they are in some sense impossible. Or what Calabresi and Bobbitt, two scholars that I quote in the book, called tragic choices. Tragic in the sense that nothing we can do is going to fully live up to our values. And yet, we have to do something. And in that context, having a number that gives us a reason to do one thing instead of another, and helps us put one foot in front of the other when we can’t save every life, can be, I think, sometimes useful.

Q. What role did Teflon, and Dr. Belding Scribner, play in bringing collaborative decision-making to the world of kidney transplants?

Let’s flashback to 1960. At this time, there had been a handful of kidney transplants. In particular, there was a gentleman by the name of Herrick. He was a military veteran who needed a kidney transplant and happened to have an identical twin brother, who had a kidney that he could give to Herrick. Because they were identical twins, all of the genetics of their immune systems were identical, and that meant that the healthy twin brother could give the ailing twin a kidney and the body of the ailing twin would think the kidney was its own. In other words, the immune system wouldn't reject this transplant the way that it would reject a transplant from a stranger. So, it was sort of a tantalizing stuck moment because medical professionals knew how to do the surgery, but, in most cases, they couldn't keep the organ ticking inside the body of the recipient.

Meanwhile, the other way of dealing with kidney failure was dialysis – hemodialysis is the typical form. That means taking someone’s blood out of their body, cleaning it by removing the toxins, and putting it back. The first time that happened was actually during World War II, in Holland. But getting the blood out and back in was a mess. You had to use a large-bore needle, and in the same way that injecting drug users eventually run out of access points to their own bloodstream, that’s what would happen to these dialysis patients after about a month. One source I read described Lazarus-like recoveries for dialysis patients – people would come back to life. But then, as soon as they ran out of places to put the large-bore needle, they couldn’t get dialysis anymore and they would die.

That is, until 1960, when Belding Scribner, a doctor at the University of Washington, takes this new material, Teflon – the same nonstick material used for pots and pans at the time – and creates a nonstick tube that you could leave in the patient’s arm indefinitely. And that meant you could do dialysis for the same person over and over, potentially for many years. So, you could now save anybody who had kidney failure, which otherwise would be a fatal condition.

Now that Scribner had this nonstick tubing, he also had a really sticky moral problem on his hands, because he only had four of these dialysis machines, and there needed to be some way of deciding who would get treatment and be saved. There was a huge population of people with kidney failure; he was inundated with desperate calls from dying patients and their physicians. He decided that there was a medical piece to this, which was figuring out who can benefit from the treatment and who is medically eligible. But there was also an ethical piece: many, many more people are medically eligible than we can help. We have to ration care, and someone, somehow, has to decide which of these people is actually going to get it. And instead of saying, “Well, doctors are the experts,” Scribner said doctors are the experts on medicine, but they aren’t the exclusive experts on ethics.

So, he and his team, with the help of the medical society, got together a group of laypeople from the local community, including a minister, a housewife, and a banker. They trained these people on what dialysis was and what patients needed to be able to do in order to thrive on dialysis. And this committee had a file for each patient and would anonymously evaluate these people. When news of this committee came out, there was a tremendous public backlash. People thought it was too ad hoc and that the committee didn’t have clear rules to use to decide who to save. And there was some reporting that suggested the committee was favoring men who had families and were breadwinners. A couple of critics at the time wrote about the committee, and one said, “The Pacific Northwest is no place for a Henry David Thoreau with bad kidneys.” In other words, rebels, artists, and nonconformists were not going to be at the front of the line here. And so, the committee was shut down. But it created a precedent: what happened there was profoundly, even uncomfortably, honest, and it established the idea that we should be honest with the broader public and bring the broader public into making some of these decisions. That tradition of public engagement continues to this day, and it shapes the design of this allocation algorithm.

Q. What led to the federal decision to fund dialysis? How did this take the moral burden of kidney transplant allocation nationwide?

One way, one very expensive way, of escaping from the problem of deciding who should get dialysis is to essentially write a blank check and say that the federal government will pay for dialysis for all the people who need it. And that is what Congress did in the early 1970s. It’s a Medicare program, but because it’s disease-specific, you don’t have to be over 65 to qualify.

In order to persuade Congress to do this, a group of patients went to Capitol Hill. They wheeled a dialysis machine into the hearing room. And the wife and primary caregiver of one of the patients hooked her husband up to the machine. When he arrived, his blood was full of toxins, he was lethargic, and so on. And over the course of this several-hour hearing, before the eyes of Congress, he was literally revived to health. There are transcripts where people say things like, “Never in all my years of being a legislator have I seen anything as persuasive as this revival. Of course, we have to fund this.” So, they funded it. And this is still the case today: anyone who needs dialysis can have it, and we’ve now got a large population of people who are getting dialysis.

But, as I said before, the problem with transplants was that unless you had a close relative, the recipient’s body would reject the organ. The immune system of the recipient would say, “This isn’t our kidney. This is an invading object that we need to attack.” Of course, I am speaking metaphorically about how immunology works, and probably too loosely for the comfort of actual immunologists. But what ends up changing the game here is the introduction of a drug called cyclosporine in the early 1980s. Cyclosporine stops the immune system from attacking a transplanted kidney. This suddenly makes it possible for transplants to take place between strangers. So, now you can be an organ donor and a huge variety of recipients could receive, and have their lives improved by, that same organ. So, the original question that the lay committee in Seattle – sometimes called the Seattle “God Committee” – faced, deciding who could get dialysis, changed to deciding who can get each organ, and that is what the software does today.

Q. Can you briefly talk about the changes made to the kidney transplant system from 2004 to 2014? How was the development of the current kidney transplant system, the Kidney Allocation System (KAS), an example of using best practices when creating algorithms?

The book describes a period between about 2004 and 2014 when there was a redesign of the national kidney allocation algorithm. So, anywhere in the United States, an organ becomes available, and this software makes a prioritized list saying offer it to this person first, and then that person, and then this next person, and so on down the line. And the first idea about how to redesign this system was that we should maximize the total amount of benefit that we get from the supply of organs. So, with each organ that becomes available, we’re going to ask, “Who would get the greatest number of extra years of life?” And then we give it to them. That way, we’ll maximize the total benefit from the pool of organs. This approach was particularly popular among doctors who like to think that one year of life saved is as good as any other. To them, it seems unfair to value one person’s life year more than somebody else’s.
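
To see how thin the mechanical part of that proposal is, here is a toy sketch of the “maximize total benefit” ranking (the proposal that was ultimately rejected, not the rule that was adopted). The survival predictions are a hypothetical stand-in; in practice they would come from a statistical model, and that model is where the hard questions hide.

```python
# A toy version of the "maximize total benefit" proposal discussed above:
# rank candidates purely by predicted extra life-years gained from the
# transplant. The predictions are an assumed input from a survival model.
def rank_by_benefit(candidates, predicted_extra_life_years):
    """predicted_extra_life_years maps each candidate (e.g., a name
    string) to the expected life-years gained if transplanted."""
    return sorted(candidates,
                  key=lambda c: predicted_extra_life_years[c],
                  reverse=True)

# Example with hypothetical numbers:
#   rank_by_benefit(["A", "B"], {"A": 10.0, "B": 30.0})  ->  ["B", "A"]
# A candidate predicted to gain 30 years always outranks one predicted
# to gain 10, which is exactly the fairness concern discussed next.
```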

But, as it turns out, this kind of maximizing approach had some real fairness problems. Maybe the most obvious one is that if you want to maximize the number of years of life that you save, it makes sense to focus very heavily on the youngest people who need organs. And, of course, most people think there should be some focus on that, but if you really want to strictly maximize how many years of life you can save, it basically means locking out many of the older candidates from getting organs at all. And people looked at that and said, it’s not fair to do that. 

The other thing is, it’s not random who needs a kidney. There are social determinants of health. If you look at where kidney failure is happening in the United States, the rate in the African American community, for instance, is three or four times higher than in the White community. Why is that? Well, it’s because access to care, in general, leads to people having better kidney health. It’s because socially determined conditions like high blood pressure, diabetes, and even stress play a significant role in contributing to kidney failure. And those things are concentrated. So, if you decide you want to maximize the total benefit, which might sound rather evenhanded on its face, you end up punishing people for having lacked access to good medical care in the past. Once there were public meetings, the maximizing approach was rejected as unfair – and getting the public meaningfully involved is a best practice. Instead, they ended up with a compromise: the youngest and healthiest organs would go to the youngest and healthiest recipients, but everyone on the waiting list would get a chance to receive an organ.
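
The compromise has a simple shape, sketched below: steer the longest-lasting organs toward the candidates expected to benefit longest, while keeping everyone on the list eligible for the rest of the pool. (The deployed KAS expresses this idea through donor and candidate longevity indices known as KDPI and EPTS; the 20 percent cutoff below mirrors that design but is a simplification for illustration.)

```python
# An illustrative sketch of the longevity-matching compromise. Percentiles
# are simplified stand-ins for the KDPI/EPTS indices the real system uses;
# lower means longer expected organ function or post-transplant survival.
def in_priority_tier(donor_longevity_pct: float,
                     candidate_longevity_pct: float) -> bool:
    """True if this candidate gets first-pass priority for this organ."""
    # The longest-lasting organs are offered first to the candidates
    # expected to benefit longest...
    return donor_longevity_pct <= 20 and candidate_longevity_pct <= 20

# ...but candidates outside this tier are not locked out: they remain in
# the ordinary ranking for all other organs, so everyone on the list
# still gets a chance.
```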

What they ended up with was not perfect. Nobody got exactly what they wanted. But, for me as a scholar, this was a powerful reminder that democracy isn’t about getting exactly what one wants. Really, the success case for democracy is a hot mess that gets us to a mutually tolerable compromise that we can live with until we come back and make it better. And even though I can point to lots of drawbacks and defects in the system as it exists today, fundamentally, I admire it. And I admire the process that led to it. It gave me a sense of humility. If you’re just thinking in abstract, philosophical terms about how things ought to be, it’s easy to stand back and tut-tut and say, “This isn’t good enough, it ought to be better.” What’s really hard, though, is rolling up our sleeves and making a thing better, and tolerating the ways in which it’s still broken even as we’re trying to improve it. I think the larger AI ethics conversation has spent a lot of time thinking about ideals and admiring the problem of algorithms being poorly governed and biased. And I’m hoping the book can be part of a pivot in the policymaking and the scholarship around AI ethics. I’m hoping we can start to devote more of our attention to the question of how we can make systems that are more accountable, and that are better and fairer for people than what we’ve had before.

Q. What are some benefits and drawbacks of the current Kidney Allocation System?

On the benefit side, one would have to count the tens of thousands of lives that are saved or prolonged every year as a result of this system. I also think it gives people a way to contribute. Organ donors can sign up to give their organs with reasonable confidence that good things will happen if, sadly, they at some point become a donor. And the people who are doing these transplants are just heroic. That’s especially clear when you hear about the early years, before some of the modern techniques had been developed. Surgeons would conduct these 20-hour surgeries when the chance of surviving for a year was well under 50 percent. But they kept trying, because recipients would otherwise die. They were trying desperately to bring people back to life and back to health.

I went to the annual meeting of transplant surgeons and clinicians. There’s a sort of cartoon version one might imagine of what a professional annual meeting for a group of doctors looks like: seminars until lunchtime and golf in the afternoon, or something like that. But, I’ll tell you, that was not the case with this community of transplant surgeons and clinicians. They had sunrise intensives at 7:00 AM sharp in a windowless conference room, briefings on some refinement of a surgical technique. The work ethic and the scale of human achievement are just incredible.

That said, there are also some things about the system that are vexed in various ways. I’ll mention just two of them briefly. One is, if you think about the organ allocation algorithm, the basic story is we’ve got a bunch of people on a waiting list and the algorithm is deciding which of those people are going to get organs. We can spend a lot of time and energy, as we have, on trying to get that algorithm to be as fair as possible. But one thing that doesn’t address is who gets onto the waiting list in the first place. And it turns out, there are a lot of access barriers to even being allowed to join the list. For example, surgical centers get graded on how successful their transplants are, and a center with poor numbers can be shut down. So, they have to think carefully about who they are even willing to put a transplanted organ into. If they think a patient lacks access to care, or might not be able to drive back to the hospital for checkups, or doesn’t have somebody at home to look after them while they recuperate, then the center is thinking, “This patient could be bad for my numbers. And if my numbers aren’t good, then I’m at risk of getting shut down by the federal authorities.” At a meeting of surgeons that I attended, one of the most striking things I heard was a surgeon who said, “I can’t afford to be a 95% surgeon in a 97% world, because they will shut us down.” So, that’s one way in which I think the system still has trouble: who gets onto the list in the first place.

The other is, as a national policy, we underwrite dialysis for as long as anyone needs it. On the other hand, the federal government only funds three years of the immunosuppressive medications a transplant recipient needs to keep the organ going. One thing that happens is that people of modest means will receive a transplant, get the benefit of immunosuppressive medicine for three years, and then lose their eligibility for benefits. They won’t be able to afford the medicine to keep their transplant working, the organ will fail, and they’ll go back on the transplant list. Obviously, it is morally perverse and horrible that we would do that to people. But, also, if you want to take a cold, green-eyeshade perspective on this, it’s massively wasteful monetarily. In the long term, it’s cheaper to have a transplant than to be on dialysis, because the medicines are cheaper than the dialysis treatments. But performing the hugely expensive process of a transplant, then neglecting the transplant, letting it fail, and putting the person back on dialysis is just about as backward a combination of behaviors as one could imagine in this area. And it’s happening with disconcerting frequency. So, I would say that it’s not only the algorithm but the whole surrounding social context that we want to think about when we’re thinking about what’s fair.

Q. What makes an algorithm high-stakes? In addition to the KAS, what are some examples of high-stakes algorithms?

I think that whether something is high stakes is not a matter of technology, but a matter of the impact on people’s lives. The question is, “What does this mean to the people who are being evaluated or judged?” I think about things like pre-trial risk assessments. When someone is arrested, they’re presumed innocent under the law. The question is, can they go home? Or do they have to wait in jail until their trial? And it turns out that people who are made to wait in jail have great difficulty marshaling an effective defense. They can’t go around and gather evidence or do other things. Meanwhile, it is often the case that prosecutors will offer people release from jail if they plead guilty. It’s a perverse system. And we’ve got software now that decides who’s dangerous and ought to be put in jail until trial, or at least decides what a judge should be told about a person’s dangerousness. So, even if, formally, the final decision rests with a human, we’re building software to paint these labels of dangerousness onto human beings that we’re contemplating putting into jail cells. It’s a real challenge and it’s one that I’ve worked on.

Another example that has been studied carefully is the child welfare system. You have a government office that’s charged with protecting children. It’s getting calls from teachers and social workers who say that they’re worried about a child, and then the child welfare system has to decide which of these calls it’s going to investigate. And there again, the data that they have and use is based on who they’ve investigated in the past and who has contact with public services. Relatively wealthy families make less use of public services, so even if the same things are happening in those families as in poorer families, they’re less likely to be reported to child services for those problems. So, there are all kinds of difficulties there.

You could also think about your own life and the times you were judged by software: if you’ve ever submitted a resume to a piece of technology that decides whether you get to speak to a human, or if you’re a parent and your kid is being assigned to a school through some kind of mechanism. I think we’ve all seen those kinds of systems. There are also more nuts-and-bolts kinds of things that might not be as often in the headlines. For example, if you’re selling your house or trying to buy a house, the question is, what’s this house worth? And there’s some algorithm that’s deciding what your house is worth and what it should be listed for. If there are biases in that software, it can have a huge impact on your family and intergenerational wealth. I should also add that as our use of technology expands, there are additional areas where software is making new kinds of decisions, or is making decisions that impact our lives more often. So, I think the question is, how do we do that accountably? How do we do it democratically? That matters not just for the software we have today, but also for the world we’re trending into in the future.

Virginia Eubanks is a scholar who wrote a book called “Automating Inequality,” which is a trio of case studies. One of the case studies is of the homeless shelter allocation algorithm used in Los Angeles. Eubanks describes how everyone got excited about this algorithm that was going to determine who from the waiting list should be prioritized for shelter in Los Angeles. And what she points out is that the question of whether the algorithm is fair is zoomed in too much; it’s too narrow. It’s the wrong question. The bottom line is that there is not enough shelter for all the people who need it. We got so excited about making the allocation of these shelter beds “fair.” But she says that what we really need is not a fancier algorithm; we need more beds. And this, to me, is such a powerful example of one of the drawbacks of a careful process for governing an algorithm: the way it zooms us in. We might get so focused on the fairness inside the software that we forget to explore broader questions about the fairness of the surrounding social situation.

So, in the context of my book, we’re so focused on who from the waiting list gets an organ, that we’re forgetting questions that need to be asked about how people get on the list in the first place or what happens to them after they get a transplant. So, I think that if we are going to continue these careful processes of building software in an accountable way, we also need to keep an eye out for the surrounding context that’s not in the software but is in people’s lives.

Q. Can you talk briefly about the four practices that experts believe should apply to the governance of algorithms? What are some of the benefits of these practices and what are some of the arguments against them?

I grouped the practices that people have recommended into four buckets. It’s not the only way to think about these practices, but I found it to be useful. So, first, you have ideas about public participation. You also have ideas about transparency – can we see what the system is doing? You have forecasting – before we go in and do something, can we anticipate what the consequences would be? Another term for forecasting is “impact statement,” a bit of policymakers’ jargon. And finally, auditing – once the system is out there in the world, we should have a publicly accountable kicking of the tires that asks whether the system is working the way it ought to.

So those are public participation, transparency, forecasting, and auditing – all techniques that people are proposing for other high-stakes systems, and each of them has been part of the story of developing the transplant system for many years. The public meeting where people said maximizing benefit would not be fair is a participation story. The fact that everyone can learn how the system works is a transparency story. Participation takes time, and transparency is hard work. It’s not enough just to disclose the source code or some technical details. If you really want an inclusive public conversation, you need to provide information in a way that people can absorb. You need it to be possible for someone to say, “I’m not a medical expert, but I have a sense of how this works.” In order to create that, the experts need to invest effort in putting things into plain language, making clear visualizations, and writing simple descriptions of what the process looks like and what’s happening. That gives a lot of power to the experts who are doing the describing and deciding how to put things in plain language. And that’s a risk. But ultimately, we can’t all be experts in everything, so inclusiveness means some level of translation. And that, I think, is part of what transparency really is.

It’s the same with forecasting. It’s really valuable to have some centralized data analysis where people publish forecasts, so that if you are an impacted community member and you want to know how a system will affect you, you don’t have to reinvent the wheel. That means you want somebody with the right incentives to be honest about the drawbacks of the systems they are considering.

And finally, auditing. There, again, you need somebody who can kick the tires, and you need some kind of institutional structure to do that. In the transplant case, it’s an independent nonprofit. So, you’ve got one group that operates the transplant allocation system and a separate group, called the Scientific Registry of Transplant Recipients, that analyzes the outcomes. That follows a model that auditors in many different fields, including finance, have used for many years. Whoever is checking whether things are done correctly should be different from whoever is doing them, because there are different incentives and different perspectives there.
