Can an algorithm* be sexist? Or racist? In my last post I said no, and ended up in a debate about it. Partly that was about semantics, what parts of the process we call an algorithm, where personal ethical responsibility lies, and so on.
Rather than heading down that rabbit hole, I thought it would be interesting to go further into the ethics of algorithmic use… Please remember – I’m not a philosopher, and I’m offering this for discussion. But having said that, let’s go!
To explore the idea, let’s do a thought experiment based on a parsimonious linear model from the O’Reilly Data Science Salary Survey (and you should really read that anyway!).
So, here it is:
+70577 intercept
+1467 age (per year above 18; e.g., 28 is +14,670)
–8026 gender=Female
+6536 industry=Software (incl. security, cloud services)
–15196 industry=Education
–3468 company size: <500
+401 company size: 2500+
+32003 upper management (director, VP, CxO)
+7427 PhD
+15608 California
+12089 Northeast US
–924 Canada
–20989 Latin America
–23292 Europe (except UK/I)
–25517 Asia
The model was built from data supplied by data scientists across the world, and is in USD. As the authors state:
“We created a basic, parsimonious linear model using the lasso with R² of 0.382. Most features were excluded from the model as insignificant.”
Let’s explore potential uses for the model, and see if, in each case, the algorithm behaves in a sexist way. Note: it’s the same model! And the same data.
Use case 1: How are data scientists paid?
In this case we’re really interested in what the model is telling us about society (or rather the portion of society that incorporates data scientists).
This tells us a number of interesting things: older people get paid more, California is a great place, and women get paid less.
This isn’t good.
Back to the authors:
“Just as in the 2014 survey results, the model points to a huge discrepancy of earnings by gender, with women earning $8,026 less than men in the same locations at the same types of companies. Its magnitude is lower than last year’s coefficient of $13,000, although this may be attributed to the differences in the models (the lasso has a dampening effect on variables to prevent over-fitting), so it is hard to say whether this is any real improvement.”
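The “dampening effect” the authors mention is the lasso’s shrinkage. Under an orthonormal design, each lasso coefficient is just the least-squares coefficient soft-thresholded toward zero, which is a minimal sketch of why the gender coefficient could shrink without the underlying gap changing (the penalty value below is made up for illustration):

```python
import math

# Lasso soft-thresholding: under an orthonormal design, each lasso
# coefficient is the least-squares estimate shrunk toward zero by the
# penalty lambda (and set exactly to zero if it falls below lambda).
def lasso_shrink(ols_coef, lam):
    return math.copysign(max(abs(ols_coef) - lam, 0.0), ols_coef)

# A hypothetical penalty of 3,000 dampens a -13,000 estimate:
print(lasso_shrink(-13000, 3000))  # -10000.0
```

So a smaller coefficient year-on-year may reflect the penalty as much as any real improvement, which is exactly the authors’ caveat.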
The model has discovered something (or, more probably, confirmed something we had a strong suspicion about). It has noticed, and represented, a bias in the data.
Use case 2: How much should I expect to be paid?
This use case seems fairly benign. I take the model, and add my data. Or that of someone else (or data that I wish I had!).
I can imagine that if I moved to California I might be able to command an additional $15,608. Which would be nice.
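As a sketch, the survey model can be written as a small salary estimator (the feature names and encoding here are my own; the coefficients are transcribed from the listing above):

```python
# Coefficients from the parsimonious linear model (USD).
COEFS = {
    "industry=Software": 6536,
    "industry=Education": -15196,
    "gender=Female": -8026,
    "company_size<500": -3468,
    "company_size>=2500": 401,
    "upper_management": 32003,
    "phd": 7427,
    "region=California": 15608,
    "region=Northeast US": 12089,
    "region=Canada": -924,
    "region=Latin America": -20989,
    "region=Europe (except UK/I)": -23292,
    "region=Asia": -25517,
}
INTERCEPT = 70577
AGE_PER_YEAR = 1467  # per year of age above 18

def estimate_salary(age, features):
    """Estimate annual salary in USD from age and a set of model features."""
    return INTERCEPT + AGE_PER_YEAR * (age - 18) + sum(COEFS[f] for f in features)

# A 28-year-old working in software in California:
print(estimate_salary(28, {"region=California", "industry=Software"}))  # 107391
```

Note that the same call with `"gender=Female"` added returns exactly $8,026 less, which is the discrepancy the authors highlight.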
Use case 3: How much should I pay someone?
On the other hand, this use case doesn’t seem so good. I’m using the model to reinforce the bad practice it has uncovered. In some legal systems this might actually be illegal: if I take the model’s advice, I will be discriminating against women (I’m not a lawyer, so don’t take this as legal advice; just don’t do it).
Even if you aren’t aware of the formula, if you rely on this model to support your decisions then you are in the same ethical position, which raises an interesting challenge. The defence “I was just following the algorithm” is probably about as convincing as “I was just following orders”. You have a duty to investigate.
But imagine the model was a random forest. Or a deep neural network. How could a layperson be expected to understand what was happening deep within the code? Or for that matter, how could an expert know?
The solution, of course, is to think carefully about the model, adjust the data inputs (let’s take gender out), and measure the output against test data. That last one is really important, because in the real world there are lots of proxies…
Use case 4: What salary level would a candidate accept?
And now we’re into really murky water. Imagine I’m a consultant, and I’m employed to advise an HR department. They’ve decided to make someone an offer of $X and they ask me: “Do you think they will accept it?”
I could ignore the data I have available: that gender has an impact on salaries in the marketplace. But should I? My Marxist landlord (don’t ask) says: no – it would be perfectly reasonable to ignore the gender aspect, and say “You are offering above/below the typical salary”**. I think it’s more nuanced – I have a clash between professional ethics and societal ethics…
There are, of course, algorithmic ethics to be considered. We’re significantly repurposing the model. It was never built to do this (and, in fact, if you were going to build a model to do this kind of thing it might be very, very different).
It’s interesting to think that the same model can effectively be used in ways that are ethically very, very different. In all cases the model is discovering/uncovering something in the data, and – it could be argued – is embedding that fact. But the impact depends on how it is used, and that suggests to me that claiming the algorithm is sexist is (perhaps) a useful shorthand in some circumstances, but very misleading in others.
And in case we think that this sort of thing is going to go away, it’s worth reading about how police forces are using algorithms to predict misconduct…
*Actually to be more correct I mean a trained model…
** His views are personal, and not necessarily a representation of Marxist thought in general.