## Content
When we find that the use of algorithms results in discriminatory outcomes, the tendency is to try to fix it intuitively. For example, if an algorithm discriminates between two groups of people based on particular characteristics, the natural inclination is to prevent or ban targeting based on those same characteristics.
An episode of the Two Think Minimum [podcast](https://techpolicyinstitute.org/publications/bigdata/catherine-tucker-on-algorithmic-bias/) with Professor Catherine Tucker makes the case for a cautious approach and the need for better understanding.
*This was the experiment:*
> ... So, what we did was we started a Facebook ad in 190 different countries, and the ad was going to be promoting careers in science, technology, engineering, and math, which, you know, has been traditionally an area where women are underrepresented. And what we had, the reason we decided to launch this in 190 countries was we thought, well, wouldn’t it be interesting if the algorithm picked up something about the degree to which females actually had opportunities and so on, as opposed to the hypothesis you hear in the algorithmic discrimination literature, that algorithms pick up discrimination from a training dataset and then perpetuate it.
*What did they find?*
- The ad was shown 20% fewer times to women than men.
- They determined that it wasn't because women were not clicking on the ad; in fact, women were more likely to click on it.
- Was it picking up on a 'cultural prejudice'? It turns out that the country with the greatest disparity was Canada, which makes that explanation unlikely.
*So, what was the cause?*
> ... what we worked out was actually what was going on was that women have more expensive eyeballs than men....
> So as a result, the algorithm wasn’t told whether to target men or women, it was told to target both genders. And it simply went out there and found the most inexpensive eyeballs, which happened to be male ones. And so this was an example where you have a disquieting result, which looks like algorithmic discrimination, an ad not being shown to women, which should be shown to women, and being shown instead to men. But instead, it was just a result of the algorithm going out there and trying to save the advertiser a bit of money and being cost-effective.
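To make the cost mechanism concrete, here is a minimal sketch, not Facebook's actual delivery system, of a greedy, cost-minimizing impression allocator. The CPM figures, budget, and audience sizes are made-up illustrative numbers; the only point is that if one group's impressions cost more, a pure cost-minimizer shows the ad to that group less, even though it was never told to target by gender.

```python
# Minimal sketch of a cost-minimizing ad allocator (illustrative only).
# Assumed, made-up numbers: women's impressions cost more than men's.

def allocate_impressions(budget, cpm_by_group, audience_cap):
    """Greedily buy the cheapest available impressions until the budget runs out."""
    impressions = {group: 0 for group in cpm_by_group}
    # Work through groups from cheapest CPM to most expensive.
    for group in sorted(cpm_by_group, key=cpm_by_group.get):
        cost_per_impression = cpm_by_group[group] / 1000  # CPM = cost per 1,000 impressions
        affordable = int(budget // cost_per_impression)
        bought = min(affordable, audience_cap[group])
        impressions[group] = bought
        budget -= bought * cost_per_impression
    return impressions

result = allocate_impressions(
    budget=500.0,                                  # assumed campaign budget
    cpm_by_group={"men": 4.0, "women": 6.0},       # assumed CPMs: women's eyeballs cost more
    audience_cap={"men": 80_000, "women": 80_000}, # equal-sized audiences
)
print(result)  # e.g. {'men': 80000, 'women': 30000} — men see the ad far more often
```

Under these assumptions the allocator exhausts most of the budget on the cheaper (male) audience first, reproducing the skew the researchers observed without any explicit gender targeting.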
*But... you promised us unintended consequences...*
- The solution appeared to be that recruiters should be willing to pay more in order to reach women.
- But, while they were working on this as a potential solution, something else happened:
> What happened in the interim is there’d been, I think, some kind of lawsuit or pressure on Facebook, which meant that you could no longer target ads based on gender specifically to men or women, and so as a result, we couldn’t solve it.
- Now, for that unintended consequence:
> And so, we were almost in the worst possible place where you’d had some, I would say analog era, well-intentioned regulation comes in, say you can’t target on gender. But the moment you do that is the moment that you can’t actually correct the problem. **So now, anytime anyone runs an ad for a job on one of these major digital platforms, because they can’t target by gender, in the end, you’re going to end up with a situation where they’re going to show it to men**. And so that’s just sort of an example of what not to do.
## Colophon
%%
title:: Fixing Algorithms Intuitively and Unintended Consequences
type:: [[output]]
tags:: [[algorithm]]
url::
file::
creator:: Prateek Waghre
%%
created:: 2022-01-18
status:: [[brewing]]