A Game of Discrimination

The trigger

It began with a question in a tweet: “How can people be blind to a systemic behaviour they participate in?” The question has a specific context (US white evangelicals), but I liked it on its own, context-free.

If something is systemic, it may not be evident to any particular participant. Certainly in a psychological sense (repression, cognitive dissonance and all that), but it could also quite literally not be evident. You could discriminate or be discriminated against without knowing it, or conversely you could think you are when you are not. When something is embedded in a system, it can be hard to tell unless you know the system.

That led to another thought: What about a simulation where nodes discriminate based on pre-set properties, A Game of Discrimination?

The rationale

Humans prejudge and discriminate. This is generally considered a bad thing, but it is pretty rational. When you encounter something new, you behave based on previous experience.

This even goes down to the level of perception. The brain doesn’t have the time or energy to continuously refresh your picture. What you see and hear is based on what you have seen and heard earlier (which leaves open the theoretical possibility that there are things the brain will literally never perceive unless tricked into it).

A major problem with this, our brains being chronically lazy, is that we stick to our first impressions, even when they are later shown to be wrong.

However, the psychology of discriminating is a different issue from the one I wanted to explore. What happens system-wide when you have a population of individuals that discriminate, positively or negatively, against other individuals?

The goal of the game

I wanted something dead simple and generic. Human behaviour was the trigger, and how learning algorithms discriminate is a very current concern, as they are used less for pure amusement and more for decisions that affect people.

But the goal of the game was not to simulate either humans or “AI” statistical systems emulating human behaviour, but to explore what happens to the system itself, what emergent behaviours we could find. We will not set out to prove anything, purely to discover.

Someone could adapt this to a mental model of humans (or of human-emulating algorithms), but this simulation will be about one thing only: discrimination. We will have a population of actors that discriminate against each other and are discriminated against, and repeat this for a number of iterations.

The rule set for the static game (version 0)

At a very minimum we need a number of actors/nodes, each with at least one property with at least two values to discriminate on. We also assume that each actor is 100% aware of what property values other actors have, including themselves (they are self-aware). This could be fiddled with in later versions, but for now keep it simple. Given the origin of the idea and its level of abstraction, each actor could have a property colour, with values purple and green.

We also need some action to discriminate with. Since we want this simple, let’s go with money. Assuming for now that the actors can’t go into debt, it’s a non-negative real value.

While all actors could be equally discriminatory, it would be more interesting if they had a range, the range being a parameter of the simulation. That would go from discrimination 1 (always prefer own colour), through 0 (no preference), to -1 (always prefer the other colour). This could later be extended to more than two colours, either by discriminating against all non-self colours indiscriminately or by having a per-colour discrimination array.
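To make this concrete, here is a minimal sketch of such an actor in Python. The names and types are my own choices; nothing in the rules above fixes them:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    colour: str            # the property discriminated on, e.g. "purple" or "green"
    money: float           # non-negative; actors cannot go into debt
    discrimination: float  # 1 = always prefer own colour, 0 = no preference,
                           # -1 = always prefer the other colour
```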

Each round the actors will give away a fraction of their money, let’s say half (another parameter to set), to a number of other actors (also a parameter). To whom they give this money away is a function of their social network and their discrimination value. If discrimination is 0 the money is shared equally; if it is 1 or -1, it is split only between recipients of the preferred colour; otherwise it is shared unequally.
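One way the split could work, building on the sketch above. The rules only fix the endpoints; the linear weighting in between is my assumption:

```python
def share(giver: Actor, recipients: list[Actor], fraction: float = 0.5) -> None:
    """Give away `fraction` of the giver's money, split by discrimination."""
    pot = giver.money * fraction
    # Same-colour recipients get weight (1 + d) / 2, others (1 - d) / 2,
    # so d = 0 splits equally and d = +/-1 gives everything to one colour.
    d = giver.discrimination
    weights = [(1 + d) / 2 if r.colour == giver.colour else (1 - d) / 2
               for r in recipients]
    total = sum(weights)
    if total == 0:  # e.g. d = 1 but no same-colour recipient this round
        return
    giver.money -= pot
    for r, w in zip(recipients, weights):
        r.money += pot * w / total
```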

The social network

In the simplest version the actors live in a pool, and each turn give away to a parameter-set number of random recipients. More realistically, or at least more interestingly, the actors live in a social graph of contacts.

Each turn the actors give to their “friends” and/or as “charity” to random (for the moment) actors in the pool, based on their discrimination value. Each actor could have a fixed or variable number of “friends”. The edges could be undirected (like Facebook friends) or directed (like Twitter following/followers).
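A quick sketch of the directed variant, assuming (my choice) that every actor follows the same fixed number of others:

```python
import random

def random_following(actors: list[Actor], k: int) -> dict[int, list[int]]:
    """Directed "Twitter-style" graph: actor i follows the k actors in graph[i].

    Undirected "Facebook-style" friendship would simply add each edge
    in both directions.
    """
    graph = {}
    for i in range(len(actors)):
        others = [j for j in range(len(actors)) if j != i]
        graph[i] = random.sample(others, k)  # assumes k < len(actors)
    return graph
```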

Jumping out of the game (first fairness predictions)

Even starting out with a pool, no friend graph, everyone starting with the same amount of money, and discrimination set to 0, after a few rounds there will be random fluctuations: some will have more, others less. However, each actor gives away half its own wealth and on average receives half the average wealth back, so in the long run all are equal. Furthermore, a node with less than the average is likely to get more than it gives away; conversely, one with more than the average will also trend towards the average.

Already, we (if not the nodes) start to have a sense of fairness. Not every node is equal (some have more money than others), but all have equal opportunity, and the system self-corrects every node towards the average, faster the further it diverges from it.

If we added another property, average-wealth, updated each round by folding in that round’s money and dividing by the number of rounds played, it would converge to the same value for all nodes: the initial value.
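The incremental update for that running average is the usual one (assuming rounds are counted from 1):

```python
def update_average(avg_wealth: float, rounds_played: int, money: float) -> float:
    """Fold this round's money into the running average over all rounds."""
    return avg_wealth + (money - avg_wealth) / rounds_played
```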

Now, if discrimination diverges from 0, the system may no longer have these fairness properties, and self-correction can be limited. One colour or the other may systematically be better off, and average-wealth would only be self-correcting colour-wise: one colour would consistently have more, but all nodes of the same colour would converge towards the same value.

Now, if we add a social graph, not even this is a given. Some nodes may reside in “nice areas”, having rich friends and enriching their friends, ending up with consistently higher average-wealth than their peers, including their same-colour peers. The rules are still the same for every node in the graph.

Rule set for a dynamic game (version 1)

In version 0 the nodes don’t change, neither behaviour nor social graph.

There are many ways the nodes may learn; let’s begin by assuming that they are aspirational: they want to enrich themselves. The giving rules are the same, they give away half their money each round, but they try to maximise their own incomings. We still assume that their discrimination is innate. They are not aware of it and cannot change it.

Now the edges (friends) are directional, that is, outgoing edges are those a node follows, incoming edges are followers. And we assume that donations are now shared, according to some key, between followed nodes, followers, and the general pool.

Sub-scenario 1: Being aspirational, the nodes will try to have edges to currently richer nodes, in the hope of greater future incomings. Each node has a set number of edges (friends), and can create one new edge to the richest of its friends’ friends (that is, the friend of a friend with the most money). To keep the number of friends constant it then drops the poorest of its current friends.
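A sketch of that rewiring step, reusing the graph structure from the earlier sketch:

```python
def rewire(node: int, graph: dict[int, list[int]], actors: list[Actor]) -> None:
    """Aspirational rewiring: follow the richest friend-of-a-friend,
    drop the poorest current friend, keeping the edge count constant."""
    friends = graph[node]
    candidates = {fof for f in friends for fof in graph[f]
                  if fof != node and fof not in friends}
    if not candidates:
        return
    richest = max(candidates, key=lambda j: actors[j].money)
    poorest = min(friends, key=lambda j: actors[j].money)
    friends.remove(poorest)
    friends.append(richest)
```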

This algorithm is “colour-blind” in theory, as the discrimination property is not consulted. But as money is unequally shared, changes in connections, and thus in the entire graph, will not be random, nor fair, maybe not even self-correcting. There might be sub-graphs where wealth accumulates.

Rule set for a moral game (version 2)

In the previous two versions, nodes may have edges to other nodes, but they don’t have reciprocal relationships. The social graph in version 1 may change aspirationally, nodes adding edges for future rewards, but it is unaffected by past behaviour. What if we again change the rules of the game, so that nodes discriminate not just on the money and discrimination properties, but on previous interactions?

This could add a few more feedback loops. This could be tit-for-tat: if node A has been generous to node B, B might be more inclined to be generous to A. This could even be a “golddigging” or backscratching strategy for A, if B is wealthier.
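One crude way to wire in such a tit-for-tat bonus; the linear form and scale are purely my assumptions:

```python
from collections import defaultdict

# received[(a, b)]: how much node b has given node a over previous rounds
received: dict[tuple[int, int], float] = defaultdict(float)

def reciprocity_bonus(me: int, other: int, scale: float = 1.0) -> float:
    """Extra sharing weight proportional to what `other` gave `me` before.

    This would multiply into the discrimination weights from version 0.
    """
    return 1.0 + scale * received[(me, other)]
```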

And what if discrimination is no longer a fixed property, but mutable, based on the colour property of the nodes previously interacted with? What if the attraction of another node is based not just on its money, or its social graph, but on its deduced discrimination and willingness to share with oneself or one’s own social graph?
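A sketch of what mutable discrimination could look like; the update rule and rate are entirely my assumptions:

```python
def update_discrimination(actor: Actor, donor: Actor, amount: float,
                          rate: float = 0.01) -> None:
    """Generosity from your own colour nudges discrimination up,
    generosity from the other colour nudges it down, clamped to [-1, 1]."""
    direction = 1.0 if donor.colour == actor.colour else -1.0
    actor.discrimination += rate * amount * direction
    actor.discrimination = max(-1.0, min(1.0, actor.discrimination))
```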

Going further, what if the system-wide properties, such as fairness, graph properties, and self-correction, were estimated by each node based on its own known interactions? How would these estimates differ from our global view? Which nodes would over- or underestimate these properties? Could a node be “blind” to systemic behaviour in its own social graph?

What next?

As you have probably guessed, this post is really a “note for later”. I have other things to do at the moment. I’d like to come back to this some other time, but I wanted to write it down so I don’t forget, maybe as a seed for further ideas.

But also, maybe, someone else finds a Game of Discrimination intriguing and has some ideas and proposals of their own. Do you? You read this far, after all. Maybe you know of some existing simulation; do tell. Same if you like this idea but have nothing (at the moment) to add. I don’t know when I will return to this, but it would certainly be sooner if I get any feedback.
