Out on Marginal Revolution, Alex Tabarrok has an excellent post on why “sexism and racism will never diminish”, even when people on the whole become less sexist and racist. The basic idea is that there is always a frontier – even when we all become less sexist or racist, some people will be more sexist or racist than others, and they will get called out as extremists.
To quote a paper that Tabarrok has quoted (I would’ve used a double block-quote for this if WordPress allowed it):
…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.
Elsewhere, Kaiser Fung has a nice post on some of his learnings from a recent conference on Artificial Intelligence that he attended. The entire post is good, and I’ll probably comment on it in detail in my next newsletter, but there is one part that reminded me of Tabarrok’s post – on bias in AI.
Quoting Fung (no, this is not a two-level quote; it’s from his blog post):
Another moment of the day is when one speaker turned to the conference organizer and said “It’s become obvious that we need to have a bias seminar. Have a single day focused on talking about bias in AI.” That was his reaction to yet another question from the audience about “how to eliminate bias from AI”.
As a statistician, I was curious to hear of the earnest belief that bias can be eliminated from AI. Food for thought: let’s say an algorithm is found to use race as a predictor and therefore it is racially biased. On discovering this bias, you remove the race data from the equation. But if you look at the differential impact on racial groups, it will still exhibit bias. That’s because most useful variables – like income, education, occupation, religion, what you do, who you know – are correlated with race.
This is exactly like what Tabarrok mentioned about humans being extremist in whatever way. You take out the most obvious biases, and the next level of biases will stand out. And so on ad infinitum.
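Fung’s point about proxy variables is easy to demonstrate. Here is a minimal sketch with entirely invented numbers: a “race-blind” model that approves loans purely on income will still produce very different approval rates across two groups, simply because income is correlated with group membership.

```python
import random

random.seed(42)

# Synthetic illustration (all numbers invented): two groups whose incomes
# differ on average. The "model" never sees the group label, only income.
def income(group):
    base = 60 if group == "A" else 40   # the group gap is baked into a proxy
    return random.gauss(base, 10)

def model_score(inc):
    # A "race-blind" model: approves purely on income.
    return 1 if inc > 50 else 0

people = [("A" if random.random() < 0.5 else "B") for _ in range(10000)]
approval = {"A": [], "B": []}
for g in people:
    approval[g].append(model_score(income(g)))

rate = {g: sum(v) / len(v) for g, v in approval.items()}
print(rate)  # group A is approved far more often, despite the model never seeing the label
```

Removing the group label from the equation did nothing, because the label leaks in through the correlated variable – exactly the effect Fung describes.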
So once again I’ve taken myself off Twitter and Facebook. After a three-month sabbatical which ended a month back, I was back on these two social networks on a “limited basis” – I had not installed the apps on my phone and would use them exclusively from my computer. But as days went by, I realised I was getting addicted once again, and losing plenty of time just checking if someone had replied to any of the wisecracks I had posted. So I’ve taken myself off once again, this time for at least one month.
This post is about the last of my wisecracks on Facebook before I left it. A Facebook friend had put an update that said “good things do happen to those who wait”. I was in a particularly snarky mood, and decided to call out the fallacy and left the comment below.
In hindsight I’m not sure if it was a great decision – perhaps something good had happened to the poor guy after a really long time, and he had decided to celebrate it by means of putting this cryptic message. And I, in my finite wisdom, had decided to prick his balloon by spouting gyaan. Just before I logged out of Facebook this morning, though, I checked and found that he had liked my comment, though I don’t know what to make of it.
Earlier this year I had met an old friend for dinner, and as we finished and were walking back to the mall parking lot, he asked for my views on religion. I took a while to answer, for I hadn’t given thought to the topic for a while. And then it hit me, and I told him, “once I started appreciating that correlation doesn’t imply causation, it’s very hard for me to believe in religion”. Thinking about it now, a lot of other common practices, which go beyond religion, are tied to mistaking correlation for causation.
Take, for example, the subject of the post. “Good things happen to those who wait”, they say. It is basically intended as encouragement for people who don’t succeed in the first few attempts. What it doesn’t account for is that the failures in the first few attempts might be “random”, or that even success, when it does happen, is the result of a random process.
Say, for example, you are trying to get a head upon the toss of a coin. You expect half a chance of a head the first time. It disappoints. You assume the second time the chances should be better, since it didn’t work out the first time (you don’t realise the events are independent), and are disappointed again. A few more tails and disappointment turns to disillusionment, and you start wondering if the coin is fair at all. Finally, when you get a head, you think it is divine reward for having waited, and say that “good things happen to those who wait”.
In your happiness that you finally got a head, what you assume is that repeated failures on the first few attempts actually pushed up your chance of getting a head, and that this led to your success on the Nth attempt. What you fail to take into account is that there was an equal chance (assuming a fair coin) of getting a tail on the Nth attempt as well (which you would have brushed off, since you were used to it).
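The independence of coin tosses is easy to verify by simulation. This quick sketch looks only at tosses that immediately follow a run of five straight tails, and counts how often such a toss comes up heads – if past failures really “pushed up” your chances, the number should be well above half:

```python
import random

random.seed(0)

# A quick check that a fair coin has no memory: look at every toss that
# immediately follows a run of at least five tails, and count how often
# that toss is a head.
def toss():
    return random.random() < 0.5   # True = head

heads_after_streak = 0
streak_followers = 0
tails_run = 0
for _ in range(200000):
    if toss():
        if tails_run >= 5:          # this head followed five straight tails
            streak_followers += 1
            heads_after_streak += 1
        tails_run = 0
    else:
        if tails_run >= 5:          # this tail also followed five straight tails
            streak_followers += 1
        tails_run += 1

p = heads_after_streak / streak_followers
print(p)  # stays close to 0.5: the streak of tails changes nothing
```

The waiting bought you nothing – the coin simply doesn’t remember.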
In my comment above I’ve said “selection bias” but I’m not sure if that’s the right terminology – essentially when things go the way you want them to, you take notice and ascribe credit, but when things don’t go the way you want you don’t notice.
How many times have you heard people going through a happy experience say they’re going through it “by God’s grace”? How many times have you heard people curse God for not listening to their prayers when they’re going through a bad patch? Hardly ever. Instead, how many times have you heard people tell you that God is “testing them” when they’re going through a bad patch?
It’s the same concept of letting your priors (you see God as a good guy who will never harm you) affect the way you see a certain event. So in my friend’s case above, after a few “tails” he had convinced himself that “good things do happen to those who wait” and was waiting for a few more coin tosses until he finally sprang a head and announced it to the world!
Now I remember: I think it’s called confirmation bias.
Recently, my colleague Pavan Srinath put out a post on testing whether someone “defers to scientific reason above and beyond ideologies”. In his post, he made three statements, which he said are all strongly backed by scientific evidence:
The core argument in climate change is that the earth’s surface warmed significantly in the 20th century due to human-linked emissions of greenhouse gases.
The argument with nuclear safety is that health risks from nuclear power generation, both chronic and acute, have been grossly exaggerated and that due to an obsession with nuclear safety for the past 6 decades, nuclear power is now safer than most other sources of energy.
The argument with genetically modified crops is that they are just as safe as other crops, both for growing and for consumption. Additionally, crop modification through targeted molecular biology techniques is in fact less genetically invasive than conventional hybridisation techniques.
All three arguments have overwhelming scientific evidence on their side, and the nature of the scientific debate is very different from the public and political discussions regarding the same.
If you were not ideologically biased and if you were scientifically aware, he said, you would be extremely likely to agree with all the three above statements. If, however, you were biased to the “left” you were likely to agree with the first statement but not with the last two. If biased to the “right”, you were likely to agree with the latter two and not with the first.
We decided to test these beliefs by putting out a survey. The survey had exactly three questions – the above three statements that Pavan mentioned in his blog, and the respondent was supposed to agree or disagree with these statements on a five-point Likert scale. The “sample” on which the survey was administered was biased – most respondents we believe were connected on Facebook or Twitter (the two avenues we used to publicize the survey) to someone in the Takshashila community. It is very likely that most of the respondents were educated urban upper-middle-class Indians (this is a guess; we didn’t ask for these data points in the survey itself).
142 people responded to the survey. Most of these responses came within a day of our putting out the survey. Here are the results of the survey:
Firstly, we will look at the individual responses to each of the three questions:
This shows that opinion in favour of global warming is fairly strong.
While a majority of the people believe that health risks from nuclear power have been exaggerated, the opinion is not as overwhelming as it is on the global warming front. There still exist a significant number of doubters of the safety of nuclear energy.
When it comes to GM crops, however, public opinion is largely divided. Roughly as many people agree that GM crops are safe as believe they are unsafe.
Next, we will look at interactions. The next three graphs here show bilateral “heatmaps” of responses to the three questions. The greater the redness of a particular cell in this map, the greater the number of respondents who fall in that cell.
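Each of these heatmaps is, underneath, just a two-way frequency table: one cell per pair of answers on the five-point scale. A minimal sketch of how such a table is tallied (the responses below are invented; the real data is linked at the end of the post):

```python
from collections import Counter

# One cell per (answer to question 1, answer to question 2) pair
# on the five-point Likert scale.
SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

# Hypothetical paired responses: (global warming, nuclear risks exaggerated)
responses = [
    ("Strongly agree", "Agree"),
    ("Strongly agree", "Strongly agree"),
    ("Agree", "Agree"),
    ("Agree", "Neutral"),
    ("Neutral", "Disagree"),
]

cells = Counter(responses)
for row in SCALE:
    print([cells[(row, col)] for col in SCALE])
```

Colouring each cell by its count gives exactly the kind of heatmap shown here.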
There seems to be a positive correlation between these two beliefs that are towards different ends of the political spectrum. Among people who agree that global warming exists, more people believe that nuclear power risks are exaggerated than otherwise.
An interesting thing here is that extreme views on one issue are correlated with extreme views on another. Note that people who strongly agree on global warming are more likely to strongly agree or disagree with GM Food safety, while those who merely “agree” with global warming are more likely to simply “agree” or “disagree” with GM Food safety. Also notice the large mass of people who strongly agree with global warming but are neutral about the safety of GM Foods. This indicates that there isn’t as much debate and discourse on the safety of GM foods as there should be.
These are the two “right wing issues”. Notice that the top left and bottom right areas are almost empty. People who agree with one of these are more likely to agree with the other.
So how many of our respondents can be classified as being “scientifically aware” based on Pavan’s metrics? Given that Pavan states that someone who is scientifically aware should agree with all three statements, we will consider someone who has “agreed” or “strongly agreed” with all three statements as being “scientifically aware”. This number comes out to 27 out of 142 respondents or about 19%.
How many of our respondents are scientifically unaware? For this we will look at people who either disagree or strongly disagree with each of the above statements. There are only 3 people among those we surveyed who can be thus classified (and one of them has given his/her name as “Troll” so we may not take that seriously).
Then, again going back to Pavan’s definitions, how many left-wingers do we have? For this we will consider people who agree with the statement on global warming and disagree with the other two. There are 17 such people. What about right-wingers? These are people who disagree with the statement on global warming but agree with the other two. There are 7 of them. There are 63 respondents who have said that they are neutral on at least one of the three questions.
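The classification rules above reduce to a few set-membership checks. Here is a sketch run on made-up responses (the real 142-row dataset is linked below); each respondent is a tuple of answers to (global warming, nuclear, GM crops) on a 1–5 scale, where 4 = agree and 5 = strongly agree:

```python
# Classification rules as described in the post, on a 1-5 Likert scale.
AGREE, DISAGREE = {4, 5}, {1, 2}

def classify(gw, nuke, gm):
    if gw in AGREE and nuke in AGREE and gm in AGREE:
        return "scientifically aware"
    if gw in DISAGREE and nuke in DISAGREE and gm in DISAGREE:
        return "scientifically unaware"
    if gw in AGREE and nuke in DISAGREE and gm in DISAGREE:
        return "left-wing"
    if gw in DISAGREE and nuke in AGREE and gm in AGREE:
        return "right-wing"
    return "other"

# Hypothetical respondents, one per category plus a fence-sitter:
sample = [(5, 4, 4), (4, 2, 1), (1, 5, 4), (3, 3, 3)]
print([classify(*r) for r in sample])
# → ['scientifically aware', 'left-wing', 'right-wing', 'other']
```

Anyone who is neutral on even one question falls into “other”, which is why the neutral bucket (63 respondents) dwarfs the ideological ones.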
There is so much more one can do with these responses. I have anonymized the responses and put up the data here for your benefit. You are free to analyze it and draw your own conclusions. However, I would encourage you to share your conclusions with the larger community by leaving a comment on this post.
The “patient” has an incentive to overestimate the extent of his illness, since he can “get away” with certain things by claiming to be more sick than he is
People around the patient have an incentive to underestimate the extent of illness. They think the person is claiming illness only to extract sympathy and get away with things that would be otherwise not permissible
The second point here leads to internal conflict in the patient, as he can’t express himself fully (since others tend to underestimate). Feelings of self-doubt begin to creep in, and only make the problem worse
There are no laboratory tests in order to detect most kinds of “mental illness”. Diagnosis is “clinical” (e.g. if 8 out of the following 10 checkboxes are ticked, the patient suffers from XYZ). This leads to errors in diagnosis
The method of diagnosis also leads a lot of people to believe that psychiatry is unscientific, and some reduce it to quackery. So there is little the medical profession can do to help either the patient or people around him
That diagnosis is subjective means patients have incentive to claim they’re under-diagnosed and people around are incentivized to say they’re over-diagnosed
I don’t think the effects of a lot of medicines for mental illness have been studied very rigorously. There are various side effects (some cause you to sleep more, others cause you to sleep less; some cause impotence, others increase your mojo; and so on), and these medicines are slow to act, making it tough to figure out their efficacy.
There is a sort of stigma associated with admitting to mental illness. Even if one were to “come out” to people close to him/her, those people might dissuade the patient from “coming out” to a larger section of people
If you were to be brave and admit to mental illness, people are likely to regard you as a loser, and someone who gives up too soon. That’s the last thing you need! And again, the underestimate-overestimate bias kicks in.
Some recent studies, though, show a positive correlation between mental illness and leadership and being able to see the big picture. So there is some hope, at least.
Why does the government require colleges in India to have “objective criteria” for admissions? I understand that such criteria are necessary for government-owned or run or aided colleges where there’s scope for rent seeking. But why is it that “private” colleges are also forced to adopt “objective criteria” such as board exam marks or entrance test scores for admission?
Abroad, and here too for MBA admissions, admission is more “subjective”. While of course this has the scope to introduce bias, and is a fairly random process (though I’d argue that the JEE is also a fairly random process), won’t it reduce pressure on the overall student population, and bring in more diversity into colleges?
As a natural experiment I want to see a few state governments deregulating the admissions process for private colleges, making it possible for the colleges to choose their students based on whatever criteria they like. So what would happen? Of course, some seats would be “reserved” for those with big moneybags. Some more would be reserved for people who are well connected with the college management. But would it be rational for the college boards to “reserve” all the seats this way?
Maybe some colleges would take a short-term view and try to thus “cash out”. The cleverer ones will realize that they need to build up a reputation. So while some seats will be thus “reserved”, others will be used to attract what the college thinks are “good students”. Some might define “good students” to be those that got high marks in board exams. Others might pick students based on how far they can throw a cricket ball. The colleges have a wide variety of ways in which to make a name, and they’ll pick students accordingly.
The problem with such a measure is that there is a transient cost. A few batches of students might get screwed, since they wouldn’t have figured out the reputations of colleges (or maybe not – assuming colleges don’t change drastically from the way they are right now). But in a few years’ time, reputations of various sorts would have been built. Colleges would have figured out various business models. The willingness to pay of the collective population would ensure that reasonably priced “seats” are available.
And remember that I mentioned that a few states should implement this, with the others sticking to the current system of regulating admissions and fees and all such. In due course of time it’ll be known what works better. Rather, it’ll be known what the students prefer.
It’s crazy that colleges now require students to get “cent per cent” in their board exams as a prerequisite to admission. It’s crazy that hundreds of thousands of students all over India, every year, spend two years of their prime youth just preparing to get into a good college (nowadays the madness is spreading. A cousin-in-law is in 9th standard, and he’s already joined JEE coaching). On reflection, it’s crazy that I wasted all of my 12th standard simply mugging, for an exam that would admit me to a college that I knew little about. Madness, sheer madness.
I guess from my posts on religion you people know that I’m not the religious types. I don’t believe in rituals. I don’t believe that saying your prayers daily, or hourly, or monthly has any kind of impact on the orientation of the dice that life rolls out to you.
I believe in randomness. I believe that in every process there is a predictive component and a random component, and that you have no control over the latter. I believe that life can be approximated as a series of toin cosses, er, coin tosses, and sometimes the coins fall your way, and sometimes they don’t.
I was brought up in a strange household, in religious terms that is. My mother was crazily religious, spending an hour every day saying her prayers, and performing every conceivable ritual. My father was, for all practical purposes, atheist, and I never once saw him inside the prayer room in the house. I don’t ever remember having to make a conscious choice though, but I somehow ended up becoming like my father. Not believing in prayers or rituals (except for a brief period during my sophomore year at college), not believing that any actions of mine could bias the coin tosses of life.
A couple of years back I bought and read Richard Dawkins’ The God Delusion. I found the book extremely boring and hard to get through. And it really shocked me to read that people actually believe that praying can change the bias of the coins of life. Or that there exist people (most of America, shockingly) who think there was a “God” who created the earth, and that evolution doesn’t make sense.
Anyway, the point now is that the missus thinks that I’m atheist because it’s the convenient thing to be, and because I haven’t made that extra effort in “finding God”. She thinks I’m not religious because I’m too lazy to say my prayers, and light incense, and all such. The irony here is that she herself isn’t the ritual types, instead choosing to introspect in quiet temples.
Just want to mention that you might find me writing a lot more about religion over the next few days, or weeks, or months, as I try to find my bearings and convince myself, and the missus, of my beliefs.
For starters, I’d say that if there exists a god, he does play dice.
So it is around the time when I’m taking part in religious ceremonies that I question my religion, or lack of it. That’s when I need to interact with priests regularly, and sometimes talking to them is frightening. What is most frightening is their level of belief in certain things that I find absurd.
Every major religion is founded on a basic set of axioms. These axioms are designed in a way that they cannot be disproved scientifically.
Sure, there is no way to prove these axioms either, but then given that religion is the “defending champion” it has fallen upon the atheist to disprove the religious axioms. But the way these axioms are stated makes it extremely hard to disprove them. The best that most rational people can do is to call the axioms “absurd” and leave it at that, but that does nothing to convert people on the fence.
For example, take this concept of rebirth and reincarnation which forms the basis of a lot of Hindu thought. I find it absurd, and there is no scientific way to prove it (especially since the “universe” of possibilities is so large – you could be reborn as any species). But there is no scientific way to disprove it either, which is what gives the proponents of this axiom more mileage.
The other thing I observe is that the easiest way to propagate religious thoughts is to create a sense of fear. Stuff like “say your prayers daily else god will punish you”. And then there are some selective examples (with heavy bias in selection) given of people who didn’t make the right religious noises and hence had to suffer. When faced with all this, the young child has no option but to comply with what the religious elders are telling him.
Then I realize that the way you are “taught” religion is extremely absurd. Growing up, you are simply taught a set of processes that you need to go through, without ever going into the significance of any of them. Even the axioms that form the basis of the religion are not exactly taught. In some cases, even the parents would have simply “mugged up the religious practices” and are in no position to answer when kids ask them questions about these practices.
For example, when I read Dawkins’s book a couple of years back, I was shocked that there are people that actually believe that there was some “god” who created the universe. I’d always taken evolution as a given. Similarly while talking to priests yesterday (my mother’s first year death anniversary ceremonies are going on) I was shocked to find they actually believe in rebirth, and life after death. Of course, I do believe in Live After Death and think it’s an awesome album.
I just hope I’ll be able to inculcate a sense of questioning and rational reasoning in my kids, and help them protect themselves from blind faith.