Relationship Stimulus

This post doesn’t necessarily restrict its scope to romantic relationships, though I will probably use an example like that in order to illustrate the concept. The concept I’m going to talk about applies to any kind of bilateral relationship, be it romantic or non-romantic, between any two people, between man and beast, or between two nations.

Let us suppose Alice’s liking for Bob is a continuous variable between 0 and 1. However, Alice never directly states to Bob how much she likes him. Instead, Bob will have to infer this based on Alice’s actions. Based on a current state of the relationship (also defined as a continuous variable between 0 and 1) and on Alice’s latest action, Bob infers how much Alice likes him. There are a variety of reasons why Bob might want to use this information, but let us not go into that now. I’m sure you can come up with quite a few yourself.

Now, my hypothesis is that the relationship state (which takes into account all past information regarding Alice’s and Bob’s actions towards each other) can be modelled as an exponentially smoothed version of the time series of Alice’s historical liking for Bob. To restate in English, consider the last few occasions when Alice and Bob have interacted, and consider the data on how much Alice actually liked Bob during each of these rounds. What I am saying is that the “current state” I defined in the earlier paragraph can be estimated using this data on how much Alice liked Bob in the last few interactions. By exponentially smoothed, I mean that the last interaction has greater weight than the one prior to that, which in turn has more weight than the interaction three steps back, and so on.
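To make the smoothing concrete, here is a minimal sketch in Python. The smoothing factor, the starting state and the sample liking values are all assumptions pulled out of thin air for illustration; the model itself only says that recent interactions get geometrically more weight than older ones.

```python
def relationship_state(likings, alpha=0.3, initial_state=0.5):
    """Exponentially smooth Alice's historical liking for Bob.

    likings: how much Alice actually liked Bob in each past interaction,
             oldest first, each value between 0 and 1.
    alpha:   smoothing factor (assumed); higher alpha means the latest
             interaction dominates, lower alpha means history matters more.
    """
    state = initial_state
    for liking in likings:
        state = alpha * liking + (1 - alpha) * state
    return state

# A few good interactions followed by a rough patch drag the state down.
print(relationship_state([0.8, 0.9, 0.7, 0.3, 0.2]))  # roughly 0.46
```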

So essentially Alice’s liking for Bob cannot be determined by her latest action alone. You use the latest action in conjunction with her last few actions in order to determine how much she likes Bob. If you think of inter-personal romantic relationships, I suppose you can appreciate this better.

Now that you’ve taken a moment to think about how my above hypothesis works in the context of human romantic relationships, and having convinced yourself that this is the right model, we can move on. To put it simply, the same action by Alice towards Bob can indicate several different things about how much she now likes him. For example, Alice putting her arm around Bob’s waist when they hardly knew each other meant a completely different thing from her putting her arm around his waist now that they have been married for six months. I suppose you get the drift.

So what I’m trying to imply here is that if you are going through a rough patch, you will need to try harder and send stronger signals. When the last few interactions haven’t gone well, the “state function of the relationship” (defined a few paragraphs above) will be at a generally low level, and the other party will have a tendency to under-guess your liking for them based on your latest actions. What might normally be seen as a statement of immense love might be seen as an apology of an apology when things aren’t so good.
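To see this in the smoothing model, here is a hypothetical back-of-the-envelope calculation. The smoothing factor of 0.3 and the gesture being “worth” 0.9 are made-up numbers; the point is only that the identical gesture is read very differently depending on the state it lands on.

```python
alpha = 0.3
grand_gesture = 0.9            # the same statement of immense love, both times

good_patch_state = 0.8
rough_patch_state = 0.2

# The other party's updated read of the relationship after the gesture:
print(alpha * grand_gesture + (1 - alpha) * good_patch_state)   # 0.83
print(alpha * grand_gesture + (1 - alpha) * rough_patch_state)  # 0.41
```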

It is just like an economy in depression. If the government sits back claiming business-as-usual, it is likely that the economy might just get worse. What the economy needs in times of depression is a strong Keynesian stimulus. It is similar with bilateral relationships. When the value function is low, and the relationship is effectively going through a depression, you need to give it a strong stimulus. When Alice and Bob’s state function is low, Alice will have to do something really, really extraordinary for Bob in order to send out a message that she really likes him.

And just one round of Keynesian stimulus is unlikely to save the economy. There is a danger that, given the low state function, the economy might fall back into depression. Similarly, when you are trying to get a relationship out of a “depressed” state, you will need to do something awesome in the next few rounds of interaction in order to make an impact. If you, like Little Bo Peep, decide that “leave ’em alone, they will come home”, you are in danger of becoming like Japan in the 90s, when absolute stagnation happened.
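Continuing with the same made-up numbers, one way to picture the stimulus argument is to iterate the smoothing update: a single strong gesture barely lifts a depressed state, and it takes several consecutive ones before the state recovers.

```python
alpha, strong_gesture = 0.3, 0.9
state = 0.2                        # a relationship in "depression"
for round_number in range(1, 5):
    state = alpha * strong_gesture + (1 - alpha) * state
    print(round_number, round(state, 2))
# 1 0.41
# 2 0.56
# 3 0.66
# 4 0.73  -- recovery takes several consecutive rounds of strong signals
```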

Arranged Scissors 13 – Pruning

Q: How do you carve an elephant?
A: Take a large stone and remove from it all that doesn’t look like an elephant

– Ancient Indian proverb, as told to us by Prof C Pandu Rangan during the Design of Algorithms course

As I had explained in a post a long time ago, this whole business of louvvu and marriage and all such follows a “Monte Carlo approach”. When you ask yourself the question “Do I want a long-term gene-propagating relationship with her?”, the answer is one of “No” or “Maybe”. Irrespective of how decisive you are, or how perceptive you are, it is impossible for you to answer that question with a “Yes” with 100% confidence.

Now, in Computer Science, the way this is tackled is by running the algorithm a large number of times. If you run the algo several times, and the answer is “Maybe” in each iteration, then you can put an upper bound on the probability that the answer is “No”. And with high confidence (though not 100%) you can say “Probably yes”. This is reflected in louvvu also – you meet several times, implicitly evaluate each other on several counts, and keep asking yourselves this question. And when both of you have asked yourselves this question enough times, and both have gotten consistent maybes, you go ahead and marry (of course, there is the measurement aspect also that is involved).
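For those who like it spelt out, here is a rough sketch of the Monte Carlo analogy in Python. The detection probability and the number of meetings are assumptions made up for illustration; the essential feature is that a single iteration can only ever return “No” or “Maybe”, never a definite “Yes”.

```python
import random

def one_meeting(truly_incompatible, p_detect=0.5):
    """One iteration: it can answer "No" or "Maybe", but never a definite "Yes"."""
    if truly_incompatible and random.random() < p_detect:
        return "No"
    return "Maybe"

def decide(truly_incompatible, n_meetings=5):
    """Repeat the iteration; one "No" ends it, consistent maybes give a probable yes."""
    for _ in range(n_meetings):
        if one_meeting(truly_incompatible) == "No":
            return "No"
    return "Probably yes"      # high confidence, but never 100%

print(decide(truly_incompatible=True))    # usually "No" within a few meetings
print(decide(truly_incompatible=False))   # always "Probably yes"
```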

Now, the deal with the arranged marriage market is that you aren’t allowed to have too many meetings. In fact, in the traditional model, the “darshan” lasts only for some 10-15 mins. In extreme cases it’s just a photo, but let’s leave that out of the analysis. In modern times, people have been pushing to get more time, and to get more opportunities to run iterations of the algo. Even then, the number of iterations you are allowed is bounded, which puts an upper bound on the confidence with which you can say yes, and also gives fewer opportunities for “noes”.

Management is about finding a creative solution to a system of contradictory constraints
– Prof Ramnath Narayanswamy, IIMB

So one way to deal with this situation I’ve described is by what can be approximately called “pruning”. In each meeting, you will need to maximize the opportunity of detecting a “no”. Suppose that in a normal “louvvu date”, the probability of a “no” is 50% (random number pulled out of thin air). What you will need to do in order to maximize information out of an “arranged date” (yes, that concept exists now) is to raise this probability of a “no” to a higher number, say 60% (again pulled out of thin air).

If you can design your interaction so as to increase the probability of detecting a no, then you will be able to extract more information out of a limited number of meetings. When the a priori rejection rate per date is 50%, you will need at least 5 meetings with consistent “maybes” in order to say “yes” with a confidence of over 95% (I’m too lazy to explain the math here), and this is assuming that the information you gather in one particular iteration is independent of all information gathered in previous iterations.

(In fact, considering that the amount of incremental information gathered in each subsequent iteration is a decreasing function, the actual number of meetings required is much more)

Now, if you raise the a priori probability of rejection in one particular iteration to 60%, then you will need only 4 independent iterations in order to say “yes” with a confidence of over 95% (and this again is by assuming independence).
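The lazy math, worked out under the same independence assumption and with the post’s illustrative detection rates: if each meeting exposes a true “no” with probability p, then n consistent “maybes” leave at most (1 - p)^n chance that the answer was really “no”.

```python
def prob_false_yes(p_detect, n_meetings):
    # chance that every meeting said "Maybe" even though the real answer was "No"
    return (1 - p_detect) ** n_meetings

print(prob_false_yes(0.5, 5))   # 0.03125 -> a bit under 97% confidence
print(prob_false_yes(0.6, 4))   # 0.0256  -> over 95% confidence with one fewer meeting
```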

Ignore all the numbers I’ve put, none of them make sense. I’ve only given them to illustrate my point. The basic idea is that in an “arranged date”, you will need to design the interaction in order to “prune” as much as possible in one particular iteration. Yes, this same thing can be argued for normal louvvu also, but there I suppose the pleasure in the process compensates for larger number of iterations, and there is no external party putting constraints.

assignments and studs and fighters and algorithms

For some reason, today I happened to look back at some of my old IIT textbooks, and happened to see this book by Cormen, Leiserson and Rivest on algorithms. I was reminded of the algorithms course at IITM. The prof had just finished his term as HoD, and, much relieved as he was, he put a lot of enthu into the first half of the course. Every class would start with a “thought for the day” which would be related to what we were going to do. Then, the classes were extremely well structured and there was a regular assignment schedule also.

We were divided into groups of four, and each assignment used to have a “part A” and a “part B”, which carried equal weightage. The former was common to all groups, and would have fairly straightforward stuff – minor variants of what we discussed in class, etc. There would be several problems, and it was frankly a bit of a pain.

Part B gave a separate problem for each group, and this would usually be non-straightforward and require some bit of thinking. There was a good chance that the group never solved it at all, while on the other hand, at times it would take hardly any time at all.

Looking back, I think in more than half the assignments, I ended up doing part B. The problems used to be fairly interesting, and I’d somehow end up solving them before the group even met to discuss the work. And given that all that was required to solve the problem was a moment of inspiration, the process of solving them was, in hindsight, interesting.

One problem was solved when I had taken one hostelmate’s new Bajaj Eliminator for a test ride. Another got solved when I was playing table tennis. Yet another while I was perched on the parapet reading the newspaper.

I also remember this particular incident. In the first assignment, we managed to find a fairly simple and intuitive solution to our part B problem. Now, two guys in my group were topper-types and fighters, and writing a simple and intuitive proof was against their ethos. They said that it would mean that we hadn’t shown much effort, and might result in our getting lesser marks. They finally put in enough effort to convert the four-line proof into some kind of formal mathematical notation which took four pages. I don’t know if anyone bothered to read that.

Computer Science and Economics

Left-wing economics is idealistic, and the basic assumption is that everyone is a good guy, and he will work in the best interests of the system.

On the other hand, the basic assumption behind right-wing economics is that everyone is inherently a bad guy, and will work only for his own benefit. Hence, systems have to be devised so as to align a person’s selfish interests with the system’s interests.

To put it in other words, left-wing economics assumes best-case performance, or marginally below best-case performance, from all players in the system; it does what can be called a best-case design. Similarly, in assuming that everyone has only a selfish motive, right-wing economics does what can be called a worst-case design.

Training in Computer Science inherently teaches you to think about the worst-case possibilities in everything.

In this context, isn’t it surprising that so many people from a computer science background are leftist?