Programming back to the 1970s

I learnt to write computer code circa 1998, at a time when resources were plentiful. I had a computer of my own – an assembled desktop with a 386 processor and RAM measured in megabytes. It wasn’t particularly powerful, but it was more than adequate for the programs I was trying to write.

I wasn’t trying to process large amounts of data. Even when the algorithms were complex, they weren’t that complex. Most code ran in a matter of minutes, which meant that I didn’t need to bother about getting the code right the first time round – except in examinations. I could iterate and slowly get things right.

This was markedly different from how people programmed back in the 1970s, when computing resources were scarce and code was mostly written out on paper. Time had to be booked at computer terminals, where the code would be copied onto the machine and then run. The time it took for the code to run meant that you had to get it right the first time round. Any mistake meant standing in line at the terminal again, and yet more time waiting for the code to run.

The problem was particularly dire in the USSR, where the planned economy meant that shortages of computing resources were even more acute. This has been cited as a reason why Russian programmers who migrated to the US were prized – they had practice writing code that worked the first time.

Anyway, the point of this post is that coding became progressively easier through the second half of the 20th century, when Moore’s Law was in operation, and computers became faster, smaller and significantly more abundant.

This process continues – computers keep getting better and more abundant – smartphones are nothing but computers. On the other hand, as storage has become cheap and data capture has become easier, data sets are significantly larger now than they were a decade or two ago.

So if you are trying to write code that uses a large amount of data, each run can take a significant amount of time. When the data size reaches big data proportions (when it can’t all be processed on a single computer), the problem becomes even more complex.

So every time you want to run a piece of code, however simple it is, execution takes a long time. This has made bugs much more expensive again – the time programs take to run means you lose a lot of time debugging and rewriting your code.

It’s like being in the 1970s all over again!

Simulating segregation

Back in the 1970s, economist Thomas Schelling proposed a model to explain why cities are segregated. Individual people choosing to live near others like themselves would have the macroscopic effect of segregating the city, he explained.

Think of the city as being organised in terms of a grid. Each person has 8 neighbours (including the diagonals). If a person has fewer than 3 neighbours who are like himself (whether that is race, religion, caste or football fandom doesn’t matter), he decides to relocate, and moves to an arbitrary empty spot where at least 3 of the new neighbours are like himself. Repeat this a sufficient number of times and the city will be segregated, he said.
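The rule above is simple enough to sketch in a few lines of Python. This is a bare-bones illustrative version (separate from the full code linked below), with the grid size, agent counts and happiness threshold as adjustable parameters:

```python
import random

SIZE = 50        # 50 by 50 grid
N_EACH = 900     # people of each of the two types
THRESHOLD = 3    # minimum number of like neighbours needed to stay put

def make_grid():
    # 0 = empty, 1 and 2 are the two types of people
    cells = [1] * N_EACH + [2] * N_EACH + [0] * (SIZE * SIZE - 2 * N_EACH)
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def like_neighbours(grid, r, c, kind):
    # count the up-to-8 neighbours (diagonals included) of the same kind
    count = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] == kind:
                count += 1
    return count

def step(grid, empties):
    # one sweep: every unhappy person moves to a random empty cell
    # that would give them at least THRESHOLD like neighbours, if one exists
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            kind = grid[r][c]
            if kind == 0 or like_neighbours(grid, r, c, kind) >= THRESHOLD:
                continue
            random.shuffle(empties)
            for i, (er, ec) in enumerate(empties):
                if like_neighbours(grid, er, ec, kind) >= THRESHOLD:
                    grid[r][c] = 0
                    grid[er][ec] = kind
                    empties[i] = (r, c)   # the vacated cell is now empty
                    moved += 1
                    break
    return moved

random.seed(0)
grid = make_grid()
empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
for _ in range(30):
    if step(grid, empties) == 0:   # stop once everyone who can move has
        break
```

Note that an unhappy person with no qualifying empty cell simply stays put, and that each sweep conserves the number of people of each type – only positions change.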

Rediscovering this concept while reading this wonderful book on Networks, Crowds and Markets yesterday, I decided to code it up on a whim. It’s nothing that’s not been done before – all you need to do is search around and you’ll find plenty of code for such simulations. I just decided to code it myself from first principles as a challenge.

You can find the (rather badly written) code here. Here is some sample output:

Sample output

As you can see, people belong to two types – red and blue. Initially they start out randomly distributed (white spaces show empty areas). Then people start moving based on Schelling’s rule – if you have fewer than 3 neighbours of your own kind, you move to a new empty place (if one is available) that is friendlier to you. Over time, you get a segregated city, with large-ish patches of reds and blues.

The interesting thing to note is that there is no “complete segregation” – there is no one large red patch and one large blue patch. Secondly, segregation seems rather slow at first, but soon picks up pace. You might also notice that the white spaces expand over time.

This is for one specific input, where there are 2500 cells (50 by 50 grid), and we start off with 900 red and 900 blue people (meaning 700 cells are empty). If you change these numbers, the pattern of segregation changes. When there are too few empty cells, for example, the city remains mixed – people unhappy with their neighbourhood have nowhere to go. When there are too many empty cells, you’ll see that the city contracts. And so forth.

Play around with the code (I admit I haven’t written sufficient documentation), and you can figure out some more interesting patterns by yourself!

Making coding cool again

I learnt to code back in 1998. My aunt taught me the basics of C++, and I was fascinated by all that I could make my bad old 386 computer do. Soon enough I was solving complex math problems, and using special ASCII characters to create interesting patterns on screen. It wasn’t long before I wrote the code for two players sitting at the same machine to play Pong. And that made me a star.

I was in a rather stud class back then (the school I went to in class XI had a reputation for attracting toppers), and after a while I think I had used my coding skills to build a reasonable reputation. In other words, coding was cool. And all the other kids also looked up to coding as a special skill.

Somewhere down the line, though I don’t remember exactly when, coding became uncool. Despite graduating with a degree in Computer Science from IIT Madras, I didn’t want a “coding job”. I was offered one, but didn’t want to take it, so I wrote some MBA entrance exams and made my escape that way.

By the time I graduated from my MBA, coding had become even more uncool. If you were in a job that required you to code, it was an indication that you were in the lowest rung, and thus not in a “management job”. Perhaps even worse, if your job required you to code, you were probably in an “IT job”, something that was considered a “dead end” back then and thus not a very preferred job. Thus, even if you coded in your job, you tended to downplay it. You didn’t want your peers to think you were in either a “bottom rung” job or an “IT job”. So I wrote fairly studmax code (mostly using VB on Excel) but didn’t particularly talk about it when I met my MBA friends. As I moved jobs (they became progressively studder) my coding actually increased, but I continued to downplay the coding bit.

And I don’t think it’s just me. Thanks to the reasons explained above, coding is considered uncool among most MBA graduates. Even most engineering graduates from good colleges don’t find coding cool, for that is the job that their peers in big-name big-size dead-end-job software services companies do. And if people consider coding uncool, it has a dampening impact on the quality of talent that goes into jobs that involve coding. And that means code becomes less smart. And so forth.

So the question is how we can make coding cool again. I’m not saying it’s totally uncool. There are plenty of talented people who want to code, and who think it’s cool. The problem, though, is that the marginal potential coder is not taking to coding because he thinks it isn’t cool enough. Making coding cool again would draw a greater number of people to this vocation!

Any ideas?