Chat GPT and Information Content

For the last week or so I’ve been fooling around with Chat GPT. It’s been a lot of fun, if you can endure the verbosity of its answers. Yesterday, at work, I used it for tech support when some people I was talking to were using too much jargon.

Before I came home from work yesterday, my last conversation was with a couple of recruiters, about a role that I’m hiring for. So when I got home, on a whim, I decided to check if Chat GPT could write job descriptions for me.

I asked:

write a job description for a senior data scientist at India’s largest logistics company. 4-7 years exp. Needs to have experience in ML, OR and statistics, and be able to work close to business and on ambiguous problems

It answered (pasting in full since the screenshot above is limited):

Job Title: Senior Data Scientist

Company: India’s Largest Logistics Company

Location: India

Experience: 4-7 years

We are seeking an experienced and highly skilled Senior Data Scientist to join our team at India’s largest logistics company. In this role, you will be responsible for using machine learning, operations research, and statistics to drive business insights and decision making.

As a Senior Data Scientist, you will have a deep understanding of data science techniques and technologies, and will be able to apply them to complex and ambiguous problems. You will be able to work closely with business teams to understand their needs and to develop solutions that drive value for the company.

To be successful in this role, you should have at least 4-7 years of experience in data science, and should have a strong background in machine learning, operations research, and statistics. You should be able to work independently and be comfortable working on complex and ambiguous problems.

Responsibilities:

  • Use machine learning, operations research, and statistics to drive business insights and decision making
  • Work closely with business teams to understand their needs and develop solutions
  • Develop and maintain data pipelines and systems
  • Collaborate with other data scientists and cross-functional teams to develop and implement solutions

Qualifications:

  • Bachelor’s or Master’s degree in a quantitative field, such as computer science, statistics, or engineering
  • At least 4-7 years of experience in data science
  • Strong background in machine learning, operations research, and statistics
  • Ability to work independently and on complex and ambiguous problems
  • Excellent communication and collaboration skills
  • Experience with Python, R, or other programming languages is a plus

We offer a competitive salary and benefits package, as well as opportunities for growth and advancement. If you are an experienced data scientist with a passion for driving business insights and decision making, we encourage you to apply.

In the past when I’ve been hiring, I’ve been a bit annoyed by people asking for job descriptions. I’ve largely found them to be fluff and boilerplate, and I’ve tried to get away with a sentence or two. However, due to the insistence of recruiters and candidates alike, I’ve ended up writing elaborate job descriptions. I don’t need to any more.

This post, however, is not about hiring (but yeah – I am hiring, and the above job description is reasonably accurate). It’s about information content in general.

Traditionally, information content has been measured in bits. A bit is the amount of information contained in an equi-probable binary event.
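More formally (this is Shannon’s self-information, not anything specific to Chat GPT), an event with probability p carries −log₂(p) bits, so the equi-probable binary event carries exactly one bit. In code:

```python
import math

def bits(p: float) -> float:
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)
```

`bits(0.5)` is 1.0, while a highly predictable event (say p = 0.99) carries only about 0.014 bits – which is why predictable prose feels information-light.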

Sometimes when we find that someone is too verbose and using too many words when fewer would suffice, we say that their bit rate is low. We also use “low bit rate” to describe people such as former Prime Minister Atal Behari Vajpayee, who would speak incredibly slowly.

However, beyond the bit, which is a fairly technical concept, it has been difficult to quantify information content. Sometimes you read an article or a story and find that there is nothing much to it. But with natural language, and the context that various words carry, it has been near impossible to quantify that information content.

Now, with Chat GPT, maybe it becomes a bit easier (though one would need a “reverse chat GPT algo”, to find the set of prompts required for Chat GPT to churn out a particular essay). Above, for example, I’ve shown how much fluff there generally is to the average job description – a fairly short prompt generated this longish description that is fairly accurate.

So you can define the information content of a piece or essay in terms of the number of words in the minimum set of prompts required for Chat GPT (or something like it) to come up with it. If you are a boring, stereotypical writer, the set of prompts required will be small. If you are highly idiosyncratic, then you will need to give a larger set of prompts for Chat GPT to write like you. You know where I’m going.
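This “minimum set of prompts” idea is essentially Kolmogorov complexity, which isn’t computable in general – but compressed size is a crude, practical proxy for the same intuition: boilerplate compresses well, idiosyncratic text doesn’t. A sketch of my own (nothing to do with how Chat GPT actually works):

```python
import zlib

def info_density(text: str) -> float:
    """Compressed bytes per original byte (lower = more fluff)."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# Repetitive JD boilerplate vs a shorter, less predictable sentence
boilerplate = ("We are seeking an experienced and highly skilled "
               "candidate to join our team. ") * 30
idiosyncratic = "Leapfrogging and trying to act too cool comes at a price. Woresht."
```

`info_density(boilerplate)` comes out far lower than `info_density(idiosyncratic)` – the job description above would compress beautifully.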

This evening, in office, a colleague commented that now it will be rather easy to generate marketing material. “Even blogs might become dead, since with a few prompts you can get that content”, he said (it can be a legit service to build off the Chat GPT API to take a tweet and convert it into an essay).

I didn’t tell him then but I have decided to take it up as a challenge. I consider myself to be a fairly idiosyncratic writer, which means I THINK there is a fair bit of information content in what I write, and so this blog will stay relevant. Let’s see how it goes.

PS: I still want to train a GAN on my blog (well over a million words, at last count) and see how it goes. If you know of any tools I can use for this, let me know!

 

Management watch

About a year back, a few months after I had started my current job, I was working late into the evening. I was sitting on the sofa with my laptop when my wife said, “you cannot call yourself senior management if you work like this”.

“What do you mean”, I asked.

“If you are truly senior management, you should not be using your computer after normal work hours. You should be doing everything using your phone. Do you remember, six months into my job at <@#R@#$@@>, I would work late into the night, but only with my phone?”, she countered.

I had to admit this was a good point. More practically, in terms of work stuff, I started thinking about making dashboards and reports more mobile-friendly. I started questioning interactive dashboards – if they are aimed at top management, the latter largely see the stuff on their phones, so interactivity is full of fat fingers.

Of course, the nature of my job means that I can never truly be senior management by this metric – I’m generally too hands-on to be able to work exclusively on my phone. However, that hasn’t stopped me from evangelising my wife’s theory. The theory itself is strong enough.

Recently I’d met a former client. He was using an iPad as a work “laptop”. I told him the theory and that he has truly arrived. He said he had been given a choice of an iPad and a Surface –  basically his company has internalised how senior management ought to be treated.

While I can never be senior management by this metric, I’m in a way trying to leapfrog it. Recently I got myself an Apple Watch. Apart from other things, it gives me notifications for all my messages, and I can reply using the watch as well. And this is where the magic begins.

For starters, Apple offers this standard set of templatised replies you can use. Now, Apple being Apple (and not Google), these replies are not customised to the message that you get. It drives me nuts that there is an “OK” and a “Sure!” and a “No” but no “Yes”. If these templates don’t work for you, you can actually type a message on the watch itself. My fingers are fat (and I wear my watch on my dominant hand), so this is not so useful for me. However, there is also a voice typing mode, and that is rather good. And that is where things get real.

The other day, I shut work early and went off for a walk (I like doing that). My team had not shut their work though, and they kept bombarding me with messages. And that is when I realised I could actually read their messages and REPLY TO THEM using my watch. Most of the messages were the template monosyllables. Sometimes I spoke into my watch (without breaking my stride), and let Apple’s excellent voice-to-text do the rest.

And so I have this new theory, which is an extension of my wife’s theory. The next level of senior management is to be able to get all your work done simply using your watch – not even needing your phone. Of course, limitations exist – only a few lines of text are shown for each email, and images don’t load, but it is only a matter of time before watches solve for this.

But then, I’ve discovered one massive downside of replying to messages using my watch – the tone. The template monosyllables all come across as rude (or curt). And voice-to-text means you don’t really have your filter on while typing, so you end up “writing as you would speak”, which can’t be great either.

The other day I was walking from our Michaelpalya office to our Binnamangala office, when I was bombarded with messages from someone. And without breaking my stride I replied to all the messages, speaking into my watch. I “wrote” as I would speak (complete with swearwords), and that turned out to be an incredibly rude set of messages I ended up sending (I apologised later that day, when I saw on my phone what I’d “written”).

So leapfrogging and trying to act too cool can sometimes come at a price.

Why calls are disruptive to work

It is well known in my company that I don’t like phone calls. I mean – they are useful at times, but they have their time and place. For most normal office communication, it is far easier to do it using chat or mail, and less disruptive to your normal work day.

Until recently, I hadn’t been able to really articulate why phone calls (this includes Meet / Zoom / Teams / whatever) are disruptive to work, but recently had an epiphany when I was either drunk or hungover (can’t remember which now) during/after a recent company party.

Earlier that day, during the said party, one colleague (let’s call him C1) had told me about another colleague (let’s call him C2) and his (C2’s) penchant for phone calls. “Sometimes we would have written a long detailed document”, C1 said, “and then C2 will say, ‘I have to make one small point here. Can you please call me?’. He’s just the opposite of you”

I don’t know why after this I started thinking about circuit switching and packet switching. And then I realised why I hate random office calls.

Currently I use a Jio connection for my phone. The thing with Jio (and 4G in general, I think) is that it uses packet switching for phone calls – it uses the same data network for calls as well. This is different from earlier 2G (and 3G as well, if I’m not wrong) networks, where calls were made on a separate voice (circuit-switched) network. Back then, if you got a call, your phone’s data connection would get interrupted – no packets could be sent because your phone was connected through a circuit. It was painful.

Now, with packet switching for phone calls as well, the call “packets” and the browsing “packets” can coexist and co-travel on the “pipes” connecting the phone to the tower and the wide world beyond. So you can take phone calls while still using data.

Phone calls in the middle of work disrupt work in exactly the same way.

The thing with chatting with someone while you’re working is that you can multitask. You send a message, and by the time they reply you might have written a line of code, or sent another message to someone else. This means chatting doesn’t really disrupt work – it might slow down work (since you’re also doing work in smaller packets now), but your work goes on. Your other chats go on. You don’t put your life on hold because of this conversation.

A work phone call (especially if it has to be a video call) completely disrupts this network. Suddenly you have to give one person (or persons) at the end of the line your complete undivided attention. Work gets put on hold. Your other conversations get put on hold. The whole world slows down for you.
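The analogy can be put into a toy sketch (my own illustration, not real networking code): the “call” claims the whole circuit until it ends, while “chat” packets interleave with work packets.

```python
import asyncio

async def work(log):
    for i in range(3):
        await asyncio.sleep(0.01)   # one small unit of work
        log.append(f"work-{i}")

async def chat(log):
    for i in range(3):
        await asyncio.sleep(0.01)   # one chat reply in between
        log.append(f"chat-{i}")

async def packet_switched():
    log = []
    await asyncio.gather(work(log), chat(log))  # interleaved, like chat
    return log

async def circuit_switched():
    log = []
    await chat(log)   # the "call": undivided attention until it ends
    await work(log)   # work resumes only after hanging up
    return log
```

Both schedules do the same total work, but in the packet-switched version the chat replies are threaded through the work; in the circuit-switched one, everything waits for the call.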

And once you hang up, you have the issue of gathering the context again on what you were doing and what you were thinking about and the context of different conversations (this is a serious problem for me). Everything gets disrupted. Sometimes it is even difficult to start working again.

I don’t know if this issue is specific to me because of my ADHD (and hence the issues in restarting work). Actually – ADHD leads to another problem. You might be hyper focussing on one thing at work, and when you get a call you are still hyper focussed on the same thing. And that means you can’t really pay attention to the call you are on, and can end up saying some shit. With chat / email, you don’t need to respond to everything immediately, so you can wait until the hyper focus is over!

In any case, I’m happy that I have the reputation I have, that I don’t like doing calls and prefer to do everything through text. The only downside I can think of is that you have to put everything in writing.

PSA: Google Calendar now allows you to put “focus time” on your own calendar. So far I haven’t used it too much but plan to use it more in the near future.

 

Upgrade effect in action

So the workflow goes like this. Sometime a week to 10 days back, I read about the “upgrade effect”. It has to do with why people upgrade their iPhones every 1-2 years even though an iPhone is designed to last much longer (mine is 5 years old and going strong).

The theory is that once you know an “exciting upgrade” is available, you start becoming careless with your device. And then when the device suffers a small amount of damage, you seize the chance to upgrade.

I’m typing this on a MacBook Pro that is 6 years old. It is one of the last “old Macbooks” with the “good keyboard” (the one with keys that travel. I’ve forgotten if this is “butterfly” or “scissor”).

With consistently bad feedback about the other keyboard (the one where keys didn’t travel), I was very concerned about having to replace my Mac. And so I took extra good care of it. Though, this is what the keyboard has come to look like.

Last year I dropped a cup of milk tea on it, and panicked. Two days of drying it out helped, and the computer continued to work as it did (though around the same time the battery life dropped). Last year Apple reintroduced the old keyboard (with keys that travel), and I made a mental note to get a new laptop presently.

However, with this year having been locked down, battery life has ceased to be a problem for me (I don’t have to work in cafes or other places without charging points any more). And so I have soldiered on with my old Mac. And I’ve continued to be happy with it (I continue to be happy with my iPhone 6S as well).

And then on Wednesday I saw the announcement of the new M1 chip in the new Macbook Pro, with much enhanced battery and performance. I got really excited and thought this is a good time to upgrade my computer. And that I will “presently do it”.

I don’t know if it was the article about the “upgrade effect” at work, but the same afternoon, sitting with my laptop on my lap and watching TV at the same time, I dropped it (I forget exactly how it happened. I was juggling multiple things and my daughter, and the computer dropped). It landed right on the screen.

Immediately it seemed fine. However, since yesterday, some black bands have appeared on the screen. Thankfully this is at one edge, so it doesn’t affect “regular work” (though for the last 3-4 months I’ve been using an external monitor at home). Yet, now I have a good reason to replace my laptop sooner than usual.

Based on the reviews so far (all of them have come before the actual hardware has shipped), I’m excited about finally upgrading my Mac. And this computer will then get donated to my daughter (she has figured out how to type even on a keyboard that looks like the above).

I hadn’t imagined that soon after learning about the “upgrade effect” I would fall for it. Woresht.

Open and closed platforms

This is a blogpost that I had planned a very long time (4-5 weeks) ago, and I’m only getting down to write it now. So my apologies if the quality is not as good as my blogposts usually are. 

Many of you would have looked at the title of this blogpost and assumed that the trigger for this was the “acquisition” of Joe Rogan’s podcast by Spotify. For a large sum of money, Spotify is “taking his podcast private”, making it exclusive to Spotify subscribers.

However, this is only an “immediate trigger” for writing this post. I’d planned this post way back in April when I’d written one of my Covid-19 related blogposts – maybe it was this one.

I had joked the post needed to be on Medium for it to be taken seriously (a lot of covid related analysis was appearing on Medium around that time). Someone suggested I actually put it on Medium. I copied and pasted it there. Medium promptly took down my post.

I got pissed off and swore to never post on Medium again. I got reminded of the time last year when Youtube randomly pulled down one of my cricket videos when someone (an IP troll, I later learnt) wrongly claimed that I’d used copyrighted sounds in my video (the only sound in that video was my own voice).  I had lodged a complaint with Youtube, and my video was resurrected, but it was off air for a month (I think).

Medium and Youtube are both examples of closed platforms. All content posted on these platforms is “native to the platform”. These platforms provide a means of distributing (and sometimes even marketing) the content, and all content posted there essentially belongs to the platform. Yes, you get paid a cut of the ad fee (if your Youtube channel becomes super popular, for example), but Youtube decides whether your video deserves to be there at all, and on whose homepages to put it.

The main feature of a closed platform is that any content created on the platform needs to be consumed on the same platform. A video I’ve uploaded on Youtube is only accessible on Youtube. A Medium post can only be read on Medium. A tweet can only be read on Twitter. A Facebook post only on Facebook.

The advantage with closed platforms is that by submitting your content to the platform, you are hoping to leverage some benefits the platform might offer, like additional marketing and distribution, and discovery.

This blog doesn’t work that way. Blogposts work through this technology called “RSS”, and to read what I’m writing here you don’t need to necessarily visit noenthuda.com. You can read it on the feed reader of your choice (Feedly is what I use). Of course there is the danger that one feed reader can have overwhelming marketshare, and the destruction of that feed reader can kill the ecosystem itself (like it happened with Google Reader in 2013). Yet, RSS being an open platform means that this blog still exists, and you can continue to receive it on the RSS reader of your choice. If Medium were to shut down tomorrow, all Medium posts might be lost.
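Underneath, an RSS feed is just an XML file that any client can fetch and parse – which is exactly what makes it open. A minimal sketch using only Python’s standard library (the feed below is made up for illustration; a real one would be fetched over HTTP):

```python
import xml.etree.ElementTree as ET

# A made-up, minimal RSS 2.0 feed
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post one</title><link>https://example.com/1</link></item>
    <item><title>Post two</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
posts = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
```

Any reader – Feedly, a podcast app, ten lines of code – can consume the same feed, which is why no single client going away kills the ecosystem.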

Another example of an open platform is email – it doesn’t matter what email service or app you use, my email and yours is interoperable. India’s Universal Payment Interface (UPI) is another open platform – the sender and receiver can use apps of their choice and still transact.

And yet another open platform (which a lot of people didn’t really realise is an open platform) is podcasting. Podcasts run on the RSS protocol. So when you subscribe to a podcast using Apple Podcasts, it is similar to adding a blog to your Feedly. This thread by Ben Thompson of Stratechery (which I just stumbled upon when I started writing this post) sums it up well.

What Spotify is trying to do (with the Joe Rogan and Ringer deals) is to take this content off open platforms and put it on its own closed platform. Some people (like Rogan) will take the bait since they’re getting paid for it. However, this comes at the cost of control – like I’m not sure if we’ll have another episode of Rogan’s podcast where host and guest light up a joint.

Following my experiences with Medium and Youtube, when my content was yanked off for no reason (or for flimsy reasons), I’m not sure I like closed platforms any more. Rather, someone needs to pay me a lot of money to take my content to a closed platform (speaking of which, do you know that all my writing for Mint (written in 2013-18) is behind their newly erected paywall now?).

In closing I must mention that platforms being “open” and platforms being “free” are orthogonal. A paid podcast or newsletter is still on an open platform (see Ben Thompson tweetstorm above), since it can be consumed on a medium independent of the one where it was produced – essentially a different feed is generated depending on what the customer has paid for.

Now that I’ve written this post, I don’t know what the point of this is. Maybe it’s just for collecting and crystallising my own thoughts, which is the point behind most of my blogposts anyway.

PS: We have RSS feeds for text and podcasts for audio. I wonder why we don’t have a popular and open protocol for video.

Zoom in, zoom out

It was early on in the lockdown that the daughter participated in her first ever Zoom videoconference. It was an extended family call, with some 25 people across 9 or 10 households.

It was chaotic, to say the least. Family call meant there was no “moderation” of the sort you see in work calls (“mute yourself unless you’re speaking”, etc.). Each location had an entire family, so apart from talking on the call (which was chaotic with so many people anyways), people started talking among themselves. And that made it all the more chaotic.

Soon the daughter was shouting that it was getting too loud, and turned my computer volume down to the minimum (she’s figured out most of my computer controls in the last 2 months). After that, she lost interest and ran away.

A couple of weeks later, the wife was on a zoom call with a big group of her friends, and asked the daughter if she wanted to join. “I hate zoom, it’s too loud”, the daughter exclaimed and ran away.

Since then she has taken part in a couple of zoom calls, organised by her school. She sat with me once when I chatted with a (not very large) group of school friends. But I don’t think she particularly enjoys Zoom, or large video calls. And you need to remember that she is a “video call native“.

The early days of the lockdown were ripe times for people to turn into gurus, and make predictions with the hope that nobody would ever remember them in case they didn’t come through (I indulged in some of this as well). One that made the rounds was that group video calling would become much more popular and even replace group meetings (especially in the immediate aftermath of the pandemic).

I’m not so sure. While the rise of video calling has indeed given me an excuse to catch up “visually” with friends I haven’t seen in ages, I don’t see that much value from group video calls, after having participated in a few. The main problem is that there can, at a time, be only one channel of communication.

A few years back I’d written about the “anti two pizza rule” for organising parties, where I said that if you have a party, you should either have five or fewer guests, or ten or more (or something of the sort). The idea was that five or fewer can indeed have one coherent conversation without anyone being left out. Ten or more means the group naturally splits into multiple smaller groups, with each smaller group able to have conversations that add value to them.

In between (6-9 people) means it gets awkward – the group is too small to split, and too large to have one coherent conversation, and that makes for a bad party.

Now take that online. Because we have only one audio channel, there can only be one conversation for the entire group. This means that for a group of 10 or above, any “cross talk” needs to be necessarily broadcast, and that interferes with the main conversation of the group. So however large the group size of the online conversation, you can’t split the group. And the anti two pizza rule becomes “anti greater than or equal to two pizza rule”.

In other words, for an effective online conversation, you need to have four (or at max five) participants. Else you can risk the group getting unwieldy, some participants feeling left out or bored, or so much cross talk that nobody gets anything out of it.
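The rule above can be put into a toy model (my own numbers, purely illustrative): offline, large parties split into subgroups of about five, each with its own conversation; online, there is exactly one audio channel whatever the group size.

```python
def coherent_conversations(n: int, online: bool = False) -> int:
    """How many coherent conversations a group of n can sustain."""
    if online:
        return 1      # one audio channel; any cross-talk is broadcast
    if n <= 5:
        return 1      # small enough for one coherent conversation
    if n <= 9:
        return 0      # awkward zone: too big to talk, too small to split
    return n // 5     # big parties split into subgroups of ~5
```

`coherent_conversations(25)` gives 5 offline, but still 1 online – which is why a 25-person family Zoom call is chaos.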

So Zoom (or any other video chat app) is not going to replace any of our regular in-person communication media. It might to a small extent in the immediate wake of the pandemic, when people are afraid to meet large groups, but it will die out after that. OK, that is one more prediction from my side.

In related news, I swore off lecturing in Webinars some five years ago. Found it really stressful to lecture without the ability to look into the eyes of the “students”. I wonder if teachers worldwide who are being forced to lecture online because of the shut schools feel the way I do.

Hanging out on Hangouts

The covid-19 crisis has fundamentally changed the way we work, and I think some things we are not going to get back. For the foreseeable future, at least, even after official lockdowns have been lifted, people will be hesitant to meet each other.

This means that meetings that used to earlier happen in person are now going to happen on video calls. People will say that video calls can never replace the face-to-face meetings, and that they are suboptimal, especially for things like sales, account management, relationship management, etc.

The main reason why face-to-face interactions are generally superior to voice or video calls is that the latter are considered transactional. Let’s say I decide to meet you for some work-related thing. We meet in one of our offices, or a coffee shop, or a bar, and indulge in pleasantries. We talk about the traffic, about coffee, about food, do some random gossip, discuss common connects, and basically just hang out with each other for a while before we get down to work.

While these pleasantries and “hanging out” can be considered a sort of transaction cost, it is important that we pay it, since it helps in building relationships and getting more comfortable with each other. And once you’ve gotten comfortable with someone, you are likely (at the margin) to do more business with them, and have a more fruitful relationship.

This kind of pleasantry is not common on a phone call (or a video call). Usually phone calls have more well-defined start and end boundaries than in-person meetings. It is rather common for people to just get started on the topic of discussion rather than getting to know one another, cracking jokes, discussing the weather and all that.

If we need video and phone calls to become more effective in the coming months (at least as long as people aren’t stepping out), it is imperative that we learn to “hang out on hangouts”. We need to spend some time in the beginning of meetings with random discussions and gossip. We need to be less transactional. This transaction cost is small compared to the benefit of effectively replicating in-person meetings.

However, hanging out on hangouts doesn’t come easily to us – it’s not “natural”. The way to get around it is through practice.

On Sunday night, on a whim, I got onto a group video call with a bunch of college friends. Midway through the call I wondered what we were doing. Most of the discussion was pointless. But it gave us an opportunity to “hang out” with each other in a way we hadn’t for a long time (because we live in different places).

Overall, it was super fun, and since then I’ve been messaging different groups of friends saying we should do group video chats. Hopefully some of those will fructify. Along with the immediate fun to be had, they will also help me prepare better for “hanging out” at the beginning of my work meetings.

I think you should do them, too.

Fancy stuff leads to more usage

A couple of months back, I decided to splurge a bit and treat myself to a pair of AirPods. Not the Pro version, which hadn’t yet been released, but the then-latest generation. For someone who had hardly ever bought earphones in life (mostly using the ones that came bundled with phones), and who would incessantly research before buying electronics, this counted as an impulse purchase.

A few months back a friend had told me that he had researched all the earphones in the market, and concluded that the best one for making calls is the AirPods. As it happens, he has an Android phone, and so decided it’s not worth it in the absence of an iPhone. And when he told me this, I figured that with an all-Apple lineup of devices, this is something I should seriously consider.

In the past I’d never been that much of an earphone user, mostly using them to listen to music when seated with my laptop outdoors. I hardly ever used them with my phone (a cable jutting out of the pocket was cumbersome). Based on that rationale, when I was in the market for a pair last year, I ended up buying a random cheap pair.

What my AirPods have shown me is that having a good device makes you use it so much more.

The UX on the AirPods is excellent and intuitive. Right now, for example, they’re connected to my laptop as I listen to music while writing this. If I were to get a call right now, I can very quickly switch them to pair with my phone, and talk on. And then after the call it’s two clicks to get them back to pair with the laptop.

This kind of experience is something that cannot be quantified, and because you cannot quantify and compare it across competing devices, deep research can miss out on it entirely. This is one of those points that Rory Sutherland makes in Alchemy, which I read last month. You fail to appreciate things like experience until you have actually experienced them.

The amazing UX on the AirPods, not to talk about the great sound, means that I’ve, in a month, used them far more than I’d use other earphones in a year. Even when alone at home, I don’t blast music on my computer now – it’s always through the AirPods. I sometimes wear them while going on walks (though long walks are reserved for introspection with nothing streaming through my ears).

I was in Mumbai on Tuesday, and on the flights both ways I listened to podcasts using the AirPods. I’m surprised I had never thought of this before – it’s incredibly neat, since you can close your eyes and listen, and sleep at your leisure. On commutes between meetings in Mumbai, I listened to podcasts in taxis. And so on.

So this is a learning for the next time – when I’m researching for a product that I think I may not use frequently, I need to keep in mind that if I like it I will use it far more than whatever it replaces. And if that is going to make my life better, the premium I would have paid for it will be really really worth it.

Oh, and coming back to AirPods, one question I keep getting is if they’re easy to lose. Based on the evidence so far, the biggest risk on that count is the daughter running off with one or both of them and misplacing them somewhere!

Lullabies and walled gardens

There’s still a fair bit of walled gardening going on in the device and voice control space. About two years ago, in London, we acquired an Amazon Echo, and found that the Alexa voice assistant could be used to play songs through either Spotify or Amazon Music, but not through Apple Music, which we then used.

And so, we got rid of Apple Music and took a subscription to Spotify. And among the things we would make Alexa do was to play the daughter’s lullabies on Spotify. And that is how, at the age of two, Berry spoke her first complete sentence, “Alexa, use Spotify to play Iron Man by Black Sabbath”.

We don’t have that Echo any more, and as a household are in a complete “Apple ecosystem” as far as devices are concerned. Two Macs, two iPhones, an iPad and now a pair of AirPods. However, we had quite got used to Spotify and its playlists and its machine learning, and even though the India catalogue is nowhere near as good as the one in the UK, we continued our subscription.

However, bands such as Black Sabbath, Led Zeppelin and Iron Maiden are critical for us, not least because their songs are part of the daughter’s sleeping portfolio. So we needed something other than Spotify. And then we discovered that in India, Amazon Prime Music comes bundled with the Amazon Prime membership. And so we created the daughter’s sleeping playlist there, and started using it for bands not available on Spotify.

It was an uncomfortable arrangement, not least because Amazon Music is a terrible software product. Since family subscriptions are still not a thing with Spotify India, during periods of deep work the wife and I would fight over who got Spotify and who had to make do with Amazon Music.

And then there is voice. Being in a complete Apple ecosystem now, we found that Siri couldn’t control Spotify or Amazon Music, and for a seamless voice experience (especially given I use it in the car, via Apple CarPlay) we needed Apple Music. And given how painful Amazon Music is to use, I thought spending ₹149 a month on an Apple Music Family Subscription was worth it, and took the subscription yesterday.

Since then I’ve been happily using it with voice control on all devices. That is, until an hour ago, when I was putting the daughter to sleep. She asked for “baby has he”, which is her way of saying she wants Iron Man by Rockabye Baby (rather than by Black Sabbath). And so I held down the home button of the iPad and barked “play lullaby renditions of Black Sabbath”.

I don’t know what Siri interpreted (this is a standard command I’d been giving it back in the day, when I used to exclusively use Apple Music), but rather than playing Lullaby Renditions of Black Sabbath, it played some “holy lullabies” – basically lullaby versions of some Christian songs. I tried changing it, but the daughter insisted that I let it be.

And so she kept twisting and turning in her bed, not going to sleep. I soon lost patience. Abandoning voice, I opened the iPad and switched from Apple Music to Spotify, where I knew the Rockabye Baby album was open (from last night – we hardly use the iPad otherwise nowadays), and started playing that.

Before Iron Man was halfway through, the daughter was fast asleep.

Television and interior design

One of the most under-rated developments in the world of architecture and interior design has been the rise of the flat-screen television. Its earlier avatar, the Cathode Ray Tube version, was big and bulky, and needed special arrangements to accommodate. One solution was to keep it in a corner. Another was to have purpose-built deep “TV cabinets” into which these big screens would go.

In the house that I grew up in, there was a purpose-built corner to keep our televisions. Later on in life, we got a television cabinet to put in that place, which housed the television, music system, VCR and a host of other things.

For the last decade, which has largely coincided with the time when flat-screen LCD/LED TVs have replaced their CRT predecessors, I’ve seen various tenants struggle to find a good spot for their TVs. For the corner is too inelegant for the flat-screen television – it needs to be placed flat against the middle of a large wall.

When the flat screen TV replaced the CRT TV, out went the bulky “TV cabinets” and in came the “console” – a short table on which you kept the TV, and below which you kept the accompanying accessories such as the “set top box” and DVD player. We had even got a purpose-built TV console with a drawer to store DVDs in.

Four years later, we’d dispensed with our DVD player (so at a time when my wife’s job involved selling DVDs and CDs, we had no device at home that could play either!). And now we have “cut the cord”. After we returned to India earlier this year, we decided not to get cable TV, relying on streaming through our Fire stick instead.

And this heralds the next phase in which television drives interior design.

In the early days of flat screen TVs, it became common for people to “wall mount” them. This was usually a space-saving device, though people still needed a sort of console to store input devices such as set top boxes and DVD players.

Now, with the cord having been cut and the DVD player no longer common, wall mounting doesn’t make sense at all. For with WiFi-based streaming devices, the TV is now truly mobile.

In the last couple of months, the TV has nominally resided in our living room, but we’ve frequently taken it to whichever room we wanted to watch it in. All that we need to move the TV is a table to keep it on, and a pair of plug points to plug in the TV and the Fire stick.

In our latest home reorganisation we’ve even dispensed with a permanent home for the TV in the living room, thus radically altering its design and creating more space (the default location of the TV now is in the study). The TV console doesn’t make any sense, and has been temporarily converted into a shoe rack. And the TV moves from room to room (it’s not that heavy, either), depending on where we want to watch it.

When the CRT TV gave way to the flat screen, architects responded by creating spaces where TVs could be put in the middle of a long wall, either mounted on the wall or kept on a console. That the TV’s position in the house changed meant that the overall architecture of houses changed as well.

Now it will be interesting to see what large-scale architectural changes get driven by cord-cutting and the realisation that the TV is essentially a mobile device.