For decades, computer scientists have worked to make computers more efficient. We can borrow their principles to make decisions in our own lives, which, as you will see, boils down a lot to old sayings and advice, but this time quantified and with proof (left as an exercise to the reader).
Disclaimer: This post was fully inspired by the book “Algorithms to Live By” by Brian Christian and Tom Griffiths. I was hooked by its content and by how relevant it has been to my recent life.
Foreword
I suffer a lot from indecisiveness because of the complexity of life’s choices. One could argue that life is but a sequence of decisions. That’s true for computers as well, and that is no coincidence. The problems faced by computers and by us go hand in hand. A few examples:
| architecture problem | principle | real-life equivalent |
| --- | --- | --- |
| using finite memory to satisfy several processes’ space requirements | space multiplexing | a few lecture halls satisfying the needs of multiple courses |
| ability to run only one task at a time, but required to do more | time multiplexing | a person filling the roles of learner, employee, chef, parent, friend, etc. |
| performing multiple tasks | scheduling | laying out a weekly time schedule |
| capability to perform all kinds of tasks, from playing music to generating text | algorithmic decomposition | a firm distributing decoupled work among specialized teams to build a car |
| processing incomplete/insufficient information | assumptions, simplifications, latent variables | the exchange market: knowing when to buy/sell based on unknown public belief |
| incorrect read before a write | synchronization | The Gift of the Magi |
| Process A is blocked by Process B, which in turn is blocked by Process A | deadlock | the Dining Philosophers Problem |
And so the solutions are inherently the same. Now that you’re convinced of the similarities, let’s look at one problem discussed in the book.
Optimal Stopping
There is a popular YouTube video from Alain de Botton’s The School of Life, linked here, which also talks about this topic.
My experience
Recently I was house-hunting in Zürich, which is a mess due to the massive demand within the city. Prices vary wildly and correlate poorly with the value you get: you might find a shack of a room for 900.–, and you can get a fantastic room for as low as 400.–.
When you see a room, you can compare it to the others you’ve seen so far, whether it is better or worse. There is also a vague absolute scale/score for each room. This score varies from person to person; for me it was proximity to the university, the cost of rent, and the proportion of shared space.
For instance, I saw a room priced at 680.–, 35 minutes from my campus and shared by 4 people. This would definitely be better than another room priced at 700.–, ceteris paribus. I say the score is vague because I had no metric for comparing apples to oranges, here campus distance against rent.
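One way to make such apples-to-oranges comparisons concrete is a single combined score. A minimal sketch, where the scoring function and its weights are invented purely for illustration and are not something I actually used:

```python
def room_score(rent_chf, minutes_to_campus, flatmates,
               w_rent=1.0, w_dist=10.0, w_share=50.0):
    """Hypothetical linear cost for a room: lower is better.
    The weights trade off rent (CHF), commute (minutes), and sharing."""
    return (w_rent * rent_chf
            + w_dist * minutes_to_campus
            + w_share * flatmates)

# The 680.- room, 35 min away, shared by 4, beats a 700.- room
# that is otherwise identical (ceteris paribus):
print(room_score(680, 35, 4) < room_score(700, 35, 4))  # True
```

Any monotone weighting would preserve the ceteris-paribus comparison above; the weights only matter once the rooms differ on more than one axis.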
I also had a deadline to get a place (or else become homeless). The offers I received arrived at random, so it was impossible to say when the best one would come. Once I accepted an offer, the search was over; if I rejected it, the offer was lost forever.
The Problem with Assumptions
This situation fits the famous Secretary Problem quite well, a problem which goes by many names, most notably the 37% rule.
The problem is simple:
- You are given a pool/sequence of uniformly randomly distributed candidates (no concentration of talent anywhere in the sequence).
- You can go through the candidates one by one.
- You can compare two candidates’ relative ranking, but no absolute ranking is possible: you can’t say a candidate is the 10th best in the pool, but you can say they are better than everyone you have seen so far.
- If you reject a candidate, they are not coming back!
- If you accept a candidate, your search is over and that’s your choice.
You want to choose the best candidate.
Solution
The rule is simple too.
Given that you know the size of the finite pool you will deal with, Look up to 37% and then Leap: go through (and reject) the first ~37% of all candidates (more precisely, n/e ≈ 36.8%) in order to estimate the level you can get in this market, and after that choose the first candidate you find who is better than everyone you have seen so far.
This alludes to the fact that you should never go for a candidate who isn’t the best you have seen yet. Exploring (and rejecting) 37% of the pool exposes you to enough candidates to judge it, while leaving enough margin for a candidate better than all of those to still appear.
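The rule is easy to check empirically. A minimal Monte Carlo sketch (the 100-candidate pool size is an arbitrary choice): rejecting the first ~37% and then leaping to the first candidate who beats everyone seen so far picks the single best candidate roughly 37% of the time.

```python
import random

def look_then_leap(n, cutoff, trials=100_000):
    """Estimate the probability that rejecting the first `cutoff`
    candidates, then taking the first one better than all seen so far,
    selects the single best of n randomly ordered candidates."""
    wins = 0
    for _ in range(trials):
        candidates = list(range(n))  # rank n-1 is the best candidate
        random.shuffle(candidates)
        best_seen = max(candidates[:cutoff], default=-1)
        pick = next((c for c in candidates[cutoff:] if c > best_seen), None)
        wins += (pick == n - 1)
    return wins / trials

n = 100
print(look_then_leap(n, round(0.37 * n)))  # ~0.37
```

Any other cutoff does worse on average, which is exactly what makes 37% the optimal stopping point.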
Showing equivalence
For my housing situation, the assumptions don’t translate exactly to this problem formulation, but a bit of tweaking seems to satisfy them all.
- An offer from an owner corresponds to a candidate, not my application itself.
- The sequence itself was ordered by time.
- I didn’t know how many offers I would receive, but given the fixed deadline and an offer frequency that varied but averaged about two per week, I could estimate roughly 24 offers over 3 months.
- The uniform randomness of the offers holds, since I only looked at a fixed price range (450.– to 700.–) and offers in this range arrived at random times.
- I could compare two candidates based on my metrics.
- Once I declined an offer, it was gone. If I accepted one, my search was over.
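Under these assumptions the look phase can be sized directly (the 24-offer figure is the estimate above, from the deadline and the average offer rate):

```python
offers_per_week = 2                  # observed average
weeks = 12                           # ~3-month deadline
n = offers_per_week * weeks          # ~24 expected offers
look_phase = round(0.37 * n)         # offers to see and reject first
print(n, look_phase)                 # 24 9
```

So the rule prescribes rejecting roughly the first 9 offers, then taking the first one that beats them all.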
Sim to Real Gap
It takes guts to follow this rule, especially given how irrational I (we) can be. Point 3 in particular is a big regularity assumption: offers don’t arrive in a steady stream of 8 per month, and a dry spell tempts me to take the next one that comes. In the end I didn’t follow the rule by offers received but by offers seen, which led me to take the fourth offer I had (instead of the ninth or tenth), the one that seemed to tick more boxes than usual.
Lesson pocketed
We often don’t get a second shot. Optimal stopping gives two hints on how to do better: first, analyze the sample space (the available set of choices) for a while and then pick the good outlier after your analysis; second, attempt more.
This lesson comes across in many different forms and clichés: Edison’s “1000 ways of how-not-to”, failure being the “pillar of success”, or luck favouring the brave. All of these convey the same meaning: give it more tries if the choice really matters to you, and in the process, use the 37% rule.