Friday, May 6, 2022

Book Insights 2/3: A Human's Guide to Machine Intelligence - Kartik Hosanagar

This blog is only a summary note of the book and does not capture its full content and details.
This blog is written for academic purposes; please cite the book A Human's Guide to Machine Intelligence, Author - Kartik Hosanagar, Publisher - Penguin, when referencing.

I encourage readers to buy the book for a detailed reading.
It's available on Amazon and Flipkart.


The book A Human's Guide to Machine Intelligence by Kartik Hosanagar provides readers valuable insights into two million-dollar questions: how are algorithms shaping our lives, and how can we stay in control?

The book consists of three parts:


  • Free Will in an Algorithmic World
  • The Law of Unanticipated Consequences

  • Omelet Recipes for Computers: How Algorithms Are Programmed
  • Algorithms Become Intelligent: A Brief History of AI
  • Machine Learning and the Predictability-Resilience Paradox
  • The Psychology of Algorithms

  • In Algorithms We Trust
  • Which Is to Be Master - Algorithm or User?
  • Inside the Black Box
  • An Algorithmic Bill of Rights
  • Conclusion: The Games Algorithms Play

The Preface traces the journey of the development of artificial intelligence.

In 1992, Viswanathan Anand played Fritz, a chess-playing software program, and won easily. In 1994, an improved version of Fritz beat Anand and several other grandmasters. By 1997, IBM's Deep Blue had beaten world chess champion Garry Kasparov in a six-game match. Deep Blue could explore 200 million possible chess positions in a mere second. Google's latest chess-playing software, AlphaZero, taught itself the game in just 4 hours. It explores only 80,000 positions per second before making the move with the greatest likelihood of success.

With the advancement of AI, its applications moved from the sphere of chess and games into our daily lives. The author gives examples of Flipkart's recommendation algorithm and speech-recognition start-up, Myntra's AI-generated apparel designs, GreyOrange deploying robots in warehouses, HDFC's OnChat chatbot, and recruitment companies using AI to screen resumes.

With AI permeating all spheres of life, it also brings unique challenges, e.g. data security, data privacy, job creation and job displacement, AI biases, etc.

Introduction section: 

It talks about Microsoft's chatbot XiaoIce, which was launched in 2014. It was developed after years of research on NLP (natural language processing) and conversational interfaces. XiaoIce attracted more than 40 million followers and friends on WeChat and Weibo (social apps). In other words, humans interact with XiaoIce on a regular basis just as two human friends would.

Woebot is a chatbot therapist used to help people maintain their overall mental health.

Microsoft introduced its chatbot Tay on Twitter in 2016 in the US. It garnered 100,000 interactions in just 24 hours, but soon took on an aggressive personality of its own, sending out extremely racist, fascist and sexist tweets. The very next day, Microsoft shut the project down.

An algorithm is a series of steps (logic) one needs to follow to arrive at a defined outcome.

Traditionally, programmers developed algorithms that were static in nature.

With advancements in artificial intelligence (AI), algorithms can now take in data, learn new steps (logic) on their own, and generate more sophisticated versions of themselves.

Machine learning (ML) is a subfield of AI that empowers machines with a self-learning ability so that they can improve with experience.
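The shift from hand-coded to self-adjusting logic can be sketched in a few lines of Python. This is a made-up fraud-flagging example, not from the book: the static rule keeps its threshold forever, while the "learning" rule re-derives it from observed data.

```python
def static_rule(purchase_amount):
    """Hand-coded logic: the threshold never changes."""
    return "flag" if purchase_amount > 1000 else "ok"

class LearnedRule:
    """Self-adjusting logic: the threshold adapts to observed data."""
    def __init__(self, threshold=1000.0):
        self.threshold = threshold

    def observe(self, amounts):
        # Re-derive the threshold from experience (here: mean + 2 std dev).
        mean = sum(amounts) / len(amounts)
        var = sum((a - mean) ** 2 for a in amounts) / len(amounts)
        self.threshold = mean + 2 * var ** 0.5

    def predict(self, purchase_amount):
        return "flag" if purchase_amount > self.threshold else "ok"

rule = LearnedRule()
rule.observe([20, 35, 40, 25, 30, 45])   # typical purchases are small
print(static_rule(5000))   # → flag
print(rule.predict(5000))  # → flag (but the threshold came from the data)
```

Real ML models are far more sophisticated, but the contrast is the same: the first function's logic is fixed by its programmer, while the second's logic is a product of the data it sees.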

Modern algorithms incorporate both AI and ML. The most popular versions of these algorithms are built on neural networks, opaque machine learning techniques. Even their human programmers can't anticipate, explain or sometimes even understand the strategies and behaviours they learn. They are thereby advancing beyond their decision-support role (suggestions) to become autonomous systems that make decisions on our behalf.

ProPublica published a report on how algorithms used in Florida courtrooms are biased in favour of white defendants and against black defendants.

On the afternoon of May 6, 2010, a large mutual fund group used a single algorithm to sell 75,000 contracts in just 20 minutes. This led to a domino effect as other companies' trading algorithms, seeing the market behaviour, attempted to exit the market by selling more stocks. In about half an hour, nearly $1 trillion of market value was wiped out. This event became known as the 'flash crash'.

Data scientist and political activist Cathy O'Neil calls algorithms built on Big Data "Weapons of Math Destruction". According to philosopher Nick Bostrom, the inherent unpredictability of AI poses an existential threat to humans.

Despite these concerns, modern AI-based algorithms are here to stay. Hence the author, Kartik Hosanagar, delves into the mind of the algorithm and answers three related questions:

  1. What causes algorithms to behave in unpredictable, biased and potentially harmful ways?
  2. If algorithms can be irrational and unpredictable, how do you decide when to use them?
  3. How do we, as individuals who use algorithms in our personal or professional lives and as a society, shape the narrative of how algorithms impact us?


Free Will in an Algorithmic World

The author takes us through the daily routine of a senior research fellow, which seems random at one level, but he makes us wonder about the degree to which the algorithms of Facebook, Google, Tinder and Amazon play a role in his day-to-day circumstances. Essentially, it makes us ask: to what extent are we in control of our own actions?

The author explains that popular design approaches (notifications and gamification) are meant to increase user engagement. They exploit and amplify human vulnerabilities (the need for social approval, instant gratification), making us act impulsively and against our better judgement.

The author also makes us realise that though we feel a sense of free will by making the final decision, in reality 99% of all possible alternatives are excluded by search algorithms (Google's algorithms determine which ones are featured at the very top of the results page). Automated recommendations are also a major driver of the choices we make about what to buy, what to watch and what music to listen to online.

The algorithms of Amazon, Walmart, Netflix, Spotify, Apple's iTunes and Google's YouTube gently nudge us in specific directions.

Social media algorithms are the chief drivers of the content we see. Facebook's, Instagram's and Twitter's algorithms determine which potential stories or posts you should read first, which you can read later and which you don't need to read at all. The news-feed algorithms on social networking websites play a crucial role in the reportage we read and the opinions we form about the world around us.

Algorithms are also shaping our social networks themselves. LinkedIn's algorithm suggests adding the people you emailed recently to your professional network, Facebook's algorithm recommends whom to add as friends, and Tinder's recommends whom you date or marry.

The book presents a case study in which the recommendation algorithm was based not just on the stated preferences submitted by subscribers but on their online behaviour (which profiles they checked and clicked on).

The Law of Unanticipated Consequences

The author introduces us to Kevin Gibbs, who programmed Google Search's autocomplete prediction tool (suggestions). The predictions are based on the user's own recent search queries, on trending news stories, and on what other people are searching for on Google. The feature brings great efficiency (time saving), but there is also an unintentional outcome: autocomplete reveals the prejudices existing around the subject of a search. Is Google therefore unintentionally leading impressionable people, who did not initially seek this information, to webpages filled with biases and prejudices?

Another case study presented to us is the world's first beauty contest judged by AI. Though more than 6,000 people from 100 countries participated, the 44 winners chosen by the software were nearly all white.

Google Photos faced a similar issue in 2015 with its photo-tagging algorithm. A photo of Jacky Alcine (a software engineer) and his friend, both black, was auto-tagged "Gorillas". The image-processing algorithms hadn't been trained on a large enough number of photos of black people and hence could not distinguish between different skin tones.


Omelet Recipes for Computers: How Algorithms Are Programmed

In this chapter the author presents the recommendation engines of digital music platforms such as Pandora and Spotify.

Pandora's algorithms are based on "content-based recommendation". These systems start with detailed information about a product's characteristics and then search for other products with similar qualities. Other platforms' algorithms are based on "collaborative filtering". For example, if you like Yellow by Coldplay, the algorithm will find someone else who also likes Yellow and look at what else she listens to; it then recommends those new songs to you.
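The two approaches can be sketched as toy Python functions. The songs, attributes and listeners below are invented for illustration:

```python
# Content-based: recommend songs whose attributes resemble a liked song.
attributes = {
    "Yellow":      {"acoustic", "melancholic", "rock"},
    "Fix You":     {"melancholic", "rock", "anthemic"},
    "Uptown Funk": {"funk", "upbeat", "dance"},
}

def content_based(liked, catalogue):
    # Rank the other songs by attribute overlap with the liked song.
    return max((s for s in catalogue if s != liked),
               key=lambda s: len(catalogue[liked] & catalogue[s]))

# Collaborative filtering: recommend what similar listeners also play.
listens = {
    "alice": {"Yellow", "Fix You"},
    "bob":   {"Yellow", "Clocks"},
    "carol": {"Uptown Funk"},
}

def collaborative(user, listens):
    mine = listens[user]
    # Find the most similar other listener and suggest her extra songs.
    peer = max((u for u in listens if u != user),
               key=lambda u: len(mine & listens[u]))
    return listens[peer] - mine

print(content_based("Yellow", attributes))  # → Fix You
print(collaborative("bob", listens))        # → {'Fix You'}
```

Real systems use far richer similarity measures, but the division of labour is the same: content-based methods compare item attributes, while collaborative filters compare people.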

Spotify's algorithm tries to combine these two methods.

Pandora has a deep understanding of music and vocabulary, which emerged from the Music Genome Project, in which musicologists listened to individual tracks and assigned more than 450 attributes to each. 

Collaborative filters lack in-depth knowledge; they are simplistic in approach and easy to roll out, and have therefore become the most popular class of automated product recommenders on the internet (e.g. Netflix, YouTube, Google News).

How do social media websites like Facebook, LinkedIn and Tinder recommend people?

The common approach they take is to match based on similarities in demographic attributes (age, occupation, location) and shared interests in topics and ideas discussed on social media.

An alternative approach used by other algorithms relies on people's social networks themselves. For example, if you and I are not connected on LinkedIn but have more than a hundred mutual connections, we are notified that we should perhaps be connected.

Both systems are intuitive. They are based on homophily (our tendency to connect with those most like us).  
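The mutual-connections heuristic can be sketched in a few lines. The names and the two-friend threshold here are illustrative, not from the book:

```python
# Toy friendship graph: who is already connected to whom.
friends = {
    "me":    {"asha", "ben", "chen"},
    "you":   {"asha", "ben", "dev"},
    "other": {"dev"},
}

def suggest(user, friends, min_mutual=2):
    """Suggest non-connections who share at least min_mutual friends."""
    mine = friends[user]
    return [u for u in friends
            if u != user and u not in mine
            and len(mine & friends[u]) >= min_mutual]

print(suggest("me", friends))  # → ['you']  (we share asha and ben)
```

Homophily does the rest: because we tend to befriend people like ourselves, a high mutual-connection count is a strong signal that two people belong in each other's networks.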

In this chapter, the author touches upon another interesting topic: neighbourhoods (for brick-and-mortar stores and for the digital world).

With years of experience, we all know the importance of location in real estate. If a store is in the right location, it will attract customer footfall. The concept of location extends to the inside of grocery stores too: similar items are placed next to one another on the same shelf. "Neighbourhoods", therefore, allow us as customers to quickly find what we are looking for.

For digital neighbourhoods, the author gives examples from the early days of the World Wide Web, when Yahoo and GeoCities used to classify businesses by the products and services they offered: a conventional directory of products converted into a website organising products and services into various categories.

With the information explosion, this early form of online categorisation died a natural death and was replaced by algorithms such as search engines, recommendation systems and social news feeds. In other words, today's digital neighbourhoods are managed by these new forms of algorithms.

Google's PageRank algorithm ranks webpages not only by the occurrence of search terms on the pages but also by the hyperlinks from other pages ("inlinks") that a page receives. 
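The PageRank idea can be sketched as a tiny power-iteration loop. This is a simplified illustration, not Google's production algorithm; the damping value 0.85 is the commonly cited default, and the three-page "web" is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Score pages so that inlinks from high-ranked pages count for more."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Sum contributions from every page that links ("inlinks") to p;
            # each linking page splits its own rank across its outlinks.
            inflow = sum(rank[q] / len(links[q])
                         for q in pages if p in links[q])
            new[p] = (1 - damping) / len(pages) + damping * inflow
        rank = new
    return rank

# Toy web: A and C both link to B, so B earns the highest rank.
ranks = pagerank({"A": {"B"}, "B": {"C"}, "C": {"A", "B"}})
print(max(ranks, key=ranks.get))  # → B
```

The key property is visible even at this scale: a page's rank depends not just on how many pages link to it, but on how highly ranked those linking pages are.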

Facebook's ad-targeting algorithms stem from social connections, since similar people tend to be friends and hence share similar affinities for brands/products/services.

Amazon's shopping recommendation, "people who viewed/bought this product also viewed/bought these other products", creates a "network" of interconnected goods.
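That co-purchase network can be sketched in a few lines (the shopping baskets below are made up for illustration):

```python
from collections import defaultdict
from itertools import combinations

# Each basket is one customer's purchase; items bought together get linked.
baskets = [
    {"camera", "sd card"},
    {"camera", "tripod"},
    {"sd card", "camera", "case"},
]

links = defaultdict(set)
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        links[a].add(b)
        links[b].add(a)

# "People who bought this also bought" = an item's neighbours in the network.
print(sorted(links["camera"]))  # → ['case', 'sd card', 'tripod']
```

Every item thus sits in a "digital neighbourhood" defined not by a curator but by the aggregated behaviour of shoppers.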

Food for thought - does Amazon's recommendation algorithm (collaborative filtering) increase diversity? After all, wasn't the internet meant to be a democratisation tool?

The truth is that common algorithms reduce the diversity of items we consume, because collaborative filtering is biased towards popular items. The reason: it recommends items based on what others are consuming, so obscure items are ignored.

The author and his team provide examples from simulation experiments they carried out, which demonstrate that these algorithms can create a rich-get-richer effect for popular items. Even though individuals might discover new items, aggregate diversity doesn't rise, and the market share of popular items only increases.
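The rich-get-richer dynamic can be sketched with a toy simulation. This is my own minimal illustration of the mechanism, not the authors' actual experiment: a recommender that suggests items in proportion to their current popularity.

```python
import random

random.seed(1)
plays = {"hit": 60, "mid": 30, "niche": 10}   # initial play counts

for _ in range(1000):
    # Popularity-biased recommender: an item is suggested (and played)
    # in proportion to how often it has already been played.
    items, weights = zip(*plays.items())
    choice = random.choices(items, weights=weights)[0]
    plays[choice] += 1

# Each accepted recommendation makes the popular item more likely to be
# recommended again; obscure items rarely get a chance to catch up.
print(sorted(plays.items(), key=lambda kv: -kv[1]))
```

Because every play feeds back into the weights, the head of the distribution keeps its lead: individual users do discover songs, but aggregate consumption stays concentrated on the same few items.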

The author points to Spotify's hybrid design (a combination of a collaborative filter with a content-based method) as a better solution to this problem. [Experts and AI/ML are deployed in the content-based method for detailed research into identifying the attributes of the product (song), rather than relying on popularity alone.]

Food for thought - we generally don't evaluate algorithms holistically, hence the unintended consequences of an algorithm are overlooked.


A Brief History of AI

The author takes us back in time to 1783 by narrating the story of the 'Mechanical Turk', a chess-playing machine which turned out to be a hoax involving a hidden human player, a mechanical arm and a series of magnetic linkages.

More than 150 years later, in 1950, Alan Turing published a paper posing a profound question: "Can machines think?" He imagined a computer that might chat with humans and fool them into believing it too was human. This came to be known as the Turing Test, a measure of the intelligence of machines.

Soon after, John McCarthy proposed a workshop to pursue the question of how to make machines solve problems that only humans were assumed capable of solving. He named it the Dartmouth Summer Research Project on Artificial Intelligence, and it took place in 1956. In this workshop, human-level intelligence was recognised as the gold standard to aim for in machines. It also established AI as a branch of science: what is thinking, what are machines, where do the two meet, and to what end?

By 1959, Allen Newell and Herb Simon had built a software program, the Logic Theorist, which proved theorems from the seminal work Principia Mathematica.

By 1967, engineers at MIT had developed Mac Hack VI, the first computer to enter a human chess tournament and win a game.

The AI community used different approaches for building intelligent systems:

  1. Rules of Logic
  2. Statistical techniques (infer probabilities of events based on data)
  3. Neural networks (inspired by how network of neurons in human brain fires to create knowledge)

By 1969 the neural network approach had lost favour, when Marvin Minsky et al. published Perceptrons, a book critical of neural networks that highlighted their limitations.

With more promises than output, the 1970s and 1980s saw funding for AI research on a downward spiral. The funding was redirected to other areas of computer science, such as networking, databases and information retrieval.

The AI community had to set more-realistic near-term goals.

The previous approach of AI research was AGI (artificial general intelligence), also described as 'strong AI'.

The newer approach became the expert-systems approach: ANI (artificial narrow intelligence), also described as 'weak AI'.

IBM's Deep Blue computer defeating Garry Kasparov on May 11, 1997 was a very important milestone in the history of modern computing.

However, the ANI approach fell short, as its algorithms would fail in any situation they were not explicitly programmed for. (There can be an infinite number of situations in complex activities, so it is an endless task for a programmer to predict and write code for all of them.)

So by the early 2000s, there was a growing recognition in the research community that computers could never attain true intelligence without machine learning.

The advent of the internet provided access to large datasets (Big Data) for training ML algorithms.

Processors originally designed to handle 3-D gaming graphics found applications in processing Big Data.

These three ingredients (Big Data, specialised processors and the AGI approach to research) advanced the field of AI, with many practical applications emerging. However, 'traditional machine learning methods' soon became the rate-limiting step.

The recent explosion in ML is fuelled by deep learning (essentially, digital neural networks arranged in several layers).

Deep learning models comprise:

  1. An Input layer (input data),
  2. An Output layer (desired prediction) and
  3. Multiple hidden layers (that combine patterns from previous layers to identify abstract and complex patterns in the data). 
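The layered structure listed above can be sketched as a minimal forward pass. The weights here are toy, untrained values, and the sigmoid non-linearity is a common choice rather than something the book specifies:

```python
import math

def layer(inputs, weights):
    # Each neuron takes a weighted sum of the previous layer's outputs,
    # then applies a non-linearity (here a sigmoid) so that stacked
    # layers can express increasingly abstract patterns.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

x = [0.5, -1.0]                             # input layer (the input data)
h1 = layer(x, [[1.0, -0.5], [0.3, 0.8]])    # hidden layer 1
h2 = layer(h1, [[0.6, -1.2], [1.1, 0.4]])   # hidden layer 2
y = layer(h2, [[2.0, -1.5]])                # output layer (the prediction)
print(round(y[0], 3))
```

Training (e.g. Hinton's backpropagation, mentioned below) is the process of adjusting those weight matrices from data; the layered flow of information is exactly what the three-point list describes.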

In the 1980s, Geoff Hinton et al. developed a fast algorithm for training neural networks with multiple layers. The researchers who followed built upon this foundational work.

Today's Google self-driving car prototypes, and systems like DeepPatient at Mount Sinai Hospital, NY, are outcomes of the machine learning approach.

Machine Learning and the Predictability-Resilience Paradox

In this chapter the author gives the example of the game Go (similar to chess, but more complex). In 2016, Google's Go-playing computer program AlphaGo defeated Lee Sedol, the world champion from Korea. Move 37, played by AlphaGo, couldn't be understood even by the programmers themselves.

Deep-learning ML systems are becoming more intelligent, more dynamic and more unpredictable. In other words, the explicit rule-based algorithms were predictable, whereas deep-learning ML systems are resilient due to their adaptability.

Human beings' tacit knowledge is difficult to explain explicitly. The idea of the tacit dimension was put forth by Michael Polanyi in his 1966 book, The Tacit Dimension.

David Autor refers to this as Polanyi's paradox: we know more than we can tell.

If technology is to solve complex creative problems, its development has to move away from predictable systems towards resilient systems. Examples of such applications are Google's ranking algorithm, Google's self-driving cars and Google's ad-targeting algorithms.

Allowing learning from Big Data taps into undiscovered knowledge hidden in the data that escapes human beings.

On the other hand, the issues that challenge us with this approach are:

1. Adversarial machine learning, i.e. learning from data that has been intentionally manipulated by adversaries trying to make the system malfunction (e.g. sabotaging Google's self-driving cars by planting wrong road signs).

2. Algorithmic bias, i.e. when machines learn from Big Data, they might pick up various kinds of biases.

Probable solutions to overcome these issues: 

1. De-emphasise Big Data and instead focus on 'better data', i.e. carefully curate 'clean' datasets and learn from them. (However, studies show that the outcome of learning from Big Data far supersedes that of learning from smaller, cleaner datasets.)

2. Reinforcement learning, which doesn't tap into Big Data but rather creates it. In other words, the algorithm learns from data generated by itself through self-exploration (e.g. AlphaGo Zero, the newer version of Google's Go-playing software).

3. Employ multiple approaches simultaneously: e.g. self-driving cars deploy both machine learning and rules-based systems manually coded by programmers.

4. Explainable or interpretable machine learning, one of the hottest areas in AI research: how to build ML systems that can explain their decisions.
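Point 2, learning from self-generated data, can be sketched with a toy Q-learning agent. This is my own minimal illustration, not the book's example: the agent produces its own training data by exploring a five-cell corridor, where reaching the right end earns a reward of 1.

```python
import random

random.seed(0)
n_states, actions = 5, [-1, +1]            # move left (-1) or right (+1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for _ in range(200):                       # 200 self-played episodes
    s = 0
    while s < n_states - 1:
        # Explore occasionally; otherwise act greedily on the current Q.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update, using experience the agent just generated itself.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# After self-play, the learned policy is "move right" in every state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(n_states - 1)))
```

No external dataset is ever consulted: every number the agent learns from was produced by its own exploration, which is the essence of the self-play approach behind systems like AlphaGo Zero.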

The Psychology of Algorithms

In this chapter the author provides a 'Nature (genes) vs Nurture (environment)' model to explain how algorithms work.

Nature refers to the logic of early computer algorithms, which were fully programmed. Nurture refers to modern algorithms, which learn from real-world data.

The author points out how XiaoIce and Tay, two similar chatbot algorithms from Microsoft, behaved differently in different data environments. He then points to Microsoft's launch, in March 2017, of Zo, another chatbot, explicitly programmed to avoid political controversies.

These examples provide us with a framework for deconstructing algorithmic systems:

Data <----->Algorithms <-----> People

  1. Data - on which the algorithms are trained
  2. Algorithms - their logic/programmed code
  3. People - the ways in which users interact with the algorithms

Various studies provide the following insights:

  • The like-mindedness of our Facebook friends traps us in an echo chamber (a filter bubble, in which we each have our own narrow information base).
  • We prefer reading news items that reinforce our existing views.
  • Digital echo chambers are driven by the actions of online users.

Data, algorithms and people together form complex interactions and play a significant role in determining the outcomes of algorithmic systems.


In Algorithms We Trust

In this chapter, the author describes the paradox of human beings simultaneously trusting and mistrusting algorithms.

The author cites two incidents: in May 2016, on U.S. Highway 27A, a Tesla Model S sedan in self-driving mode met with a fatal accident, killing its driver; and in March 2018, in Tempe, Arizona, a self-driving vehicle being tested by Uber killed a pedestrian. These aroused significant protest and public mistrust regarding the adoption of self-driving technologies (algorithms).
This negative public opinion runs contrary to the aggregate safety data of self-driving cars in comparison with human drivers.

On the other hand, in the U.S. by the end of 2017, independent robo-adviser investment companies such as Betterment, Wealthfront, Vanguard and others were collectively managing more than $200 billion in assets through their automated investment platforms (algorithms).

Various studies conducted to understand human behaviour towards algorithms give several insights:

1. We do trust algorithms over humans when we are evaluating them against other humans. However, when we compare an algorithm against ourselves, we trust ourselves more.

2. Human beings are more forgiving of their own mistakes than those of the algorithm.

3. Human beings lose confidence in algorithms much more quickly than they do in human forecasters when they observe both making the same mistake.

Which Is to Be Master - Algorithm or User?

The author takes us back in time with the history of the elevator, pointing out that elevators were a predecessor of today's fully automated driverless car.
When operator-less (driverless) elevators were first introduced, people were reluctant and opposed the idea. People would walk into an elevator car and immediately step out, asking "Where's the elevator operator?". In the 1950s there was a strike by elevator operators in New York City. In reaction, building owners forced the issue and designers added reassuring features, the most prominent of which was a big red "stop" button. There was also an intercom phone line for speaking to a remote operator.
Though the stop button and phone line didn't offer users real control, they gave people a sense of control: a feeling that they could interrupt the automated system and take over if they needed to. Eventually, usage of automated elevators went up and people embraced the new operator-less (driverless) elevators.

The above history of elevators is consistent with contemporary research on people's trust in algorithms: if users feel they have some control, however minimal it may be, their trust in the algorithm is significantly enhanced.

Examples of such minimal control: on Netflix, users can respond to recommendations with 'thumbs up' or 'thumbs down' feedback, which the algorithm in turn uses to improve future recommendations.
Another example is Google's search algorithm, which offers decisional control by returning a long list of hits for every search query. The user can scroll through the list and choose the one that best fits their needs.

The author ends the chapter by pointing out that algorithmic decision making is evolving from decision-support systems to autonomous decision makers. With this evolution, the issue of transparency is generating a lot of buzz among AI researchers and social scientists.

Inside the Black Box

In this chapter the author explains to us (the readers) how our trust or mistrust of algorithms develops.

Researcher Kizilcec points out that for human beings there is such a thing as the 'right' amount of transparency: not too little, not too much. According to him, the same applies to algorithms; too much information (TMI) and too little information can both undermine user trust.

The trust deficit can be of the following types:
  • Weakened competence belief - when we suspect the algorithm lacks the required expertise.
  • Weakened benevolence belief - when we suspect the algorithm is trying to maximize its own gains.
  • Weakened integrity belief - when we suspect the algorithm is not upholding values (e.g. honesty, fairness).

Research on decision support systems shows:

  • A HOW explanation of the algorithm alleviates the person's weakened competence belief.
  • A WHY explanation of the algorithm alleviates the person's weakened benevolence belief.
  • A TRADE-OFF explanation of the algorithm alleviates the person's weakened integrity belief.

The above examples of trust are from the end user's (non-technical person's) perspective. However, from the point of view of a technical person, an auditor or a regulator, the higher the transparency of the algorithm, the higher their degree of trust.

So how can we as a society achieve a high level of transparency, so that regulators can audit algorithms?

The obvious answer is through Technical Transparency - making the source code public.

However, this approach is not so simple to implement: for-profit companies' algorithms are their intellectual property with economic value, and cannot be made public for obvious reasons.

Also, even if the source code of modern AI-based algorithms were available for scrutiny/audit, the very nature of these algorithms (significant portions of their logic come through machine learning) makes their behaviour difficult to understand and hence to audit.

An Algorithmic Bill of Rights

The author starts this chapter with the Three Laws of Robotics by science fiction writer Isaac Asimov, written in 1942 in the short story "Runaround".

He then lists the algorithmic principles published in January 2017 by the US Public Policy Council of the Association for Computing Machinery (ACM). These principles cover seven general areas:

  1. Awareness - those who design, implement and use algorithms must be aware of their potential biases and possible harm, and take these into account in their practices.
  2. Access and redress - those who are negatively affected by algorithms must have systems that enable them to question the decisions and seek redress.
  3. Accountability - organizations that use algorithms must take responsibility for the decisions those algorithms reach, even if it is not feasible to explain how the algorithms arrive at those decisions.
  4. Explanation - those affected by algorithms should be given explanations of the decisions and the procedures that generated them.
  5. Data provenance - those who design and use algorithms should maintain records of the data used to train the algorithms and make those records available to appropriate individuals to be studied for possible biases.
  6. Auditability - algorithms and data should be recorded so that they can be audited in cases of possible harm.
  7. Validation and testing - organizations that use algorithms should test them regularly for bias and make the results publicly available.  

Reference is made to Ben Shneiderman, a professor of computer science at the University of Maryland, who issued a call for a National Algorithmic Safety Board. In December 2017, New York City passed a law to set up a new Automated Decision System Task Force to monitor the algorithms used by municipal agencies.

The chapter, also draws our attention to EU's legislation the General Data Protection Regulation (GDPR). GDPR has two main sections:

Nondiscrimination - using algorithms to profile individuals can be intrinsically discriminatory. Therefore, GDPR bans decisions based solely on the use of sensitive (personal) data.

Right to explanation - this addresses issues of transparency. It mandates that users can demand the data behind the algorithmic decisions made about them.

In September 2016, the tech giants came together to create 'The Partnership on AI' for self-regulation, focusing on four key areas:

  1. Best practices for implementing safety-critical AI systems in areas such as health care and transportation;
  2. Detecting and addressing biases in AI systems;
  3. Best practices for humans and machines to work together; and
  4. Social, psychological, economic and policy issues posed by AI.

The author strongly advocates that the users of algorithms (you and I) should also step up and contribute to drafting the bill of rights for humans impacted by algorithms.

Kartik Hosanagar (author) proposes four main pillars of an algorithmic bill of rights:

  1. Transparency of data
  2. Transparency of algorithmic procedures
  3. Providing a feedback loop to the users for communication and to have some degree of control
  4. Users' responsibility to be aware of the risk of unanticipated consequences

This book leaves us with food for thought, or should I say a buffet of thoughts:

"Together, we have to answer one of the more pressing questions we face today. How will we conceive, design, manage, use and govern algorithms so they serve the good of all humankind?" - Kartik Hosanagar
