Monday, October 31, 2022

Let's provide Effective Career Guidance to our School Students by tapping into Media Contents

 

Image by Debi Brady from Pixabay

According to the OECD (Organisation for Economic Co-operation and Development) website, Effective Career Guidance "enables young people to develop informed, critical perspectives about the relationship between education and employment, helping them to visualise and plan their transitions through schooling and into attractive work. Effective guidance is rich in first-hand encounters with the labour market, begins early and responds to the personal circumstances and interests of students."

As you may be aware, the OECD is an international organisation comprising 38 developed member countries, and it works to build policies across various spheres of human society. 

If you are a parent, a teacher or a career counsellor from India, then the OECD's Career Readiness Project featured on their website is bound to catch your attention. OECD member countries like Australia, Canada, Croatia, Finland, France, Germany, Korea, Malta, New Zealand, Spain, the UK and the US have put Effective Career Guidance into practice in their formal schooling systems - primary, lower secondary and upper secondary school.

The formal education systems in countries like Australia, Canada, New Zealand, the UK and the US practise career guidance right from primary school all the way to upper secondary school. In countries like Finland, France, Germany and Korea, career guidance is introduced from lower secondary school onwards, and Malta introduces it in upper secondary school. (Data sourced from the OECD website on 1st of Nov 2022.)

Career guidance for students through their schooling years produces positive outcomes: it connects school subjects with the world of work, removes blind spots, raises awareness of the world of possibilities, builds self-motivation, encourages goal setting, increases focus and improves grades. 

From my personal experience as a parent and as a career counsellor, I regretfully have to say that a systematic, formal career guidance and career awareness program is lacking in our formal Indian schooling system. Of course, a lot of positive changes are happening with the introduction of the National Education Policy (NEP), but as of today our students do not receive universal access to career guidance during their formal schooling to help them connect the subjects they study in school with the future world of work (professions). 

A multi-stakeholder engagement is needed to develop a holistic solution that meets the needs of our students. With the eventual implementation of the NEP, there is hope it will address this gap in our existing schooling system. This blog won't be able to do justice to the seriousness and enormity of this topic; for that, we need to engage in serious academic reading of policy papers and follow the experts and policy makers. Hence, I am taking the approach of 'plucking the low-hanging fruit', which can easily be practised by parents, teachers and career counsellors. 

Let's provide Effective Career Guidance to our school students by tapping into media content which offers insights into different careers and professions. We can seize the opportunity created by the penetration of the internet, smartphones, YouTube, video-on-demand (VOD) and over-the-top (OTT) platforms to provide access to this content to all school students, cutting across the urban-rural divide and socio-economic strata, by spreading the word through social media.

As well-meaning adults, let's introduce our children and students to this content, stimulate their curiosity and help them connect the dots by engaging with them in two-way discussion. Career guidance delivered through media with a flavour of entertainment will also not be perceived by students as yet another overbearing class in their already cramped schedule of studies and tuitions!

This blog will be a running log of career-related media content. As I come across such content, I shall add it to the list. I also call upon your support: share content you come across through the comment section, and please do share this blog among your network so that it reaches more students, parents, teachers and counsellors.

Together we can make a positive difference for our children and students by exposing them to at least some form of Effective Career Guidance, instead of just waiting for guidance to be formally introduced into our education system. 

List of Media Contents: 

INDIA'S BEST JOBS on Discovery+ Channel https://dplus.app.link/Zs7CT0LHAub

It features 26 professions to date, spread over two seasons (13 episodes per season). The show is hosted by Meiyang Chang, a dentist turned performer who achieved fame after debuting on Indian Idol. The theme of the show is 'Discover how one makes their passion a successful career'.

Professions showcased on this show are: 

Season 1: Canine Behaviorist, Paragliding Instructor, Travel Blogger, RJ, Wedding Filmmaker, Fitness Trainer, Custom Bike Modifier, Organic Food Entrepreneurs, Drone Operator, Stand-up Comedian, Celebrity Chef, Lifestyle and Fitness Coach and Film Director.

Season 2: DJ, Project Management, Interior Designer, Theater Director, Ecostay Entrepreneur, Production Designer, Food Truck Entrepreneurs, Hairstylist, Private Detective, Automotive Journalist, Wildlife Photographer, Casting Director and Hospitality Entrepreneur. 

Host Chang goes through a checklist at the end of each episode. This Chang Checklist comprises:
  1. Are you your own boss?
  2. Job satisfaction?
  3. Does it pay your bills?
  4. Client satisfaction?
  5. Perks?
This checklist can introduce students to the concepts of passion, hard work, business acumen and finance management, and make them realise that these concepts are the underlying principles of any profession they choose in future.


BREAKING POINT: INDIAN AIR FORCE ACADEMY on Discovery+ Channel https://dplus.app.link/RG8ZxjzNAub

This is a four-episode series which follows four cadets - Mudit Tewari, Priya Sharma, Amogh Bhandralia and Kartik Thaku - as they undergo various stages of training at the famed institution in Dundigal, Hyderabad. 

This series aims to inspire the youth of India to look at the Air Force as a career opportunity. It also provides insights into the life of Air Force officers and their journey from cadet to Flying Officer. 
The series makes students aware of entrance exams like the NDA and CDS, and offers insights into the hard work, dedication and skills required to qualify, complete the training and meet the demands of the call of duty.

Tuesday, May 31, 2022

It's OKAY not to have a PLAN!


Photo by Rachel McDermott on Unsplash

Society expects us to plan our life as per a standard template and follow a prescribed path of milestones at various stages of life. 

The moment a girl or a boy enters high school, most of them are confronted with the following questions: 

  • Which stream are you planning to take in 11th?
  • What do you want to be when you grow up?
  • Aur beta, bade hokar kya banoge? (And child, what will you become when you grow up?)

In other words, the society expects you to have a Career PLAN.

As a career counsellor, I come across so many teenagers and young adults who haven't yet made up their minds about what they would like to pursue in their lives. Very few of them are comfortable with this lack of clarity. On the contrary, the majority express being in a state of confusion, feel inadequate and desperately seek guidance. 

From my experience, the teenagers who are comfortable with not yet having a plan usually have the safety net of supportive and understanding parents and guardians. However, a significant proportion of the young adults who express confusion, dilemma and inadequacy are dealing with family pressure, peer pressure, pressure from teachers and society's expectation that they have a plan sooner rather than later.

Drawing upon my 20+ years of working experience, I have come across so many bright professionals, women and men, who did not have a concrete plan but are doing extremely well and leading successful careers. They took things as they came along, went with the flow and capitalised on the opportunities life presented to them. 

A self-confession: I too never had a concrete plan. By temperament, I have believed in the importance of life's journey and in staying the course through life's ups and downs, rather than being overly focused on a plan and the destination.

At this juncture, it would be appropriate to shift gears from my viewpoint to real-world evidence (case studies), so that I can bring in objectivity rather than asking you to subscribe to my world view purely on good faith.

I was reading The Week magazine (edition of May 29, 2022). The cover story, 'Lessons Life Has Taught Me', featured several prominent, successful men and women. The narration is in an interview-style format, where they walk us readers through their professional journeys. Behind their illustrious careers and achievements, if you pay attention, you will discover that they too didn't have a plan for their lives. They went through the journey of life meandering, going with the flow, stumbling upon chances and opportunities and making their own luck. 

Let me mention two illustrious examples from this cover story:

  • Bibek Debroy (Economist and chairman of the Economic Advisory Council to the Prime Minister):

As a graduate student, Mr Debroy wanted to study Physics at Presidency College, Kolkata. This was during the mid 1960s, when Kolkata was in the midst of considerable left-wing turbulence, and Presidency College, especially its science department, was perceived to be the hotbed of that turbulence. Hence, his parents denied him permission to study Physics at Presidency. They arrived at a compromise, allowing Mr Debroy to study Economics, not Physics, at Presidency College. 

Just think about it! Someone like Mr Bibek Debroy, who has had such an illustrious career as an economist, didn't even plan to study Economics as a graduate student. Life's circumstances thrust Economics upon him.

Mr Debroy mentions in the interview that his move from theory into application-based Economics was purely by chance. When he wasn't able to secure a permanent teaching position at Presidency College, he applied to the Gokhale Institute of Politics and Economics in Pune. The Gokhale Institute gave him exposure to application-based Economics, which shaped his career as an economist in the years that followed. 

Fast forward several decades: Mr Debroy turned into an author, and when NITI Aayog was formed in 2015, he was one of its first members. Since 2017, he has been the chairman of the Economic Advisory Council to the Prime Minister. 

His brush with writing emerged out of a near-death experience, not out of a well-laid-out plan. In 2004, he was in the ICU dealing with a life-and-death situation. In the interview Mr Debroy says, "When you are there in the ICU, and you realise that you are at that point when you are not sure whether you are going to live or not, it's almost as if your entire life flashes before you. And you begin to ask: Who am I? What am I doing here? I decided that, if I survived, in that calendar year 2004 I was going to bring out 12 books. That year, I actually published 15 books. Since then, I only write what I feel like writing, not because it is going to add to my resume."

Mr Bibek Debroy summarises life's lessons in his own words: 

"I think that life's lesson really is that each one of us has a destiny. The fortunate few may realise this destiny."

  • Ramachandra Guha (Historian, Writer):

In the interview, Mr Guha says, "In my day, at the age of 16, you chose your college subject. And I wanted to play cricket. So, I had to choose a humanities subject which would give me time in the afternoons to go for practice."

Just think about it! One of our most renowned historians and scholars, Mr Guha, didn't plan to take up humanities to become a historian. He took up humanities so that he could continue to play cricket alongside his studies. 

In fact, his first choice of subject was English literature. In the interview he goes on to say, "I would have liked to have studied English literature; back in the 1970s it was regarded as a girl's subject, so I studied economics. I was intellectually directionless in college, but I was doing many other things. I was playing bridge, was in the college quiz team and was editing the college magazine. I got focus in life only after I read about [the British-Indian anthropologist] Verrier Elwin at the age of 21, then got interested in sociology and anthropology."

Today we all know Mr Guha for his magnum opus 'India After Gandhi'. But once again, this project was not an outcome of his own plan; it was suggested to Mr Guha by a publisher. "In the year 1998, India had just celebrated its 50th year of independence. Publisher Peter Straus suggested that I write a book on what happened after 1947. When I was commissioned to write it, I was 40. It came out when I was nearly 50, after almost a decade of researching and writing."

Mr Ramachandra Guha summarises life's lessons in his own words: 

"You must always go by your own instinct. Success is incidental. It is really the quality of work that must give you satisfaction. You must feel that you have done something that you are happy with and that has utilised your energies and talents fully."

'It's OKAY not to have a PLAN!' - has this approach been part of any serious scientific study? 

Apparently yes. 

Stanford Professor John D. Krumboltz, along with his colleagues Levin and Kathleen Mitchell, developed the 'Planned Happenstance Theory'. 

John Krumboltz, right, with a simulation game for choosing careers. (Image credit: Jose Mercado)
Copyright: Image provided by the Stanford University News Services

"Arbitrary events have important influence on people's lives. All these events that happen in life are unpredictable and let's be grateful that they're unpredictable." - John D. Krumboltz

The main tenet of Planned Happenstance Theory is that "things in life will happen", whether we like them or not, and that we can, and need to, prepare ourselves to see and take up these opportunities.

Krumboltz et al. recognise that career planning doesn't depend on one-off career decisions taken as a teenager or as a working professional. Rather, career planning is ongoing, often unplanned or influenced by unplanned and unpredictable events. 

So next time you find yourself in a dilemma, in self-doubt and feeling miserable over the lack of clarity about your next career steps, tell yourself: "It's OKAY not to have a Plan!"

After this comforting self-talk, take the key points of Krumboltz et al.'s Planned Happenstance Theory and apply them in your career journey:

  1. Be aware of your surroundings - it's important to see opportunities and to keep your options open.
  2. Take risks, even with rejection as a possible outcome - trying is better than not trying at all; not trying leads to lost opportunities.
  3. Be adaptable and open-minded - accept changes and engage with them. Say 'yes' when you can, not only when there's no other option.
  4. Qualities that help you make the most of chance opportunities are:
  • Curiosity
  • Persistence
  • Flexibility
  • Optimism
  • Risk taking
  5. Attributes that help turn chance opportunities into career opportunities are:
  • Commitment to ongoing learning
  • Ongoing self-assessment
  • Feedback from others
  • Effective networking
  • Work-life balance
  • Financial planning for unemployment

In our culture, we are expected to be decisive about our career goals and to have a plan. This cultural expectation puts those who are uncertain under pressure and makes them feel inadequate. 

I hope it is now evident that an undecided person who is actively exploring and learning about career opportunities may very well carve out an unexpected but fulfilling career. 

Even for those who have clearly defined career goals now, those goals may not remain fixed forever. They may find their goals changing over time, as life progresses and situations change. 

So remember, it's OKAY not to have a plan. Just keep yourself open to chance events, be curious, be optimistic, take risks, be flexible, stay persistent, keep learning and explore new opportunities. Who knows what you'll end up doing!

#career #learning #success #opportunity #luck #life #careerguidance #careerplanning #careercoach #counselling #careergoals #destiny #wisdom #hope #believe #trust #journey #lifejourney #inspiration #plannedhappenstance #krumboltz

References:

  1. The Week magazine cover story 'Lessons Life Has Taught Me' also features Harsha Bhogle, Ritu Kumar, Sushmita Sen, Narayan Murthy and Tarun Tahiliani. https://www.theweek.in/tag.theweek~package@Lessonslifehastaughtme.html
  2. https://ed.stanford.edu/news/stanford-professor-john-d-krumboltz-who-developed-theory-planned-happenstance-dies
  3. https://marcr.net/marcr-for-career-professionals/career-theory/career-theories-and-theorists/planned-happenstance-theory-krumboltz-levin/

Friday, May 27, 2022

Book Insights 3/3: The Information Diet: A Case For Conscious Consumption by Clay Johnson

 

 

This blog is only a summary note of the book and does not capture the full content and all the details. 
This blog is written for academic purposes; please cite the book The Information Diet: A Case for Conscious Consumption, author Clay Johnson, publisher O'Reilly, when referencing. 
I encourage readers to buy the book for a detailed reading. 
It's available on Amazon Kindle: https://www.amazon.in/Information-Diet-Ca-Johnson/dp/1491933399

Clay Johnson, in his book The Information Diet: A Case for Conscious Consumption, takes a very interesting position by drawing an analogy between the industrialization of food (the fast food culture) and the industrialization of information (the hyper digital media culture).

Mindless eating of fast food (low nutrient, high calories) leads to weight gain/obesity. 

Similarly, mindless consumption of information (high quantity but low in quality) leads to information obesity. 

The author alerts readers by pointing out that human beings are hard-wired to crave salt, sugar and fat.

Similarly, human beings are hard-wired for affirmation of their beliefs, fear, hate and gossip.

The appeal he makes to us is not to be passive consumers of media, but to recognise that we all have a 'choice', and to use this agency to decide what to consume and what to avoid. 

The book brings up the concept of fiduciary responsibility, i.e. media companies serve their shareholders by focusing on revenue and profit margins. This business model translates into:
  1. Tweaking news headlines to make them more palatable to the audience.
  2. Creating link bait, as more clicks = more ads = more revenue for the media house.
  3. Multivariate testing: in the initial 5 minutes, two variant headlines of the same news item are put out online, and the headline which draws more clicks stays online.
  4. Replacing experienced journalists with networks of less qualified, cheaper independent contractors.
  5. Content farming: editors make decisions on four parameters: i. traffic potential, ii. revenue potential, iii. turnaround time and iv. editorial quality.
  6. Deployment of SEO (Search Engine Optimization).
  7. Catering media content towards the algorithms and SEO.
  8. Churnalism: simply copy-pasting what's in a press release (permissible plagiarism) rather than producing value-added news articles.
  9. Letting advertisements and sales take over as the primary driver; for free or low-priced content, the consumers unknowingly become the product.
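The multivariate headline test in point 3 is easy to picture in code. Here is a minimal sketch, purely for illustration: the headlines and click counts are invented, and a real newsroom system would gather clicks from live traffic during the trial window.

```python
# Toy version of a multivariate headline test: publish variant headlines
# for a short trial window, then keep whichever drew more clicks.
# All headlines and click counts below are hypothetical.

def pick_headline(variants, trial_clicks):
    """Return the variant that drew the most clicks in the trial window."""
    best_index = max(range(len(variants)), key=lambda i: trial_clicks[i])
    return variants[best_index]

headlines = [
    "Government Announces New Education Policy",
    "Five Ways the New Education Policy Will Change School Life",
]
# Hypothetical clicks recorded during the initial 5-minute window.
clicks = [120, 340]
print(pick_headline(headlines, clicks))
```

The incentive the author describes falls straight out of this loop: whichever wording maximises clicks wins, regardless of editorial quality.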
The author also makes us aware of bad science, i.e. vested interests coming together to fund research studies favourable to them. He gives examples of:
  • US Big Tobacco companies creating organizations such as the Center for Indoor Air Research and ARISE (Associates for Research in The Science of Enjoyment) [to create doubts in the minds of smokers and non-smokers]
  • The American Enterprise Institute (think tank), funded by ExxonMobil and Philip Morris [anti-climate-change lobby]
  • The Climategate controversy of 2009, at the University of East Anglia's Climatic Research Unit.
This battle between science and doubt production makes the information landscape murky.  

As you may have noticed, all the points above relate to the external environment. 

The author now makes us look inwards, i.e. at three shortcomings of our nature which keep us ignorant and ill-informed:
  1. Agnotology: The more informed someone is, the more hardened their beliefs become, irrespective of the information being factually correct or incorrect.
  2. Epistemic closure: Dismissing all other sources of information as unreliable.
  3. Filter failure: The bubble we create for ourselves to avoid cognitive and ego burden.
In fact, social media feed algorithms are creating filter bubbles for us.

The author lists the symptoms of information obesity: 
  • Distorted sense of reality
  • Loss of social breath (inability to nurture meaningful interactions due to the sheer overload of one's online network)
  • Attention fatigue
  • Poor sense of time (screen addiction, staying in virtual reality)
  • Poor decision making
  • Loss of productivity and efficiency
The author offers us a solution by coining a new term: 'INFOVEGANISM'.

At the heart of infoveganism lies:
  • Changing the mindless/passive/auto-pilot habits of media consumption
  • Becoming a conscious consumer
  • Following ethics
  • Mastering data literacy
Mastering data literacy comprises the following skills:
  1. How to search? (verify the source)
  2. How to filter? (think critically, make good judgement)
  3. How to process? (draw insights)
  4. How to produce? (as a content creator, focus on quality and value)
  5. How to synthesize? (connecting the dots and making inferences)
The author provides simple hacks for staying in control and being mindful of our information diet:
  1. Prevent attention fatigue. In other words, maintain attention fitness through: i. strategic allocation of attention (choose what is important to you and filter out the rest), ii. will-power, iii. measurement (self-assessment of your media consumption) and iv. elimination (based on self-assessment, minimise the unwanted and the excess).
  2. Set a daily limit for online/screen time and stick to it.
  3. Sign up for advertisement-free content.
  4. Pay for consuming good content.
  5. Set priorities for yourself; use rules and filters to cut off unwanted emails, notifications etc.
  6. Operate out of consciousness and be mindful. Remind yourself of this and practise the habit.
This quote from the author Clay Johnson sums up the message of the book:

"Obesity is a complicated problem. Obviously, obesity has to do with access, and obesity has to do with the economic conditions, but it sometimes also has to do with overeating, and the same thing happens with information. I think a lot of people don't have great access to information and good information, that's for sure, but also in the world of the internet, we have almost universal access to everything that we need. And that means that we have to make empowered decisions and informed decisions about what it is that we're consuming. It's the only way to sort of 'live right' online." - Clay Johnson

#information #informationoverload #InformationOverloadDay #conscious #Consciousliving #ConsciousChoices #Consciousminds 

Friday, May 6, 2022

Book Insights 2/3: A Human's Guide to Machine Intelligence - Kartik Hosanagar


This blog is only a summary note of the book and does not capture the full content and all the details. 
This blog is written for academic purposes; please cite the book A Human's Guide to Machine Intelligence, author Kartik Hosanagar, publisher Penguin, when referencing. 

I encourage readers to buy the book for a detailed reading. 
It's available on Amazon: https://www.amazon.in/Humans-Guide-Machine-Intelligence-Algorithms/dp/0525560882
and Flipkart: https://www.flipkart.com/human-s-guide-machine-intelligence/p/itmf6hwazfmhramh

  


The book A Human's Guide to Machine Intelligence by Kartik Hosanagar provides readers valuable insights into two million-dollar questions: how are algorithms shaping our lives, and how can we stay in control?

The book consists of three parts:

Part One: THE ROGUE CODE

  • Free Will in an Algorithmic World
  • The Law of Unanticipated Consequences
Part Two: ALGORITHMIC THINKING

  • Omelet Recipes for Computers: How Algorithms Are Programmed
  • Algorithms Become Intelligent: A Brief History of AI
  • Machine Learning and the Predictability-Resilience Paradox
  • The Psychology of Algorithms
Part Three: TAMING THE CODE

  • In Algorithms We Trust
  • Which Is to Be Master - Algorithm or User?
  • Inside the Black Box
  • An Algorithmic Bill of Rights
  • Conclusion: The Games Algorithms Play

The Preface section traces the journey of the development of artificial intelligence. 

In 1992 Viswanathan Anand played Fritz, a chess-playing software program, and won easily. In 1994, an improved version of Fritz beat Anand and several other grandmasters. By 1997, IBM's Deep Blue had beaten world chess champion Garry Kasparov in a six-game match. Deep Blue had the ability to explore 200 million possible chess positions in a mere second. Google's latest chess-playing software, AlphaZero, taught itself the game in just 4 hours. It explores only 80,000 positions per second before making the move with the greatest likelihood of success. 

With the advancement of AI, its applications moved from the sphere of chess and games into our daily lives. The book gives examples of Flipkart's recommendation algorithm, the speech-recognition start-up Liv.ai, Myntra's AI-generated apparel designs, GreyOrange deploying robots in warehouses, HDFC's OnChat chatbot and recruitment companies using AI to screen resumes.

With AI permeating all spheres of life, it also brings unique challenges, e.g. data security, data privacy, job creation and job displacement, AI biases etc.

Introduction section: 

It talks about Microsoft's chatbot XiaoIce, which was launched in 2014. It was developed after years of research on NLP (natural language processing) and conversational interfaces. XiaoIce attracted more than 40 million followers and friends on WeChat and Weibo (social apps). In other words, humans interact with XiaoIce on a regular basis just as two human friends would. 

Woebot is a chatbot therapist being used to help people maintain their overall mental health. 

Microsoft introduced Tay.ai on Twitter in the US in 2016. It garnered 100,000 interactions in just 24 hours, but soon took on an aggressive personality of its own, sending out extremely racist, fascist and sexist tweets. The very next day, Microsoft shut down the project's website.

An algorithm is a series of steps (logic) which one needs to follow to arrive at a defined outcome.

Traditionally, programmers developed algorithms by hand, and these were static by nature. 

With advancements in artificial intelligence (AI), algorithms can now take in data, learn new steps (logic) on their own and generate more sophisticated versions of themselves.  

Machine learning (ML) is a subfield of AI that empowers machines with self-learning ability, so that they can improve with experience. 

Modern algorithms incorporate both AI and ML. The most popular versions are built on neural networks, opaque machine learning techniques. Even their human programmers can't anticipate, explain or sometimes even understand the strategies and behaviours they learn. They are thereby advancing beyond a decision-support role (offering suggestions) to become autonomous systems that make decisions on our behalf.   
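The contrast between a static, hand-written algorithm and a learning one can be shown in a few lines. This is my own toy illustration, not an example from the book; the "spam" rule, message lengths and labels are all invented, and the "learning" here is just picking a threshold from example data rather than having a programmer fix it.

```python
# A static algorithm: the rule is fixed by the programmer and never changes.
def static_spam_check(message: str) -> bool:
    return "free money" in message.lower()

# A crude "learning" algorithm: it infers its rule (a length threshold)
# from labelled examples instead of having the rule hard-coded.
def learn_threshold(lengths, labels):
    candidates = sorted(set(lengths))

    def accuracy(t):
        # How many training examples does threshold t classify correctly?
        return sum((length >= t) == is_spam
                   for length, is_spam in zip(lengths, labels))

    return max(candidates, key=accuracy)

# Hypothetical training data: message lengths and whether they were spam.
train_lengths = [5, 8, 40, 52, 60]
train_labels = [False, False, True, True, True]
print(learn_threshold(train_lengths, train_labels))
```

Feed the learner different data and it produces a different rule; that data-dependence is exactly why the behaviour of modern ML systems can surprise even their programmers.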

ProPublica published a report on how algorithms used in Florida courtrooms were biased in favour of white defendants and against black defendants. 

On the afternoon of May 6, 2010, a large mutual fund group used a single algorithm to sell 75,000 contracts in just 20 minutes. This led to a domino effect as other companies' trading algorithms, seeing the market behaviour, attempted to exit the market by selling more stocks. In about half an hour, nearly $1 trillion of market value was wiped out. This event became known as the 'flash crash'.

Cathy O'Neil, data scientist and political activist, calls algorithms built on big data "weapons of math destruction". According to philosopher Nick Bostrom, the inherent unpredictability of AI poses an existential threat to humans. 

Despite these concerns, modern AI-based algorithms are here to stay. Hence the author Kartik Hosanagar delves into the mind of the algorithm and answers three related questions:

  1. What causes algorithms to behave in unpredictable, biased and potentially harmful ways?
  2. If algorithms can be irrational and unpredictable, how do you decide when to use them?
  3. How do we, as individuals who use algorithms in our personal or professional lives, and as a society, shape the narrative of how algorithms impact us?

THE ROGUE CODE 

Free Will in an Algorithmic World

The author takes us through the daily routine of a senior research fellow, which seems random at one level, but makes us wonder about the degree to which the algorithms of Facebook, Google, Tinder and Amazon play a role in his day-to-day circumstances. Essentially, it makes us ask: to what extent are we in control of our own actions?

The author explains that popular design approaches (notifications and gamification) are meant to increase user engagement. They exploit and amplify human vulnerabilities (the need for social approval, instant gratification), making us act impulsively and against our better judgement. 

The author also makes us realise that though we feel a sense of free will in making the final decision, in reality 99% of all possible alternatives are excluded by search algorithms (Google's algorithms determine which ones are featured at the very top of the results page). Automated recommendations are also a major driver of the choices we make about what to buy, what to watch and what music to listen to online. 

The algorithms of Amazon, Walmart, Netflix, Spotify, Apple's iTunes and Google's YouTube gently nudge us in specific directions.

Social media algorithms are the chief drivers of the content we see. Facebook's, Instagram's and Twitter's algorithms determine which potential stories or posts you should read first, which you can read later and which you don't need to read at all. The news-feed algorithms in social networking websites play a crucial role in the reportage we read and the opinions we form about the world around us.
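At its core, a news-feed algorithm of this kind scores each post and shows the highest-scoring ones first. Below is a deliberately simplified sketch of that idea; the scoring weights, field names and post data are all invented for illustration and bear no relation to any real platform's actual formula.

```python
# Toy news-feed ranker: score each post on (invented) signals such as
# affinity with the poster, engagement and recency, then sort by score.

def rank_feed(posts):
    def score(post):
        return (2.0 * post["friend_affinity"]   # closeness to the poster
                + 1.0 * post["likes"]           # engagement signal
                - 0.1 * post["age_hours"])      # recency penalty
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "friend_affinity": 0.9, "likes": 3, "age_hours": 1},
    {"id": 2, "friend_affinity": 0.1, "likes": 50, "age_hours": 30},
    {"id": 3, "friend_affinity": 0.5, "likes": 10, "age_hours": 5},
]
print([p["id"] for p in rank_feed(posts)])
```

Notice that the choice of weights entirely decides what you see first; tuning them is how a platform shapes "the reportage we read".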

Algorithms are also selecting our social networks for us. LinkedIn's algorithm suggests adding people you emailed recently to your professional network, Facebook's algorithm recommends whom to add as friends, and Tinder and Match.com recommend whom you date or marry.

The book presents a case study of Match.com, where the recommendation algorithm is based not just on the stated preferences submitted by subscribers, but on their online behaviour (which profiles they checked and clicked). 

The Law of Unanticipated Consequences

The author introduces us to Kevin Gibbs, who programmed Google Search's autocomplete (suggestions) tool. The predictions are based on the user's own recent search queries, on trending news stories and on what other people are searching for on Google. This feature brings in much efficiency (time saving), but there is also an unintentional outcome: autocomplete reveals the prejudices existing around the subject of a search. Is Google therefore unintentionally leading impressionable people, who did not initially seek this information, to webpages filled with biases and prejudices?

Another case study presented to us is the world's first beauty contest judged by AI. Though more than 6,000 people from 100 countries participated, the 44 winners chosen by the software were nearly all white. 

Google Photos faced a similar issue in 2015 with its photo-tagging algorithm. A photo of Jacky Alcine (a software engineer) and his friend, both black, was auto-tagged "Gorillas". The image-processing algorithms hadn't been trained on a large enough number of photos of black people and were therefore unable to distinguish between different skin tones.

ALGORITHMIC THINKING 

Omelet Recipes for Computers: How Algorithms Are Programmed

In this chapter the author presents the recommendation engines of three digital music platforms: Pandora, Last.fm and Spotify.

Pandora's algorithms are based on "content-based recommendation". These systems start with detailed information about a product's characteristics and then search for other products with similar qualities.

Last.fm's algorithms are based on "collaborative filtering". For example, if you like Yellow by Coldplay, the algorithm will find someone else who likes Yellow by Coldplay and look at what else she listens to. It then recommends those new songs to you.

Spotify's algorithm tries to combine these two methods. 

Pandora has a deep understanding of music and vocabulary, which emerged from the Music Genome Project, in which musicologists listened to individual tracks and assigned more than 450 attributes to each. 

The collaborative filters lack in-depth knowledge; they are simplistic in approach and easy to roll out, and have therefore become the most popular class of automated product recommenders on the internet (e.g. Netflix, YouTube, Google News).
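As a quick illustration, here is a toy Python sketch of the two styles (my own simplification - the song attributes, listening histories and matching logic are invented, and real systems like Pandora's Music Genome Project are far more sophisticated):

```python
# Content-based filtering matches songs on their attributes;
# collaborative filtering matches listeners on overlapping tastes.

def content_based(song, catalogue):
    """Recommend the song sharing the most attributes with `song`."""
    target = catalogue[song]
    scores = {
        other: len(target & attrs)
        for other, attrs in catalogue.items() if other != song
    }
    return max(scores, key=scores.get)

def collaborative(user, listens):
    """Recommend a new song from the most similar listener's history."""
    mine = listens[user]
    # Find the neighbour whose listening history overlaps ours the most.
    neighbour = max(
        (u for u in listens if u != user),
        key=lambda u: len(mine & listens[u]),
    )
    # Suggest something the neighbour listens to that we haven't heard.
    new = listens[neighbour] - mine
    return next(iter(new)) if new else None

catalogue = {
    "Yellow":  {"rock", "melancholy", "male vocals"},
    "Fix You": {"rock", "melancholy", "male vocals", "piano"},
    "Toxic":   {"pop", "dance", "female vocals"},
}
listens = {
    "alice": {"Yellow"},
    "bob":   {"Yellow", "Fix You"},
    "carol": {"Toxic"},
}

print(content_based("Yellow", catalogue))  # -> Fix You (shared attributes)
print(collaborative("alice", listens))     # -> Fix You (bob's overlapping taste)
```

Here both methods happen to agree, but note the difference: the content-based method needed someone to label every song's attributes, while the collaborative method needed only listening histories - which is exactly why the latter is so much easier to roll out.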

How do social media websites such as Facebook, LinkedIn and Tinder recommend people?

The common approach they take is to match based on similarities in demographic attributes - age, occupation, location, shared interests in topics and ideas discussed on social media. 

An alternative approach used by other algorithms relies on people's social networks themselves. For example, if you and I are not connected on LinkedIn but have more than a hundred mutual connections, we are notified that we should perhaps be connected.

Both systems are intuitive. They are based on homophily (our tendency to connect with those most like us).  
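The mutual-connections idea can be sketched in a few lines of Python (a hypothetical toy network of my own; real platforms score candidates with far richer signals):

```python
# Suggest pairs who are not yet connected but share at least
# `threshold` mutual connections (a toy "people you may know").

def suggest(network, threshold=2):
    """Yield (a, b, count) for unconnected pairs with enough mutuals."""
    people = sorted(network)
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if b in network[a]:
                continue  # already connected
            mutual = network[a] & network[b]
            if len(mutual) >= threshold:
                yield (a, b, len(mutual))

# A symmetric friendship graph: li knows ravi and meena but not the rest.
network = {
    "asha":  {"ravi", "meena", "john"},
    "john":  {"asha", "ravi", "meena"},
    "ravi":  {"asha", "john", "meena", "li"},
    "meena": {"asha", "john", "ravi", "li"},
    "li":    {"ravi", "meena"},
}

print(list(suggest(network)))
# -> [('asha', 'li', 2), ('john', 'li', 2)]
```

The suggestions fall out of homophily alone: li is nudged towards asha and john purely because their circles already overlap.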

In this chapter, the author touches upon another interesting topic - Neighbourhoods (for brick and mortar stores and for Digital world).

With years of experience we all know the importance of location in real estate. A store in the right location will have customer footfall. The concept of location extends inside grocery stores too: similar items are placed next to one another on the same shelf. "Neighbourhoods", therefore, allow us as customers to quickly find what we are looking for.

For digital neighbourhoods, the author gives examples from the early days of the World Wide Web, when Yahoo and GeoCities classified businesses by the products and services they offered. It was the conventional directory converted into a website, with products and services organised into various categories.

With the information explosion, this early form of online categorisation died a natural death and was replaced by algorithms: search engines, recommendation systems and social news feeds. In other words, today's digital neighbourhoods are managed by these new forms of algorithms.

Google's PageRank algorithm ranks webpages not only by the occurrence of search terms on the pages but also by the hyperlinks from other pages ("inlinks") that a page receives. 
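Here is a minimal power-iteration sketch of the PageRank idea (the link graph is invented, and Google's production algorithm has long since grown beyond this basic form):

```python
# A page's score depends on the scores of the pages linking to it:
# each page splits its own rank equally among its outlinks, and a
# damping factor models a surfer occasionally jumping to a random page.

def pagerank(links, damping=0.85, iterations=50):
    """`links` maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

# A hypothetical three-page site: every page links back to "home".
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
ranks = pagerank(links)
# "home" receives inlinks from every other page, so it ranks highest.
print(max(ranks, key=ranks.get))  # -> home
```

The key property the sketch preserves is that rank comes from *inlinks*, not from the page's own content, which is what made PageRank so hard to game with keyword stuffing.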

Facebook's ad-targeting algorithms stem from social connections, since similar people tend to be friends and hence share a similar affinity for brands, products and services.

Amazon's shopping recommendation "people who viewed/bought this product also viewed/bought these other products", creates a "network" of interconnected goods.  

Food for thought - does Amazon's recommendation algorithm (collaborative filtering) increase diversity? After all, wasn't the internet meant to be a democratising tool? 

The truth is that common algorithms reduce the diversity of items we consume, because collaborative filters are biased towards popular items. Since they recommend items based on what others are consuming, obscure items are ignored. 

The author and his team provide examples from simulation experiments they carried out, demonstrating that these algorithms can create a rich-get-richer effect for popular items. Even though individuals might discover new items, aggregate diversity doesn't rise, and the market share of popular items only increases. 
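The rich-get-richer effect can be reproduced with a toy simulation of my own (not the authors' actual experiment): most users follow a popularity-weighted recommendation, a few explore at random, and popularity compounds.

```python
# Toy rich-get-richer simulation: five items start equal, but users
# who follow popularity-based recommendations amplify small leads.
import random

random.seed(42)

items = ["A", "B", "C", "D", "E"]
plays = {item: 1 for item in items}  # everyone starts equal

for _ in range(10_000):
    if random.random() < 0.9:
        # 90% of users accept a popularity-weighted recommendation
        choice = random.choices(items, weights=[plays[i] for i in items])[0]
    else:
        # 10% explore uniformly at random
        choice = random.choice(items)
    plays[choice] += 1

top = max(plays, key=plays.get)
share = plays[top] / sum(plays.values())
print(f"most popular item holds {share:.0%} of all plays")
```

Despite all five items being identical in quality, the item that gets an early lucky streak ends up with a disproportionate share of plays - individual users still "discover" items, but aggregate diversity does not rise.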

The author points to Spotify's hybrid design (a collaborative filter combined with a content-based method) as a better solution to this problem. (In the content-based method, experts and AI/ML are deployed to research the attributes of the product - the song - in detail, rather than relying on popularity alone.)

Food for thought - we generally don't evaluate algorithms holistically, hence the unintended consequences of an algorithm are overlooked.

ALGORITHMS BECOME INTELLIGENT 

A Brief History of AI

The author takes us back in time to 1783 by narrating the story of the 'Mechanical Turk', a chess-playing machine which turned out to be a hoax involving a hidden human player, a mechanical arm and a series of magnetic linkages. 

More than a century and a half later, in 1950, Alan Turing published a paper posing a profound question: "Can machines think?" He imagined a computer that might chat with humans and fool them into believing it too was human. This came to be known as the Turing Test, a measure of the intelligence of machines.

Soon after, John McCarthy proposed a workshop to pursue the question of how to make machines solve problems that only humans were assumed capable of solving. He named his conference the Dartmouth Summer Research Project on Artificial Intelligence, and it took place in 1956. In this workshop, human-level intelligence was recognised as the gold standard to aim for in machines. It also established AI as a branch of science: what is thinking, what are machines, where do the two meet, and how and to what end?

By 1959, Allen Newell and Herb Simon had built a program, the Logic Theorist, which proved theorems from the seminal work Principia Mathematica. 

By 1967, engineers at MIT had developed Mac Hack VI, the first computer to enter a human chess tournament and win a game. 

The AI community used different approaches for building intelligent systems:

  1. Rules of Logic
  2. Statistical techniques (infer probabilities of events based on data)
  3. Neural networks (inspired by how network of neurons in human brain fires to create knowledge)

By 1969 the neural network approach had lost favour, when Marvin Minsky and Seymour Papert published Perceptrons, a book critical of neural networks that highlighted their limitations.

With more promises than output, the 1970s and 1980s saw funding for AI research on a downward spiral. The funding was redirected to other areas of computer science, such as networking, databases and information retrieval.

The AI community had to set more-realistic near-term goals.

The previous approach of AI research was: AGI (artificial general intelligence), also described as 'strong AI'.

The newer approach of AI research became expert systems approach: ANI (artificial narrow intelligence), also described as 'weak AI'.

May 11, 1997, when IBM's Deep Blue computer defeated Garry Kasparov, was a very important milestone in the history of modern computing.

However, the ANI approach faltered because the algorithms would fail in any situation they were not explicitly programmed for. (Complex activities involve an infinite number of situations, so it is an endless task for a programmer to anticipate and write code for all of them.)

So by the early 2000s, there was growing recognition among the research community that computers could never attain true intelligence without machine learning.

The advent of the internet provided access to large datasets (Big Data) for training ML algorithms.

Processors originally designed to handle 3-D gaming graphics found application in processing big data.

These three ingredients - big data, specialised processors and the AGI approach to research - advanced the field of AI, with many practical applications emerging. However, the 'traditional machine learning methods' soon emerged as the rate-limiting step.

The recent explosion in ML is fuelled by deep learning (essentially, digital neural networks arranged in several layers).

Deep learning models comprise: 

  1. An Input layer (input data),
  2. An Output layer (desired prediction) and
  3. Multiple hidden layers (that combine patterns from previous layers to identify abstract and complex patterns in the data). 
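This layered structure can be sketched with a tiny hand-wired network (the weights here are set by hand purely for illustration; in real deep learning they are learned from data, and real networks have vastly more neurons and layers):

```python
# A minimal input -> hidden -> output network computing XOR.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums passed through sigmoid."""
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def forward(x1, x2):
    # Hidden layer: two neurons hand-tuned to act roughly as OR and AND.
    hidden = layer([x1, x2], weights=[[6, 6], [6, 6]], biases=[-3, -9])
    # Output layer combines them into XOR: "OR and not AND".
    (out,) = layer(hidden, weights=[[10, -10]], biases=[-5])
    return out

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # prints the XOR truth table
```

XOR is the classic example here because a single neuron cannot compute it - it is only by combining patterns from the hidden layer that the output layer captures the more abstract pattern, which is exactly the point of the hidden layers described above.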

In the 1980s, Geoff Hinton and colleagues developed a fast algorithm for training neural networks with multiple layers. Researchers who followed built upon this foundational work.

Google's self-driving car prototypes and systems like DeepPatient at Mount Sinai Hospital, New York, are outcomes of the machine learning approach.

Machine Learning and the Predictability-Resilience Paradox

In this chapter the author gives the example of the game Go (similar to chess, but more complex). In 2016, Google's Go-playing computer program AlphaGo defeated Lee Sedol, the world champion from Korea. Move 37, played by AlphaGo, couldn't be understood even by the programmers themselves.

Deep-learning ML systems are becoming more intelligent, more dynamic and more unpredictable. In other words, the explicit rule-based algorithms were predictable, whereas deep-learning ML systems are resilient due to their adaptability.

Human beings' tacit knowledge is difficult to explain explicitly. The idea of the tacit dimension was put forth by Michael Polanyi in his 1966 book, The Tacit Dimension. 

David Autor refers to this as Polanyi's paradox - We know more than we can tell.

If technology is to solve complex, creative problems, its development has to move away from predictable systems towards resilient systems. Examples of such applications are Google's ranking algorithm, Google's self-driving cars and Google's ad-targeting algorithms.

Allowing learning from Big Data taps into undiscovered knowledge hidden in the data - knowledge which escapes human beings.

On the other hand, the issues challenging us with this approach are: 

1. Adversarial machine learning, i.e. learning from data which is intentionally manipulated by adversaries trying to make the system malfunction (e.g. sabotaging Google's self-driving cars by planting wrong road signs). 

2. Algorithmic bias, i.e. when machines learn from Big Data, they might pick up different kinds of biases. 

Probable solutions to overcome these issues: 

1. De-emphasise Big Data and instead focus on 'better data', i.e. carefully curate 'clean' datasets and learn from them. (However, studies show the outcome of learning from Big Data by far supersedes that of learning from smaller, cleaner datasets.) 

2. Reinforcement learning, which doesn't tap into Big Data but rather creates it. In other words, the algorithm learns from data generated by itself through self-exploration (e.g. Google's AlphaGo Zero, the newer version of the Go-playing software).

3. Employing multiple approaches simultaneously, e.g. self-driving cars deploying both machine learning and rules-based systems manually coded by programmers.

4. Explainable or interpretable machine learning, one of the hottest areas in AI research: how to build ML systems that can explain their decisions.
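The self-exploration idea behind reinforcement learning (point 2 above) can be sketched with a toy example of my own - a tabular Q-learning agent in a five-state corridor, which is of course far removed from AlphaGo Zero:

```python
# The agent starts with no data at all. By acting in the environment
# and observing rewards, it generates its own experience and learns
# a value table (Q) from it - "creating Big Data" in miniature.
import random

random.seed(0)

N = 5              # corridor of states 0..4; reward only at state 4
ACTIONS = (-1, 1)  # step left or step right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update from self-generated experience
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy should step right (+1) in every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N - 1)}
print(policy)
```

No dataset was ever supplied: the only "training data" was the agent's own trial-and-error experience, which is the essence of the approach AlphaGo Zero took to a vastly greater scale.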

The Psychology of Algorithms

In this chapter the author offers the 'Nature (genes) vs Nurture (environment)' model as a way of explaining how algorithms work. 

Nature refers to the logic of early computer algorithms, which were fully programmed. Nurture refers to modern algorithms, which learn from real-world data. 

The author points out how XiaoIce and Tay, two similar chatbot algorithms from Microsoft, behaved differently in different data environments. He then points to Microsoft's March 2017 launch of Zo, another chatbot, which was explicitly programmed to avoid political controversies. 

These examples provide us a framework for deconstructing algorithmic systems:

Data <----->Algorithms <-----> People

  1. Data - on which the algorithms are trained
  2. Algorithms - their logic/programmed code
  3. People - the ways in which users interact with the algorithms

Various studies provide the following insights:

  • The like-mindedness of our Facebook friends traps us in an echo chamber (a filter bubble, in which we each have our own narrow information base).
  • We prefer reading news items that reinforce our existing views. 
  • Digital echo chambers are driven by the actions of online users.

Data, algorithms and people interact in complex ways and together play a significant role in determining the outcomes of algorithmic systems.


TAMING THE CODE 

In Algorithms We Trust

In this chapter, the author describes the paradox of human beings simultaneously trusting and mistrusting algorithms.

Two incidents are cited: in May 2016, on U.S. Highway 27A, a Tesla Model S sedan in self-driving mode met with a fatal accident, killing its driver; and in March 2018, in Tempe, Arizona, a self-driving vehicle being tested by Uber killed a pedestrian. Both aroused significant protest and public mistrust of self-driving technologies (algorithms).
This negative public opinion is contrary to the aggregate safety data of self-driven cars in comparison with human drivers.

On the other hand, in the U.S. by the end of 2017, independent robo-adviser investment companies such as Betterment, Wealthfront, Vanguard and others were collectively managing more than $200 billion in assets through their automated investment platforms (algorithms).  

Various studies on human behaviour towards algorithms give several insights:

1. We do trust algorithms over humans when evaluating them against other humans. However, when we compare an algorithm against ourselves, we trust ourselves more.

2. Human beings are more forgiving of their own mistakes than those of the algorithm.

3. Human beings lose confidence in algorithms much more than they do in human forecasters (predictions) when they observe both making the same mistake.

 
Which Is to Be Master - Algorithm or User?

The author takes us back in time with the history of the elevator, pointing out that elevators were a predecessor of today's fully automated driverless car. 
When operator-less (driverless) elevators were first introduced, people were reluctant and opposed to the idea. They would walk into an elevator car and immediately step out, asking, "Where's the elevator operator?". In the 1950s there was a strike by elevator operators in New York City. In reaction, building owners forced the issue and designers added reassuring features, the most prominent being a big red "stop" button. There was also an intercom phone line for speaking to a remote operator.
Though the stop button and phone line didn't offer real control, they gave people a sense of control - that they could interrupt the automated system and take over if they needed to. Eventually, usage of automated elevators went up and people embraced the new operator-less (driverless) elevators. 

This history of elevators is consistent with contemporary research on people's trust in algorithms: if users feel they have some control - however minimal it may be - their trust in the algorithm is significantly enhanced. 

Examples of such minimal control: on Netflix, users can respond with 'thumbs up' or 'thumbs down' feedback to recommendations, which the algorithms in turn use to improve future recommendations.
Another example is Google's search algorithm, which offers decisional control through a long list of hits for every search query. The user can scroll through the list and choose the one that best fits their needs. 

The author ends the chapter by pointing out that algorithmic decision making is evolving from decision support systems into autonomous decision makers. With this evolution, the issue of transparency is generating a lot of buzz among AI researchers and social scientists.


Inside the Black Box

In this chapter the author explains how our trust or mistrust of algorithms develops. 

Researcher Kizilcec points out that for human beings there is such a thing as the 'right' amount of transparency - not too little, not too much. According to him, the same applies to algorithms: too much information and too little information can both undermine user trust. 

Trust deficits are of the following types: 
  • Weakened competence belief - when we doubt that the algorithm truly has the required expertise.
  • Weakened benevolence belief - when we suspect the algorithm is trying to maximize its own gains.
  • Weakened integrity belief - when we doubt that the algorithm upholds values (e.g. honesty, fairness).

Research on decision support systems shows:

  • A HOW explanation of the algorithm, alleviates the person's weakened competence belief.
  • A WHY explanation of the algorithm, alleviates the person's weakened benevolence belief.
  • A TRADE-OFF explanation of the algorithm, alleviates the person's weakened integrity belief.

The above perspectives on trust are from the end user's (non-technical person's) point of view. From the point of view of a technical person, an auditor or a regulator, however, the higher the transparency of the algorithm, the higher their degree of trust.

So how can we as a society achieve a high level of transparency, so that regulators can audit the algorithms?

The obvious answer is through Technical Transparency - making the source code public.

However, this approach is not so simple to implement: for-profit companies' algorithms are their intellectual property; they have economic value and can't be made public, for obvious reasons. 

Moreover, with modern AI-based algorithms, even if the source code is available for scrutiny, their very nature (significant portions of their logic come through machine learning) makes their behaviour difficult to understand, and hence to audit.

An Algorithmic Bill of Rights

The author starts this chapter with the Three Laws of Robotics by science fiction writer Isaac Asimov, written in 1942 in the short story "Runaround".

He then lists the algorithmic principles published in January 2017 by the US Public Policy Council of the Association for Computing Machinery (ACM). These principles cover seven general areas:

  1. Awareness - those who design, implement, and use algorithms must be aware of their potential biases and possible harm, and take these into account in their practices.
  2. Access and redress - those who are negatively affected by algorithms must have systems that enable them to question the decisions and seek redress.
  3. Accountability - organizations that use algorithms must take responsibility for the decisions those algorithms reach, even if it is not feasible to explain how the algorithms arrive at those decisions.
  4. Explanation - those affected by algorithms should be given explanations of the decisions and the procedures that generated them.
  5. Data provenance - those who design and use algorithms should maintain records of the data used to train the algorithms and make those records available to appropriate individuals to be studied for possible biases.
  6. Auditability - algorithms and data should be recorded so that they can be audited in cases of possible harm.
  7. Validation and testing - organizations that use algorithms should test them regularly for bias and make the results publicly available.  

Reference is also made to Ben Shneiderman, a professor of computer science at the University of Maryland, who issued a call for a National Algorithmic Safety Board. In December 2017, New York City passed a law to set up a new Automated Decision System Task Force to monitor the algorithms used by municipal agencies. 

The chapter also draws our attention to the EU's General Data Protection Regulation (GDPR). GDPR has two main sections:

Nondiscrimination - using algorithms to profile individuals is intrinsically discriminatory. Therefore, GDPR bans decisions which are solely based on the use of sensitive data (personal data).

Right to explanation - this addresses issues related to transparency. It mandates that users can demand the data behind the algorithmic decisions made about them.

In September 2016, the tech giants came together to create 'The Partnership on AI' for self-regulation, which is focusing on four key areas: 

  1. Best practices for implementing safety-critical AI systems in areas such as health care and transportation;
  2. Detecting and addressing biases in AI systems;
  3. Best practices for humans and machines working together; and
  4. Social, psychological, economic and policy issues posed by AI.

The author strongly advocates that the users of algorithms - you and I - should also step up and contribute to drafting the bill of rights for humans impacted by algorithms. 

Kartik Hosanagar (author) proposes four main pillars of an algorithmic bill of rights:

  1. Transparency of data
  2. Transparency of algorithmic procedures
  3. Providing a feedback loop to the users for communication and to have some degree of control
  4. User's responsibility to be aware of the risk of unanticipated consequences 

This book leaves us with food for thought - or should I say a buffet of thoughts - 

"Together, we have to answer one of the more pressing questions we face today. How will we conceive, design, manage, use and govern algorithms so they serve the good of all humankind?" - Kartik Hosanagar