
BLOGS- BLACKSWAN AI

RESEARCH BLOGS ON EVOLVING AI STANDARDS & GOVERNMENT AI POLICIES

Chief Research Advisor: Ian Head

Contributor: Rudrani Bose


How do the data rights of citizens and government policies on AI usage in different countries and regions affect AI adoption around the world?

Find out more

THE LANGUAGE PARADOX IN AI PART 1: GPT 3 NOT ENOUGH


Contributor: Aparajita Bandopadhyay


Why are human-invented natural languages still the biggest paradox for AI? Why shouldn't we believe GPT-3 to be real AGI, despite tall marketing claims, including The Post's?

Find out more

THE LANGUAGE PARADOX- PART 2: TRANSLATIONS


Contributor: Aparajita Bandopadhyay


One language is tough enough. Now have some empathy for machines and think like an algorithm: what kinds of challenges do they face while translating between languages?

Find out more

AI & HI: BASICS OF NEUROSCIENCE


Contributor: Aparajita Bandopadhyay

What is the basis of all the claims that AI is very similar to how human intelligence works? How do human brains work?

Find out more

7 STILL-UNSOLVED FEATURES OF HUMAN LANGUAGES


Contributor: Aparajita Bandopadhyay


Which 7 features of human languages make them the hardest problem on the path to achieving AGI?

Find out more

AI APPLICATIONS IN COVID, AI LEADERSHIP HABITS

Contributors: Oishani Bandopadhyay, Aparajita Bandopadhyay

Why has AI not been as effective in beating COVID as it was originally thought to be?

What are the key leadership traits for successful AI applications in an enterprise?

Find out more

AMAZON AI CONCLAVE & THE GREAT BIG UNION BUDGET


A perfect equation emerged in my constantly lateral-thinking brain, connecting the dots between India's economic priorities and the learnings & insights gathered from the Amazon AI Conclave held just a few days before...

Find out more

7 MOST EFFECTIVE NEXT-AI SERVICES


A lot changed on the demand and supply sides of AI services in 2020. From this year onwards, differentiating your AI offerings has become far harder than it was in early 2020. Here are 7 next-AI blue-ocean opportunities for providers to build on...

Find out more

LEADERSHIP CHANGES & AI-AUTOMATION CULTURE CHANGE


How do leadership changes affect the culture and AI-automation journeys of service provider organizations? Learnings from recent examples...

Find out more

AI PARTNERSHIPS & ECOSYSTEM LEADERS EMERGING POST-COVID

How are the AI partnership equations changing in post-COVID 2020?

Find out more

AI-automation governance, and governments of the world

Ian Head (https://www.linkedin.com/in/ian-head-245168/) 


Researcher: Rudrani Bose (https://www.linkedin.com/in/rudrani-bose-a2a1441b0)


Many Governments have AI Strategy documents in place that we can expect to lead to legislation and major investment in the next 3 years. To reach global markets and avoid confrontation, AI companies should be aware of government activity in this field. 


The Global Landscape


Artificial Intelligence presents limitless growth opportunities. Yet uncontrolled growth could quickly convert into threats if unchecked by regulatory standards. As a result, more than 20 countries across the world, along with groups like the World Economic Forum and the European Union have released AI strategy documents. Many of these can be expected to result in legislation. 


Although the United States of America released documents on artificial intelligence back in 2016, it was Canada, in 2017, that became the first country to release a national AI strategy.

Since there are numerous policy documents, it is imperative to conduct a comparative analysis of these strategies, using specific standard parameters, to identify similarities, differences and the overall impact of AI on global populations. The following are snippets of some governments' AI strategy and policy-making initiatives:


  • The 2017 Federal budget of Canada highlighted the five-year Pan-Canadian Artificial Intelligence Strategy, making it the first country to do so. 
  • On the 11th of February 2019, President Donald Trump announced the United States national strategy on artificial intelligence, called the American AI Initiative. With a vision similar to the Canadian approach, the American AI initiative aims to promote and foster a national culture using artificial intelligence technology and innovation. 
  • In 2017, Singapore launched the AI Singapore program to invest up to S$150 million (USD 111 million) in artificial intelligence over five years. The program brought together stakeholders from AI R&D, AI-based start-ups, and companies developing AI products to collaborate for the future of the digital economy. The National AI Strategy of Singapore was launched in November 2019 to drive responsible adoption and innovation of artificial intelligence in Singapore, targeting improvements in logistics and transport, smart cities, health, education, estates, safety and security. 
  • In the federal budget of 2018-19, the Australian government earmarked AUD 29.9 million (USD 21.7 million) to develop the nation's AI and machine learning (ML) capabilities, to be utilized by the Department of Industry, Innovation and Science, the CSIRO and the Department of Education and Training.
  • In 2018, at the World Economic Forum in Davos, the then UK Prime Minister Theresa May announced the UK's largest-ever investment in research and development of digital technology.


Key highlights from leading governments' AI strategies and policies


The key highlights of the AI strategies and policies from leading governments around the world are mentioned below:


USA

· Encourage investments in AI research and development to unleash AI resources and remove barriers to AI innovation

· Upskill and train an AI-enabled workforce 

· Promote an international environment fostering responsible use of American AI innovation

Through this initiative, the United States hopes to nurture a climate of respect for freedom, human rights, intellectual property rights, the rule of law and equal opportunities for all in a new future with artificial intelligence (Whitehouse).

One interesting observation to make here is that in the USA there is no explicit constitutional right to privacy. That is why European GDPR laws are so problematic for US companies, which expect to be able to do what they like with citizens' data. The EU thinks otherwise. In September 2020, Facebook threatened to quit Europe over this issue [https://www.computing.co.uk/news/4020505/facebook-threatens-quit-eu]. 


Singapore

· To be a leader in AI by 2030 through developing scalable, impactful AI products to deliver value to its citizens

· Build an AI-ready workforce to prepare citizens for the technology change. 


Australia

In 2018, the Government of Australia released a digital economy strategy – Australia’s Tech Future – highlighting the vision for governments, businesses and society to collaborate and reap the maximum benefits from digital technology. 

Australia’s concept of the future of AI involves the following objectives:

· To explore the future of AI in Australia, develop capabilities and build adequate policies to match capabilities

· To enable digitalization and solve challenges in the sectors of health and welfare, energy, education, transport, infrastructure and environment

· To improve the safety, efficiency and quality of processes in Australian industries


United Kingdom

The United Kingdom is ranked #1 in the Oxford Government AI Readiness Index, in terms of the ability of governments to take advantage of the benefits of automation. The AI Sector Deal articulates its vision for the development of AI as follows: 

To develop an industrial strategy through a focus on the five foundations of productivity:

- Ideas – to be the most innovative economy in the world

- People – to ensure employment and greater earning power for citizens

- Infrastructure – to boost the UK's infrastructure

- Business environment – to create a progressive and supportive business environment

- Places – to nurture prosperous communities.


References

https://www.weforum.org/agenda/2019/08/artificial-intelligence-government-public-sector/

http://www.unesco.org/new/en/media-services/single-view/news/canada_first_to_adopt_strategy_for_artificial_intelligence/

https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy

https://ec.europa.eu/commission/presscorner/detail/en/IP_18_3362

https://www.whitehouse.gov/ai/executive-order-ai/

    

AI for social equity & justice: Just how Just is AI?

Contributing author: Aparajita Bandopadhyay


There are multiple interesting UPSIDE perspectives on the most common and most difficult-to-mitigate bias in AI: the gender bias. As Sapolsky said in his recent (Dec 2020) interview, gender bias is so ingrained in societies across the regions and social strata of the world that it is the most difficult one to get rid of, both in human decision models and in the AI world. Often, in order to eliminate source-data bias in training datasets, we drop attributes/ parameters/ features that contribute directly to and aggravate biases in the inferences/ models. Now, if we start a deep analysis of the indirect and derived attributes that contribute to biases, even passively, and choose to mask/ weigh down/ drop those features/ attributes too, we may end up with a resultant training dataset that's extremely sparse!
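Here is a minimal, hypothetical sketch of the masking trade-off described above: drop the direct and derived gender-correlated attributes before training. The column names ("gender", "first_name", "salary_history") are illustrative assumptions, not from any real dataset.

```python
# Minimal sketch (not AISWITCH's method): masking direct and derived
# gender-correlated attributes before training, as described above.
# All column names here are hypothetical.
import pandas as pd

def mask_bias_features(df: pd.DataFrame,
                       direct=("gender",),
                       derived=("first_name", "salary_history")) -> pd.DataFrame:
    """Drop attributes that directly or indirectly encode a protected trait."""
    keep = [c for c in df.columns if c not in set(direct) | set(derived)]
    return df[keep]

applicants = pd.DataFrame({
    "gender": ["F", "M"],
    "first_name": ["Asha", "Arun"],
    "salary_history": [48_000, 55_000],
    "years_experience": [6, 6],
    "assessment_score": [88, 88],
})

debiased = mask_bias_features(applicants)
print(debiased.columns.tolist())   # only the non-protected features remain
# The trade-off noted above: drop too many correlated features and the
# remaining training set can become extremely sparse.
```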


Despite all these issues, the funny thing specifically about gender biases in AI is that there are certain upsides to them:

  • As an AI recruiter/ essay evaluator use case showed: when the genders of the applicants/ authors are masked, the same write-ups/ profiles receive the same points/ scores from evaluators, irrespective of the genders of either the applicants/ writers or the evaluators. But when the genders are explicitly mentioned or are derivable (e.g. from names), the same material or content is scored poorly for writers identifiable as women, even by women evaluators. Well, at least, thanks to the AI use case, the inherent biases of evaluators, irrespective of their own gender, were proven with data! Now, function leaders and evaluators alike could plan to do something to mitigate these implicit, hidden biases!
  • Another recruitment-assistant AI use case was trained on datasets that carried the obvious and well-known trend that female employees, even at the same skill and competency levels, with the same performance ratings and outcomes from similar roles, were paid less than men. Being trained on these datasets, and given the objective function of assisting in hiring the best talent at the lowest cost/ CTC, the AI agent started hiring more women, given they came with the same or higher caliber but at lower cost! This, in effect, actually helped mitigate the hiring bias, and in the long run the biased AI agent hired more women than men until the eternal demand-supply balance kicked in at some point on the curve. Result: the cost of hiring women went up significantly, thereby reducing the gender-related pay gaps!


Here is a candid interview on women in AI and what can be done at policy levels as well as at personal levels, to mitigate the Practice Biases in AI, specifically about the ever-increasing gender gap in this deep STEM field...

NASSCOM IndiaAI interview of AISWITCH on Women in AI

#1 Human-invented Machines vs. Human-invented Languages

THE LANGUAGE PARADOX FOR AI: GPT3 still not enough

 Contributing researcher: Aparajita Bandopadhyay


Why is language still the toughest problem to crack, in AI?


  • The core paper on GLUE - the NLP/NLU benchmarking system for General Language Understanding Evaluation - mentions that the human ability to understand language is general, flexible and robust. Well, several NLP/ NLU algorithmic techniques and systems developed thus far have been none of these (a minimal sketch of loading a GLUE task for benchmarking follows this list). 
  • Even with GPT-3's 175 billion parameters, the articles it generates still require relevance and context testing of the output, and consequently a huge amount of editing. Claiming GPT-3 to be the nirvana of NLG and the ultimate step towards AGI in the field of language is arguable. The recent claims by a well-known media company about using GPT-3 to generate a perfectly credible article for their newspaper triggered a big debate amongst AI technologists, practitioners, and marketing folks. 
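For readers who want to poke at the benchmark itself, here is a minimal sketch of loading one GLUE task with the Hugging Face `datasets` library (assuming it is installed); the model being evaluated is deliberately left out.

```python
# Minimal sketch: loading one GLUE task (MRPC, paraphrase detection) with the
# Hugging Face `datasets` library to benchmark an NLU model, as GLUE intends.
# Assumes `pip install datasets`; the scoring model itself is omitted.
from datasets import load_dataset

mrpc = load_dataset("glue", "mrpc")          # train / validation / test splits
example = mrpc["train"][0]
print(example["sentence1"])
print(example["sentence2"])
print(example["label"])                      # 1 = paraphrase, 0 = not

# GLUE aggregates nine such tasks; a system is "general, flexible and robust"
# only if it scores well across all of them, not just one.
```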


Do we determine what our words mean, or do words determine what we mean?


Though this rather childish-looking question may seem to be something which only characters as truculent as Tweedledum and Tweedledee would care to dispute ad infinitum, upon a little further analysis, it can really throw us into a confused dilemma.


  • What is language? The Oxford dictionary defines it as 'the method of human communication, either spoken or written, consisting of the use of words in a structured and conventional way’. Many writers have considered it to be far greater than the aforementioned definition, and some consider it to be an omnipotent and utterly ingenious device which can alter the course of our lives.
  • Languages have not only been shaped by us, but have done the converse, too; words have cheer-led, misled and led us to many a strange time and place in the course of our past. Though it is not evident to most at first glance, languages can serve as a radically deterministic power over the way we think. Most learners, upon being subjected to a foreign tongue, express their intense dislike for dense and complex grammatical modules, as well as intangible scripts; we are often led to wonder what strange concatenation of events must have led to the development of such a volatile and complex thing as language, which was originally designed for the ease of communication.
  • Almost all languages share the common essentials- they have distinct categories for the grouping of verbs, nouns and adjectives, as well as tense. The sheer fact that so many languages have so much in common shows us the astounding uniformity underlying human thought; only when contrary examples are cited do we recognise this; consider, for example, a certain language called Hopi, which bears absolutely no references to time! Several languages, including Indonesian and Mandarin, lack discrete verb tenses; but to think of living without the concept of Time, in all its abstraction and paradoxical regularity, seems to be impossible. So many aspects of time- its linearity, its irreversibility, its ability to serve as a dimension, of sorts- have been embedded in us from a young age, via our languages. If time travel were possible, we would have very strange tenses indeed!


Noam Chomsky was one of the first linguists to suggest that the proficient usage of any language requires a certain instinct, and even, perhaps, a certain genetic code. Yet, several languages are unique in their own ways; German and Russian, for example, are largely devoid of a distinct continuous tense; French allows the usage of double negatives to mean negatives!

LANGUAGE PARADOX PART 2: TRANSLATIONS

It's already hard enough for machines to interpret and process ONE human language. The paradox becomes exponentially harder for languages in translation.  

Here is why

#2 The language paradox in AI: Language in Translation


 Contributing researcher: Aparajita Bandopadhyay

Languages in Translation: Wasn't one language enough?


To quote Franz Kafka, who, interestingly enough, was a brilliant renegade of a writer- ‘All language is but a poor translation’.

  • Many foreign words, which have resisted easy translation, have simply been incorporated without any change into English- e.g., schadenfreude and weltschmerz- and some linguists, such as Bickerton, have developed a theory which roughly states that a people which does not experience certain events, or know of certain objects, will not create words corresponding to them. This is an intuitively obvious idea; but, quoting Matt Ridley, it would be absurd to argue that only Germans can understand the concept of schadenfreude, and the rest of us find the concept of taking pleasure from the sorrows of our neighbours foreign.
  • Redundancies in words, too, inevitably appear in the development of languages. We must also call into consideration the fact that omnipresent and abundant things cannot, logically, have words corresponding to themselves; why would we want a word for something that is taken for granted? After all, the things we know the best are the things we don’t know that we know. Even this may serve as a hindrance to our complete comprehension of another language. It is also a highly engaging activity for our brains to learn and communicate with languages. Our brains use our Broca’s areas, Wernicke’s areas, prefrontal cortices, and several other centres for constructing simple sentences.
  • The fact that languages have influenced us greatly is proved by most of our inabilities to instantly grasp a new language- yet it is ourselves who have influenced languages in such diverse ways that different ones are hard to grasp. Languages are yet another proof of the often laughable circularity underlying human thought (every definition is ultimately circular- for example, the definition of a 'thing' can be an 'object', and that of an 'object' may be a 'thing'- upon being asked what both of them mean, we can smugly reply, 'the same thing as each other!'). Our inevitable ineptitude for the expression of our most elementary axioms reveals not only the visceral and nebulous nature of our concepts, but also the idea of a strange, uniform camaraderie between us, as a race. 
  • Languages delude us about ease of communication very often; yet, we often laugh at our own complexity in thought when trying to straighten out paradoxes of clever phrasing, and oxymorons (how many of us have really heard silence?).


No wonder, then, that the best of Google Translate APIs and NLP/ language translator modules from Microsoft Azure or AWS often generate translated text strings that read more like AI-generated lame jokes.
One good service they all provide to human intelligence, though: they augment our sense of humour- "Maria... makes me... laugh!"- so goes the famed song from The Sound of Music! 
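A quick way to experience this yourself is a round-trip test: translate a phrase out of English and back again. The sketch below uses the Google Cloud Translation v2 client purely as an illustration and assumes the library is installed and credentials are configured.

```python
# Minimal sketch of a round-trip test: translate a sentence out of English
# and back, then compare. Assumes `pip install google-cloud-translate` and
# that application default credentials are configured.
from google.cloud import translate_v2 as translate

client = translate.Client()
original = "Maria makes me laugh."

forward = client.translate(original, target_language="de")["translatedText"]
back = client.translate(forward, target_language="en")["translatedText"]

print(original)
print(back)   # often close, sometimes comically off for idiomatic phrasing
```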

#3 Machines vs. Human Intelligence: Neuroscience Basics

Basics of human intelligence and neuroscience part 1: Why is AI still a very distant Imitation Game?

Contributing researcher: Aparajita Bandopadhyay


Ground Rules of AI & HI Part 1: Basics of Neuroscience- A Young Researcher's Point of View

Ever since Turing's phenomenal paper on the Imitation Game was published in the 1950s, almost all definitions of AI have included the common notion that AI attempts to somewhat 'mimic' the ways of human intelligence. But do we even know exactly how human intelligence works in the first place, so that we can mimic it in machines?  
In the past two decades, thanks to huge progress in physics, engineering and MRI scanning technologies (e.g. fMRI of human brains during various activities- ref. Michio Kaku, The Future of the Mind), a lot has been revealed about how the human brain works, from a neuroscientific perspective. Here is a VERY SMALL collection of some of these basics.

Q: What are synapses?

  • A synapse is basically the gap between the ends of the axon terminals of two nerve cells which serves as a transmitting medium, or sort of connection, between them. Every time we learn something new or practice something, new synapses get formed and old ones get strengthened. Synaptic transmission via transmitters and receptors occurs mainly in two ways:
  • Electrical synapses: Two neurons can be coupled together by 'gap junctions'. A gap junction is a form of protein that allows ions and other small molecules to move between cells. Neurons connected via gap junctions are found in areas where they need to be very well-connected and coordinated, such as the hormone-secreting regions of the brain, like the hypothalamus.
  • Chemical synapses: These synapses don't allow small molecules and ions to move between cells. Hence, several neurotransmitters (recall part 2) are required for transmission of signals here.


Q: How can we manufacture and strengthen new and old synapses?

  • In order to strengthen old synapses, we must repeat the task we performed, or a task similar to it. This helps strengthen the connections between neurons and gyri, and makes the brain perform the task with much greater efficiency afterwards. This shows that with adequate practice, we can master anything to a level where it almost seems like we are 'effortlessly' performing a task. The more often you perform a similar task, the less energy your brain will spend on it, letting you develop new skills with relative ease, too! 
  • The more our brain learns, the more it learns how to learn.
  • Before practicing a new and important skill, try to adjust your mindset so that it tolerates and accepts new ideas and opinions. Do not forget old information- but be open-minded about new information, and try to look at information from different perspectives to understand it better. 
  • Using music, mnemonics, acrostics, rhythms and patterns is a great way to revise, practice and learn new things. Constantly practice thinking about something important by looking at it from different angles, analyzing it well, and also try to explain it to someone else. All these exercises will make your synapses super-strong!

Learn More

Languages are the biggest unsolved mysteries


7 still-unsolved features of Human Languages: An AGI Paradox

Why the language hurdle to AGI is just so tough to surpass!

Contributing researcher: Aparajita Bandopadhyay



As defined by Robert Sapolsky in his lecture on ‘Language’ at Stanford, human language cannot be considered 'language' without these seven key features that distinguish it from communication between other creatures. 

These include:

  1. semanticity, 
  2. embedded clauses, 
  3. recursion, 
  4. displacement, 
  5. arbitrariness, "Motherese", 
  6. meta communication, and 
  7. prosody.


Each of these facets of human language is exclusive to our species, and one or another of them is typically the reason that the various apes who were taught ASL (American Sign Language) over the years have been unable to recreate language that is truly ‘human’. 


For communication to be complete, practical and real in the human world, each of these features is essential. In the same way, AI with natural language processing/ comprehension & understanding/ generation of contextually & semantically sensible dialogs could only become human-like if these features were embedded into the famed Siris and Alexas. The difficulty lies in creating these features without compromising datasets or precision, thereby reaching the holy grail of Artificial General Intelligence (AGI). 


1) Semanticity- the ability to generate and convey meaning, by ‘bucketing’ sounds to create words, is the most fundamental feature of any and every human language. The instinctiveness of meaning makes it all the more difficult to completely transfer meaning into a system, and clearly makes it incredibly challenging to generate meaning in novel words and communicate successfully. The awareness of semanticity in itself, the definition of semanticity that thereby gives the word meaning, is an almost inexplicable, and simultaneously intuitive, concept.


2) Embedded clauses are arguably the easiest to represent through programming languages as well as human languages, using simple to complex logic constructs- from Aristotle's term logic to propositional and predicate logic that form the foundations of AI. 

  • Adding conditions or details to a pre-existing clause, that answer questions such as where, when, how or why, in relation to the clause, creates language rich in information. 
  • Specifications through embedded clauses result in more informative sentences, and thus more meaningful communication. 
  • If further detailing and longer sentences do provide more information in a sophisticated algorithm, this facet can be conquered.


3) Recursion or generativity is an incredibly interesting property of language: a finite number of words can produce an infinite number of sentences, and a sentence can have infinite length, bounded only by practical time constraints. 

  • By embedding a sentence in another sentence with perhaps an identical format, you can generate a longer, different sentence. ‘They don’t know that we know they know we know’, is a perfect example that could technically last forever continued in the same exact pattern, without losing meaning. 
  • Recursion in programming usually refers to self-referencing, such as recursive functions that call themselves within the function definition (a minimal sketch follows this list).
  • Generativity is relatively easy for people to manufacture, by adding clauses, and may be easy to manufacture randomly as well, with strict boundaries of meaningfulness.
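As a tiny illustration of the self-referencing idea in the bullet above, here is a recursive function that builds a 'they know... we know' sentence of arbitrary depth (a toy sketch, not from the source).

```python
# Minimal illustration of recursion as self-reference: a function that builds
# an arbitrarily long "they know ... we know" sentence by calling itself.
def they_know(depth: int) -> str:
    if depth == 0:
        return "we know"
    # embed the shorter sentence inside a longer one of the same shape
    return "they know " + they_know(depth - 1)

print(they_know(3))   # "they know they know they know we know"
```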


4) Displacement is the one feature that most famous chimpanzees and gorillas learning ASL could not achieve. Displacement involves the ability to talk about different time periods, different people, regardless of present circumstances, and without only conveying current emotion. Being able to talk about things emotionally distant from us, and dissociating our communication from our situation, is a unique capability that we seem to convey easily. Giving each other information in ‘facts’ not directly pertaining to ourselves, or even getting the answer from a chatbot when you ask for 23rd January’s weather, are examples of displacement ingrained in human language.


5) Arbitrariness refers to the lack of connection between the meaning of words and their shapes or sounds. Adjectives such as ‘heavy’ or ‘sad’ are not created with relation to the shapes of the letters in them, or resembling the sounds they make. The arbitrariness of human language makes it difficult to compare languages by sound or script, since neither of the two convey meaning. The randomness of meaning is, therefore, immune to guesswork or brute force.


6) Meta communication, the ability to communicate about communication and discuss language at all, is another feature of human language, forming the basis for natural language processing and generation. Meta communication also refers to secondary communication that can change or add to the meaning of a sentence or communication. 


7) Prosody, which forms a derivative part of meta communication, involves things such as intonation, stress, rhythm and other parts of body language that accompany a unit of communication. The accompanying body language and intonation can convey different meanings for even the same sentence, which can be expressed through meta communication, such as sarcasm created using varied tones of voice. 


Motherese, or baby talk, consists of the difference in intonation, including stress on vowels, that parents use when communicating with their infants, with the intention of teaching them how to speak. Specific to humans, it is an important part of child language acquisition, a field growing in importance to machine learning through natural language acquisition.


Each of the 7 features that distinguish human language from others seems instinctive and unnoticeable in daily conversation. To train a language model on these basic but generic features of human languages is a different ballgame altogether.  

AI applications & Leadership: Blackswan AI

How are different governments handling AI ethics questions?

Ethics of AI – Multiple regional requirements pose a reputational risk to AI in general

Ian Head


Contributing researcher: Rudrani Bose



AI leaders and strategists must be aware of different national ethics requirements for AI. While local products can learn and abide by the local rules, international and global products must be architected carefully to avoid lengthy legal confrontation, and reputational damage.  


A child no more, AI has grown-up responsibilities now


Since 2019, most companies have been using AI to improve process performance and costs, and some of the early mover multi-country organizations are also focussing on leveraging AI to improve customer experience. They are doing it by using AI to assure uniform levels of consistency, reliability, performance and service quality, across their regions of operations. 

Now Ethics in AI has become a key issue especially when companies are using AI and intelligent automation across their value spectrum and supply chains & partner ecosystems. AI developers must ensure that their deployments are able to comply with the ethical requirements in all the regions where their systems will be deployed. Breaking local privacy laws is invariably seen as arrogant and intrusive and malicious intent is often inferred. 

In this context, ethical use of AI in the enterprise becomes the most critical change pivot. AI-user (employee/ customer) experience is significantly influenced by the perceived ethical behaviour of the AI solutions. This is applicable for all enterprise AI solutions regardless of specific domains and usecases, e.g. conversational AI agents (e.g. customer service chatbots) or digital workers (e.g. intelligent ops process bots for claims processing, invoice processing, KYC, duplicate invoice detection, false claims & fraud detection, retail shrinkage detection, and so on). 

All global enterprises have to work within the government policy frameworks in their regions of operations, and strategic AI usecases must often comply with multiple local government policies and priorities on ethical AI. International and global use cases require policy adaptability so as to adhere to multiple regional idiosyncrasies, as US companies working in the European Union have discovered (1).  

Here are some examples of current key government initiatives on ethical AI. 


Some Regional Examples of Government Policies on AI Ethics


Australia

The Australian government released a report along with CSIRO listing out the framework for an ethical AI. This states that any AI system must:

1. Control any adverse or unethical effects of AI to ensure that the generation of net benefits for citizens are greater than the costs and risks of adopting and developing AI and associated technology stacks

2. Demonstrate regulatory and legal compliance with international and governmental guidelines to prevent AI systems from harming or deceiving citizens 

3. Ensure the protection of citizens’ private data and prevent data breaches causing financial, psychological or professional harm

4. Develop controls to monitor the fairness and inclusivity of data systems and restrain damage from biased algorithms. The term “bias” has expanded scope considerably in recent years. 

5. Ensure adequate transparency, responsibility and accountability of governments, organisations and people by ensuring that identifiable data is only used after gaining the consent of each citizen.


Canada

The Treasury Board Secretariat (TBS) of Canada is working towards the responsible use of AI in government. It is working with the international AI community to facilitate an open online consultation to foster collaboration on international guidelines for the ethical use of data systems. 

The federal government wishes to lead by example through adopting healthy ethical practices in its use of artificial intelligence. To ensure the proper and ethical use of AI, the government plans to:

  1. Estimate the impact of using AI through the development of, and collaboration on, standard tools and approaches
  2. Adopt transparent and accountable practices in AI decision-making and carefully review the use of AI, along with the user benefit generated
  3. Enable openness in data sharing while ensuring national security, defence and the integrity of personal and organisational data and information
  4. Provide adequate training to upskill government employees so that they possess the capability to facilitate the development, design and usage of reliable AI solutions and public services.


USA

The American AI Initiative launched by President Donald Trump labels itself “AI with American Values” in seeking to protect and preserve freedom, human rights, institutional stability, the rule of law, the right to privacy, intellectual property rights and equitable opportunities for all. The right to privacy itself is not clearly stated and defined in the US constitution, unlike in the EU, where the right to privacy is clear and mandatory, e.g. under GDPR. 

In keeping with that philosophy, in February 2020, the US Department of Defense adopted a guideline of ethical principles to govern the use of AI. The regulations seek to incorporate responsibility, equitability, reliability, traceability and governability in digital technology used not only for military and defence purposes but also for commercial and for-profit business objectives. It aims to maintain the strategic global leadership of the US and respect the rules-based international order.
However, US corporations have encountered difficulties when they assume that compliance with US law confers compliance with laws in other regions- for example in the European Union, where the right to privacy is strictly stated in the General Data Protection Regulation (GDPR).


UK

The government of the United Kingdom defined the ethical framework required to design responsible digital systems as those which 

- respect the dignity of individuals and allow people to form open, meaningful and inclusive connections. 

- Protect social values, social justice and prioritise public interest by providing for the wellbeing of all.


The Centre for Data Ethics and Innovation was set up by the government in 2018 to help navigate the ethical challenges posed by rapidly evolving digital systems. For this purpose, the Alan Turing Institute, the Office for Artificial Intelligence and the AI Council were also set up. As the UK leaves the EU, it is expected that these bodies will publish new recommendations to replace and possibly enhance the EU's General Data Protection Regulation (GDPR).


Singapore

Compared to other countries, Singapore has yet to develop a concrete strategy to incorporate ethical practices in AI systems. In June 2018, Singapore announced the establishment of an AI ethics advisory council  on the development and deployment of artificial intelligence, to guide the government in building ethical standards and codes of conduct for businesses. 

We can expect this body to produce guidance or regulations in the near future but whether the requirements will be similar to those of other bodies referred to in this document, is as yet unknown. 



Evidence:

(1) Facebook warns it may be forced to pull out of Europe  https://www.telegraph.co.uk/technology/2020/09/21/facebook-warns-could-pull-services-europe/ 

(2) https://www.theleaflet.in/specialissues/right-to-privacy-in-the-united-states-of-america-by-nehmat-kaur/

(3) https://thehill.com/opinion/judiciary/445975-what-do-you-mean-theres-no-right-to-privacy-in-america 


A precise take on AI standards & government policies

The Regulation of Artificial Intelligence

Contributing researcher: Rudrani Bose  https://www.linkedin.com/in/rudrani-bose-a2a1441b0

Standards and guidelines for the regulation of artificial intelligence are rapidly emerging and are more important now than ever before. Artificial intelligence is not only a new technology, but also a work in progress, with advancements occurring every day. Fierce competition for the development and deployment of AI technologies is likely to emerge in the future, necessitating legal standards to act as a deterrent to AI being employed for unethical purposes. 


Some of the existing legal standards governing artificial intelligence are:


  • IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
  • The IEEE Standards Association's IEEE P7000 series of working groups on AI standards, comprising 14 groups, which addresses issues at the meeting point of technological and ethical concerns
  • European Union’s General Data Protection Regulation (GDPR)


A key challenge in actually implementing and following these standards on the ground is that they focus on technical interventions and checkpoints, e.g. on AI data usage, algorithms and applications. They don't cover guidelines for applying the standards in terms of people (e.g. training & awareness about the risks of non-compliance), process (auditing to check enforcement of and adherence to the standards), and business outcomes (i.e. the impact and implications of following the standards). 

7 Next-AI Services for End-users to Explore: Future-proof AI

#1: Domain Ontologies


Here was a C-level client team from a traditional BFSI organization based in the EU, functioning successfully for decades, flourishing in one of the most mature & well-regulated markets and regions. They had every credible reason to be satisfied with their numbers and growth. Yet their hunger for disruption using technologies was so prominent! The team visiting us was not core-tech or IT, but a combination of hardcore business services, business operations and CRM teams. There were 8 of them, all top leaders from business and IT functions, and ALL were equally comfortable in geek-speak as well as biz-speak!


We started talking about why it took months, in some cases years, for these large banks to put even common AI usecases like KYC or anomaly/ fraud detection/ prediction into production. They asked: why don't we build and market domain-specific ontologies, lexicons, knowledge models and pattern bases?


Of course, why not? The key idea was that we could short-circuit the build/ validate/ test cycles of these mature, AI-starter usecases and not end up reinventing every wheel for every client, which almost invariably overran the clients' AI projects' go-live schedules. 


We set up a small team of just 3 senior AI engineers, aligned to our engineering teams, and started building these "ontology boxes". Initially we chose to build 1 box for BFSI- one of our largest verticals, and also the most mature in terms of AI adoption- and 1 box for another very interesting corporate function- the Legal team- which gave us a unique problem of contract validation. 


Given it was a brand new idea, and we were building it from scratch, it took the small team almost 6 months to come up with a Minimum Viable Product, i.e. a working ontology for the legal functional domain, to start with. The Legal team were deeply engaged with the small engineering team throughout the MVP cycle, because their domain knowledge, experience and interpretations were the most crucial inputs for the engineers to capture and build upon. They were so happy when the prototypes in production proved that they could accelerate implementations of standard AI usecases like contract intelligence from years to weeks!
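To make the idea of an "ontology box" concrete, here is a toy sketch of what a fragment of a legal-contract domain ontology might look like in rdflib; every class, property and instance name below is a hypothetical illustration, not the actual asset described above.

```python
# Toy sketch of a minimal legal-contract domain ontology using rdflib.
# All names (classes, properties, instances) are hypothetical illustrations.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

LEGAL = Namespace("http://example.org/legal#")
g = Graph()
g.bind("legal", LEGAL)

# Domain concepts and their relationships
g.add((LEGAL.Contract, RDF.type, RDFS.Class))
g.add((LEGAL.Clause, RDF.type, RDFS.Class))
g.add((LEGAL.IndemnityClause, RDFS.subClassOf, LEGAL.Clause))
g.add((LEGAL.hasClause, RDF.type, RDF.Property))
g.add((LEGAL.hasClause, RDFS.domain, LEGAL.Contract))
g.add((LEGAL.hasClause, RDFS.range, LEGAL.Clause))

# An instance a contract-validation use case could reason over
g.add((LEGAL.msa_001, RDF.type, LEGAL.Contract))
g.add((LEGAL.msa_001, LEGAL.hasClause, LEGAL.IndemnityClause))
g.add((LEGAL.msa_001, RDFS.label, Literal("Master Services Agreement 001")))

print(g.serialize(format="turtle"))
```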

#2 Look for Data Storytellers- within and outside


Gartner has recently predicted that by 2025, data storytelling will become a multi-billion dollar AI-business activity in itself. 

Well, the future is much faster than we think, as Diamandis & Kotler suggested in their book of nearly the same name. Data storytelling has matured as a practice among the top 1-5% of early and most advanced adopters of AI & ML, e.g. in some of the topnotch financial services organizations. A handful of AI services start-ups have also started their strongly differentiated journeys down this path less-travelled in the past 1-2 years. 

AI storytelling is more than just data storytelling (i.e. complex visualizations) or visual storytelling (i.e. generative AI usecases producing text narratives explaining pictures/ images)! Data visualization technologies have become so mature now that they are not even considered innovative or fashionable enough to speak about in the AI/ data sciences geek world. Even dynamic/ customizable data visualizations have become relatively commonplace capabilities now.  

Data storytelling- extending data sciences into the art form of narrative generation or storytelling- presents data, analytics, AI-ML model outputs & recommendations etc. in story form. This process of turning facts and figures into contextual and relatable stories can engage the targeted audiences, especially the business users, much better. Data storytelling typically includes: 1) creation of a narrative structure for data/ model outputs/ inferences, 2) generation of contextually meaningful narratives on data & analytics output, 3) exploration of stories from datasets along multiple facets/ aspects, and 4) tuning of the data-driven narratives to a form that the targeted audience understands best. Data storytelling can lead the target audiences progressively, e.g. through multiple layers, based on their exploratory curiosity, requirements, and expectations. 

Data storytelling, in addition to sitting on the output side of several AI-ML models, can also leverage AI on the input side, e.g. as a dynamic data storytelling service offering. It can leverage generative AI techniques and natural language generation modules (e.g. Quill from Narrative Science) to generate timely, relevant and contextual stories based on data and ML model outputs. 
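As a flavour of step 2 above (contextual narrative generation), here is a minimal, template-based sketch that turns a forecast-vs-actual model output into a two-layer story; the metric names and the 5% tolerance are hypothetical, and a production offering would use a proper NLG module instead.

```python
# Minimal sketch of narrative generation from model output using plain
# templating (a commercial NLG module like Quill would replace this).
# Metrics and the 5% tolerance below are hypothetical.
def narrate(region: str, forecast: float, actual: float) -> str:
    delta = (actual - forecast) / forecast * 100
    direction = "beat" if delta >= 0 else "missed"
    headline = f"{region} {direction} its sales forecast by {abs(delta):.1f}%."
    # Layer 2 of the story: add context the business audience cares about
    detail = ("Worth a closer look at what drove the upside."
              if delta >= 5 else
              "Within the normal range; no action suggested." if delta > -5 else
              "Flagging for review: the shortfall exceeds the 5% tolerance.")
    return f"{headline} {detail}"

print(narrate("EMEA", forecast=10.0, actual=11.2))
print(narrate("APAC", forecast=10.0, actual=9.1))
```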

#3 Learn from peers/ industry/ even competitors: AI-curated Knowledge/ Pattern-bases


This is again a blue-ocean opportunity both for large global end-user organizations and their service provider partners, as this one too is based on integration of their experiential knowledge bases, gathered through decades of operations. This is again NOT a start-up type opportunity, for the same reasons as stated in #1. 

In line with what 5 of the largest and otherwise fiercely competitive US banks did in their TruSight* initiative, pattern bases- especially for fraud detection/ prediction, or cyber-attacks and attempted attacks- are extremely valuable and highly monetizable intellectual properties that large organizations are sitting on. This is primarily because the "cost of not doing it" is very high for the end-user organizations as well as their service partners. Any new fraud/ anomaly, or even an attempted fraud or cyber-attack, that is at least 50-70%+ similar to previous fraud or attack patterns can be predicted, and hence prevented, if these pattern bases are made available as a service. 
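A minimal sketch of how such a pattern base could be consumed as a service: score a new event against previously observed fraud patterns and flag it when the similarity crosses a threshold. The feature vectors and the 0.6 threshold below are hypothetical.

```python
# Minimal sketch of the pattern-base idea above: score a new transaction's
# similarity against known fraud patterns and flag it if it is sufficiently
# similar to any of them. Features and threshold are hypothetical.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each row is a previously observed fraud pattern (already feature-engineered)
fraud_patterns = np.array([
    [0.9, 0.1, 0.8, 0.0],   # e.g. rapid small transfers to a new payee
    [0.2, 0.9, 0.1, 0.7],   # e.g. duplicate-invoice signature
])

def screen(transaction: np.ndarray, threshold: float = 0.6) -> bool:
    scores = [cosine(transaction, p) for p in fraud_patterns]
    return max(scores) >= threshold     # True -> route for review

new_txn = np.array([0.85, 0.15, 0.75, 0.05])
print(screen(new_txn))   # True: closely matches the first known pattern
```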

Again, only one of the Big 4 has demonstrated such capabilities, witnessed in 2020, where they could show what they had built in the way of knowledge-pattern-bases as an offering somewhat similar to this. But even that one organization is not delivering it as a service; that is, it isn't monetizing the knowledge-patterns as consolidated stand-alone revenue opportunities. This offering is of great value for large end-user corporate clients, who run high risks of financial fraud or legal liabilities due to potential cyber attacks on secure financial and PII data. 


[*  TruSight was founded by a consortium of leading financial services companies, including American Express, Bank of America, Bank of New York Mellon, JPMorgan Chase, and Wells Fargo ]

#4 AI-driven Risk Management


This blue-ocean opportunity can be realized as an extension of the previous idea (#3), further enhanced by the unique application potential that AI is ALWAYS ON. Machines never sleep and can be programmed (e.g. using real-time unsupervised or reinforcement learning) so that they NEVER STOP learning: they keep themselves constantly updated and predict risks/ threats based on their latest learnings and most recent, up-to-date models. So it's the best of both worlds- combining the knowledge of all pre-existing risk patterns (as mentioned in #3), plus adding always-on, real-time learning capabilities, so that risk profiles are constantly updated and prediction accuracy is not degraded by stale data or latency. 
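Here is a minimal sketch of that always-on loop, using incremental learning in scikit-learn as a stand-in for whatever real-time learning technique an actual offering would use; the event features and labels below are simulated.

```python
# Minimal sketch of "always-on" learning: an incremental risk classifier that
# keeps updating itself as new labelled events stream in, via partial_fit.
# Event features and labels are simulated placeholders.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # use loss="log" on scikit-learn < 1.1
classes = np.array([0, 1])               # 0 = benign, 1 = risk event

def on_new_events(X_batch: np.ndarray, y_batch: np.ndarray) -> None:
    """Called whenever a fresh batch of labelled events arrives."""
    model.partial_fit(X_batch, y_batch, classes=classes)

# Simulated stream: the model never stops learning, so risk scores are always
# based on the most recent patterns rather than a stale snapshot.
rng = np.random.default_rng(0)
for _ in range(5):
    X = rng.normal(size=(32, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    on_new_events(X, y)

print(model.predict_proba(rng.normal(size=(1, 4))))
```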

Hence, it follows logically that all risks that are high-impact and high-urgency, and that require continuous monitoring and alerting actions, are top candidate usecases for autonomous AI applications and systems. These can be business risks, operational risks, or technical risks (e.g. hybrid cloud performance vs. opex optimization). Irrespective of their classification, the AI capabilities built for Risk Management as a Service will be horizontally applicable to all types of risks. 

The business case for end-users for this AI offering is to be based on the size and time-sensitivity of the risk/ threat and their subsequent mitigation actions. 

Again, this is a huge opportunity potentially and visibly untapped by large service provider organizations, many of which have mature risk management service offerings but haven't explored their AI-empowerment in an integrated, full-blown manner. Only EY has demonstrated capabilities in this space, thanks to their highly mature risk management knowledge-base and frameworks and their progressive and innovative approaches to leverage this risk knowledge to deliver highly differentiated value to its clients. 

#5 Pre-trained accelerators/ AI starter bundles by domain


Genpact has initiated, led and mastered the pretrained accelerator space. So has Accenture, for certain domains and functions, with similar approaches but slightly different execution and communication narratives. But offering the accelerators themselves- the ontologies, lexicons, APIs, even AI-infra sandboxes- all bundled up by verticals/ functions in "as a Service" models, is still something of a blue ocean.

Till 2019, we could understand why: 90%+ of AI projects, irrespective of verticals, were in POC/ pilot stages. Since they weren't running in production, getting the pretrained models tested and validated for real-world performance obviously wasn't possible. 

But from 2020 onwards, due to rapid adoption of autonomous technologies for remote work support and service delivery (the COVID effect), productionization of AI-ML in business and IT services has become a mainstream reality. Now is the time to build tried-and-tested pre-trained accelerator bundles by industry, sector and function, and offer them as services. This will further speed up implementations, bring in overall best practices and SPs' experiential knowledge for the client domains, and speed up AI's time-to-value curves exponentially, especially for large clients with economies of scale at work. Their RoI and breakeven from large AI programs will become much better and faster. The business case is right there, for the large end-users and their service partners. 
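As a small illustration of what a domain starter bundle can sit on top of, here is a sketch that uses a publicly available pre-trained model via the Hugging Face `transformers` pipeline to route a BFSI service request; the label set is a hypothetical example, not an actual accelerator.

```python
# Minimal sketch of a domain "starter bundle" built on a pre-trained model:
# zero-shot routing of a BFSI service request via the Hugging Face
# `transformers` pipeline API. The label set is a hypothetical example.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

ticket = "My credit card was charged twice for the same merchant yesterday."
labels = ["duplicate charge", "KYC update", "fraud report", "loan enquiry"]

result = classifier(ticket, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))
```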

#6 Explainable AI as a Service (XAIaaS)


Explainability and transparency of AI-ML models, and of the decisions & actions of autonomous systems, have become mandatory requirements for AI- not just due to regulatory constraints but also to scale up adoption by business users by ensuring trust and confidence.

Technically, a lot of progress has been made in recent times on XAI algorithms and the measurability of trust and fairness scores. But for end-user organizations that are keen to adopt AI rapidly, getting access to the skillsets required for XAI isn't an easy ask. For such organizations, XAIaaS is a good starting point to build credibility and capability into their AI solutions.

Start-ups that work in the XAI and fair-AI space have a big role to play here, in collaboration with tech biggies like Google, IBM, Microsoft and AWS, who have been working extensively in the XAI tech space in recent years. AWS SageMaker Clarify, for instance, offers a good starting point for XAI as a Service for ML. Lean AI start-ups can help large end-user organizations improve their XAI capabilities, leveraging these packages on top of their existing AI solutions as well as while building newer ones. 
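For a flavour of the underlying technique that offerings such as SageMaker Clarify package up, here is a minimal SHAP sketch on a synthetic dataset; it assumes the `shap` and `scikit-learn` packages and is illustrative only.

```python
# Minimal sketch of model explainability with SHAP on synthetic data.
# Assumes `pip install shap scikit-learn`; illustrative only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # feature 0 matters most

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions
print(np.shape(shap_values))                 # attribution array shape varies by shap version
```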

#7 AI Fairness Assurance/ Trust/ Audit/ Governance as Services


With the advent of many AI maturity models from global consulting companies and process audit firms, AI audit and governance assurance as a Service is a blue ocean that is already seeing some early ships sailing. Assurance on data fairness (i.e. debiased/ bias-mitigated training data), quality-assured pre-curated datasets, and trust measurement as a Service are the newer service offerings in this space. 
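One concrete example of such a fairness-assurance check, sketched with the fairlearn library on hypothetical toy data: measure the demographic parity difference of a model's decisions across a sensitive group attribute.

```python
# Minimal sketch of a fairness-assurance check: demographic parity difference
# of a model's decisions across a sensitive attribute, using fairlearn.
# The predictions and groups below are hypothetical toy data.
import numpy as np
from fairlearn.metrics import demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 0])          # model decisions
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.2f}")   # 0.0 would indicate parity
```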

Amazon AI Conclave and the Union Budget next Monday morning

Emergence of a perfect AI equation for India?


Amazon just concluded their virtual Conclave on AI late last week. This was the 4th edition of the Amazon AI Conclave; because of COVID guidelines, it had to go virtual this year. 


While I was writing this blog on my key takeaways from the Amazon AI conclave, I was also listening to the live telecast of budget, presented by our Finance Minister.  She was clearly listing down the financial and economic priorities of our country, to push it towards a growth path that was hopefully faster than the ones some of our smaller and hence nimbler Asian neighbors and friends have already started enjoying (e.g. Bangladesh and Vietnam).


A key benefit of my highly practiced multi-tasking skills suddenly dawned upon me- Connecting these dots. The key messages from the Amazon AI Conclave and the Union Budget priorities are so much in sync! Whatever the honorable FM was saying, I heard the technical equivalents of those initiatives and narratives, just 2 days before, in the Amazon AI Conclave.


Directionally, both were spot on. However, as all the experts were mentioning in their budget analysis,  the devil lies in the details, and in execution. Great strategies have failed far too often, due to unimaginative, conservative implementations. It’s one thing to say, and completely a different thing to actually do, and achieve demonstrable and proven outcomes.


Bridging this mission-critical gap between ambitious visions and average/ half-baked execution, is the task at hand- both for the country and for AWS, for India and the world to become the connected, ubiquitous, global powerhouse of AI, data, cloud- the three key pillars of the Brave New World:


  1. What stood out for me from the very first session of the AWS Amazon AI Conclave is that the leadership has already started stretching their goals and execution plans in the right direction, in terms of moving beyond just tech-speak and talking consistently about customer outcomes, i.e. proven execution and measurable, demonstrated results. Everything that was spoken had proof points from case studies already successfully executed and delivered in real customer landscapes, which had already generated proven results, e.g. bringing down ML model-build (hence productionization) times from years and quarters to 1-2 weeks!
  2. The global AI-data tech supply-side shifts, from labelled data and supervised learning to unsupervised and deep reinforcement learning algorithms, were explained with brief, specific, relevant contexts of client applications and realized business outcomes such as retail shrinkage prevention, rather than just outputs in terms of model performance, AUROC and accuracy parameters.
  3. The other key takeaways were the multiple reinforcements from Amazon leadership team members on the importance of harnessing an "ML culture". This is why the patented framework of AI-SWIT'C'H includes 'C for Culture' as a key dimension for operationalizing AI and intelligent automation in organizations. The leaders explained, with ample examples from both within and outside their organization, across client and partner ecosystems, how certain culture-change levers and initiatives are bearing fruit now, e.g. ubiquitous access to quality-validated ML training programs & self-learning materials, the AWS ML University, and democratization of access to data and tech stacks for AI. [For details, see the Amazon ML Solutions Lab: https://aws.amazon.com/ml-solutions-lab/ ] Zomato, Edelweiss Tokio Life Insurance and Jubilant FoodWorks (Domino's India) are some exemplary instances of ML deployments where the Amazon ML Solutions Lab led the model-building effort.
  4. The 6-step approach and the importance of prioritizing the right projects absolutely resonated with the "Minimum Viable Strategy (MVS) for AI" guidance that AISWITCH published in 2021 (https://aiswitch.org/ai-practice-tsp).


While the proven success stories were many, there are obviously even greater opportunities to explore and expand on:


  1. Having worked very closely with the AWS tech teams during the early days of setting up the Wipro HOLMES AI & Automation Ecosystem, I know there are many wonderful assets in the AWS ecosystem that we tried hands-on, which did not get adequate mind-space from the analyst audience. This was probably, first, for want of time, and second, due to the known fact that the analyst community usually doesn't have much hands-on tech exposure to the "Deep AI" space:
  2. SageMaker examples were mentioned, and DevOps Guru and CodeGuru were touched upon, for hybrid "geeky analyst" folks like me, I guess. HealthLake, the healthcare data-lake construct, was briefly explained. These were like a ride on Tomorrowland's Carousel at Disney Orlando, except that, this time, the carousel mostly got stuck.
  3. AWS is the global father figure of cloud, being the first mover in bringing cluster/ grid-computing technologies out of the labs and making them commercially feasible for making businesses agile, elastic and digital (thanks to novel use of GPUs, EC2 and the storage clouds). There is great stuff in SageMaker, e.g. Ground Truth and Data Wrangler; these capabilities must be flaunted and brought forth so that business users, i.e. the target citizen developers of ML, get early exposure to what the right questions are. Of course, these topics were extensively covered on Day 2, which was the technology edition by design. But quick exposure to some of these, at least in terms of demonstrable "ease of learning", could have benefitted the business leaders and analyst folks, who could then enthuse the citizen builders in their respective functions in a more informed manner.
  4. AI-readiness is a leading indicator, not a lagging one. Hence all the conversations, be it in the customer, partner, analyst or tech-research space, should be feasibly ambitious & forward-looking. Most of the AI tech capabilities are already here, except in spaces like generative and super AI, the 'Catch-22' scenarios in language corpora with OpenAI's GPT-3, Google's SMITH and BERT & their domain-specific versions, BigGAN's semanticity problems, etc.
  5. Discussions must also include topics like quantum ML (e.g. IBM's quantum cloud equivalents: for which ML problems do quantum techniques make sense?) and Green AI, meta-learning, zero-shot learning, RT (real-time) AI, XAI and data-bias reduction techniques. Knowledge exchanges on AI should aggressively promote innovative lateral thinking, e.g. how unsupervised/ reinforcement learning techniques produce indirect "labelling bias reduction" benefits, beyond just improving performance and precision.


Net net, using India as the "Most Complex Problem" statement and context, Amazon has more than sufficient deep-tech first-mover capabilities to build and test these prototypes for our immediate future here, and then scale them up quickly for the world.


Technology-wise, AWS and other hyperscalers are the super-integrators of elastic data-algorithms-infrastructure, by way of cloud and federated & edge AI. As the global IT services hub, India has a huge pool of service knowledge & curatable datasets (which can be masked for PII), and trained tech manpower that is agile, i.e. that can willingly and quickly switch to AI-ML capabilities.


If this is not an example of a Perfect AI demand-supply equation, then what is?

US Leadership Changes & World's Tech-Lead Equations

Biden-Harris and the already-blowing fresh breeze of fairness in the Tech world- From East to West

Finally, intelligence of all kinds, artificial or human, has prevailed. The new President-elect of the United States speaks for Science, Rationality and Fairness. A huge breather for the IT and tech companies in product, platform, services and adjacent domains.

Being the largest economy of the world, and home to some of the world's best academic and research institutions, the US and its social and policy stability do influence the quality of life and access to knowledge & tech for the whole world.

Interesting shifts are already visible in the tech world: top tech companies like Microsoft, Google and IBM have CEOs of Indian origin. Now Kamala Harris brings fairness into political leadership too. Balancing out the over-Indian flavour in these curries, interesting pivots are seen in some of the big Indian ITSPs switching to non-Indian leadership at the CEO level.

A fantastic amalgamation of East-meets-West cultures and value systems. Another interesting switch here is that now some top tech product & platform companies have Indian leaders, and Indian services companies have non-Indian leaders! With Biden-Harris ushering in continuity in visa and H1B policies favoring the human-talent spectrum, it is again proven that fairness, merit and creative pursuits in any field cannot be confined within any gender, caste, creed, race or nation's boundaries. Men (and women) are born free, and we don't have to bind ourselves in our own minds' chains as we grow up.

The tech and commercial worlds are future-proofing themselves, much like the world of Nature does, even if in an apparently ruthless but inevitable manner. Well-governed, ESG-aware AI-automation policies, at national and international levels as well as across enterprises and business domains, will balance out the human-nature equations too. Let us nurture that hope and work towards realizing it for our next-gens.

How leadership changes change the ITSP landscape... For Good

Thierry at Wipro doing a Brian at CTS? Same change levers, but different intents (growth vs. costs?)

On 2nd November, TOI, Moneycontrol et al. carried a story about potential overhauls at Wipro under the new leadership of Thierry, a BFSI veteran from Capgemini and a leader with the transparency and clarity typical of many European leaders (not stereotyping, just a generic observation :)). These articles also hinted at the possible change paths Thierry may take for Wipro, similar to what Brian did for CTS some time ago.


Observationally, it looks like it. To improve margins, not only TWITCH but even the global companies are ruthlessly chopping off layers of legacy corporate fat that has made several of them rather ANTI-AGILE, if anything.

Finally, the entire SP industry seems to be refocusing right: on GPR-Value (e.g. direct or indirect Growth/Profit/Revenue contribution per employee), and is using AI and intelligent automation within its own house first, as apt instruments of CORPORATE LIPOSUCTION!

Since the Y2K days, when KLOCs of COBOL were being updated by sheep-herds of trained engineers (with layers of managers as the shepherds), the whole SP industry has focussed on T&M: throwing more bodies at every client problem!

Sadly, even now, many research firms still weigh the brawn more than the brain: the quantity of manpower 'certified on tool X', versus the quality of talent, or the number of IPs these new sheep-herds are producing.

This is the new double-jeopardy (also double-accounting) problem of AI, creating new layers of a confusion matrix and resultant inefficiencies. This time: still too many men, PLUS too many machines of varied IQ, haphazardly thrown into the already-horrifying mix. Cutting unnecessary flab is the critical, lifesaving surgery that many companies caught in this new conundrum need at this hour.


However, the intent defines the extent. Hence, instead of equating Thierry's change attempts at Wipro at face value with an exact replication of what Brian did for CTS, we should look deeper. Intent-wise, Thierry is focussing on simplification and easily measurable, demonstrable VALUE metrics, whereas Brian decidedly focussed on COST-CUTTING. Their leadership challenges are different; company cultures and realities are very different. What works at one won't get the other exactly there. CTS, while becoming more cost-efficient, also lost a lot of the great talent and the experienced business leaders and client partners it was known and strongly differentiated for. Given Wipro's core leadership legacy of Premji and Rishad, and the loyalty of some of their great leadership minds towards the owner family and the enterprise, that will surely not happen at Wipro, no matter what changes are brought in.

AI-Automation-Digital: Public Domain Research

Metrics-driven AI & Automation: Strategy and Financial Management, RoI, scorecards

  

AI-automation Strategy Practice research: 

Client projects & research from Harvard, Gartner, McKinsey


AI & Automation strategy formulation at corporate level: STEP

https://www.linkedin.com/pulse/ai-strategy-2-peststep-framework-tapati-bandopadhyay/ 

 

The 90-10 rule of AI: AI360 Value Delivery Model

https://www.linkedin.com/pulse/ai-90-human-inspiration-10-technology-perspiration-bandopadhyay/ 

 


AI & automation best practices: Solutions architecture frameworks, benchmarking, workforce plan

  

AI-automation Technology Practice research:

 Solution frameworks, technology maturity models, benchmarking

 

AIOS: Architecture of an AI-powered operating system for AI

https://www.linkedin.com/pulse/aios-operating-system-ai-tapati-bandopadhyay/ 

 

The LOCO framework for fluidic or liquid AI

https://www.linkedin.com/pulse/loco-lens-fluidic-ai-liquid-intelligence-tapati-bandopadhyay/ 

 

Why AI-first needs Cloud-first

https://www.linkedin.com/pulse/why-cloud-first-ai-first-strategies-imperative-each-bandopadhyay/ 

 

Swimlanes for AI solutions planning

https://www.linkedin.com/pulse/whats-your-3-swim-lanes-methodology-break-ai-80-20-why-bandopadhyay/ 

 


Copyright © 2021-2027 AISWITCH - All Rights Reserved.  


Email us: 

bandopadhyay@aiswitch.org (Research & Advisory Services)

tapati.aiswitch@gmail.com (Research partnership)

