Chief Research Advisor: Ian Head,
Contributor: Rudrani Bose
How do the data rights of citizens and government policies on AI usage in different countries/ regions affect AI adoption around the world?
Contributors: Aparajita Bandopadhyay
Why are human-invented natural languages still the biggest paradoxes for AI? Why won't we believe GPT-3 to be real AGI, despite tall marketing claims, including The Post's?
Contributors: Aparajita Bandopadhyay
One language is tough enough. Now have some empathy for machines and think like algorithms: what kinds of challenges do they face while translating between languages?
Contributor: Aparajita Bandopadhyay
What is the basis of all these claims that AI is very similar to how human intelligence works? How do human brains work?
Contributor: Aparajita Bandopadhyay
What 7 features of human language make it the hardest problem on the road to AGI?
Contributors: Oishani Bandopadhyay, Aparajita Bandopadhyay
Why has AI not been as effective as originally thought in beating COVID?
What are the key leadership traits for successful AI applications in an enterprise?
A perfect equation emerged in my constantly lateral-thinking brain, connecting the dots between the Indian economic priorities and the learnings & insights gathered from the Amazon AI Conclave held just a few days before...
A lot has changed in the AI services demand and supply side, in 2020. This year onwards, differentiating your AI offerings has become way harder than what it was in early 2020. Here are 7 next-AI blue ocean opportunities for providers to build on...
How does leadership change affect the culture and AI-automation journeys of service provider organizations? Learnings from recent examples...
How are the AI partnership equations changing in post-COVID 2020?
Ian Head (https://www.linkedin.com/in/ian-head-245168/)
Researcher: Rudrani Bose (https://www.linkedin.com/in/rudrani-bose-a2a1441b0)
Many Governments have AI Strategy documents in place that we can expect to lead to legislation and major investment in the next 3 years. To reach global markets and avoid confrontation, AI companies should be aware of government activity in this field.
Artificial Intelligence presents limitless growth opportunities. Yet uncontrolled growth could quickly convert into threats if unchecked by regulatory standards. As a result, more than 20 countries across the world, along with groups like the World Economic Forum and the European Union have released AI strategy documents. Many of these can be expected to result in legislation.
Although the United States of America released documents on artificial intelligence back in 2016, it was Canada, in 2017, that became the first country to release a national AI strategy.
Since there are numerous policy documents, it is imperative to conduct a comparative analysis of these strategies, using specific standard parameters, to identify similarities, differences and the overall impact of AI on global populations. The following are snippets of some governments' AI strategy and policy-making initiatives:
The key highlights of the AI strategy and policies from five leading governments around the world are mentioned below:
· Encourage investments in AI research and development to unleash AI resources and remove barriers to AI innovation
· Upskill and train an AI-enabled workforce
· Promote an international environment fostering responsible use of American AI innovation
Through this initiative, the United States hopes to nurture a climate of respect for freedom, human rights, intellectual property rights, the rule of law and equal opportunities for all in a new future with artificial intelligence (Whitehouse).
One interesting observation to make here is that the USA has no explicit constitutional right to privacy. That is why the European GDPR laws are so problematic for US companies: US companies expect to be able to do what they like with citizens' data. The EU thinks otherwise. In September 2020 Facebook threatened to quit Europe over this issue [https://www.computing.co.uk/news/4020505/facebook-threatens-quit-eu].
· To be a leader in AI by 2030 through developing scalable, impactful AI products to deliver value to its citizens
· Build an AI-ready workforce to prepare citizens for the technology change.
In 2018, the Government of Australia released a digital economy strategy – Australia’s Tech Future – highlighting the vision for governments, businesses and society to collaborate and reap the maximum benefits from digital technology.
Australia’s concept of the future of AI involves the following objectives:
· To explore the future of AI in Australia, develop capabilities and build adequate policies to match capabilities
· To enable digitalization and solve challenges in the sectors of health and welfare, energy, education, transport, infrastructure and environment
· To improve the safety, efficiency and quality of processes in Australian industries
The United Kingdom is ranked #1 in the Oxford Government AI Readiness Index, in terms of the ability of governments to take advantage of the benefits of automation. The Sector Deal articulates the UK's vision for the development of AI as follows:
To develop an industrial strategy through a focus on the five foundations of productivity:
- Ideas – to be the most innovative economy in the world,
- People – to ensure employment and greater earning power for citizens,
- Infrastructure – boosting the UK's infrastructure,
- Business environment – to transform into a progressive and suitable business environment,
- Places – to nurture prosperous communities.
Contributing author: Aparajita Bandopadhyay
There are multiple interesting UPSIDE perspectives on the most common and most difficult-to-mitigate bias in AI: gender bias. As Sapolsky said in his recent (Dec 2020) interview, gender bias is so ingrained in societies across the regions and social strata of the world that it is the hardest one to get rid of, both in human decision models and in the AI world. Often, in order to eliminate source-data bias in training datasets, we drop the attributes/ parameters/ features that directly contribute to and aggravate biases in the inferences/ models. If we then start a deep analysis of the indirect and derived attributes that contribute to biases, even passively, and choose to mask/ weigh down/ drop those features too, we may end up with a training dataset that is extremely sparse!
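The feature-dropping dilemma described above can be made concrete with a small sketch. The rows, feature names, and the 0.8 threshold below are all hypothetical; the point is only that discarding every feature strongly correlated with a protected attribute can rapidly thin out a training dataset:

```python
import math

# Hypothetical training rows: 'gender' (0/1) is the protected attribute;
# the remaining numeric features may act as indirect proxies for it.
rows = [
    {"gender": 0, "height_cm": 178, "test_score": 82},
    {"gender": 0, "height_cm": 183, "test_score": 75},
    {"gender": 1, "height_cm": 164, "test_score": 88},
    {"gender": 1, "height_cm": 160, "test_score": 79},
    {"gender": 0, "height_cm": 181, "test_score": 90},
    {"gender": 1, "height_cm": 166, "test_score": 85},
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def drop_proxies(rows, protected="gender", threshold=0.8):
    """Keep only features weakly correlated with the protected attribute."""
    g = [r[protected] for r in rows]
    keep = []
    for feat in rows[0]:
        if feat == protected:
            continue
        if abs(pearson([r[feat] for r in rows], g)) < threshold:
            keep.append(feat)
    return keep

# Height correlates strongly with gender here, so it gets dropped --
# and the surviving dataset is already much thinner.
print(drop_proxies(rows))
```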
Despite all these issues, the curious thing about gender bias in AI specifically is that it has certain upsides:
Here is a candid interview on women in AI and what can be done at policy levels as well as at personal levels, to mitigate the Practice Biases in AI, specifically about the ever-increasing gender gap in this deep STEM field...
Contributing researcher: Aparajita Bandopadhyay
Why is language still the toughest problem to crack, in AI?
Do we determine what our words mean, or do words determine what we mean?
Though this rather childish-looking question may seem like something only characters as truculent as Tweedledum and Tweedledee would care to dispute ad infinitum, a little further analysis can genuinely throw us into a dilemma.
Noam Chomsky was one of the first linguists to suggest that the proficient usage of any language requires a certain instinct, and even, perhaps, a certain genetic code. Yet several languages are unique in their own ways; German and Russian, for example, are largely devoid of a distinct continuous tense; French allows the usage of double negatives to mean negatives!
It's already hard enough for machines to interpret and process ONE human language. The paradox becomes exponentially harder for languages in translation.
Contributing researcher: Aparajita Bandopadhyay
Languages in Translation: Wasn't one language enough?
To quote Franz Kafka, who, interestingly enough, was a brilliant renegade of a writer- ‘All language is but a poor translation’.
No wonder, then, that the best of the Google Translate APIs and the NLP/ language translation modules from Microsoft Azure or AWS often generate translated text strings that read more like AI-generated lame jokes.
They do provide one good service to human intelligence, though: they augment our sense of humour. "Maria... makes me... laugh!", as the famed song from The Sound of Music goes.
Contributing researcher: Aparajita Bandopadhyay
Ground-rules of AI & HI Part 1: Basics of Neuroscience - A Young Researcher's Point of View
Ever since Turing's phenomenal 1950 paper on the Imitation Game was published, almost all definitions of AI have included the common notion that AI attempts to somewhat 'mimic' the ways of human intelligence. But do we even know exactly how human intelligence works in the first place, so that we can mimic it in machines?
In the past two decades, thanks to huge progress in physics, engineering and MRI scanning technologies (e.g. fMRI of human brains during various activities; ref. Michio Kaku, The Future of the Mind), a lot has been revealed about how the human brain works, from a neuroscientific perspective. Here is a VERY SMALL collection of some of these basics.
Q: What are synapses?
Q: How can we manufacture and strengthen new and old synapses?
Languages are the biggest unsolved mysteries
Contributing researcher: Aparajita Bandopadhyay
As defined by Robert Sapolsky in his lecture on ‘Language’ at Stanford, human language cannot be considered 'language' without these seven key features that distinguish it from communication between other creatures.
Each of these facets of human language is exclusive to our species, and one or another of them is typically the reason that the various apes taught ASL (American Sign Language) over the years have been unable to recreate language that is truly ‘human’.
For communication to be complete, practical and real in the human world, each of these features is essential. In the same way, AI with Natural Language Processing/ Understanding/ Generation of contextually and semantically sensible dialogs could only become human-like if these features were embedded into the famed Siris and Alexas. The difficulty lies in creating these features without compromising datasets or precision, thereby reaching the holy grail of Artificial General Intelligence (AGI).
1) Semanticity- the ability to generate and convey meaning, by ‘bucketing’ sounds to create words, is the most fundamental feature of any and every human language. The instinctiveness of meaning makes it all the more difficult to completely transfer meaning into a system, and clearly makes it incredibly challenging to generate meaning in novel words and communicate successfully. The awareness of semanticity in itself, the definition of semanticity that thereby gives the word meaning, is an almost inexplicable, and simultaneously intuitive, concept.
2) Embedded clauses are arguably the easiest to represent through programming languages as well as human languages, using simple to complex logic constructs- from Aristotle's term logic to propositional and predicate logic that form the foundations of AI.
3) Recursion or generativity is an incredibly interesting property of language: a finite number of words can produce an infinite number of sentences, and a sentence can have infinite length, bounded only by practical time constraints.
4) Displacement is the one feature that most famous chimpanzees and gorillas learning ASL could not achieve. Displacement involves the ability to talk about different time periods, different people, regardless of present circumstances, and without only conveying current emotion. Being able to talk about things emotionally distant from us, and dissociating our communication from our situation, is a unique capability that we seem to convey easily. Giving each other information in ‘facts’ not directly pertaining to ourselves, or even getting the answer from a chatbot when you ask for 23rd January’s weather, are examples of displacement ingrained in human language.
5) Arbitrariness refers to the lack of connection between the meaning of words and their shapes or sounds. Adjectives such as ‘heavy’ or ‘sad’ are not created with relation to the shapes of the letters in them, or resembling the sounds they make. The arbitrariness of human language makes it difficult to compare languages by sound or script, since neither of the two convey meaning. The randomness of meaning is, therefore, immune to guesswork or brute force.
6) Meta communication, the ability to communicate about communication and discuss language at all, is another feature of human language, forming the basis for natural language processing and generation. Meta communication also refers to secondary communication that can change or add to the meaning of a sentence or communication.
7) Prosody, which forms a derivative part of meta communication, involves things such as intonation, stress, rhythm and other parts of body language that accompanies a unit of communication. The accompanying body language and intonation can convey different meanings to even the same sentence, which can be expressed through meta communication, such as sarcasm created using varied tones of voice.
Motherese, or baby talk, consists of the difference in intonation, including stress on vowels, that parents use when communicating with their infants, with the intention of teaching them how to speak. Specific to humans, it is an important part of child language acquisition, a field growing in importance to machine learning through natural language acquisition.
Each of the 7 features that distinguish human language from other communication seems instinctive, and goes unnoticed in daily conversation. Training a language model on these basic but generic features of human languages is a different ballgame altogether.
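The recursion/ generativity property (feature 3 above) can be sketched as a toy grammar: a handful of words and one recursive rule already generate unboundedly many sentences. This is a minimal illustration, with a hypothetical vocabulary, and a depth cap standing in for the "practical time constraints" mentioned above:

```python
import random

# A toy grammar: a verb phrase can embed a whole new sentence via "that",
# so a finite vocabulary yields sentences of unbounded length.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["Alice"], ["Bob"]],
    "VP": [["sleeps"], ["thinks", "that", "S"]],  # recursion: VP -> ... S
}

def generate(symbol="S", depth=0, max_depth=5):
    """Expand a symbol; the depth cap forces eventual termination."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    options = GRAMMAR[symbol]
    if depth >= max_depth:
        # drop the recursive option so the sentence ends
        options = [o for o in options if "S" not in o]
    words = []
    for sym in random.choice(options):
        words.extend(generate(sym, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "Alice thinks that Bob sleeps"
```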
Contributing researcher: Rudrani Bose
Since 2019, most companies have been using AI to improve process performance and costs, and some of the early mover multi-country organizations are also focussing on leveraging AI to improve customer experience. They are doing it by using AI to assure uniform levels of consistency, reliability, performance and service quality, across their regions of operations.
Ethics in AI has now become a key issue, especially as companies use AI and intelligent automation across their value spectrum, supply chains and partner ecosystems. AI developers must ensure that their deployments comply with the ethical requirements of every region in which their systems will be deployed. Breaking local privacy laws is invariably seen as arrogant and intrusive, and malicious intent is often inferred.
In this context, ethical use of AI in the enterprise becomes the most critical change pivot. AI-user (employee/ customer) experience is significantly influenced by the perceived ethical behaviour of the AI solutions. This is applicable for all enterprise AI solutions regardless of specific domains and usecases, e.g. conversational AI agents (e.g. customer service chatbots) or digital workers (e.g. intelligent ops process bots for claims processing, invoice processing, KYC, duplicate invoice detection, false claims & fraud detection, retail shrinkage detection, and so on).
All global enterprises have to work within the government policy frameworks of their regions of operation, and strategic AI usecases must often comply with multiple local government policies and priorities on ethical AI. International and global use cases require policy adaptability so as to adhere to multiple regional idiosyncrasies, as US companies working in the European Union have discovered (1).
Here are some examples of current government key initiatives on ethical AI.
The Australian government released a report along with CSIRO listing out the framework for an ethical AI. This states that any AI system must:
1. Control any adverse or unethical effects of AI to ensure that the net benefits generated for citizens are greater than the costs and risks of adopting and developing AI and associated technology stacks
2. Demonstrate regulatory and legal compliance with international and governmental guidelines to prevent AI systems from harming or deceiving citizens
3. Ensure the protection of citizens’ private data and prevent data breaches causing financial, psychological or professional harm
4. Develop controls to monitor the fairness and inclusivity of data systems and restrain damage from biased algorithms. The term “bias” has expanded scope considerably in recent years.
5. Ensure adequate transparency, responsibility and accountability of governments, organisations and people by ensuring that identifiable data is only used after gaining the consent of each citizen.
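Point 4's call for monitoring fairness can be grounded in even a very simple metric. Below is a minimal sketch, with invented decisions, group labels and tolerance threshold, computing the demographic parity gap between two groups of applicants:

```python
# Hypothetical loan-approval decisions: (group, approved?) per applicant.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def approval_rate(decisions, group):
    """Fraction of applicants in `group` that were approved."""
    hits = [ok for g, ok in decisions if g == group]
    return sum(hits) / len(hits)

def parity_gap(decisions, g1="A", g2="B"):
    """Demographic parity difference: gap in approval rates between groups."""
    return abs(approval_rate(decisions, g1) - approval_rate(decisions, g2))

THRESHOLD = 0.2  # hypothetical tolerance set by policy
gap = parity_gap(decisions)
print(f"parity gap = {gap:.2f}")  # 0.75 vs 0.25 -> gap 0.50
if gap > THRESHOLD:
    print("ALERT: demographic parity gap exceeds policy threshold")
```

Real fairness audits use richer metrics (equalized odds, calibration, etc.), but a monitored threshold of this shape is the basic mechanism.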
The Treasury Board Secretariat (TBS) of Canada is working towards the responsible use of AI in government. It is working with the international AI community to facilitate an open online consultation to foster collaboration on international guidelines for the ethical use of data systems.
The federal government wishes to lead by example through adopting healthy ethical practices in its use of artificial intelligence. To ensure the proper and ethical use of AI, the government plans to:
The American AI Initiative launched by President Donald Trump labels itself “AI with American Values” in seeking to protect and preserve freedom, human rights, institutional stability, the rule of law, the right to privacy, intellectual property rights and equitable opportunities for all. The right to privacy itself is not clearly stated and defined in the US constitution, unlike in the EU, where the right to privacy is explicit and mandatory, e.g. under GDPR.
In keeping with that philosophy, in February 2020, the US Department of Defense adopted a guideline of ethical principles to govern the use of AI. The regulations seek to incorporate responsibility, equitability, reliability, traceability and governability in digital technology used not only for military and defence purposes but also for commercial and for-profit business objectives. It aims to maintain the strategic global leadership of the US and respect the rules-based international order.
However US corporations have encountered difficulties when they assume that compliance with US law confers compliance with laws in other regions. For example in the European Union, where the right to privacy is strictly stated in the General Data Protection Regulations (GDPR).
The government of the United Kingdom defined the ethical framework required to design responsible digital systems as those which
- respect the dignity of individuals and allow people to form open, meaningful and inclusive connections.
- Protect social values, social justice and prioritise public interest by providing for the wellbeing of all.
The Centre for Data Ethics and Innovation was set up by the government in 2018 to help navigate the ethical challenges posed by rapidly evolving digital systems. For this purpose, the Alan Turing Institute, the Office for Artificial Intelligence and the AI Council were also set up. As the UK leaves the EU, it is expected that these bodies will publish new recommendations to replace and possibly enhance the EU's General Data Protection Regulation (GDPR).
Compared to other countries, Singapore has yet to develop a concrete strategy to incorporate ethical practices in AI systems. In June 2018, Singapore announced the establishment of an AI ethics advisory council on the development and deployment of artificial intelligence, to guide the government in building ethical standards and codes of conduct for businesses.
We can expect this body to produce guidance or regulations in the near future but whether the requirements will be similar to those of other bodies referred to in this document, is as yet unknown.
(1) Facebook warns it may be forced to pull out of Europe https://www.telegraph.co.uk/technology/2020/09/21/facebook-warns-could-pull-services-europe/
Contributing researcher: Rudrani Bose https://www.linkedin.com/in/rudrani-bose-a2a1441b0
Standards and guidelines for the regulation of artificial intelligence are rapidly emerging and are more important now than ever before. The technology of artificial intelligence is not only new but also a work in progress, with advancements occurring every day. Fierce competition for the development and deployment of AI technologies is likely to emerge in the future, necessitating legal standards that act as a deterrent to AI being employed for unethical purposes.
Some of the existing legal standards governing artificial intelligence are:
A key challenge in actually implementing and following these standards on the ground is that they focus on technical interventions and checkpoints, e.g. on AI data usage, algorithms and applications. They don't cover guidelines for applying these standards in terms of people (e.g. training and awareness about the risks of non-compliance), process (e.g. auditing to check enforcement of and adherence to the standards), and business outcomes (i.e. the impact and implications of following the standards).
Here was a C-level client team from a traditional BFSI organization based in the EU, functioning successfully for decades and flourishing in one of the most mature and well-regulated markets and regions. They had every credible reason to be satisfied with their numbers and growth. Yet their hunger for disruption using technology was so prominent! The team visiting us was not core-tech or IT, but a combination of hardcore business services, business operations and CRM teams. There were 8 of them, all top leaders from business and IT functions, and ALL were equally comfortable in geek-speak as well as biz-speak!
We started talking about why it took months, in some cases years, for these large banks to put even common AI usecases like KYC or anomaly/ fraud detection/ prediction, in production. They asked- why don't we build and market domain-specific ontologies, lexicons, knowledge models and pattern bases?
Of course, why not? The key idea was to short-circuit the build/ validate/ test cycles of these mature, AI-starter usecases, so that we didn't end up reinventing every wheel for every client and, almost invariably, overrunning their AI projects' go-live schedules.
We set up a small team of just 3 senior AI engineers- aligned to our engineering teams, and started building these "ontology boxes". Initially we chose to build 1 box for BFSI- one of our largest verticals- also most mature in terms of AI adoption, and 1 box for another very interesting corporate function- the Legal team- that gave a unique problem of contract validation, to us.
Given it was a brand new idea, and we were building it from scratch, it took the small team almost 6 months to come up with a Minimum Viable Product, i.e. a working ontology for the legal functional domain, to start with. The Legal team was deeply engaged with the small engineering team throughout the MVP cycle, because their domain knowledge, experience and interpretations were the most crucial inputs for the engineers to capture and build upon. They were delighted when the prototypes in production proved that implementations of standard AI usecases like contract intelligence could be accelerated from years to weeks!
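A hypothetical miniature of such an "ontology box" for the legal/ contracts domain might look like the sketch below. The concepts, synonyms and clause text are invented for illustration, not drawn from the actual engagement:

```python
# A miniature, hypothetical "ontology box" for the legal/ contracts domain:
# concepts, their parent types, and synonyms used to normalize clause text.
LEGAL_ONTOLOGY = {
    "indemnification": {"is_a": "liability_clause",
                        "synonyms": ["hold harmless", "indemnify"]},
    "limitation_of_liability": {"is_a": "liability_clause",
                                "synonyms": ["liability cap", "cap on damages"]},
    "termination_for_convenience": {"is_a": "termination_clause",
                                    "synonyms": ["terminate without cause"]},
}

def classify_clause(text):
    """Map raw clause text to an ontology concept via synonym matching."""
    t = text.lower()
    for concept, info in LEGAL_ONTOLOGY.items():
        if concept.replace("_", " ") in t or any(s in t for s in info["synonyms"]):
            return concept, info["is_a"]
    return None, None

concept, parent = classify_clause(
    "The Supplier shall indemnify and hold harmless the Client")
print(concept, parent)  # indemnification liability_clause
```

A production ontology box would carry far richer relations and statistical matching, but the shape - reusable domain knowledge that every new client project can query instead of rebuild - is the same.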
Gartner has recently predicted that by 2025, data storytelling will become a multi-billion dollar AI-business activity in itself.
Well, the future is much faster than we think, as Diamandis & Kotler noted in their book of nearly the same name. Data storytelling has matured as a practice among the top 1-5% of early, most advanced adopters of AI & ML, e.g. in some of the topnotch financial services organizations. A handful of AI services start-ups have also begun strongly differentiated journeys down this path less travelled, in the past 1-2 years.
AI storytelling is more than just data storytelling (i.e. complex visualizations) or visual storytelling (i.e. generative AI usecases producing text narratives explaining pictures/ images)! Data visualization technologies have become so mature now that they are not even considered innovative or fashionable enough to speak about in the AI/ data sciences geek world. Even dynamic/ customizable data visualizations have become relatively commonplace capabilities now.
Data storytelling - extending data science into the art form of narrative generation, or storytelling - presents data, analytics, AI-ML model outputs & recommendations etc. in story form. This process of turning facts and figures into contextual and relatable stories can engage the targeted audiences, esp. business users, much better. Data storytelling typically includes: 1) creating a narrative structure for data/ model outputs/ inferences, 2) generating contextually meaningful narratives on data & analytics output, 3) exploring stories from datasets in multiple facets/ aspects, 4) tuning the data-driven narratives to the form that the targeted audience understands best. Data storytelling can lead the target audiences progressively, e.g. through multiple layers, based on their exploratory curiosity, requirements, and expectations.
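The four steps above can be sketched in miniature. The model output, regions and audience types below are hypothetical; the point is step 4, tuning the same numbers into different narratives for different audiences:

```python
# Hypothetical model output: predicted quarterly churn per region.
model_output = {"EMEA": 0.042, "APAC": 0.071, "AMER": 0.035}

def data_story(output, audience="executive"):
    """Turn model numbers into a short narrative tuned to the audience."""
    worst = max(output, key=output.get)
    best = min(output, key=output.get)
    if audience == "executive":
        # Steps 1-2: a narrative structure and a contextual headline.
        return (f"Churn risk is highest in {worst} ({output[worst]:.1%}) "
                f"and lowest in {best} ({output[best]:.1%}); "
                f"{worst} needs retention focus.")
    # Analyst view (step 3): expose every figure, facet by facet.
    return "; ".join(f"{r}: predicted churn {p:.1%}" for r, p in output.items())

print(data_story(model_output))
print(data_story(model_output, audience="analyst"))
```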
Data storytelling, in addition to sitting on the output side of several AI-ML models, can also leverage AI on the input side, e.g. as a dynamic data storytelling service offering. It can leverage generative AI techniques and natural language generation modules (e.g. Quill from Narrative Science) to generate timely, relevant and contextual stories based on data and ML model outputs.
This is again a blue-ocean opportunity both for large global end-user organizations and their service provider partners, as this one too is based on integrating their experiential knowledge-bases gathered through decades of operations. This is again NOT a start-up type opportunity, for the same reasons stated in #1.
In line with what 5 of the largest and otherwise fiercely competitive US banks did in their TruSight* initiative, pattern-bases, especially for fraud detection/ prediction or cyber-attacks and attempts, are extremely valuable and highly monetizable intellectual properties that large organizations are sitting on. This is primarily because the "Cost of Not Doing It" is very high for the end-user organizations as well as their service partners. Any new fraud/ anomaly, or even an attempted fraud or cyber-attack, that is at least 50-70%+ similar to previous fraud or attack patterns can be predicted, and hence prevented, if these pattern-bases are made available as a service.
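The "50-70%+ similar" idea can be illustrated with a simple set-overlap measure. A minimal sketch, with invented fraud patterns and indicator flags, using Jaccard similarity as a stand-in for whatever matching a production pattern-base would use:

```python
# Known fraud patterns as sets of hypothetical indicator flags.
KNOWN_PATTERNS = {
    "card_testing": {"many_small_txns", "new_card", "foreign_ip", "night_time"},
    "account_takeover": {"password_reset", "new_device", "foreign_ip",
                         "large_transfer"},
}

def jaccard(a, b):
    """Set-overlap similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def match_patterns(event, threshold=0.5):
    """Flag an event whose indicators overlap >= 50% with a known pattern."""
    return {name: round(jaccard(event, pat), 2)
            for name, pat in KNOWN_PATTERNS.items()
            if jaccard(event, pat) >= threshold}

event = {"password_reset", "new_device", "foreign_ip"}
print(match_patterns(event))  # {'account_takeover': 0.75}
```

Served behind an API, a shared pattern-base of this shape lets a new attack be recognized from its resemblance to attacks seen elsewhere.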
Again, only one of the Big 4 has demonstrated capabilities in this area (witnessed in 2020), showing what it built by way of knowledge-pattern-bases as an offering somewhat similar to this. But even that organization is not delivering it as a Service; that is, it isn't monetizing the knowledge-patterns as consolidated, stand-alone revenue opportunities. This offering is of great value to large end-user corporate clients, who run high risks of financial fraud or legal liability due to potential cyber-attacks on secure financial and PII data.
[* TruSight was founded by a consortium of leading financial services companies, including American Express, Bank of America, Bank of New York Mellon, JPMorgan Chase, and Wells Fargo ]
This blue-ocean opportunity can be realized as an extension of the previous design/ idea #2, further enhanced by AI's unique property of being ALWAYS ON. Machines never sleep and can be programmed (e.g. using real-time unsupervised or reinforcement learning) to NEVER STOP learning, keeping themselves constantly updated and predicting risks/ threats based on their latest learnings and most recent, up-to-date models. So it's the best of both worlds: combining the knowledge of all pre-existing risk patterns (as mentioned in #2) with always-on real-time learning capabilities, so that risk profiles are constantly updated and prediction accuracy is not degraded by stale data or latency.
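The always-on, never-stop-learning behaviour can be sketched with an online statistic that updates on every observation, so scoring always reflects the latest data. This toy example uses Welford's running mean/ variance on invented transaction amounts; a production system would use far richer models:

```python
import math

class OnlineAnomalyScorer:
    """Always-on sketch: running mean/ variance (Welford's algorithm)
    updated on every observation, so the risk profile is always current --
    no batch retraining, no stale model."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        """Fold one new observation into the running statistics."""
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def zscore(self, x):
        """How many standard deviations x sits from everything seen so far."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(x - self.mean) / std if std else 0.0

scorer = OnlineAnomalyScorer()
for amount in [100, 102, 98, 101, 99, 103]:  # normal transaction stream
    scorer.update(amount)
print(scorer.zscore(500))  # a 500-unit transaction scores far outside normal
```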
Hence it follows that all risks that are high-impact, high-urgency, and require continuous monitoring and alerting are top candidate usecases for autonomous AI applications and systems. These can be business risks, operational risks, or technical risks (e.g. hybrid cloud performance vs. opex optimization). Irrespective of classification, the AI capabilities built for Risk Management as a Service will be horizontally applicable to all types of risks.
The business case for end-users of this AI offering should be based on the size and time-sensitivity of the risk/ threat and the subsequent mitigation actions.
Again, this is a huge opportunity potentially and visibly untapped by large service provider organizations, many of which have mature risk management service offerings but haven't explored their AI-empowerment in an integrated, full-blown manner. Only EY has demonstrated capabilities in this space, thanks to their highly mature risk management knowledge-base and frameworks and their progressive and innovative approaches to leverage this risk knowledge to deliver highly differentiated value to its clients.
Genpact has initiated, led and mastered the pretrained accelerator space. So has Accenture, for certain domains and functions, with similar approaches but slightly different execution and communication narratives. But offering the accelerators themselves - the ontologies, lexicons, APIs, even AI-infra sandboxes - all bundled by verticals/ functions in "As a Service" models is still something of a blue ocean.
Till 2019, we could understand why: 90%+ of AI projects, irrespective of vertical, were stuck in POCs/ pilots. Since they weren't running in production, getting the pretrained models tested and validated for real-world performance obviously wasn't possible.
But from 2020 onwards, due to rapid adoption of autonomous technologies for remote work support and service delivery (the COVID effect), productionization of AI-ML in business and IT services has become a mainstream reality. Now is the time to build tried and tested pretrained accelerator bundles by industry, sector and function, and offer them as services. This will further speed up implementations, bring in overall best practices and service providers' experiential knowledge by client domain, and speed up AI's time-to-value curves exponentially, especially for large clients with economies of scale at work. Their RoI and breakeven from large AI programs will become much better and faster. The business case is right there, for the large end-users and their service partners.
Explainability and transparency of AI-ML models and of the decisions and actions of autonomous systems have become mandatory requirements for AI, not just because of regulatory constraints but also to scale up adoption by business users by ensuring trust and confidence.
Technically, a lot of progress has been made recently on XAI algorithms and the measurability of trust and fairness scores. But for end-user organizations keen to adopt AI rapidly, getting access to the skillsets required for XAI isn't an easy ask. For such organizations, XAI as a Service (XAIaaS) is a good starting point for building credibility and capability into their AI solutions.
Start-ups working in the XAI and fair-AI space have a big role to play here, in collaboration with tech biggies like Google, IBM, Microsoft and AWS, who have been working extensively in the XAI space in recent years. AWS SageMaker Clarify, for instance, offers a good starting point for XAI as a Service for ML. Lean AI start-ups can help large end-user organizations improve their XAI capabilities, leveraging these packages both on top of existing AI solutions and while building newer ones.
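As a flavour of what an XAI service delivers, here is a minimal, model-agnostic sketch of permutation importance, one of the simpler explainability signals (and one of the ideas behind tools like Clarify). The "model" and its features are invented for illustration:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical "model": an approval score driven mostly by income, barely by age.
def model(income, age):
    return 0.9 * income + 0.1 * age

data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]
truth = [model(i, a) for i, a in data]

def permutation_importance(feature_idx):
    """Mean error introduced by shuffling one feature's column:
    a simple, model-agnostic explainability signal."""
    rows = [list(r) for r in data]
    column = [r[feature_idx] for r in rows]
    random.shuffle(column)
    for r, v in zip(rows, column):
        r[feature_idx] = v
    preds = [model(i, a) for i, a in rows]
    return sum(abs(p - t) for p, t in zip(preds, truth)) / len(truth)

# Income should matter roughly 9x more than age, and the scores reflect that.
print("income importance:", round(permutation_importance(0), 3))
print("age importance:   ", round(permutation_importance(1), 3))
```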
With the advent of many AI maturity models from global consulting companies and process audit firms, AI audit and governance assurance as a Service is a blue ocean that is already seeing some early ships sailing. Assurance of data fairness (i.e. debiased/ bias-mitigated training data), quality-assured pre-curated datasets, and trust measurement as a Service are the newer service offerings in this space.
Amazon AI Conclave, and the Union Budget Next Monday Morning: Emergence of a Perfect Equation?
Amazon concluded its virtual AI Conclave late last week. This was the 4th edition of the Amazon AI Conclave; COVID guidelines forced it to go virtual this year.
While I was writing this blog on my key takeaways from the Amazon AI Conclave, I was also listening to the live telecast of the budget, presented by our Finance Minister. She was clearly laying out the financial and economic priorities of our country, to push it towards a growth path hopefully faster than the ones some of our smaller, and hence nimbler, Asian neighbors and friends (e.g. Bangladesh and Vietnam) have already started enjoying.
A key benefit of my well-practiced multi-tasking suddenly dawned on me: connecting these dots. The key messages from the Amazon AI Conclave and the Union Budget priorities are remarkably in sync! Whatever the honorable FM was saying, I had heard the technical equivalents of those initiatives and narratives just two days before, at the Amazon AI Conclave.
Directionally, both were spot on. However, as the experts kept noting in their budget analyses, the devil lies in the details, and in execution. Great strategies have failed far too often due to unimaginative, conservative implementations. It is one thing to say, and another thing entirely to do, and to achieve demonstrable, proven outcomes.
Bridging this mission-critical gap between ambitious visions and average, half-baked execution is the task at hand, both for the country and for AWS, if India and the world are to become the connected, ubiquitous global powerhouse of AI, data and cloud: the three key pillars of the Brave New World.
While the proven success stories were many, there are obviously even greater opportunities to explore and expand on.
Net net, using India as the "Most Complex Problem" statement and context, Amazon has more than enough deep-tech first-mover capability to build and test these prototypes for our immediate future here, and then scale them up quickly for the world.
Technology-wise, AWS and the other hyperscalers are the super-integrators of elastic data, algorithms and infrastructure, by way of cloud and federated & edge AI. As the global IT services hub, India has a huge pool of service knowledge and curatable datasets (maskable for PII), and trained, agile tech manpower that can willingly and quickly switch to AI-ML capabilities.
If this is not an example of a Perfect AI demand-supply equation, then what is?
Finally, intelligence of all kinds, artificial or human, has prevailed. The new President-elect of the United States speaks for science, rationality and fairness. A huge breather for IT and tech companies in product, platform, services and adjacent domains.
As the largest economy in the world, and home to some of the world's best academic and research institutions, the US, through its social and policy stability, influences the quality of life and access to knowledge and technology for the whole world.
Interesting shifts are already visible in the tech world: top tech companies like Microsoft, Google and IBM have CEOs of Indian origin. Now Kamala Harris brings fairness to political leadership too. Balancing out the over-Indian flavour in these curries, interesting pivots are seen in some of the big Indian ITSPs switching to non-Indian leadership at the CEO level.
A fantastic amalgamation of East-meets-West cultures and value systems: some top tech product and platform companies now have Indian leaders, while Indian services companies have non-Indian ones! With Biden-Harris ushering in continuity in visa and H1B policies favoring the human-talent spectrum, it is again proven that fairness, merit and creative pursuit in any field cannot be confined within boundaries of gender, caste, creed, race or nation. Men (and women) are born free, and we need not bind ourselves in our own minds' chains as we grow up.
The tech and commercial worlds are future-proofing themselves, much as the natural world does, even if in an apparently ruthless but inevitable manner. Well-governed, ESG-aware AI-automation policies, at national and international levels as well as across enterprises and business domains, will balance out the human-nature equations too. Let us nurture that hope and work towards realizing it for our next-gens.
On 2nd November, TOI, Moneycontrol and others carried the story of potential overhauls at Wipro under its new leadership: Thierry, a BFSI veteran from Capgemini, a leader with the transparency and clarity typical of many European leaders (not stereotyping, just a generic observation :)). These articles also hinted at the possible change-paths Thierry may take for Wipro, similar to what Brian did for CTS some time ago.
Observationally, it looks like it. To improve margins, not only the TWITCH firms but even the global companies are ruthlessly chopping off the layers of legacy corporate fat that have made several of them rather ANTI-AGILE, if anything.
Finally, the entire SP industry seems to be refocusing on the right metric, GPR-value (e.g. direct or indirect Growth/Profit/Revenue contribution per employee), and is applying AI and intelligent automation within its own house first, as apt instruments of CORPORATE LIPOSUCTION!
Since the Y2K days, when KLOCs of COBOL were being updated by sheep-herds of trained engineers (with layers of managers as the shepherds), the whole SP industry has focused on T&M: throwing more bodies at every client problem!
Sadly, even now, many research firms still weigh brawn over brain: the quantity of manpower "certified on tool X", rather than the quality of talent, or the number of IPs these new sheep-herds are producing.
This is AI's new double jeopardy (and double accounting) problem, creating new layers of confusion matrix and resultant inefficiencies. This time: still too many men, PLUS too many machines of varied IQ, thrown haphazardly into the already-horrifying mix. Cutting this unnecessary flab is the critical, lifesaving surgery that many companies caught in this new conundrum need at this hour.
However, the intent defines the extent. Hence, instead of reading Thierry's change attempts at Wipro at face value, as an exact replication of what Brian did for CTS, we should look deeper. Intent-wise, Thierry is focusing on simplification and easily measurable, demonstrable VALUE metrics, whereas Brian decidedly focused on COST-CUTTING. Their leadership challenges are different; the company cultures and realities are very different. What works at one won't get the other to the same place. CTS, while becoming more cost-efficient, also lost many of the great talents, experienced business leaders and client partners it was known and strongly differentiated by. Given Wipro's core leadership legacy of Premji and Rishad, and the loyalty of some of its finest leadership minds to the owner family and the enterprise, that will surely not happen at Wipro, no matter what changes are brought in.
Client projects & research from Harvard, Gartner, McKinsey
AI & Automation strategy formulation at corporate level: STEP
The 90-10 rule of AI: AI360 Value Delivery Model
Solution frameworks, technology maturity models, benchmarking
AIOS: Architecture of an AI-powered operating system for AI
The LOCO framework for fluidic or liquid AI
Why AI-first needs Cloud-first
Swimlanes for AI solutions planning
Copyright © 2021-2023 AISWITCH - All Rights Reserved.
email@example.com (Research & Advisory Services)
firstname.lastname@example.org (Research partnership)