The interesting dichotomy of AI in 2021: driving digital personalization while also delivering standardized, consistent, secure and regulation-compliant experiences!
Since 2018, and peaking in 2020 amid the rapid proliferation of AI in the post-COVID world of remote, virtualized workplaces and in data-sensitive sectors like BFSI and healthcare, most companies have been using AI to improve process performance and reduce costs.
But the most striking dichotomy in AI usage emerges around ethics. Ethics in AI has become a key issue, especially as companies apply AI and intelligent automation across their value chains, supply chains and partner ecosystems. Towards this end:
International and global use cases require policy adaptability to accommodate multiple regional idiosyncrasies, as US companies operating in the European Union have discovered. Here are some examples of current key government initiatives on ethical AI.
The Australian government, together with CSIRO, released a report outlining a framework for ethical AI. It states that any AI system must:
1. Control any adverse or unethical effects of AI so that the net benefits to citizens outweigh the costs and risks of developing and adopting AI and its associated technology stacks
2. Demonstrate regulatory and legal compliance with international and governmental guidelines to prevent AI systems from harming or deceiving citizens
3. Ensure the protection of citizens’ private data and prevent data breaches causing financial, psychological or professional harm
4. Develop controls to monitor the fairness and inclusivity of data systems and limit the damage from biased algorithms. The scope of the term "bias" has expanded considerably in recent years.
5. Ensure adequate transparency, responsibility and accountability of governments, organizations and people by ensuring that identifiable data is only used after gaining the consent of each citizen.
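The monitoring controls described in point 4 above can start with simple group-fairness metrics. As a minimal, hypothetical sketch (not a production audit tool, and not part of any cited framework), the function below computes a demographic parity gap: the largest difference in positive-prediction rates between groups defined by a protected attribute.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates).

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. a protected attribute)
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Example: a model that approves 75% of group "A" but only 25% of group "B"
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A governance process would track such a metric over time and trigger review when the gap crosses an agreed threshold; the metric itself is standard, but the threshold is a policy decision, not a technical one.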
The Treasury Board Secretariat (TBS) of Canada is working towards the responsible use of AI in government. It is working with the international AI community to facilitate an open online consultation and foster collaboration on international guidelines for the ethical use of data systems. The federal government wishes to lead by example by adopting healthy ethical practices in its own use of artificial intelligence, and has outlined a set of planned measures to ensure the proper and ethical use of AI.
The American AI Initiative launched by President Donald Trump labels itself “AI with American Values” in seeking to protect and preserve freedom, human rights, institutional stability, the rule of law, right to privacy, intellectual property rights and equitable opportunities for all.
The right to privacy is not stated and defined as explicitly in the US Constitution as it is in the EU, where it is clearly mandated, for example by the GDPR. In keeping with that philosophy, in February 2020 the US Department of Defense adopted a set of ethical principles to govern the use of AI. These seek to build responsibility, equitability, reliability, traceability and governability into digital technology used not only for military and defense purposes but also for commercial, for-profit objectives.
It aims to maintain the strategic global leadership of the US and respect the rules-based international order. However, US corporations have encountered difficulties when assuming that compliance with US law confers compliance with laws in other regions, for example in the European Union, where the right to privacy is strictly codified in the General Data Protection Regulation (GDPR).
The government of the United Kingdom defined the ethical framework for designing responsible digital systems as those which:
1. Respect the dignity of individuals and allow people to form open, meaningful and inclusive connections.
2. Protect social values and social justice, and prioritise the public interest by providing for the wellbeing of all.
The Centre for Data Ethics and Innovation was set up by the government in 2018 to help navigate the ethical challenges posed by rapidly evolving digital systems. For this purpose, the Alan Turing Institute, the Office for Artificial Intelligence and the AI Council were also established. As the UK leaves the EU, these bodies are expected to publish new recommendations to replace, and possibly enhance, the EU's General Data Protection Regulation (GDPR).
Compared to other countries, Singapore has yet to develop a concrete strategy for incorporating ethical practices into AI systems. In June 2018, Singapore announced the establishment of an AI ethics advisory council on the development and deployment of artificial intelligence, to guide the government in building ethical standards and codes of conduct for businesses. We can expect this body to produce guidance or regulations in the near future, but whether its requirements will resemble those of the other bodies referred to in this document is as yet unknown.
Too little, too late? Many AI tech and service providers are still in deep slumber...
Top government agencies across more and more countries and regions have matured their citizen data-protection and data-rights legislation, as well as their AI fairness, usage policies and governance frameworks. Empirically, the same cannot be said of the service providers and technology supply side of AI in well over 90% of instances.
The technology and service providers are mostly falling far behind the AI ethics maturity curve. A few have moved early, but these are exceptions rather than the norm. If ethical AI considerations were sufficiently mainstream, most service-provider companies would already have pivoted their AI journeys towards fairer, more secure, data-assured and uniform development and adoption practices. That is very far from the AI reality on the ground.
Rise of the AI Ethics Debt for End-user Companies
As always, the end-user client-side organizations bear the brunt of the relative immaturity, myopic views and strategic unpreparedness, i.e. the supply-side capability constraints, of most of their service-provider partners. Even at top AI technology organizations like Google, we have seen raging debates on questions of fairness and ethics. No end-user organization worth its salt wants to adopt an AI solution or tech stack that does not have ethics and data governance embedded by default. Unfortunately, this is NOT the supply-side reality, even in the most popular AI stacks, from TensorFlow packages for image classification to the training corpora and parametric language models of transformer-based systems like BERT. The ethical maturity of open-source stacks is even more inconsistent and harder to audit, and hence even more questionable.
This gaping demand-supply chasm in ethics-assured AI technology is slowly growing into a substantial AI ethics debt, primarily for end-user client companies, since it may expose them to legal liability: they are the ones directly facing and impacting customers, users and citizens.
The problem is that the solution does not lie in technology alone, e.g. creating autonomous compliance-checker bots for every AI use case or sensitive enterprise-data application. It needs to be tackled with a three-pronged approach:
1. Building strong data and AI governance frameworks, for both end-users and their SPs, leveraging global and region/country-specific regulations as well as partners' capabilities and the top-leadership experience of end-user clients
2. Building and delivering mandatory training and PRACTICE certification programs (NOT just the tech dev certifications) for all parties involved in AI development, deployment and run-time
3. Designing and delivering organizational culture-change interventions to ensure that all business users and tech developers are aware of their AI and data-ethics responsibilities and apply them uniformly, round-the-clock, in all their AI initiatives
These initiatives can cover both the internal ethical aspects and the external audit-compliance aspects, which in turn can reduce the AI and data ethics debt for end-user and SP organizations alike.
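Even though technology alone is not the answer, the governance frameworks in the first prong can still be backed by simple automated checks. The sketch below is a hypothetical illustration of one such control, a consent audit that flags records used for a purpose the data subject never consented to; the function name and the 'consented_purposes' field are assumed conventions, not a standard schema or any regulator's requirement.

```python
def audit_consent(records, purpose):
    """Return the ids of records whose use for `purpose` lacks
    recorded consent. Each record is a dict carrying a set of
    purposes the data subject has consented to (field name is a
    hypothetical convention for this sketch).
    """
    violations = []
    for rec in records:
        if purpose not in rec.get("consented_purposes", set()):
            violations.append(rec["id"])
    return violations

# Example: u2 consented only to billing, so using their data
# for analytics would be flagged for review.
records = [
    {"id": "u1", "consented_purposes": {"billing", "analytics"}},
    {"id": "u2", "consented_purposes": {"billing"}},
]
flagged = audit_consent(records, "analytics")  # ["u2"]
```

Such a check is only one control inside a wider governance process: what counts as a valid purpose, and what happens to flagged records, are decisions for the framework, training and culture-change prongs above, not for the code.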