AI is becoming an integral part of our lives, both in small ways in our daily routines and on larger scales in corporate applications. However, one doesn’t need to look far to see rising concerns over the potential negative impacts of AI. Across many films – I, Robot, Ex Machina, Avengers: Age of Ultron, etc. – AI has been portrayed as having a “dark side”, and these portrayals speak to genuine public concerns. While many of these perceptions may be inflated, what remains true is the need for solid AI governance to instill confidence amongst stakeholders.
IMDA and PDPC have jointly released two editions of the Model AI Governance Framework, aiming to develop a trusted digital ecosystem where organisations can benefit from tech innovations while consumers are confident to adopt and use AI. The second edition, released in January 2020, refines the original framework for greater relevance and usability. It highlights two guiding principles of responsible AI and four areas to consider: Internal Governance Structures and Measures; Determining the Level of Human Involvement in AI-Augmented Decision-Making; Operations Management; and Stakeholder Interaction and Communication.
Complementing the framework is a collection of examples from local and international organisations, demonstrating how they have implemented or aligned their AI governance practices with all sections of the Model Framework and effectively put accountable AI governance practices in place.
TAIGER is proud to be featured in the second volume alongside Google, Microsoft and the City of Darwin (Australia). The inclusion affirms how TAIGER helps its clients leverage AI to address operational inefficiencies.
“The nature of technology adoption is that technology and transparency has to go hand in hand. As a tech vendor, it is key for our technology to be explainable and that we constantly review our development processes. If we can’t explain it, we can’t fix it. If we can’t fix it, we can’t scale it, and scalability is important to us and to our clients.” – David Padgett, CCO at TAIGER.
Putting in place practices aligned with the Model AI Governance Framework helps TAIGER assure customers that it understands its technology and takes measures to ensure that its AI models are explainable, predictable and transparent. With AI still evolving, TAIGER believes it is important to continuously strengthen its governance structure to enhance the trustworthiness of its AI models. Additionally, TAIGER’s restructuring efforts to put in place a proper internal governance structure – one that improves the explainability of its AI models and communicates the implications and potential risks of its AI solutions – have benefited the company in many ways. In certain cases, adopting responsible AI governance practices has helped TAIGER win client projects from competitors: even when working with relatively new AI solutions, customers appreciate transparent and structured processes, both in implementation and in management.