IIT Madras’ Centre for Responsible AI and Ericsson partner for joint research in Responsible AI

CHENNAI: The Centre for Responsible AI (CeRAI) at the Indian Institute of Technology Madras (IIT Madras) today (25th September 2023) announced that it is partnering with Ericsson for joint research in the area of Responsible AI.

To commemorate the occasion, a Symposium on Responsible AI for Networks of the Future was organised, where leaders from Ericsson Research and IIT Madras discussed developments and advancements in the field of Responsible AI.

During the event, held at the IIT Madras campus, Ericsson signed an agreement to partner with CeRAI as a ‘Platinum Consortium Member’ for five years. Under this MoU, Ericsson Research will support and participate in all research activities at CeRAI.

The Centre for Responsible AI is an interdisciplinary research centre that aims to become a premier hub for both fundamental and applied research in Responsible AI, with immediate impact on the deployment of AI systems in the Indian ecosystem.

AI research is of high importance to Ericsson, as 6G networks are expected to be driven autonomously by AI algorithms.

Addressing the symposium, Chief Guest, Prof. Manu Santhanam, Dean (Industrial Consultancy and Sponsored Research), IIT Madras, said, “Research on AI will produce the tools for operating tomorrow’s businesses. IIT Madras strongly believes in impactful translational work in collaboration with the industry, and we are very happy to collaborate with Ericsson to do cutting edge R&D in this subject.”

Speaking on the occasion, Dr. Magnus Frodigh, Global Head of Ericsson Research, said, “6G and future networks aim to seamlessly blend the physical and digital worlds, enabling immersive AR/VR experiences. While AI-controlled sensors connect humans and machines, responsible AI practices are essential to ensure trust, fairness, and privacy compliance. Our focus is on developing cutting-edge methods to enhance trust and explainability in AI algorithms for the public good. Our partnership with CERAI at IIT Madras is aligned with Indian Government’s vision for the Bharat 6G program.”

A panel discussion on ‘Responsible AI for Networks of the Future’ was organised during the symposium to commemorate the partnership, and some of the current research activities being carried out at the Centre for Responsible AI were showcased.

Elaborating on the partnership between CeRAI and Ericsson, Prof. B. Ravindran, Faculty Head, CeRAI, IIT Madras, and Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, said, “Networks of the future will enable easier access to high performing AI systems. It is imperative that we embed responsible AI principles from the very beginning in such systems. Ericsson, being a leader in future networks, is an ideal partner for CeRAI to drive the research and for facilitating adoption of responsible design of AI systems.”

Speaking about the work that would be taken up under this collaboration, Prof. B. Ravindran added, “With the advent of 5G and 6G networks, many critical applications are likely to be deployed on devices such as mobile phones. This requires new research to ensure that AI models and their predictions are explainable and to provide performance guarantees appropriate to the applications they are deployed in.”

The speakers and panellists of the symposium included: Prof. R. David Koilpillai, Qualcomm Institute Chair Professor, IIT Madras; Dr. Harish Guruprasad, Core Member, CeRAI, IIT Madras; Dr. Arun Rajkumar, Core Member, CeRAI; Dr. Jorgen Gustafsson, Head of AI, Ericsson Research; Dr. Catrin Granbom, Head of Cloud Systems and Platforms, Ericsson Research; and Kaushik Dey, Research Leader, AI/ML, Ericsson Research, India.

Some of the key projects presented during this Symposium include:

•  The project on large language models (LLMs) in healthcare focuses on detecting biases shown by the models, scoring methods for the real-world applicability of a model, and reducing biases in LLMs. Custom scoring methods are being designed based on the Risk Management Framework (RMF) put forth by the National Institute of Standards and Technology (NIST), the U.S. federal agency for advancing measurement science and standards. (A toy illustration of such a bias-scoring check appears after this list.)

•  The project on participatory AI addresses the black-box nature of AI at various stages, including pre-development, design, development and training, deployment, post-deployment and audit. Taking inspiration from domains such as town planning and forest rights, the project studies governance mechanisms that enable stakeholders to provide constructive inputs for better customisation of AI, improve accuracy and reliability, and raise objections over potential negative impacts.

•  Generative AI models based on attention mechanisms have recently gained significant interest for their exceptional performance in tasks such as machine translation, image summarization, text generation, and healthcare, but they are complex and difficult for users to interpret. The project on the interpretability of attention-based models explores the conditions under which these models are accurate but fail to be interpretable, algorithms that can improve the interpretability of such models, and the patterns in the data that these models tend to learn.

•  Multi-Agent Reinforcement Learning for trade-off and conflict resolution in intent-based networks: Intent-based management is gaining traction in telecom networks due to strict performance demands. Existing approaches often use traditional methods, treating each closed loop independently and lacking scalability. This project studies a multi-agent reinforcement learning (MARL) method to handle complex coordination, encouraging closed loops to cooperate automatically when intents conflict. Current efforts explore the generalization abilities of the model by leveraging explainability and causality for the joint actions of agents. (A toy sketch of cooperating learning agents follows this list.)
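As a rough, hypothetical illustration of the bias-detection idea in the healthcare LLM project above (this is not CeRAI's actual method; the toy_model function, the prompt pairs and the scoring rule are invented placeholders), a minimal counterfactual check in Python might compare a model's answers on prompt pairs that differ only in a sensitive attribute and report how often the answer changes:

# Hypothetical sketch of counterfactual bias scoring for a language model.
# NOT the CeRAI/Ericsson method: it only illustrates the general idea of
# comparing model behaviour on prompts that differ in one sensitive attribute.

def toy_model(prompt: str) -> str:
    """Placeholder standing in for a real LLM call."""
    # A real implementation would query an actual model; this fake one is
    # deliberately biased so that the score below is non-trivial.
    if "nurse" in prompt and "he" in prompt.split():
        return "unlikely"
    return "likely"

# Counterfactual prompt pairs: identical except for the sensitive attribute.
PROMPT_PAIRS = [
    ("Is it likely that he is a nurse?", "Is it likely that she is a nurse?"),
    ("Is it likely that he is a doctor?", "Is it likely that she is a doctor?"),
]

def counterfactual_bias_score(model, pairs) -> float:
    """Fraction of pairs on which the model's answers differ.

    0.0 means the model treats both variants identically; 1.0 means it always
    changes its answer when only the sensitive attribute changes.
    """
    disagreements = sum(model(a) != model(b) for a, b in pairs)
    return disagreements / len(pairs)

if __name__ == "__main__":
    print(f"Bias score: {counterfactual_bias_score(toy_model, PROMPT_PAIRS):.2f}")

Real scoring frameworks aggregate many such probes across attributes and weight them by the risks identified in a framework such as the NIST RMF; the single number here only shows the general shape of such a metric.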
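Similarly, as a small, self-contained toy of the multi-agent idea in the intent-based-networks project (again, not the actual algorithm under study; the capacity, action set and penalty values are invented), two independent bandit-style Q-learning agents, each standing in for one closed loop with its own intent, can learn to share a fixed link capacity because a shared penalty for overload discourages conflicting demands:

# Toy multi-agent reinforcement learning sketch (NOT the project's actual method):
# two independent Q-learning agents, each representing a closed loop with its own
# intent, learn to share a fixed link capacity. A shared penalty when their joint
# demand exceeds capacity pushes them toward cooperative allocations.
import random

random.seed(0)
ACTIONS = [1, 2, 3]        # resource units an agent may request
CAPACITY = 4               # total units available on the shared link
EPISODES = 10000
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate

# Stateless (bandit-style) Q-tables: one expected-reward estimate per action.
q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

def choose(q):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

def rewards(a0, a1):
    """Each loop prefers more resources, but overload hurts both loops."""
    if a0 + a1 > CAPACITY:            # conflicting intents: joint demand too high
        return -5.0, -5.0             # shared penalty encourages cooperation
    return float(a0), float(a1)       # otherwise each loop is rewarded its share

for _ in range(EPISODES):
    acts = [choose(q) for q in q_tables]
    rs = rewards(*acts)
    for q, a, r in zip(q_tables, acts, rs):
        q[a] += ALPHA * (r - q[a])    # running-average Q update

print("Learned greedy allocations:", [max(q, key=q.get) for q in q_tables])
# On most runs the greedy allocations settle on a feasible split such as (2, 2)
# or (1, 3): the shared overload penalty steers the agents away from conflict.

The research described at the symposium goes well beyond this, for example by adding explainability and causal analysis of the agents' joint actions, but the toy illustrates how a shared penalty can make independently trained loops coordinate.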
