AI, ML tools used by brokers may come under Sebi lens

June 23, 2025 12:33 IST

A view of the new building of the Securities and Exchange Board of India (SEBI) Head Office at Bandra Kurla Complex in Mumbai. Photograph: ANI Photo

The Securities and Exchange Board of India (Sebi) has proposed guidelines for the supervision and governance of artificial intelligence (AI) and machine learning (ML) applications and tools used by market participants. These guidelines aim to specify procedures and control systems to ensure responsible usage.

The proposed guidelines cover several key parameters, including governance, investor protection, disclosure, testing frameworks, fairness and bias, and data privacy and cybersecurity measures.

Currently, AI and ML are widely used by stock exchanges, brokers, and mutual funds for various purposes such as surveillance, social media analytics, order execution, KYC processing, and customer support.

Sebi has proposed that market participants disclose their use of AI and ML tools in operations like algorithmic trading, asset management, portfolio management, and advisory services. Disclosures should include information on risks, limitations, accuracy results, fees, and data quality.

Market participants using AI and ML will need to designate senior management with technical expertise to oversee the performance and control of these tools. They must also maintain validation, documentation, and interpretability of the models.

Additionally, they will be required to share accuracy results and audit findings with Sebi on a periodic basis.

The market regulator has emphasised the importance of defining data governance norms, including data ownership, access controls, and encryption. It has also noted that AI and ML tools should not favour or discriminate against any group of customers.

"Market participants should think beyond traditional testing methods and ensure continuous monitoring of AI/ML models as they adjust and transform," Sebi said.

In terms of cybersecurity and data privacy, Sebi has highlighted risks such as the use of generative AI to create fake financial statements, deepfake content, and misleading news articles.

To mitigate these risks, Sebi has recommended human oversight of AI systems, monitoring of suspicious activities, and the implementation of circuit breakers to manage AI-driven market volatility.

Sebi formed a working group to prepare these guidelines and address concerns related to AI and ML applications. The regulator has suggested a 'lite framework' for business operations that do not directly impact customers.

Sebi has invited public comments on the proposals until July 11.
