Indian Government Is Developing a Voluntary Code of Conduct for AI Businesses
According to a media report, the government is developing voluntary rules of ethics and conduct for businesses to follow while using artificial intelligence (AI) and generative AI.
According to an official, the ethics and conduct guidelines, being created by the Ministry of Electronics and Information Technology, will resemble "informal directive principles" for businesses, particularly those developing large language models (LLMs) or utilising data to train AI and machine learning models. Dedicated AI legislation, the official added, is still some way off. "We are currently trying to get the industry on board with a common set of principles and guidelines and are talking to all stakeholders to see what can be included," the official stated.
Scheduled to Be Released Early Next Year
Another official stated that the voluntary code of conduct might be made public early next year. As part of it, the IT ministry may publish broader guidelines covering the steps businesses can take to prevent potential misuse of their LLM and AI platforms, as well as measures to follow during training, deployment, and commercial sale.
The G7 members have already prepared an 11-point code of conduct for businesses operating in the AI and generative AI space. According to a media report quoting a ministry official, "The idea will be the same, but what we are trying to develop will be completely different."
Advisory Released by the IT Ministry
The IT ministry released an advisory earlier this year, in March, asking all platforms to ensure that their computer resources do not permit bias or discrimination, or jeopardise the integrity of the electoral process, through the use of AI, generative AI, LLMs, or any other such algorithm.
The advisory also stated that any AI models, LLMs, software using generative AI, or algorithms that are still in the beta stage of development or are unreliable in any way must obtain the "explicit permission of the government of India" before being made available to users on the Indian internet. The advisory was later withdrawn, and businesses were no longer required to seek government approval for their AI or LLM prior to deployment.
Why Is This Step Needed?
While artificial intelligence has transformed many industries, it also presents a number of threats to society, including bias, avoidable errors, poor decision-making, misinformation, and manipulation. Deepfakes and internet bots can damage democracies and erode social trust. Criminals, rogue governments, ideological radicals, and special interest groups may also abuse the technology to influence individuals for political or financial gain. The European Parliament has already taken note of the potential harm to society and democratic processes.
According to a recent global report, the number of deepfakes detected worldwide across all industries increased tenfold between 2022 and 2023. Prime Minister Narendra Modi recently expressed concern about the use of deepfakes to spread false information. At the national and international levels, numerous efforts are underway to counter the threats posed by AI by addressing questions of ethics, morality, and legal values in its development and application.