In a significant move, the Finance Ministry has barred the use of AI tools such as ChatGPT and DeepSeek for official work, citing concerns over data security and confidentiality. The directive instructs officers and employees not to use these AI-powered tools in the course of their duties, highlighting the risks of handling sensitive government data on externally hosted platforms.
The decision comes amid growing concerns about how AI models process and store information, with fears that confidential data entered into such platforms could be exposed or misused. Officials have been advised to exercise caution when dealing with artificial intelligence applications and to adhere strictly to government-approved communication channels.
The directive targets these tools primarily because of the risk of data leaks and cyber vulnerabilities. AI chatbots such as ChatGPT and DeepSeek run on cloud-based infrastructure, so any data entered into them is transmitted to and processed on external servers. This poses a potential security threat, particularly when dealing with classified or sensitive financial information.
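To make that exposure concrete, the sketch below shows how a typical cloud chatbot client works. The endpoint URL, API key variable, and payload shape are illustrative assumptions modeled loosely on common chat-completion APIs, not a description of any particular vendor's or ministry's system; the point is simply that the full prompt text is serialized into an HTTP request and leaves the user's machine.

```python
# Illustrative sketch only: how a generic cloud chatbot client sends data.
# The endpoint, key variable, and response shape below are assumptions,
# not any real or government-approved service.
import os
import requests

API_URL = "https://api.example-ai.com/v1/chat/completions"  # hypothetical external endpoint
API_KEY = os.environ.get("CHATBOT_API_KEY", "demo-key")

def ask_chatbot(prompt: str) -> str:
    """Send the user's prompt to an external cloud service and return the reply.

    Note: the entire prompt text travels in this request body to servers
    outside the organisation, which is the exposure the directive flags.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-chat-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Pasting a confidential draft here would transmit it to third-party servers:
# ask_chatbot("Summarise this draft budget note: ...")
```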
A senior official explained, “While AI tools offer efficiency and automation, they also present risks related to unauthorized data access. Given the sensitivity of financial data, we cannot afford any potential breaches or leaks. The directive ensures that confidential information remains secure and protected from unintended exposure.”
Under the directive, government employees have been instructed to avoid using AI chatbots for drafting official documents, emails, or reports. Instead, they must rely on traditional methods and officially approved digital platforms.
The move aligns with global concerns over AI security, as several countries and organizations have imposed similar restrictions on AI-powered tools. While AI models offer remarkable capabilities, their dependence on large-scale, externally hosted data processing raises privacy and security challenges.
Cybersecurity experts believe the decision is a proactive step to safeguard national and financial security. “Generative AI models continuously learn from user inputs. If government officials inadvertently enter sensitive information, it could be stored or even used for further AI training, which is a serious risk,” said a cybersecurity analyst.
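One commonly discussed safeguard, offered here purely as an illustrative sketch rather than anything the directive prescribes, is to scrub obviously sensitive identifiers from text before it ever reaches an external service. The patterns below are simplified, made-up examples and would not amount to an approved redaction policy.

```python
# Illustrative sketch: redact obviously sensitive tokens before text leaves
# the organisation. The patterns are simplified examples, not an exhaustive
# or officially sanctioned list.
import re

REDACTION_PATTERNS = {
    "PAN": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),         # Indian PAN card format
    "ACCOUNT_NO": re.compile(r"\b\d{9,18}\b"),            # generic bank account numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Refund to account 123456789012 for officer ABCDE1234F (a.b@finmin.example)."))
# -> Refund to account [REDACTED ACCOUNT_NO] for officer [REDACTED PAN] ([REDACTED EMAIL]).
```

Even with such filtering, free-form prose can still carry confidential context, which helps explain why the ministry has opted for avoidance rather than mitigation.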
The ban is part of a broader conversation around AI governance in India. The government is actively working on regulations to ensure that AI is used responsibly while minimizing data-security risks.
While AI remains a transformative tool across industries, its unrestricted use in government operations could lead to unintended consequences. The Finance Ministry’s decision may prompt other ministries and departments to evaluate their policies regarding AI adoption.
Industry experts suggest that rather than an outright ban, the focus should be on developing secure, government-approved AI tools tailored for official use. Governments in the United States and the European Union are already working on frameworks to regulate AI applications while preserving their efficiency gains.
For now, the move signals India’s cautious approach toward emerging technologies, prioritizing data security and confidentiality over convenience. The decision underscores the government’s commitment to safeguarding national interests while navigating the evolving landscape of artificial intelligence.