Rising AI threats: GovTech promotes responsible use and regulation

October 28, 2024 11:22 pm

The global Artificial Intelligence (AI) market is projected to soar from USD 397 billion in 2022 to USD 1.58 trillion by 2028, according to Grand View Research.

PwC predicts that AI could contribute a staggering USD 15.7 trillion to the global economy by 2030.

These insights were shared by Sherab Gocha from GovTech during the national cybersecurity conference on October 25.

The Bhutan Computer Incident Response Team (BtCIRT), the Cybersecurity Division under the GovTech Agency, observes October as National Cybersecurity Awareness Month. This year's theme is “Educate, Empower, Secure: Building a Cyber-Safe Bhutan”.

However, the widespread adoption of AI comes with significant risks. A McKinsey study indicates that AI could displace approximately 400 million workers, or around 15 percent of the global workforce, between 2016 and 2030.

During his presentation, Sherab Gocha outlined the generative AI guidelines for civil servants, describing a cautious and responsible approach to AI implementation within the civil service that balances potential benefits against potential risks.

He said that while Bhutan currently lacks regulations specifically addressing data protection, there are existing data management guidelines in place.

He stressed that human oversight is essential when using any AI tool. Users must analyse, fact-check, and make informed decisions based on AI-generated content. He also called for clear mechanisms to address any issues or accidents stemming from AI systems.

Privacy and security are major concerns, as generative AI models like ChatGPT and Google Gemini collect user data, including logs and usage patterns. Users have the right to control their personal data, often having the option to opt out of data collection or request data deletion. For instance, ChatGPT allows users to request that their data not be used for training, with the platform automatically deleting it after a specified period.

Sherab Gocha highlighted the risks of sharing unpublished work through generative AI platforms, likening it to disclosing information on social media, which could jeopardise intellectual property rights. He cited an incident involving a Toyota employee who unintentionally uploaded sensitive data, resulting in substantial financial losses.

AI systems, he said, must be safe and reliable, delivering intended outputs while remaining inclusive and fair, avoiding biases based on race, gender, or other factors.

However, challenges such as bias and discrimination persist, particularly if AI systems are trained on biased data. In addition, the lack of transparency in complex AI models can make it difficult to understand and address potential issues.

To address the potential risks of AI, Sherab Gocha urged the implementation of appropriate regulations. He explained that generative AI relies on complex deep learning models, which can be difficult for users to fully understand. This lack of transparency raises concerns about privacy, security, and potential misuse.

He added that as AI systems become more sophisticated, there are growing concerns about surveillance and the use of biometric data, such as facial recognition, without explicit user consent. “These practices raise ethical questions and potential privacy violations.”

AI-generated content can be manipulated to spread misinformation and disinformation. “People should be cautious about treating AI-generated content as a primary source and always verify information with reliable sources,” Sherab Gocha said.

He categorised AI risks into three levels. High-risk AI systems, which operate in sensitive areas such as healthcare, law enforcement, and public services, require stringent regulation because of their significant risks. Limited-risk AI, such as chatbots and recommendation systems, needs some oversight but poses lower risks. Minimal-risk AI covers simple automation tools that typically handle non-sensitive data and operate within clearly defined boundaries. “Civil servants have discretion in using such AI products,” he said.
