
Chatbots in the Context of Data Privacy Compliance


Daria Morozova
Apr 26, 2024 | 4 minutes to read

The confidentiality of chatbots has become a significant issue in recent years, discussed both domestically and abroad. It is of particular interest in Europe, where it is debated in the context of the EU's General Data Protection Regulation (GDPR).

Conversational AI provides companies with extensive opportunities for gathering data from clients and users, which simultaneously raises data privacy risks. Therefore, when planning the implementation of chatbots, particularly for sales and marketing, it is necessary to mitigate potential legal risks.

The solution involves adopting appropriate internal policies and carefully selecting external agencies and vendors offering chatbot development and support services.

What Confidentiality Risks Does Using a Chatbot Create?

If identifying the interlocutor requires a large volume of personal data, this contradicts the GDPR's data minimization principle ("the less you ask, the better"). Customer profiling is another activity that raises contentious issues: to build a profile, the chatbot has to analyze and combine large sets of personal data, which again sits uneasily with data minimization. The purpose limitation rule states that personal data may be used only for a specified purpose, which imposes strict restrictions on any data acquisition. Further restrictions concern the general prohibition on processing special categories of personal data (such as ethnic origin, religion, or health) and the obligation to process data transparently. Finally, individuals have the right to object to the use of their personal data and can request its deletion; respecting chatbot confidentiality also means protecting these two rights.

Chatbot Errors in the Context of Confidentiality

Among the known cases of confidentiality breaches and ethical issues involving bots, the following examples can be cited:

- Amazon's AI recruitment chatbot was found to be gender-biased: trained on a pool dominated by male resumes, it began to rate men more highly than women.
- Tay, Microsoft's AI chatbot for interacting with Twitter users, unfortunately began to express racist, far-right, and Nazi views.
- Beauty.AI, a chatbot that was supposed to judge beauty impartially, was found after some time to disfavor dark-skinned faces.

These cases raise the problem of unintentional bias in artificial intelligence. Nobody wants to hand confidential data to someone who is biased against them. The issue of confidentiality is thus common to both people and bots, especially when chatbots make the kinds of mistakes illustrated by the three examples above.

Reducing Legal Risks When Using AI Chatbots

To comply with user data protection requirements, certain rules must be followed:

Develop internal policies and procedures that establish the permissible scope of data transmitted through chatbots. Determine:

- what information is collected and how;
- where the information is sent and stored;
- for what purposes the information is processed.

Obtain consent to the processing of personal data from the interlocutor before the dialogue with the bot starts (a simple consent gate is sketched below). An example of such a start message, which includes a hyperlink to the company's policy: "By sending a message, you agree to the processing of personal data in accordance with our policy."
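To make this concrete, here is a minimal consent-gate sketch in Python. The handler names, the in-memory set, and the PRIVACY_POLICY_URL value are assumptions for illustration, not part of any specific chatbot framework:

```python
# Minimal consent-gate sketch; all names here are illustrative assumptions.
PRIVACY_POLICY_URL = "https://example.com/privacy"  # hypothetical policy link

CONSENT_MESSAGE = (
    "By sending a message, you agree to the processing of personal data "
    f"in accordance with our policy: {PRIVACY_POLICY_URL}"
)

_notified: set[str] = set()  # users who have already seen the consent notice


def handle_incoming(user_id: str, text: str) -> str:
    """Show the consent notice on first contact; answer only afterwards."""
    if user_id not in _notified:
        _notified.add(user_id)
        # Sending another message after this notice counts as consent.
        return CONSENT_MESSAGE
    return answer(text)


def answer(text: str) -> str:
    """Placeholder for the actual bot logic."""
    return f"Bot reply to: {text!r}"


if __name__ == "__main__":
    print(handle_incoming("u1", "Hello"))  # first message -> consent notice
    print(handle_incoming("u1", "Hello"))  # second message -> normal reply
```

The key design point is that no message is passed to the bot logic until the user has been shown the notice and chosen to continue.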

Reference the chatbot in the privacy policy on the website or in the web application. Additionally, some preventive features can be built into the chatbot window, such as a banner warning users not to share special categories of data, or a function that lets users delete past conversations containing confidential data (a sketch follows this paragraph).
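As an illustration of the deletion feature, here is a minimal Python sketch; the in-memory dictionary stands in for the real conversation database and is an assumption of the sketch:

```python
from collections import defaultdict

# In-memory stand-in for the real conversation store (an assumption of this
# sketch; in production this would be a database table or document store).
_conversations: dict[str, list[str]] = defaultdict(list)


def log_message(user_id: str, text: str) -> None:
    """Record a message in the user's conversation history."""
    _conversations[user_id].append(text)


def delete_history(user_id: str) -> int:
    """Erase the user's past conversations; returns how many messages were removed."""
    return len(_conversations.pop(user_id, []))


if __name__ == "__main__":
    log_message("u1", "My card number is 4111 1111 1111 1111")
    print(delete_history("u1"))  # -> 1
```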

On the chatbot provider's side, the following requirements must be met:

- Access to the database is strictly regulated.
- Regular backups are made, and access to them is also regulated.
- Employees with access have signed non-disclosure agreements.
- There is an option to manage the provided data: find and delete personal data from the company account (company settings, chatbot script, communication history) in accordance with Article 25 of the GDPR.
- There is an option to transfer data for training the bot in a depersonalized form: a depersonalization script is run on the customer's side, any potentially "sensitive" information is removed, and the supplier receives only the depersonalized dialogues (a rough sketch of such a script follows below).
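To make the depersonalization step concrete, here is a rough regex-based sketch in Python; the patterns are illustrative assumptions and would need to be extended and tested against real data:

```python
import re

# Illustrative PII patterns only; a production script would need a far
# broader, carefully tested set (names, addresses, national IDs, and so on).
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{4}\b"), "[CARD]"),
    (re.compile(r"\+?\d[\d\s()-]{7,}\d"), "[PHONE]"),
]


def depersonalize(text: str) -> str:
    """Replace likely personal data with neutral placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text


def export_dialogues(dialogues: list[str]) -> list[str]:
    """Run on the customer's side before handing dialogues to the supplier."""
    return [depersonalize(d) for d in dialogues]


if __name__ == "__main__":
    sample = "Write to jane.doe@example.com or call +1 555 123 4567"
    print(export_dialogues([sample]))
```

A regex pass like this only catches well-structured identifiers; in practice it is usually combined with named-entity recognition and manual review before any dialogues leave the customer's side.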

Chatbots thus offer broad opportunities for business development and marketing strategies; in the process, however, it is essential not to forget about protecting the confidentiality of the users and clients the chatbot will interact with.
