Following the news of new ChatGPT functionality that allows custom GPTs to be brought into conversations with the original ChatGPT, Kaspersky experts stress the importance of exercising caution when sharing sensitive data with AI chatbots. Vladislav Tushkanov, Research Development Group Manager in Kaspersky’s Machine Learning Technology Research Team, comments:
“Because GPTs can use external resources and tools to provide advanced functionality, OpenAI has implemented a mechanism that allows users to review and approve GPTs’ actions, helping to prevent potential exfiltration of dialog data. When a custom GPT wants to send data to a third-party service, the user is prompted to allow or deny the request, and they can inspect the data about to be sent via a drop-down symbol in the interface. The same mechanism applies to the new ‘@mention’ functionality.”
“However, this requires awareness and a degree of caution on the part of the user, as they need to check and understand each request, which may affect the user experience. In addition, there are other ways in which user data could potentially leak from a chatbot service: through errors or vulnerabilities in the service itself, if the data is memorized during further training of the model, or if another person gains access to your account. As a general rule, it is best not to share personal data or confidential information with any chatbot service on the Internet.”