ChatGPT has changed the way many of us work and live our daily lives. According to recent statistics, more than 100 million of us use it every day, making more than a billion requests.
But the world-conquering LLM chatbot has been described as a “privacy black hole”, with concerns over the way it handles data entered by users, which even led to it being briefly banned in Italy.
Its creator, OpenAI, makes no secret of the fact that data entered may not be secure. As well as being used to train its models, perhaps leading to its exposure in outputs to other people, it may be reviewed by humans to check compliance with the rules on how the service can be used. And, of course, any data sent to any cloud service is only as secure as the provider’s security.
What all this means is that any data entered there should be considered public information. With that in mind, there are several things that should never be told to ChatGPT, or to any other public, cloud-based chatbot. Let’s run through some of them:
Illegal or unethical requests
Most AI chatbots have guardrails designed to prevent them from being used for unethical purposes. And if your question or request touches on activities that could be illegal, you may find yourself in hot water. Examples of things that are definitely a bad idea to ask a public chatbot include how to commit crimes, carry out fraudulent activities, or manipulate people in ways that could be harmful.
Many usage policies make it clear that illegal requests, or asking AI to help carry out illegal activities, could result in users being reported to the authorities. These laws can vary considerably depending on where you are. For example, China’s AI laws prohibit using AI to undermine state authority or social stability, and the EU’s AI Act requires that “deepfake” images or videos that appear to be real people but are in fact AI-generated must be clearly labeled. In the UK, the Online Safety Act makes it a criminal offense to share explicit AI-generated images without consent.
Entering requests for illegal material, or for information that could harm others, is not only morally wrong; it can also lead to serious legal consequences and reputational damage.
Logins and passwords
With the rise of agentic AI, many more of us will find ourselves using AI that is able to connect to and use third-party services. It’s possible that to do this, these agents will need our login credentials; however, giving them access could be a bad idea. Once data has been entered into a public chatbot, there is very little control over what happens to it, and there have been cases of personal data entered by one user being exposed in responses to other users. Clearly, this could be a privacy nightmare, so as things currently stand, it’s a good idea to avoid any interaction with AI that involves giving it access to usernames and accounts, unless you are entirely sure you are using a highly secure system.
Financial information
For similar reasons, it’s probably not a great idea to start entering data such as bank account or credit card numbers into generative AI chatbots. These should only be entered into secure systems used for e-commerce or online banking, which have built-in safeguards such as encryption and automatic deletion of data once it has been processed. Chatbots have none of these safeguards. In fact, once data has been entered, there is no way to know what will happen to it, and handing over this highly sensitive information could leave you exposed to fraud, identity theft, phishing and ransomware attacks.
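As a practical illustration of the principle, anyone building tools that pass user text to a cloud chatbot can strip obviously sensitive patterns before the text ever leaves the machine. The minimal Python sketch below (the `redact_card_numbers` helper and its deliberately rough regex are hypothetical, not anything the article prescribes) masks card-like runs of digits; it is an illustration of the idea, not a complete PII filter.

```python
import re

# Rough heuristic: 13-19 digits, optionally separated by spaces or dashes,
# ending on a digit. This catches typical payment card formats but is NOT
# a complete or reliable PII detector.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def redact_card_numbers(text: str) -> str:
    """Replace anything that looks like a payment card number with a tag,
    so the sensitive digits never reach the chatbot service."""
    return CARD_PATTERN.sub("[REDACTED CARD]", text)

prompt = "My card 4111 1111 1111 1111 was charged twice, can you help?"
print(redact_card_numbers(prompt))
# → My card [REDACTED CARD] was charged twice, can you help?
```

The same pattern-based approach can be extended to other identifiers (account numbers, national ID formats), though, as the article notes, the safest option remains simply not putting such data into a chatbot at all.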
Confidential information
Everyone has a duty of confidentiality to protect sensitive information for which they are responsible. Some of these duties are automatic, such as professional confidentiality (for example, between doctors, lawyers or accountants and their clients). But many employees also have an implicit duty of confidentiality towards their employers. Sharing business documents, such as notes and minutes of meetings or transactional records, may well constitute sharing trade secrets and a breach of confidentiality, as in the 2023 case involving Samsung employees. So, no matter how tempting it might be to pile them into ChatGPT to see what kind of insights it can dig up, it’s not a good idea unless you are completely sure the information is safe to share.
Medical information
We all know it can be tempting to ask ChatGPT to be your doctor and diagnose medical problems. But this should always be done with extreme caution, particularly since recent updates allow it to “remember” and even pull together information from different chats to help it understand users better. None of these functions come with privacy guarantees, so it’s best to be aware that we really have very little control over what happens to any information we enter. Of course, this is doubly true for health-related businesses handling information about patients, which risk huge fines and reputational damage.
To summarize
As with everything we put on the internet, it’s a good idea to assume there is no guarantee it will remain private forever. So it’s best not to disclose anything you would not be happy for the world to know. As chatbots and AI agents play an increasingly important role in our lives, this will become a more pressing concern, and educating users about the risks will be an important responsibility for anyone providing this type of service. However, we should remember that we also have a personal responsibility to take care of our own data and to understand how to keep it safe.