Cybercrime: Be Careful What You Tell Your Chatbot Helper
Concerns about the growing abilities of chatbots trained on large language models, such as OpenAI’s GPT-4, Google’s Bard and Microsoft’s Bing Chat, are making headlines. Experts warn of their ability to spread misinformation on a monumental scale, as well as the existential risk their development may pose to humanity. As if this were not worrying enough, a third area of concern has opened up, illustrated by Italy’s recent ban of ChatGPT on privacy grounds.
The Italian data regulator has voiced concerns over the model used by ChatGPT owner OpenAI and announced it would investigate whether the firm had broken strict European data protection laws.
Chatbots can be useful for work and personal tasks, but they collect vast amounts of data. AI also introduces security risks of its own, including helping criminals mount more convincing and effective cyber-attacks.
Most people are aware of the privacy risks posed by search engines such as Google, but experts think chatbots could be even more data-hungry. Their conversational nature can catch people off guard and encourage them to give away more information than they would have entered into a search engine. “The human-like style can be disarming to users,” warns Ali Vaziri, a legal director in the data and privacy team at law firm Lewis Silkin.
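For those who do use chatbots for work, one practical precaution is to strip obviously identifying details from text before it is submitted. Below is a minimal sketch in Python, using only the standard library; the regex patterns are illustrative assumptions and would miss much real personal data, so this is a starting point rather than genuine PII detection.

import re

# Illustrative patterns only (an assumption for this sketch);
# robust PII detection requires far more than two regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    # Replace each match with a labelled placeholder before the
    # text ever leaves the user's machine.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Email jane.doe@example.com or call +44 20 7946 0958 about the contract."
print(redact(prompt))
# Prints: Email [EMAIL REDACTED] or call [PHONE REDACTED] about the contract.

The wider point the experts make still applies: anything typed into a chatbot is best treated as data handed over to its operator.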