One aspect of any new technology is the general public's lack of awareness of its potential nefarious uses; here, ChatGPT is the perfect storm. It is new, and the hype around it is huge to the point of being viral. Currently, there are very few publicly known incidents related to its use, which can create a false sense of security and make users careless about the information they leak into it.
ALERT: Even people who are normally careful about sharing private or sensitive information do so via ChatGPT prompts. Prompts are archived!
What will unfold is easy to predict (and is already happening). With ChatGPT, a new channel for phishing is born. We will start seeing replicas of ChatGPT and other similar prompt-based websites. This will be further fueled by the fact that ChatGPT (at least at the moment) limits its use due to capacity constraints or requires a paid subscription. Therefore, we will see new "unlimited ChatGPT" and "free ChatGPT" lures in our emails, chat messages, and on the web.
Cybercriminals who can eavesdrop on or collect your prompts sound like the beginning of a juicy ransomware incident. But even without phishing, forcing my way into your existing genuine ChatGPT account to gain access to all your historical prompts follows the same pattern. Tools already allow users to store ChatGPT prompts and results in GitHub, so this is another flavor of the same risk. And the last question: do you know how OpenAI uses the prompts we submit to ChatGPT?
@Users: do not share proprietary, sensitive, or intellectual-property information that you don't want to become common knowledge. Treat every ChatGPT prompt as publicly available information, or don't type it in.
@Companies: make sure employees understand what information (e.g., source code, accounting details, technical blueprints, audit results) should never be typed into a prompt, and publish clear guidelines; see the screening sketch below for one way to back such guidelines up technically.
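For illustration only, here is a minimal sketch in Python of the kind of pre-submission screening a company could pair with such guidelines, for example in a browser extension or an outbound proxy. The pattern names and regexes are assumptions made up for this example and would have to be adapted to your own secrets, naming conventions, and data-classification rules.

```python
import re

# Hypothetical patterns a company might flag before a prompt leaves the user's
# machine or an internal proxy; tune these to your own environment.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "Internal hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
    "Long digit sequence (card/account number?)": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Please review this config: db.corp.example.com, key AKIAABCDEFGHIJKLMNOP"
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked: prompt appears to contain", ", ".join(findings))
    else:
        print("Prompt passed the screening check")
```

A regex filter like this is only a safety net, not a substitute for the guidelines themselves: it catches well-structured secrets (keys, hostnames, card-like numbers) but not free-text business information, so awareness training remains the primary control.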