[Header image: AI_HelpingHand]

Fraud and cyber crime - how much worse can it get?

  • 31.3.2023

Nearly every company counts digital transformation among its main strategic initiatives and top priorities - justifiably so. Digitizing analog information and digitalizing current processes is no small feat, and it delivers substantial, tangible benefits: data can be accessed across the organization faster and cheaper than ever before, and all operations can be measured and monitored, sometimes even in real time.

But looking at the scale of fraud and cybercrime [1], it's hard not to see that the same transformation that delivers these benefits simultaneously opens new attack vectors for people with nefarious intentions. Information in digital form can be accessed, moved, or stolen in volumes that would be impossible in physical form. Stolen data can be sold to the highest bidder and, in worst-case scenarios, its loss can even drive a company into bankruptcy.


Anyone exposed to fraud and cybercrime, or who handles them across diverse organizations, understands that digital technology is a huge fraud enabler, while the countermeasures aimed at current fraud and cybercrime trends are still catching up and provide only partial mitigation. Moreover, the democratization of technology means that more and more non-technical people, even laypersons, can use these tools without extensive training or formal education.

It is similar in education. Formally, we still follow the conventional paradigm - passing through grades with a standardized curriculum - while in the digital space this concept is quickly becoming obsolete. Knowledge is no longer restricted to those attending school; it is widely available, and most of it is free. As with the examples above, while this brings many benefits (bringing education to children and adults alike, especially where such knowledge would otherwise be unavailable), we can't ignore the negative aspects that come hand in hand with it. Knowledge always was, is, and will be power - and not always the power to do good.

Very few of us (me included) believed that AI could easily step into the domain of art and creative professions, since these require "human creativity" that can't easily be translated into algorithms. And yet, some of the first widely visible applications of AI were in images and graphics. Web-based AI tools that erase unwanted parts of a picture, upscale low-quality photos of your grandparents, outpaint an image, or create a seamless transition between two different pictures [2] were just the early applications. Today you can ask an algorithm to generate an image in a desired graphical style from a descriptive text prompt and combine it with your own pictures or photos (like the one at the top of this article - a hand with six fingers - created by Midjourney AI).


A few weeks back, we saw the world-changing (at least judging by the media attention and the hype on social networks) release of ChatGPT. Even understanding its practical boundaries, which stem from the underlying technology, it is obvious that it will reshape many industries.


You might be asking how this relates to fraud and cybercrime. I believe the pattern will be the same digital transformation and democratization of tools, technology, and knowledge described above. AI assistants will be integrated into our everyday tools, like email clients, time-management apps, and MS Teams [3]. New specialized assistants will be created for specific use cases like healthcare [4], finance [5], and many others. Within a few weeks, we have already seen ChatGPT-like assistants scaled down and optimized to fit into the memory of a standard PC or even wearables and IoT devices [6].

We may already have felt that advancements were coming too fast and that we had difficulty keeping up with all the changes. These last few weeks have been just a precursor of what is yet to come. I'm sure most people are already looking forward to this future - a future where knowledge couldn't be any closer to the individual. Some, however, will immediately explore how this new future can be exploited to their benefit.

Imagine an AI integrated into a cyber-security platform [7] with complete oversight of what is happening (SIEM, IDS, EDR, XDR, and others). An assistant you can task with consolidating the available data and generating a report of incidents and their resolution. An assistant that can notify you of new vulnerabilities and their impact on your organization. I'm sure you can think of many more applications, but have you considered that such an assistant could also be asked what the best way to breach the current security perimeter would be, which attack vector would be least visible, or how to exfiltrate sensitive data - given that it knows your internal IT landscape, IT assets, and deployed technology? Or which user in the finance department failed the most phishing tests or triggered the most antivirus alerts?
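To make the defensive side of that picture concrete, here is a minimal sketch of how such an assistant might be wired together: pull recent incidents from a SIEM, consolidate them, and ask a language model to draft the report. The SIEM endpoint, response fields, and the call_llm() helper are hypothetical placeholders for illustration only, not the API of any specific product.

```python
# Minimal sketch of an LLM-backed incident-reporting assistant.
# The SIEM endpoint, response shape, and call_llm() are assumptions, not a real API.
import requests

SIEM_API = "https://siem.example.internal/api/incidents"  # hypothetical endpoint


def fetch_recent_incidents(hours: int = 24) -> list[dict]:
    """Pull incidents opened or closed in the last N hours from the SIEM."""
    resp = requests.get(SIEM_API, params={"since_hours": hours}, timeout=30)
    resp.raise_for_status()
    return resp.json()["incidents"]  # assumed response shape


def build_report_prompt(incidents: list[dict]) -> str:
    """Consolidate raw incident records into a single prompt for the assistant."""
    lines = [
        f"- [{i['severity']}] {i['title']} | status: {i['status']} "
        f"| resolution: {i.get('resolution', 'open')}"
        for i in incidents
    ]
    return (
        "You are a SOC reporting assistant. Summarize the following incidents for "
        "management: group them by severity, highlight unresolved items, and list "
        "recommended follow-up actions.\n\n" + "\n".join(lines)
    )


def call_llm(prompt: str) -> str:
    """Placeholder for whatever model backend is used (cloud API or local model)."""
    raise NotImplementedError("wire this to your model of choice")


if __name__ == "__main__":
    report = call_llm(build_report_prompt(fetch_recent_incidents(hours=24)))
    print(report)
```

The point of the sketch is the architecture, not the code: whatever data the assistant can read to write a helpful report is the same data a malicious prompt could ask it to summarize for an attacker.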

Imagine an assistant that can optimize a company's logistics and operations based on current inventory data, the usage of components, parts, and raw materials in production, and the production plan. Now imagine someone asking the same assistant how to perform the most efficient supply chain attack.

And finally, imagine an assistant trained on data from the best-performing companies to support the critical decisions of top-level managers. We don't even have to look to the future: since last August, there has already been an AI-based CEO, Tang Yu [8].


The above examples might make you think that simply not adopting the technology internally would mitigate the risk sufficiently - at least in these early days. But it doesn't end there.

Imagine groups (independent APT groups or even state-sponsored ones) that can train a cyber-specific model. Such a model can focus on collecting and consolidating information from publicly available sources (OSINT). How authentic will a spear-phishing email look when the model generates it from the social network profiles of the head of the finance department? How authentic will a Business Email Compromise email look when it is written using public information and partnership announcements, and backed by a database of leaked mail credentials or mobile numbers?

How efficient would such an assistant be in advising attackers on the best targets for spear phishing, or even on the shortest path to exploitation, given all the publicly available information about the organization - the technologies it uses and their associated known vulnerabilities?


As Stan Lee wrote in 1962 in the Spider-Man comics, through the character of Uncle Ben: "With great power comes great responsibility."

There will be many challenges and risks with this new technology and its adoption into various aspects of business and our lives. There will be voices against wide adoption without a proper risk assessment (as is already happening [9]), but, as Bloomberg notes, it may be too late - the technology is already out there. Governments, companies, and academics will keep formulating guiding principles to mitigate abuse of this technology. Fraudsters and cybercriminals, however, will get their hands on it and exploit it without any hesitation or moral limits. Immediately.

For those of us trying to fight these criminals, the task is to prepare ourselves - primarily through self-education and upskilling - to ensure our capabilities are on par with theirs, if not above.


References:

[1] https://www.ftc.gov/reports/consumer-sentinel-network-data-book-2022

[2] https://www.facebook.com/watch/?v=799578327949429

[3] https://techmonitor.ai/technology/ai-and-automation/microsoft-to-integrate-chatgpt-into-teams

[4] https://www.news-medical.net/health/What-does-ChatGPT-mean-for-Healthcare.aspx

[5] https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/

[6] https://beebom.com/how-run-chatgpt-like-language-model-pc-offline/

[7] https://www.cnbc.com/2023/03/28/microsoft-launches-security-copilot-in-private-preview.html

[8] https://www.showmetech.com.br/en/tang-yu-the-first-aiceo-in-a-company/#:~:text=An%20Artificial%20Intelligence%20as%20CEO&text='Tang%20Yu'%20is%20responsible%20for,receives%20a%20salary%20of%20%240.

[9] https://www.bloomberg.com/opinion/articles/2023-03-30/elon-musk-wants-to-pause-ai-progress-with-chatgpt-it-s-too-late-for-that
