ChatGPT & GPT-3 Power Up Cyberattacks - Tecbound Technology

Automation and artificial intelligence (AI) have benefited organizations in many sectors, including cybersecurity. Unfortunately, those benefits extend to cybercrime too. Bad actors have used technology like ChatGPT and GPT-3 to conduct attacks. These technologies help cybercriminals overcome barriers that keep their attacks from landing and make the whole process smoother and cheaper, especially when they’re conducting phishing operations. The text created by applications like GPT-3 is far superior to the copy used in the average phishing message, making it difficult for end users and many email security solutions to detect. Diving into what ChatGPT and GPT-3 can do and how they can be used in cybercrime can help IT professionals stay vigilant against the new threats they power.

What are ChatGPT and GPT-3?
The similarity in names between ChatGPT and GPT-3 can be confusing. ChatGPT and GPT-3 aren’t the same, although they share some characteristics.

  • ChatGPT is a chatbot that uses AI. ChatGPT was developed by OpenAI and launched in late 2022. It is built on OpenAI’s GPT-3 family of large language models. ChatGPT uses supervised and reinforcement learning techniques to converse convincingly with people about various subjects.
  • Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text such as emails or ad copy. For example, if you want GPT-3 to produce a blog post, you’d first give it a prompt explaining the blog post’s theme. GPT-3 would then use that prompt to generate the blog post.

ChatGPT & GPT-3 are dangerously good at what they do:
ChatGPT and GPT-3 create astonishingly believable text. Canadian writer and software developer Ben Halpern asked ChatGPT to explain the difference between the two technologies, and this is precisely what it said:

GPT-3, or Generative Pretrained Transformer 3, is the third generation of OpenAI’s GPT language model, and it is one of the most powerful language models currently available. It can be fine-tuned for various natural language processing tasks, including language translation, text summarization, and question answering.

ChatGPT is a variant of the GPT-3 model designed explicitly for chatbot applications. It has been trained on a large dataset of conversational text to generate more appropriate responses for use in a chatbot context. ChatGPT can also insert appropriate context-specific responses in conversations, making it more effective at maintaining a coherent conversation.

If you’re dubious about just how good this technology is and how dangerous it can be, read this blog post that was entirely generated using GPT-3.

How can this technology be used to facilitate cybercrime?
Bad actors have quickly embraced the opportunity these new technologies offer to improve their cyberattacks. Tools like ChatGPT and GPT-3 that specialize in creating believable text benefit cybercriminals conducting phishing operations and other email-based cyberattacks. After all, a poorly written email is a hallmark sign of phishing. This technology is a godsend for cybercriminals who aren’t fluent in conversational English. If the messages that bad actors send to targets are well-written, the chance of a message landing successfully increases. This technology helps them overcome the language barrier and slip past the target’s scrutiny.

GPT-3 can be used to conduct many dangerous cyberattacks, including:

  • Phishing and spear phishing.
  • Business email compromise (BEC).
  • Ransomware and malware infections.
  • Account takeover (ATO).
  • Conversation hijacking.
  • CEO fraud.
  • Social media phishing attacks.

ChatGPT is a dangerous upgrade for BEC & other attacks:
One of the most hazardous uses for this technology in cybercrime is in business email compromise and CEO fraud attacks. BEC is the cyberattack that currently poses the biggest threat to businesses in terms of damage and cost. The Register provided this example of a prompt for a CEO fraud phishing email made using GPT-3:

Write an email to [person1] in the finance operations department at [company1] from the company’s CEO, [person2]. The email should explain that [person2] is visiting a potential Fortune 500 client in [region1] and that [person2] requires an urgent financial transfer to be made to an account belonging to the potential client to close the deal. The email should include the sum of money [sum1] that should be transferred and details of the bank account that should receive the payment – [account_number] and [routing_number]. The email should also include basic information about the recipient company [company2], a financial services company located in [place1]. [person1] is not easily fooled and will require some convincing.

Of course, threat actors are also capitalizing on people’s interest in ChatGPT by launching phishing websites, social media pages and fake apps impersonating ChatGPT to spread malware and steal credit card information. For example, cybercriminals have created websites claiming to be the official ChatGPT site that appear to be the real deal. They even include a button for potential customers to try ChatGPT, but the button is a malicious link that leads victims to download malware, including Lumma Stealer, Aurora Stealer and other malware strains.

Should business leaders be worried about GPT-3 and ChatGPT?
It’s wise for business leaders and IT professionals to be concerned about ChatGPT and GPT-3’s applications to cybercrime and other evolving cyberattack techniques, and many are. A recent survey by Magnet Forensics found that 42% of corporate digital forensics and incident response (DFIR) professionals are concerned that evolving cyberattack techniques pose a significant problem for their investigations. In this year’s survey, concern about evolving cyberattack techniques increased by 50% and ranked ahead of all other factors.

There are a few mitigation techniques that experts point to as practical ways to reduce an organization’s risk from ChatGPT- and GPT-3-boosted attacks:

  • Security awareness training, especially training using sophisticated phishing messages that can mimic the highly believable messages that those tools can create.
  • Email security solutions that use machine learning to evaluate message content, because those tools can be trained to detect GPT-3-generated text.
  • A vibrant security culture encouraging employees to ask questions and learn about security threats helps everyone stay on top of potential threats like malicious messages generated using ChatGPT and GPT-3.
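To make the content-screening point above concrete, here is a deliberately simplified sketch of how a filter might score a message for BEC-style urgency and payment cues. This is a toy illustration, not how any commercial product works: the cue list, scoring function and threshold are all invented for this example, and real email security solutions rely on machine-learning models trained on large message corpora rather than keyword matching.

```python
# Toy illustration only. The cue words, scoring and threshold below are
# hypothetical; production filters use trained ML models, not keyword lists.
import re

URGENCY_CUES = {"urgent", "immediately", "wire", "transfer", "confidential",
                "payment", "account", "asap", "deal"}

def phishing_risk_score(message: str) -> float:
    """Return the fraction of urgency/finance cue words present in the message."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return len(URGENCY_CUES & words) / len(URGENCY_CUES)

def flag_message(message: str, threshold: float = 0.3) -> bool:
    """Flag the message for human review if its cue score crosses the threshold."""
    return phishing_risk_score(message) >= threshold

bec_sample = ("I need an urgent wire transfer of the payment to close the deal. "
              "Keep this confidential and act immediately.")
benign_sample = "Lunch at noon tomorrow? Let me know if that works."

print(flag_message(bec_sample))     # True
print(flag_message(benign_sample))  # False
```

Note that a well-written GPT-3 message defeats exactly this kind of surface check, which is why the training and ML-based screening described above matter: the giveaways are no longer bad grammar, but context and intent.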

Source: ID Agent
