A monthly Information Security publication for the WPI community.

This month we focus on ARTIFICIAL INTELLIGENCE (AI): the ability of computers to learn from data and perform tasks that typically require human intelligence, such as recognizing speech, making decisions, or identifying images.

In this issue:

  • Social Engineering Attacks in the Era of Generative AI
  • Examples of AI in Everyday Life
  • AI Creating Hard-to-Detect Phishing Emails
  • Using AI at WPI
  • Learning with Laughter
  • From WPI's CISTO: Cybersecurity Awareness Month
  • Where to Find Information Security?
  • Meet Women in Cybersecurity (WiCyS)
  • Featured Videos & By the Numbers
  • In the News
  • Diversity in Cybersecurity
  • Additional WPI Resources

Social Engineering Attacks in the Era of Generative AI

Raha Moraffah, Assistant Professor of Computer Science

Raha Moraffah's WPI Bio

The rise of Generative Artificial Intelligence (AI) has paved the way for cyber criminals to craft more sophisticated social engineering attacks and phishing emails. This technology revolves around three main elements: realistic content creation, advanced targeting and personalization, and automated attack infrastructure. Consequently, Generative AI complicates the task of distinguishing between authentic and fraudulent communications, increasing the risk of recipients inadvertently divulging sensitive information or downloading malware.

In particular, large language models (LLMs) excel at mimicking human conversational patterns, which can be manipulated for malicious purposes. These capabilities extend beyond conventional phishing tactics, encompassing chat-based social engineering attacks that threaten both individuals and organizations, thereby highlighting the pressing need for advancements in cybersecurity.

In the context of chat-based social engineering, attackers aim to extract three categories of sensitive information (SI) for illicit ends: personally identifiable, institutional and workplace, and confidential research information. Personally Identifiable Information (PII) encompasses any individual data that, if exposed, could pose significant risks such as identity theft. This includes a person’s full name, date of birth, social security number, address, financial details, and answers to common security questions. Institutional and Workplace Information includes any data related to an individual’s place of employment that could facilitate social engineering attacks, covering details about colleagues, team structures, and organizational data. Lastly, Confidential Research Information pertains to any non-public research data, including details about unpublished projects and research subjects. Research shows that LLMs can generate these types of malicious conversations, where the intent is clearly to extract SI for unauthorized purposes.
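
To make these categories concrete, here is a minimal, hypothetical sketch of how a defender might tag chat messages that appear to contain each type of SI. The category names, keywords, and regex patterns are illustrative assumptions introduced here, not part of the research described above.

```python
import re

# Illustrative (hypothetical) indicators for the three SI categories described above.
SI_PATTERNS = {
    "personally_identifiable": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like pattern
        re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),      # date-of-birth-like pattern
        re.compile(r"\b(?:mother'?s maiden name|security question)\b", re.I),
    ],
    "institutional_workplace": [
        re.compile(r"\b(?:org chart|team structure|badge number|manager'?s email)\b", re.I),
    ],
    "confidential_research": [
        re.compile(r"\b(?:unpublished|preprint draft|study participants?|IRB protocol)\b", re.I),
    ],
}

def tag_sensitive_info(message: str) -> list[str]:
    """Return the SI categories whose illustrative patterns appear in a chat message."""
    return [
        category
        for category, patterns in SI_PATTERNS.items()
        if any(p.search(message) for p in patterns)
    ]

# Example: the kind of reply an attacker might try to elicit in a chat-based attack.
print(tag_sensitive_info("Sure, my DOB is 04/12/1988 and my manager's email is on the org chart."))
# -> ['personally_identifiable', 'institutional_workplace']
```

Simple pattern matching like this only scratches the surface; the point of the research above is that both the attacks and the defenses increasingly rely on LLMs rather than fixed rules.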

Cybersecurity research focuses on safeguarding valuable assets from such threats by identifying and mitigating potential vulnerabilities in the digital space. Next, we look at recent advances in using LLMs to develop defense mechanisms against such attacks.

Defending Against Social Engineering Attacks using Generative AI

The misuse of generative AI for malicious activities is a significant concern. As AI-driven attacks evolve, current defense mechanisms based on raising user awareness and education may no longer suffice. In response, AI researchers have developed various machine learning and deep learning techniques designed to detect and prevent these threats, enhancing our understanding of human-to-human chat-based social engineering attacks.

Recently, the dual role of LLMs as both adversaries and defenders against chat-based social engineering attacks has been highlighted. Although the capabilities of LLMs on their own are limited, when integrated into a comprehensive pipeline, they can systematically analyze conversations, flag malicious messages, and consolidate these findings to assess and mitigate conversation-level social engineering attacks. Such approaches demonstrate the potential of LLMs to play a critical role in cybersecurity defenses. 
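
To make the flag-then-consolidate idea concrete, here is a minimal sketch, assuming a generic, unspecified LLM client. The `classify_with_llm` stub and the prompt text are placeholders introduced for illustration, not the specific method or tooling described above: each message is scored individually, and the per-message flags are then consolidated into a conversation-level verdict.

```python
from dataclasses import dataclass

# Illustrative prompt; an assumption for this sketch, not taken from the research above.
PROMPT = (
    "You are a security assistant. Does the following chat message try to elicit "
    "sensitive personal, workplace, or research information? Reply with a risk "
    "score between 0.0 and 1.0.\n\nMessage: {message}"
)

@dataclass
class MessageAssessment:
    message: str
    risk: float  # 0.0 (benign) to 1.0 (clearly malicious)

def classify_with_llm(message: str) -> float:
    """Placeholder: send PROMPT.format(message=message) to an LLM of your choice
    and parse its numeric answer. No specific vendor API is assumed here."""
    raise NotImplementedError("wire up your LLM client here")

def assess_conversation(messages: list[str], flag_threshold: float = 0.7) -> dict:
    """Score each message, flag the risky ones, and consolidate the flags into a
    conversation-level verdict (the flag-then-consolidate idea described above)."""
    assessments = [MessageAssessment(m, classify_with_llm(m)) for m in messages]
    flagged = [a for a in assessments if a.risk >= flag_threshold]
    return {
        "flagged_messages": [a.message for a in flagged],
        "max_risk": max((a.risk for a in assessments), default=0.0),
        "verdict": "likely social engineering" if flagged else "no strong signal",
    }
```

In practice the prompt wording, the flagging threshold, and the consolidation rule would all need tuning and evaluation against real chat transcripts.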

Conclusion

In conclusion, the advent of Generative AI presents both promises and perils in the realm of cybersecurity, particularly concerning social engineering attacks. As cyber criminals increasingly leverage this technology to craft sophisticated phishing schemes and manipulate communications, the need for robust defenses becomes paramount. Large Language Models exemplify the dual-edged nature of generative AI, serving both as tools for attackers and as foundational elements for defense mechanisms. These models' ability to mimic human interaction can be turned against cyber threats when integrated into comprehensive defense frameworks that detect and neutralize malicious activities. Going forward, continuous advancement in AI-driven security solutions and broader awareness among potential targets are essential to safeguard sensitive information and maintain trust in digital landscapes. The proactive adaptation and enhancement of cybersecurity strategies will be crucial in staying ahead of threats posed by the misuse of emerging technologies.

Examples of AI in Everyday Life

AI may seem like an abstract idea, but here are some examples of how consumer products use it in our daily lives.

  • Face recognition: Many Android and Apple products use this method to unlock devices. AI technology studies and saves the facial coordinates of a human face to recognize the authorized user.
  • Smart cars: The machine learning capabilities of AI have made the idea of fully automated cars possible. They are programmed to stop at signals and slow down or stop whenever an obstacle is detected.
  • Digital assistants: They not only understand our commands, but also respond to our questions, manage calls, send emails, and set alarms. Some examples of digital assistants are Alexa, Google Assistant, and Siri.
  • Entertainment and social apps: AI is used in social media apps to customize experiences for users. Streaming platforms use it to provide recommendations based on your watch history.
  • Banking: AI chatbots are used to offer an improved customer experience and provide 24/7 support.
  • Predictive search: As you start typing a search term, many platforms use AI to run prediction algorithms to anticipate what you are looking for.
  • E-commerce: AI algorithms can classify product searches to improve searching and filtering so you only see the relevant ones.
Examples of Artificial Intelligence in Everyday Life (claysis.com)

This 3-minute video by Microsoft gives a plain English explanation of AI and machine learning. It also includes examples of consumer products that use them.

AI Creating Hard-to-Detect Phishing Emails

After the widespread adoption of ChatGPT, "Darktrace shared research which found a 135% increase in ‘novel social engineering attacks’ in the first two months of 2023." ChatGPT was released in November 2022, so in a very short time threat actors used AI to quickly make targeted attacks on a very large scale.

According to Mailgun, there are free AI tools for hackers that are available on the dark web. They are "AI without safeguards and will happily generate requests to create phishing emails, code to spoof specific websites, or any other number of nefarious requests." This enables threat actors to quickly generate a large amount of malicious material.

Attackers have the potential to leverage "Generative AI tools to automate social engineering activity by creating longer, more convincing phishing emails." It becomes harder for the average person to determine if an urgent request with a detailed explanation is legitimate.

Darktrace reports that AI and LLMs provide "lower language barriers for attackers; using their native tongue, they can ask the AI to write a message in the language of their choosing." This means spelling and grammatical errors are no longer the definitive signs of a phish that they used to be. 

Four Pillars of AI Phishing

These are the core tasks a threat actor can do with an AI tool from the dark web.

  • Data Analysis: The attacker uses AI tools to scour the internet for vast amounts of data on the target group or individual. 
  • Personalization: With the collected data, AI generates highly personalized phishing emails.
  • Content Creation: AI is used to generate convincing email content that mimics the writing style of the target's contacts or known institutions. 
  • Scale and Automation: AI makes it easy for attackers to scale their operations efficiently. They can generate numerous unique phishing emails in a short time and target a wide range of individuals.
How Phishing Attacks Are Becoming Harder to Identify (Darktrace.com)
The golden age of scammers: AI-powered phishing (mailgun.com)

Using AI at WPI

The benefits of using AI tools are making their way into our work! Copilot can help write a first draft, and Zoom AI Assistant can take meeting notes. Just keep in mind that AI is ultimately a tool that still requires human oversight and cannot be trusted implicitly.

The AI Resources page was developed by ITS to bring together AI information for the WPI community in one place! Please note that WPI sign-in is required to access the link below.

AI Resources (wpi.edu) 

There you will find information, FAQs, and links for:

  • High Performance Computing Resources
  • Training Sessions
  • AI Tools
  • Policies
  • Gordon Library Resources

Learning with Laughter 

Kombucha Girl meme: with a disgusted face, she says, "Spending hours going through security logs"; with an intrigued face, she says, "Using AI to analyze logs and produce a short report on issues to address."

From WPI's CISTO: Cybersecurity Awareness Month

October is Cybersecurity Awareness Month, a collaborative effort between government and industry to enhance cybersecurity awareness among the public.

Staysafeonline.org offers materials on cybersecurity themes, resources for speakers and event planning to support cybersecurity awareness, and so much more!

Staysafeonline.org

The federal Cybersecurity and Infrastructure Security Agency (CISA) offers an AI roadmap, resources, and FAQs.

AI Resources (cisa.gov)

Where to Find Information Security?

Visit us at the Campus Center tables near Dunkin' on Thursday, October 24 from 12:00pm - 2:00pm to learn about the tricks cybercriminals use to steal your treats.

Cybersecurity Exploration Station

Meet Women in Cybersecurity (WiCyS) 

WPI WiCyS Executive Board

We are a student chapter of the national organization, WiCyS, dedicated to empowering, educating, and supporting gender minorities in the cybersecurity field. We host Coffee Chats with professors, organize teams for Capture the Flag competitions, lead professional development events like resume-building, and bring cybersecurity professionals to campus, among other general body meeting topics. 

WiCyS - Instagram
WiCyS Club Sign Up

Upcoming WiCyS Events:

10/22 – Overview of the national WiCyS conference to be held in April

10/29 – Team challenges for National Cyber League Fall season

11/12 – Member presentations on security topics

Featured Videos

These videos explain AI and its role in cybersecurity.

AI in Cybersecurity (6 min)
AI, Machine Learning, Deep Learning and Generative AI Explained (10 min)
The Future Of AI At Arctic Wolf (8 min)

By the Numbers

- $2.5 billion was the value of the AI in education market in 2022.

- $88.2 billion is the amount AI in education is forecasted to be worth by 2032.

- The market for AI in personalized learning is forecasted to reach $48.7 billion by 2030 - up from $5.2 billion in 2022.

- AI helps 73% of students understand the learning material better.

AI in Education Market Statistics (TechReport.com)

In the News

Last month, Providence Public Schools was hit by a ransomware attack that shut down its network for an extended time. The hacker group Medusa took credit for the attack.

Stolen Data from Providence Public Schools is Published (RI Current)

Diversity in Cybersecurity 

Kerry Tomlinson, Cyber News Reporter

Kerry Tomlinson profile

Additional WPI Resources

AI at WPI
AI Degrees and Research at WPI
AI and Cutting-Edge Research at WPI
AI Resources (wpi.edu)

Coming Next Month...

Online Shopping Scams

Is there a cybersecurity topic that you would like to know more about? Please contact WPI Information Security using Get Support below.