Ask Jack: Can Malware Become Smarter In 2023?

By Jack McCalmon, The McCalmon Group, Inc.

Can malware become smarter in 2023?

 

Makers of malware are continuously adapting. The evolution of phishing is a perfect example. We have gone from "Dear Precious" emails to sophisticated spear-phishing scams within a decade. 

An emerging concern is cybercriminals leveraging artificial intelligence (AI) to make better malware and social engineering scams. ChatGPT is all the rage, including among programmers, but ChatGPT is also raising security eyebrows.

Telltale signs of phishing include misspelled words and grammar mistakes. ChatGPT eliminates those common errors. It can also help write and improve code, including malware code. https://gizmodo.com/chatgpt-ai-polymorphic-malware-computer-virus-cyber-1850012195

ChatGPT's terms and conditions prohibit illegal use, but some people claim to have found ways around those safeguards.

As one publication put it:

Hackers are already leveraging OpenAI code to develop malware. One hacker used the OpenAI tool to write a Python multi-layer encryption/decryption script that could be used as ransomware and another created an information-stealer capable of searching for, copying, compressing, and exfiltrating sensitive information. While there are many benefits of AI systems, these tools will inevitably be used for malicious purposes. Currently, the cybersecurity community has yet to develop mitigations or a way to defend against the use of these tools for creating malware, and it may not even be possible to prevent the abuse of these tools. https://www.hipaajournal.com/hackers-are-using-ai-tools-such-as-chatgpt-for-malware-development/

The takeaway is that AI has the potential to change many things for the better, but, like the Internet itself, criminals will look for ways to exploit it. As a result, employers need to adapt to these risks as they emerge in 2023.

Jack McCalmon, Leslie Zieren, and Emily Brodzinski are attorneys with more than 50 years combined experience assisting employers in lowering their risk, including answering questions, like the one above, through the McCalmon Group's Best Practices Help Line. The Best Practice Help Line is a service of The McCalmon Group, Inc. Your organization may have access to The Best Practice Help Line or a similar service from another provider at no cost to you or at a discount. For questions about The Best Practice Help Line or what similar services are available to you via this Platform, call 888.712.7667.

If you have a question that you would like Jack McCalmon, Leslie Zieren, or Emily Brodzinski to consider for this column, please submit it to ask@mccalmon.com. Please note that The McCalmon Group cannot guarantee that your question will be answered. Answers are based on generally accepted risk management best practices. They are not, and should not be considered, legal advice. If you need an answer immediately or desire legal advice, please call your local legal counsel.
