Carol discusses how generative AI, ransomware, and social engineering are affecting the IBM i security landscape.
Staying current has been my mantra, but previous articles have focused on staying current with IBM i technology. In this article, I'm surveying the broader security landscape for trends that can hurt organizations and offering suggestions for lowering your risk of a catastrophic incident.
Malware, Specifically Ransomware
Clearly ransomware is not a new topic, but, as has been the case in the past, the attacks are evolving. To review, by far the most common way ransomware affects IBM i is via a file share. The earliest ransomware attacks, in which the malware's only act was to encrypt data, were easily thwarted by changing a read/write file share to a read-only share. However, ransomware is hardly static, and the next variation exfiltrated (downloaded) the data prior to encrypting it. In fact, some ransomware now only exfiltrates data and doesn't bother to encrypt it at all. Instead, the attackers hold the data for ransom, threatening to post it to the Internet if the organization doesn't pay. This negates the read-only mitigation, because a read-only share still allows data to be exfiltrated. In IBM i 7.5, IBM provides the ability to secure the NetServer (the server that provides access via file shares) as well as individual file shares, as I've written about previously. In addition to securing the shared object itself, this feature further reduces the risk of having your data exfiltrated and/or encrypted via ransomware.
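To make the share review concrete, here is a minimal sketch of the kind of check to run against your share list. The share records below are hypothetical; on IBM i, a real list could be pulled from a catalog such as the QSYS2.SERVER_SHARE_INFO SQL service and mapped into this shape.

```python
# Sketch: flag file shares that expose data to ransomware.
# The share records below are hypothetical examples, not real output.

def flag_risky_shares(shares):
    """Return (share_name, reason) pairs for shares worth reviewing."""
    findings = []
    for share in shares:
        if share["permission"] == "READ_WRITE":
            # Writable shares let ransomware encrypt data in place.
            findings.append((share["name"], "read/write: data can be encrypted"))
        elif share["path"] in ("/", "/QSYS.LIB"):
            # Even read-only, a broad share still allows exfiltration.
            findings.append((share["name"], "broad read-only: data can be exfiltrated"))
    return findings

shares = [
    {"name": "ROOT", "path": "/", "permission": "READ_ONLY"},
    {"name": "REPORTS", "path": "/home/reports", "permission": "READ_WRITE"},
    {"name": "DOCS", "path": "/home/docs", "permission": "READ_ONLY"},
]
for name, reason in flag_risky_shares(shares):
    print(f"{name}: {reason}")
```

The point of the sketch is the review logic: a writable share and a broad read-only share are different risks, and both deserve attention.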
Unfortunately, the evolution of attacks (and, in this case, not just ransomware attacks) continues. In 2023, the Securities and Exchange Commission (SEC) issued a rule requiring publicly traded companies to disclose cybersecurity incidents (not just ransomware) that the organization deems to be material (that is, likely to affect the organization's bottom line). A Form 8-K must be filed with the SEC within four business days of determining that the incident is material. Shortly after this rule came into effect, a ransomware group "tattled" on an organization it had attacked for not reporting the breach to the SEC! Talk about bold! But that's how these groups operate: with brazen boldness.
How can you protect yourself? Stay current, take advantage of the features IBM provides for protecting your data, and make sure IBM i is part of your incident response plan. Many of you have done that, but I shudder when I think of the organizations for which an incident response plan is a foreign concept or that have not included IBM i in their recovery scenarios.
Zero-Day Attacks
A zero-day attack is one in which a bad actor exploits a previously unknown vulnerability, one the vendor has had "zero days" to fix. These are very difficult to defend against, but there are three steps you can take to reduce your risk:
- First, get rid of products you’re not using or previous versions or trials of products you’ve left installed. If the product is on your system and there’s a vulnerability, it could be exploited. Why leave yourself open to that type of attack?
- Second, keep the products you're using up to date. Many attacks are launched against unpatched versions of a product, where a fix for the vulnerability already exists but hasn't been applied.
- Third, keep current contact information with your vendors and monitor message boards and/or security-related newsletters to be alerted as soon as possible to attacks. (Here’s the information for signing up with IBM for alerts.) There’s not much you can do if your organization is one of the first to be attacked, but once the attacks are known, word travels fast within the security community. I have, on several occasions, been able to warn my clients of an attack on one of their installed products because of the warnings that come in the security newsletters I subscribe to. And, once vendors become aware of the vulnerability in their software, they will (usually) send out warnings to their customers—thus the importance of keeping your contact information current.
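The first two steps above can be sketched as a simple inventory check. This is a minimal illustration with made-up product names and versions; a real implementation would pull the installed-software list from your systems and the approved baseline from your change-management records.

```python
# Sketch: compare installed products against an approved-software baseline.
# Product names and versions here are hypothetical.

def audit_software(installed, approved):
    """Return findings: products to remove (not approved) or patch (outdated)."""
    findings = []
    for name, version in sorted(installed.items()):
        if name not in approved:
            findings.append(f"{name}: not on the approved list; remove if unused")
        elif version != approved[name]:
            findings.append(f"{name}: at {version}, current is {approved[name]}; patch")
    return findings

installed = {"FileTransferPro": "2.1", "OldReportTool": "1.0", "BackupSuite": "5.4"}
approved = {"FileTransferPro": "2.3", "BackupSuite": "5.4"}

for finding in audit_software(installed, approved):
    print(finding)
```

Run on a schedule, a check like this surfaces both the forgotten trial product that should be removed and the approved product that has fallen behind on patches.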
Generative AI
While all of these cybersecurity risks are rapidly evolving, those associated with generative AI have seen the fastest evolution. The good news is that not all of the changes are bad. Generative AI is giving security vendors that are looking to prevent attacks the ability to better predict the next attack vector and put preventive measures in place ahead of time. Unfortunately, the bad actors are also using generative AI to find new ways to attack organizations. I heard one global security vendor call it a game of whack-a-mole. As soon as the vendor blocks one method, the bad guys find another method of attack.
The security issues surrounding generative AI go beyond its use by bad actors. There are also concerns about the generative AI engine being used and the data fed to the large language model (LLM) that generates the results. If you or your developers are using a publicly available AI engine such as ChatGPT and are inputting your organization's data, that data has left your control and may be used to train the model, potentially violating privacy laws and/or your organization's compliance requirements. Many organizations (Apple, JP Morgan, Amazon, Verizon, and more) have banned the use of tools such as ChatGPT and Copilot in an attempt to keep company secrets from reaching the public domain. Organizations need to use tools that allow them to train the LLM using their own data but ensure that the tool isn't using the organization's data to train the vendor's model. In other words, ensure that your data stays within the walls of your organization. IBM's watsonx is an example of a product that allows you to keep your data confidential.
Other concerns with generative AI tools involve the security of the data used as input to the models. A general concern with generative AI is bias and the desire to keep bias out of the results. But what if someone can access the data behind the LLM and purposefully introduce data that produces biased results? A related concern is the introduction of "bad" or inaccurate data to purposefully skew the output. The result could be business decisions that are not only bad but potentially devastating or even life-threatening, depending on the type of business using the data. The importance of securing data has never been greater. (And that may be the understatement of the year!)
Developers have never been fans of security or a rigorous process, but when it comes to using generative AI, the integrity of the data is, to me, the most critical aspect of the whole process. Restrictive security MUST be in place, and processes MUST be implemented to ensure the data used to populate the LLM remains accurate—whether your developers like it or not. Make sure your developers are educated on the appropriate use of your organization’s data, and provide them with the budget to do generative AI right—that is, with tools that allow you to keep your data secure.
Social Engineering
Last but definitely not least are attacks via social engineering. Social engineering is when a bad actor pretends to have a legitimate need to know, or pretends to be someone who works with or in your organization, to convince an employee to take an action or provide information that gives the attacker access to your organization.
Two of the most notable social engineering attacks occurred in 2023, when Caesars Entertainment and MGM Resorts International both suffered significant attacks that started with social engineering. Many social engineering attacks center around getting passwords reset, which emphasizes the need to use multi-factor authentication (MFA). In addition to using MFA, education is key to stopping social engineering attacks. Many organizations send phishing emails to their own employees to help them recognize and stop responding to these attacks. But one area that I think isn't prepared enough is the IT help desk. These personnel, by the very nature of the group's name, are there to help. If someone contacts the help desk and needs help, the help desk often helps them even if doing so doesn't exactly follow the process! While I realize this goes against the nature of these folks, helping people outside of official processes has got to stop. Organizations must develop procedures that allow the help desk to accurately authenticate someone yet empower (and require) them to say no if the person makes an out-of-band request or cannot provide the required authentication information.
Another attack that typically occurs via social engineering or phishing is called Business Email Compromise (BEC). BEC has been around for a while, but when it first started, an attack was much easier to detect if an employee was paying attention. In a BEC attack, a bad actor infiltrates an organization and gains access to the email of an employee (typically a high-level manager or someone in finance), watches their email flow, and at some point inserts themselves into the conversation and redirects payments to their own accounts or requests a funds transfer. BEC became quite rampant during the pandemic, when everyone was working from home. Unusual requests were much more common during that time, and that, combined with the fact that you couldn't just shout over the cubicle wall to ask whether a request sounded legitimate, made these attacks successful. Bad actors are now using generative AI to create increasingly legitimate-looking attacks. BEC affects companies both large and small, and huge losses due to BEC have been experienced around the world.
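One small, automatable BEC defense is checking whether a sender's domain merely resembles a trusted one, since attackers often register lookalike domains to slip into an email thread. This is a minimal sketch with hypothetical domains; a production mail filter would do far more than string similarity.

```python
# Sketch: detect lookalike sender domains, a common BEC trick
# (e.g. "examp1e.com" impersonating "example.com"). Domains are hypothetical.
import difflib

def lookalike_domain(sender_domain, trusted_domains, threshold=0.8):
    """Return the trusted domain a sender domain imitates, or None.

    An exact match is legitimate; a near match above the similarity
    threshold is flagged as a likely impersonation."""
    for trusted in trusted_domains:
        if sender_domain == trusted:
            return None  # genuine sender
        if difflib.SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold:
            return trusted  # suspiciously similar, but not identical
    return None

print(lookalike_domain("examp1e.com", ["example.com"]))  # flags "example.com"
print(lookalike_domain("example.com", ["example.com"]))  # None: exact match
```

A check like this, run on inbound payment-related mail, catches the one-character domain swaps that human readers routinely miss.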
Ways to protect your organization include the following:
- Reduce the number of users with *ALLOBJ authority. The fewer profiles with “all power,” the fewer chances that a profile, if compromised, can be exploited.
- Remove profiles no longer in use, especially those of users who are no longer with the organization. One technique used by bad actors is to look through LinkedIn for people who have recently changed jobs and then call the previous organization and attempt to exploit the user’s old profile. Simply disabling the profile when someone leaves, as is the practice of many organizations, is not sufficient. If the help desk operators don’t realize the person has left the organization, they may simply re-enable the profile, and the attacker is in.
- Enable MFA. While not a perfect solution, the benefits are significant. This applies to your personal life as well. Enable MFA on your banking, cell phone, social media accounts—anywhere that offers MFA, enable it!
- Secure your test and development systems like your production systems. Not only does that produce a better product, but it protects your data. Microsoft recently suffered a significant breach by not doing that.
- Get rid of old systems no longer in use.
- Educate ALL employees (including senior management) on phishing and social engineering attacks and what they should do if they experience one.
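To illustrate why the MFA codes recommended above are hard for an attacker to forge, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, built only on Python's standard library. It is for illustration; in production, use a vetted authentication product rather than rolling your own.

```python
# Sketch: RFC 6238 TOTP (SHA-1 variant), standard library only.
import base64
import hashlib
import hmac
import struct

def totp(secret_b32, for_time, digits=6, step=30):
    """Compute a time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(for_time) // step           # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret "12345678901234567890", base32-encoded:
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, for_time=59))  # matches the RFC's SHA-1 test vector
```

Because the code is derived from a shared secret plus the current 30-second window, a stolen password alone is useless, which is exactly what defeats the password-reset schemes described earlier.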
Summary
How do you lower your risk of a catastrophic cybersecurity incident? First, have the attitude of “when” we get attacked rather than “if.” Assuming the worst will allow you to plan more effectively. And should the worst actually happen, your organization will be much more prepared and have a much greater chance of recovering successfully. Second, implement security best practices wherever possible. Finally, don’t forget to include IBM i in all your plans.
For a discussion on IBM i security best practices, see my book IBM i Security Administration and Compliance, Third Edition.
MC Press Online