5 critical steps to securing modern school networks

Matthew J. Frederickson is director of information technology at Council Rock School District.

After more than 30 years in IT, I've learned that change is the only constant. I love that this job brings constant opportunities to learn and grow. Unfortunately, if you are like me, the past several years have also brought an exponential increase in risk, and a serious loss of sleep.

I have ransomware to thank for that.

In 1989, biologist Joseph L. Popp created the first ransomware, called the AIDS Trojan, but it wasn't until 2013 that two Russian hackers realized people would pay them to decrypt files they had encrypted. Fast-forward a few years, and ransomware costs to organizations are now measured in the billions of dollars. It's no longer a "guy living in his parents' basement" thing; it's big business.

And it’s not slowing down anytime soon.

According to Dark Reading, from January 1, 2019, to September 1, 2019, approximately 50 school districts were hit with ransomware. By December 16, the number had climbed to over 1,000. According to the Department of Homeland Security, it's only going to get worse. In a fall announcement, DHS stated cyber criminals were targeting "municipalities and schools, organizations that traditionally have weak infrastructures and processes."

While I’ve never considered my infrastructure weak or my practices lacking, I had to rethink the way I do business because the outlook isn’t getting any better. In 2019, DHS stated 91% (or 94%, depending on which presentation you listen to) of security breaches occurred as a result of phishing. That means our users are our biggest threat. Why try to hack a firewall when you can just guess a password or get someone to click on a link to a cat video?

To provide some perspective, our district is spread over 72 square miles, located within five separate municipalities, and has two high schools, two middle schools, 10 elementary schools, one alternative school with grades 10-12, and two administration buildings.

We have approximately 11,000 students and about 1,300 staff, which means my department of 10 employees, including myself and my secretary, supports about 13,000 users daily. We have close to 6,000 laptops and desktops deployed, over 8,000 Chromebooks and over 1,110 iPads and Android-based tablets.

We are primarily a Microsoft Windows shop, running Active Directory, Office 365 and Google for Education. We have Windows servers hosted in a VMware environment, a Fujitsu SAN with over 180 terabytes, and a few Linux servers just to round things out. Like most public K-12 school districts, we've hit the wall when it comes to doing more with less.

Using the old mindset of “they’ll never get through my firewall” simply doesn’t cut it anymore.

The concept of defense in depth has evolved and changed. There are potentially as many internal threats as external. To adequately protect what you have, you have no choice but to implement additional resources. Not all of them can, or should be, open-source solutions. But how do you get the funding you need to do the job you need to do?

First, you need to conduct an Information Security Risk Assessment. This will identify what you have, how valuable it is (to your organization), and what the potential threats are. You then need to adopt a framework for implementing your plan. This is necessary to prevent you or anyone else from just buying solutions without considering the big picture. I recommend the NIST Cybersecurity Framework.
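To make that first step concrete, a risk register doesn't have to be fancy; a spreadsheet or a short script that scores each asset works fine. Here is a minimal sketch, assuming a basic likelihood-times-impact scoring model; the assets and scores are hypothetical, not from our actual assessment.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) chance of compromise
    impact: int      # 1 (negligible) to 5 (severe) consequence if compromised

    @property
    def risk_score(self) -> int:
        # Classic qualitative model: risk = likelihood x impact
        return self.likelihood * self.impact

# Hypothetical entries, for illustration only
register = [
    Asset("Student information system", likelihood=3, impact=5),
    Asset("Staff email (Office 365)", likelihood=4, impact=4),
    Asset("HVAC controllers", likelihood=3, impact=2),
]

# Highest-risk assets first, so remediation dollars go where they matter most
for asset in sorted(register, key=lambda a: a.risk_score, reverse=True):
    print(f"{asset.name}: risk score {asset.risk_score}")
```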

Once that’s out of the way, address the bigger issues. What are you doing to safeguard your network right now? While an audit is great, it takes time. And implementing a framework properly can take anywhere from 18 to 24 months. In all probability, you already have some great things in place, but you probably also have some gaps.

For us, I broke down what we needed to do into five simple areas.

Inside/Outside

The first step is to make sure you have adequate protection separating the network you've built (the inside) from the rest of the world (the outside). A firewall will do the trick, but you need a Next Generation Firewall (NGF). What separates an NGF from a traditional firewall is its ability to perform inspection at all seven layers of the OSI network model.

It must also have an effective intrusion detection/prevention system (IDS/IPS) built in and be capable of doing deep packet inspection (DPI). For encrypted traffic, DPI means the firewall acts as a man-in-the-middle, terminating the secure session so it can inspect the contents before passing them along. Your NGF should also have a DNS firewall, a critical feature that subscribes to a list of known-bad sites. It then blocks requests to those sites, preventing users from reaching a site that shouldn't be accessed.
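To make the DNS-firewall idea concrete, the core of it is a lookup: check every queried domain, and its parent domains, against a subscribed blocklist before the resolver answers. The sketch below shows that logic only; it is not any vendor's implementation, and the blocklist entries are made up.

```python
# Illustrative only: a real DNS firewall sits in the resolver path and pulls
# its blocklist from a threat-intelligence feed instead of a hard-coded set.
BLOCKLIST = {"malware-example.com", "phish-example.net"}  # hypothetical entries

def is_blocked(domain: str) -> bool:
    """Return True if the domain or any parent domain is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "cdn.malware-example.com", then "malware-example.com", and so on
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("cdn.malware-example.com"))  # True  -> query is refused or sinkholed
print(is_blocked("example.org"))              # False -> query resolves normally
```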

Network monitoring

In addition to an NGF, you need network monitoring that provides session data for traffic on the network. You need to be monitoring all your network segments, and you need to establish an effective baseline. You can't know what abnormal traffic looks like if you don't know what normal traffic looks like. You also need to segment all your traffic using VLANs.

If you support bring-your-own-device (BYOD), and what school district doesn't these days, that traffic should be segmented all the way back to the firewall so it's never exposed to the inside network. Most NGFs will easily support this. You should also develop baselines for each VLAN and detect unusual behavior. Understanding what is happening on your network is critical to protecting it.

One way to collect this data with no impact to traffic is to use network taps. These passive devices sit between your network equipment, capturing the data that passes through them. There are some great open-source products out there that can help you analyze your data and establish baselines, but be warned: they will require some serious effort to get configured properly. You will also need to invest the time, whether using an open-source or commercial product, in training the device and establishing a reliable baseline.
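As a sketch of what "training" a baseline means in practice, assume you can export per-VLAN traffic volumes from your tap or collector; the numbers below are invented. A simple approach is to learn the typical volume for each hour of the day and flag anything several standard deviations above it.

```python
from statistics import mean, stdev

# Hypothetical hourly byte counts for one VLAN, collected over several weeks
history = {  # hour of day -> observed totals (bytes)
    2: [1.2e8, 0.9e8, 1.1e8, 1.0e8],     # quiet overnight hours
    14: [9.5e9, 1.1e10, 1.0e10, 9.8e9],  # busy mid-afternoon
}

def is_anomalous(hour: int, observed_bytes: float, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations above the hourly mean."""
    samples = history.get(hour, [])
    if len(samples) < 2:
        return False  # not enough history yet; keep training the baseline
    mu, sigma = mean(samples), stdev(samples)
    return observed_bytes > mu + threshold * sigma

print(is_anomalous(2, 5.0e9))     # True:  gigabytes at 2 AM on a quiet VLAN
print(is_anomalous(14, 1.05e10))  # False: normal afternoon load
```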

Identity and access management

The next area to focus on is identity and access management. We use Active Directory integrated with both Office 365 and Google Apps for Education. We follow best practices when it comes to creating accounts, auditing accounts and removing access. What's tricky, thanks to users being able to log in to Office 365 or Google at any time, from anywhere, is being able to tell when a user account has been potentially hacked.

We use all the reporting tools available for both platforms, so when a 2nd-grade teacher “logs in” from Nigeria but is still teaching in her classroom here in the U.S., we know her account has been compromised.

To augment the canned reports, you should always review log data. We use a log aggregator to collect and parse the logs from our Active Directory servers. There are about 30 events (e.g. logins, logouts, how a person logged in, etc.) spread over the security, system and application logs we monitor. With custom reports and email alerts, we get notified when something looks funny, and we're able to be more responsive and effective at stopping unwanted actions before they become significant intrusions.

Once again, you need to know what normal is to know what abnormal is. For example, a high school student logging into their account at 2:00 AM is not unusual. But a teacher (well, most of them) or an elementary student logging in at 2:00 AM sets off all kinds of red flags for us.
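Here is a hedged sketch of what that kind of rule looks like once your log aggregator hands you parsed logon events with an account type and a timestamp. The roles and hour ranges are illustrative, not our actual policy.

```python
from datetime import datetime

# Hours during which a login is considered "normal" for each account type.
# These values are illustrative; tune them to your own baseline.
NORMAL_HOURS = {
    "staff": range(6, 23),         # roughly 6 AM to 10 PM
    "hs_student": range(0, 24),    # high schoolers log in at all hours
    "elem_student": range(7, 21),  # roughly 7 AM to 8 PM
}

def flag_login(account_type: str, timestamp: datetime) -> bool:
    """Return True if this login should raise a red flag for review."""
    allowed = NORMAL_HOURS.get(account_type, range(6, 23))
    return timestamp.hour not in allowed

# A teacher logging in at 2:00 AM gets flagged; a high school student does not.
print(flag_login("staff", datetime(2020, 2, 4, 2, 0)))       # True
print(flag_login("hs_student", datetime(2020, 2, 4, 2, 0)))  # False
```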

Backups

Backups often get overlooked. They've become almost ubiquitous: everyone does them, but few of us have ever really sat down and thought about them.

Besides restoring the occasional accidentally deleted file, why are you doing backups? Most people will respond, “For disaster recovery.” So my next question is always, “What is your recovery point objective?”

That’s when I usually get a blank stare.

In today's environment, we don't just back up to recover the occasional deleted file. We back up to ensure our environment can continue to function in the event of a catastrophic disaster, like ransomware.
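To put a number on it: your recovery point objective is the maximum window of data you can afford to lose, and it dictates how often backups must run and be validated. A trivial sketch of that arithmetic, using a hypothetical 24-hour target:

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=24)  # hypothetical target: lose at most one day of data

def meets_rpo(last_good_backup: datetime, now: datetime) -> bool:
    """If disaster struck right now, would the data loss stay within the RPO?"""
    return now - last_good_backup <= RPO

last_backup = datetime(2020, 2, 3, 23, 0)                   # last validated backup
print(meets_rpo(last_backup, datetime(2020, 2, 4, 9, 0)))   # True:  10 hours of loss
print(meets_rpo(last_backup, datetime(2020, 2, 6, 9, 0)))   # False: 58 hours of loss
```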

A school district located a few towns away from us was hit hard last year. They were using the same backup system we've used for years (and no, we don't use that one anymore). It became a problem for them because they had fallen into the routine of doing backups without ever validating whole backups or taking copies offline. Their entire backup system and all backups were encrypted by the attack, so they had absolutely nothing to recover.

To fix that problem, in addition to using best practices for testing and validating backups, you should employ a system that creates immutable backups. These cannot be changed. They can only be deleted, and only after their time-to-live has been exceeded. That value is set when the backup is created, so it can't be altered either.

Once you have that backup, put it somewhere the bad guys can’t get to. We use AWS Glacier, an Amazon service designed specifically for this. I also have a laptop that isn’t on the domain, and is only on the network to copy my immutable files to Glacier before I turn it back off. Unless a bad guy just happens to be scanning my network at the exact right time and doesn’t think I’m just another BYOD device, he would have no way of knowing I had immutable backups.
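For illustration, one way to land that kind of copy in AWS is to upload to an S3 bucket that was created with Object Lock enabled, using a Glacier storage class and a retention date that even an administrator can't shorten. This is only a sketch under those assumptions, not our exact pipeline; the bucket name, file name and 90-day retention are hypothetical.

```python
from datetime import datetime, timedelta, timezone

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# The backup's "time-to-live": it cannot be deleted before this date passes.
retain_until = datetime.now(timezone.utc) + timedelta(days=90)

with open("backup-2020-02-04.tar.gz", "rb") as backup:
    s3.put_object(
        Bucket="district-immutable-backups",  # hypothetical bucket, Object Lock enabled
        Key="weekly/backup-2020-02-04.tar.gz",
        Body=backup,
        StorageClass="GLACIER",               # cold storage, as described above
        ObjectLockMode="COMPLIANCE",          # even an admin can't delete it early
        ObjectLockRetainUntilDate=retain_until,
    )
```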

Endpoint protection

Finally, there's endpoint protection. Specifically, your computing endpoints. The other endpoints (IoT devices, HVAC, security cameras, card readers and other things you can't install a client on) need to be on a separate VLAN and closely monitored for unknown devices and unusual traffic.

For your computing endpoints, you really need to have a cybersecurity analyst sitting at each device, monitoring user and computer behavior, notifying you if something funny is going on. Unfortunately, that’s not practical, so we’ve come to rely on AV/malware software.

The problem with that, though, is that traditional anti-virus/malware detection relies on consistency of the viruses and malware in terms of file names, sizes, signatures, etc. Unfortunately, most viruses these days rewrite themselves as they replicate.
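To see why that matters, traditional signature matching is essentially a lookup: hash the file and compare it against a database of known-bad hashes. The moment the malware rewrites itself, the hash changes and the lookup misses. A toy illustration, with a made-up payload standing in for a real sample:

```python
import hashlib

# Toy "malware" payload; in real life this would be an actual malicious binary.
ORIGINAL_SAMPLE = b"pretend this is a known ransomware binary"

# Toy signature database: the SHA-256 hashes of samples we've already seen.
KNOWN_BAD_HASHES = {hashlib.sha256(ORIGINAL_SAMPLE).hexdigest()}

def signature_match(payload: bytes) -> bool:
    """Classic signature check: flag only exact, previously seen content."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# A "rewritten" variant: same behavior, but the bytes (and the hash) differ.
MUTATED_COPY = ORIGINAL_SAMPLE + b"\x00"

print(signature_match(ORIGINAL_SAMPLE))  # True:  caught, we've seen this exact file
print(signature_match(MUTATED_COPY))     # False: missed, the hash no longer matches
```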

You need a product that thinks like a security analyst. We started using Deep Instinct with a proof-of-concept installation to make sure it would work and was a good fit. I started by installing it on some test servers. The footprint was so small, we ended up uninstalling and reinstalling it twice because I thought I had installed it wrong. I installed it alongside a very robust AV/malware tool (a fourth quadrant tool), and there was zero impact to server performance. It didn't find anything, so I decided to install it in a few of my labs, areas I consider the most vulnerable.

Within seconds, on the third machine we installed it on, Deep Instinct identified a file it said "looked like a ransomware-bot." I quarantined the file and uploaded it to VirusTotal. My AV/malware solution had not identified it, and I wanted to see if any other solution would. VirusTotal uses about 64 AV engines to scan uploaded files. None of them detected the file as a ransomware bot.
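If you want to automate that kind of check, VirusTotal also exposes a public API. The sketch below looks up a quarantined file by its SHA-256 hash against the v3 files endpoint rather than uploading it; the API key and file path are placeholders.

```python
import hashlib

import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder; use your own key

def virustotal_verdicts(path: str) -> dict:
    """Look up a file by SHA-256 and return per-engine detection counts."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # Returns counts such as {"malicious": 0, "undetected": 64, ...}
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

# print(virustotal_verdicts("quarantine/suspicious_file.bin"))  # hypothetical path
```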

I left the file quarantined and decided to do some research. What amazed me was that three days later, my AV/malware tagged the file as a “ransomware agent.”

Since installation, Deep Instinct has detected and quarantined, with no interaction on my part, at least three attempts to penetrate our network with ransomware that we have been able to independently correlate. It works because it has a "brain" that isn't looking for specific file types, locations, signatures or just behaviors (though it does that just because it can). It's doing the threat hunting I would expect a seasoned security analyst to do on my machines.

Since the most vulnerable part of your network infrastructure is your user, you need to do as much as possible to monitor whatever they’re doing on your network.
