How We Collect Malware for Hands-On Antivirus Testing

We wouldn't recommend a car without first getting behind the wheel, and we can't rate antivirus software without testing it. Here's how we get the malware we need to do our real-world tests.

(Credit: René Ramos; Shutterstock/KatePilko, StarLine)

PCMag reviews have testing at their core. When you read one, you can be sure that the reviewer has thoroughly examined the product, testing all the promised features and ensuring that users will have a smooth experience.

For VPNs, we run world-spanning performance tests. For backup products, we check that they correctly back up files and make restoring from backup easy. For video-editing products, we measure rendering times and other factors. Those tests are perfectly safe and simple.

Testing antivirus software is another story. The only way to confirm that an antivirus program works is to challenge it with real-world ransomware, viruses, and other malware. That means we have to dive into the dark web and haul up samples for testing.


Feature Testing

The Anti-Malware Testing Standards Organization (AMTSO) offers a collection of feature-check pages, so you can make sure your antivirus is working to eliminate malware, block drive-by downloads, prevent phishing attacks, and so on. However, there's no actual malware involved. Participating antivirus companies simply agree to detect AMTSO's simulated attacks. Not every security company chooses to participate, so failing a feature check doesn’t necessarily mean there’s a problem.

(Credit: AMTSO)

Antivirus testing labs around the world put security tools through grueling tests, reporting results periodically. When lab results are available for a product, we give those scores serious weight in that product's review. If all four of the labs we follow bestow their highest rating on a product, it's sure to be an excellent choice.

Unfortunately, only about one in ten of the companies we test participates with all four labs, and another one in six with three labs. More than a quarter of them have results from just one lab, and over a third don’t participate in testing at all. Clearly, hands-on testing is a must.

Even if the labs reported on all the products that we cover, we'd still do hands-on antivirus testing. Would you trust a car review from a writer who never even took a test drive? Nope.


Casting a Net for Malware

Just because the product reports, "Hey, I nabbed a virus!" doesn’t mean it was successful. In fact, our testing often reveals instances where the antivirus caught one malware component but allowed another to run. We need to thoroughly analyze our samples, noting the changes they make to the system, so we can confirm that the antivirus did what it claimed.

The independent labs have teams of researchers dedicated to gathering and analyzing the latest samples. PCMag only has the resources to analyze a new set of samples once a year. Since the samples will stay in use for months, products tested later might have the advantage of more time to detect the same sample in the wild. To avoid any unfair advantage, we start with samples that appeared several months earlier. We use the daily feeds supplied by London-based lab MRG-Effitas, among others, to start the process.

(Credit: PCMag)

In a virtual machine, connected to the internet but isolated from the local network, we run a simple utility that takes the list of URLs and tries to download the corresponding samples. In many cases, the URL is no longer valid, of course. At this phase, we want 3,000 to 5,000 samples, because there's a serious attrition rate as we winnow down the sample set.
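As a rough illustration, a minimal Python version of such a downloader might look like the sketch below; the file names and layout are hypothetical, not our actual in-house tool, and it should only ever run inside an isolated virtual machine.

```python
# Hypothetical sketch of the download step; the real in-house tool differs.
# Run only inside an isolated virtual machine, never on a production box.
import pathlib

import requests  # third-party: pip install requests

def fetch_samples(url_file: str, dest_dir: str) -> int:
    """Try every URL in url_file; save whatever still downloads."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(exist_ok=True)
    saved = 0
    for i, line in enumerate(pathlib.Path(url_file).read_text().splitlines()):
        url = line.strip()
        if not url:
            continue
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # dead link -- the expected attrition
        (dest / f"sample_{i:05d}.bin").write_bytes(resp.content)
        saved += 1
    return saved

print(fetch_samples("urls.txt", "samples"), "samples saved")
```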

The first winnowing pass eliminates files that are impossibly small. The smallest possible executable program on modern Windows versions is 268 bytes, so anything smaller than that is probably a fragment from a download that didn't complete.
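In code, that pass is a one-screen filter. A minimal sketch, assuming the downloads landed in a samples folder:

```python
# Drop anything smaller than the minimum valid executable size noted above.
import pathlib

MIN_EXE_SIZE = 268  # bytes; smaller files are download fragments

removed = 0
for path in pathlib.Path("samples").iterdir():
    if path.is_file() and path.stat().st_size < MIN_EXE_SIZE:
        path.unlink()
        removed += 1
print(removed, "fragments removed")
```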

Next, we isolate the test system from the internet and launch each sample. Some of the samples don't launch due to incompatibility with the Windows version or the absence of needed files; boom, they're gone. Others display an error message indicating installation failure or some other problem. You might think those would hit the skids as well, but we've learned to keep them in the mix; often enough, a malicious background process keeps working after the alleged crash.


Dupes and Detections

Just because two files have different names doesn't mean they're different. Our collection scheme typically turns up many duplicates. Fortunately, there's no need to compare every pair of files byte by byte. Instead, we use a hash function, a kind of one-way fingerprint. The hash function always returns the same result for the same input, but even a slightly different input yields wildly different results, and there's no practical way to work back from the hash to the original. For our purposes, two files with the same hash are the same file.

(Credit: NirSoft)

We use the venerable HashMyFiles utility from NirSoft for this purpose. It automatically identifies (and even color-codes) files with the same hash, making it easy to get rid of duplicates.
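For a sense of what's happening under the hood, here's a rough Python equivalent; the choice of SHA-256 and the folder layout are ours for illustration.

```python
# Rough equivalent of the duplicate hunt: group files by SHA-256 and
# keep a single copy per hash. HashMyFiles does this with a GUI.
import hashlib
import pathlib
from collections import defaultdict

by_hash = defaultdict(list)
for path in pathlib.Path("samples").iterdir():
    if path.is_file():
        by_hash[hashlib.sha256(path.read_bytes()).hexdigest()].append(path)

for digest, paths in by_hash.items():
    for dupe in paths[1:]:  # first file stays; the rest are duplicates
        dupe.unlink()
```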


Another Use for Hashes

VirusTotal originated as a website for researchers to share notes about malware. Currently a subsidiary of Alphabet (Google's parent company), it continues to function as a clearinghouse.

Anyone can submit a file to VirusTotal for analysis. The site runs the sample past antivirus engines from about 70 security companies and reports how many flagged the sample as malware. It also saves the file's hash, so it doesn't have to repeat that analysis if the same file shows up again. Conveniently, HashMyFiles has a one-click option to send a file's hash to VirusTotal. We run through the samples that have made it this far and note what VirusTotal says about each.

(Credit: VirusTotal)

If most of the test engines rate the file as safe, it’s of no use to us. If none of them have seen the file, it could conceivably be an example of zero-day malware, but given that we deliberately choose samples several months old, that outcome is extremely unlikely. We look for samples that most testing engines identify as some kind of malware. In our current collection, none of the samples scored fewer than 30 hits.
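Scripting that lookup is straightforward with VirusTotal's public v3 REST API. The sketch below, with a placeholder API key, queries by hash and returns the count of engines that flagged the file; note that the free tier is rate-limited, so bulk checks take patience.

```python
# Sketch of a hash lookup against VirusTotal's public v3 REST API.
# The API key is a placeholder; field names follow VirusTotal's docs.
import requests

API_KEY = "YOUR_VT_API_KEY"  # hypothetical placeholder

def detection_count(sha256: str) -> int | None:
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    if resp.status_code == 404:
        return None  # VirusTotal has never seen this file
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"]

# Keep only samples that plenty of engines flag:
# keepers = [h for h in hashes if (detection_count(h) or 0) >= 30]
```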

One caveat. Representatives from some security companies have told me (off the record) that they don't give VirusTotal their full antivirus database, and don't share all the samples. The site itself states clearly that nobody should use it in place of an actual antivirus engine. Even so, it's a big help in identifying the best prospects.


Run and Watch

At this point, the hands-on analysis begins. We use an in-house program (cleverly named RunAndWatch) to run and watch each sample. A vintage PCMag utility called InCtrl (short for Install Control) snapshots the Registry and file system before and after the malware launch, reporting what changed. Of course, knowing that something has changed doesn't prove that the malware sample changed it.
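The core idea is a simple before-and-after diff. A toy Python version, watching a hypothetical target folder and covering only the file system (InCtrl also tracks the Registry), might look like this:

```python
# Toy illustration of the snapshot-and-diff idea; InCtrl also tracks
# the Registry, while this sketch covers only the file system.
import pathlib

def snapshot(root: str) -> set[str]:
    return {str(p) for p in pathlib.Path(root).rglob("*")}

before = snapshot(r"C:\TestTarget")  # hypothetical watched folder
# ... launch the sample here and let it finish ...
after = snapshot(r"C:\TestTarget")

print("Added:  ", sorted(after - before))
print("Removed:", sorted(before - after))
```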

(Credit: Microsoft)

Microsoft's Process Monitor (ProcMon) tracks all activity in real time, logging Registry and file system actions (among other things) performed by every process. Even with our filters, its logs are huge, but they help us tie the changes reported by InCtrl to the processes that made them.
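ProcMon can export its log to CSV, and a short script can then map each changed path to the process responsible. This sketch assumes the export kept ProcMon's standard column headers; adjust the names if your export differs.

```python
# Tie changes back to the processes that made them, assuming a ProcMon
# log exported to CSV with its standard column headers.
import csv
from collections import defaultdict

writers = defaultdict(set)
with open("procmon_log.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row["Operation"] in ("WriteFile", "RegSetValue"):
            writers[row["Path"]].add(row["Process Name"])

for path, procs in sorted(writers.items()):
    print(path, "<-", ", ".join(sorted(procs)))
```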


Rinse and Repeat

Boiling down the huge logs from the previous step into something usable takes time. Using another in-house program, we eliminate duplicates, gather entries that seem to be of interest, and wipe out data that's clearly unrelated to the malware sample. This is an art as well as a science; it takes a lot of experience to quickly recognize non-essential items and capture entries of importance.

Sometimes after this filtering process, there's just nothing left, meaning that whatever the sample did, our simple analysis system missed it. If a sample gets past this step, it goes through yet another in-house filter. This one takes a closer look for duplicates and starts putting the log data in a format used by the final tool, the one that checks for malware traces during testing.
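Mechanically, these passes boil down to deduplication plus pattern-based noise removal, something like the sketch below; the noise patterns shown are invented stand-ins, not our actual filter list.

```python
# The mechanical part of the filtering pass: drop exact duplicates and
# anything matching a known-benign pattern. These patterns are invented
# stand-ins, not our actual filter list.
import re

NOISE = [
    re.compile(r"\\Windows\\Prefetch\\", re.IGNORECASE),
    re.compile(r"\\Temp\\~", re.IGNORECASE),
]

def winnow(entries: list[str]) -> list[str]:
    seen, kept = set(), []
    for entry in entries:
        if entry in seen or any(p.search(entry) for p in NOISE):
            continue
        seen.add(entry)
        kept.append(entry)
    return kept
```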


Last-Minute Adjustments

The culmination of this process is our NuSpyCheck utility (the name comes from a very early version created ages ago when spyware was more prevalent). With all the samples processed, we run NuSpyCheck on a clean test system. Quite often, we'll find that some of what we thought were malware traces prove to be already present on the system. In that case, we flip NuSpyCheck into edit mode and remove those.

There's one more slog, and it's an important one. Resetting the virtual machine to a clean snapshot between tests, we launch each sample, let it run to completion, and check the system with NuSpyCheck. Here again, there are always some traces that seemed to show up during data capture, but don't show up at test time, perhaps because they were temporary. In addition, many malware samples use randomly generated names for files and folders, different each time. For those polymorphic traces, we add a note describing the pattern, such as "executable name with eight digits."
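A trace like that is easy to express as a pattern rather than a literal file name, for instance:

```python
# A polymorphic trace expressed as a pattern instead of a literal name.
import re

EIGHT_DIGIT_EXE = re.compile(r"^\d{8}\.exe$", re.IGNORECASE)

print(bool(EIGHT_DIGIT_EXE.match("48215907.exe")))  # True
print(bool(EIGHT_DIGIT_EXE.match("notepad.exe")))   # False
```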

A few more samples leave the field at this final phase because, with all the shaving away of data points, nothing was left to measure. Those that remain become the next set of malware samples. From the original 3,000 to 5,000 URLs, we typically wind up with around five to seven dozen.


The Ransomware Exception

System-locker ransomware like the notorious Petya encrypts your whole hard drive, flashes a scary red-and-white skull image, and leaves the computer unusable until you pay the ransom. The more common file-encryption ransomware types encrypt your files in the background; when they've done the dirty deed, they pop up a big ransom demand. We don't need a utility to detect that the antivirus missed one of these; the malware makes itself plain.

Many security products are adding extra layers of ransomware protection, beyond the basic antivirus engines. That makes sense. If your antivirus misses a Trojan attack, it'll probably clear it up in a few days after it gets new signatures. But if it misses ransomware, you're out of luck. When possible, we disable the basic antivirus components and test whether the ransomware protection system alone can keep your files and computer safe.


What These Samples Aren't

The big antivirus testing labs can use many thousands of files for static file recognition testing, and many hundreds for dynamic testing (meaning they launch the samples and see what the antivirus does). We're not trying for that. Our few dozen samples let us get a feel for how the antivirus handles attacks, and when we have no results from the labs, we have something to fall back on.

We do try to ensure a mix of many kinds of malware, including ransomware, Trojans, viruses, and more. We also include some Potentially Unwanted Applications (PUAs), making sure to turn on PUA detection in the product under test, if necessary.

Some malware applications detect when they're running in a virtual machine and refrain from nasty activity. That's fine; we just don't use those. Some wait hours or days before activating. Once again, we don't use those.

We hope this peek behind the scenes at our hands-on malware protection testing has given you some insight into how far we'll go to experience antivirus protection in action. As noted, we don't have a devoted team of antivirus researchers the way the big labs do, but we bring you in-the-trenches reporting that you won't find anywhere else.
