

By Mariano Sumrell, AVG Brasil Marketing Director
 
Antivirus performance tests are regularly published across various media channels. This is not surprising, given the importance of protecting computers and other devices from threats in an increasingly connected world.
 
However, the more attentive reader will have noticed that the results tend to vary a lot from test to test, and may wonder why there is such a difference. The reason is the complexity involved in this type of test, which stems mainly from two factors: the difficulty of obtaining a good malware sample, and the variety of detection mechanisms used by modern antivirus products.
 
Let's start by looking at the sampling problem. More than 100,000 new threats appear every day. For a test to have value, the sample used must meet some conditions:
 
a) It must be large enough to have statistical significance. Given the volume of new threats, a reasonable sample for a good antivirus test is on the order of a few hundred thousand malware samples.
 
b) The malware collection must cover the many different types of threats, so that it represents the immense universe of malware found on the internet.
 
c) The elements of the sample must be relevant, that is, they must be threats that users actually encounter in their daily lives.
 
d) It must contain a reasonable number of files that are not actually threats but can easily be confused with malware, in order to measure the false positives reported by a given antivirus solution.
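To make condition (a) concrete, here is a quick illustrative calculation (ours, not from the article) of how the statistical margin of error on a measured detection rate shrinks as the sample grows, using the standard normal approximation for a proportion:

```python
import math

def margin_of_error(rate, n, z=1.96):
    """Half-width of the 95% confidence interval (normal approximation)
    for a detection rate measured on a sample of n files."""
    return z * math.sqrt(rate * (1 - rate) / n)

# A 99% detection rate measured on different sample sizes:
for n in (1_000, 10_000, 300_000):
    moe = margin_of_error(0.99, n)
    print(f"n={n:>7}: 99% +/- {moe * 100:.2f} percentage points")
```

With 1,000 files the uncertainty is roughly ±0.6 percentage points; with a few hundred thousand it drops to a few hundredths of a point, which is why large samples matter.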
 
We can already see that obtaining a quality sample that meets all these conditions is not so simple. And to carry out a test, the investigator must be able to identify precisely which elements are actually malware and which only appear to be a threat.
 
But a good, well-identified sample is still not enough. In addition to signature detection (a database of known viruses), modern antivirus products use techniques such as heuristics, behavioral analysis, cloud detection and several other layers of protection that are not exercised by static tests (simple file scans).
 
Thus, an effective evaluation has to be done dynamically, that is, with the program running. This implies that each of the hundreds of thousands of elements in the sample must be executed for each antivirus version under test. In addition, the machine has to be restored to a clean state between runs, so we can be sure the test is not being performed on an already contaminated machine.
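The dynamic procedure described above can be sketched as a simple loop. This is an illustration only: `restore_snapshot`, `run_sample` and `detected` are hypothetical helpers standing in for the virtual-machine management and telemetry collection a real test lab would use.

```python
def run_dynamic_test(antivirus, samples, restore_snapshot, run_sample, detected):
    """Execute every sample on a freshly restored machine and
    return the observed detection rate for one antivirus product."""
    detections = 0
    for sample in samples:
        restore_snapshot()                # reset the machine to a known-clean state
        run_sample(sample)                # run the sample with the antivirus active
        if detected(antivirus, sample):   # did any protection layer catch it?
            detections += 1
    return detections / len(samples)
```

The cost is easy to see: with hundreds of thousands of samples and a snapshot restore before each one, the loop must repeat in full for every product tested.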
 
Now that we've explained the complexity of evaluating antivirus programs, the question remains: "So antivirus tests aren't valid?" The answer is simple: there are very serious, professionally run tests that can be used to get a sense of the effectiveness of an antivirus solution. However, because of this complexity and the reliance on samples, statistical fluctuations are to be expected, which means, in practice, that small differences of one or two percentage points do not necessarily mean that one solution is better than the other.
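As a rough illustration of why a gap of a point or two can be statistical noise, a standard two-proportion z-test (our example, not the article's) shows that a half-point difference measured on 2,000 samples per product is not significant at the conventional 95% level:

```python
import math

def two_proportion_z(p1, p2, n1, n2):
    """z statistic of the pooled two-proportion test for comparing
    two detection rates measured on samples of size n1 and n2."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled detection rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # standard error of the difference
    return (p1 - p2) / se

# Product A scores 99.5%, product B scores 99.0%, 2,000 samples each:
z = two_proportion_z(0.995, 0.990, 2000, 2000)
print(f"z = {z:.2f}  (|z| < 1.96 means not significant at the 95% level)")
```

Here z comes out to about 1.83, below the 1.96 threshold, so on a sample of this size the half-point gap could well be chance rather than a real difference in protection.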
 
It is also worth remembering that the most serious, professional tests usually publish their methodology: they explain the conditions under which the test was performed and describe the sample used, making the process more transparent and reliable.
 
