Okay, I usually don't post outside of malware submissions, but since this is related to tests I thought I'd speak my mind, being a malware hunter who does (and is still learning) some malware analysis on my own.
If you want to visit suspicious sites, be prepared to get infected anyway: treat the precautions as backups, and don't store anything even moderately sensitive on your machine. And note that I specifically said detection "by signatures". There are also generic protections and layered protections.
A typical chained scenario today looks like this:
Porn site -> malicious js -> malicious pdf -> malicious downloader -> malicious binaries.
Don't visit such porn sites.
Don't use vulnerable apps.
Have an antivirus with layered protection.
And then, who cares if Emsisoft doesn't detect one of the downloaded malicious binaries, when the porn site is blocked and we detect the JS and the PDF?
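The argument above can be sketched as a toy simulation. The stage names, layer names, and which layer detects what are all illustrative assumptions, not taken from any real product: the point is only that if *any* layer stops *any* stage, the chain never reaches the payload.

```python
# Toy model of a layered infection chain (all names are hypothetical).
CHAIN = ["porn_site_url", "malicious_js", "malicious_pdf", "downloader", "payload_binary"]

# Each protection layer can block certain stages. Note that the
# signature engine misses the final payload_binary entirely.
LAYERS = {
    "web_filter": {"porn_site_url"},
    "script_scanner": {"malicious_js"},
    "exploit_shield": {"malicious_pdf"},
    "signature_engine": {"downloader"},
}

def chain_blocked(chain, layers):
    """Return (stage, layer) for the first stage any layer stops, or None."""
    for stage in chain:
        for layer, detects in layers.items():
            if stage in detects:
                return stage, layer
    return None

print(chain_blocked(CHAIN, LAYERS))  # chain is stopped at the very first stage
```

A static test that only fed `payload_binary` to the signature engine would score this setup 0%, even though the user was never infected.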
It's very hard to evaluate the real-world performance of an AV solution when we don't (and I suspect can't) test the whole chain and prove whether the user is protected. Tests on VT and the like prove nothing but the engine's ability to detect a sample by signature.
I have objections to all the tests AV-Comparatives performs, and to AV-Test's as well, but the latter are less documented, so it's hard to tell where the deficiencies lie.
The usual points about static testing are:
a) the tests are carried out long after the real infection took place, so they're kind of useless from today's point of view
b) the tests are carried out without any contextual state information. For example, if a file named "document.doc .exe" arrives in email, that alone is enough to block its execution
c) the tests know nothing about the relationships between samples. If you detect the dropper, you don't have to detect the dropped binary.
d) the tests are too binary-centric and include only a small amount of script/PDF/Flash malware, although these are among the main vectors for getting through to your computer.
e) there is little or no info on how the testbeds are created. All those 99.1% and similar scores are complete nonsense from my point of view. The overlap between products' detections is not as great as the Clementi/Marx tests suggest.
f) the number of samples tested is around 500 per month; that's not even half of what comes out each day. It's like a drop in the ocean.
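To make point b) concrete, here's a minimal sketch of the kind of context-aware check I mean. The extension lists and the regex are my own illustrative assumptions, not any vendor's actual rule:

```python
import re

# Hypothetical heuristic: a "document" extension followed by whitespace
# padding and an executable extension is a classic disguise trick.
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".pif", ".bat"}
DOCUMENT_EXTS = {".doc", ".pdf", ".xls", ".txt", ".jpg"}

def looks_like_double_extension(filename: str) -> bool:
    """Flag names like 'document.doc              .exe' that hide an
    executable behind a document extension and whitespace padding."""
    m = re.match(r".*(\.\w+)\s+(\.\w+)$", filename.lower())
    if not m:
        return False
    return m.group(1) in DOCUMENT_EXTS and m.group(2) in EXECUTABLE_EXTS

print(looks_like_double_extension("document.doc              .exe"))  # True
print(looks_like_double_extension("report.doc"))                      # False
```

A static scan of the extracted binary alone never sees the filename it arrived under, so this whole class of context signal is invisible to such tests.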
This is not an excuse; it's an explanation of what you really should take away from the static tests. Yep, it's nice to be in the top spots, but the world doesn't end if you're not.
Regarding the pro-active test: this is the most flawed test of them all. It does _NOT_ test the product's ability to protect you from unknown malware. It tests the ability of the signature engines to detect the samples AV-Comparatives gathered in the test's timeframe. For example, what if the engine authors already had the samples and had written detections before AV-Comparatives added them? We're back again at the 'testbed construction' problem.