One of the consequences of the Meltdown and Spectre vulnerabilities, besides the immediate security concerns, is what they have meant for our ongoing testing and benchmarking operations. Along with accounting for the immediate performance changes from the required patches, we’ve also needed to look at how we present data and how we can (or can’t) compare new patched systems to older unpatched systems. From an editorial perspective, we need to ensure the continuity of our data as well as provide meaningful benchmark collections.
What we’ve found so far is that the impact of the Meltdown and Spectre patches varies with the workload and the type of test. Nate’s GPU testing shrugs it off almost entirely, while Billy’s SSD testing especially feels the pinch. In the middle, of course, are consumer system reviews, where workloads are more varied but also much more realistic and often more relevant as well. Of course, this is also the type of hardware that we most often have to return when we’re done, making it the most challenging to correct.
As everyone at AnandTech has been integrating the patches and updating their datasets in their own way, I wanted to offer some insight into what’s going on for system testing. With several important systems set to launch in the first half of this year – particularly Intel’s Hades Canyon – I’ve been going back and collecting updated results for use in those future reviews. In the process, I also wanted to document how performance has been affected by these security patches and which benchmarks in particular show the impact.