The market for portable SSDs has shown rapid growth over the past decade or so. Almost all tier-one flash vendors (including Samsung, Western Digital / SanDisk, Micron / Crucial, and SK hynix) have their own PSSD lineup. All but SK hynix have multiple PSSD models on the market, allowing them to target different market segments in terms of performance / serviced use-cases and cost. Samsung was one of the first tier-one vendors to pay attention to this market with the launch of the T1 Portable SSD in 2015. Since then, the company has been regularly introducing new PSSD models in the T series, while also launching the Thunderbolt-compliant X5.
The T5 EVO Portable SSD being launched today by Samsung heralds a new category in the PSSD space – high-capacity flash storage limited to entry-level speeds. The minimum capacity point in the T5 EVO PSSD family is 2 TB, with options for the consumer to scale up to 4 TB and 8 TB based on user requirements. Pricing is kept reasonable by the use of QLC flash. Samsung is no stranger to using QLC flash in high-capacity consumer storage products. The 860 QVO and 870 QVO SATA SSDs were launched in 2018 and 2020 respectively, with the latter offering an 8 TB version at the high end. Samsung’s use of dynamic SLC caching (‘Intelligent TurboWrite’ in their marketing parlance (PDF)) allowed these drives to offer compelling $/TB metrics while delivering enough performance for day-to-day PC usage. However, the performance after running out of the SLC cache was abysmal – even worse than 2.5″ hard drives.
The T5 EVO PSSD is not the first in the market to use QLC flash. Sabrent had used QLC flash in their lineup of dual-mode (Thunderbolt + USB) drives marketed under the XTRM Q tag. While the 4 TB and 8 TB capacity options for the XTRM Q were extremely attractive, and pricing was not outright ridiculous, the performance of the drives did not keep up with user expectations (particularly from the viewpoint of a Thunderbolt drive). Samsung is wisely avoiding this misstep by limiting the T5 EVO to 5 Gbps speeds with its USB 3.2 Gen 1 Type-C connector. Can limiting the interface speeds for a QLC PSSD deliver a better user experience? Samsung sent across the 8 TB variant of the T5 EVO to put through our rigorous direct-attached storage evaluation process. The review below investigates the performance profile of the PSSD sample and provides some comments on its value proposition.
The market for bus-powered direct-attached storage devices has expanded considerably over the last few years. Rapid advancements in 3D NAND technology, coupled with increasing confidence of manufacturers in QLC (4-bits per cell) has driven down the $/TB metric for capacious SSDs. On the host interface front, updates to the maximum speed have been clocking in at regular intervals. While USB 2.0 with its 480 Mbps speed held sway for more than 8 years, successive updates to 5 Gbps, 10 Gbps, 20 Gbps, 40 Gbps, and 80 Gbps have been introduced with a gap of 3 to 4 years between them. From the perspective of bus-powered storage devices, distinctive categories have emerged depending on the performance profile and internal components.
- 3GBps+ class: USB4 SSDs
- 2.5GBps+ class: Thunderbolt SSDs with PCIe 3.0 x4 NVMe drives
- 2GBps+ class: USB 3.2 Gen 2×2 SSDs with PCIe 3.0 x4 NVMe drives or native UFD controllers
- 1GBps+ class: USB 3.2 Gen 2 SSDs with PCIe 3.0 (x4 or x2) NVMe drives or native UFD controllers
- 500MBps+ class: USB 3.2 Gen 2 SSDs with SATA drives or native UFD controllers
- 400MBps+ class: USB 3.2 Gen 1 SSDs with SATA drives or native UFD controllers
- Sub-400MBps class: USB 3.2 Gen 1 flash drives with direct flash-to-USB (native UFD) controllers
The recent spate of product introductions has been in the first four categories, with a push for native UFD controllers that offer power consumption and thermal performance advantages. Therefore, it came as a bit of a surprise when Samsung sent across the details of the T5 EVO Portable SSD in the 400MBps+ class. The company is pitching the capacity options as the key selling point.
The T5 EVO is a 102g 95mm x 40mm x 17mm USB 3.2 Gen 1 PSSD made of solid metal with a rubber sleeve. The casing includes a metal ring for a lanyard, making it easy to carry around without fear of misplacing the drive. The T5 EVO’s dimensions make it appear to be a thick and oversized thumb drive (without the protruding connector), but the full metal construction lends it enough weight to give users a solid feel when handling it. The unit includes a blue LED power indicator near the upstream Type-C port. The rubber covering provides it extra durability against external shocks (Samsung claims drop testing from a height of 2m, but the 3-year warranty doesn’t cover physical damage caused by mishandling). Samsung includes a single 46cm USB 3.2 Gen 1 Type-C to Type-C cable in the package.
Samsung’s 8TB 870 QVO SATA SSD has been in the market for a few years. Given a similar capacity point, and performance numbers below that of the 870 QVO (due to host interface limitations), we suspected that the T5 EVO platform would share a lot in common with the 870 QVO. Our suspicions were not unfounded, as shown in the teardown gallery below.
The internals of the T5 EVO are hidden behind a number of screws under different plastic overlays. After locating and removing these screws, the rubber sleeve can be peeled off to reveal the solid metal casing. Removing four easy-to-spot screws on the underside of the metal case allows the two layers of the casing to be unclasped. This reveals the single-board solution, whose components are protected by thermal pads that lie tight against the metal casing under normal circumstances.
Similar to the 870 QVO, the 8TB T5 EVO incorporates four flash packages (two on each side of the board, as shown in the gallery). However, the part numbers are different. While the 870 QVO used the K9XVGB8J1A, the T5 EVO uses the K9YYGB8J1C (visible at the bottom of the board photograph above). The 870 QVO used Samsung’s 9XL 3D V-NAND in QLC mode, but the new flash part number belongs to a different generation (likely decoding to 136L 3D V-NAND in QLC mode). The SATA controller (S4LR069 Metis) remains the same as the 870 QVO as does the presence of 8GB of LPDDR4 RAM for flash translation layer usage. The obvious additional component is the bridge chip – the ASMedia ASM235CM which bridges a single downstream SATA III port with a USB 3.2 Gen 1 (5 Gbps) Type-C upstream port.
The Samsung T5 EVO PSSD also includes hardware encryption support. Users can equip the PSSD with a password set via the Samsung Magician software.
CrystalDiskInfo provides a quick overview of the capabilities of the internal storage device. TRIM and NCQ are seen in the features list. The benchmark numbers in the next section confirm that native command queuing is active in the PSSD, and all S.M.A.R.T. features such as temperature readouts worked well.
|S.M.A.R.T Passthrough – CrystalDiskInfo
The table below presents a comparative view of the specifications of the different storage bridges presented in this review.
**Comparative Direct-Attached Storage Devices Configuration**

| Aspect | Samsung T5 EVO 8TB | Samsung T9 Portable SSD 4TB |
|---|---|---|
| Downstream Port | 1x SATA III | 1x PCIe 3.0 x4 |
| Upstream Port | USB 3.2 Gen 1 Type-C (Female) | USB 3.2 Gen 2×2 Type-C (Female) |
| Use Case | 5Gbps-class, ultra high-capacity, compact, and sturdy portable SSD with a Type-C interface | 2GBps-class, sturdy palm-sized high-performance portable SSD with a Type-C interface |
| Physical Dimensions | 95 mm x 40 mm x 17 mm | 88 mm x 60 mm x 14 mm |
| Cable(s) | 46 cm USB 3.2 Gen 1 Type-C (male) to Type-C (male) | 45 cm USB 3.2 Gen 2×2 Type-C (male) to Type-C (male); 45 cm USB 3.2 Gen 2 Type-C (male) to Type-A (male) |
| Flash | Samsung 136L V-NAND (6th Gen.) QLC | Samsung 136L V-NAND (6th Gen.) |
| Evaluated Review | Samsung T5 EVO 8TB Review | Samsung T9 Portable SSD 4TB Review |
Prior to looking at the benchmark numbers, power consumption, and thermal solution effectiveness, a description of the testbed setup and evaluation methodology is provided.
Testbed Setup and Evaluation Methodology
Direct-attached storage devices are evaluated using the Quartz Canyon NUC (essentially, the Xeon / ECC version of the Ghost Canyon NUC) configured with 2x 16GB DDR4-2667 ECC SODIMMs and a PCIe 3.0 x4 NVMe SSD – the IM2P33E8 1TB from ADATA.
The most attractive aspect of the Quartz Canyon NUC is the presence of two PCIe slots (electrically, x16 and x4) for add-in cards. In the absence of a discrete GPU – for which there is no need in a DAS testbed – both slots are available. In fact, we also added a spare SanDisk Extreme PRO M.2 NVMe SSD to the CPU direct-attached M.2 22110 slot in the baseboard in order to avoid DMI bottlenecks when evaluating Thunderbolt 3 devices. This still allows for two add-in cards operating at x8 (x16 electrical) and x4 (x4 electrical). Since the Quartz Canyon NUC doesn’t have a native USB 3.2 Gen 2×2 port, Silverstone’s SST-ECU06 add-in card was installed in the x4 slot. All non-Thunderbolt devices are tested using the Type-C port enabled by the SST-ECU06.
The specifications of the testbed are summarized in the table below:
**The 2021 AnandTech DAS Testbed Configuration**

| Component | Details |
|---|---|
| System | Intel Quartz Canyon NUC9vXQNX |
| CPU | Intel Xeon E-2286M |
| Memory | ADATA Industrial AD4B3200716G22, 32 GB (2x 16GB) DDR4-3200 ECC @ 22-22-22-52 |
| OS Drive | ADATA Industrial IM2P33E8 NVMe 1TB |
| Secondary Drive | SanDisk Extreme PRO M.2 NVMe 3D SSD 1TB |
| Add-on Card | SilverStone Tek SST-ECU06 USB 3.2 Gen 2×2 Type-C Host |
| OS | Windows 10 Enterprise x64 (21H1) |

Thanks to ADATA, Intel, and SilverStone Tek for the build components.
The testbed hardware is only one segment of the evaluation. Over the last few years, typical direct-attached storage workloads have also evolved. High bit-rate 4K videos at 60fps have become quite common, and 8K videos are starting to make an appearance. Game install sizes have also grown steadily, even on portable game consoles, thanks to high-resolution textures and artwork. Keeping these in mind, our evaluation scheme for portable SSDs and UFDs involves multiple workloads, which are described in detail in the corresponding sections.
- Synthetic workloads using CrystalDiskMark and ATTO
- Real-world access traces using PCMark 10’s storage benchmark
- Custom robocopy workloads reflective of typical DAS usage
- Sequential write stress test
In the next couple of sections, we present an overview of the performance of the 8TB variant of the Samsung T5 EVO PSSD in these benchmarks. Prior to the concluding remarks, we also have some observations on the drive’s power consumption numbers and thermal solution.
Benchmarks such as ATTO and CrystalDiskMark help provide a quick look at the performance of the direct-attached storage device. The results translate to the instantaneous performance numbers that consumers can expect for specific workloads, but do not account for changes in behavior when the unit is subject to long-term conditioning and/or thermal throttling. Yet another use of these synthetic benchmarks is the ability to gather information regarding support for specific storage device features that affect performance.
Samsung claims read and write speeds of up to 460 MBps for the 8TB variant of the T5 EVO. We do get numbers close to those (446 MBps) in the ATTO benchmarks presented below. Our ATTO benchmarking is restricted to a single configuration in terms of queue depth, and is only representative of a small sub-set of real-world workloads. The results also allow the visualization of change in transfer rates as the I/O size changes, with optimal performance being reached around 1 MB for a queue depth of 4.
CrystalDiskMark, for example, uses four different access traces for reads and writes over a configurable region size. Two of the traces are sequential accesses, while two are 4K random accesses. Internally, CrystalDiskMark uses the Microsoft DiskSpd storage testing tool. The ‘Seq128K Q32T1’ sequential trace uses a 128K block size with a queue depth of 32 from a single thread, while the ‘RND4K Q32T16’ trace performs random 4K accesses at the same queue depth, but from 16 threads. The ‘Seq1M’ traces use a 1MiB block size, and the plain ‘RND4K Q1T1’ trace uses only a single queue and a single thread. Comparing the ‘RND4K Q32T16’ and ‘RND4K Q1T1’ numbers can quickly tell us whether the storage device supports NCQ (native command queuing) / UASP (USB-attached SCSI protocol): if the numbers for the two access traces are in the same ballpark, NCQ / UASP is not supported. This assumes that the host port / drivers on the PC support UASP.
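The NCQ / UASP check described above can be sketched as a simple heuristic. The figures and the 2x threshold below are hypothetical illustrations, not measurements from this review:

```python
# Heuristic sketch of the comparison described above. The numbers and the
# 2x ratio threshold are hypothetical, not results from this review.
def likely_supports_ncq(rnd4k_q32t16_mbps, rnd4k_q1t1_mbps, ratio_threshold=2.0):
    """If deep-queue random 4K reads scale well past the single-queue,
    single-thread result, NCQ / UASP is likely active on the device."""
    return rnd4k_q32t16_mbps / rnd4k_q1t1_mbps >= ratio_threshold

print(likely_supports_ncq(180.0, 25.0))  # deep queues help a lot -> True
print(likely_supports_ncq(26.0, 25.0))   # same ballpark -> False
```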
Typical of QLC SSDs, the random write performance is quite weak. However, the random read numbers are excellent. Thanks to the availability of dedicated DRAM for the FTL, the random read performance at low queue depths for the T5 EVO is actually better than that of the DRAM-less T7 Shield. Sequential numbers are limited by the host interface.
Our testing methodology for storage bridges / direct-attached storage units takes into consideration the usual use-case for such devices. The most common usage scenario is transfer of large amounts of photos and videos to and from the unit. Other usage scenarios include the use of the unit as a download or install location for games and importing files directly from it into a multimedia editing program such as Adobe Photoshop. Some users may even opt to boot an OS off an external storage device.
The AnandTech DAS Suite tackles the first use-case. The evaluation involves processing five different workloads:
- AV: Multimedia content with audio and video files totalling 24.03 GB over 1263 files in 109 sub-folders
- Home: Photos and document files totalling 18.86 GB over 7627 files in 382 sub-folders
- BR: Blu-ray folder structure totalling 23.09 GB over 111 files in 10 sub-folders
- ISOs: OS installation files (ISOs) totalling 28.61 GB over 4 files in one folder
- Disk-to-Disk: Addition of 223.32 GB spread over 171 files in 29 sub-folders to the above four workloads (total of 317.91 GB over 9176 files in 535 sub-folders)
Except for the ‘Disk-to-Disk’ workload, each data set is first placed in a 29GB RAM drive, and a robocopy command is issued to transfer it to the external storage unit (formatted in exFAT for flash-based units, and NTFS for HDD-based units).
robocopy /NP /MIR /NFL /J /NDL /MT:32 $SRC_PATH $DEST_PATH
Upon completion of the transfer (write test), the contents from the unit are read back into the RAM drive (read test) after a 10 second idling interval. This process is repeated three times for each workload. Read and write speeds, as well as the time taken to complete each pass are recorded. Whenever possible, the temperature of the external storage device is recorded during the idling intervals. Bandwidth for each data set is computed as the average of all three passes.
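The per-workload bandwidth computation reduces to the sketch below. Only the AV data-set size comes from this review; the pass times are hypothetical placeholders:

```python
# Average bandwidth across three robocopy passes, as described above.
# 24.03 GB is the AV suite size from this review; pass times are hypothetical.
av_bytes = 24.03 * 1e9
pass_seconds = [55.0, 54.2, 54.8]  # hypothetical wall-clock times per pass
bandwidth_mbps = [av_bytes / s / 1e6 for s in pass_seconds]
avg_mbps = sum(bandwidth_mbps) / len(bandwidth_mbps)
print(f"{avg_mbps:.1f} MBps")  # prints "439.6 MBps"
```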
The ‘Disk-to-Disk’ workload involves a similar process, but with a single iteration. The data is copied to the external unit from the CPU-attached NVMe drive, and then copied back to the internal drive. It includes a greater amount of continuous data transfer in a single direction, as data that doesn’t fit in the RAM drive is also part of the workload set.
Across all of these workloads, the T5 EVO performs slightly better in the reads and much better in the writes compared to the only other USB 3.2 Gen 1 device in our results database – a USB flash drive. Other drives with better host interfaces / internal TLC SSDs deliver higher bandwidth numbers for these use-cases, as per expectations. Power users may want to dig deeper to understand the limits of each device. To address this concern, we also instrumented our evaluation scheme for determining performance consistency.
Aspects influencing performance consistency include SLC caching and thermal throttling / firmware caps on access rates to avoid overheating. This is important for power users, as the last thing they want to see when copying over hundreds of gigabytes of data is the transfer rate dropping to USB 2.0 speeds.
In addition to tracking the instantaneous read and write speeds of the DAS when processing the AnandTech DAS Suite, the temperature of the drive was also recorded. In earlier reviews, we tracked the temperature throughout the run. However, we have observed that polling S.M.A.R.T. temperature readouts on NVMe SSDs behind USB 3.2 Gen 2 bridge chips negatively affects the actual transfer rates. To avoid this problem, we have restricted ourselves to recording the temperature only during the idling intervals. The graphs below present the recorded data.
|AnandTech DAS Suite – Performance Consistency
The first three sets of writes and reads correspond to the AV suite. A small gap (for the transfer of the video suite from the internal SSD to the RAM drive) is followed by three sets for the Home suite. Another small RAM-drive transfer gap is followed by three sets for the Blu-ray folder. This is followed up with the large-sized ISO files set. Finally, we have the single disk-to-disk transfer set.
The T5 EVO is able to maintain around 450 MBps consistently for reads and writes when large file sizes are involved. For smaller file sizes, the writes drop down a bit. However, the behavior is not indicative of the SLC cliff seen in other PSSDs. While other PSSDs deliver much better raw bandwidth numbers, their loss in performance after the SLC cache runs out is much steeper – a potential source of user frustration that the QLC configuration in the 8TB T5 EVO seemingly handles well. The 38C temperature at the end of the process is also quite reasonable given the performance of the unit.
There are a number of storage benchmarks that can subject a device to artificial access traces by varying the mix of reads and writes, the access block sizes, and the queue depth / number of outstanding data requests. We saw results from two popular ones – ATTO, and CrystalDiskMark – in a previous section. More serious benchmarks, however, actually replicate access traces from real-world workloads to determine the suitability of a particular device for a particular workload. Real-world access traces may be used for simulating the behavior of computing activities that are limited by storage performance. Examples include booting an operating system or loading a particular game from the disk.
PCMark 10’s storage bench (introduced in v2.1.2153) includes four storage benchmarks that use relevant real-world traces from popular applications and common tasks to fully test the performance of the latest modern drives:
- The Full System Drive Benchmark uses a wide-ranging set of real-world traces from popular applications and common tasks to fully test the performance of the fastest modern drives. It involves a total of 204 GB of write traffic.
- The Quick System Drive Benchmark is a shorter test with a smaller set of less demanding real-world traces. It subjects the device to 23 GB of writes.
- The Data Drive Benchmark is designed to test drives that are used for storing files rather than applications. These typically include NAS drives, USB sticks, memory cards, and other external storage devices. The device is subjected to 15 GB of writes.
- The Drive Performance Consistency Test is a long-running and extremely demanding test with a heavy, continuous load for expert users. In-depth reporting shows how the performance of the drive varies under different conditions. This writes more than 23 TB of data to the drive.
Despite the data drive benchmark appearing most suitable for testing direct-attached storage, we opt to run the full system drive benchmark as part of our evaluation flow. Many of us use portable flash drives as boot drives and storage for Steam games. These types of use-cases are addressed only in the full system drive benchmark.
The Full System Drive Benchmark comprises 23 different traces. For the purpose of presenting results, we classify them under five different categories:
- Boot: Replay of storage access trace recorded while booting Windows 10
- Creative: Replay of storage access traces recorded during the start up and usage of Adobe applications such as Acrobat, After Effects, Illustrator, Premiere Pro, Lightroom, and Photoshop.
- Office: Replay of storage access traces recorded during the usage of Microsoft Office applications such as Excel and PowerPoint.
- Gaming: Replay of storage access traces recorded during the start up of games such as Battlefield V, Call of Duty Black Ops 4, and Overwatch.
- File Transfers: Replay of storage access traces (Write-Only, Read-Write, and Read-Only) recorded during the transfer of data such as ISOs and photographs.
PCMark 10 also generates an overall score, bandwidth, and average latency number for quick comparison of different drives. The sub-sections in the rest of the page reference the access traces specified in the PCMark 10 Technical Guide.
Booting Windows 10
The read-write bandwidth recorded for each drive in the boot access trace is presented below.
Low queue depth random accesses usually get better performance with DRAM-equipped SSDs, even when they are behind SATA controllers. Hence, it is not much of a surprise to see the T5 EVO outperform even the T7 Shield in this benchmark.
The read-write bandwidth recorded for each drive in the sacr, saft, sill, spre, slig, sps, aft, exc, ill, ind, psh, and psl access traces is presented below.
The T5 EVO fares much better for the creative workloads, as many of the traces are sequential in nature. Photoshop does see the T5 EVO getting pushed out to the bottom of the list, but in general, the presence of DRAM on the T5 EVO’s main board helps deliver good results.
The read-write bandwidth recorded for each drive in the exc and pow access traces is presented below.
The spreadsheet workload appears to be largely sequential in nature, allowing the T5 EVO to take top honors. Performance falls off in the presentation workload, whose access patterns favor faster host interfaces.
The read-write bandwidth recorded for each drive in the bf, cod, and ow access traces is presented below.
Gaming workloads are usually heavy on sequential reads, placing the T5 EVO in the middle of the pack; only units with faster host interfaces outperform it.
File Transfer Workloads
The read-write bandwidth recorded for each drive in the cp1, cp2, cp3, cps1, cps2, and cps3 access traces is presented below.
These workloads mirror our DAS test suite, and the results are largely the same. Faster host interfaces backed by high-performance SSDs have a significant advantage. So, it is no surprise that the T5 EVO and the 5 Gbps thumb drive make up the bottom portion of the graphs.
PCMark 10 reports an overall score based on the observed bandwidth and access times for the full workload set. The score, bandwidth, and average access latency for each of the drives are presented below.
The use of a DRAM-enabled SATA SSD platform allows the Samsung T5 EVO to place itself in the middle of the pack, ahead of even units like the DRAM-less T7 Shield. A faster host interface allows the X10 Pro to outperform the T5 EVO, and the two high-performance SSDs behind USB 3.2 Gen 2×2 bridge chips make up the top two spots.
The preceding sections brought out the performance of the Samsung T5 EVO in various real-world access traces as well as synthetic workloads, along with the performance consistency for those cases. Power users may also be interested in performance consistency under worst-case conditions, as well as drive power consumption. The latter is particularly important when the drive is used with battery-powered devices such as notebooks and smartphones. Pricing is also an important aspect. We analyze each of these in detail below.
Worst-Case Performance Consistency
Flash-based storage devices tend to slow down in unpredictable ways when subject to a large number of small-sized random writes. Many benchmarks use that scheme to pre-condition devices prior to the actual testing in order to get a worst-case representative number. Fortunately, such workloads are uncommon for direct-attached storage devices, where workloads are largely sequential in nature. Use of SLC caching, as well as firmware caps to prevent overheating, may cause a drop in write speeds when a flash-based DAS device is subject to sustained sequential writes.
Our Sequential Writes Performance Consistency Test configures the device as a raw physical disk (after deleting configured volumes). A fio workload is set up to write sequential data to the raw drive with a block size of 128K and iodepth of 32 to cover 90% of the drive capacity. The internal temperature is recorded at either end of the workload, while the instantaneous write data rate and cumulative total write data amount are recorded at 1-second intervals.
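A fio job file matching this description might look like the sketch below. The physical drive index and the Windows I/O engine are assumptions for illustration, not the exact job file used in our testing:

```ini
; Hedged sketch of the sequential-write conditioning job described above.
; The drive index (PhysicalDrive2) and ioengine choice are assumptions.
[global]
ioengine=windowsaio
direct=1
thread=1

[seq-write-90pct]
filename=\\.\PhysicalDrive2
rw=write
bs=128k
iodepth=32
size=90%
```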
|Sequential Writes to 90% Capacity – Performance Consistency
The T5 EVO is able to sustain maximum write speeds (around 440 MBps) for more than three hours before the direct-to-QLC writes start. In the latter state, the write speeds drop down to around 60 MBps. However, limiting the interface speed allows the controller to rapidly fold the data written earlier and free up SLC cache for the incoming data. An empty 8 TB T5 EVO allowed us to write as much as 5.2 TB of data before the direct-to-QLC segment. To be completely honest, it is difficult to find a use-case for a PSSD that involves writing of more than 5.2 TB of data in one shot.
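A quick back-of-the-envelope check confirms these figures are self-consistent (all numbers taken from this review):

```python
# Sanity check on the figures above (numbers from this review).
seq_write_mbps = 440     # sustained write speed before the QLC cliff
cached_write_tb = 5.2    # data absorbed before direct-to-QLC writes began

# TB -> MB, divided by MB/s, gives seconds at full speed.
hours_at_full_speed = cached_write_tb * 1e6 / seq_write_mbps / 3600
print(round(hours_at_full_speed, 1))  # prints 3.3, matching "more than three hours"
```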
Bus-powered devices can configure themselves to operate within the power delivery constraints of the host port. While Thunderbolt ports are guaranteed to supply up to 15W for client devices, USB 2.0 ports are guaranteed to deliver only 2.5W (500mA @ 5V). In this context, it is interesting to have a fine-grained look at the power consumption profile of the various external drives. Using the ChargerLAB KM003C, the bus power consumption of the drives was tracked while processing the CrystalDiskMark workloads (separated by 5s intervals). The graphs below plot the instantaneous bus power consumption against time, while singling out the maximum and minimum power consumption numbers.
|CrystalDiskMark Workloads – Power Consumption
The T5 EVO has a peak power consumption of 4.61 W. The spike in power numbers between the 700s and 835s timestamps points to rapid folding in progress, allowing the SLC cache to be regained.
The flash market is currently experiencing a supply glut, with pricing to the advantage of the consumers. This is reflected even in SSDs such as the 870 QVO. The 8TB version of the 870 QVO is available for purchase at less than $350. The inclusion of a bridge chip and the design of a smaller enclosure with additional thermal protection can be expected to add around $50 – $75 to that price. Charitably keeping this number at $100, we would have expected the 8TB T5 EVO to retail around $450. However, Samsung is currently selling the PSSD for $650. This is absurd pricing for a QLC PSSD with speeds limited to 5 Gbps, even if we grant that no other PSSD is currently being sold in the market at that capacity point.
The 2TB and 4TB variants are being sold at $190 and $350 respectively. The 2TB number is palatable, but things go awry with the 4TB version. At that capacity point, PSSDs delivering much better performance with TLC flash and faster host interfaces are available at much lower prices. It is not clear whether the T5 EVO presents anything unique at the 2TB and 4TB capacity points to deserve that premium.
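The $/TB picture at the quoted launch prices works out as follows:

```python
# $/TB at the three capacity points, using the launch prices quoted above.
prices_usd = {2: 190, 4: 350, 8: 650}  # capacity (TB) -> price (USD)
per_tb = {tb: usd / tb for tb, usd in prices_usd.items()}
print(per_tb)  # prints {2: 95.0, 4: 87.5, 8: 81.25}
```

The per-TB cost does improve with capacity, but each tier still has to justify itself against faster TLC-based competition at the same price point.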
While the value proposition is clearly not in favor of the T5 EVO at the current price points, the product definitely deserves credit for creating a new category. QLC is notorious for its slow direct write speeds, an aspect that is easy to encounter in direct-attached storage workloads. By limiting the interface to 5 Gbps (USB 3.2 Gen 1) speeds, the SSD controller is able to fold written data to the QLC segment and free up the SLC cache for the incoming data simultaneously. This allows the end user / application to enjoy the benefits of a much larger SLC cache than what is actually available.
Most PSSDs in the market employ a DRAM-less platform to bring down power consumption and cost. Interestingly, the T5 EVO adopts a board with LPDDR4 RAM for the flash translation layer. This gives the T5 EVO an edge over DRAM-less SSDs (even NVMe ones) for some workloads.
On the technical and product engineering front, the T5 EVO portable SSD is a clear winner, particularly at the 8TB capacity point. The 460 MBps cap may not matter for the wide range of use-cases currently served by portable hard drives. However, the insane pricing throws a spanner in the works. Hopefully, Samsung will take the feedback into consideration and price the T5 EVO at a slight premium over the 870 QVO.