WinRAR has been a staple in the PC community for decades, offering the ability to compress data into compact files for easier transfer. With that, however, comes the occasional security concern, and today we have an example of just such a situation. Reports have begun to circulate, highlighting an issue in all but the latest edition of WinRAR that enables software to execute without the Windows Mark of the Web (MotW) security warning pop-ups.
If you aren't familiar with MotW warnings, you might recognize them as the pop-ups that caution you against running strange software from the internet. The warning typically includes a blurb explaining that it's dangerous to execute applications downloaded from unfamiliar sources, along with options to either continue regardless or cancel the operation entirely. This system can apparently be skipped over entirely in older versions of WinRAR, making for a greater security risk.
The official release notes for version 7.11 confirm that this vulnerability has been nullified and go on to detail the fixed issue. The notes state, "if symlink pointing at an executable was started from WinRAR shell, the executable Mark of the Web data was ignored." As long as you update to the latest version, this security flaw shouldn't be an issue.
WinRAR confirmed that the security flaw was identified by Shimamine Taihei of Mitsui Bussan Secure Directions, Inc. The concern was reported directly to the WinRAR team, who were able to tackle the issue and resolve it by the time version 7.11 was released. In the report, the issue was described as follows: "If a symbolic link specially crafted by an attacker is opened on the affected product, arbitrary code may be executed."
It's important to note that while this security flaw requires users to manually open links to initiate potential attacks, it still increases the security risk by skipping Windows' pop-up warning system entirely. The MotW system is just an extra layer that warns users before they execute suspicious code, but that warning can be a crucial step in stopping the spread of unwanted software. It's best to update WinRAR to the latest version to avoid any potential mishaps going forward.
In a surprising turn of events, an Nvidia engineer pushed a fix to the Linux kernel, resolving a performance regression seen on AMD integrated and dedicated GPU hardware (via Phoronix). Turns out, the same engineer inadvertently introduced the problem in the first place with a set of changes to the kernel last week, attempting to increase the PCI BAR space to more than 10TiB. This ended up incorrectly flagging the GPU as limited and hampering performance, but thankfully it was quickly picked up and fixed.
In the open-source paradigm, it's an unwritten rule to fix what you break. The Linux kernel is open-source and accepts contributions from everyone, which are then reviewed. Responsible contributors are expected to help fix issues that arise from their changes. So, despite their rivalry in the GPU market, FOSS (Free and Open Source Software) is an avenue that bridges the chasm between AMD and Nvidia.
The regression was caused by a commit that was intended to increase the PCI BAR space beyond 10TiB, likely for systems with large memory spaces. This indirectly reduced a factor called KASLR entropy on consumer x86 devices, which determines the randomness of where the kernel's data is loaded into memory on each boot for security purposes. At the same time, this also artificially inflated the range of the kernel's accessible memory (direct_map_physmem_end), typically to 64TiB.
In Linux, memory is divided into different zones, one of which is ZONE_DEVICE, used for device memory such as that of a GPU. The problem here is that when the kernel initialized zone device memory for Radeon GPUs, an associated variable (max_pfn) representing the total RAM addressable by the kernel would artificially increase to 64TiB.
Since the GPU likely cannot access the entire 64TiB range, the kernel's dma_addressing_limited() check would return true. This check effectively restricts the GPU to the DMA32 zone, which offers only 4GB of memory and explains the performance regressions.
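To make the failure mode concrete, here is a minimal Python model of the check described above. The function name, mask widths, and memory sizes below are illustrative assumptions; the real logic lives in the kernel's dma_addressing_limited() and its callers.

```python
# Simplified, illustrative model of the regression (not the real kernel code).
TIB = 1 << 40

def dma_limited(device_dma_bits: int, max_addressable_bytes: int) -> bool:
    """A device is flagged DMA-limited if it cannot reach all addressable memory."""
    device_reach = 1 << device_dma_bits
    return device_reach < max_addressable_bytes

# Before the buggy commit: the addressable range matched real RAM, e.g. 64 GiB.
# A hypothetical GPU with a 44-bit DMA mask (16 TiB reach) is not limited.
assert not dma_limited(44, 64 * (1 << 30))

# After the commit: the range ballooned to 64 TiB, so the same GPU was flagged
# as limited and confined to the 4 GB DMA32 zone, causing the slowdown.
assert dma_limited(44, 64 * TIB)

DMA32_ZONE_BYTES = 4 * (1 << 30)  # the fallback zone the GPU got restricted to
```

The fix restores a correctly sized addressable range, so the comparison no longer misfires for GPUs that can, in fact, reach all of the system's actual RAM.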
The good news is that this fix should be implemented as soon as the pull request lands, right before the Linux 6.15-rc1 merge window closes today. With a general six to eight week cadence before new Linux kernels, we can expect the stable 6.15 release to be available around late May or early June.
The 3D printing community has seemingly been obsessed with overgrown maximum build volume printers. The petite Sovol Zero is a refreshing change from these backbreaking behemoths. It's a Core XY mini based on the open-source Voron 0 design, which we printed, built, and reviewed a few years ago. Unlike the original Voron, the Sovol Zero is a mostly assembled, plug-and-play machine.
Sovol has declared this to be the fastest 3D printer currently on the market, with acceleration rates up to 40,000mm/s². The profiles included with it have impressive speeds that are much faster than my Bambu Lab. However, 3D printers can’t go faster than physics, so the speeds this – or any – printer can hit are largely dependent on the size and shape of the model. For example, on the Cute Flexi Octopus I printed out, the printer barely pushed past 100mm/s due to all the tiny parts.
It takes full advantage of its enclosed build with a hotend that can reach 350 degrees Celsius and a build plate that can reach 120 degrees Celsius; there's no need for an active chamber heater on a machine this size. This makes printing in technical filaments a breeze. You can check the chamber temperature on the Mainsail screen, but not on the Zero's tiny interface.
The Zero has a retail price of $499 and is on sale for $429, which is quite a bargain in both money and build time compared with the current print-it-yourself $694 Voron V0.2 kit. It also has a slightly larger 152 mm build plate, putting it closer in size to Bambu's A1 Mini and the Prusa Mini, both of which have 180 x 180 mm build plates.
The Sovol Zero is a delightful machine, but rather niche and with an odd mix of old and new tech. Performance-wise, it’s right up there with the best 3D printers we tested, but it narrowly fell short of making the list. My only true complaint would be the noisy fans and stepper motors that sound like happy droids whistling while they work. But is that really a negative?
The Sovol Zero comes mostly pre-assembled, with the screen, spool holder and filament runout sensor bagged separately. Also in the box are a paper copy of the manual, Wi-Fi antenna, USB drive with a copy of OrcaSlicer and calibration prints, power cord, scraper, nozzle cleaner, flush cutters, tool kit, PTFE tubing, a fan cover for the filter fan, a spare silicone brush, and spare brass and hardened steel nozzles.
According to rumors, the Sovol Zero was almost called the Sovol 08 Mini, but thankfully, someone talked the company out of it, and the Zero was allowed to stand on its own. If you’re curious about the name, the Zero is inspired by Voron’s open source Voron 0 design – technically the Voron 0.2 – which has a slightly smaller 120 x 120 x 120 mm build volume.
The Zero is like a classic muscle car: simple, heavy, and loud. There are not a lot of plastic parts. It's mostly made of steel plates and glass panels, with some iconic Sovol Blue injection-molded accents. It weighs in at a hefty 30 pounds, way more than a typical bed slinger. It probably needed all the weight to keep it from vibrating right off your table because, like a muscle car, the tiny Zero hauls booty. It has super squishy, shock absorbing feet, and Klipper to cancel out the vibrations.
The tool head has a dual-gear extruder with the ubiquitous Sovol wheel flipped around to face the rear. It has a “normal” sized nozzle that looks like a standard V6, but Sovol is offering a drop-in replacement with the ceramic heater and thermistor pre-assembled for $36.99, or $11.99 as a kit. Sovol rates the nozzle flow at 50mm³/s, which should be able to keep up with its claimed 40,000mm/s² acceleration.
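As a sanity check on that flow figure, volumetric throughput is simply layer height times line width times print speed. A quick sketch (the 0.42 mm line width and 500 mm/s speed are typical assumptions, not Sovol's official profile numbers):

```python
def required_flow(layer_height_mm, line_width_mm, speed_mm_s):
    """Volumetric flow (mm^3/s) needed to extrude at a given print speed."""
    return layer_height_mm * line_width_mm * speed_mm_s

flow_at_500 = required_flow(0.2, 0.42, 500)   # 42.0 mm^3/s
max_flow = 50.0                               # Sovol's rated nozzle flow

# The rated 50 mm^3/s nozzle can sustain ~500 mm/s at a 0.2 mm layer height.
assert flow_at_500 <= max_flow

def max_speed(layer_height_mm, line_width_mm, flow=max_flow):
    """Top sustainable speed for a given layer/line combination."""
    return flow / (layer_height_mm * line_width_mm)
```

Thicker layers or wider lines eat into that budget quickly, which is why high-flow hotends matter more for big, chunky prints than for fine-detail models.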
The 3.5-inch minimalist screen is a throwback to previous generations, with a monochrome display and selector knob. It's quite primitive compared to what we expect on modern printers, but honestly, I don't use it much, as Klipper lets me use Mainsail to send files from my PC.
The X and Y axes run on linear rails, and the Z axis runs on twin linear rails with two lead screws belted to a single stepper motor. The belts keep everything in sync.
For bed leveling, the Zero uses both a pressure sensor built into the hotend (which taps) and an eddy current sensor that zooms over the plate to scan it. It also has accelerometers in the tool head and the bed for Klipper’s vibration compensation, which reduces unsightly ringing artifacts caused by vibration.
A built-in camera allows you to monitor your prints from Klipper’s Mainsail screen. It’s not the best camera for capturing timelapses, but it will let you see how your prints are doing. Klipper also makes sending files extremely easy over WiFi without needing to talk to a Cloud server, plus there’s a port for a USB drive. The machine can also be used with the Obico App for remote monitoring and spaghetti detection.
The Sovol Zero has an abundance of cooling fans. There is a large 5020 parts cooling fan in the toolhead, a massive auxiliary cooling fan in the rear of the printer, and a filtered cavity heat dissipation fan on the right side. This fan requires a magnetic cover outside the case when running high-temperature filaments like ASA, ABS, or PC.
The motion system of the Sovol Zero may be noisy, but you’ll never hear it, given the complete cacophony of all the fans listed above. This may be the loudest, fastest printer we’ve seen since the FLSUN S1 Pro. The stepper motors are also quite chirpy and sound like an astromech complaining about how you haven’t changed your primary buffer panels like it asked - if you’re the nerdy kind, you might not mind the noise.
The Sovol Zero comes almost fully assembled. The Wi-Fi antenna screws onto the right side of the printer, along with the filament runout sensor. The spool holder is screwed into the rear. In all, there are only three screws. The screen slots into the bottom front after attaching the ribbon cable.
Bed leveling and Z offset are automatic and done at the beginning of every print. The eddy current sensor allows the scanning of the bed in a matter of seconds and does an excellent job.
Loading filament is very simple and easier than with our Voron. Simply place the spool into the side-mounted rack and feed the plastic into the reverse Bowden tube until it reaches the hotend.
The factory set the filament load temperature to 250 degrees Celsius, which is fine for most filaments. For running higher temperature materials like PC, ABS, ASA, and Nylon, this can be adjusted in the Macros.cfg in Klipper. We set it to 300 degrees Celsius to help clear more stubborn filaments.
Reverse the process to change colors or remove the filament.
Sovol included a copy of OrcaSlicer, a free third-party slicer based on the open-source PrusaSlicer and Bambu Studio. The included profile worked great for everything except TPU, which required some tweaking.
The Sovol Zero comes with a sample coil of white PLA, which I didn’t bother using. If you want more colors and materials like silks and multicolor filaments, you should check out our guide to the best filaments for 3D printing for suggestions.
Speed Benchys are tricky, as you're often judging someone's slicer skills rather than the printer. The Zero came with a pre-sliced and absolutely eye-popping eight-minute-and-27-second Benchy that printed a bit wobbly, but quite well for an under-10-minute print. When I tried to replicate it, I got a decent-looking 14-minute-and-49-second boat with slightly rough layers in the middle. There was no sign of ringing and just a few stray wisps. My boat was printed in Polymaker Red PLA, and the Sovol-sliced boat is in Creality Blue PLA.
I should briefly touch on the paradox of superfast printers: they are limited by physics. No printer can hit its top speed immediately. It has to accelerate to that speed, then slow down when it has to corner. The flexi Cute Octopus I printed reminded me of that - because no matter how I sliced it, I couldn’t get it to print faster than around 85 minutes.
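The math behind that paradox is straightforward kinematics: accelerating from rest, a print head covers d = v²/2a before reaching speed v. A quick sketch using the Zero's claimed figures:

```python
def distance_to_speed(v_mm_s, a_mm_s2):
    """Distance covered while accelerating from rest to v: d = v^2 / (2a)."""
    return v_mm_s ** 2 / (2 * a_mm_s2)

# Using the Zero's claimed 40,000 mm/s^2 and an assumed 500 mm/s travel speed:
d = distance_to_speed(500, 40_000)   # 3.125 mm just to get up to speed
min_move = 2 * d                     # a move must be ~6.25 mm long to ever hit 500 mm/s
```

Tiny features like flexi-octopus tentacles are far shorter than that minimum, so the head spends nearly all its time accelerating and decelerating, and the print time barely budges no matter how high the speed limits are set.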
Looking at the speed charts, it didn't make any sense. Then I figured out that the charts showed me the estimated speed, not the actual speed. PrusaSlicer is able to show actual speed, so I sliced the Cute Octopus for my MK4, a machine with less than half the acceleration rate, and found that it could print it in 82 minutes. Below is the "actual speed" view, and you can see the printer is constantly revving up and slowing down.
The flexi Cute Octopus is an excellent print to test bed adhesion, plus I can give them away to kids when I travel to festivals. I was able to fit two Cute Octopi on the bed, and they printed in two hours and 58 minutes. I used a 0.2 mm layer height and PLA default speed settings, with a max speed of 500mm/s on the infill that it never reached. This was printed in Polymaker Red PLA.
To test out the printer’s maximum build size, I printed this Floppy Flounder in PETG. It took five hours and 30 minutes, using a 0.2mm layer height, 3 walls and 10% infill. It printed great and was completely flexible with very little stringing. The top surface is a little rough, but that can be tuned by adjusting the flow or top speed settings. This was printed in Translucent Blue PETG from ProtoMaker, a company that sadly is no longer in business.
The Zero did a good job with flexible filament. I gave it a barrel-shaped can holder that printed fairly well. It could have used some support around the bottom, but that’s more a model problem than a printer issue. The layers are smooth and even, though there are a few rough spots that look like a clog was trying to happen. This could easily be addressed by tuning the TPU profile. This printed in six hours and 13 minutes, using a 0.2 mm layer height and OrcaSlicer’s default settings. The material is Polymaker Red TPU.
Since the Sovol Zero does high temperature materials very well, I printed a handful of cable organizers in Jet Black Prusament ASA. All the clamps fit on one plate, which printed in 48 minutes and 55 seconds. The print is professional looking and strong with no visible layer lines.
This modular ClampDock is one of my favorite things to print. It currently holds my headphones to the edge of my desk. It needs to be strong, so it's printed out of ABS, PC, and TPU. It was originally going to be all PC, but the hinge wouldn’t open, so I switched to ABS, which the Zero had an easier time printing. This was printed with a 0.2 mm layer height, 5 walls, and a slowed first layer for good adhesion. It took three hours and 10 minutes to print. Printed (mostly) in Polymaker PolyMax PC.
The Sovol Zero is a compact Core XY 3D printer built like a tank with endless amounts of horsepower. Its acceleration rate of 40,000mm/s² is backed by a nozzle flow rate of up to 50mm³/s. This is all very impressive, but you have to remember that physics exists, and 3D printers need time to ramp up to those speeds. The speed of your 3D printer will always be throttled by the size and shape of your model.
Even so, the models we test printed had excellent quality at speed, even if we didn’t set any records. The Zero’s enclosure worked very well, so high-temperature filaments weren’t a problem.
I’m a big fan of companies that leave Klipper alone and provide a profile for regular OrcaSlicer. These tools are more than complete on their own, and this hopefully saved Sovol money by not having to develop its own code and software.
Currently on sale for $429, the Zero is worth the price when you consider the quality of the build. This is a great printer if you want something like a Voron 0, but don’t want to build it yourself. If you want a regular-sized Core XY that’s super affordable, I’d recommend you check out the Elegoo Centauri Carbon. Beginners looking for bargain-priced color printers should check out the Bambu Lab A1 Mini, our favorite pick for beginners who want a little color in their life. It’s only $389 with a four-color AMS unit.
MORE: Best 3D Printers
MORE: Best Budget 3D Printers
MORE: Best Resin 3D Printers
The Shenzhen 8K UHD Video Industry Cooperation Alliance, a group made up of more than 50 Chinese companies, just released a new wired media communication standard called the General Purpose Media Interface or GPMI. This standard was developed to support 8K and reduce the number of cables required to stream data and power from one device to another. According to HKEPC, the GPMI cable comes in two flavors — a Type-B that seems to have a proprietary connector and a Type-C that is compatible with the USB-C standard.
Because 8K has four times the pixels of 4K and 16 times the pixels of 1080p, GPMI is built to carry a lot more data than other current standards. There are other variables that impact required bandwidth, of course, such as color depth and refresh rate. The GPMI Type-C connector is set to have a maximum bandwidth of 96 Gbps and deliver 240 watts of power. That bandwidth is more than double the 40 Gbps limit of USB4 and Thunderbolt 4, while the power limit matches that of the latest USB Type-C connectors using the Extended Power Range (EPR) standard.
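The pixel ratios, and a rough sense of why 96 Gbps is enough headroom, check out with a little arithmetic. The 60 Hz, 10-bit RGB format below is an assumed example for illustration, not a GPMI specification figure:

```python
def pixels(w, h):
    return w * h

p8k, p4k, p1080 = pixels(7680, 4320), pixels(3840, 2160), pixels(1920, 1080)
assert p8k == 4 * p4k and p8k == 16 * p1080   # the 4x / 16x ratios above

def raw_gbps(w, h, fps, bits_per_pixel):
    """Uncompressed video bandwidth in Gbps (no link encoding overhead)."""
    return w * h * fps * bits_per_pixel / 1e9

# Assumed format: 8K @ 60 Hz, 10-bit RGB (30 bits per pixel).
bw_8k60 = raw_gbps(7680, 4320, 60, 30)   # roughly 59.7 Gbps
```

Even before compression or chroma subsampling, an uncompressed 8K60 10-bit stream fits within GPMI Type-C's 96 Gbps, with room left for higher refresh rates or deeper color.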
GPMI Type-B beats all other cables, though, with a maximum bandwidth of 192 Gbps and power delivery of up to 480 watts. While still not at a level where you could power an RTX 5090 gaming PC through your 8K monitor, it's more than enough for many gaming laptops with high-end discrete graphics. This will simplify the desk setup for people who prefer a portable gaming computer, since one cable can carry both power and data. Aside from that, the standard also supports a universal control scheme like HDMI-CEC, meaning you could use one remote control for all appliances that connect via GPMI and support the feature.
The only widely used video transmission standards that also deliver power right now are USB Type-C (Alt DP/Alt HDMI) and Thunderbolt connections. However, this is mostly limited to monitors, with many TVs still using HDMI. If GPMI becomes widely available, we’ll soon be able to use just one cable to build our TV and streaming setup, making things much simpler.
Right now, at Amazon, you can find the 15.6-inch Samsung Galaxy AI Book4 Edge laptop for one of its best prices to date. This Snapdragon X Plus-based laptop usually goes for around $899, but right now it's marked down to just $695. So far, no expiration date has been specified for the discount, so we don't know how long it will be available at this price. It is, however, labeled as a limited offer.
We haven't had the opportunity to review the Samsung Galaxy AI Book4 Edge so far, but we're plenty familiar with several Snapdragon-powered Copilot+ machines. Recently, some controversy arose when Surface Laptop 7s were frequently returned due to compatibility issues. If you're considering this laptop, you might want to do a little research and make sure your favorite games and apps run well on Windows-on-Arm systems. On the positive side, once you go Arm, you should enjoy some of the longest battery life available on Windows devices.
Samsung 15-Inch Galaxy AI Book4 Edge: now $695 at Amazon (was $899)
This laptop is built around a Snapdragon X Plus X1P-42-100 processor. It has a 15.6-inch FHD display and relies on a Qualcomm Adreno GPU. It comes with 16GB of LPDDR5X and a 500GB internal SSD for storage.
The main processor driving the Samsung Galaxy AI Book4 Edge is a Snapdragon X Plus X1P-42-100. This CPU has eight cores with a base speed of 3.4 GHz and a single-core boost feature that takes it up to 3.8 GHz. For graphics, it relies on a Qualcomm Adreno GPU which outputs to a 15.6-inch anti-glare display with an FHD resolution of 1,920 x 1,080 pixels.
As far as memory goes, this edition comes with 16GB of LPDDR5X and a 500GB internal SSD is fitted for storage. It has a couple of 2W speakers integrated for audio output, but you also get a 3.5mm audio jack to take advantage of. It has an HDMI 2.1 port for outputting video to a secondary screen and a handful of USB ports, including one USB 3.2 port and two USB4 ports.
It is also worth noting that this price is cheaper than the current offer over at the official Samsung website. If you want to check out this deal for yourself, head over to the Samsung 15-inch Galaxy AI Book4 Edge product page on Amazon US for more information and purchase options.
PC overclocking and hardware expert Roman ‘Der8auer’ Hartung has shared his before-and-after AMD Ryzen 9 9950X3D delidding test results. His conclusion: enthusiasts can expect up to 10% higher performance with direct-die cooling, but at the cost of significantly higher power consumption. Alternatively, Der8auer observed that users could run the delidded CPU at stock settings and enjoy far lower temperatures, around 23 degrees Celsius lower in this case, plus improved efficiency. A comfortable compromise might be found between these sampled extremes.
In the above video, we see Der8auer introduce the AMD Ryzen 9 9950X3D and establish a baseline performance and thermal profile before the delidding operation. Subsequently, he used the same liquid cooler, settings, and benchmarks to see what benefits the delidding process could deliver.
During the delidding process, Der8auer provided some sage advice. He used the still-compatible Delid-Die-Mate Ryzen 7000 device on the 9950X3D. Take your time ‘wiggling’ the IHS, perhaps up to 100 times, “until it falls off by itself,” pleaded the overclocker. An example of a rushed job he shared (reproduced below) should be warning enough for would-be delidders to be patient.
The overclocking expert published two charts with the before and after delidding performance and power consumption on show. In the Cinebench R23 multi-thread tests chart, which we embedded below, you can see a key takeaway: the delidded Ryzen X3D chip could deliver up to 9% better performance in this productivity benchmark. However, it is questionable whether the 73% increase in power consumption would be worth it. Der8auer also tested Counter Strike 2 4K during his video, and provided a similar chart.
If you buy the AMD Ryzen 9 9950X3D but don't feel driven to wring every last ounce of performance out of it, Der8auer notes that the delidded CPU can run much cooler at stock settings. He seemed impressed that his sample could run at 65 degrees Celsius under load, a temperature reduction of 23 degrees Celsius compared to the CPU as shipped from AMD with the IHS ‘octopus’ attached. This modded chip also ran at 290W under load, using about 20W less power than the original chip.
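Putting the two headline numbers together shows why the stock-settings route is attractive. This back-of-envelope calculation uses the percentages reported above from Der8auer's charts:

```python
# Efficiency change implied by the pushed-to-the-limit results:
perf_gain = 1.09    # up to 9% higher Cinebench R23 multi-thread score
power_gain = 1.73   # 73% higher power consumption

perf_per_watt = perf_gain / power_gain            # ~0.63x the stock efficiency
efficiency_drop_pct = (1 - perf_per_watt) * 100   # ~37% worse performance per watt
```

In other words, chasing the last 9% of performance costs roughly a third of the chip's efficiency, while running the delidded chip at stock keeps the same performance at lower power and far lower temperatures.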
If you are interested in more 9950X3D delidding news, we recently retold the hair-raising tale of an ‘amateur’ delidding one of AMD’s best CPUs for gaming using some fishing line and a clothes iron - plus nerves of steel.
Lian Li Industrial Co. Ltd., established in 1983, is a Taiwanese company specializing in manufacturing computer cases, power supplies, and accessories. It's one of the oldest players in the PC market and is known for its focus on aluminum-based designs. Lian Li produces a range of products aimed at both consumer and industrial markets, with the company's offerings including mid-tower and full-tower cases and more compact cases for smaller builds. Amongst consumers and PC enthusiasts, Lian Li's products are recognized for their build quality, modularity, and innovative features, catering to a diverse set of needs in the PC building community.
We focus on the EG1000 Platinum ATX 3.1 PSU to see whether it deserves a spot in our list of best power supplies. This power supply unit partially complies with the ATX 3.1 design guide (the paragraphs related to electrical quality and performance). It is designed to meet the demanding requirements of modern gaming PCs, with its specifications indicating good efficiency and robust power delivery. Featuring fully modular cables with individually sleeved wires, dynamic fan control for optimal cooling, and advanced internal topologies, the EG1000 Platinum aims to provide both reliability and performance. However, behind its long list of features, the highlight of the EG1000 Platinum is the shape of the chassis itself, which forgoes the ATX cuboid shape and standard length.
The Lian Li EG1000 Platinum ATX 3.1 PSU comes in straightforward, effective packaging. The outer box is L-shaped, designed to hint at the PSU's unique shape, while the unit itself is protected during shipping by a nylon pouch and dense packaging foam, ensuring it arrives in pristine condition.
The bundle goes slightly beyond the essentials, including mounting screws, the necessary AC power cable, and a basic manual, as well as a few cable ties and a PCI slot cover for routing external cables.
This power supply features a fully modular design, allowing for the removal of all DC power cables, including the 24-pin ATX connector. The cables are all-black, from connectors to wires, and each one is individually sleeved, contributing to the unit's premium aesthetic and improving cable management options. The 12V-2x6 cable is a minor exception, with the tips of the connectors being blue. There is also a relatively short (300 mm) SATA cable with four consecutive SATA connectors, which should be very useful in smaller cases with arrays of drives. A reusable cable strap holds every single cable.
The ATX, 12V CPU, and PCI Express cables have four preinstalled wire combs. The wire combs can be easily moved to any position across the cable or removed completely should the user wish to.
The Lian Li EG1000 Platinum ATX 3.1 PSU is housed in a unique chassis that measures 182 mm long, considerably longer than the standard ATX dimensions. This increased length is due to the L-shaped design, where the cable connectors are placed horizontally, making the body deeper. The extended length and design might require careful consideration of cable paths in standard cases, as the unit is primarily designed with dual chamber cases in mind.
The EG1000 Platinum ATX 3.1 PSU's aesthetic is minimalist, featuring a smooth and simple matte black paint finish. The left and right sides are entirely covered by large stickers, with the left side displaying purely decorative elements and the right side providing detailed electrical specifications and certifications. The bottom of the unit is equipped with a unique mesh fan finger guard that spans its entire surface, with a badge bearing the company's logo on the lower left corner, contributing to the sleek appearance.
The front side of the PSU is home to the standard on/off switch and AC cable receptacle. The rear is more complex, featuring an internal USB connector hub that allows users to connect multiple devices requiring an internal USB header. This is particularly useful for those with motherboards that have limited USB headers. The modular cable connectors are clearly organized and labelled for easy and error-free connections, although they are not colour-coded.
The Lian Li EG1000 Platinum ATX 3.1 PSU is equipped with a Hong Hua HA1225M12F-Z 120 mm fan, which features a Fluid Dynamic Bearing (FDB) engine. This type of fan is well-regarded for its quality and longevity and is commonly found in high-end power supplies. It has a modest maximum speed of 2000 RPM, which should be sufficient considering the unit's high efficiency.
The Lian Li EG1000 Platinum ATX 3.1 PSU is manufactured by Helly Technology, a relatively young but reputable OEM founded in 2008 in China. Despite its shorter history, Helly Technology has established a solid presence in the power supply market with platforms supporting mid to high-tier products.
The input stage features a modest transient filter with two Y capacitors, one X capacitor, and two filtering inductors, which is somewhat less effective than those found in other high-tier units. Two rectifying bridges on a dedicated heatsink handle the AC voltage input. The Active Power Factor Correction (APFC) setup includes three MOSFETs (PTA28N50) and a diode on the large heatsink across the edge of the PCB, as well as a large inductor and two EPCOS 470 μF capacitors.
In the primary stage, the Lian Li EG1000 utilizes a full-bridge LLC topology with four OSG55R190F MOSFETs from Oriental Semiconductor mounted on a dedicated heatsink. The secondary stage employs eight G013N04G MOSFETs located on the underside of the main PCB to generate the primary 12V rail, while the 3.3V and 5V rails are produced by DC-to-DC circuits. The secondary side capacitors include a mix of Rubycon and Nippon Chemi-Con products, all from Japanese manufacturers known for their high quality.
For the testing of PSUs, we are using high precision electronic loads with a maximum power draw of 2700 Watts, a Rigol DS5042M 40 MHz oscilloscope, an Extech 380803 power analyzer, two high precision UNI-T UT-325 digital thermometers, an Extech HD600 SPL meter, a self-designed hotbox and various other bits and parts.
The Lian Li EG1000 Platinum ATX 3.1 PSU meets the 80Plus Platinum certification standards with a 115 VAC input, if only barely. When tested with a 115 VAC input, this PSU achieves an average nominal load efficiency of 90.4% across its operational range from 20% to 100% of its capacity, increasing to 92.3% with a 230 VAC input. It would not pass the 80Plus Platinum requirements for a 230 VAC input, however, as the half-load efficiency falls well short. Efficiency under very low load conditions is acceptably high.
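For reference, efficiency here is simply DC power delivered divided by AC power drawn from the wall. A minimal sketch with hypothetical readings (not our measured data) shows how narrow the margin can be, given that 80Plus Platinum requires 92% at half load on 115 VAC input (and 94% on 230 VAC):

```python
def efficiency(dc_out_w, ac_in_w):
    """Efficiency in percent: useful DC output over AC input."""
    return dc_out_w / ac_in_w * 100

# Hypothetical half-load readings for a 1000 W unit: 500 W delivered while
# drawing 543 W from the wall (illustrative numbers only).
eta_half = efficiency(500, 543)   # ~92.1%, just over the 115 VAC Platinum bar
```

A few watts of extra draw at the wall is all it takes to slip under a certification threshold, which is why "barely passing" units like this one sit so close to the line.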
The Lian Li EG1000 Platinum ATX 3.1 PSU demonstrates fair thermal performance and acoustic characteristics during room temperature testing. The fan does not activate immediately but starts when the load is just under 100 Watts. As the load increases, the fan speed escalates almost linearly with the load. However, due to the PSU's small proportions and 120 mm fan, internal temperatures are slightly higher than other units in its class. Acoustically, the fan operates quietly only at loads below 300 Watts; beyond that, it becomes noticeable in a quiet environment and keeps getting louder as the load increases.
The Lian Li EG1000 Platinum ATX 3.1 PSU maintains commendable efficiency during hot testing. The unit achieves an average nominal load efficiency of 89.1% with a 115 VAC input and 91% with a 230 VAC input. This indicates some degradation due to the elevated ambient temperature but within reasonable limits. There are signs of thermal stress at maximum load, with greater efficiency degradation.
Even though the ambient temperature is significantly higher, the fan once again does not start immediately, but only once the load climbs a little past 70 Watts. It maintains its linear speed profile but accelerates more rapidly, reaching its maximum speed just below a 900-Watt load. Noise levels stay fairly low when the load is below 300-400 Watts, but the noise output increases rapidly after that point, with the EG1000 becoming very loud at loads above 800 Watts.
The Lian Li EG1000 Platinum's relatively compact size does not do it any favors when it comes to thermal performance. The internal temperatures are on the high side for an 80Plus Platinum-certified unit, although they stay within safe operational limits. There is a small spike at maximum load due to the fan's inability to increase its speed any further, but the final temperatures are far too low to trigger an OTP event.
The Lian Li EG1000 Platinum exhibits fairly good voltage filtering, with the 12V rail showing a maximum ripple of 42 mV, the 5V rail at 22 mV, and the 3.3V rail at 24 mV. The overall voltage regulation is within acceptable limits, with the 12V rail at 0.9%, the 5V rail at 1.4%, and the 3.3V rail at 1.1%. These are not record-setting figures, but the power quality is very good, even for a top-tier product.
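To put those percentages in perspective, they can be converted into absolute voltage windows and compared against the ripple limits commonly cited from the Intel ATX design guide (120 mV on the 12V rail, 50 mV on the minor rails). This is a quick sketch of the arithmetic, not part of the review's test methodology:

```python
# Convert the review's regulation percentages into absolute voltage
# windows and compare measured ripple against the commonly cited
# ATX design guide limits (120 mV on 12V, 50 mV on the minor rails).
RAILS = {
    # rail: (nominal volts, regulation %, measured max ripple in mV)
    "12V":  (12.0, 0.9, 42),
    "5V":   (5.0, 1.4, 22),
    "3.3V": (3.3, 1.1, 24),
}

for rail, (v_nom, reg_pct, ripple_mv) in RAILS.items():
    max_dev = v_nom * reg_pct / 100   # worst-case DC deviation in volts
    limit = 120 if rail == "12V" else 50
    print(f"{rail}: within ±{max_dev:.3f} V, ripple {ripple_mv}/{limit} mV")
```

On the 12V rail, 0.9% regulation works out to a deviation of no more than about 0.108 V, and the 42 mV ripple sits comfortably under the 120 mV limit.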
During our thorough assessment, we evaluate the essential protection features of every power supply unit we review, including Over Current Protection (OCP), Over Voltage Protection (OVP), Over Power Protection (OPP), and Short Circuit Protection (SCP). The OCP results are satisfactory, with the 3.3V and 5V rails reacting at 118%, while the 12V rail is significantly more lax at 137%. The OPP response is also a bit lax, kicking in at 132%.
The Lian Li EG1000 Platinum ATX 3.1 PSU positions itself as a premium power supply unit targeting enthusiasts and high-performance system builders seeking something out of the ordinary and aesthetically superior to most other designs. Its unique L-shaped design caters specifically to dual-chamber chassis, offering a distinctive approach to modularity and cable management. However, this design choice might not resonate with a broader audience, as it deviates from the conventional PSU form factor, potentially limiting its compatibility with standard ATX cases and its appeal to mainstream users. Its unconventional shape, which extends the PSU's length to 182 mm, could pose installation challenges in some cases. That said, the unit is primarily marketed towards cases specifically designed to accommodate this kind of unit.
Build quality and aesthetics are where the EG1000 Platinum stands out from the crowd. Aside from its unique L-shaped chassis, the all-black, fully modular cables featuring individually sleeved wires and pre-installed wire combs are possibly the aesthetic highlight of this unit. The integrated USB hub may be redundant for most users. Still, it can be very useful for PC builders wanting to integrate many devices with a motherboard with only one or two headers available. Lian Li designed the EG1000 Platinum to be elegant and pleasant to look at, not extravagant.
Thermally, the EG1000 Platinum performs adequately but not exceptionally. During both cold and hot testing, the 120 mm Hong Hua fan, while reliable, struggles to maintain lower internal temperatures due to its size and the PSU's compact internal volume. The fan's linear speed profile ensures it ramps up appropriately with increasing load. Still, the unit becomes noticeably loud at higher speeds, which could be a concern for users seeking a quieter system even when heavily loaded. Even with the slight hints of thermal stress at maximum load, the unit manages to stay within safe thermal limits, but it does so at the cost of higher noise levels under heavy loads.
Electrically, the EG1000 Platinum delivers solid performance and power quality. It meets the 80Plus Platinum efficiency requirements, even though it just barely clears the threshold for an input voltage of 115 VAC. The average nominal range efficiency is fairly good, though not outstanding for a Platinum-certified unit. Voltage regulation and ripple suppression are both very good, with the unit delivering stable, clean power under any operating conditions.
Overall, the Lian Li EG1000 Platinum ATX 3.1 PSU is a well-built, reliable power supply with a few caveats. Its unique design and premium components are offset by potential installation challenges and noise issues under heavy load. While it offers good electrical performance and modularity, its appeal may be limited to users with specific chassis requirements. For a price of $175, it provides fair value but may not be the best choice for every PC builder.
MORE: Best Power Supplies
MORE: How We Test Power Supplies
MORE: All Power Supply Content
]]>Nvidia's PhysX and Flow SDKs are now completely open-source under the permissive BSD-3 license. Developers may recall that these libraries have been largely open-source since late 2018, with the exception of the key GPU simulation kernels. Releasing the source code for these kernels paves the way for game developers to integrate custom and highly optimized variations of PhysX and Flow, while the modding community might see this as an opportunity to run legacy PhysX code on unsupported RTX 50 GPUs through compatibility layers.
PhysX is a real-time physics engine that offloads complex calculations to your GPU, capitalizing on its parallel processing and powered under the hood by CUDA. This technology was employed in a handful of older titles from the 2010s; notable examples include Mirror’s Edge, Batman: Arkham Asylum, Metro 2033, and Borderlands 2.
Because most of these games relied on a 32-bit PhysX implementation, and Nvidia discontinued 32-bit CUDA on its Blackwell GPUs, these intricate physics simulations, designed and optimized for parallel computing, now fall back to the CPU, crippling performance. Flow is more specialized and powers fluid simulation mechanics: think fire, gas, and smoke effects.
With PhysX 4.0, Nvidia made the CPU-side simulation source code of PhysX public, but the GPU-side kernels remained proprietary. With only the binaries available, understanding the system's internals and customizing it for specific needs was almost impossible. However, with Nvidia's special GPU acceleration sauce now out in the open, anyone can see, study, modify, and build on these libraries.
We won't be surprised if modders now work to create a 32-bit to 64-bit compatibility layer to enable PhysX support for older titles on Blackwell GPUs. With access to the source code, it's technically possible to decouple PhysX and Flow from CUDA and port the technology to hardware-agnostic platforms like OpenCL/Vulkan to enable support for AMD and Intel processors, but that's much easier said than done.
For the most part, PhysX is a dead technology for games, superseded by alternatives; Unreal Engine 5, for example, uses the new Chaos Physics engine. However, access to PhysX's GPU kernels and the shader simulation code for Flow is likely to have a far-reaching impact on graphics engineering, robotics, architecture and design, animation, and more.
]]>Another Blackwell GPU bites the dust, as the meltdown reaper has reportedly struck a Redditor's MSI GeForce RTX 5090 Gaming Trio OC, with the impact tragically extending to the power supply as well. Ironically, the user avoided third-party cables and specifically used the original power connector, the one that was supplied with the PSU, yet both sides of the connector melted anyway.
Nvidia's GeForce RTX 50 series GPUs face an inherent design flaw where all six 12V pins are internally tied together. The GPU has no way of knowing if all cables are seated properly, preventing it from balancing the power load. In the worst-case scenario, five of the six pins may lose contact, resulting in almost 500W (41A) being drawn from a single pin. Given that PCI-SIG originally rated these pins for a maximum of 9.5A, this is a textbook fire/meltdown risk.
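The arithmetic behind that worst case is easy to verify. A quick sketch using the article's roughly 500 W connector-side figure and PCI-SIG's original 9.5 A per-pin rating:

```python
# Worst-case per-pin current on a 12V-2x6 connector, per the scenario
# described above (five of the six 12V pins lose contact).
CONNECTOR_POWER_W = 500   # article's worst-case draw through the connector
RAIL_V = 12.0
PIN_RATING_A = 9.5        # PCI-SIG's original per-pin rating

total_a = CONNECTOR_POWER_W / RAIL_V   # ~41.7 A in total
balanced_a = total_a / 6               # ~6.9 A per pin when load-balanced
worst_a = total_a                      # all of it through a single pin

print(f"balanced: {balanced_a:.1f} A/pin, worst case: {worst_a:.1f} A "
      f"({worst_a / PIN_RATING_A:.1f}x the rating)")
```

With the load balanced across all six pins, each carries a safe ~6.9 A; with one pin left, it carries the article's ~41 A, well over four times the rating.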
The GPU we're looking at today is the MSI RTX 5090 Gaming Trio OC, which, on purchase, set the Redditor back a hefty $2,900. That's still a lot better than the average price of an RTX 5090 from sites like eBay, currently sitting around $4,000. Despite using Corsair's first-party 600W 12VHPWR cable, the user was left with a melted GPU-side connector, a fate which extended to the PSU.
MSI 5090 Gaming trio OC melted cable (repost with pics) from r/nvidia
The damage, in the form of a charred contact point, is quite visible and clearly looks as if excess current was drawn from one specific pin, corresponding to the same design flaw mentioned above. The user is weighing an RMA for their GPU and PSU, but a GPU replacement is quite unpredictable due to persistent RTX 50 series shortages. Sadly, these incidents are still rampant despite Nvidia's assurances before launch.
With the onset of enablement drivers (R570) for Blackwell, both RTX 50 and RTX 40 series GPUs began suffering from instability and crashes. Despite multiple patches from Nvidia, RTX 40 series owners haven't seen a resolution and are still reliant on reverting to older 560-series drivers. Moreover, Nvidia's decision to discontinue 32-bit OpenCL and PhysX support with RTX 50 series GPUs has left the fate of many legacy applications and games in limbo.
As of now, the only foolproof method to secure your RTX 50 series GPU is to ensure optimal current draw through each pin. You might want to consider Asus' ROG Astral GPUs, as they provide per-pin current readings, a feature absent on reference RTX 5090 models. Alternatively, if you're feeling adventurous, maybe develop your own power connector with built-in safety measures and per-pin sensing capabilities?
]]>AMD processors were instrumental in achieving a new world record during a recent Ansys Fluent computational fluid dynamics (CFD) simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory (ORNL). According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly.
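The quoted speedup follows directly from the reported run times, as a few lines of arithmetic confirm:

```python
# Sanity-checking the speedup from the run times reported above.
cpu_hours = 38.5   # original run on 3,700 CPU cores
gpu_hours = 1.5    # run on 1,024 MI250X accelerators + EPYC CPUs
speedup = cpu_hours / gpu_hours
reduction_pct = (1 - gpu_hours / cpu_hours) * 100
print(f"{speedup:.1f}x faster ({reduction_pct:.0f}% less run time)")
# prints: 25.7x faster (96% less run time)
```

That works out to roughly 25.7 times faster, consistent with the "more than 25 times" claim and the 96% run-time reduction Ansys cited.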
"A new supercomputing record has been set! Ansys, Baker Hughes, and ORNL have run the largest-ever commercial CFD simulation using 2.2 billion cells and 1,024 AMD Instinct GPUs on the world’s first exascale supercomputer. The result? A 96% reduction in simulation run…" — April 4, 2025
Frontier was once the fastest supercomputer in the world, and it was also the first one to break into exascale performance. It replaced the Summit supercomputer, which was decommissioned in November 2024. However, the El Capitan supercomputer, located at the Lawrence Livermore National Laboratory, broke Frontier’s record at around the same time. Both Frontier and El Capitan are powered by AMD GPUs, with the former boasting 9,408 AMD EPYC processors and 37,632 AMD Instinct MI250X accelerators. On the other hand, the latter uses 44,544 AMD Instinct MI300A accelerators.
Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia’s market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.
“By scaling high-fidelity CFD simulation software to unprecedented levels with the power of AMD Instinct GPUs, this collaboration demonstrates how cutting-edge supercomputing can solve some of the toughest engineering challenges, enabling breakthroughs in efficiency, sustainability, and innovation,” said Brad McCredie, AMD Senior Vice President for Data Center Engineering.
Even though AMD can deliver top-tier performance at a much cheaper price than Nvidia, many AI data centers prefer Team Green because of software issues with AMD’s hardware.
One high-profile example was Tiny Corp's TinyBox system, which suffered instability issues with its AMD Radeon RX 7900 XTX graphics cards. The problem was so bad that Dr. Lisa Su had to step in to fix it. And even though it was purportedly fixed, the company still released two versions of the TinyBox AI accelerator: one powered by AMD and the other by Nvidia. Tiny Corp also recommended the more expensive Team Green version, with its six RTX 4090 GPUs, because of its driver quality.
If Team Red can fix the software support on its great hardware, then it could likely get more customers for its chips and get a more even footing with Nvidia in the AI GPU market.
]]>Seasonic, a longstanding leader in power supply technology, is recognized for its expertise in designing and manufacturing high-performance power supply units (PSUs). With a reputation built on delivering durable, efficient, and reliable products, the company caters to a diverse range of users, including professionals with demanding workloads and gaming enthusiasts requiring stable power delivery. Seasonic's distinction lies in its in-house design and manufacturing of PSU platforms, with a strong emphasis on premium components and engineering excellence.
This review takes a close look at the Prime TX-1600 Noctua Edition, a collaborative effort between Seasonic and Noctua. This flagship model in Seasonic’s Prime series combines Seasonic’s proficiency in power supply design with Noctua’s renowned cooling expertise. The unit features titanium-level efficiency combined with very impressive specifications and a power rating of 1600W at 50°C, targeting insatiable power users who are building systems with extreme power demands. Its standout feature includes Noctua’s NF-A12x25 HS-PWM fan to deliver superior cooling performance. The power supply also complies with ATX 3.1 and PCIe 5.1 standards, ensuring compatibility with the latest hardware, including GPUs requiring high transient power. The collaboration is further reflected in the PSU’s design, with Noctua’s brown color found throughout the product. It is an expensive product, but it sets the performance standard for top-tier PC PSUs, making it one of the best power supplies on the market.
The Seasonic Prime TX-1600 Noctua Edition comes packaged in a durable, very long cardboard box featuring a black-and-brown theme. This color scheme departs from Seasonic's usual black-and-silver design for Titanium-rated units, aligning with Noctua’s signature aesthetic. Inside, the PSU is securely enclosed in a nylon pouch and further protected by dense foam inserts to ensure safe transport.
The included bundle is extensive, featuring mounting screws, a 90-degree ATX adapter that doubles as a jump-start tester, an AC IEC C19 power cable, cable ties, five cable straps, cable combs, and a metallic case badge. It is important to note that users must avoid jump-starting the PSU with the adapter connected to a motherboard.
The Prime TX-1600 features unique cables with individually sleeved wires in black and brown, complementing the unit’s aesthetic theme. These cables include two PCIe 5.1 12V-2x6 connectors for modern GPUs, six 6+2 pin PCI Express connectors, and 18 SATA connectors. Notably, two of the SATA connectors are 3.3 V cables, which may be incompatible with older devices and disable hot-swapping functionality.
The Seasonic Prime TX-1600 Noctua Edition features an understated and robust design. Its chassis is coated with self-cleaning satin black paint that resists fingerprints, with decorative embossments adorning the left and right sides, adding texture without compromising the semi-minimalist aesthetic. The PSU also prominently features Noctua’s signature brown in its fan guard plate, which covers most of the unit’s bottom.
The PSU deviates from standard ATX dimensions, with a length of 210mm—considerably longer than the ATX guide limit of 140mm. This extended size necessitates compatibility checks with cases to ensure proper fitment. The fan grille design prioritizes minimizing turbulence noise over aesthetics. Regardless, it seamlessly integrates into the PSU’s overall design.
The rear panel houses the IEC C20 AC power inlet, an on/off switch, and a hybrid fan mode button. Enabling hybrid fan mode allows the PSU to operate passively under lower loads, while disabling it forces the fan to spin at minimal speeds. The front side is filled with cable connectors, labeled for straightforward installation.
Internally, the Prime TX-1600 Noctua Edition is equipped with a Noctua NF-A12x25 HS-PWM 120mm fan. This fan incorporates advanced features such as an SSO2 bearing and Sterrox liquid-crystal polymer blades, which provide excellent thermal performance and reduced noise levels. There is no official datasheet for this particular model, but our measurements revealed a maximum speed of about 2400 RPM.
The PSU’s design is entirely in-house, reflecting Seasonic’s expertise in power supply development. Its internal layout is very dense, with heatsinks that are massive for any PC PSU, let alone one with such high efficiency. The input filtering stage is robust, featuring six Y capacitors, four X capacitors, and two oversized inductors for enhanced EMI suppression.
The Active Power Factor Correction (APFC) stage eliminates rectifying bridges in favor of four rectifier MOSFETs mounted on large heatsinks, improving efficiency. The APFC includes four additional MOSFETs, two diodes, and three enormous Nippon Chemi-Con 820μF capacitors, forming an interleaved topology for maximum reliability and efficiency.
The primary inversion stage utilizes four Infineon 60R080P7 MOSFETs in a full-bridge LLC topology. These are mounted on dedicated heatsinks just before the dual main transformers. On the secondary side, 16 MOSFETs split across two arrays generate the primary 12V rail. The DC-to-DC circuits are on a vertical daughterboard parallel to the connector board and produce the 3.3V and 5V rails. All capacitors, including the secondary-stage components, are supplied by Nippon Chemi-Con, currently one of the most reputable manufacturers. Despite the dense layout, the build quality is immaculate.
For PSU testing, we use high-precision electronic loads with a maximum power draw of 2700 Watts, a Rigol DS5042M 40 MHz oscilloscope, an Extech 380803 power analyzer, two high-precision UNI-T UT-325 digital thermometers, an Extech HD600 SPL meter, a self-designed hotbox, and various other bits and parts.
During cold testing, the Seasonic Prime TX-1600 Noctua Edition demonstrated spectacularly high efficiency, meeting both the 80Plus Titanium and Cybenetics Titanium certification requirements with a 115 VAC input, missing the 80Plus Titanium certification requirements with an input voltage of 230 VAC by a hair at maximum load. It achieved an average nominal load efficiency of 93.1% at 115 VAC and 94.8% at 230 VAC. The efficiency peaked at approximately 40% load and remained fairly stable across the entire nominal load range (10–100%). At very low loads, the PSU maintained excellent efficiency levels, surpassing typical standards for advanced PC PSUs.
The Noctua NF-A12x25 HS-PWM fan remained inactive until the load exceeded 800 watts, allowing the PSU to operate passively under lower power conditions. When the fan did engage, it spun at low speeds and generated minimal noise. Even under maximum load, the fan did not go anywhere near its full rotational speed, maintaining a fairly quiet operation. Thermal performance during cold testing was notable, with internal temperatures staying very low despite the PSU’s high power output. The effects of the designer’s generous heatsinks and Noctua’s advanced fan are evident in this unit’s thermal and acoustic performance.
During hot testing, the Seasonic Prime TX-1600 Noctua Edition exhibited a marginal drop in efficiency compared to cold conditions. The unit achieved an average nominal load efficiency of 92.9% at 115 VAC and 94.6% at 230 VAC, reflecting a slight decrease of 0.2% from cold testing results. This efficiency drop is minimal, with the PSU virtually completely unfazed by the >20 degree Celsius temperature increase.
The Noctua NF-A12x25 HS-PWM fan activated slightly earlier during hot testing, starting operation at approximately 650 watts compared to 800 watts in room temperature conditions. The fan speed increased progressively with load, but even then it never reached its maximum rotational speed. Acoustic performance was outstanding, with noise levels remaining controlled and within a tolerable range throughout testing. Thermal performance was exceptional, with operating temperatures remaining extremely low for a PSU with such a high power output. The internal components were kept well within safe limits, with no indication of thermal stress.
The Seasonic Prime TX-1600 Noctua Edition exhibits otherworldly electrical stability and power quality, setting the standard for a top-tier PC PSU. The 12V rail maintains regulation at 0.4%, while the 5V and 3.3V rails hold at 0.8% and 0.7%, respectively. Ripple suppression is particularly noteworthy, with maximum ripple levels measured at 20 mV on the 12V rail, 14 mV on the 5V rail, and 14 mV on the 3.3V rail. These would be top-class results for any PC PSU, which the Prime TX-1600 achieves while outputting 1600 watts at an ambient temperature above 45 degrees Celsius.
During our thorough assessment, we evaluate the essential protection features of every power supply unit we review, including Over Current Protection (OCP), Over Voltage Protection (OVP), Over Power Protection (OPP), and Short Circuit Protection (SCP). All protection mechanisms were activated and functioned correctly during testing. The OCP trigger thresholds are set at 128% for the 3.3V rail, 132% for the 5V rail, and 126% for the 12V rail. The OPP is set to engage at 128%, aligning with expectations for an ATX 3.1-compliant power supply.
The Seasonic Prime TX-1600 Noctua Edition sets itself apart as a meticulously engineered power supply unit, showcasing a unique collaboration between Seasonic and Noctua. The design of the unit reflects this partnership, with a robust black chassis accented by Noctua’s signature brown, creating a distinctive aesthetic that some will love and some will hate. The inclusion of individually sleeved black-and-brown cables further complements the theme. Aesthetics are typically subjective, but we believe the unit will be a great visual match if other Noctua products are present in the system. Internally, the PSU features a densely packed layout with oversized heatsinks, premium Nippon Chemi-Con capacitors, and advanced circuit design, ensuring both durability and performance.
In terms of efficiency and electrical performance, the Prime TX-1600 Noctua Edition delivers industry-leading results. The unit meets 80Plus Titanium and Cybenetics Titanium certification requirements, achieving an average nominal load efficiency of 94.8% at 230 VAC during cold testing and maintaining an impressive 94.6% in hot conditions. At 115 VAC, it maintains similarly high efficiency, averaging 93.1% during cold testing and 92.9% in hot conditions. Voltage regulation is exceptional, with deviations of just 0.4% on the 12V rail and ripple levels as low as 20 mV, setting a high standard for power quality.
The PSU’s thermal and acoustic performance is equally impressive. Much of that success can be attributed to Noctua’s NF-A12x25 HS-PWM fan and the special grille design, but the massive heatsinks and top-tier components deserve credit as well. This advanced cooling design ensures minimal turbulence noise while effectively maintaining very low operating temperatures with little or no airflow, even under full load and elevated ambient conditions. Even when the fan does start – and it takes quite a load to make it start at all – it remains remarkably quiet at higher loads, never reaching its maximum speed during testing.
At a retail price of $570, the Seasonic Prime TX-1600 Noctua Edition represents a significant investment, but it justifies this cost by setting performance benchmarks in nearly every category. Its combination of cutting-edge design, unparalleled efficiency, superior electrical stability, and remarkable thermal and acoustic characteristics makes it a standout option for users who demand uncompromising quality. For professionals and enthusiasts building systems with extreme power requirements, this PSU is a definitive choice, offering a balance of innovation, performance, and reliability that is difficult to match.
MORE: Best Power Supplies
MORE: How We Test Power Supplies
MORE: All Power Supply Content
]]>Yesterday, Microsoft unveiled WHAMM, a generative AI model for real-time gaming, demonstrated with the 28-year-old classic Quake II. The interactive demo responds to user inputs via controller or keyboard, though the frame rate barely hangs in the low to mid-teens. Before you grab your pitchforks, Microsoft emphasizes that the focus should be on analyzing the model's quirks, not judging it as a gaming experience.
WHAMM, which stands for World and Human Action MaskGIT Model, is an update to the original WHAM-1.6B model launched in February. It serves as a real-time playable extension with faster visual output. WHAM uses an autoregressive model where each token is predicted sequentially, much like LLMs. To make the experience real-time and seamless, Microsoft transitioned to a MaskGIT-style setup in which all tokens for an image are generated in parallel, removing the sequential dependency and reducing the number of forward passes required.
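The difference between the two decoding styles can be sketched in a few lines. This is a toy illustration, not Microsoft's code; the token count and the eight-step cosine schedule (borrowed from the original MaskGIT paper) are assumptions:

```python
import math

N_TOKENS = 360  # hypothetical token count for a single frame

def autoregressive_passes(n_tokens):
    # Each token is conditioned on all previous ones, so generating a
    # frame takes n sequential forward passes.
    return n_tokens

def maskgit_passes(n_tokens, steps=8):
    # MaskGIT-style decoding: start fully masked, predict every token in
    # parallel on each pass, and keep the most confident ones following a
    # cosine schedule, so the frame finishes in a fixed number of passes.
    masked = n_tokens
    passes = 0
    for t in range(1, steps + 1):
        masked = int(n_tokens * math.cos(math.pi / 2 * t / steps))
        passes += 1
    assert masked == 0  # everything unmasked after the final pass
    return passes

print(autoregressive_passes(N_TOKENS), "vs", maskgit_passes(N_TOKENS))
```

With 360 tokens per frame, the autoregressive approach needs 360 sequential passes, while the masked-parallel approach finishes in 8, which is the kind of reduction that makes real-time frame generation feasible.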
WHAMM was trained on Quake II with just over a week of data, a dramatic reduction from the seven years required for WHAM-1.6B. Likewise, the resolution has been bumped up from a pixel-like 300 x 180 to a slightly less pixel-like 640 x 360. You can try out the demo yourself at Copilot Labs.
The model's ability to keep track of the existing environment, apart from the occasional graphical anomaly, while simultaneously adapting to user inputs is impressive, regardless of the atrociously bad input lag. You can move, jump, crouch, look around, and even shoot enemies, but ultimately it's no more than a fancy showcase and can never substitute for the original experience.
As expected, the model isn't perfect. Enemy interactions are described as fuzzy, the context length is limited, the model incorrectly tracks vital stats like health and damage, and it is confined to a single level.
This announcement follows OpenAI's recent Ghibli trend, which has garnered a lot of negative attention. While I'm no artist, there's a certain human element to every piece of creative work that AI cannot truly recreate. Yet, at AI's current rate of development, fully AI-generated games and movies could be a reality within the next few years, and that's where things are heading.
The sweet spot lies in AI enhancing, not replacing, creative works, like Nvidia's ACE technology, which can power lifelike NPCs. Parts of this technology are already integrated into the life simulation game inZOI. From a technological point of view, WHAMM still represents a step up from previous attempts, which were often chaotic, incoherent, and teeming with hallucinations.
]]>OctoPrint is software that allows you to control and monitor your 3D printer remotely. You can start and stop prints, adjust 3D printing settings, and even view the live progress of your 3D prints via a camera. The software supports various plugins, each designed to address specific needs, allowing you to tailor the software to your requirements. If you own one of the best 3D printers and want to create timelapses, you must install Octolapse.
Some plugins help you organize print files on OctoPrint and automate routine tasks like bed leveling or filament changes. Others focus on analytics and reporting, offering insights into print time by analyzing the G-code as well as the performance of the 3D printer. Below, we look at the five best plugins for OctoPrint. But before that, let’s look at how to access and install plugins in OctoPrint.
Accessing and installing plugins on OctoPrint is straightforward if you have already managed to set up the software. Follow the steps below.
1. Log in to the OctoPrint interface in your browser.
2. Click the wrench icon on the top-right corner to access the settings menu.
3. Locate and click Plugin Manager under the OCTOPRINT section. This enables you to search for, install, and manage plugins.
You will be able to see the plugins already installed. To look for more, click Get More.
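The Plugin Manager steps above use the web UI, but OctoPrint also exposes a REST API that is handy for scripting. Below is a minimal sketch that only builds the authenticated request; the host name and key are placeholders, and actually sending the request requires a running OctoPrint instance with an API key generated under Settings:

```python
OCTOPRINT_URL = "http://octopi.local"  # placeholder host for your instance
API_KEY = "YOUR_API_KEY"               # placeholder; generate one in Settings

def build_request(path):
    """Every OctoPrint REST call must carry the X-Api-Key header."""
    return f"{OCTOPRINT_URL}{path}", {"X-Api-Key": API_KEY}

url, headers = build_request("/api/version")
print(url)  # http://octopi.local/api/version
# With the `requests` package installed, you could then call:
#   requests.get(url, headers=headers, timeout=10).json()
```

Checking `/api/version` first is a simple way to confirm the server is reachable and the key is valid before scripting anything more involved.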
Let’s now have a look at the best OctoPrint plugins to install.
Octolapse is the first plugin on our list. If you have ever come across short, interesting 3D printing time-lapses where the print looks like it’s growing steadily from the print bed, this is the plugin that can be used to create them. It captures snapshots at various stages of the printing process and synchronizes the camera's movement with the print head. Octolapse is completely free, and all you need to have is a Raspberry Pi and a camera, which can be a webcam, Pi camera, or DSLR. Follow the steps below to install and use it.
1. Search for Octolapse on the search bar in the settings section, then install it.
2. Restart OctoPrint, and then add your 3D printer profile using the Add Profile section. Then, connect it to OctoPrint using a USB cable.
3. Start the 3D printer connection by clicking on Connect on the left section of the interface.
You can then go ahead and set up your camera settings.
When you finish, start your 3D print and you will see the snapshots of your timelapses in the videos and images section. You can choose to download or delete them from there.
Bed leveling is an important and difficult process, especially if your 3D printer doesn’t have automatic bed leveling. Even auto-bed leveling might not be accurate at times, and your 3D prints can fail if part of the bed is not well-leveled. The Bed Visualizer plugin is a great tool to help you properly level your bed. It integrates with the 3D printer firmware to gather the bed's details and then generates a 3D mesh visualization.
The visualization highlights high and low points in the X, Y, and Z axes. This points out the areas that can cause issues, enabling you to make the necessary adjustments. To set up the plugin, your 3D printer must be running compatible firmware like Marlin, PrusaFirmware, Klipper, or Smoothieware.
Follow the steps below to learn how to use it.
1. Go to the settings section, then Plugins > Bed Visualizer. If you don’t see it, go to Get More, then search for it there.
Sometimes, the plugin might fail to install due to missing system dependencies. If that happens, SSH into your Raspberry Pi and run the command sudo apt install libatlas3-base so that the plugin can load.
2. Click on the plugin, and you will need to set the Gcode commands for it to work. You can find the firmware-specific examples on GitHub. You can then enter them in the Gcode section.
3. When you enter the example, save, and update the Mesh, you will see your 3D printer moving as the plugin retrieves the current mesh, as shown below.
You will see a visualization of your bed when it finishes, as shown below.
4. Click the settings icon on the right to get more details on your 3D printer bed. When you go to the Corrections section and hover your mouse over the numbers, you will see how much the points need to be adjusted relative to the point that you click.
As the name suggests, PrintTimeGenius helps estimate the time it takes to 3D print your file. It analyzes the actual G-code instead of relying on the 3D printer's predictions, which are often inaccurate. The good thing about this plugin is that it learns over time by comparing actual print durations with its predictions, improving its accuracy after each print. You can find it already installed in the plugins section. Follow the steps below to use it.
1. Click on the plugin and adjust the settings accordingly. The default settings work well for most people. Click Save.
2. Upload a G-code to OctoPrint, then click the load and print icon just above the upload option.
3. Go to the plugins section, click PrintTimeGenius, then select the G-code and click Analyze.
When 3D printing multiple objects at once, if something goes wrong and you would like to cancel one of the objects, it can be hard to do so without disrupting the others. Cancel Objects makes it easier to cancel specific objects mid-print without restarting the entire print job. This is helpful as it prevents the failed objects from interfering with the other objects, saving time and material. To use it, follow the steps below.
1. Go to the plugins section and click Cancel Objects. If it’s not there already, go to Get More and search for it there. You can also get it on GitHub.
2. Upload multiple objects, then select the Cancel Objects option from the drop-down menu near Terminal.
3. Load the files from the left section of the interface. They will appear in the list, each with an option to cancel it.
4. When you click cancel on an individual model, a window will launch asking you to confirm the cancellation. Accept to proceed.
Obico, formerly known as The Spaghetti Detective, uses AI to detect potential print failures in real time. It monitors the progress of the print, identifies issues like filament tangles or spaghetti-like extrusion failures, and alerts you. Obico also integrates with webcams, allowing you to visually monitor prints through your phone or computer. It also supports notifications through SMS, email, or push alerts. Get to know how to use it in the steps below.
1. Go to Plugins Manager > Get More, search for Obico, and install it.
2. Restart OctoPrint, then reload the page so that the changes take effect.
3. Click Setup Plugin to start setting up Obico.
4. Choose whether to set up with a mobile app or web browser.
In my case, I will choose the web browser.
5. Continue to open the Obico website and sign up for an account.
6. Link your 3D printer to Obico by clicking on Link Printer.
7. Select OctoPrint in the window that launches, then click Next. It will start scanning for your 3D printer. For it to be found, your 3D printer must be powered on, and if you are connecting via a Raspberry Pi, make sure the Pi is powered on as well. You can also link it manually by clicking Switch to Manual Linking.
8. Copy the 6-digit verification code that will be generated.
9. Go back to the previous page in the Obico plugin in OctoPrint and click Continue.
10. Paste the verification code and Obico will be set up successfully.
11. Go back to the Obico web application, where you can rename your 3D printer, check the 3D printer feed, add a phone number, and even change the 3D printer settings.
You can also go ahead and connect the 3D printer via the serial port.
You can also upload your G-code to the platform and start 3D printing on Obico.
You can also download designs from the 3D models section, slice them on the platform, choose your 3D printer, and then confirm. You can find the 3D print failure detection option by scrolling down in the 3D printer section.
If you have connected a camera, you will be able to view the 3D printing process live in the right section.
When you sign up for Obico, you get a 30-day free trial. Afterward, you can stay on the free plan or upgrade to the Pro version, which costs $4/month. The free version offers basic webcam streaming, 10 free AI detection hours monthly, and up to 50MB of G-code cloud storage per file. The Pro version, on the other hand, gives you premium webcam streaming, 50 AI detection hours per month, and G-code cloud storage of up to 500MB per file.
US tariffs have caused a huge stir across every industry, and PC hardware is no exception. Today, we received an email from Vaio touting a tariff-free sale on their laptop inventory.
This was confirmed in the email but isn't clear on the website. In the email, Vaio assured us that this applies to the Vaio SX-R laptops, but we're unsure if it applies to other laptop series.
We are sure that this is only a temporary offer. The email stated that tariff-free pricing will only apply to the current inventory. Once this stock has been sold, tariff pricing will be applied, and we anticipate that the prices will increase notably.
Therefore, this may be your last chance to snag a brand-new Vaio SX-R laptop without the avalanche of tariffs impacting the price.
Over the last few days, we've covered some of the impending impacts of the latest tariffs. Most recently, there have been concerns that chipmaking tools will make US-made processors much more expensive to manufacture.
While exceptions have been made for semiconductor imports, the same cannot be said for wafer fabrication equipment (WFE). These uncertainties make offers like this from Vaio enticing, albeit fleeting.
The email from Vaio confirmed that the tariff-free pricing will definitely apply to the Vaio SX-R line. However, we aren't sure if it applies to all of the current laptop inventory, which also includes the Vaio FS series. That said, the "Shop Tariff Free" URL takes us to a page listing the Vaio SX-R and Vaio FS laptop lines.
At the very least, you can see specs for both machines on the Vaio website. The Vaio SX-R starts at $2,199 and goes up to $2,499, with the biggest difference being the storage options, which range from 1TB to 2TB. Both versions come with an Intel Core Ultra 7 155H processor, 32GB of RAM, and a 14-inch touchscreen with a resolution of 2560 x 1440px.
As we near the second anniversary of the RTX 4070, Zephyr has introduced the RTX 4070 Sakura Snow X edition, showcasing an exotic all-CNC-machined shroud, including an integrated I/O bracket (via Videocardz). While an RTX 5070 might seem more logical, Zephyr doesn't generally consider performance when powering its distinctive small form factor designs. Likewise, porting such a design to the RTX 5070 might not be practical from a sales standpoint, given the current GPU market.
Zephyr is a relatively new and niche GPU manufacturer from China specializing in custom, compact GPUs with extraordinary designs, like the "world's first" ITX form factor RTX 4070 it announced last year. GPU shrouds are typically made of plastic, or sometimes metals like aluminum for high-end GPUs such as MSI's Suprim X models. Generally speaking, single-fan, mini-ITX GPUs forgo metal shrouds for budget reasons, but the Sakura Snow X is a unique exception.
While renders show the Sakura Snow X as all-white, real-world photos indicate a more metallic grey finish instead, probably due to the lighting. The GPU offers a single-fan, dual-slot design, carrying the mini-ITX form factor, and is compatible with Nvidia's SFF guidelines. Listed at 176x127x41mm (LWH), the Sakura Snow X's dimensions don't include the bracket, so plan your build accordingly. Overall, the GPU looks pristine from every angle, almost like a clean-cut solid metal block, speaking to the precision achieved with CNC machines.
In-house testing shows that the 105mm diameter fan, coupled with the all-metal shroud, decreases GPU core temperatures by up to three degrees Celsius compared to the original Sakura edition. In terms of specifications, we're looking at an AD104-250 die with 5,888 CUDA cores and 12GB of G6X memory, which is standard for all RTX 4070 GPUs. The RTX 4070 still holds up pretty well against its Blackwell successor, landing just 16% slower. The upside? You might be able to find this GPU in stock.
Despite the reduced dimensions, Zephyr still adheres to Nvidia's reference clock speeds with a 200W TGP. You can always undervolt to reduce temperatures and power consumption and even increase performance if your GPU is thermal throttling. The RTX 4070 Sakura Snow X is available through Chinese e-commerce platforms for 4,399 RMB or $600.
3D Gloop! is finally leaving dad’s garage and getting its own pad – or rather, a 10,000 square foot warehouse with space for a new lab and storefront. After nearly seven years in business, Andrew Mayhall and Andrew Martinussen have outgrown Mayhall’s garage (and living room, crawl space, storage locker, and a few trailers). They are moving their “science sauce” production to its own facility.
3D Gloop is a solvent-based adhesive that chemically welds together plastics like PLA, PETG, and ABS/ASA. The exceptionally strong bond is arguably the strongest you can create for 3D printed parts.
The pair began 3D Gloop! as a start-up in 2018 and donned lab coats to promote their “ludicrously” strong plastics adhesive at 3D printing festivals across the country. You may have seen them with Jephf, an automotive welding robot turned tug-of-rope warrior, at Open Sauce last year. Visitors were challenged to play tug of war with a rope held together by 3D printed parts and Gloop.
Mayhall said the business could use a little help during its next growth phase. They have covered the basics and will begin renovating the warehouse in April after they take possession. However, the local municipality where Gloop is moving threw a wrench in the works: it requires the business to include a retail showroom.
The new space will include more than just shelves of glue. Mayhall said they are already working with local school districts to host 3D printing workshops at their new facility and inspire a whole new generation of makers.
To cover the additional costs, they are offering Gloop fans a chance to purchase a chunk of their literal foundation and become “Foundational Supporters”. Supporters will have their name permanently embedded into the showroom’s floor with Gloop’s iconic purple splatters made of copper, aluminum, and dyed concrete.
Foundational Supporters will also receive a gift of 3D Gloop and a lifetime discount. Top-tier supporters will also be able to participate in the Alpha and Beta tests for future products.
Foundational Supporters start at $250 and go up to $1,500, and only a limited number of floor splatters are available. Head over to 3D Gloop to check it out.
Now is one of the best times to get a top-quality gaming monitor. More specifically, the HP Omen Transcend 32-inch quantum dot OLED (QD-OLED) gaming display is available at Newegg for its lowest price ever. It's typically priced around $999, but today it's been discounted to just $759.
We reviewed the HP Omen Transcend 32 monitor last month and loved its performance, rating it 5 out of 5 stars. It oozed quality and made for a top-notch experience in every metric we tested it against. Our only con was a light suggestion for a remote control. That said, if you want to see how well this monitor stacks up against others on the market, check out our list of the best gaming monitors.
HP Omen Transcend 32-Inch QD-OLED 4K Monitor: now $759 at Newegg (was $999)
This monitor is huge, spanning 32 inches with a dense 4K UHD resolution. It's AMD FreeSync Premium Pro certified and has DisplayPort, HDMI, and even USB options for video input.
The HP Omen Transcend 32 monitor features a 31.5-inch QD-OLED panel. It has a dense 4K UHD resolution of 3840 x 2160px. The refresh rate can reach as high as 240Hz, while the response time can reach an impressively low 0.03 ms. It's AMD FreeSync Premium Pro-certified for its performance.
You get a handful of video input options, including one USB port, a DisplayPort 2.1 input, and two HDMI 2.1 ports. The screen covers 100% of the sRGB color gamut and reaches a maximum brightness of 1,000 nits. It has three USB Type-C ports alongside three USB Type-A ports. As far as audio support goes, it has built-in speakers, but you can also take advantage of its 3.5mm jack for connecting external audio peripherals.
So far, no expiration date has been specified for the discount, so we're not sure how long it will be available. For purchase options, visit the HP Omen Transcend 32-Inch QD-OLED 4K gaming monitor product page at Newegg.
Organic light-emitting diode (OLED) technology has swept through the computing space, delivering a superior viewing experience in devices ranging from smartphones to tablets to laptops to the best OLED gaming monitors. When it comes to PC monitors, there are generally two popular options available to consumers: WOLED and QD-OLED.
WOLED stands for White OLED and has been popularized by LG. WOLEDs feature four subpixels: red, green, blue, and white. WOLEDs do away with the individual emitters for the red, green, and blue filters, and rely on a single layer that emits white light. The white subpixel doesn’t have a filter, so it lets the white light from the emitter pass through uninterrupted.
This arrangement allows WOLEDs to carry the same benefits of traditional OLEDs – namely, per-pixel control of light output resulting in incredible contrast – but it also has an added advantage. By using a single white emitter to pass through the color filters, you don’t run into a problem where individual emitters for red, green, and blue age at different rates, resulting in color shifting and burn-in.
WOLED technology doesn’t completely eliminate burn-in or image retention on monitors, but it can lessen the severity of the phenomena over time.
Another thing to consider with WOLEDs, however, is that the use of a filter has some downsides. While you can achieve superbly bright whites thanks to the white light emitter, the color filters can blunt that light production, reducing color volume.
Quantum-Dot OLED (QD-OLED), developed by Samsung Display, swaps the white light emitter for a blue light source. The red and green subpixels are infused with red and green quantum dots, respectively, over the blue emissive layer, while the blue subpixel simply passes the blue light through unconverted.
The efficient nature of quantum dots (they convert roughly 99% of the light they receive) means that QD-OLED panels can reach higher peak brightness levels and produce superior color, thanks to not needing to deal with the color filters in a WOLED panel.
Another advantage is that since you don’t need to drive the white light, which also must contend with the color filters, QD-OLEDs are also more power efficient.
Our testing found that QD-OLED monitors have consistently delivered superior color volume compared to WOLED monitors. For example, we’ve seen QD-OLED panels achieving around 110 percent coverage of DCI-P3 for superbly saturated color in SDR and HDR content. And the AOC AG346UCD is officially the most colorful OLED monitor we’ve tested to date, coming oh so close to perfection with 100.95% sRGB volume.
Burn-in occurs when an image shown on a monitor is retained and remains faintly visible when other content is displayed. An example would be a static ticker bar at the bottom of the screen on a TV news channel (a la CNN or Fox News), or a status panel/health bar in a game. If that bar is left in the same position without any mitigation strategies in place, you would see its ghost when switching to media content that, for example, features a light background.
When we test OLED gaming monitors, we don’t have them in our possession long enough to perform any endurance testing to check for image burn-in. However, manufacturers have implemented comprehensive mechanisms in firmware to mitigate burn-in on modern OLED, WOLED, and QD-OLED panels.
For example, Philips uses pixel shifting to move displayed content by 1 pixel (and up to 8 pixels) up/left/right/down to reduce burn-in. This can occur automatically once every 80 seconds, and you can also manually trigger a clearing cycle for any burn-in that occurs. LG offers Clear Panel Noise and Screen Shift features as well, while Samsung panels offer Pixel Shift, Adjust Logo Brightness, pixel refresh, and screen optimization features.
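To make the pixel-shifting idea concrete, here is a simplified sketch of the logic, not any vendor's actual firmware; the offset pattern and the 80-second interval are assumptions for illustration.

```python
# Simplified pixel-shift sketch: nudge the whole frame by a pixel at a
# fixed interval so static elements don't age the same subpixels.
PATTERN = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]  # center, right, down, left, up

def frame_offset(elapsed_seconds: float, interval: int = 80, pattern=PATTERN):
    """Return the (x, y) frame offset after `elapsed_seconds`, advancing
    one step through the pattern every `interval` seconds."""
    step = int(elapsed_seconds // interval) % len(pattern)
    return pattern[step]
```

Because the shift is only a pixel or two and changes slowly, it is effectively invisible to the viewer while still spreading wear across neighboring subpixels.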
While it’s impossible to state that an OLED panel will never experience one hint of burn-in/image retention over its useful life, these features significantly lower the risk.
It’s hard to find a bad OLED monitor these days, as the contrast ratio, color volume, and response times put them in a league far surpassing their VA and IPS counterparts. Pricing is the only major downside to going with any OLED panel over, say, an IPS equivalent.
You pay a significant pricing premium to enjoy the tangible benefits of OLED. But once you commit to OLED, you’ll have to decide how much of a hit you want to take to your wallet. WOLED-based monitors are generally cheaper, but you have to contend with the limits on brightness and color volume. QD-OLED monitors aren’t hindered by those limits, but you will pay an even dearer premium to enjoy those fruits.
]]>Our SSD benchmarks hierarchy provides a look at how all the different SSDs we've tested over the years stack up. These are all M.2 NVMe drives, but our test group has PCIe 3.0, 4.0, and 5.0 models. This is not our list of the best SSDs, as we're looking to rank the drives by raw performance, regardless of price — and when buying an SSD, the price per GB tends to be a major consideration.
We've grouped the SSDs by capacity and type to help keep things simple. There are tables for 1TB, 2TB, and 4TB+ as well as charts below. We also have a separate chart for all the M.2 2230 drives — for the best Steam Deck SSDs and other handhelds. Given current prices, not to mention the voracious storage appetite of modern games, we're going to start with the 2TB drives. These are generally the sweet spot in price-to-performance and capacity ratios, though there's still a wide range in price — we're looking at you, PCIe 5.0 drives.
We've added over a dozen new drives to the SSD benchmarks list, including newcomers like the Samsung 9100 Pro, Acer GM9000, and Micron 4600. NAND memory prices are heading north, it seems, with price hikes of 5–10 percent predicted for the coming months, so if you're looking for fast and plentiful storage, you might want to act sooner rather than later.
We've sorted by the random QD1 IOPS results for the tables — the geometric mean of both the read and write IOPS, to be precise. This is one of the more realistic representations of overall SSD performance, even if it's a synthetic test, as it's difficult to game the system. Lots of manufacturers will test random IO performance at queue depths of 32 or even 256 because that makes everything look much faster, but in the real world, random queue depths are mostly at QD1 and almost never go beyond QD4.
Besides 4K IOPS, our tables also show the sequential performance (the geomean of the QD8 sequential read and write tests), file copy bandwidth (for a 50GB folder copy with over 30,000 files), average power consumption while copying those files, and finally a look at the geometric mean of all the read/write bandwidth tests. All of these metrics are also broken out into separate charts, should you prefer that format.
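For readers who want to reproduce the ranking metric, here is a minimal sketch of the geometric-mean scoring described above. The IOPS numbers are made up for illustration and are not actual review data.

```python
from math import prod

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1 / len(values))

# Hypothetical QD1 results for a single drive (illustrative only):
read_iops, write_iops = 90_000, 300_000
qd1_score = geomean([read_iops, write_iops])
# Unlike an arithmetic mean, one weak result drags the geomean down
# sharply, so a drive can't hide a bad write score behind a great read score.
```

The same geomean approach applies to the QD8 sequential read/write results and the combined read/write bandwidth column.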
Also, if you're an SSD manufacturer and you don't see your drive in our tables, send me an email, and we can see about testing it. We can't test every capacity of every drive out there, but we like to show a wide sampling of options.
We use the QD1 4K random results to quantify the snappiness and responsiveness of the SSD during a normal desktop PC experience. It should be immediately obvious that there's not much difference between the various PCIe 3.0, 4.0, and 5.0 drives when it comes to QD1 random I/O. Yes, the Corsair MP600 Elite does take the top spot, barely, while second place goes to the Crucial T705 — the fastest SSD we've tested at present. Some of the other top-performing drives like the Crucial T500, Solidigm P44 Pro, and Kingston KC3000 are PCIe 4.0 drives, however.
Since we're only using data from the past couple of years, after we switched to our current Core i9-12900K test PC, we're decidedly heavy on PCIe 4.0 and 5.0 drives. There are still a decent number of PCIe 3.0 drives, though they sit near the bottom of the table and charts. Even so, the fastest drives offer less than twice the random I/O performance of the slowest drives.
That's why we also include the other columns for performance. The pure sequential scores show maximum throughput, generally within most drives' "burst" pSLC cache period. If you're doing drive-to-drive copies or backups using PCIe 5.0 hardware, it can make a huge difference — the Phison E26 SSDs all sit at the top, significantly ahead of the fastest PCIe 4.0 drives, and you can also see the difference in NAND speed when looking at the E26 drives. The T705 and Max14um have 14 GT/s NAND, several others use 12 GT/s NAND, and the earlier models are 10 GT/s.
Copy performance is more of a real-world look at a common task: Copying 50GB of data from the drive to itself. This requires simultaneous reads and writes, and even the fastest drives drop to under 3.0 GB/s, which is still about quadruple the performance of the slowest SSDs we've tested.
We noted last year that Black Friday / Cyber Monday was a great time to upgrade your SSD, and warned that prices could head north in the coming months. That has now proven to be the case, with many SSDs now selling for 20–30 percent more than what they cost last November. Where high-performance 2TB drives were previously starting at around $100, most now cost $120+. Samsung's 990 Pro 2TB, as another example, dropped as far as $119 in November and now starts at $179, almost a 50% increase.
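A quick sanity check of that last figure, using the two 990 Pro prices quoted above:

```python
# Samsung 990 Pro 2TB: November low vs. current starting price.
old_price, new_price = 119, 179
increase_pct = (new_price - old_price) / old_price * 100
print(f"{increase_pct:.1f}% increase")  # about 50%, matching the claim
```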
Last up for standard SSDs, we have the 4TB and higher capacity drives. So far, we've only tested 15 such SSDs, though we expect more will arrive in our labs for testing over the coming year.
We've seen exactly one 4TB PCIe 5.0 drive so far, the Crucial T700 4TB, which is a beast of an SSD. It's also beastly on pricing, currently selling for $473 (down from a $599 MSRP). If you want the faster T705 4TB, it's $543 (but we haven't tested the 4TB T705). We love the idea of large and fast performance, and prices are at least coming down now, but it's still far more economical to pick up something like the Samsung 990 Pro 4TB for $339.
Stepping up to 8TB drives usually means QLC NAND, and while that's not the end of the world, there are often performance compromises. Not to mention, the 8TB SSDs still cost a pretty penny. The Sabrent Rocket 4 Plus 8TB costs $1,199, while the Sabrent Rocket Q 8TB is "currently unavailable" on Amazon. It would be far cheaper to just pick up multiple 4TB drives rather than plunking down that much money for a single 8TB drive.
Sequential performance for most PCIe 4.0 SSDs lands right around 7 GB/s, with a couple of slower/older models at around 4.4 GB/s. The PCIe 3.0 drives all peak at just over 3.2 GB/s. File copy speeds are about one-third to one-fourth as fast, however.
The 1TB SSDs mostly mirror what we've already seen with the 2TB drives. However, in more extensive testing (like our write saturation tests), the lower capacity means you'll run out of pSLC cache more quickly. There are of course exceptions, like Intel's Optane SSDs that provide incredible QD1 random IO. RIP, 3D XPoint... RIP.
Aside from the Optane drives, the Gigabyte Gen5 12000 sits at the top of all four charts, with a sizeable lead on the sequential performance metric. The Nextorage NE5N is the only other 1TB PCIe 5.0 drive that we've tested, but it comes with slower NAND and thus falls behind in some of the other tests.
The random performance again gives a great illustration of why so many people might think that faster SSDs don't really make that much of a difference. QD1 is the most likely scenario for random workloads, and even the fastest SSD is only about 70% faster than the slowest SSD in our group. But sequential performance does matter, even for things as simple as verifying a game installation in Steam. The top performers are up to four times as fast as the slowest drives in that case.
The copy results level the playing field. Many of the SSDs will use the same controller and same NAND, which is why there are a lot of SSDs that deliver roughly the same performance. They won't be the same in every instance, but for moderate use, just about any of these SSDs will still perform competently, in which case, looking for a good deal is often the determining factor.
You can now find even quality 1TB drives for well under $100. The least expensive 1TB SSD that we've tested right now is the HP FX900 at $65, which provides a good blend of performance overall. Faster drives like the WD Black SN850X 1TB now cost $89, making them less enticing. The cheapest drives cost $5–$10 less, but they're often slower, use QLC NAND, and/or have some other potential concerns.
Wrapping things up, nearly all of the drives in the previous lists have been 2280 models — 22mm wide and 80mm long. M.2 2230 SSDs are becoming popular, thanks in part to the rise of the Steam Deck and other handheld gaming portables. We've tested a baker's dozen of the 2230 SSDs, and nearly all of the drives use the same hardware, resulting in very similar performance. There are only two exceptions: the WD Black SN770M and SN740 use a custom WD controller, while the Inland TN436 uses an older Phison E18T controller — everything else uses the Phison E21T controller.
A few of the drives scored consistently better than the rest in our random IO tests, likely thanks to newer firmware. The TN436 ends up dead last, as expected, while everything else falls within a relatively narrow range of 40K–41K IOPS.
The WD Black SN770M and WD SN740 (an OEM variant of the same hardware, more or less) take the top spots in the sequential performance tests. At the same time, the controller gets hotter than other drives, which can be a potential concern for using it in the Steam Deck. The new Lexar Play 1TB also performed ahead of the crowd, followed by the other 1TB TLC drives, with the QLC-equipped drives filling out the chart.
The biggest issue with M.2 2230 drives is their pricing. The 1TB models are at least reasonably competitive, with the Corsair MP600 Mini going for $84, but the 2TB drives generally cost over twice as much. It's the price for going ultra-compact, and if you're just looking for the least expensive 2TB drive you can find, the Addlink S91 2TB costs $178 — a reasonable choice for the Steam Deck. The absolute cheapest 2TB M.2 2230 drive currently available is the Micron 2400 at $156, which we haven't reviewed; it uses the SM2269XT controller and QLC NAND, so it will likely be a bit slower than the Phison E21T models.
The 2230 drives are very much not about maximum performance. Most 2TB models use QLC NAND, and under sustained write saturation testing, they'll drop below 100 MB/s. But that's the thing: A Steam Deck can't even write at 100 MB/s if you're downloading games over its wireless connection. We typically saw peak data rates of only ~30 MB/s. So, picking up the most cost-effective 2230 drive for such use makes sense.
After a wave of GeForce RTX 5060 series rumors hit the internet last month, online retailers are now publishing pre-built listings sporting RTX 5060 and 5060 Ti graphics cards. Resident leaker momomo_us on X discovered one such PC sporting an RTX 5060 priced at $1,149.
The pre-built system in question is a CyberPowerPC "Gamer Master" desktop sold by Best Buy. It sports a Ryzen 7 8700F processor, 16GB of DDR5 memory, 2TB SSD, and, of course, an RTX 5060 8GB graphics card. Assuming the images represent the actual model, the RTX 5060 inside appears to be a compact dual-slot AIB partner model (not a Founders Edition card) with a single 8-pin power cable.
An identically configured CyberPowerPC from the same retailer sporting an older RTX 4060 sells for just $50 more than its RTX 5060 counterpart. The minor price deviation suggests the RTX 5060's official MSRP will be the same as (or at least very similar to) its predecessor's (which has an MSRP of $299, though it is not selling at that price currently). The 8GB moniker in the title and on the spec page seemingly confirms the GPU will come with 8GB of VRAM, as rumors have suggested.
Three RTX 5060/5060 Ti pre-built listings from a PC builder known as Stormcraft have also shown up at Newegg. The trio comes with 14th-generation Intel CPUs, 32GB of RAM, and a 650W power supply in a Micro-ATX fish tank chassis. The RTX 5060 variant costs $1,399 and features a Core i7-14700F CPU. The cheapest RTX 5060 Ti variant is paired with a Core i5-14400F, costing $1,299, and the most expensive is paired with a Core i7-14700F, priced at $1,499. VRAM capacity on the RTX 5060 Ti models was not specified.
Nvidia could be very close to announcing and launching the RTX 5060 Ti and RTX 5060 now that system builders and retailers are publishing listings of OEM pre-builts sporting the two GPUs. Rumors regarding the RTX 5060 series have been floating around for at least a month. The RTX 5060 Ti is rumored to carry the GB206-3001-A1 core sporting 4,608 CUDA cores (36 SMs), a 128-bit interface, 180W TGP, and two GDDR7 VRAM options: an 8GB model and a more expensive 16GB (clamshell) variant (just like its predecessor) for VRAM-conscious buyers.
The vanilla RTX 5060 is rumored to carry the GB206-250-A1 die sporting 3,840 CUDA cores (30 SMs), a 128-bit interface, 150W TGP, and 8GB of GDDR7. The latest rumor suggests Nvidia will launch both GPUs on April 16th at 9 PM.
Loongson, a fabless Chinese CPU manufacturer, announced the successful tapeout of two new processors, the 2K3000 and the 3B6000M, designed for industrial control and mobile, respectively. Both chips employ the same underlying silicon but have been uniquely packaged for their target markets. It's important to note that reaching High Volume Manufacturing (HVM) for these chips may take some time, and we're likely months away from seeing them being produced at scale.
Based on Loongson's roadmap, the 2K3000 family is overdue, as it was initially scheduled for late 2024. The chip offers eight LA364E-based CPU cores with a base frequency of 2.5 GHz. In-house testing shows the CPU can hit 30 points in integer performance under SPEC CPU2006, which is hard to compare against modern-day CPUs, as the suite was retired in 2018.
Either way, even at iso-frequencies, this performance pales in comparison to the desktop 3A6000 based on the LA664 architecture. Due to Loongson's unclear naming scheme, it's hard to definitively say which architecture is more recent. Then again, we're probably barking up the wrong tree, given the age of SPEC CPU2006.
The iGPU (integrated GPU) draws its lineage from Loongson's recently introduced LG200 GPGPU (General Purpose GPU). Loongson claims that, in addition to basic graphics acceleration with OpenGL 4.0 support, the built-in GPU can also power lightweight AI workloads, rated at 8 TOPS of INT8 performance alongside 256 GFLOPS of FP32 compute. While FLOPS aren't a perfect indicator of real-world performance, that's somewhat faster than the original Nintendo Switch's GPU in handheld mode.
Both chips are said to integrate independent hardware encoding and decoding modules, capable of displaying output to three interfaces (eDP/DP/HDMI) at up to 4K at 60 FPS. Loongson has also implemented Chinese SM2/SM3/SM4 cryptographic standards directly in hardware for security. Regarding I/O, we get support for PCIe 3.0, USB 3.0/2.0, SATA 3.0, GMAC for Ethernet, eMMC for storage, SDIO, SPI, LPC, RapidIO, and CAN-FD.
These chips further segment and diversify Loongson's desktops, servers, and mobile/industry offerings. Chinese manufacturers are likely to integrate the 3B6000M into domestic laptops, tablets, smartwatches, retro handhelds, and other devices.
Meanwhile, the industrial control counterpart (2K3000) is better suited for PLCs, HMIs, and edge servers. Given that support for LoongArch is still quite limited, Huawei is reportedly prepping a new desktop/laptop Arm-based chip dubbed Kirin X90 for its upcoming AI PCs, rumored to be powered by HarmonyOS.
During a 50th anniversary event at its Redmond, Washington headquarters, Microsoft executive vice president and CEO of AI, Mustafa Suleyman, detailed new features for Copilot to make it a true companion on Windows and smartphone platforms.
Many of these features have appeared in other AI programs, but others, like Vision, may help change the way you use your computer, should they become widely adopted.
Copilot Vision has moved off the web and into Windows and smartphone apps. This is likely the biggest feature to affect PC users in the near future.
On mobile, Copilot can look through your camera and analyze your surroundings. You can ask Copilot questions about what you see and have it analyze real-time video or anything stored in your camera roll. In one example, Microsoft showed Copilot pictures of plants to get advice on how to better care for them; on stage, Suleyman pointed a phone at dinosaur toys and asked for more information about the creatures.
Windows users can let Copilot view their screen and search across apps, browser tabs, and files for information. Microsoft also says that Copilot will be able to change settings, organize files, or work on projects without switching between apps.
Apple had suggested that Siri could do this with Apple Intelligence, but those features have been delayed.
For Windows, Vision will first roll out to Windows Insiders next week and then be distributed "more broadly afterwards."
If you don't have an app up, that's not an issue. Microsoft is rolling out Actions, a feature that lets Copilot complete tasks without you doing anything but prompting it. To some degree, this sounds like OpenAI's Operator. For example, you can ask the AI to make dinner reservations or send someone a gift.
While Actions should work on "most websites," launch partners include Booking.com, Expedia, Kayak, OpenTable, Tripadvisor, and 1-800-Flowers.
Microsoft is also adding a Deep Research functionality that analyzes multiple sources and combines information from across the web or documents. We've seen this in ChatGPT and DeepSeek, and now it's coming to Copilot.
Copilot is also powering search (something ChatGPT does as well), grabbing information from multiple sites in Bing to come up with a broad report with multiple citations.
"This allows you to be just a click away from your favorite publishers and content owners," Microsoft's Bing blog reads.
Copilot will be able to learn about you. As you interact with the AI, it will remember details you tell it, which Microsoft says will make responses richer and allow for proactive action.
If you don't want Copilot to get to know you that well, you can opt out entirely or limit what it remembers through a user dashboard.
Copilot's other new features are varied and, frankly, less interesting. There's an option to have Copilot create a podcast based on your interests. If you really want to learn about art or horticulture while doing the dishes, all you have to do is ask Copilot for a podcast about it.
The Copilot Shopping feature researches products and informs you about price drops (the latter of which sites like camelcamelcamel.com have done for years). But Copilot will also let you make purchases directly through the phone app, seemingly bypassing the store altogether by using the AI in an agentic way.
Pages is effectively an organizer. You can hand Copilot a ton of documents, notes, or other files, and have Copilot put them together into a clean outline for brainstorming, studying, or journaling.
Tariff-driven price hikes on your favorite tech, GPU prices through the roof, and the general cost of being a gamer skyrocketing: the news has been depressing lately. With all these extra costs floating about, and threats of further increases, it's lovely to find some actual deals on our favorite tech and hardware. Today, we have a gaming laptop from HP that won't break any FPS records, but it will give you a mobile platform to play your games on, and it's upgradable, so you can beef up the specs for even better performance.
Today's gaming laptop deal is available at Best Buy for just $449. The HP Victus 15 (model: 15-fb2063dx) is a great option if you want an affordable laptop for light gaming, playing games over a home network, or cloud gaming via Xbox Game Pass.
This 2024 model of the HP Victus features a 15.6-inch IPS display with an FHD (1920 x 1080 pixel) resolution, AMD's Ryzen 5 7535HS processor, AMD's Radeon RX 6550M GPU, 8GB of DDR5 RAM, and a 512GB SSD for your operating system and games library. 8GB of RAM and a 512GB SSD aren't great, with Windows alone chewing up a large portion of the available RAM. If you are thinking of purchasing this laptop, you will want to look at grabbing some more RAM and upgrading to at least 16GB for gaming. The good news is that the laptop is easily upgradable, and you can add another 8GB of RAM and install up to 2TB of storage in the M.2 slots.
HP Victus 15 Gaming Laptop: now $449 at Best Buy (was $799)
A compact gaming laptop that contains an AMD Radeon RX 6550M laptop GPU, a 6-core AMD Ryzen 5 7535HS processor, 8GB of DDR5 RAM, and a 512GB SSD for storage. The HP Victus 15 (model: 15-fb2063dx) uses a 15.6-inch display with a bright 300-nit IPS panel. The screen has a maximum resolution of 1920 x 1080 pixels and a smooth 144Hz refresh rate - perfect for high-motion gaming.
We've reviewed the HP Victus 16, the big brother of this laptop, and found it to have excellent battery life, a comfortable keyboard, a bright display, and an attractive design. The laptop featured in this deal is the smaller 15.6-inch model and includes many similar specifications and the same design, albeit in a slightly smaller chassis. At only $449, this is a great option for those on a budget who want to play games away from their main gaming rig.
Don't forget to look at our Best Buy coupon codes for April 2025 and see if you can save on today's deal or other products at Best Buy.
Of the key components in any PC, the storage drive is the slowest, taking far longer to transfer bits than your CPU and GPU take to process them or your RAM takes to load them. A poor-performing storage drive often leads to a big bottleneck, forcing your processor (even if it's one of the best CPUs for gaming) to waste clock cycles as it waits for data to crunch.
Finding the best SSD or solid-state drive for your specific system and needs is key if you want the best gaming PC or laptop, or even if you just want a snappy productivity machine. To find the best SSDs for gaming and productivity, we test dozens of drives each year and highlight the best ones here. We have multiple categories, including the best SSD for NAS and the best SSD for the Steam Deck listed below. For those on the hunt for the best SSD for the PS5, be sure to head to that link for our recommendations based on our exhaustive testing. If you're looking for the ultimate in cheap and deep storage, we also have a list of the best hard drives.
Amazon's Big Spring Sale is upon us and, with it, come significant savings on SSDs. For all the deals, check out our Big Spring Sale page. Our favorite SSD deal is below.
Samsung 990 Pro 4TB SSD: now $279 at Amazon (was $319)
The Samsung 990 Pro 4TB is among the fastest SSDs currently available on the market, with read and write speeds of up to 7450/6900 MB/s, maxing out the Gen 4 bandwidth.
The newest budget NVMe SSDs have undercut the pricing of mainstream drives on the slower SATA interface (which was originally designed for hard drives), but we shouldn't expect to see the end of SATA SSDs any time soon.
The era of PCIe 5.0 SSDs is also upon us, propelling storage performance to new heights. Blazing-fast PCIe 5.0 M.2 SSDs, which offer up to twice the sequential speeds of the older PCIe 4.0 standard, are now supported on Intel and AMD's current platforms, like Zen 4 Ryzen 7000, Zen 5 Ryzen 9000, and 12th-Gen Alder Lake through 14th-Gen Raptor Lake Refresh.
It's great if your desktop system can handle a PCIe 5.0 drive, but they are still new and more expensive and certainly aren't a requirement. For example, the PCIe 4.0 Samsung 990 Pro is our current choice for the best SSD overall, and the best SSD for gaming. This drive is rated for 7,450 / 6,900 MBps of sequential read/write throughput and 1.2 / 1.55 million read/write IOPS. That means less time waiting for game levels to load or videos to transcode, not to mention a snappier experience in Windows.
PCIe 5.0 SSDs still have plenty to offer. The Crucial T705 ranks as the fastest consumer SSD in the world that you can actually buy, alongside similar SSDs like the Sabrent Rocket 5, delivering up to a blistering 14.5 GB/s of sequential throughput and 1.8 million random IOPS over the PCIe 5.0 interface. That's an incredible level of performance from such a compact device.
While the PCIe 5.0 drives are the fastest SSDs money can buy right now, believe it or not, raw speed isn't everything. In regular desktop tasks such as web browsing or light desktop work, you may not even notice the difference between a PCIe 3.0 SSD and one with a 4.0 interface, let alone a new bleeding-edge PCIe 5.0 model. The latest PCIe 5.0 SSDs also carry a heavy price premium for now, so you're probably best served by a PCIe 4.0 model — unless you're after the fastest possible performance money can buy, of course. If that's the case and your system supports it, go for a new PCIe 5.0 SSD.
Ultimately, the best SSD for you is one that provides enough capacity to hold your data at a price you can afford. Consider that a high-end, AAA game can use more than 100GB of data, and Windows 11 all by itself may need 60GB. These days, we feel 2TB drives represent the sweet spot, with 4TB models becoming increasingly common.
Here's the shortlist of our rankings, but we have deeper breakdowns for these drives below, along with far more picks for other categories, like PS5 SSDs, RGB SSDs, workstation SSDs, and SATA SSDs, among other categories.
Below, you'll find our list of the best SSDs. For even more information, check out our SSD Buyer's Guide. If you're looking for an external SSD, you can check out our Best External Hard Drives and SSD page, or learn how to save some money by building your own external SSD.
Samsung hit back at its competitors with this impressive update to the 980 Pro. New hardware and new options, including a heatsink with RGB and a 4TB variant, have allowed Samsung to retake the M.2 SSD crown. Performance is excellent across the board, setting a few new performance records, such as with 4K random read performance. In our testing, the drive was consistent, power-efficient, and cool. Samsung has also updated its software for this drive, giving it the best SSD toolbox available, and the drive is backed by a competent warranty and decent support.
$20 extra for a heatsink and RGB is a good deal, and Samsung will likely discount this drive over time. Competing PCIe 5.0 drives on the market offer faster performance, but they still carry a premium.
Read: Samsung 990 Pro Review
WD has taken its popular Black SN850 SSD and turned it up to 11. The Black SN850X leverages an improved controller and newer flash to get the most out of the PCIe 4.0 interface. Performance is improved across the board, and the drive rivals most of the top contenders in the PCIe 4.0 market. There's also a heatsink option that comes with RGB at 1TB and 2TB. WD also supports the SSD with its decent Dashboard application and a respectable five-year warranty.
The M.2 Black SN850X launched with a daunting MSRP, but those prices have largely come down. The touted Game Mode 2.0 feature felt incomplete in our testing, although WD assures us that this will improve with future firmware updates. All in all, this is a good compromise if you can’t find the Samsung 990 Pro.
Read: WD Black SN850X Review
The Crucial T705 is the fastest drive we have tested to date, finally breaking the 14 GB/s barrier. Careful work with Phison’s Max14um reference SSD design has led Crucial to eke out even more performance, taking the excellent T700 - a previous Fastest SSD position holder - up a notch. The optional heatsink design remains passive, which is a bonus, and you can also purchase the drive bare. Aside from the solid sequential performance, the T705 also has good sustained performance and can reach an incredible 1,550K / 1,800K random read and write IOPS at 2TB.
This is the fastest drive for now, but there will be others. The Sabrent Rocket 5 is not too far behind, and there are drives built on non-Phison controllers - like the InnoGrit IG5666-based Teamgroup T-Force GE Pro - that also promise over 14 GB/s of potential throughput. PCIe 5.0 drives remain an enthusiast product due to cost and availability concerns, and so far, they have proven inefficient and unwise for laptops and the PS5. Still, if you want the fastest consumer storage you can buy, the T705 is the fastest drive on the market.
Read: Crucial T705 Review
The Sabrent Rocket 5 has the distinction of providing the fastest direct-to-TLC write performance we have ever seen. During the longest of workloads, it can average write speeds of 4.45 GB/s, outclassing any PCIe 3.0 drive in existence and beating our previous high points with TLC flash with either 4.0 or 5.0 SSDs. It’s otherwise similar to other drives based on the Phison E26 controller, but it’s at the upper end of those, too. This allows it to provide excellent all-around performance with DirectStorage-optimized firmware for future-proofing.
It has the same downsides as other ultra-fast drives - namely, high power consumption and poor power efficiency. Idle power consumption in a desktop PC, which is the most likely destination for the drive, remains quite high. The Rocket 5 can also put out a lot of heat when it’s pushed. If you can provide an ample heatsink, though, this drive will run cool enough even under sustained workloads without any throttling. This makes it one of the fastest overall drives on the market, and the absolute fastest in extended heavy workloads.
Read: Sabrent Rocket 5 Review
The Crucial T500 combines cutting-edge flash with a customized controller that manages to be power-efficient with just four channels but also squeezes in the coveted performance-boosting DRAM cache. The T500 is also a single-sided drive with TCG Opal support, making it perfect for professional laptop use.
Many laptops are still stuck with PCIe 3.0 slots, and that’s fine. The T500 will be even more efficient when run at 3.0, and its benefits, aside from bandwidth potential, do not disappear. While the T500 does offer a heatsinked version, which we have in our all-around best SSD category, you’ll be going bare for a laptop. In this respect, it can even be better than DRAM-less drives, as the T500’s controller has more surface area and a metal IHS to prevent controller overheating. It’s simply the finest drive for laptops at this time unless you really want more horsepower. That’s on the menu, too, especially once the 4TB version arrives.
Read: Crucial T500 Review
The Sabrent Rocket 4 replaces the original Rocket 4 with a faster, more power-efficient design. Although this is a DRAM-less drive, the performance is excellent, and the drive’s single-sided nature makes it great for laptops. This is an easy drop-in part that falls short of the T500 only in its omission of DRAM, but luckily, DRAM isn’t as much of a requirement as it once was. The Rocket 4 also tops out at only 2TB of capacity - the T500 promises 4TB this year, and there are some good 4TB options like the Lexar NM790 already available.
Read: Sabrent Rocket 4 Review
4TB has become a more attractive capacity point for SSDs as time has gone on. While there are now many options available, most come with compromises of one sort or another. You may have to settle for QLC, a weaker controller, no DRAM, unreliable hardware, etc. This is not always a big deal, especially if the drive is intended to be a secondary gaming drive. In the PlayStation 5, however, extra cooling is beneficial, so it’s convenient to have a heatsink option available. At the same time, laptops favor bare drives and especially single-sided drives, the latter of which have been very rare with TLC until recently.
Samsung has managed all of this with its high-performing 990 Pro SSD. You get a powerful controller with DRAM, cutting-edge TLC flash, and a single-sided drive with or without a heatsink, even at 4TB. WD’s SN850X has been out a while at 4TB but has no heatsink option and is double-sided, with the SN850P being a later, heatsinked version for the PS5. There has been an increasing number of 4TB TLC drives, including the Lexar NM790 and Addlink A93, but these cannot match the performance and brand recognition of Samsung’s 990 Pro. You do have to pay for that privilege given the high MSRP, but at this time there is no substitute.
Read: Samsung 990 Pro Review
Now that Crucial has finally brought out the 4TB SKU for the T500, it can replace the T700 on our best SSDs list for the best 4TB SSD alternative. The T700 is still a good choice for this, but the T500 is better for a few reasons. While both drives have a heatsink option, the T700 requires one, while the T500 can work bare in a laptop. The T500 is also more power-efficient but doesn’t skimp on performance by omitting DRAM. And while the T700 is PCIe 5.0 capable, many machines — including laptops and the PS5 — won’t benefit from that extra bandwidth.
The 4TB T500 is not without its faults, though. Its pricing is a little high for what you get, matching other high-end drives, which makes more sense on desktops. This is partly because the T500 has inconsistent sustained performance while those like the 990 Pro and SN850X do not. The 4TB T500 is also double-sided, which potentially reduces its compatibility. There are already single-sided, 4TB DRAM-less drives for less, such as the Lexar NM790, and there may be more in the future, although in general, this fact shouldn’t reduce the T500’s appeal.
Read: Crucial T500 4TB Review
WD has updated the popular Black SN850X with an 8TB model, a capacity that is in high demand within some circles. This is high density for the cost of just a single M.2 slot, while the far less expensive option of two 4TB drives requires two slots instead. There are cases where one slot is a hard limit, such as with the PlayStation 5 or some laptops or smaller desktop motherboards. In those cases, this drive is probably the best choice for an 8TB drive if that much space is needed. However, you do pay a premium for that benefit.
While 8TB is not a new milestone by any measure, WD can reach this without any real compromises. Performance is still about the same, and power efficiency is in the same ballpark. There’s also the option for a heatsink if you’d like, and that’s not a bad idea for a drive of this caliber. The 8TB model is still double-sided like the 4TB, which might make it a tight fit for specific laptops, but, for the most part, this is not a problem. It took WD a while to bring this capacity to the market, but it did it right, making it the best 8TB drive.
Read: WD Black SN850X 8TB SSD Review
The Teamgroup MP44 is one of those drives that remains a value champion just for being in the right place at the right time. It feels like a natural successor to the Teamgroup MP34, a drive that was once the most popular 4TB choice among budget PCIe 3.0 drives. The MP44 is even better than that, though, as it has a more reliable controller with up-to-date flash. As such, performance is good everywhere it matters and the drive is power-efficient, too.
It’s probably not the best SSD for laptops, as the controller can act as a hotspot, but otherwise it’s a good choice at any capacity. However, it faces more competition below 4TB. There are faster drives either way, but it’s difficult to argue with the MP44’s price. Its most direct rivals would be the Patriot Viper VP4300 Lite, the Lexar NM790, and the Addlink A93, but it generally beats them all with its lower cost, particularly at the coveted 4TB capacity.
Read: Teamgroup MP44 Review
While the Teamgroup MP44 is the value champion if you want the newest hardware and the fastest PCIe 4.0 speeds, going with its smaller sibling - the MP44L - is a good choice if budget is your top priority. Neither drive has DRAM, but both perform well enough that, in most cases, you’ll notice no difference between the MP44L and more expensive drives. There are no real performance pitfalls here and, further, the drive is efficient and single-sided for laptops or the PS5. It’s an easy pickup on sale.
The downside is that there are faster DRAM-less PCIe 4.0 drives around, like the Lexar NM790, Addlink A93, and Patriot Viper VP4300 Lite. However, extra power is often inconsequential if you’re on a budget. The MP44L competes more directly with WD’s Blue SN580, Black SN770, and Blue SN5000, as well as drives that share the MP44L’s hardware like the Silicon Power UD90 and Addlink S90 Lite, but sometimes these drives have swapped to QLC at higher capacities - the UD90 certainly has. The MP44L usually comes out on top as the best value, though.
Read: Teamgroup MP44L SSD Review
WD took its popular Black SN850 SSD and turned it up to 11, but luckily for value seekers, the price isn't nearly as extreme. The current $156 price on Amazon for the 2TB model is a great deal, even if it's now $25 more than it cost last year. The Black SN850X uses an improved controller and newer flash to get the most out of the PCIe 4.0 interface, thus delivering excellent performance with the Sony PlayStation 5. WD improved performance across the board, and the drive comes with a heatsink option at 1TB and 2TB capacity points.
WD also supports the SSD with a solid five-year warranty that will let you game with peace of mind. This drive is made for the PlayStation 5, and while it can be a bit pricier than budget options, overall, it's still our top pick for the PS5. It's also fast for gaming on a PC, particularly with DirectStorage starting to become useful, so this drive is plenty attractive.
WD has also released an officially licensed SN850P SSD. That drive is a glorified heatsinked SN850X, and you should only pick it if you want the heatsink at 4TB. Even then, it's far cheaper to get a bare SN850X and add your own heatsink.
Read: WD Black SN850X Review
The PNY CS3140 is not much different than the Kingston KC3000, the drive it replaces on our list, but it happens to be less expensive, which makes it a better fit for the PS5. When it comes to this console, less is more: once you hit a certain performance threshold, you start to look at price per GB. There are certainly drives that would beat the CS3140 here, like the Teamgroup MP44 or MSI M482, but they are DRAM-less. DRAM is not a requirement, especially for gaming, but if you want the very best that’s not called the SN850X, you’re looking at E18-based drives like the CS3140.
The CS3140 is no-frills, so you might want to add a heatsink of your own; in fact, PNY makes and sells one for the PS5 for around $11 more. This may or may not be required but does make the drive look sharp in the console. PNY doesn’t cut down on the warranty for this drive, and the numbers are about as good as it gets for performance, so you’re not missing out on anything by picking this over similar, more expensive drives. Its only weakness is the lack of an 8TB capacity, which we don’t think is a factor anymore, as WD has cornered that market with the SN850X and SN850P.
Read: PNY XLR8 CS3140 SSD Review
The Crucial P310 came as a bit of a surprise, but a welcome one. M.2 2230 SSDs have ratcheted up in popularity ever since Valve’s Steam Deck launched, and now there are more portable gaming systems than ever. There’s also Microsoft’s Surface Pro line and some laptops that take M.2 2230 or M.2 2242 - this drive can be extended up to M.2 2280 if needed - which used to mean going to eBay for OEM options like the WD SN740. This hasn’t been the case in a while, but finding a decent 2TB drive has remained difficult. The P310 handles that challenge like a champ.
Sure, it’s QLC-based, which means it’s not quite as fast or consistent as it could be, but it’s more power-efficient than the TLC-based WD Black SN770M and has more throughput. In fact, it’s the fastest 2TB M.2 2230 SSD we’ve ever tested. We expect the updated Corsair MP600 Mini would beat it, but the P310 has better availability and should be less expensive. It’s fast enough where it matters, which makes it the best option if you’re looking purely for capacity, but your host system should be able to take PCIe 4.0 drives to fully benefit.
Read: Crucial P310 SSD review
Corsair’s second run at the MP600 Mini, now with a faster controller and flash, is an example of how to do things right. It takes M.2 2230 SSDs to the next level in terms of performance while maintaining excellent levels of power efficiency. To top it off, it brings TLC flash at up to 2TB in a single-sided package. Previously, it was necessary to go with QLC flash - which in some cases is slower than TLC flash - or the power-hungry WD Black SN770M, which in any case isn’t as fast. This isn’t as big a deal with the PCIe 3.0 Steam Deck, as you can’t reach the full potential of today’s drives with that interface.
The new MP600 Mini comes at a price, though. Literally - it costs a bit more than the competition. The least expensive way to get this level of performance is to go with the Crucial P310, the best choice for M.2 2230 on any PCIe 4.0 platform if you want the highest capacity and 7 GB/s. For a 3.0 platform like the Deck and TLC flash, the Black SN770M remains solid. If you want the best performance possible, then the updated MP600 Mini is the way to go. For the time being, it is even good for M.2 2242 with an extender; otherwise, the native Rocket Nano 2242 will do the trick at 1TB.
Read: Corsair MP600 Mini (E27T) SSD Review
The WD Black SN770M is unique in that it offers 2TB of TLC NAND flash in the tiny M.2 2230 form factor in a single-sided design. This makes it optimal for use in the Steam Deck, ASUS ROG Ally, and other portable gaming/computing devices. Some of these can take double-sided drives or longer drives, but the most popular of them all - the Deck and Deck OLED - work best with this form factor. For a long time, it was only possible to get a drive with less-desirable QLC if you wanted 2TB, but with the SN770M, that compromise is no longer required.
This comes at a cost as the older hardware on the SN770M - which is the same as the popular M.2 2280 Black SN770 - pulls more power and puts out more heat. For regular gaming use, this wasn’t an issue in our testing. The difference in battery life is essentially negligible, and the drive is usually not pushed enough for its direct heat output to be an issue. Therefore, it offers the best baseline performance in this form factor for now, but QLC-based alternatives may be more affordable.
Read: WD Black SN770M Review
With the growing popularity of M.2 2230 SSDs, it was only a matter of time before we saw retail 2242 options. The Sabrent Rocket Nano 2242 is one of these, alongside the Corsair MP600 Micro. Alternatives include OEM and last-gen drives, like Sabrent’s original Rocket 2242, but some are double-sided. Not so with the Rocket Nano 2242, which will fit in the Lenovo Legion Go and many laptops with at least one M.2 2242 slot. It’s an easy drop-in solution with good performance and power efficiency.
The drive is currently available only at 1TB. However, with dual NAND packages, we expect larger capacity options in the future. M.2 2230 SSDs can also be extended for M.2 2242, but the 2TB options currently on the market all have their own drawbacks, except perhaps for the imminent Corsair MP600 Mini (E27T). Still, the Rocket Nano 2242 gives plenty of performance for portable devices as it stands and is an easy pickup for M.2 2242.
Read: Sabrent Rocket Nano 2242 SSD review
PNY had its heart set on producing a very fast RGB-capable SSD, and with the CS3150 XLR8, or CS3150, it succeeded. This PCIe 5.0 SSD also has a heatsink with dual fans to ensure it never overheats. PNY’s software allows control over the RGB and fans, with synchronization possible for the former if you have other PNY RGB products. The warranty is standard, but the drive does support hardware encryption via the TCG Opal 2.0 specification, which may be a selling point for some.
The CS3150 isn’t perfect, though. It’s expensive and can be difficult to find. It’s only available at 1TB and 2TB capacities, and it needs 2TB to hit its maximum performance numbers. There are also other drives equal to or faster than it, although for many workloads this isn’t particularly relevant. If RGB isn’t your thing, the drive also comes without RGB in both white and black variants. Regardless of the model you go for, the drive can operate without throttling, and its performance is good across the board.
Read: PNY CS3150 Review
The Corsair MP600 Pro LPX has come a long way since our original review. Its performance has been straightened out, and Corsair has also added an 8TB SKU. The drive has good availability and fair pricing, which is what helps it stand out in this category. While there are many drives similar to it - including our original pick, the Seagate FireCuda 530, which is now hard to find - Corsair has focused more on simply having the drive available. Throw in a heatsink - in black or white, to match your decor - and you have an attractive and very consistent all-around performer.
What makes this drive good for workstations is that it has a powerful, eight-channel controller with DRAM, in an era when four-channel DRAM-less drives are becoming more popular. PCIe 5.0 SSDs still carry too much of a premium. The MP600 Pro LPX also has tried-and-tested hardware that’s mature and reliable, which isn’t the case with some IG5236-based drives like some Silicon Power XS70s. Corsair also offers this drive from 500GB up to 8TB, which gives a ton of flexibility to suit your needs. It’s not power-efficient by today’s standards, but this isn’t a huge factor for workstations and HEDTs, making it a safe choice.
Read: Corsair MP600 Pro LPX Review
The Inland Performance Plus and the similar Gaming Performance Plus - the same drive, but with a heatsink - are typical of the early high-end PCIe 4.0 SSD heyday. The Phison E18 controller, with DRAM, has proved its mettle, but it feels a bit outdated with the rise of fast DRAM-less options and PCIe 5.0 SSDs. However, when it comes to consistent performance, drives like this are still a great pick for workstation workloads. Corsair’s MP600 Pro LPX is similar to the Performance Plus but is from a well-known brand and has a heatsink. The latter drive can save you some money while providing the same results, though.
The Performance Plus has pretty decent sustained performance - you have to check our Gaming Performance Plus review for that, as the former’s hardware was upgraded after launch - and a great warranty, to boot. This makes it perfect if you don’t need a heatsink, although you can add your own. The SSD market has changed significantly in recent years and at this time the Performance Plus is a great value if you’re looking for this sort of performance with a wide range of capacities. It’ll have to do, as alternatives aside from the more expensive SN850X and 990 Pro are DRAM-less or unoptimized.
Read: Inland Gaming Performance Plus 2TB SSD Review
At first glance, the Adata Legend 960 Max seems like just another drive among many. That’s true, as there are better drives in almost every category: there are faster drives, drives with more IOPS, more efficient drives, and so on. What the Legend 960 Max does right is typically of little interest to desktop users: it has good sustained performance and runs cool while maintaining that speed. It also has DRAM in a world where DRAM-less drives are becoming more popular and affordable but aren’t always ideal for heavier workloads.
The fact is, this drive is quite consistent, which is potentially useful for NAS and even workstation use. Its warranty doesn’t lag behind, and the addition of a heatsink means it’s ready to go right out of the box - or you can get the regular Legend 960 without a heatsink. It’s also pretty much the least expensive drive of this type, with DRAM, at 4TB, ignoring drives with problematic hardware like the Silicon Power XS70 or Adata S70 Blade. It’s one of those drives that goes unnoticed, which means that, at the right price, it could be a niche solution for a tucked-away server.
Read: Adata Legend 960 Max Review
The Addlink NAS D60 is a niche drive but fills its designated role pretty well. If you have a NAS system, a workstation, or other servers - whether for home lab use or SOHO - this drive may be worth looking at. Assuming your server can take an M.2 NVMe drive or two, the NAS D60 can do caching duty in tandem with mechanical hard drives or even be used in an all-flash array. Whichever way you go, some special features of this drive help it step away from other retail consumer drives, which justifies its price premium. But it’s still more affordable than full-out enterprise solutions.
The first thing that stands out about this drive is that it’s using enterprise-grade flash. Such flash is more reliable with higher baseline endurance. This lets Addlink extend the warranty to 1 drive write per day (DWPD), which is three times the retail standard. The second thing that stands out is that it has capacitors on-board for power loss protection. This means improved integrity for data-in-flight. Lastly, the NAS D60 foregoes any pSLC cache, which, while hurting all-around performance, does give more consistent sustained performance. This combination makes it particularly good for a write cache, singly or in RAID, for NAS and other systems.
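To put the endurance rating in context, a DWPD figure converts to total terabytes written (TBW) as capacity × DWPD × 365 × warranty years. Here's a quick sketch of that math - the 2TB capacity and five-year warranty are illustrative assumptions, and the function name is our own:

```python
def dwpd_to_tbw(capacity_tb: float, dwpd: float, warranty_years: int = 5) -> float:
    """Convert a drive-writes-per-day rating into total terabytes written."""
    return capacity_tb * dwpd * 365 * warranty_years

# 1 DWPD on a hypothetical 2TB drive over a 5-year warranty:
print(dwpd_to_tbw(2, 1.0))  # 3650.0 TBW
# versus the ~0.3 DWPD more typical of retail consumer drives:
print(dwpd_to_tbw(2, 0.3))  # 1095.0 TBW
```

The threefold gap in DWPD translates directly into a threefold gap in rated TBW at the same capacity.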
If you’re looking for a more traditional drive or one with a larger capacity option, the Adata Legend 960 Max remains viable. It also has a heatsink, which the NAS D60 lacks. Pick the NAS D60 if you want the higher TBW, the PLP, and/or the non-cache performance characteristics. Oh, and remember that the NAS D60 will not be very power-efficient if that’s a factor for you.
Read: Addlink NAS D60 SSD review
You can get a SATA drive in the M.2 form factor, but most SATA drives are 2.5-inch models, which allows them to drop into the same bays that hold laptop hard drives. SATA drives are the cheapest.
If you don’t want to dish out big bucks on something in the NVMe flavor but still want strong SATA performance, the MX500 is a great choice. As an alternative to the Samsung 860 EVO, it offers similar performance and has a strong history of reliability. Usually priced to sell, the MX500 is a top value at any capacity you need.
Read: Crucial MX500 Review
Restrained by the SATA interface, but still need the absolute highest endurance and performance you can get? As the pinnacle of SATA performance inside and out, Samsung’s 860 PRO is the SSD to buy.
Like the Samsung 970 PRO, the 860 PRO uses Samsung’s 64L MLC V-NAND, which helps propel it to the top of the charts in our rounds of benchmarking and makes for some incredible endurance figures. You can get capacities up to 4TB, and endurance figures can be as high as 4,800 TBW. But with prices that are triple that of your typical mainstream SATA SSD, the 860 PRO is mainly for businesses with deep pockets.
Read: Samsung 860 Pro Review
We use the same test system for all our SSD benchmarks. You can find the specifications in the boxout, and the short summary is that it's an Intel Alder Lake platform — chosen because it was the first platform to support PCIe 5.0 for expansion cards and M.2 slots. We have periodically looked at newer platforms, but Raptor Lake didn't change the results much if at all, and AMD's PCIe 5.0 platforms tend to be slightly slower than Intel's platforms.
We have a battery of benchmarks, each of which gets run multiple times. We use the best result from each test. Here are the charts of all currently tested SSDs (from the past three years, give or take). We froze Windows 11 at version 22H2 in order to keep the test results consistent — various security updates have had an impact on certain benchmarks over the years.
We've grouped the SSDs by capacity, beginning with the 4TB and larger drives, then the 2TB drives (which are easily the most popular and well-represented class in our testing), then the 1TB drives, and finally all the 2230 drives (in both Gen3 and Gen4 modes). We haven't tested any new 500GB-class or smaller SSDs in several years as that market is mostly dead for DIY upgrades these days.
Whether you're shopping for one of the best SSDs or one that didn't quite make our list, you may find savings by checking out the latest Crucial promo codes, Newegg promo codes, Amazon promo codes, Corsair coupon codes, Samsung promo codes or Micro Center coupons.
MORE: Best External SSDs and Hard Drives
MORE: All SSD Content
Nintendo has delayed opening pre-orders for the Switch 2 in the U.S., the company tells Tom’s Hardware, two days after the White House announced tariffs covering most nations on Earth.
“Pre-orders for Nintendo Switch 2 in the U.S. will not start April 9, 2025 in order to assess the potential impact of tariffs and evolving market conditions," the company told us in an emailed statement. "Nintendo will update timing at a later date. The launch date of June 5, 2025 is unchanged.”
The company announced the Switch 2 on Wednesday, with a launch price of $449. Nintendo manufactures the console in China and Vietnam, so even though the U.S. has previously threatened significant import duties on the former, it could still send hardware produced in the latter to North America, circumventing the increased levies Trump applied to China earlier in the year.
Nintendo was likely caught off guard by the nearly global tariffs the White House released just hours after the official launch of the Switch 2. Trump’s "Liberation Day" tariff announcements increased U.S. tariffs on Chinese goods to 54% — but Vietnam was also unexpectedly hit with a 46% duty.
This means that Nintendo’s announced price will likely change as the tariff rate for Vietnamese goods has drastically increased. This is unfortunate for the company and potential buyers, as the new handheld gaming console is already 50% more expensive than the original Switch. We still have hope that the company will be able to retain the originally announced price when it goes on sale on June 5.
Some Vietnamese journalists speculated that the 46% tariff that Trump applied to Vietnam was just a way for him to get the country to the negotiating table. Vietnam Deputy Prime Minister Ho Duc Phoc is traveling to the U.S. in the coming days, and if Hanoi can give the U.S. some concessions, it could potentially push the tariff rates lower.
But if the 46% import taxes on Vietnamese goods remain in place by June 5, then Nintendo will likely have no choice but to pass these taxes to the consumer and increase the console's retail price here. This will certainly be a disappointment to Nintendo's many fans. But with the sweeping nature of the tariffs, Nintendo is going to be far from the only company facing this problem.
This week, President Donald Trump imposed at least 54% tariffs on nearly all imported Chinese goods, and Beijing hit back with 34% duties for goods from the U.S. In addition to these taxes, Bloomberg reported that China is restricting the export of seven rare earth metals as part of its move against Washington.
The Ministry of Commerce for the People’s Republic of China is sanctioning the following materials: samarium, gadolinium, terbium, dysprosium, lutetium, scandium, and yttrium. Although these resources aren’t widely known to the public, they’re crucial for producing some of our most advanced technologies. For example, some of these materials are used in the magnets found in the motors of electric vehicles, while others are used for creating superconductors. Some are also used in storage media to improve efficiency and performance, and several more are found in nuclear reactors.
This announcement occurred several days before the additional levies Trump announced would take effect on April 9. China usually announces retaliatory actions against the U.S. after these taxes are already being charged on imports, giving the White House a chance to back down. However, it seems the country also made this countermove just one day before the April 5 deadline for TikTok to find an American buyer or face a ban in the U.S. It’s speculated that China made this move so it can have some aces up its sleeve during subsequent bilateral negotiations between the two countries.
Computer chips and copper are exempt from the newly announced tariffs, at least for now, which is great news for PC manufacturers. However, these levies affect the equipment and materials required for American-made processors, making building chips in the U.S. more expensive. Combined with China’s export restrictions on the rare earth metals these companies use, many chip makers will have to increase their prices as they scramble to find alternative (and likely more expensive) sources.
China’s average tariff rate on U.S. goods before this announcement was 17.8%, which is significantly less than the 32.8% rate it applies. However, Trump justified the jump in tariff rates for China as an answer to the non-tariff barriers it allegedly places on U.S. goods. We can only watch from the sidelines as almost all imports into the U.S., including silicon semiconductors, become more expensive due to these taxes. Hopefully, these companies and their customers can adjust to this new reality to ensure their survival.
Microsoft's Copilot AI tool has been available for a while as a dedicated application for Windows users. However, the latest Copilot update certainly shakes things up by introducing a new redesign that integrates it fully within Windows 11 and Windows 10. This change means that the Copilot AI system doesn't require you to access the web app to function. It also comes with improved performance and more user-friendly controls.
This new update was first available for testers in March, but now it's available not only to Windows 11 users but also to Windows 10 users, as long as they're running version 19041.0 or later. The latest update should provide better performance because the system operates locally and utilizes less memory. The new Copilot application is available for download via the official Microsoft Store.
Copilot was previously only accessible as a web-based application or by opening it from the sidebar. Now, you can pull it up quickly with a keyboard shortcut (Alt + Space) and adjust the window position and size to your preference. Because it's integrated into the Windows OS, it has access to system information unique to your PC, which Microsoft intends to use to enhance the individual user experience. That said, Copilot cannot operate the Windows 11 OS itself, though that isn't necessarily off the table for future updates.
According to Microsoft, the new update includes a new sidebar that you can use to initiate conversations with the chatbot and get answers faster, and Copilot now uses between 50MB and 100MB of RAM on average. Additional control options include a dedicated taskbar icon and what Microsoft calls "picture-in-picture" mode.
When comparing Copilot to other popular AI tools like ChatGPT, it's obvious this new update puts Copilot in a unique position. Instead of relying on external sources, the latest update keeps everything much closer to the OS. With a keyboard shortcut or voice command, you're just a step or two away from interacting with Microsoft's AI experience.
Again, the new update is available to Windows 10 and Windows 11 users. If you want to experiment with it yourself, head over to the Microsoft Store and download the new Copilot to see what it's all about.
The Asus ROG Strix X870E-E Gaming Wifi is an upper-mid-range offering you can find on Asus’ webstore (at the time of this writing) for $499.99, which is also the launch price. You know if it says ROG Strix, it will pack a lot of features, along with a signature premium appearance. Compared to the X670E version, the latest E Gaming model adds an additional M.2 socket, faster networking, various AI and DIY-friendly “Q” features, and a refreshed aesthetic, with improvements nearly across the board.
For under $500, Asus offers a variety of AI features, including AI Overclocking (an easy performance upgrade), AI Cooling II (one-click fan tuning), and AI Networking II (optimize network performance) to maximize the potential of the installed hardware. A multitude of EZ PC DIY functionality is also included, covering the M.2 sockets (Q-Release/Slide/Latch), troubleshooting (Q-LED/Code), Wi-Fi (Q-Antenna), and the slim PCIe Slot Q-Release.
Asus has improved its design, refining an already high-end aesthetic. The large VRM heatsinks enhance the look with a dot-matrix-like RGB-backlit ROG symbol and an Asus tagline on top. While it’s not significantly different from the previous generation, the design looks less busy, and better supports warm-running PCIe 5.0 M.2 devices.
In terms of connectivity and power, there’s also a lot to like. With 14 USB ports on the rear I/O (10 Type-A and 4 Type-C), five M.2 sockets (3 PCIe 5.0), robust power delivery, and fast networking, there isn't much that could improve speed without spending significantly more money. Performance of the ROG Strix X870E-E across our testing suite with default settings was average in most tests. It also proved to be a competent gaming option, demonstrating its all-around capability against the other motherboards we’ve evaluated on this platform.
Below, we’ll examine the board's details and determine whether it deserves a spot on our Best Motherboards list. But before we share test results and discuss details, we’ll list the specifications from Asus’ website.
Asus includes several accessories to help ease your building experience. From SATA cables to Wi-Fi antennas, it’s enough to get you going without a trip to the store. Below is a complete list of the extras.
The board’s overall design doesn’t change considerably from the previous version. However, it still exudes premium vibes and fits the profile of a premium motherboard. On the all-black, 8-layer PCB, oversized heatpipe-connected VRM heatsinks display an RGB ROG symbol shining through. Three larger heatsinks are positioned over the three PCIe 5.0 M.2 drives at the bottom. A plate-style heatsink with diagonal slats and hints of brushed aluminum covers the remaining M.2s, along with the chipset heatsink. A second RGB lighting strip concealed below illuminates the bottom of the board. Once again, we appreciate the enhancements over the previous generation, and there’s no doubt this setup will look impressive inside almost any chassis.
In the left corner, we see the dual ProCool II 8-pin EPS connectors powering the CPU. The oversized VRM heatsinks have no issues keeping the powerful VRMs below in spec. The ROG symbol design splits up the dual-finished (matte and smooth aluminum) cover top, along with the “For Those Who Dare” branding that you’ll find on both parts of the heatsink.
Moving right, we run into the Nitropath-equipped DRAM slots with locking mechanisms on both sides. Asus lists capacity up to 256GB with DDR5-8400+(OC) speeds for 8000 series APUs, while 7/9000 series desktop processors are slightly lower at DDR5-8000+(OC). Our DDR5-8000 kit didn’t work out of the box (it wasn’t in the QVL), but our Team Group DDR5-7200 kit worked without issue. The board also features Asus’ AEMP feature, which helps get the most out of memory kits without XMP profiles.
Above the DRAM slots, we find the first four of the seven 4-pin fan headers. Each header supports PWM- and DC-controlled devices, with 1A/12W of output per header. Although this is low compared to other motherboards, which typically offer at least one 2/3A header, you would only need to worry if you carelessly piggyback a couple of fans on the same header. Control over these devices is managed through Armoury Crate and AI Cooling II, which includes one-click fan tuning via a proprietary Asus algorithm.
Next are a couple of LED displays to aid in troubleshooting. At the top is the Q-Code LED, which shows more detailed codes, while the simpler Q-LED display is below. Both features function during the POST process, indicating the specific area (Q-LED - CPU, DRAM, VGA, BOOT) where the issue lies, and providing additional information (via Q-Code). These are always beneficial when problems arise, and especially important if you enjoy tweaking and pushing your system.
We encounter the first (of three) 3-pin ARGB headers along the right edge. Control over any integrated devices and those connected to the headers is managed through the Aura Sync software, which can be accessed via Armoury Crate. Below are the start button and the Flex Key (reset), which you can configure for quick-access features like Safe Boot or toggling the LEDs on and off with the button. Next, there is the 24-pin ATX connector to power the board, a front panel USB 3.2 Gen 2x2 (20 Gbps) Type-C header, and finally a 19-pin front panel USB 3.2 Gen 1 (5 Gbps) header.
Power delivery on the X870E-E Gaming features 22 phases, 18 dedicated to Vcore. Power flows from the EPS connector(s) to a Digi+ ASP2205 PWM controller, then to 18 Vishay SiC850A 110A SPS MOSFETs in the “teamed” power configuration we’re used to seeing from Asus these days. The available 1,980 Amps can easily support overclocked flagship-class processors, even with sub-ambient overclocking methods. Ultimately, the only limitation is CPU cooling on such a well-built board. If this sounds familiar, it’s the same solution found on the more expensive ROG Crosshair X870E Hero.
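That 1,980-amp figure is simply the combined rating of the 18 Vcore stages. A minimal sketch of the arithmetic (the function name is ours):

```python
def total_vcore_current(stages: int, amps_per_stage: float) -> float:
    # Theoretical combined current rating of the Vcore power stages
    return stages * amps_per_stage

# 18 stages rated at 110A each:
print(total_vcore_current(18, 110))  # 1980 A
```

Keep in mind this is a theoretical ceiling for the power stages themselves, not what the CPU will ever draw.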
Hiding under the SupremeFX shield at the bottom left corner is the current-generation Realtek ALC4080 audio codec. Helping things out are several premium dedicated audio capacitors (yellow), audio line shielding, and a Savitech SV3H712 Amp. Most users will be pleased with the audio solution.
In the middle of the board are two PCIe slots: The top one is CPU-connected and runs at PCIe 5.0 x16 speeds, while the bottom one, through the chipset, runs at PCIe 4.0 x4. Both slots are reinforced, with the top slot featuring Q-Release Slim technology for easily extracting your graphics card without needing a button. The card is secured with a standard (perhaps broader) clip that is spring-loaded and remains open by default, locking in place when the GPU is pushed down. As long as your graphics card is secured to the PC case, there’s no risk of it coming loose. To remove it, pull it up on the IO side of the card to dislodge it from the front (left) part of the slot. Additionally, the top slot can bifurcate and supports up to four x4 M.2s (or x4/x4/x8) using an add-in card.
Mixed in among the PCIe slots are five M.2 sockets. The top three sockets, M.2_1/2/3, are all CPU-connected and operate on PCIe 5.0 x4 (128 Gbps), supporting devices up to 80mm (M.2_3 supports 110mm). The bottom two slots accommodate 80mm modules but connect through the chipset, running at PCIe 4.0 x4 (64 Gbps). You have plenty of speed and sockets available. The first socket also has a multi-size connector for easier installation. There is lane sharing; when M.2_2 and M.2_3 are enabled, the primary PCIe slot reduces to x8. The E Gaming supports NVMe RAID0/1/5/10 with 9000 series processors.
Along the right edge, past the chipset, are four horizontal SATA ports (also supporting RAID0/1/5/10) and another 19-pin USB 3.2 Gen 1 (5 Gbps) header.
Across the bottom of the board are several exposed headers. You’ll find the typical stuff here, including additional USB ports, RGB headers, and more. Below is a complete list, from left to right.
The rear IO on the X870E-E Gaming is extremely busy with 14 total USB ports, dominating the space. Starting on the left is the HDMI video output followed by two USB4 (40 Gbps) Type-C ports. Two more Type-C ports line the bottom edge, and between those are the Clear CMOS and BIOS Flashback buttons. Above all of that are 10 USB 3.2 Gen 2 (10 Gbps) ports and the Realtek RTL8126 5 GbE. On the right are the Wi-Fi 7 module, the quick connect Q-Antenna, and the 2-plug plus SPDIF audio stack.
MORE: Best Motherboards
MORE: How To Choose A Motherboard
MORE: All Motherboard Content
Asus’ BIOS on the X870E-E Gaming looks the same, sporting the black, red, easy-to-read ROG theme we’re all familiar with. Asus starts in an Easy Mode that displays high-level information, including CPU and memory clock speeds, temperatures, fan speeds, storage information, etc. Advanced Mode has several headers across the top that drop down additional options. The new Q-Dashboard shows all the integrated connectivity. When hardware is connected, there’s a green circle next to it. The BIOS is one of my favorites, as any option you need is there, and anything you need frequently isn’t buried deep within menus.
Armoury Crate for the X870E-E Gaming follows the ROG-inspired theme. It houses several applications for various functions, including RGB lighting control, audio, system monitoring, and overclocking. The bundled software is also worth mentioning: when purchasing Asus motherboards, you receive a sixty-day AIDA64 license - a useful application for stress and performance testing. Asus’ Driver Hub (get your updated drivers here!) and a custom version of Hwinfo for real-time monitoring are also helpful. We’ve captured a few screenshots of the applications below.
We’ve updated our test system to Windows 11 (23H2) 64-bit OS with all updates applied as of late September 2024 (this includes the Branch Prediction Optimizations for AMD). Hardware-wise, we’ve updated the RAM kits (matching our Intel test system), cooling, storage, and video card. Unless otherwise noted, we use the latest non-beta motherboard BIOS available to the public. Thanks to Asus for providing the RTX 4080 TUF graphics card and Crucial for the 2TB T705 SSDs. The hardware we used is as follows:
Our standard benchmarks and power tests are performed using the CPU’s stock frequencies (including any default boost/turbo) with all power-saving features enabled. We set optimized defaults in the BIOS and the memory by enabling the XMP profile. For this baseline testing, the Windows power scheme is set to Balanced (default) so the PC idles appropriately.
Synthetics provide a great way to determine how a board runs, as identical settings should produce similar performance results. Turbo boost wattage and advanced memory timings are places where motherboard makers can still optimize for stability or performance, though, and those settings can impact some testing.
The X870E-E Gaming WiFi performed adequately across the synthetic benchmarks. It was average in many areas but struggled in the Procyon Office tests. The differences aren’t significant enough to cause concern, though it was still slightly slower than most other boards.
The board did fine in the timed applications. Nothing to worry about here.
Starting with the launch of Zen 5, we’ve updated our game tests. We’re keeping the F1 racing game but have upgraded to F1 24. We also dropped Far Cry 6 in favor of an even more popular and good-looking game in Cyberpunk 2077. We run both games at 1920x1080 resolution using the Ultra preset (details listed above). Cyberpunk 2077 uses DLSS, while we left F1 24 to native resolution scaling. The goal with these settings is to determine if there are differences in performance at the most commonly used (and CPU/system bound) resolution with settings most people use or strive for (Ultra). We expect the difference between boards in these tests to be minor, with most falling within the margin of error. We’ve also added a minimum FPS value, which can affect your gameplay and immersion experience.
The E Gaming proved to be a competent gamer, landing in the middle of the pack for the 3DMark and game tests. All good here.
Over the past few CPU generations, overclocking headroom has been shrinking on both sides of the fence while the out-of-the-box potential has increased. For overclockers, this means there’s less fun to have. For the average consumer, you’re getting the most out of the processor without manual tweaking. Today’s motherboards are more robust than ever, and they easily support power-hungry flagship-class processors, so we know the hardware can handle them. There are multiple ways to extract even more performance from these processors: enabling a canned PBO setting, manually tweaking the PBO settings, or just going for an all-core overclock. Results will vary and depend on the cooling as well. In other words, your mileage may vary. Considering all of the above, we’re not overclocking the CPU. However, we will try out our different memory kits to ensure they meet the specifications.
Memory testing went as expected, with the E Gaming rejecting our Klevv DDR5-8000 kit but happily running the Team Group DDR5-7200 kit after enabling the profile. There’s plenty of headroom left when using the right kit (stick to the QVL for the best results), but with AMD, you’re better off around DDR5-6400 with the tightest stable timings you can find. If you want to squeeze every last MHz out of your processor and prefer not to tweak it yourself, Asus’ AI Overclocking (Dynamic OC Switcher, Core Flex, PBO Enhancement, and clock gen for BCLK overclocking), part of the company’s intelligent features, will help you get there.
We used AIDA64’s System Stability Test with Stress CPU, FPU, Cache, and Memory enabled for power testing, using the peak power consumption value from the processor. The wattage reading is from the wall via a Kill-A-Watt meter to capture the entire PC (minus the monitor). The only variable that changes is the motherboard; all other parts remain the same. Note that we now present only stock power use and VRM temperature charts, as this section aims to ensure the power delivery can handle flagship-class processors.
The power consumption of the Ryzen 9 9950X is relatively low compared to the 7950X used for X670/X670E. In the past, high-end boards peaked at nearly 300W, but current systems now reach a maximum of 250 to 270W during CPU stress tests (gaming with the Nvidia RTX 4080 versus the RTX 3070 is a different matter). That said, the X870E-E Gaming peaked at 239W under load, with the CPU consuming around 141W. The idle power consumption for this board was 97W, which is among the higher results.
VRM temperatures on the E Gaming peaked at just under 51 degrees Celsius according to Hwinfo and 49 degrees Celsius on our hottest probe. The premium VRMs and the beefy heatpipe-connected heatsinks can tackle any load you throw at them, even an overclocked, high-power flagship like the Ryzen 9 9950X.
Wrapping things up, the ROG Strix X870E-E Gaming WiFi is a high-quality upper-mid-range motherboard. It offers fast connectivity, including USB ports, storage, and networking options. Along with the hardware, Asus provides one of the most comprehensive sets of AI tools and DIY-friendly features. So, if you’re not interested in making manual adjustments, the AI tools can help extract some extra performance. Priced at $499.99, it’s equivalent to its X670E counterpart but offers faster connectivity and is worth the upgrade at similar price points. It’s pretty impressive in isolation, but there is, of course, competition.
In the sub-$500 category, ASRock’s X870E Taichi is the least expensive direct competitor at $449.99. Gigabyte’s X870E Aorus Master is available for $479.99, while MSI’s X870E Carbon matches the Asus price at $499.99. If you need three PCIe 5.0 M.2 sockets, Gigabyte has you covered, but it only offers four total sockets, similar to the others. ASRock’s lower price point is appealing, but you get less, especially regarding AI capabilities. MSI’s Carbon might be the best-looking option, but it also falls short in the M.2 socket count.
That said, among this group, Asus provides the most well-rounded board. With impressive hardware specifications, various AI tools, and DIY-friendly features, you won’t find much better without significantly increasing your spend. However, the other boards offer fewer features or slower specifications. The ASRock will serve you well if you're not concerned about having the fastest or most advanced options. Nevertheless, if you seek the most comprehensive AMD-based solution with today’s modern features (at a sub-$500 price point, of course), the X870E-E Gaming WiFi is the motherboard for you.
Nvidia’s most affordable RTX 40-series desktop graphics card is starting to push toward $500 at U.S. retailers. Yet, despite its horrific pricing, the Nvidia GeForce RTX 4060 currently tops the best seller GPU chart on Newegg US, with an MSI Ventus model listed at $460 (and it "ships from Hong Kong"). Further down the chart, the only other RTX 4060 model in the top 20 is a triple-fan Gigabyte Eagle model at $455 (which ships from the US). These prices seem ridiculously high, but Newegg is still seeing enough demand to make this MSI model its best seller.
Of course, it's likely more of a statement on the lack of GPU supply rather than high demand for these graphics cards. The Blackwell RTX 50-series GPUs, along with AMD's RDNA 4 RX 9000-series and Intel's Battlemage Arc B-series, have all been rather difficult to find at anything approaching MSRPs. Inventory on the previous generation graphics cards also dried up late last year, with only the lower tier cards like the RTX 4060, RX 7600, Arc A750, and RX 6600 still remaining in stock at reasonable prices. We can scratch the 4060 off the "reasonable" list now, however.
When the RTX 4060 was launched at $299 in June 2023, it was panned by reviewers and enthusiasts for its significant compromises. Benchmarks showed that the intergenerational performance increase wasn’t exciting, and its 8GB VRAM and 128-bit interface meant the RTX 4060 suffered a significant VRAM and bus width deficit vs the previous 12GB RTX 3060. (Of course, Nvidia also released an RTX 3060 8GB late in the Ampere life cycle that perhaps made the 4060 look a little less terrible.)
Normally, when Nvidia is due to unleash the next-generation replacements for a graphics card family, consumers have a chance at a rare bargain. This is not the case in 2025, as the launch date for the RTX 5060-class GPUs approaches. Leaks and indications of a lift in 60-class pricing don’t help exert downward pressure on the RTX 4060 series. For example, Chinese etailers recently listed RTX 5060 and RTX 5060 Ti graphics cards for between $463 and $528 (prices converted from Chinese yuan, minus local sales tax).
Another reason the RTX 4060 desktop price seems so high is that you can grab full gaming laptop systems featuring the RTX 4060 laptop GPU for bargain prices. Of course a mobile variant isn't equivalent to a desktop PC model, but a couple of weeks ago we spotted an RTX 4060 packing MSI Thin A15 for $729. That’s just $270 more than the desktop GPU alone, and if you were starting out in PC gaming, that bargain laptop features a powerful Ryzen 9 8945HS processor, 16GB of RAM, and 1TB of storage, as well as the laptop screen, keyboard etc. – in a portable form factor.
That's not a direct substitute for a desktop graphics card if that’s what you want/need, but it illustrates the silly desktop card pricing. We must also comment that, unlike other Nvidia desktop/laptop GPU comparisons, the RTX 4060 laptop GPU actually offers relatively similar performance — the only major differences are clock speeds and power limits.
The law of supply and demand is a very powerful force in economics, and we frequently see this reflected in the PC components market. In the graphics card arena, the supply/demand balance has been unfortunate for PC DIYers and enthusiasts for several months now, chiefly due to AI, and we've seen similar issues in the past (e.g. due to cryptomining and Covid). It's not looking like things will be rectified shortly.
Having said that, we do feel that these RTX 4060 prices can’t be sustained. Nvidia and AMD are preparing to fill this market with new-gen ‘60’ cards shortly, and these are usually supplied in good quantities, as they are mainstream mass-market products. Faced with RTX 4060 cards approaching $500, we would definitely suggest waiting a few weeks for the next-gen RTX 5060 and RX 9060, with potential replenishments and restocks of ‘70’ series GPUs helping out, too.
If you're in a pinch, check out our graphics card hierarchy for the comparative performance of lesser-known GPUs from AMD and Intel. Perhaps, for your intended purposes, an AMD Radeon RX 7600/XT or Arc B580, B570, A770, or A750 might be a satisfactory substitute for an RTX 4060. Check our reviews and see how alternatives work in your favorite games and apps. As a stopgap, the AMD Radeon RX 6600, currently sitting in fourth place on Newegg at $210, isn't a terrible choice, either.
For now, the sweeping import tariffs enacted by the U.S. government do not tax imports of semiconductors. Still, the Trump administration is preparing separate tariffs for sectors like semiconductors, pharmaceuticals, and possibly critical minerals. Apparently, Trump expects to impose tariffs on chips 'very soon.'
"The chips [tariffs] are starting very soon and the pharma is going to be starting to come in […] sometime in the near future," said President Trump at a White House briefing.
Earlier this year, Trump threatened to impose 25%, 50%, or 100% tariffs on semiconductors produced in Taiwan but remained quiet after TSMC committed to investing an additional $100 billion in its production capacity and an R&D center in the U.S.
However, the sweeping tariffs imposed on goods from Europe, Japan, South Korea, and Taiwan have made chipmaking tools 20% to 32% more expensive for American chip producers, raising their costs. It is therefore logical for the U.S. government to impose tariffs on foreign semiconductors to somewhat level the playing field between imported and domestically made chips in the American market.
It remains to be seen how the Trump administration's chip tariffs will look and whether there will be sweeping tariffs on all chip exports or some kind of differentiation.
Sweeping tariffs on all chips produced outside of the U.S. will hurt the best and most profitable American chipmakers, such as AMD, Broadcom, Intel, Nvidia, and Qualcomm, which make the lion's share of their products at TSMC in Taiwan.
For example, if there is a 25% tariff on an Nvidia AI GPU that the company sells for $50,000 with a 75% gross margin, then Nvidia will have to declare a value of $12,500 and pay an import duty of $3,125. Such a tariff will either hurt Nvidia's margins or make its GPUs more expensive for buyers in America. For Elon Musk's next-generation data centers, which are set to contain a million GPUs, this means $3.125 billion of additional costs.
Of course, there are cheaper products sold at a lower margin. For example, a $200 CPU is sold with a 40% to 50% gross margin across the supply chain. So, a 25% tariff could increase the retail price of such a CPU to $225, a significant increase, if not absorbed by the CPU producer and the supply chain.
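The math in these two examples can be sketched as a quick calculation (assuming, as the examples imply, that duty is assessed on the declared customs value rather than the retail price):

```python
def import_duty(declared_value: float, tariff_rate: float) -> float:
    """Duty owed on a shipment's declared customs value."""
    return declared_value * tariff_rate

# Nvidia AI GPU example: a $50,000 sale price at a 75% gross margin
# implies a declared (cost) value of 25% of the sale price.
gpu_declared = 50_000 * (1 - 0.75)          # $12,500
gpu_duty = import_duty(gpu_declared, 0.25)  # $3,125 per GPU

# Scaled to a hypothetical million-GPU data center:
fleet_cost = gpu_duty * 1_000_000           # $3.125 billion

print(gpu_declared, gpu_duty, fleet_cost)
```

The same function applies to the cheaper-CPU case; the only difference is how much of the duty each party in the supply chain chooses to absorb.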
With the current sweeping 'reciprocal' tariffs set on exports from all the U.S. trade partners, numerous analysts have shown that the U.S. government used a straightforward method to set each rate: it divided the U.S. trade deficit with each country by the total value of imports from that country. Hopefully, the Trump administration will use a more sophisticated method for calculating semiconductor import taxes.
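That reconstruction boils down to a single division. A minimal sketch, using illustrative figures rather than official trade data:

```python
def reciprocal_tariff(imports_from_country: float, exports_to_country: float) -> float:
    """Analysts' reconstruction of the formula: the bilateral trade deficit
    divided by the total value of imports from that country. (The published
    schedule reportedly halved the resulting ratio; that step is omitted here.)"""
    deficit = imports_from_country - exports_to_country
    return deficit / imports_from_country

# Illustrative numbers in billions of dollars, not official trade data:
rate = reciprocal_tariff(imports_from_country=438.9, exports_to_country=143.5)
print(round(rate, 2))  # 0.67, i.e. a 67% "reciprocal" figure
```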
Today, G.Skill announced two new high-speed and high-capacity DDR5 memory kits to compete against the best RAM. Tailored toward enthusiasts under the Trident Z5 RGB and Z5 Royal Neo series, the latest offerings include what G.Skill claims is the world's first 128GB (64GBx2) memory kit running at DDR5-8000 speeds alongside a 64GB (32GBx2) DDR5-9000 offering for those prioritizing speed. There's no word on pricing and availability, though we expect more details to be unwrapped in the future.
128GB memory kits, typically configured as 64GBx2 for optimal speed, are widely available at speeds ranging from DDR5-5600 to DDR5-6000. G.Skill's latest Trident Z5 Royal Neo 64GB modules push well past that ceiling: they are advertised to hit DDR5-8000 speeds when overclocked.
The "Neo" designation indicates that this memory kit is designed and optimized for AMD's AM5 platform. Likewise, the 32GB DDR5-9000 modules are intended for Intel's Arrow Lake platform. However, the press release doesn't specify if these kits are CUDIMM-based.
G.Skill prepped a test bench featuring an AMD Ryzen 9 9950X on an Asus ROG Crosshair X870E Apex motherboard. The attached validation screenshot shows the 128GB Trident Z5 Royal Neo kit running at DDR5-8000 with CL44-58-58 timings. If tighter timings are your priority, G.Skill recently introduced a 96GB DDR5-6400 kit with CL30 latency. However, the company hasn't explicitly stated support for AMD's EXPO technology.
Moving over to the Intel setup that was outfitted with the Core Ultra 7 265K and the Asus ROG Maximus Z890 Apex, the 64GB Trident Z5 RGB counterpart operates at an impressive 9000 MT/s with CL48-64-64 timings under MemtestPro 4.0. Intel's Arrow Lake processors officially support DDR5-5600 for standard DIMMs and DDR5-6400 for CUDIMM. Anything above that is out of JEDEC specifications and might require some tuning to ensure stability.
Likewise, Raphael and Granite Ridge work best at DDR5-6000 and DDR5-6400 speeds, respectively. Achieving DDR5-8000 on AM5 typically requires a 1:2 ratio, with the UCLK running at half the MCLK frequency. Your mileage may vary depending on your motherboard and CPU's memory controller, but these are impressive results. When purchasing high-end kits, ensure your motherboard is on the Qualified Vendor List of your RAM kit for compatibility. Keeping your BIOS up to date can also enhance stability.
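The clock-domain relationship described above can be sketched numerically (a simplified model; the function and field names are our own, not AMD's):

```python
def am5_clocks(ddr5_mt_s: int, uclk_ratio: float = 0.5) -> dict:
    """Simplified AM5 clock domains: MCLK is half the DDR transfer rate,
    and in the 1:2 mode typically needed for DDR5-8000, UCLK runs at
    half of MCLK."""
    mclk = ddr5_mt_s / 2
    return {"MCLK_MHz": mclk, "UCLK_MHz": mclk * uclk_ratio}

print(am5_clocks(8000))                   # 1:2 mode: MCLK 4000, UCLK 2000
print(am5_clocks(6000, uclk_ratio=1.0))   # 1:1 mode at the DDR5-6000 sweet spot
```

The latency penalty of running UCLK at half speed is why DDR5-6000 in 1:1 mode often remains competitive with much faster kits on AM5.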
Although the new import tariffs imposed by the Trump administration do not tax semiconductor imports, they do tax imports of wafer fabrication equipment (WFE) made outside of the U.S. and used by American chipmakers. As a result, companies like Intel, GlobalFoundries, Samsung Foundry, and TSMC will have to pay at least 20% more for chipmaking tools in the U.S. than they do in other countries, which will likely affect the prices of chips made in America.
Although the U.S.-based Applied Materials, KLA, and Lam Research control about 50% of the chipmaking tool market, producers of fab tools from China, Europe, Japan, South Korea, and Taiwan command the remaining 50%. U.S.-based chipmakers hardly use tools made in China, but they certainly use lithography, etching, and deposition equipment produced in the EU, Japan, South Korea, and Taiwan. Starting from April 9, they will have to pay a 20% to 32% import tax (depending on the origin of the equipment) when buying semiconductor production equipment from companies in these countries, which will inevitably affect their costs in the U.S.
Given that equipment costs determine the costs of chips more than anything else, expect a tangible increase in production costs in America. Then again, if the Trump administration imposes additional tariffs on chips made elsewhere, this will somewhat level the costs of chips produced overseas and domestically. However, this will not make chips made in the U.S. more competitive on the global market.
The 20% import tax on lithography tools made by ASML will probably have the biggest impact on American chipmakers, as ASML does not really have U.S.-based rivals when it comes to advanced immersion DUV litho tools as well as Low-NA EUV and High-NA EUV litho machines. While a 20% rate seems modest compared to the 54% tariffs on Chinese imports, it applies to extremely costly equipment.
ASML's advanced immersion DUV (ArF) machines used for sub-10nm fabrication processes cost $82.5 million per unit on average (based on the company's Q4 FY2024 results), a Low-NA EUV system is priced around $235 million depending on configuration, and its upcoming High-NA EUV tool is expected to cost $380 million. With the new import duties, those same tools will cost American chipmakers $99 million per Twinscan NXT:2000i-series DUV machine, $282 million per Twinscan NXE:3800E Low-NA EUV system, and $456 million for next-generation Twinscan EXE:5000-series High-NA EUV tool.
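As a quick sanity check on those figures, applying the 20% duty to the per-unit prices quoted above:

```python
TARIFF = 0.20  # duty on EU-made equipment under the new schedule

tools_m_usd = {  # average/estimated price per unit, in millions of USD
    "Twinscan NXT:2000i-series DUV": 82.5,
    "Twinscan NXE:3800E Low-NA EUV": 235.0,
    "Twinscan EXE:5000-series High-NA EUV": 380.0,
}

with_duty = {name: round(price * (1 + TARIFF)) for name, price in tools_m_usd.items()}
for name, price in with_duty.items():
    print(f"{name}: ~${price}M after the 20% duty")
```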
It is noteworthy that ASML produces not only lithography scanners but also metrology and inspection tools, which will also become more expensive for American chipmakers. In fact, some of ASML's HMI eScan e-beam tools are assembled in Taiwan, and they will be 32% more expensive for American buyers. The good news is that ASML can expand the production of HMI tools in San Jose, California.
In addition, other European fab equipment makers will be affected, such as ASM International, which makes atomic layer deposition (ALD) and epitaxy tools, and Aixtron, which makes deposition equipment for compound semiconductors (GaN, SiC, etc.), materials that are also processed in the U.S. by companies like Macom and X-Fab.
When it comes to cleaning, coating, deposition, and etching, tools from Japan-based companies like Tokyo Electron and Screen Holdings are quite popular on the market, and they are set to get 24% more expensive for American companies.
Given that wafer fab equipment from Europe and Asia has just gotten 20% to 32% more expensive for American buyers, a natural question is whether these tools can be replaced with ones produced in the U.S.
ASML has only two big rivals, Canon and Nikon, both based in Japan, and neither produces EUV systems or comparably advanced DUV scanners anyway, so American fabs will have to pay 20% extra for ASML tools for the foreseeable future. The actual impact on production costs will depend on whether a particular product is more lithography-intensive (DRAM, logic) or more etching- and deposition-intensive (3D NAND).
As for cleaning, coating, deposition, and etching tools, Applied Materials, KLA, and Lam Research build world-class machines for these steps. However, if a particular process technology and production flow already integrates a certain system from Tokyo Electron, then switching to a different tool is extremely complicated and takes time. Also, some tools are unique and are designed for a very specific process or flow, which makes their replacement even more complicated.
In general, this sudden increase undermines the significant capital investment plans of companies like Intel, TSMC, and Samsung Foundry, all of which are building or expanding advanced fabs that cost tens of billions of dollars in the U.S. The impact will not be limited to leading-edge nodes. Companies using mature and specialty manufacturing technologies, including GlobalFoundries and Texas Instruments, will also pay more for their needed equipment. In the end, the added costs will push up the price of chips produced domestically.
Since Apple connected its PCs with smartphones and tablets using its macOS and iOS operating systems to provide a seamless user experience some 10 to 15 years ago, multiple attempts have been made to replicate similar capabilities with Windows-based PCs. One such attempt is Intel's Unison app, which was released in early 2022 and will be discontinued this June, reports Neowin.
"Intel Unison will soon be discontinued," reads a statement by Intel in Apple's App Store, Google's Play Store, and the Microsoft Store. "The first step in its wind-down process is ending service for most platforms at the end of June 2025. Lenovo Aura platforms will retain service through 2025."
Intel's Unison allows users to make phone calls, send text messages, get notifications, and transfer files and photos between Android and iOS handsets and Windows 11 PCs. The app is a part of Intel's Evo program to improve the user experience with premium Windows 11 PCs running its 12th-Gen Core processors or newer. However, the company no longer sees the app as one of its competitive advantages.
Intel did not disclose why it decided to discontinue its Unison app. Perhaps this is part of the company's broader cost-cutting strategy, and if so, we could see Intel dropping support for other software efforts in the coming months. Recently, Lip-Bu Tan, Intel's new chief executive, said that the company planned to cancel or spin off operations that no longer fit its core strategy, and apps like Unison barely do. While Unison certainly improves the user experience with Intel-based PCs, it is not an exclusive app, and maintaining a large software portfolio costs money that Intel would rather spend on developing its core products.
This is perhaps because Intel's Unison is not a unique app, as multiple programs connect smartphones with Windows PCs. Microsoft offers Phone Link, and Samsung has its own version called Flow. Dell has tried to offer its own Mobile Connect app, but it did not work flawlessly with iPhones, so it discontinued it somewhere along the line, clearing the road for Intel's Unison and Microsoft's Phone Link. Although Intel's Unison could be a fine app, Microsoft's Phone Link has better compatibility as it works with virtually all PCs running Windows 10.
Did the semiconductor manufacturing world just shrink a nanometer or two? Intel and TSMC have reportedly reached a preliminary agreement to create a joint venture to operate Intel's fabs in the U.S.
The news comes from Reuters, which cites a report from The Information based on two sources familiar with the matter. Intel has not yet commented on the matter; Tom's Hardware has reached out to the companies for more information.
Under the terms of the agreement, TSMC is said to own 20% of the joint venture. It is unclear which companies will own the remaining 80%, but earlier this year TSMC reportedly approached multiple leading fabless chip designers headquartered in the U.S. — including AMD, Broadcom, Nvidia, and Qualcomm — about investing in the joint venture, which would own multiple fabs in America. Both Nvidia and a TSMC board member later denied the discussions.
This arrangement was reportedly influenced by the U.S. government, specifically the White House and Department of Commerce, as part of efforts to address ongoing operational difficulties at Intel.
U.S. authorities view the partnership as a means to stabilize Intel: the IDM 2.0 strategy has faced multiple challenges, and the company has so far become a leader in neither products nor semiconductor production technologies. At the same time, the current U.S. government will not support the sale of Intel's fabs to a foreign investor, especially TSMC.
At this point it is unclear what exactly TSMC's involvement would be with Intel's American fabs — which cost tens of billions of dollars — many of which can only be used to make processors for Intel (including fabs capable of producing on Intel 3 and Intel 4 process technologies) and only one or two of which can make processors on Intel's 18A fabrication technology.
It is also unclear how TSMC's plan to own 20% of Intel Foundry aligns with its own plans to invest $165 billion in its Arizona Fab 21 site to make chips for its partners, including Apple.
The financial markets responded quickly to this news. Intel's stock price increased nearly 7% after the report surfaced, helping the company recover some of the market capitalization it lost after the new U.S. import tariffs were announced. By contrast, shares of TSMC traded in the U.S. dropped by about 6%, highlighting differing investor reactions to the deal.
As both Intel and TSMC are in their quiet periods, they cannot make any comments regarding future plans or even factors that can impact them materially, but we have reached out for comment.
Finding the best PS5 SSD can be daunting due to the wide variety of choices. Plenty of SSDs will work in the PS5 and provide a simple and hassle-free capacity upgrade for your game library, but which ones rise above the crowd? To narrow down the options, we've tested many drives in a battery of tests, including many models from our SSD benchmarks hierarchy. From these, we've picked the best SSDs for the PS5 based on performance and price at several different capacities.
Nearly any new drive you buy for the PC can also be used in the PS5, so you'll also find many of these same picks on our list of Best SSDs for desktop PCs. You can also use everything from a tiny M.2 2230 drive up to a longer M.2 22110 model in the PS5, but there's no real benefit from choosing the other form factors. M.2 2280 SSDs are ubiquitous and typically offer the best combination of value, performance, and capacity.
The PS5's internal SSD is a restrictive 825GB (or 1TB on the PS5 Slim), and after formatting, updates, and bloatware, it typically leaves you with about 670GB free for games. That's bad news because today's games are becoming larger with each new release, and you'll also need somewhere to store all the screenshots and video clips you gather while you play. Call of Duty, as an example, can use more than 200GB all by itself!
The good news is that Sony has an M.2 expansion slot where you can put a second SSD for the PS5, and the current system firmware allows you to use SSDs with up to 8TB of capacity. That's hopefully enough storage to satiate even the most demanding of gamers, but there are also far more affordable options, with modern 2TB and 4TB models being particularly attractive choices for the PS5.
Here's the quick list of the best SSDs for the PS5, but we have further breakdowns and testing results below. There are also similar drives in some cases, with effectively the same hardware, and we'll list those alongside our primary selections. When searching for the best SSD for the PS5, you'll want to be careful about which drive you pick. The Samsung 990 Pro and WD SN850X are great SSDs for the PS5, though pricing has been trending upward for the past several months — on all SSDs. The SN850X also comes as an SN850P that's just an overpriced SN850X with a different heatsink and PlayStation 5 branding.
We've broken things down by category, with our top picks being the WD Black SN850X, SK hynix Platinum P41, and Samsung 990 Pro. For capacity- or budget-minded shoppers, we also have the Acer GM7000, Silicon Power US75, and Netac NV7000. Which drive will fit your particular needs best depends on what you're after, so we list multiple alternatives for most categories and SSDs.
WD took its popular Black SN850 SSD and turned it up to 11, but luckily the price isn't nearly so extreme. The current $124 price on Amazon for the 2TB model is a great deal, though other capacities may not be as attractively priced. The 4TB drive at $249 is worth a look for those who want more capacity; the 8TB drive at $549 is also worth a thought if you really want all the capacity you can pack into that M.2 slot — and it's now $300 off the original launch price.
The Black SN850X leverages an improved controller and newer flash to get the most out of the PCIe 4.0 interface, thus delivering excellent performance with the Sony PlayStation 5. Performance is improved across the board, and the drive comes with a heatsink option at all capacities. You'd be better served by a purpose-built PS5 heatsink, however.
WD also supports the SSD with a respectable five-year warranty that will let you game with peace of mind. It's a great match for the PlayStation 5, and while it can be a bit pricier than budget options, overall it's still our top pick. It's also fast for gaming on a PC, particularly with DirectStorage starting to become useful.
WD has taken the course of releasing an officially-licensed SN850P. That drive is a glorified heatsinked SN850X and should only be picked if you really want the PS5 logo on your heatsink for whatever reason. It's far less expensive to get a bare SN850X and add your own heatsink.
Read: WD Black SN850X Review, WD Black SN850X 8TB Review
The PNY CS3140 replaces the SK hynix Platinum P41 as our alternate pick for a variety of reasons. For one, the Platinum P41 has had widespread SLC caching issues when used for a prolonged period of time, which can reduce write performance. This isn’t a huge deal when it’s used in a PS5, but the Platinum P41 is also limited to 2TB and its cousin, the Solidigm P44 Pro, is effectively being removed from the market. The CS3140, instead, uses common hardware, goes up to 4TB, and is currently well-priced at most capacities.
If there’s something to knock, it’s the lack of a heatsink, but PNY does sell one for the PS5 for around $11. Alternatively, the Kingston Fury Renegade with heatsink will work well. That drive is priced competitively at 4TB, as is the heatsink-less MSI Spatium M480 Pro. While there are many DRAM-less alternatives as well, if you want the most powerful drive with DRAM — aside from the WD Black SN850X and Samsung 990 Pro, which cost more — the CS3140 is a good choice right now.
Read: PNY XLR8 CS3140 SSD Review
If you want the largest possible SSD for your PS5, look no further than the WD Black SN850X 8TB. 16TB drives aren't really a thing in the consumer space, likely because even 8TB drives remain relatively niche parts with a higher price per TB of capacity than 4TB and 2TB drives. But the SN850X online prices have dropped quite a bit since it initially launched with an $849 MSRP.
In our PS5 test suite, the SN850X 8TB was effectively just as fast as any other drive. The PS5 doesn't support PCIe 5.0 speeds and the internal drive ends up being a bottleneck for both the copy to and read from tests that we run. That means you not only get maximum capacity but also maximum performance.
What's not to love? The price. $579 for 8TB isn't terrible, but that's more than a PS5 costs on its own. $649 for the heatsink version is a bit of a joke, since you can put on your own $10 to $15 heatsink instead (albeit without the WD Black branding). It's also a double-sided drive, which means the underside can run a bit hotter if you're doing a bunch of writes, but most writes will be limited by your internet connection, so it's not really a concern.
Read: WD Black SN850X 8TB review
The Samsung 990 Evo Plus pairs Samsung's in-house Piccolo controller with newer, higher-capacity 236-layer (V8) V-NAND. Performance ends up being very good, particularly if you're planning on putting it into a PS5, and you can get up to 4TB of capacity in a single-sided drive.
Samsung is also about as well-known and reliable an SSD brand as you can find. There have been issues with a few models over the years, but the 990 Evo Plus has so far been free of any notable problems. Whether it's in a PS5, laptop, or desktop, the 990 Evo Plus is efficient and fast.
While our primary capacity pick has double the storage, for many users 4TB of storage in a PS5 will be "enough" to hold a sizable gaming library.
Read: Samsung 990 Evo Plus review
The Teamgroup MP44 is, and has been, one of our favorite budget drives. This is true for desktops, laptops, and the PS5. It tends to be competitive at most capacities with a wide capacity range, and in our review it performed very well overall, with plenty of speed for this console. There are many decent competitors, like the Patriot Viper VP4300 Lite, the MSI M482 at 2TB, and our old choice for this category: the Silicon Power US75. All of these, including the MP44, are DRAM-less, but that's not a big deal for gaming.
So, why the change? Well, unfortunately, many models will change hardware over time and often not for the better. There have been reports of QLC flash on the VP4300 Lite and US75, to name two, but the MP44 has stuck with TLC flash so far. This doesn’t mean the hardware hasn’t changed on the MP44 — an alternative controller to the Maxio MAP1602, the Phison E27T, has been reported, as has different TLC flash — but the drive still largely performs as tested and expected. This means good performance and high power-efficiency, to the point where you might not even need a heatsink.
Read: Teamgroup MP44 review
The Teamgroup MP44L is another one of our favorite budget SSDs and, like the MP44, it’s replacing a drive formerly on this list. In this case it’s replacing the Netac NV7000 and for similar reasons: the NV7000 has multiple hardware configurations, including some with the InnoGrit IG5236 controller that has had reliability issues. The MP44L is far from perfect as it’s slower than the NV7000 and is limited to 2TB, but on the other hand the availability of lesser capacities — and the fact it’s an inexpensive drive — makes it great if you have a very tight budget.
The performance difference is there as the MP44L just barely meets the PS5’s criteria, but in real-world use your experience will be pretty similar among the various drives. You also get the benefits of better power efficiency and less heat production, which means it can be run without a heatsink. The MP44L is certainly starting to feel obsolete but the PS5 itself is entering its fifth year and the technology doesn’t really need more than an SSD of this caliber. There are certainly better drives and they can be found on this list, but if cost is your top priority you shouldn’t overlook this drive.
Read: Teamgroup MP44L review
Swipe through the galleries for different capacities
Some of the best SSDs for the PS5 are either specifically designed for the console, or come with an integrated heatsink. However, some drives don't come with a heatsink, so we equip them with the Sabrent M.2 NVMe heatsink for the PS5 to both meet the requirements for the PS5 and to ensure a level playing field. We've found that this cooler is a great solution if you're looking for a cheap, versatile, and easy-to-install solution. There are other similar heatsinks, like the SK hynix Haechi H01 that will work just as well (though apparently neither of those work with the newer PS5 Slim).
The Sony PS5 has an internal benchmark that measures how fast the system can read data from the drive. This is the most critical performance metric for gaming, as speedy read response times help ensure a smooth gaming experience. As you can see in the 'PS5 Read Benchmark' column above, the fastest SSD in our test pool was 90% faster than the slowest model. Sony will even flag performance as potentially inadequate if the read score is below about 4,000 MB/s. However, this read test only takes a few seconds and basically shows the burst speed of the SSDs, so it's quite synthetic in nature.
Real-world tests show much smaller differences. For instance, our 'Copy to M.2' benchmark times how long it takes to move four games totaling more than 200 GB (we use Mass Effect: Andromeda, Assassin's Creed Valhalla, Elden Ring, and Astro's Playroom) from the internal PS5 SSD to the expansion drive. In most cases, we only see a difference of a few seconds, and, converting to MB/s, the gap between the fastest WD Black SN850X and the Solidigm P41 Plus is only 10%. But then there's a pretty big step down to the Samsung 990 Evo and the various Phison E27T-equipped SSDs, followed by the Transcend 250H, and in dead last (for now) sits the Inland TN470 1TB, another Phison E27T drive. How much will this matter in terms of gaming performance? Probably not at all, but when you move a bunch of data from the integrated SSD to the M.2 drive, it will take longer.
On the flip side, we also tested this process in reverse, moving the four games back to the internal drive for our "Transfer From M.2" benchmark. Here, the sustained write speed (and encryption/security protocols) of the integrated 825GB SSD becomes the limiting factor, and there's only a 5.6% difference between the fastest and slowest SSD we've tested. The internal 825GB SSD only appears to write data at up to ~250 MB/s, and all of the M.2 SSDs are easily able to maintain read speeds much higher than that figure.
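As a rough sanity check on what those sustained rates mean in practice (hypothetical round numbers, not our measured results):

```python
def copy_minutes(total_gb: float, rate_mb_s: float) -> float:
    """Time to move a game library at a given sustained rate (1 GB = 1000 MB)."""
    return total_gb * 1000 / rate_mb_s / 60

# ~200 GB of games written back to the internal drive at ~250 MB/s:
print(round(copy_minutes(200, 250), 1))    # roughly 13.3 minutes

# The same library copied out to a fast M.2 drive (hypothetical 2,500 MB/s):
print(round(copy_minutes(200, 2500), 1))   # closer to 1.3 minutes
```

This is why moving games onto the expansion drive is quick, while moving them back is a coffee-break job regardless of which SSD you buy.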
Likewise, real-world testing (i.e. launching games) has failed to expose meaningful differences between the drives — it's common to see at most a one to two second difference between drives in game load times. Other testing we've seen from multiple outlets indicates very few meaningful differences, if any, for game loading times. Overall, you're unlikely to notice the speed difference between most PCIe 4.0 SSDs and could make a good argument for simply selecting the most cost-effective drive that meets the capacity target that you want — 4TB and 2TB drives are particularly popular.
Naturally, not all of the drives that we test will make the final cut for our list of Best SSDs for the PS5, but that doesn't mean those drives failed the test, or wouldn't be a great deal if you can catch them on sale. The Solidigm P44 Pro is a great SSD that delivered respectable performance in our PS5 SSD benchmarks (it's the same hardware as the SK hynix Platinum P41), and given the slim difference between the fastest and slowest SSDs on our list, it could make a great drive if the price is right. The only thing we'd try to avoid is any SSD that uses QLC NAND, as those drives can slow down significantly as they're filled to capacity.
There's also no real benefit at present to selecting any of the PCIe 5.0 SSDs, as they tend to use more power than PCIe 4.0 drives, making them a poor choice for the PS5. Given current prices, there's no real purpose in using a PCIe 5.0 drive for your PlayStation 5, though we've included results from several of the newer models (on the assumption that price differences between PCIe 4.0 and 5.0 drives will shrink over time).
Which SSDs are compatible with the PS5? Luckily, finding a spacious PS5 SSD to complement your console's internal drive isn't too difficult: any PCIe 4.0 SSD that provides a minimum of 5,500 MB/s of throughput over the NVMe interface can be used as a PS5 SSD, provided it comes with a heatsink that doesn't take the overall height above 11.25mm. Even slower SSDs will be perfectly fine (PCIe 4.0 is still required), though the PS5 may warn you about the potential for reduced performance if you opt for such a drive.
Do you absolutely need a heatsink for a PS5 SSD? Sony says yes, and you can easily add your own heatsink to SSDs that aren't marketed specifically for the PS5. You can also use one of the best external drives with the PS5 to store games, but these are only for game storage — you'll need an internal expansion drive to actually play the games.
What size of SSD should you buy for the PS5? You might be fine with a 1TB drive, but we recommend selecting a 2TB or 4TB model due to the current low pricing trends for these models. Besides, who wouldn't want more storage for extra games?
Ultimately, the best drive for your PS5 is one that provides enough capacity to hold your games and data at a price you can afford. To help you choose, we've tested a number of the top SSDs in our labs — see the results further down the page — and pulled out the top performers for a list of the Best PS5 SSDs.
The Sony PS5 requires an M.2 SSD that communicates over the NVMe protocol. Officially, you'll need a PCIe 4.0 x4 model that can deliver up to 5,500 MB/s of sequential read throughput. In practice, you can use slower PCIe 4.0 SSDs, and you'll just get a warning that performance may be inadequate — note that PCIe 3.0 models are explicitly not supported. The console supports 250GB, 500GB, 1TB, 2TB, 4TB, and 8TB models.
These small, rectangular drives look like sticks of RAM, only smaller, and the PS5 accepts both single-sided and double-sided versions. You'll also need to ensure that your drive has a cooling solution pre-applied. These can consist of thin copper heat spreaders that look like a label, or a full-fledged metal heatsink with a thermal pad.
Not all of the best SSDs for the PS5 come with a heatsink, but you can easily use your own double- or single-sided heatsink. We recommend the Sabrent M.2 NVMe heatsink for the PS5, which actually replaces the outside SSD panel on the PS5 with a heatsink, giving the SSD access to cooler air from outside the system. It's cheap, versatile, and easy to install, but there are many options on the market. For instance, TeamGroup's new TForce AL1 heatsink, which operates similarly, is coming to market soon.
Just make sure the SSD doesn't exceed 110 x 25 x 11.25mm. M.2 SSDs are usually 80mm long by 22mm wide, described as size 2280, but some may be shorter or longer. The PS5 supports M Key Types 2230, 2242, 2260, 2280 and 22110. Some M.2 drives use the SATA interface instead of NVMe, but those are rare and wouldn't be listed as PCIe 4.0 compliant. Regardless, make sure your SSD supports NVMe.
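The requirements above can be collected into a quick compatibility sketch. This is a hypothetical helper, not an official Sony tool; the function, its parameters, and its thresholds simply restate the constraints listed in this guide.

```python
# Hypothetical PS5 SSD compatibility check. Thresholds restate Sony's
# published requirements as described above; this is not an official tool.

def check_ps5_ssd(interface, pcie_gen, read_mbps, length_mm, width_mm, height_mm):
    """Return (compatible, warnings). Hard blockers return False."""
    # NVMe over PCIe is mandatory; M.2 SATA drives will not work.
    if interface != "NVMe":
        return False, ["PS5 requires an NVMe M.2 drive"]
    # Physical envelope, heatsink included: 110 x 25 x 11.25 mm.
    if length_mm > 110 or width_mm > 25 or height_mm > 11.25:
        return False, ["drive (with heatsink) exceeds 110 x 25 x 11.25 mm"]
    warnings = []
    # Sony recommends PCIe 4.0 x4 and 5,500 MB/s sequential reads;
    # slower drives still install, but the console shows a warning.
    if pcie_gen < 4 or read_mbps < 5500:
        warnings.append("below recommended PCIe 4.0 / 5,500 MB/s; console will warn")
    return True, warnings

# A typical 2280 PCIe 4.0 drive with a slim heatsink passes cleanly:
ok, notes = check_ps5_ssd("NVMe", 4, 7000, 80, 22, 8.0)
```

A Gen3 drive would return compatible with a performance warning, while an M.2 SATA drive fails outright, mirroring the behavior described above.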
Sony has detailed instructions on how to install a PS5 SSD. As you can see in the video above, installing the SSD is a simple process that only requires a #1 Phillips head screwdriver. After you've installed the SSD, you can navigate through the menus to the 'Settings→Storage→Installation Location' area and change it to your new SSD. All new games will now install directly to the SSD.
To move existing games to your new drive, select the internal SSD, highlight the item you want to move, press the Options button, and then select 'Move Games and Apps.' Select any other games that you would like to move in the checkboxes, then select 'Move.' As noted in our above testing, moving from the integrated SSD will generally be much faster than moving to the integrated drive.
Donald Trump’s new “liberation day” tariffs, announced in a splashy White House event on Wednesday, will have a huge impact on the price of virtually all consumer goods. But PCs, particularly those built by smaller boutique vendors, may be hit hardest of all, makers and resellers tell Tom’s Hardware.
Large OEMs such as Dell and HP may be able to limit their exposure by moving production to less-tariffed countries, but U.S. brands such as Maingear, iBuyPower and Falcon Northwest assemble their products in America — using parts that come almost exclusively from Asia.
“Tariffs have a direct impact on our cost structure… which we have to pass down to our customers,” Wallace Santos, CEO of Maingear, told us a few minutes before Trump released his latest round of tariffs. “Some of our suppliers are stopping their production lines to move out of China, causing scarcity, which ultimately causes FOMO, which causes even more scarcity.”
After yesterday's announcements, Santos said he expects prices for his PCs to go up 20 to 25% as a result of the tariffs.
On Wednesday, Trump announced his full suite of new tariffs, which include rates of 54% on China (a new 34% on top of the 20% already announced), 32% on Taiwan, 26% on South Korea and 46% on Vietnam. Those are all countries where a lot of PC components such as SSDs, RAM, PC cases and graphics cards are sourced. That 20 to 25% estimate could rise, in other words.
“Some of our GPU suppliers had to stop their Chinese lines to move to Taiwan or Vietnam, causing additional shortages,” Santos told us.
The tariffs had already worsened GPU shortages as manufacturers tried to move from China to less-taxed countries such as Vietnam. Now that Vietnam and Taiwan face steep tariffs of their own, there’s nowhere left to go.
"Unfortunately, we now anticipate around 20–25% due to the newly announced tariffs. We’re monitoring closely and will do our best to mitigate the impact," Santos said on Thursday.
Think America will start manufacturing all of its own DRAM and NAND Flash? Think again.
“Sadly the overwhelming majority of PC component manufacturing is not done in the US and never has been,” said Kelt Reeves, CEO of Falcon Northwest. “That means there's no US alternative supplier for most PC parts, so even for a US-based system integrator like us, it just means skyrocketing costs.”
“PCs are a low-margin business that can't just absorb such huge price increases. PCs were already getting more expensive with the GPU shortages and tariffs on Chinese manufacturing,” Reeves added, “and these new tariffs are going to make them much more expensive.”
Jon Bach, CEO of Puget Systems, another system integrator, published a blog post about tariffs last week, saying that his company will try to absorb some of the costs before passing them along to consumers. Bach’s post was written before the latest lineup of tariffs, yet it already predicted price increases of 20 to 45 percent by June.
Gary Shapiro, CEO of the Consumer Technology Association, was more pointed in his criticism of the Trump tariffs. In a statement, Shapiro wrote:
“President Trump’s sweeping global and reciprocal tariffs are massive tax hikes on Americans that will drive inflation, kill jobs on Main Street, and may cause a recession for the U.S. economy. These tariffs will raise consumer prices and will force our trade partners to retaliate.”
Shapiro is clearly no fan of tariffs, and pulled no punches in his comments: “Americans will become poorer because of these tariffs. This will not be a golden age – but a return to the global economic catastrophe of the Smoot-Hawley tariffs of the 1930s that will disproportionately hurt low income and hardworking Americans.”
Few people in the electronics industry aren’t pulling their hair out about tariffs. But B&H, a discount retailer known for its low prices and focus on A/V equipment, is taking a more spiritual approach to the tumult ahead.
“In general B&H’s overarching and guiding principle is to be patient and not worry,” Senior Business Development Executive Steve Honickman told us back in March. “B&H’s success is with GOD’s help…and our honesty, sharing knowledge and customer service! We can’t control pricing but we always will go above & beyond for our customers!”
Dell is giving some of its Alienware gaming PCs (or one, at least) additional flexibility by enabling compatibility with third-party motherboards. The system maker has provided a new AlienFX board cable conversion kit for its flagship Alienware Area-51 gaming desktop, which provides all the necessary equipment to swap out the Area-51's built-in motherboard with a third-party ATX or Micro-ATX motherboard. The kit can be bought at Dell's online store for $34.99.
The kit comes with a 4-pin power switch wiring cable, fan power bridge wiring cable, USB dongle extension, and a bag of three nuts. The power switch wiring cable is responsible for providing a connection to the Area-51's power button. The fan power bridge wiring cable provides compatibility with the Area 51's rear fan controller. The USB dongle extension converts the right-angled USB 3.0 front panel internal plug from 90 degrees to a traditional connector for improved compatibility.
Finally, the extra nuts provide compatibility for Micro-ATX motherboards. Dell also has an online video tutorial showing users how to swap the OEM board with a third-party counterpart.
While the kit provides all the necessary hardware to make third-party motherboards from Gigabyte, MSI, Asus, Biostar, and others work in the Area-51, there are a few caveats. Dell does not provide official support for third-party motherboards paired to the Area-51, and does not guarantee compatibility.
The latter is most likely referring to internal port selection. For example, if you put a barebones $60 Micro-ATX motherboard in the Area-51, it is unlikely to have enough fan ports or connectors to support all of the Area-51's fans, lights, and connectors.
Another caveat is compatibility with the Area-51's internal power supply, which takes advantage of the still-emerging ATX12VO standard. Most motherboards on the market (even the latest ones) do not support ATX12VO, instead using the older ATX 24-pin standard. Third-party boards will need to support ATX12VO if users want to take advantage of the built-in PSU. The good news is that the Area-51's power supply can be switched out for a conventional model if needed.
Regardless, Dell's decision to support aftermarket motherboards is certainly welcome. There are a number of advantages to having aftermarket compatibility. Customers are not locked down to Dell's ecosystem if they need greater functionality from their system, such as more I/O, more M.2 slots, more DIMM slots (there are only two on the OEM motherboard, at least right now), or a beefier power delivery system for overclocking.
And if the built-in OEM motherboard dies, users aren't limited to buying a replacement from Dell; they can replace the board with a third-party option, which might be a cheaper alternative if the system's warranty has expired.
The kit should also let those who really like the look and quality of the case they paid for to keep using it for several years after the existing hardware is past its useful gaming life. With the price of PC cases (and likely many other things) seemingly on the rise, anything that helps us hang onto existing hardware for longer is welcome at this point.
Microsoft has officially announced the release of hotpatch updates for Windows 11 Enterprise builds. This new updating protocol currently applies to Windows 11 Enterprise version 24H2 on AMD and Intel x64 devices. This isn't the first time the new hotpatching system has been mentioned, but it is the first time it's become widely available across Windows 11 Enterprise 24H2 devices.
If you're new to the hotpatching system, the biggest takeaway is the ability to apply updates without the need to restart. This is definitely ideal for corporate environments, letting users continue to work not only without the interruption caused by a reboot, but also without excessive CPU usage.
Quarterly baseline updates will likely still require a reboot, but the regular hotpatch updates in between will not. Still, this is a significant change compared to the monthly reboot requirements presently in place.
Word of the new hotpatch system first came to our attention in February of 2024. And by November, we got a glimpse at the new protocol in Windows 11 Enterprise 24H2 and Microsoft 365 Preview builds. It's taken a few months to iron out the details but we've finally hit an official release.
In the blog post, Microsoft details the requirements necessary to enable hotpatching on Windows 11 Enterprise clients.
Arm64 devices are in public preview and require a registry key modification to support the new hotpatch update system. You can read more about the specific steps to adjust this setting in the official blog post.
Microsoft goes on to explain that hotpatch updates will align with the standard update schedule, which will continue to apply to Windows 10 and Windows 11 23H2 devices, though hotpatch releases will carry a different KB number. The team also confirmed in the blog post that the Windows quality update policy will be able to automatically detect whether or not your device meets the necessary requirements to enroll in the hotpatch update system.
Again, it's important to note that this rollout is only for Windows 11 Enterprise clients and is not available for Windows 11 Home and Windows 11 Pro machines. You can read more about the new hotpatch system update protocol over at the official Microsoft blog.
If you're rocking an older PC system and looking for an economical upgrade, perhaps to the latest AM5 platform, then today's deal can help. At just $124, this motherboard from Asus is firmly positioned on the budget end of the scale when it comes to current motherboard prices. But don't let the budget terminology put you off, as this mobo still has the main features you would need: the AM5 CPU socket, DDR5 memory support, PCIe 5.0, and plenty of USB ports. It doesn't use the latest WiFi 7 receiver, but do you even have a WiFi 7-capable router or network to pair it with? If you do, it will cost you a little more to find a motherboard with built-in WiFi 7 - alternatively, you could get a separate WiFi card or adapter.
This motherboard deal on the Asus Prime B650-Plus WiFi is available at Amazon for just $124. When this board first launched, it was commanding a $180 price tag, and since then, the usual list price has been around $150, with the all-time low price at $119. This deal price shaves around $25 off the usual asking price.
The Asus Prime B650-Plus WiFi comes with support for AMD Ryzen 7000/8000/9000-series processors, a 2.5 Gb LAN port, PCIe 5.0 M.2 slots, USB Type-C ports, and a header for connecting a Thunderbolt 4 connection to your case.
Asus Prime B650-Plus WiFi: now $124 at Amazon (was $149)
A budget AM5 motherboard offering from Asus that gives you WiFi 6E, support for Ryzen 7000/8000/9000-series CPUs, 2.5 Gb LAN, PCIe 5.0 M.2 slots, USB Type-C, and even a header for a Thunderbolt 4 connection to your case. View Deal
The Asus Prime B650-Plus WiFi is an ATX-sized motherboard, so it will need a mid-tower-sized or larger case to accommodate its length. The rear I/O does not come pre-fitted with a plate either, so you must install the I/O shield into your chosen case before installing the motherboard.
Don't forget to look at our Amazon coupon codes for April 2025 and see if you can save on today's deal or other products at Amazon.
A new workaround that bypasses the Microsoft account requirement when setting up a Windows 11 PC has been discovered. This comes right after an old workaround was plugged by the company.
The new workaround, discovered by Wither OrNot on X (and confirmed to work by BleepingComputer), takes advantage of a single command entered into the command prompt during a Windows 11 install.
The new workaround is even more straightforward than the previous workaround that Microsoft shut down (which required registry tweaks). Only two steps are required: Pressing Shift + F10 to get into the command prompt when the Windows 11 installation wizard appears, then executing the command start ms-cxh:localonly in the CMD.
"Improved bypass for Windows 11 OOBE: 1. Shift-F10; 2. start ms-cxh:localonly. Only required on Home and Pro editions." (Wither OrNot on X, March 29, 2025)
This bypasses the Microsoft account requirement and opens up a new window where users can input credentials to make a new local account instead. It works on Windows 11 Home and Pro, and apparently works on other editions, such as Windows 11 Enterprise, too.
However, the X poster claimed that killing the internet on non-Home and non-Pro versions of Windows 11 still works to bypass the Microsoft account requirement.
This discovery couldn't have come at a better time. Starting with Windows 11 Insider Preview Build 26200.5516, Microsoft killed a previous workaround that also used the command prompt (the OOBE\BYPASSNRO method) to restore access to local account creation during Windows 11 installs. As a result, the only way to get this same workaround working in the preview build was to implement the same functionality through the Windows Registry, significantly complicating things.
Specifically, restoring the old workaround required adding reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE" /v BypassNRO /t REG_DWORD /d 1 /f to the registry, then rebooting the system. Only then did the OOBE\BYPASSNRO command work in the command prompt.
Regardless, there are other workarounds to bypass the Microsoft account requirement in Windows 11. The Rufus method in our "How to Install and Log In to Windows 11 Without a Microsoft Account" tutorial still works, and it is arguably the easiest method of them all.
Rufus is a bootable USB drive creator that has built-in functionality to disable the Microsoft account requirement right on the drive itself. Rufus also sports a plethora of other Windows 11 bypasses, including RAM, Secure Boot, TPM 2.0, and BitLocker automatic encryption bypasses.
While Nintendo made a splash at yesterday's formal unveiling of the Nintendo Switch 2, the company's hardware developers stayed quiet on the chip powering it. Now, Nvidia, which makes the custom system on a chip, has provided some more information in a blog post.
"Nintendo doesn't share too much on the hardware spec," Switch 2 technical director Tetsuya Sasaki said at a developer roundtable. "What we really like to focus on is the value we can provide to our consumers."
Nvidia is following Nintendo's lead, withholding information like core counts and speeds. Still, the company claims the new chip offers "10x the graphics performance of the Nintendo Switch."
Nvidia's RT cores enable hardware ray-traced lighting and reflections, while its tensor cores power DLSS upscaling. DLSS is likely being used to achieve up to 4K performance when the system is docked, and to help hit up to 120 frames per second in handheld mode.
The company also confirmed that the tensor cores allow for face tracking and background removal with AI, which was shown off with the new social GameChat feature as well as in Switch 2 games we went hands-on with, such as Super Mario Party Jamboree – Nintendo Switch 2 Edition + Jamboree TV. It isn't clear if this uses any of the same technology as Nvidia Broadcast on PC.
Additionally, Nvidia confirmed that the Switch 2's new variable refresh rate (VRR) display is powered by G-Sync in handheld mode, which should prevent screen tearing.
Nvidia also powered the original Nintendo Switch, which used a custom variant of the Tegra X1. Nintendo managed to get a lot of mileage out of that chip, which was old when it launched; games are still coming out eight years later.
We'll see how much developers can squeeze from the new chip when the Switch 2 launches on June 5 for $449.99.
It's no secret that Nvidia's RTX 5090 is a powerhouse for AI workloads, given Blackwell's architectural improvements and support for lower-precision data formats. A Vietnamese shop, Nguyencongpc (via I_Leak_VN), recently shared a few photos of an AI rig it set up for a customer, featuring seven RTX 5090s hooked up to multiple kilowatt PSUs in an open-air GPU frame. Despite the average RTX 5090 costing nearly $4,000, professionals find the performance worth the investment, even though Nvidia is not positioning these cards as workstation solutions.
The elusive RTX 5090 has been off the shelves and out of reach for most of its lifespan unless you're willing to pay through the nose. This is mainly because Nvidia's profit margins are significantly higher on its server-focused Blackwell B100/B200/B300 GPUs than on typical GeForce products. For Nvidia, allocating the limited number of wafers it procures from TSMC toward these AI accelerators is simply more profitable.
The AI system resembles mining rigs from the pandemic era. The open-air GPU frame carries seven Gigabyte RTX 5090 Gaming OC units connected via what appear to be PCIe riser cables, powered by multiple Super Flower Leadex 2000W PSUs. The system can easily be valued at over $30,000, considering these GPUs go for $3,500 to $4,000 on a good day.
This single frame offers a massive 224GB memory pool for AI training and inference. Of course, this isn't unified memory, so developers must rely on techniques like model parallelism to distribute workloads efficiently across multiple GPUs. On that note, Nvidia's forthcoming Blackwell workstation cards are outfitted with up to 96GB of VRAM and cost between $8,000 and $9,000.
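As a rough illustration of how a pooled 224GB of VRAM gets used, here's a back-of-envelope sizing sketch. The function and its numbers are assumptions for illustration only (fp16 weights at 2 bytes per parameter, a flat 20% overhead reserve); real frameworks also need room for activations, KV caches, and optimizer state.

```python
# Back-of-envelope check: do a model's weights fit across an N-GPU rig?
# Assumes even sharding and a flat per-GPU overhead reserve (illustrative only).

def fits_in_pool(params_billions, bytes_per_param, num_gpus, vram_gb_each,
                 overhead_frac=0.2):
    weights_gb = params_billions * bytes_per_param  # 1e9 params * bytes / 1e9
    usable_gb = num_gpus * vram_gb_each * (1 - overhead_frac)
    return weights_gb <= usable_gb, weights_gb, usable_gb

# Seven 32GB RTX 5090s: 224GB raw, ~179GB usable under this assumption.
# A 70B-parameter model in fp16 needs ~140GB for weights alone and fits;
# a 175B-parameter model (~350GB) does not.
print(fits_in_pool(70, 2, 7, 32))
print(fits_in_pool(175, 2, 7, 32))
```

This also shows why per-GPU capacity still matters: once a single tensor-parallel shard outgrows one card, raw pooled capacity alone no longer saves you.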
Even though the RTX Pro 6000 Blackwell offers more memory, multiple RTX 5090s provide a performance advantage for AI tasks where raw compute is essential, especially on a limited budget. However, when price is not a concern and you wish to scale AI more effectively, more VRAM per individual GPU becomes essential to handle complex models with hundreds of billions of parameters.
With the advent of AI, Nvidia now differentiates its GPU offerings by memory capacity, which results in a steep price curve if you're looking for higher memory configurations. Modders have found workarounds, though: blower-style RTX 4090s modified to 48GB of memory are common in the Chinese market.
It might take months before prices normalize since practically every new GPU flies off the shelves instantly and is never seen at MSRP. You might want to check out the used market for last-generation GPUs, which can usually be found near their launch MSRPs or even lower.
RiVAI Technologies has launched the Lingyu CPU, China’s first domestically designed high-performance RISC-V server processor. The unveiling occurred in Shenzhen, reflecting the country's ongoing push for greater self-sufficiency in semiconductor development.
The Lingyu CPU adopts a "one-core, dual-architecture" approach, integrating 32 general-purpose computing cores (CPU) alongside eight specialized intelligent computing cores (LPU). This configuration efficiently handles tasks such as inference for open-source large language models. The architecture aims to balance processing power and energy efficiency, thereby lowering the total cost of ownership (TCO).
RiVAI Technologies was founded by Zhangxi Tan, who studied under Professor David Patterson, a pioneer of RISC-V and 2017 Turing Award recipient. Professor Patterson continues to serve as RiVAI's technical advisor, promoting RISC-V adoption in China.
The company is also said to have partnered with over 50 companies, including Lenovo and SenseTime, to promote adoption and ecosystem development for its RISC-V processor. These collaborations are expected to support the deployment of the Lingyu CPU across various industries and encourage further advancements in RISC-V-based computing solutions.
RiVAI’s announcement comes amid broader efforts in China to shift away from reliance on x86 and Arm processors by promoting the adoption of RISC-V chips. The Chinese government is driving this initiative, encouraging research institutions, chipmakers, and companies to invest in RISC-V development.
Unlike proprietary architectures controlled by Western companies, RISC-V is an open-source instruction set that allows Chinese firms to design and manufacture processors without external restrictions. The push for RISC-V adoption comes in response to ongoing trade tensions and sanctions that have limited China’s access to advanced foreign-made chips.
To accelerate this transition, the Chinese government provides policy support, funding, and incentives for companies working on RISC-V technology. Major domestic tech firms, including Alibaba and Tencent, have already started developing RISC-V-based solutions, while state-backed research institutions are working on software optimization for the architecture.
This shift could help China build a more self-sufficient semiconductor industry, reducing its dependence on Western technologies. However, challenges remain, including software compatibility and ecosystem development, which will determine the long-term viability of RISC-V as a mainstream alternative to x86 and Arm processors.
Luxor Technology, a Bitcoin mining software and services company building machines in Thailand, is in a quandary: it needs to ship 5,600 units before the tariffs hit.
Lauren Lin, Head of Technology at Luxor, told Bloomberg that the company is considering chartering a flight to get the machines into the U.S., as it has less than 48 hours before the 10% levy applies to all imports arriving in the States. On top of that, Thailand expects higher duties starting April 9, with exporters from that country expected to be charged 36%.
Many Bitcoin mining hardware companies have been based in China. Still, they’ve spread out to other countries like Indonesia, Malaysia, and Thailand since the U.S. applied tariffs, bans, and sanctions on the East Asian giant in 2018. However, the White House’s expansion of trade taxes to all countries with a trade imbalance with the United States means that these companies must set up shop within its borders to avoid these fees.
A few companies have already started to move some manufacturing to the U.S. Bitmain Technologies Ltd., the biggest Bitcoin mining hardware maker, said on X that it will launch a local production line “to provide faster response times and more efficient services to the North American customers.”
Another Chinese competitor, MicroBT, was said to have previously struck a deal to use U.S.-based Riot Blockchain’s manufacturing capabilities. Luxor is also reported to have made a $131-million deal with the company for WhatsMiner machines, which will be assembled on U.S. soil.
However, even if these companies move their assembly and manufacturing lines within the 50 states, these tariffs will still affect them. That’s because these taxes will also apply to raw materials like aluminum. This means that electronic component manufacturers, including PC case and GPU makers, are affected, too, and will likely have to increase their prices to cope.
Some companies, like Puget Systems, say they might be able to absorb these additional costs for the moment, but they will inevitably have to raise their prices—either to pay the government fees or to offset the costs of moving production into the U.S.
Donald Trump introduced a sweeping set of import taxes on Wednesday that targeted nearly all U.S. trading partners — but offered a notable exemption for the high-tech industry.
The tariffs include a flat 10% fee on all incoming goods beginning April 5, followed by higher, customized rates for about 60 countries starting April 9. However, numerous items, including computer chips, copper, medicine, and lumber, will be excluded from the new import fees. Initial reports indicate the administration has said tariffs for semiconductors could come at a later date.
There is a catch with the exceptions. A senior official from the White House told a Reuters correspondent that the Trump administration is preparing separate tariffs for industries like semiconductors, drugs, and critical minerals. On the one hand, this means that these items will remain outside the scope of the current tariff framework (i.e., no 32% tariff on Taiwanese chips for now); on the other hand, it remains to be seen how high those import taxes will be.
Initially excluding semiconductors from the sweeping import taxes enables American companies to buy chips made in Taiwan without any curbs — good news for AMD, Broadcom, Nvidia, and Qualcomm, as the majority of their chips are produced in Taiwan. Also, the exemption of semiconductors applies to chips produced in Europe, Japan, and South Korea, which is again good news for automotive and consumer electronics industries in the U.S. as both use loads of chips.
The exclusion also covers chips made in China, however, including those made by companies controlled by the state and at fabs sanctioned by the U.S. For now, China's semiconductor industry can rejoice, unlike the American semiconductor industry, which will now have to pay 20% extra for European and 24% extra for Japanese tools imported to fabs in the U.S.
While 20% to 24% does not sound like a lot compared to the over 50% tariff set on China-originated goods, we are talking about very expensive equipment. ASML's lithography tools cost a significant amount of money: a Low-NA EUV litho machine costs around $200 million, whereas a High-NA EUV litho system is projected to cost $380 million even without extra taxes. With Trump's tariffs, these tools will cost $240 million and $456 million, respectively, disrupting all the capital expenditure plans for companies building advanced semiconductor production facilities in the U.S., including Intel, TSMC, and Samsung Foundry.
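The tool prices above follow from simple arithmetic; here is a quick sketch reproducing them (the function name is illustrative, and the rates and list prices are the ones cited in this article):

```python
# Illustrative tariff arithmetic using the rates and prices cited above.
def tariffed_price(price_usd, tariff_pct):
    """Final price after applying an ad valorem tariff percentage."""
    return price_usd * (1 + tariff_pct / 100)

# EU-made lithography tools at the 20% rate on European goods:
low_na_euv  = tariffed_price(200_000_000, 20)  # Low-NA EUV  -> ~$240 million
high_na_euv = tariffed_price(380_000_000, 20)  # High-NA EUV -> ~$456 million
```

At these magnitudes, even a "modest" 20% rate adds tens of millions of dollars per machine, which is why fab capex plans are affected so sharply.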
Those who use mature process technologies — such as GlobalFoundries and Texas Instruments — will also have to pay extra for their tools. Ultimately, this will make chips produced in the U.S. more expensive.
The highest adjusted rate goes to Cambodia at 49%, followed by Vietnam at 46%, China at 34% (which, on top of a previous 20% duty, brings China's total to 54%), and Taiwan at 32%. Other countries affected include the European Union at 20%, Japan at 24%, South Korea at 25%, and India at 26%. The tariffs look massive and, in the case of China, even prohibitive.
While the punitive tariffs set on the majority of goods made in China were something to be expected, the massive import taxes on products from countries like India (26%), Taiwan (32%), Thailand (36%), Vietnam (46%), and other countries in the region will clearly hit the high-tech industry supply chain, as large PC makers have moved their production away from China to these countries to avoid tariffs from the U.S. administration.
It seems manufacturers cannot actually avoid these tariffs, so expect products like PCs, smartphones, TVs, and other high-tech items to get more expensive in the U.S., and the profitability of American companies to shrink.
However, some goods and countries (such as those already under heavy sanctions) are excluded. The exemptions include semiconductors, copper, drugs, lumber, specific minerals that the U.S. does not produce domestically, precious metals, energy resources, steel, aluminum, and vehicle-related goods already taxed under existing rules, as well as products covered under national security provisions.
AMD allegedly plans to launch a special-edition RX 9070 GRE GPU, positioned as a cost-effective RDNA 4 option for the budget market, per IThome. These special-edition GPUs were initially intended for the Chinese market, but some GRE models later became available globally. Though exact specifications are unknown, the reported 12GB of memory positions this GPU as a viable option for the growing 1440p market.
When AMD introduced the RX 6750 GRE and RX 7900 GRE in 2023, the GRE moniker stood for "Golden Rabbit Edition," coinciding with the Chinese zodiac. With the RX 7650 GRE in early February, however, this badge was seemingly renamed to "Great Radeon Edition," making the tag more general, suited for international audiences, and not tied to a specific year.
Following AMD's hierarchy, the RDNA 4 pack will presumably be led by the RX 9070 XT, followed by the RX 9070 and the RX 9070 GRE. Regarding specifications, we will likely see a cut-down Navi 48 chip with an alleged 12GB frame buffer across a 192-bit memory interface. Given that AMD's x60 XT-class GPUs typically occupy the $350 territory, we might see the RX 9070 GRE in the ballpark of $450. That's a reasonable estimate, as it sits right between a potential RX 9060 XT ($350 expected) and the RX 9070 ($550).
Nvidia is rumored to launch its RTX 5060 Ti 8GB/16GB offerings in the third week of April. We probably don't need a crystal ball to see that Jensen is aiming this GPU right at the $400 mark. This might be an opportunity for the RX 9070 GRE to match or even nudge below Nvidia's pricing. In any case, final specifications of the RX 9070 GRE, based on how much AMD decides to trim Navi 48, will dictate pricing and performance against the RTX 5060 Ti.
Street prices haven't precisely adhered to intended MSRPs this generation, primarily due to supply constraints. We've discussed this problem in detail, but the gist is that TSMC can only process so many wafers each month. In any case, the RX 9070 GRE will likely debut as a China-exclusive model at launch. Global availability could be timed near the RX 9060 series, with this model serving as an additional option to help mitigate demand, but that's speculation.
Following the grand reveal of the Nintendo Switch 2, we learned a handful of technical details about the console. Most notably, the Switch 2 is set to require a different MicroSD standard than its predecessor, called MicroSD Express. And, if current pricing for announced products is anything to go by, it will be painful on your wallet. In fact, on a capacity basis, they're pricier than many modern SSDs — MicroSD Express cards range from roughly 20 to 39 cents per GB of storage, whereas bargain-basement SSDs can retail for as little as 5 to 6 cents per GB. That's partly due to the NVMe and PCIe 3.0 support, commonly found on M.2 SSDs, that's baked right into the new MicroSD Express cards.
Sandisk's MicroSD Express cards have an MSRP of $49.99 for just 128GB of storage, with the 256GB variant at $64.99. These cards operate at speeds up to 880 MB/s read, 480 MB/s write, and 100 MB/s sustained write.
However, Lexar has announced larger capacities of up to 1TB. But you might want to brace your wallet, as the prices are predictably not pretty for this new standard.
Lexar's Play Pro 1TB MicroSD Express card costs a staggering $199.99, with the 512GB model at $99.99, and 256GB at $49.99. Lexar boasts that these cards can achieve speeds of up to 900 MB/s read and 600 MB/s write.
Sandisk's MicroSD Express cards cost up to $0.39 per gigabyte for the 128GB model, while the 256GB model comes in at $0.25 per gigabyte. Lexar's options seem to be the best deal, with all three capacities at a standardized $0.20 per gigabyte, while also seemingly boasting higher-end specs.
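The per-gigabyte figures above are simple division. A quick sketch, using the prices and capacities quoted in this article and treating 1TB as 1024GB:

```python
# Cost per gigabyte for the announced MicroSD Express cards.
# Prices and capacities are the ones quoted above; the math is price / capacity.
cards = {
    "Sandisk 128GB": (49.99, 128),
    "Sandisk 256GB": (64.99, 256),
    "Lexar 256GB": (49.99, 256),
    "Lexar 512GB": (99.99, 512),
    "Lexar 1TB": (199.99, 1024),
}

for name, (price, capacity_gb) in cards.items():
    print(f"{name}: ${price / capacity_gb:.2f}/GB")
```

Run it and the Lexar cards all land at $0.20/GB, with Sandisk's 128GB model the worst value at $0.39/GB — matching the figures above.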
We know that the Switch 2 will ship with 256GB of storage as standard, but there's a catch. Even if you purchase a physical game, you might not be able to play it immediately just by inserting the cartridge. Some titles will require the full title to be downloaded and installed onto the system, with the cartridge serving as a glorified physical license key, which Nintendo calls a "Game-Key Card". This might be down to companies and publishers wanting to cut down on cartridge costs, especially as the price of NAND storage is expected to rise.
For example, if you want to purchase the Street Fighter 6 cartridge, you'll end up having to install an additional 50GB of data onto your system right off the bat. This means that the meager 256GB of storage the console ships with will inevitably fill up. That reveals a hidden cost of Nintendo's new system: expensive new MicroSD cards to expand storage, unless you're willing to play the frustrating game of redownloading titles and managing storage any time you insert a Game-Key Card into your system.
While it might be tough to tell the difference between a standard MicroSD card and a MicroSD Express card at a glance, which could cause some confusion for buyers, the technical details show that MicroSD Express is a big jump from the UHS-I standard used by the original Nintendo Switch.
The SD Association's current SD Express speed classifications divide MicroSD Express into four categories: Class 150, Class 300, Class 450, and Class 600. The number at the end of each classification denotes the card's minimum read/write performance in MB/s. MicroSD Express cards also have significantly more pins than their older UHS-I brethren: either 16 or 17, compared to just eight.
Underpinning the technology are NVMe and PCIe 3.0 interfaces, which allow for speeds of up to 2GB/s (using a PCIe 4.0 interface). You can read the deep-dive details of the tech on our sister site, AnandTech.
However, with technical details of the Switch 2's capabilities still on the lighter side, we don't know whether the console will be able to match those theoretical speeds, and no currently announced MicroSD Express card achieves those peak speeds, either.
“The new microSD Express standard offers us a way to deliver a memory card with incomparable performance in that form factor,” said Joey Lopez, Director of Brand Marketing at Lexar in a press release. “We’re excited to create a card for our customers that leverages the benefits of this new standard and prepares gamers for the next generation of handheld gaming.”
So, there's a gulf between the fastest UHS-I MicroSD Card and the fastest theoretical MicroSD Express card. The fastest announced card is currently the Lexar Play Pro MicroSDXC Express card, but those speeds will inevitably have to be tested once the Switch 2, and the MicroSD Express cards, are in our hands. For now, you can check out our hands-on with the Nintendo Switch 2.
On Nintendo's specs page, the CPU and GPU in the Nintendo Switch 2 are boiled down to a vague "Custom processor made by NVIDIA." At a developer roundtable with some of the Switch 2's creators, we learned more about what to expect from the Switch 2's hardware, and the tools it can deliver to game developers.
Producer Kouichi Kawamoto, technical director Tetsuya Sasaki, and director Takuhiro Dohta took questions from the press (questions and answers through interpreter Raymond Elliget) about the new Switch.
"Nintendo doesn't share too much on the hardware spec," Sasaki said. "What we really like to focus on is the value we can provide to our consumers." Still, the group did drop some knowledge that can let us know what to expect.
We learned a bit about the tech Nintendo is leaning on, as well as some details about the hardware that didn't make the spec sheet. Here's what we found out:
By sticking with Nvidia, Nintendo will be able to access its DLSS technologies.
In response to a question from Inverse's Shannon Liao, Dohta confirmed that the Switch 2 uses DLSS upscaling technology and that Nintendo is offering it as a tool to developers.
"When it comes to the hardware, it is able to output to a TV at a max of 4K and whether the software developer is going to use that as a native resolution or get it to a smaller rate and an upscale is something that the software developer can choose," he said. "And that's, I think it opens up a lot of options for the software developer to choose from."
As for hardware ray tracing, Dohta confirmed the chip can support it, and suggested this is yet another tool for software developers to choose to implement.
Nintendo's official spec sheet claims that the Switch 2's 5220 mAh battery lasts between 2 and 6.5 hours on a charge as a "rough estimate." The developers of the system were reluctant to put a more specific number on it. Sasaki pointed out that it depends heavily on the game you're playing and the conditions you use the system in.
Dohta added that with features like GameChat, the system side of the Switch 2 is far more complex than on the original console, and that the variability of battery life is even wider than it was on the Nintendo Switch, making it difficult to compare the successor to the original in terms of battery life.
When asked about Nintendo's solution for backwards compatibility with Switch games and the GameCube classics available on the system, the developers confirmed these games are actually emulated. (This is similar to what Xbox does with backwards compatibility).
"It's a bit of a difficult response, but taking into consideration it's not just the hardware that's being used to emulate, I guess you could categorize it as software-based," Sasaki said of the solution.
The new Joy-Cons connect to the Switch 2 with Bluetooth 3.0. When asked about struggles players had with connecting multiple Bluetooth devices, including controllers and headsets, to the original Switch, Sasaki kept it simple:
"Yes, it has improved," he said.
He added that the size of the system and its bigger antennas should have "a big impact" and enable better connections. Additionally, the number of antennas has increased and "a lot" of further adjustments have been made.
The Switch 2 has a 7.9-inch LCD display with support for HDR. A premium version of the original Switch had an OLED screen, which one member of the press said could be perceived as a downgrade.
Sasaki said that during development, many advancements had been made in LCD technology.
Kawamoto added that the OLED version of the original Switch didn't have HDR support, which this new LCD screen does.
When asked by CNET's Scott Stein if the top USB-C port on the Switch 2 could be used for external displays like Xreal glasses, Kawamoto said that only the bottom port on the system supports video out.
"So in terms of supporting the glasses, it's not an official Nintendo product, so it's hard to say, "Kawamoto said.
The top USB-C port has been demoed with the new Nintendo Switch 2 camera and can also charge the system in tabletop mode.
The Nintendo Switch 2 feels familiar, but it doesn't feel the same. At a hands-on event in New York, I was among the first to play the successor to Nintendo's most popular console ever, and I came away largely excited, though that may be more because of the games than the hardware itself.
Don't get me wrong, the Switch 2 itself is quite nice. But that $450 handheld-turned-console is only as good as the games that Nintendo and third-party developers create.
The Switch 2 is very much a sequel, but one that's also clearly an evolution of Nintendo's point of view on gaming. The few hours I spent playing early games made me excited to spend more time with the device. Better start saving up.
That black matte finish on the Joy-Con 2 controllers and the system itself makes an incredible first impression. The system, at 1.18 pounds with the Joy-Con 2 controllers attached, is a bit heavier than I expect a Switch to be, but the larger 7.9-inch display is worth it. (And that's still lighter than the best handheld gaming PCs, like the Steam Deck OLED, which is 1.41 pounds.)
There is something about the lack of color (other than the callbacks to the neon red and orange around the joysticks and under the Joy-Cons) that doesn't feel very Nintendo. Even the company's last attempt at having any sort of edge, with the GameCube, had indigo and orange options alongside black. But hey, I had the black GameCube, so I can get past it. I'm sure there will be variant colors eventually.
The Joy-Cons come off with the press of release buttons on each side. I didn't spend a ton of time connecting and disconnecting controllers (in fact, most demos had the system hidden away). But on one unit I briefly tried, my first impression was that the click was strong. I wouldn't purposely wiggle the connector the wrong way, but I'm definitely not worried about accidentally pulling the Joy-Cons off.
Those longer controllers fit better in my hands than the originals did. They're still thin, but I found the length to be more comfortable, though I'm not sure many people will notice a huge difference.
The control sticks still seem to be potentiometer-style, like on the original Joy-Cons. In a round-table discussion, the Switch's developers told the press that the Joy-Con 2's sticks have been entirely redesigned for larger, smoother movements. But the words "hall effect" were never mentioned, if that was something you were hoping for. The sticks didn't feel all that different in a series of 10-15 minute demos, but I need more time with them to really feel how they've changed.
Nintendo's new kickstand is a huge improvement over the original Switch, which had a kickstand so tiny it felt like an afterthought. On the Switch 2, you get a much larger, stable stand that leans to any angle you want. I could actually see using this one on an airplane tray.
But it's not until you start gaming that you see the biggest improvement over the original Nintendo Switch: the new display. No, it's not OLED, so I can understand why some might think this is a step down from the OLED Switch model. But this LCD screen has been bumped up to 1920 x 1080 with a 120 Hz variable refresh rate, as well as HDR10 support that made Mario Kart World look fantastic. I sure didn't mind the lack of OLED in my brief time with the device, though I look forward to trying more games on the system, as most demos were presented on televisions.
As for the dock, I only saw it in one demo, paired with the Nintendo Switch 2 camera. It's definitely bulkier, which makes room for a cooling fan, but other than that, it seems to be similar to the existing Switch dock in practice.
While Nintendo's consoles have a reputation for ease of use and a focus on games above all other features, there's some PC gaming leaking into the Switch.
First, there's the mouse — or mice, actually. Both Joy-Con 2 controllers have sensors that, when combined with the accelerometers and gyroscopes, let them act as mice. I sought these games out at Nintendo's showcase, wanting to see what it felt like to use a mouse on a modern Nintendo console. (After all, it's been a while since the 1992 SNES mouse.) The Switch 2 wrist straps include little mouse skates to make scrolling smoother.
The results were a mixed bag during my brief trials. In the Nintendo Switch 2 Edition of Metroid Prime 4 Beyond, you can switch between using the Joy-Con as a controller or mouse at will. In mouse mode, I was far more exacting with Samus Aran's arm cannon than I was with a combination of the right joystick and motion controls. But while Nintendo has made the ZL and ZR buttons larger to accommodate this kind of use, the new Joy-Cons aren't much thicker, and my wrist and index finger were a bit uncomfortable by the end of the demo. By then, I had switched back to the standard experience.
But I had a much better time with the upcoming Switch 2 version of Civilization VII. It felt similar to playing a Civ game on PC, and the slower pace made for a more comfortable experience. In fact, the game only requires a single Joy-Con to use as a mouse, but I do wish you could program some quick actions to the other controller.
For some, the standout mouse experience might be Drag x Drive, in which you use both mice simultaneously to play what is effectively Rocket League meets wheelchair basketball with robots. I've never seen a mouse used like this in a game, and I'm curious if this could actually send some ideas back to the PC space. Here, you use each mouse to control the wheels, rolling the robot around the court and revving to gain speed. It's a cool trick, though by the end of my demo I wondered if there would eventually be a mode where you could use a joystick.
Cyberpunk 2077: Ultimate Edition was demonstrated with a Pro controller, but developer CD Projekt Red has already committed to mouse control. I'm curious to see how that pans out.
The big question here is how many people will be in a position to use their Joy-Cons like mice. While I play PC games at my desk, I tend to play console titles on my couch, and it's a literal stretch to reach forward to my coffee table. The Drag x Drive tutorial pointed out that you can use mouse mode on your pants, which did work, but I don't think that's a solution for long play sessions. And if that's the case, there's the question of how long developers will support it.
I couldn't help but note that Nintendo had nice mouse mats and a variety of sitting and standing desks for games with mouse control.
The other area where the Switch 2 feels like it's getting into more enthusiast territory is with various resolution options and quality modes. This occurred to a degree on the original Switch, with its 720p display (with some games that played at 540p) and an option to output at 1080p on the dock, but the Switch 2 is far more capable. This is a trend we've also seen on the Xbox Series X and PlayStation 5.
The Switch 2 can output to 4K at a max of 60 frames per second when docked. In games where you can pick a lower resolution, like 1080p or 1440p, that goes up to a 120 Hz variable refresh rate.
Alternatively, the built-in screen can go up to 120 fps at 1080p, in handheld or tabletop mode. I played my original Switch in handheld mode on the couch all the time, so that's still quite an upgrade.
But it's not just the system that has graphics options. Games, too, will give you a degree of customizability. For example, Metroid Prime 4 Beyond: Nintendo Switch 2 Edition has quality and performance modes to choose from.
At its showcase, Metroid Prime 4 was the only game in which Nintendo was explicit about what resolution and frame rate a game was playing at. In this case, it was in docked 1080p 120 fps performance mode, and the game felt extremely responsive. Between the high frame rate and the mouse option, that felt surprisingly like a Nintendo-made PC experience.
Otherwise, the only graphics performance data I heard about was from a Civilization VII developer, who said Firaxis Games is aiming for 1080p 60 fps, which seems fine for that game.
Nintendo is also releasing additional hardware accessories: a $79.99 Pro Controller and a $49.99 camera.
The Pro Controller feels much like the existing controller for the Switch, though I did feel the buttons were a bit clickier. My favorite part was the two buttons on the rear that can be mapped to face buttons, though I didn't see any demonstration controllers set up to use these features. The Pro Controller, like the right Joy-Con 2, features the new C button to enable and use the Switch 2's GameChat.
The Nintendo Camera will be a more divisive piece of technology. It's intended for use in GameChat, but can also be used to put your face in games like Mario Party Jamboree – Nintendo Switch 2 Edition + Jamboree TV, which put me and other players in the game to get roasted by Bowser if we didn't please him by winning minigames. The green screen effect was rough, though, with lots of jagged lines.
You won't necessarily have to buy Nintendo's camera, though, as Nintendo's own website says you can use a "compatible USB-C camera." I guess I'll keep my Logitech StreamCam around just in case.
Did I mention Nintendo showed off a bunch of games?
Nintendo's first-party library often consists of hit system sellers, and the system is launching with Mario Kart World, which is sure to be a huge seller (Mario Kart 8 Deluxe was a sales behemoth on the existing Switch). Here are a few standouts:
Mario Kart World
This game was the most memorable moment of the showcase. (My guess is it was being played at around 90 fps, but no one would say.) The game looked great both on TVs and on the Switch's HDR display. Racing was fine enough, but my favorite part was the Knockout Tour mode.
Knockout Tour pits you against 24 other players as you race towards milestones rather than one finish line. At each milestone, a number of players are removed from the game, raising the stakes every few minutes. Dirty play seemed fair in this scenario, and luck and skill were equally important. This mode alone could sell some Nintendo Switch Online subscriptions.
Playing with 24 racers makes the game feel far more grand, and the ability to drive off-road on huge maps gave me an idea of the type of varied environments the Switch 2 is capable of. I also got laughs out of the huge character selection, including alternate outfits like Mariachi Waluigi and surprising guests, like a cow.
Cyberpunk 2077: Ultimate Edition
The fact that this game is coming to the Switch 2 seems to be a testament to the system's power. But in the corner of the demo area with mature games, it looked a bit rough, with pixelated edges on characters and some lackluster textures. A permanent watermark on the screen said this was an early build that isn't representative of the final product, so I'm hoping CD Projekt Red can put in some more optimization. I was told there are performance and quality modes for this game, but I'm not sure which I played.
Nintendo Switch 2 Welcome Tour
I only got to try a handful of the minigames and demos that showcase the Switch 2's new features, but I was shocked to learn this game isn't free. It's cute, and I particularly enjoyed a game in which you guess the frame rate at which objects are thrown. Turns out I can tell 120 fps from 60 fps after all.
But I wouldn't pay for this. I think it would have made a fun pack-in like Astro's Playroom did for the PlayStation 5.
Donkey Kong Bananza
Punching things is cathartic. But since I can't just punch things in real life, doing it as Donkey Kong is the next-best thing. This game showed off the way the Switch 2 handles destructive environments, which was impressive. You can break just about anything in certain levels of this game. There were points where I dug tunnels under an island just to see where I would eventually come out when I hit the end.
There's a ton of collectibles here, too. I have a feeling this game will keep completionists going for a while.
Oh, and there was a cute robot hanging out with DK for part of the demo. It seems like a partner, but Nintendo reps wouldn't tell me anything about it.
Metroid Prime 4 Beyond - Nintendo Switch 2 Edition
This long-awaited sequel to the first-person shooter arm of the Metroid franchise impressed. It looked great, and ran very smoothly at 1080p at 120 fps. While the game supports mouse mode, it didn't feel explicitly built for it. Still, I had enough fun in this demo to give either control mode a try again.
Drag x Drive
I can see this combination of Rocket League and wheelchair basketball developing a cult following. Using both Joy-Cons in mouse mode was a workout, and I'm not sure how many people will have space for this kind of mouse movement. Still, I was surprised by the deep level of strategy, including extra points for jump shots and getting air off of ramps. I'm hoping for a more typical control scheme, which would make me consider playing this game more regularly.
Nintendo Classics: GameCube
Soul Calibur II and F-Zero GX felt perfect during my demo. The controller felt awesome, though I noticed it didn't rumble. Otherwise, no notes. You will, however, need a Nintendo Switch Online + Expansion Pack membership to access this library, which also includes The Legend of Zelda: The Wind Waker (the second-best Zelda game) and Super Mario Strikers.
Other games on display included Civilization VII, Hades 2, Hogwarts Legacy, Street Fighter 6, and Kirby and the Forgotten Land — Nintendo Switch 2 Edition.
Everyone in the PC enthusiast community is likely familiar with cooling manufacturer Thermalright. The company is well known for offering quality, competitive coolers at rock-bottom prices and raising the bar for value in the cooling realm. It can do this because it typically manufactures its own products directly, with strong vertical integration as part of its business model.
Some of my past reviews of their products include the “Thermalright Phantom Spirit 120 Review: Simply the Best”, “Thermalright Phantom Spirit 120 EVO Review: This isn’t a competition. This is a massacre.”, and “Thermalright Grand Vision 360 Review: It’s not a competition, it is a massacre (again).” I only mention this to establish that I genuinely love Thermalright’s products – most of the time.
When I ask the typical question – “Will Thermalright’s Hyper Vision make our list of the best coolers on the market?” – the answer is a clear “no” in this case, because this SKU is a pointless addition to Thermalright’s lineup that shouldn’t exist and only confuses customers.
Let’s take a look at the specifications and features of the cooler, as well as the things I don’t like about this liquid cooling model, then we’ll go over thermal performance and noise levels.
The Hyper Vision 360 is the second product from Thermalright with packaging that I would describe as “fancy.” As you open the top, the white section with “low temperature, high performance” pops up – just like the previously reviewed Grand Vision 360. The product is secured with plastic wrappings and molded cardboard.
Included with the box are the following:
▶ Thermalright TF7 Thermal Paste
Included with the AIO is a small tube of Thermalright’s TF7 thermal paste. This is one of the better pastes on the market, as you can see in our best thermal paste tests.
▶ 480x480 3.95-inch display
One thing that immediately sets the Hyper Vision 360 apart from its competitors, at least in this price range, is the inclusion of a 3.95-inch LCD screen. While this screen is larger than the one included with the Grand Vision 360, it has the same resolution – so the image quality is effectively lower due to the decreased PPI of the panel.
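The PPI point follows from basic geometry: pixels per inch is the diagonal pixel count divided by the diagonal size in inches. A minimal sketch for the Hyper Vision 360's 480x480, 3.95-inch panel (the Grand Vision 360's exact panel size isn't stated here, but the same pixel count spread over a larger diagonal always yields a lower PPI):

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Hyper Vision 360: 480x480 resolution on a 3.95-inch screen
print(round(ppi(480, 480, 3.95)))  # ~172 PPI
```

Stretch the same 480x480 grid over any larger diagonal and the PPI only drops from there, which is why the bigger screen here isn't automatically a sharper one.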
▶ Consumer unfriendly: Accessible refill port - but using it breaks the warranty
I go out of my way to give credit when companies don’t put a consumer-unfriendly “warranty void if removed” sticker on top of the refill port – a practice that is technically illegal under the Magnuson-Moss Warranty Act in the USA.
Thermalright includes an accessible refill port hidden behind a sticker with its logo on it, and while it doesn’t place a scary sticker on top of the refill port, its warranty policy is no different from those that do. If you attempt to service your AIO by refilling it, Thermalright will deny you warranty service.
What a shame! This is a consumer-unfriendly move, especially when you consider that competitors such as be quiet! not only allow this but encourage it, by including additional coolant with AIOs like the Light Loop 360.
▶ Standard 27mm radiator
The radiator included has a standard 27mm thickness, which means it should fit in the vast majority of ATX PC cases, without space constraints.
▶ Discernable pump whine
During testing, I noticed that the AIO's pump emitted a subtle whine. While not overly loud, it almost interfered with the noise-normalized assessments I run at 38.9 dBA – and many users will prefer their pumps to run at 36.4 dBA or quieter.
This type of issue with a liquid pump is more of a hassle to deal with compared to potential problems with an air cooler – it's easier to replace a fan than it is to go through the process of RMAing an AIO for having a noisy pump and being potentially told it is operating within expected performance.
This is a factor worth considering when investing in liquid cooling if you're not fully confident in the ease of a company's warranty process.
▶ Mixed Bag: Software with tons of customization options, but stability may vary
You don’t need to install Thermalright’s software to operate the AIO normally, but you’ll need to download it if you wish to customize the display – and if you paid extra for a screen, why wouldn’t you want to customize it? Thermalright doesn’t make it obvious where to download the software, but this link will take you to the company’s TRCC software.
At first glance, you’ll see the screen above and might think that Thermalright’s software is rather basic. The software is deceptively simple looking, but exploring the options further reveals a wide variety of preset customization options – more than I’ve seen from any other AIO software I’ve used before.
That said, your mileage may vary when dealing with this software. While my own personal experience with Thermalright’s software has been flawless, I’ve received reports from users having problems with it on social media and the Tom’s Hardware Discord server. I’ve been able to help some of these users resolve their problems, but others I’ve been unable to help troubleshoot.
▶ Cable management features
Thermalright’s Hyper Vision AIO features pre-installed fans with a quick-connect system; the cables are routed through the tubing of the AIO with clips. This is a good effort, but it ends up a tad messier than needed – a minor complaint overall. The bigger problem with this SKU is its inferior performance compared to similarly named Thermalright products.
▶ 120mm TL-K12W fans
There’s more to a cooler than just the heatsink or radiator. The bundled fans have a significant impact on cooling and noise levels, as well as how the cooler looks in your case.
These fans are 25mm thick and have a quick connect system designed to simplify installation and (in theory) reduce cable clutter, and have subtle ARGB accents.
There are many factors other than the CPU cooler that can influence your cooling performance, including the case you use and the fans installed in it. A system's motherboard can also influence this, especially if it suffers from bending, which results in poor cooler contact with the CPU.
In order to prevent bending from impacting our cooling results, we’ve installed Thermalright’s LGA 1700 contact frame into our testing rig. If your motherboard is affected by bending, your thermal results will be worse than those shown below. Not all motherboards are affected equally by this issue: I tested Raptor Lake CPUs in two motherboards, and while one showed significant thermal improvements after installing Thermalright’s LGA1700 contact frame, the other showed no difference in temperatures whatsoever. Check out our review of the contact frame for more information.
I’ve also tested this cooler with Intel’s latest platform, Arrow Lake and LGA 1851.
The installation of this AIO is simple. The following steps assume that you will mount the radiator to your case first, which is generally a good idea unless you are building in a very cramped case.
1. You’ll first need to place the backplate against the rear of the motherboard.
2. Next, you’ll secure the backplate by attaching standoffs. You’ll then place the mounting bars on top of the standoffs, and secure them with the included screws.
3. Apply the included thermal paste to your CPU. If you have any questions on how to do this properly, please refer to our handy guide on how to apply thermal paste.
4. Place the CPU block on top of the CPU, and secure it with a screwdriver. Attach the LCD screen after securing the CPU block.
5. Connect the PWM and ARGB cables to your motherboard. If you wish to use the display, you’ll need to connect the USB cable to the CPU block on one end, and to a USB and SATA power header on the other.
6. Next you can power on your computer, as installation is complete.
Without power limits enforced, Intel’s Core Ultra 9 285K and Core i7-14700K CPUs will hit their peak temperature (TJ Max) and thermally throttle with even the strongest air coolers and most liquid coolers on the market. For the best liquid coolers, the results of this test are shown using the CPU’s temperature. However, when the CPU reaches its peak temperature, I measure the CPU package power to determine the maximum wattage cooled, to best compare performance. It’s important to note that thermal performance can scale differently depending on the CPU a cooler is tested with.
Looking at the results here with Intel’s “Raptor Lake” Core i7-14700K, you’ll notice the chart is measuring the CPU package power instead of the average temperature. This is because while Thermalright’s Hyper Vision 360 is a strong AIO, it isn’t strong enough to keep the CPU from reaching its peak temperature in stress testing.
It is important to remember that CPU coolers can scale (and perform) differently depending on the CPU they are paired with, due to differences in manufacturing processes and the location of hotspots on the CPU – and in this case, the Hyper Vision 360 performs even worse with Arrow Lake. In fact, in my stress testing, FSP’s MP7 air cooler actually performed better than Thermalright’s AIO on Intel’s Core Ultra 9 285K CPU.
Worse yet, this cooler runs LOUD by default, reaching a whopping 53 dBA! I don’t understand why Thermalright let these fans run so loudly, especially since – as the next section will show – they perform well even when normalized to low noise levels!
Finding the right balance between fan noise levels and cooling performance is important. While running fans at full speed can improve cooling capacity to some extent, the benefits are limited and many users prefer a quieter system.
While this cooler’s default behavior is loud with underwhelming maximum performance, when noise-normalized, it is very strong, losing only 1W of performance – a figure so small as to only be relevant for benchmarking.
253W results
My recent reviews have focused more on tests with both the CPU and GPU being stressed, but many of y’all have indicated that you would like to see more CPU-only tests. In response, I’ve started testing Intel’s “Arrow Lake” Core Ultra 9 285K with a 253W limit.
The results of Thermalright’s Hyper Vision when a more reasonable 253W limit is imposed on Intel’s Core Ultra 9 285K aren’t as good as I had expected. At 80 degrees C, it’s 3C ahead of the worst result I have from an AIO in this benchmark and 4C behind the best result I’ve recorded. My results are somewhat limited here, but I expect this to be more or less “average” for the AIOs I test in the future.
Testing a CPU Cooler in isolation is great for synthetic benchmarks, but doesn’t tell the whole story of how it will perform. I’ve incorporated two tests with a power limit imposed on the CPU, while also running a full load on MSI’s GeForce RTX 4070 Ti SUPER 16G VENTUS 3X.
The CPU power limit of 135W was chosen based on the worst CPU power consumption I observed in gaming with Intel’s Core Ultra 9 285K, which was in Rise of the Tomb Raider.
In this test, the Hyper Vision 360 actually performs very well – at least in terms of overall temperature. It averaged 61C, only 2C behind the best result I’ve recorded. In terms of noise levels (when tied to the default fan curve of my motherboard) it wasn’t too bad either, measuring 41.4 dBA – though this is technically the third-loudest result in this chart.
Our second round of CPU + GPU testing is also performed with Arrow Lake. The power limit of 85W was chosen based on typical power consumption in gaming scenarios using the Core Ultra 9 285K CPU. This should be fairly easy for most coolers; the main point of this test is to see how quietly (or loudly!) a cooler runs in low-intensity scenarios.
With a CPU temperature of 53 degrees C, the thermal performance was alright. What’s frustrating about the results in this section is the noise levels. Looking at the chart, you might think: But Albert, that reading of 38.9 is even better than the Grand Vision 360!
And that technically would be true. But what this chart is incapable of expressing is that in the case of the Hyper Vision 360, the noise level is caused by the pump whine of the unit I tested. If the pump weren’t noisy, the rating here would be even quieter – and that annoys me, because this cooler would be a better value if the pump were properly tuned! Pump whine has a more irritating pitch and frequency than any PC fan I’ve heard.
Unless you really need a slightly larger LCD screen, I advise passing on Thermalright’s Hyper Vision 360; most users should instead consider purchasing Thermalright’s superior Grand Vision 360. Both AIOs cost the same, but the Hyper Vision 360 has worse overall performance and minor but noticeable pump whine. Most of the time, Thermalright products earn my glad recommendation on the strength of their test results. But in this case, the Hyper Vision 360 is an example of unnecessary SKU spam that only serves to confuse customers. It’s not worth your time or hard-earned money.