Gaming graphics cards: Best Graphics Cards 2023 – Top Gaming GPUs for the Money

Nvidia GeForce RTX 4090 Review: Queen of the Castle


Editor’s Choice


Tom’s Hardware Verdict

The RTX 4090 delivers on the technological and performance fronts, easily besting previous generation offerings. With major enhancements to all the core hardware and significantly higher clock speeds, plus forward-looking tech like DLSS 3, the new bar has been set very high — with an equally high price tag.


Pros
  • Fastest GPU currently available
  • Major architectural improvements
  • DLSS 3 addresses CPU bottlenecks
  • Excellent for content creation
  • AV1 support and dual NVENC

Cons
  • Extreme pricing and power
  • Limited gains at 1440p and lower resolutions
  • DLSS 3 adoption will take time
  • We need to see AMD RDNA 3
  • The inevitable RTX 4090 Ti looms


The Nvidia GeForce RTX 4090 hype train has been building for most of 2022. After more than a year of extreme GPU prices and shortages, CEO Jensen Huang revealed key details at GTC 2022, with a price sure to make many cry out in despair. $1,599 for the top offering from Nvidia’s Ada Lovelace architecture? Actually, that’s only $100 more than the RTX 3090 at launch, and if the card can come anywhere near Nvidia’s claims of 2x–4x the performance of an RTX 3090 Ti, there will undoubtedly be people willing to pay it. The RTX 4090 now sits atop the GPU benchmarks hierarchy throne, at least at 1440p and 4K. For anyone who’s after the fastest possible GPU, never mind the price, it now ranks among the best graphics cards.

That’s not to say the RTX 4090 represents a good value, though that can get a bit subjective. Looking just at the FPS delivered by the various GPUs per dollar spent, it ranks dead last out of 68 GPUs from the past decade. Except our standard ranking uses 1080p ultra performance, and the 4090 most decidedly is not a card designed to excel at 1080p. In fact, it’s so fast that CPU bottlenecks are still a concern even when gaming at 1440p ultra. Look at 4K performance and factor in ray tracing, and you could argue it’s possibly one of the best values — see what we mean about value being subjective?
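For the curious, here's roughly how that FPS-per-dollar ranking works. The sketch below uses placeholder card names and numbers, not our actual benchmark data:

```python
# Toy illustration of an FPS-per-dollar ranking. The card names and FPS
# figures are placeholders, not measured results from our test suite.
cards = {
    "Hypothetical flagship": {"fps_1080p_ultra": 200, "price": 1599},
    "Hypothetical midrange": {"fps_1080p_ultra": 120, "price": 399},
}

for name, data in sorted(cards.items(),
                         key=lambda kv: kv[1]["fps_1080p_ultra"] / kv[1]["price"],
                         reverse=True):
    value = data["fps_1080p_ultra"] / data["price"]
    print(f"{name}: {value:.3f} FPS per dollar")
```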

Again, you’ll pay dearly for the privilege of owning an RTX 4090 card, as the base model RTX 4090 Founders Edition costs $1,599 and partner cards can push the price up to $1,999. But for those who want the best, or anyone with deep enough pockets that $2,000 isn’t a huge deal, this is the card you’ll want to get right now, and we’d be surprised to see anything surpass it in this generation, short of a future RTX 4090 Ti. 


Current Top-Tier GPU Specifications
Graphics Card RTX 4090 RTX 3090 Ti RTX 3090 RTX 3080 Ti RX 6950 XT Arc A770 16GB
Architecture AD102 GA102 GA102 GA102 Navi 21 ACM-G10
Process Technology TSMC 4N Samsung 8N Samsung 8N Samsung 8N TSMC N7 TSMC N6
Transistors (Billion) 76.3 28.3 28.3 28.3 26.8 21.7
Die Size (mm²) 608.4 628.4 628.4 628.4 519 406
SMs / CUs / Xe-Cores 128 84 82 80 80 32
GPU Shaders 16384 10752 10496 10240 5120 4096
Tensor Cores 512 336 328 320 N/A 512
Ray Tracing “Cores” 128 84 82 80 80 32
Boost Clock (MHz) 2520 1860 1695 1665 2310 2100
VRAM Speed (Gbps) 21 21 19.5 19 18 17.5
VRAM (GB) 24 24 24 12 16 16
VRAM Bus Width 384 384 384 384 256 256
L2 / Infinity Cache (MiB) 72 6 6 6 128 16
ROPs 176 112 112 112 128 128
TMUs 512 336 328 320 320 256
TFLOPS FP32 82.6 40 35.6 34.1 23.7 17.2
TFLOPS FP16 (FP8/INT8) 661 (1321) 160 (320) 142 (285) 136 (273) 47.4 138 (275)
Bandwidth (GBps) 1008 1008 936 912 576 560
TDP (watts) 450 450 350 350 335 225
Launch Date Oct 2022 Mar 2022 Sep 2020 Jun 2021 May 2022 Oct 2022
Launch Price $1,599 $1,999 $1,499 $1,199 $1,099 $349

Here’s a look at the who’s who of the extreme performance graphics card world, with the fastest cards from Nvidia, AMD, and now Intel. Obviously, Intel’s Arc A770 competes on a completely different playing field, but it’s still interesting to show how it stacks up on paper.

We’re going to simply refer you to our Nvidia Ada Lovelace Architectural deep dive if you want to learn about all the new technologies and changes made with the RTX 40-series. The above specs table tells a lot of what you need to know. Transistor counts have nearly tripled compared to Ampere; core counts on the RTX 4090 are 52% higher than the RTX 3090 Ti; GPU clock speeds are 35% faster, and the GDDR6X memory? It’s still mostly unchanged, except there’s now 12x more L2 cache to keep the GPU from having to request data from memory as often.

On paper, that gives the RTX 4090 just over double the compute performance of the RTX 3090 Ti, and there are definitely workloads where you’ll see exactly those sorts of gains. But under the hood, there are other changes that can further widen the gap.
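That "just over double" figure falls straight out of the specs table: theoretical FP32 throughput is shader count, times two FMA operations per clock, times the boost clock. A quick back-of-the-envelope check:

```python
# Theoretical FP32 TFLOPS = shaders * 2 FMA ops per clock * boost clock (MHz) / 1e6
def fp32_tflops(shaders, boost_mhz):
    return shaders * 2 * boost_mhz / 1e6

rtx_4090 = fp32_tflops(16384, 2520)     # ~82.6 TFLOPS
rtx_3090_ti = fp32_tflops(10752, 1860)  # ~40.0 TFLOPS
print(f"RTX 4090: {rtx_4090:.1f} TFLOPS, RTX 3090 Ti: {rtx_3090_ti:.1f} TFLOPS")
print(f"Ratio: {rtx_4090 / rtx_3090_ti:.2f}x")  # ~2.06x
```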

Ray tracing once again gets a big emphasis, and three new technologies — Shader Execution Reordering (SER), Opacity Micro-Maps (OMM) and Displaced Micro-Meshes (DMM) — all offer potential improvements. However, they also require developers to use them, which means existing games and engines won’t benefit.

Deep learning and AI workloads also stand to see massive generational improvements. Ada includes the FP8 Transformer Engine from Hopper H100, along with FP8 number format support. That means double the compute per Tensor core, for algorithms that can use FP8 instead of FP16, and up to four times the number-crunching prowess of the 3090 Ti.

One algorithm that can utilize the new Tensor cores — along with an improved Optical Flow Accelerator (OFA) — is DLSS 3. In fact, DLSS 3 requires an RTX 40-series graphics card, so earlier RTX cards won’t benefit. What does DLSS 3 do? It takes the current and previously rendered frames and generates an extra in-between frame to fill the gap. In some cases, it can nearly double the performance of DLSS 2. We’ll take a closer look at DLSS 3 later in this review.
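The "nearly double" part is easy to model: if one generated frame is slotted between every pair of rendered frames, the displayed frame rate roughly doubles, minus whatever time the Frame Generation step itself takes. Here's a simplified sketch; the per-frame generation overhead is an illustrative assumption, not a measured value:

```python
# Simplified model of frame generation: one AI-generated frame is inserted
# between each pair of rendered frames. The generation overhead below is an
# illustrative assumption, not a measured number.
def effective_fps(rendered_fps, gen_overhead_ms=1.0):
    render_ms = 1000 / rendered_fps
    # Each displayed pair (one rendered + one generated frame) takes one
    # render interval plus the generation overhead.
    pair_ms = render_ms + gen_overhead_ms
    return 2 * 1000 / pair_ms

print(f"60 FPS rendered -> ~{effective_fps(60):.0f} FPS displayed with Frame Generation")
```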

From a professional perspective, particularly for anyone interested in deep learning, you can easily justify the cost of the RTX 4090 — time is money, and doubling or quadrupling throughput will definitely save time. Content creators will find a lot to like and it’s a quick and easy upgrade from a 3090 or 3090 Ti to the 4090. We’ll look at ProViz performance as well.

But what about gamers? Unlike the RTX 3090 and 3090 Ti, Nvidia isn’t going on about how the RTX 4090 is designed for professionals. Yes, it will work great for such people, but it’s also part of the GeForce family, and Nvidia isn’t holding back on its gaming performance claims and comparisons. Maybe the past two years of cryptocurrency mining are to blame, though GPU mining is now unprofitable so at least gamers won’t have to fight miners for cards this round. 


Jarred Walton is a senior editor at Tom’s Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge ‘3D decelerators’ to today’s GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

Nvidia GeForce RTX 4070 Review: Mainstream Ada Arrives


Tom’s Hardware Verdict

The RTX 4070 brings Ada Lovelace down to the mainstream with a $599 price tag. It’s basically on par with the RTX 3080 in a more compact and efficient package, with DLSS 3 Frame Generation sweetening the pot.

Pros
  • Efficient and fast graphics card
  • Great features at a mostly reasonable price
  • Excellent ray tracing and AI hardware

Cons
  • Still more expensive than the last-gen 3070
  • Unnecessary 16-pin power connector and adapter
  • DLSS 3 isn't really a killer feature


Nvidia positions its new GeForce RTX 4070 as a great upgrade for GTX 1070 and RTX 2070 users, but that doesn’t hide the fact that in many cases, it’s effectively tied with the last generation’s RTX 3080. The $599 MSRP means it’s also replacing the RTX 3070 Ti, with 50% more VRAM and dramatically improved efficiency. Is the RTX 4070 one of the best graphics cards? It’s certainly an easier recommendation than cards that cost $1,000 or more, but you’ll inevitably trade performance for those saved pennies.

At its core, the RTX 4070 borrows heavily from the RTX 4070 Ti. Both use the AD104 GPU, and both feature a 192-bit memory interface with 12GB of 21Gbps GDDR6X VRAM. The main difference, other than the $200 price cut, is that the RTX 4070 has 5,888 CUDA cores compared to 7,680 on the 4070 Ti. Clock speeds are also theoretically a bit lower, though we'll get into that more in our testing. Ultimately, we're looking at a 25% price cut to go with the 23% reduction in processor cores.

We’ve covered Nvidia’s Ada Lovelace architecture already, so start there if you want to know more about what makes the RTX 40-series GPUs tick. The main question here is how the RTX 4070 stacks up against its costlier siblings, not to mention the previous generation RTX 30-series. Here are the official specifications for the reference card. 


Nvidia RTX 4070 Compared to Other Ada / Ampere GPUs
Graphics Card RTX 4070 RTX 4080 RTX 4070 Ti RTX 3080 Ti RTX 3080 RTX 3070 Ti RTX 3070
Architecture AD104 AD103 AD104 GA102 GA102 GA104 GA104
Process Technology TSMC 4N TSMC 4N TSMC 4N Samsung 8N Samsung 8N Samsung 8N Samsung 8N
Transistors (Billion) 35.8 45.9 35.8 28.3 28.3 17.4 17.4
Die Size (mm²) 294.5 378.6 294.5 628.4 628.4 392.5 392.5
SMs 46 76 60 80 68 48 46
GPU Cores (Shaders) 5888 9728 7680 10240 8704 6144 5888
Tensor Cores 184 304 240 320 272 192 184
Ray Tracing “Cores” 46 76 60 80 68 48 46
Boost Clock (MHz) 2475 2505 2610 1665 1710 1765 1725
VRAM Speed (Gbps) 21 22.4 21 19 19 19 14
VRAM (GB) 12 16 12 12 10 8 8
VRAM Bus Width 192 256 192 384 320 256 256
L2 Cache (MiB) 36 64 48 6 5 4 4
ROPs 64 112 80 112 96 96 96
TMUs 184 304 240 320 272 192 184
TFLOPS FP32 (Boost) 29.1 48.7 40.1 34.1 29.8 21.7 20.3
TFLOPS FP16 (FP8) 233 (466) 390 (780) 321 (641) 136 (273) 119 (238) 87 (174) 81 (163)
Bandwidth (GBps) 504 717 504 912 760 608 448
TGP (watts) 200 320 285 350 320 290 220
Launch Date Apr 2023 Nov 2022 Jan 2023 Jun 2021 Sep 2020 Jun 2021 Oct 2020
Launch Price $599 $1,199 $799 $1,199 $699 $599 $499

There’s a pretty steep slope going from the RTX 4080 to the 4070 Ti, and from there to the RTX 4070. We’re now looking at the same number of GPU shaders — 5888 — as Nvidia used on the previous generation RTX 3070. Of course, there are plenty of other changes that have taken place.

Chief among those is the massive increase in GPU core clocks. 5888 shaders running at 2.5GHz will deliver a lot more performance than the same number of shaders clocked at 1.7GHz — almost 50% more performance, by the math. Nvidia also likes to be conservative, and real-world gaming clocks are closer to 2.7GHz… though the RTX 3070 also clocked closer to 1.9GHz in our testing.
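The math in question is simple: theoretical FP32 throughput scales linearly with shader count and clock speed, so with identical shader counts the uplift tracks the clock difference. A quick check using the rounded clocks above:

```python
# Theoretical FP32 TFLOPS = shaders * 2 FMA ops per clock * clock (GHz) / 1000
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

at_2_5ghz = fp32_tflops(5888, 2.5)  # ~29.4 TFLOPS
at_1_7ghz = fp32_tflops(5888, 1.7)  # ~20.0 TFLOPS
print(f"Uplift from clock speed alone: {at_2_5ghz / at_1_7ghz - 1:.0%}")  # ~47%
```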

The memory bandwidth ends up being slightly higher than the 3070 as well, but the significantly larger L2 cache will inevitably mean it performs much better than the raw bandwidth might suggest. Moving to a 192-bit interface instead of the 256-bit interface on the GA104 does present some interesting compromises, but we’re glad to at least have 12GB of VRAM this round — the 3060 Ti, 3070, and 3070 Ti with 8GB are all feeling a bit limited these days. But short of using memory chips in “clamshell” mode (two chips per channel, on both sides of the circuit board), 12GB represents the maximum for a 192-bit interface right now.
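Peak bandwidth is just bus width times per-pin data rate, which shows why the narrower 192-bit bus still edges out the 3070's 256-bit bus:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (Gbps)
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(f"RTX 4070: {bandwidth_gb_s(192, 21):.0f} GB/s")  # 504 GB/s
print(f"RTX 3070: {bandwidth_gb_s(256, 14):.0f} GB/s")  # 448 GB/s
```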

While AMD was throwing shade yesterday about the lack of VRAM on the RTX 4070, it’s important to note that AMD has yet to reveal its own “mainstream” 7000-series parts, and it will face similar potential compromises. A 256-bit interface allows for 16GB of VRAM, but it also increases board and component costs. Perhaps we’ll get a 16GB RX 7800 XT, but the RX 7700 XT will likely end up at 12GB VRAM as well. As for the previous-gen AMD GPUs having more VRAM, that’s certainly true, but capacity is only part of the equation, so we need to see how the RTX 4070 stacks up before declaring a victor.

Another noteworthy item is the 200W TGP (Total Graphics Power), and Nvidia was keen to emphasize that in many cases, the RTX 4070 will use less power than TGP, while competing cards (and previous generation offerings) usually hit or exceeded TGP. We can confirm that’s true here, and we’ll dig into the particulars more later on.

The good news is that we finally have a latest-gen graphics card starting at $599. There will naturally be third-party overclocked cards that jack up the price, with extras like RGB lighting and beefier cooling, but Nvidia has restricted this pre-launch review to cards that sell at MSRP. We’ve got a PNY model as well that we’ll look at in more detail in a separate review, though we’ll include the performance results in our charts. (Spoiler: It’s just as fast as the Founders Edition.)

Four GPCs, one NVENC, and one NVDEC for the RTX 4070 (Image credit: Tom's Hardware)
The full AD104 implementation includes five GPCs, four NVDEC units, and two NVENC blocks. (Image credit: Tom's Hardware)

Above are the block diagrams for the RTX 4070 and for the full AD104, and you can see all the extra stuff that’s included but turned off on this lower-tier AD104 implementation. None of the blocks in that image are “to scale,” and Nvidia didn’t provide a die shot of AD104, so we can’t determine just how much space is dedicated to the various bits and pieces — not until someone else does the dirty work, anyway (looking at you, Fritzchens Fritz).

As discussed previously, AD104 includes Nvidia’s 4th-gen Tensor cores, 3rd-gen RT cores, new and improved NVENC/NVDEC units for video encoding and decoding (now with AV1 support), and a significantly more powerful Optical Flow Accelerator (OFA). The latter is used for DLSS 3, and while it’s “theoretically” possible to do Frame Generation with the Ampere OFA (or using some other alternative), so far only RTX 40-series cards can provide that feature.

The Tensor cores meanwhile now support FP8 with sparsity. It’s not clear how useful that is in all workloads, but AI and deep learning have certainly leveraged lower precision number formats to boost performance without significantly altering the quality of the results — at least in some workloads. It will ultimately depend on the work being done, and figuring out just what uses FP8 versus FP16, plus sparsity, can be tricky. Basically, it’s a problem for software developers, but we’ll probably see additional tools that end up leveraging such features (like Stable Diffusion or GPT Text Generation).

Those interested in AI research may find other reasons to pick an RTX 4070 over its competition, and we’ll look at performance in some of those tasks as well as gaming and professional workloads. But before the benchmarks, let’s take a closer look at the RTX 4070 Founders Edition.



Differences between a professional video card and a gaming one

To most users, all graphics cards look alike. In reality they fall into three categories – office, gaming, and professional. The first two are self-explanatory, but professional cards are far less widely understood.

Why you need professional video cards

Every video card performs one primary task: it outputs an image that has been processed by its graphics processor. The graphics chip receives a full description of the scene – the position of objects, their colors, lighting, visibility, and much more. Games once relied on classic 2D pixel graphics, but that era has passed; modern 3D scenes are built from polygons.

A polygon is a flat, multi-sided surface within a three-dimensional scene. Most games use triangular polygons; together, these triangles form the complete image, and the more of them a model uses, the better the result.
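To make that concrete, a 3D model is essentially a list of vertex positions plus a list of triangles that index into it. Here's a minimal, engine-agnostic sketch:

```python
# A minimal triangle-mesh representation: vertices in 3D space plus
# triangles defined as index triplets into the vertex list.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (1.0, 1.0, 0.0),
    (0.0, 1.0, 0.0),
]
# Two triangles forming a square face (a quad split along its diagonal).
triangles = [(0, 1, 2), (0, 2, 3)]

print(f"{len(vertices)} vertices, {len(triangles)} triangles")
```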

Older games featured noticeably angular character models because the hardware of the time could only handle a minimal number of polygons. As video cards improved, characters became far more realistic, something that is easy to see by comparing successive releases in long-running game series.

A single character model can consume enormous resources: typically 15,000 to 45,000 triangles, and in some games 80,000 polygons or more. The gaming GPU's job is to render a bright, vivid image with excellent color reproduction and a wide range of effects.

Professional video cards perform largely the same work as gaming cards, but the emphasis shifts to accuracy, because real-world products are manufactured from these designs. As a result, polygon counts can be significantly higher than in games.

Key differences between gaming and professional video cards

Below are the main points that separate the two categories.

Video memory

This is one of the fundamental differences. The leading gaming card of its generation, the RTX 3090, ships with 24GB of VRAM, which more than covers current gaming needs. The top professional model, the Quadro RTX 8000, carries 48GB. Roughly the same twofold gap exists in earlier generations and will most likely persist in the near future.

Strict standard

Gaming cards are customized by third-party board partners such as Asus, MSI, and Palit. NVIDIA and AMD allow a degree of experimentation and don't exercise tight control over these designs.

Professional video cards are different: all of them are produced under strict supervision by NVIDIA or AMD, using premium components, so the customer is guaranteed a consistently high-quality product. This matters because these chips are used in demanding professional work where the requirements are high.

ECC memory

As noted above, accuracy comes first in professional hardware, and errors during a job are unacceptable. To eliminate even minor faults, professional cards use ECC memory, whose key feature is the rapid detection and correction of bit errors.

ECC does reduce memory performance slightly: testing puts the penalty versus the memory used in gaming cards at around 3%. The same testing showed that, during sustained compute work, ECC corrects on average 70-90 errors per 100 hours of operation, or nearly one error per hour.

When rendering an image or scene, such errors are not critical: at worst, one pixel in one frame ends up the wrong color. But professional cards are often used to simulate and visualize fluid flow, particle systems, heat distribution, or structural deformation, and there a calculation error can be critical or force the entire computation to be restarted. Such jobs can run for a week or even longer.
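The principle behind that correction is the same one used in server memory: a few extra check bits stored alongside each data word let the hardware pinpoint and flip back a single corrupted bit. The toy Hamming(7,4) example below illustrates the idea; real GPU ECC protects much wider words, but the mechanism is analogous.

```python
# A toy Hamming(7,4) code: 4 data bits protected by 3 parity bits, enough to
# detect and correct any single flipped bit. Real ECC uses wider SECDED codes
# over full memory words, but the principle is the same.
def hamming74_encode(d):                 # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_correct(code):
    c = code[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # parity check over positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3     # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1            # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]      # recovered data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[5] ^= 1                           # simulate a single-bit memory error
assert hamming74_correct(stored) == word
print("single-bit error corrected")
```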

OpenGL

OpenGL is used in developing a wide range of 2D/3D graphics applications. Driver-level support and certification for it significantly speed up many workflows, but they also add considerably to the final cost of the product.

Drivers

Professional video cards ship with dedicated drivers that expose additional settings, useful options, and a range of specialized monitoring tools that gaming drivers lack. NVIDIA has also developed a dedicated BIOS for Quadro cards to further improve the chip's behavior; for example, the card lets you choose between a larger usable memory pool and error correction, and extra hardware blocks in the chip accelerate double-precision workloads.

Relevance

Gaming video cards are refreshed with remarkable regularity: within two years, a top model loses its flagship status. Professional models have a much longer shelf life. A replacement typically appears only after 3-4 years, and the product life cycle is planned for 5-7 years.

Ports

The connectors are an easy way to tell what kind of card you're holding. Professional-grade cards often lack HDMI altogether: the primary output is DisplayPort, with DVI as a secondary option.

Price

The price tag on a professional video card can be genuinely frightening. If $1,500 for a top-end gaming model already seems excessive, nearly $15,000 for a professional version shocks many people. Demand remains high nonetheless, because modern graphics projects, high-precision calculation, and simulation work are difficult to do without them.

Is it possible to game on a professional video card?

The short answer is yes. Certain games may see FPS drops, but overall the experience will be comfortable. That said, buying such a card for gaming makes little sense: professional models are built for highly specialized tasks, and they are far too expensive to purchase purely for entertainment.

Support

Drivers for consumer cards may be updated two or three times a month, while professional cards might see updates once a quarter. This isn't because all the effort goes into the consumer segment; quite the opposite: software for professional adapters is tested far more thoroughly before release.

Another difference: consumer driver software is delivered as-is, and the standard advice after a failure is to roll back to the previous version. With professional cards, you can request help from an engineer for a deeper analysis of the cause and a proper fix.

In some countries, live phone support is even available, rather than just an email support ticket.

Application support also differs. In a number of professional programs, if the system lacks a professional video card, the computation falls back entirely to the CPU, and a consumer card, even a flagship, is simply ignored.

Conclusion

Professional video cards differ fundamentally from gaming cards in both capability and software. They are built specifically for design, engineering, video production, and similar professional work, while gaming models are tuned entirely for entertainment and lack the tools and guarantees that professional use demands.

Graphics Cards – All Series|ASUS CIS


Chipset

Radeon RX 7000 Series

Radeon™ RX 7600

Radeon RX 6000 Series

Radeon™ RX 6950 XT

Radeon™ RX 6900 XT

Radeon™ RX 6800 XT

Radeon™ RX 6800

Radeon™ RX 6700 XT

Radeon™ RX 6600 XT

Radeon™ RX 6600

Radeon™ RX 6500 XT

Radeon RX 5000 Series

Radeon™ RX 5700 XT

Radeon™ RX 5700

Radeon™ RX 5600 XT

Radeon™ RX 5500 XT

GeForce RTX 4000 Series

GeForce RTX™ 4090

GeForce RTX™ 4080

GeForce RTX™ 4070 Ti

GeForce RTX™ 4070

GeForce RTX 30 series

GeForce RTX™ 3090

GeForce RTX™ 3080

GeForce RTX™ 3070

GeForce RTX™ 3060 Ti

GeForce RTX™ 3060

GeForce RTX™ 3080 Ti

GeForce RTX™ 3070 Ti

GeForce RTX™ 3050

GeForce RTX™ 3090 Ti

GeForce RTX 20 series

GeForce RTX 2080 SUPER™

GeForce RTX 2070 SUPER™

GeForce RTX 2060 SUPER™

GeForce RTX™ 2080 Ti

GeForce RTX™ 2070

GeForce RTX™ 2080

GeForce RTX™ 2060

Radeon RX 500 Series

Radeon™ RX 590

Radeon™ RX 580

Radeon™ RX 570

Radeon™ RX 560

Radeon™ RX 550

Radeon™ 550

Radeon RX Vega series

Radeon™ RX Vega 64

Radeon™ RX Vega 56

GeForce GTX 16 Series

GeForce® GTX 1660 SUPER™

GeForce® GTX 1650 SUPER™

GeForce® GTX 1660 Ti

GeForce® GTX 1650

GeForce GTX 10 series

GeForce® GTX 1080 Ti

GeForce® GTX 1070 Ti

GeForce® GT 1030

GeForce GT 700

GeForce® GT 730

GeForce® GT 710


Republic of Gamers graphics cards are designed for the most demanding gamers and overclockers. These are the best models in their class, distinguished by innovative technologies and original engineering solutions.

ROG Matrix


Optimized for compact chassis, ROG Matrix GeForce RTX 2080 Ti features a unique maintenance-free water cooling system.

ROG Poseidon


Poseidon series graphics cards are equipped with the original DirectCU H2O hybrid cooler, which combines a large heatsink and Wing-blade fans for conventional air cooling with fittings for connecting the card to a water-cooling loop.

ROG Strix


ROG Strix graphics cards unleash the full power of modern GPUs thanks to carefully selected components, an advanced power-delivery system, and custom coolers.


The Dual series graphics cards are pure speed with no frills for a balanced computing setup. A powerful cooler with advanced technologies borrowed from flagship models is responsible for cooling.


TUF Gaming graphics cards offer high graphics performance combined with the reliability typical of the TUF Gaming product ecosystem. With an automated manufacturing process, a steel mounting plate, IP5X dust-certified optimized fans, and stringent quality tests, this graphics card is the perfect companion to a PC built with TUF Gaming components.


Turbo series graphics cards are designed specifically for computers with limited ventilation. They incorporate numerous optimizations that help airflow from the 80mm blower fan pass through the heatsink and exhaust out the rear of the card.


ASUS Dual MINI graphics cards are specifically designed to bring the thermal performance of larger cards to the evolving small-form-factor market. A balanced blend of performance and cooling power make the MINI an obvious choice for those looking for high-end graphics where full-sized cards just won’t fit.