
Green Easter egg? What we already know (and what we don't) about the GeForce GTX 1650

It's a real shame that Nvidia keeps its new entry-level GeForce class so far out of the spotlight. These bread-and-butter cards are in fact ideal not only for system integrators and pre-built PC manufacturers, but also for a mostly young and not exactly cash-rich clientele. And it is often the first "real" graphics card bought for little money that establishes later brand loyalty. The board partners are therefore certainly not happy about this neglect of the cards' image.

Information is scarce, and since the NDA doesn't permit anything anyway (and you couldn't test anything even if you had something to test, whose very existence you aren't allowed to write about), I can only offer guesses and my own calculations at this point. For some time now, only the figure of 896 CUDA cores has been circulating. The rest is unknown, but it really amounts to simple Turing math, so let's just calculate.

Turing's streaming multiprocessors (SM) have fewer CUDA cores than Pascal's, but the design partially compensates for this by distributing more SMs across the GPU. The Turing architecture assigns one warp scheduler and one dispatch unit to each group of 16 CUDA cores (versus 32 cores per scheduler on Pascal). Four of these 16-core groupings make up one SM, together with 96 KB of cache that can be configured as 64 KB L1 / 32 KB shared memory (or vice versa) and four texture units. Purely mathematically, the TU117 of the GeForce GTX 1650 should therefore have 14 SMs, if the figure of 896 CUDA cores is correct.
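Since nothing here is official, the SM count is just arithmetic on the leaked core count. A quick sanity check of that calculation, assuming Turing's layout of four 16-core groupings per SM:

```python
# Back-of-the-envelope SM count for the rumored TU117,
# assuming the leaked figure of 896 CUDA cores (unconfirmed)
# and Turing's organization of 4 groups of 16 cores per SM.
CORES_PER_GROUP = 16
GROUPS_PER_SM = 4
cuda_cores = 896  # leaked, unconfirmed

cores_per_sm = CORES_PER_GROUP * GROUPS_PER_SM  # 64 cores per SM
sm_count = cuda_cores // cores_per_sm
print(sm_count)  # -> 14
```

If the core count turns out differently, the SM count shifts accordingly in steps of 64 cores.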

Because Turing has twice as many schedulers as Pascal, an instruction only needs to be issued to the CUDA cores every second cycle. In between, there is enough room to send a different instruction to any other unit, including the INT32 cores. This is no different on the GTX 1650 than on the larger Turing cards.

Nvidia (speculatively) splits these 14 SMs into two graphics processing clusters. The 14 SMs, each with four associated texture units, yield a total of 56 texture units for the whole GPU. Four 32-bit memory controllers give the TU117 an aggregated 128-bit bus that operates the four GDDR5 modules at up to 128 GB/s. That is a considerable bandwidth disadvantage compared to the GTX 1660 and even below the level of the old GeForce GTX 1060, but still significantly above that of the GeForce GTX 1050 Ti.
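The texture-unit and bandwidth figures follow from the same simple arithmetic; a sketch under the assumptions of 14 SMs and GDDR5 running at 8 Gbps effective per pin:

```python
# Speculative TU117 figures derived from simple Turing arithmetic.
sm_count = 14            # speculative: 896 CUDA cores / 64 per SM
tmus_per_sm = 4          # texture units per Turing SM
texture_units = sm_count * tmus_per_sm           # -> 56

bus_width_bits = 4 * 32  # four 32-bit memory controllers
gddr5_gbps_per_pin = 8   # 4000 MHz GDDR5, 8 Gbps effective data rate
bandwidth_gbs = bus_width_bits * gddr5_gbps_per_pin / 8  # bits -> bytes
print(texture_units, bandwidth_gbs)  # -> 56 128.0
```

The same formula with a 192-bit bus reproduces the GTX 1660's 192 GB/s, which is where the bandwidth gap comes from.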

Purely speculatively, then, I would place this card's performance somewhere near an older GeForce GTX 1060 with 3 GB, but that is just an estimate, nothing more. That is precisely why I have summarized the things I consider relevant and reasonably safe in the following table:

| | GeForce GTX 1060 | GeForce GTX 1660 Ti 6 GB | GeForce GTX 1660 6 GB | GeForce GTX 1650 |
|---|---|---|---|---|
| Architecture (GPU) | GP106 | TU116-400 | TU116-300 | TU117 |
| CUDA Cores | 1280 | 1536 | 1280 | 896 |
| Tensor Cores | No | No | No | No |
| RT Cores | No | No | No | No |
| Texture Units | 80 | 96 | 80 | 56 |
| Base Clock | 1506 MHz | 1500 MHz | 1530 MHz | 1485 MHz |
| GPU Boost Clock | 1708 MHz | 1770 MHz | 1785 MHz | 1665 MHz |
| Memory | 6 GB GDDR5 | 6 GB GDDR6 | 6 GB GDDR5 | 4 GB GDDR5 |
| Memory Bus | 192-bit | 192-bit | 192-bit | 128-bit |
| Memory Clock | 4000 MHz | 6000 MHz | 4000 MHz | 4000 MHz |
| ROPs | 48 | 48 | 48 | 32 |
| L2 Cache | 1.5 MB | 1.5 MB | 1.5 MB | 1 MB |
| Board Design | ? | PG161 | PG165 | PG174 |
| TDP | 120 W | 120 W | 120 W | 75 W |
| Transistors | 4.4 billion | 6.6 billion | 6.6 billion | ? |
| Die Size | 200 mm² | 284 mm² | 284 mm² | ? |
| SLI | No | No | No | No |


In terms of power consumption, the reference design should not exceed the 75-watt mark, while the factory-overclocked cards are expected to do so. This also invites the speculation that there could be entry-level cards without an external power connector alongside factory-OC models with a 6-pin connector. With a little more arithmetic, the possible upper limit of almost 1900 MHz for the boost clock, and the usual scaling of the GTX 1660, one would probably land somewhere around 80 to 85 watts for a moderate OC and up to 100 watts for a maximum OC.
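That extrapolation can be sketched with the common rule of thumb that dynamic power scales roughly with clock times voltage squared. All numbers here are speculative (the 75 W reference figure, the normalized voltages, the OC clocks); this is only an illustration of the scaling, not measured data:

```python
# Rough OC power estimate, assuming dynamic power ~ f * V^2.
# All inputs are speculative: 75 W reference TDP at 1665 MHz boost,
# voltages normalized to an assumed 1.00 at stock.
ref_power_w = 75.0
ref_clock_mhz = 1665.0
ref_voltage = 1.00  # assumed, normalized

def estimated_power(clock_mhz: float, voltage: float) -> float:
    """Scale the reference power by clock ratio and voltage ratio squared."""
    return ref_power_w * (clock_mhz / ref_clock_mhz) * (voltage / ref_voltage) ** 2

print(round(estimated_power(1815, 1.00)))  # moderate OC, stock voltage -> ~82 W
print(round(estimated_power(1900, 1.08)))  # max OC with a voltage bump -> ~100 W
```

With stock voltage, a ~1900 MHz boost alone lands near 86 W; reaching the 100 W ballpark requires the voltage increase a maximum OC would likely need.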

Nvidia has not yet confirmed the rumored launch date of April 23, 2019, nor released any data or even general information. So at the moment we only know roughly what could come, and when. The rest is based solely on conjecture and extrapolation (as in this news piece). And I am not allowed to write what I personally already know. So all that remains is this little attempt at interpretation.


About the author

Igor Wallossek

Editor-in-chief and namesake of igor'sLAB, the content successor of Tom's Hardware Germany, whose license was returned in June 2019 in order to better meet the qualitative demands of web content and the challenges of new media such as YouTube with its own channel.

Computer nerd since 1983, audio freak since 1979, and open to pretty much anything with a plug or a battery for over 50 years.
