Radeon RX Vega 56

By pictured test, do you mean the one in the link you provided? If so, then yes it does, but that comes down to both the field of the test and the test itself: the field is virtual reality, which is very demanding on both the CPU and the GPU, and the test itself comes down to the coding, the app, and how it's programmed.


Regular games are less demanding, and different games/apps stress the CPU and GPU to different degrees. VR demands a strong CPU to give good performance. This is one of the special cases I mentioned before.



If you're interested in virtual reality gaming, a better CPU can give better performance.


Also, CPUs age better than GPUs. 2012 CPUs like yours can still hold their own in many cases, though not as much in VR. My overclocked i5 3570K is serving me very well in my favorite titles like Destiny 2 at 1080p 60 FPS at around max settings.



As long as the GPU graph sits fixed at 99% and performance isn't stuttering, that 99% means the same thing no matter which CPU is paired with it or anything else. It is like driving a Ferrari and a Toyota Corolla at a fixed 60 mph (or km/h): neither of them is going faster than the other. You can add a 60 mph speed limit street to this example as the bottleneck to see it better.
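
Here's a tiny toy model of that speed-limit idea, if it helps; all the frame times below are made up purely for illustration, it just shows the arithmetic of "the slower part sets the pace":

```python
# Toy model: each frame has to wait for whichever part (CPU or GPU) takes
# longer, so the slower one sets the frame rate -- the "speed limit".
# Frame times below are invented purely for illustration.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """Frames per second when every frame waits for the slower component."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# GPU pinned at ~99% (GPU-bound): swapping in a faster CPU changes nothing.
print(fps(cpu_ms_per_frame=6.0, gpu_ms_per_frame=16.7))  # ~60 FPS
print(fps(cpu_ms_per_frame=3.0, gpu_ms_per_frame=16.7))  # still ~60 FPS

# Lower the resolution (GPU frame time shrinks) and the CPU becomes the limit.
print(fps(cpu_ms_per_frame=6.0, gpu_ms_per_frame=4.0))   # ~167 FPS, now CPU-bound
```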


Then again, testing your favorite games and seeing for yourself is the best course of action. Considering an upgrade is a good idea if the performance really bothers you, especially since you already have the parts with you. Are you looking at VR gaming?
 
I repeat: the topic is about bottlenecks, not stress tests. Whether the benchmark reaches 100% or 1% GPU utilization has nothing to do with bottlenecks in a stress test in the first place; CPU utilization should be monitored too. Still, all a stress test does is find out what a build with that GPU is capable of, GPU-wise. Reaching 100% is not a given here. Try a single-core Pentium 4 or even a Core 2 Duo with an RTX 2080 Ti and no stress test will ever get the GPU to ~99%. Everyone concerned with bottlenecks knows that. Look it up on YouTube; many testers did it with games at unlocked FPS and very high settings, which is a stress test too.

A stress test does not necessarily mean reaching 100% GPU. Just Google "Unigine Heaven bottleneck" and you will see; many results show held-back GPU usage. Heaven is GPU-intensive, all right, but no matter what, games still use the CPU and need some power from it.
I tried Google, didn't find much. If you have some data you want to share, I'll be happy to check it out.

In no. 2 it doesn't have to be the lowest possible usage; it just has to be low enough not to bottleneck the GPU. I mean that those concerned with bottlenecks prefer lower CPU usage, and the lower the better, because it means the CPU has more life left in it.
Ok, then I agree.

The benchmark is not what decides whether a build can reach 100% GPU usage; the specifications of the build are what decide that. Only if the build is capable of 100% GPU in a benchmark can that benchmark reach it. I referred to this above.
And I disagree. Stress tests put a very small load on the processor. That is why 7 cores were at 10% load. Why are you ignoring this?

This only means that in this specific setup, i.e. VRMark and its settings, a bottleneck is causing those two different results. Only percentage monitoring would confirm that, and there is none here. So this is not a good bottleneck test even though it does find a bottleneck; without records of CPU and GPU percentage usage, it could just as well be an optimization issue. This is a VR test, and it's known for being demanding.
I too think there is something bottlenecking the card in this benchmark.

Now, if Heaven is tested with the CPU that makes the GPU give better performance in the above linked test, the performance will be the same as what the OP got, since he already reached 99% (full performance actually, but GPUs don't like to go over 99%). No CPU can make a GPU do more than what it can at its fixed clock speeds.
Of course, stress tests are designed to load the card to 100% regardless of the other components.

My recommendation for the OP is not to bother with bottlenecks right off the bat, but to play normally and monitor the practical frames-per-second count (or performance in general) for any frame drops at the preferred game settings. Only if bothersome drops are noticed should one look for bottlenecks in order to clear that performance drop.
Yeah, if the bottleneck doesn't bother him, then he shouldn't buy anything new.
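
To put a number on those "bothersome drops", one rough approach is to compare the average FPS with the 1% lows from a frame-time log. Just a sketch; the frame times below are made up, and in practice you'd read them from whatever log your monitoring tool exports:

```python
# Sketch: average FPS vs "1% low" FPS from a list of per-frame times (ms).
# The sample frame times are invented; a real run would come from a
# monitoring tool's frame-time log.

frame_times_ms = [16.7] * 950 + [33.4] * 40 + [50.0] * 10  # mostly smooth, a few drops

worst_count = max(1, len(frame_times_ms) // 100)            # slowest 1% of frames
worst = sorted(frame_times_ms, reverse=True)[:worst_count]

avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
low_1pct_fps = 1000.0 / (sum(worst) / len(worst))

print(f"Average FPS: {avg_fps:.1f}")      # ~56.5
print(f"1% low FPS:  {low_1pct_fps:.1f}")  # ~20.0 -- a big gap means noticeable stutter
```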

To the OP: Can you try playing some games and monitor the usage?
 
And I disagree. Stress tests put a very small load on the processor. That is why 7 cores were at 10% load. Why are you ignoring this?


I'm not ignoring it. I talked about it in post no. 12 "here."


I tried again at a much lower resolution to put more stress on the CPU. This time core 1 reached 89% max, core 2 95%, core 3 66% and core 4 75%. My CPU is still powerful enough for Heaven, so it won't be a bottleneck in it.



Could my Google results differ from yours? I found this thread, where someone is not getting more than 80% in a benchmark stress test:
https://www.overclock.net/forum/10-amd-cpus/1461836-unigine-heaven-bottleneck.html
The bottleneck might not be the CPU, but there must be one somewhere, or else the GPU wouldn't be held back at 80%. Probably the RAM.



Also, "very small CPU load" is still a load (I think you first said benchmark stress tests trigger components individually?). Use lower resolution and that load increases cuz the CPU wants to show the so many FPS the GPU can process.


I also want to emphasize again that the purpose of GPU benchmark apps is not to run the GPU at full utilization. Their purpose is to give an indication of what a whole build can provide in gaming performance.


Can you download Unigine Heaven and MSI Afterburner and give it a try? Try first with the highest resolution you can use, then again with a lower one, both times with Afterburner open in the background and all CPU cores monitored. You will find that at high resolution CPU usage is minimal, while at lower resolution CPU usage is clearly higher. This means that, at some point on a weaker CPU than yours, CPU usage would go so high that something has to give, and that something is the GPU being bottlenecked. You will also find that this GPU benchmark does not stress the GPU in isolation; the CPU is used too. And if the CPU is used, at some point it must reach a limit, just like the GPU.
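
If you'd rather log the numbers than eyeball the Afterburner graphs, here's a minimal Python sketch that samples every core once a second while the benchmark runs. It assumes the third-party psutil package is installed, and the duration and interval are just example values:

```python
# Minimal per-core CPU usage logger to run alongside Heaven or a game.
# Requires the third-party "psutil" package (pip install psutil).
import psutil

DURATION_S = 60    # how long to sample (example value)
INTERVAL_S = 1.0   # seconds between samples (example value)

cores = psutil.cpu_count(logical=True)
print("time  " + "  ".join(f"core{i}" for i in range(cores)))

for t in range(int(DURATION_S / INTERVAL_S)):
    # percpu=True returns one busy-percentage per logical core over the interval
    usage = psutil.cpu_percent(interval=INTERVAL_S, percpu=True)
    print(f"{t * INTERVAL_S:4.0f}s " + "  ".join(f"{u:5.1f}" for u in usage))
```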



I'll also look into your view and see if I might be wrong in mine. I could be wrong, of course; I'm not perfect.



Cheers!
 
Cheers guys.

I'll get back to you as soon as I've put some games under test with monitoring, etc.

I would mention now, however, that I've snapped up an amazing deal on an MSI Z370-A Pro motherboard (only £66) :angel: ... and will therefore be looking for a CPU next. I'm keeping my eye on the i5 8600K but waiting for the price to drop a little.

I'll come back with results on the current rig, but it'll also be interesting to come back and see what changes after the upgrade too!
 
Good finds!


Are you considering overclocking, at least later in the life of your new CPU? If yes, the 8600K is a good pick. If not, look up comparisons of the 8600K vs the 8400. Both are 6-core, which is what has come to matter most these days; the 8400 has lower clock speeds, but it boosts really well. The price difference could be inviting too. Find comparisons that use the video card you have to get a better view. Who knows, maybe you'll find a sweet spot that saves you good cash for further upgrades, e.g. SSDs.
 
Indeed, the reason I got this motherboard is that it has extremely good reviews, especially for overclocking... and no matter what CPU I get, I'd hate not to be able to overclock it! So it's definitely something I'll be doing!

I just need to consider what DDR4 RAM to get... is there a minimum speed you'd recommend? I've never considered RAM speeds to be massively influential for gaming, though I could be wrong? I'm a little more out of touch with hardware than I once was!

Edit: Also, it might be worth mentioning I got a fantastic deal on the Vega 56... £300 :D
 
Yes, it is good to have the build overclock-ready. I personally had mine set up that way and only overclocked after more than four years of ownership, not right at the beginning. Overclocking is still risky and reduces CPU life, so I overclock only when the need for more CPU power arrives. But it did give the CPU more life, and I'm still using it now with no complaints.

I'm not certain about RAM speed. I'd get the highest speed that gives the best bang for the buck. So if going from 2133MHz to 2400MHz costs little extra, I'd go for it, but if going from 2400MHz to 2666MHz costs a lot, I'd skip it. The latest i5s support 2666MHz max. I don't have actual experience with DDR4, but all my previous searches showed no worthwhile performance difference with higher RAM speeds. I do, however, have experience with DDR3, where jumping from 1333MHz to 1600MHz made a difference to the Windows 7 experience, and making sure the RAM ran in dual channel made it feel like Windows shuts down faster. I think that's due to faster memory flushing before shutting down.

I'm still on a GTX 680 4GB, and about a year ago I found another one to SLI with it for an end cost of ~£83.50, which made my build ready for last-gen-console-like 4K at 30+ FPS. That's off topic, but my point is that it's good to have a future-proof build, such as one that is overclock-ready.
 
So I tested the in-game benchmark tool of For Honor, comparing against the results here:

http://www.computerforums.org/forums/hardware/radeon-rx-vega-56-a-230942/index3.html#post12195603

Presumably they'd be testing using the exact same in-game benchmark, otherwise nothing would be consistent?

Their system has an i7-7700K processor, with an SSD. My game is not installed on an SSD.

I used the exact same settings at 4K and 1080p and got basically the same average results, with mine actually being a few frames higher than theirs at 4K.

The CPU usage was around 20-30% at 4K, whereas at 1080p it was closer to around 50% or so.

I'm surprised at the results. This may be because I have a Gigabyte 'OC' version?

Also, I noticed at one point the coil whine was much stronger when running at 1080p than at 4K? Any particular reason?

Also, is coil whine a bad thing in any way, aside from a little noise?
 
The link takes me to this very page.


Well, the higher the resolution, the less the CPU is used and the more the load shifts to the GPU. The CPU becomes more irrelevant then, even though it is still used, so CPU usage going down there is a normal thing. That's why your build, on that older CPU, didn't struggle to keep up with theirs. Keep in mind that it doesn't mean you get better performance; it only reduces the likelihood of a CPU bottleneck (and this is only relevant when the load is graphics-related; if it's AI-related, as in some huge multiplayer games, the load won't move from the CPU to the GPU, but that's a different story). If your GPU variant has a higher factory clock, then it makes sense that yours does better. Remember also that a few frames per second more or less could be within the margin of error; if your results are close to theirs, just consider them equal.
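
A quick back-of-the-envelope version of that (purely invented numbers, not measurements): the CPU's work per frame is roughly the same at both resolutions, but at 4K the GPU delivers fewer frames per second, so the CPU sits idle more of each second:

```python
# Illustrative arithmetic only: same assumed CPU cost per frame at both
# resolutions, but fewer frames per second at 4K means lower CPU usage.
cpu_ms_per_frame = 5.0  # invented figure

for label, frames_per_second in [("1080p", 100), ("4K", 40)]:
    busy = cpu_ms_per_frame * frames_per_second / 1000.0
    print(f"{label:>5}: ~{frames_per_second} FPS -> CPU busy roughly {busy:.0%} of the time")
    # prints roughly 50% at 1080p and 20% at 4K with these made-up numbers
```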


I don't get the coil whine point. Is it a vibration sound that comes as the fans rev up? Can you tell which fans are related to it? Could it be some loose wires/plugs hitting internal parts of the case, maybe? If I were to make a guess, it's the CPU fan, since it happens at 1080p, which puts more load on the CPU than 4K does. More usage means more heat, which means the fan spins faster.
 
I only found out about it myself today. Coil whine is essentially the components vibrating like crazy when the GPU is drawing a lot of power. If you search YouTube for it you can find out more.

Interestingly, in Global Wattman, if I adjust the power limit to +50% I get an average frame rate of about 38, whereas as you can see the GTX 1080 is sitting at 41... I'd say that's pretty damn good to be honest!


Now I'm having second thoughts as to whether I've been a bit too impulsive buying that motherboard :confused:
 