No, it doesn't. It just means the benchmark can use 100% of the GPU. And this is a GPU stress test. Of course it's reaching 100%, what did you expect? All of the tasks are designed to use a high percentage of the card and not the other stuff, because the other stuff doesn't matter. That's why the processor load was so low.
I repeat: the topic is about bottlenecks, not stress tests. Whether the benchmark reaches 100% or 1% GPU utilization has nothing to do with bottlenecks in a stress test in the first place. CPU utilization should be monitored too. As it stands, all it does is find what a build with that GPU is capable of, GPU-wise. Reaching 100% is not a given here. Try a single-core Pentium 4, or even a Core 2 Duo, with an RTX 2080 Ti, and no stress test will ever get the GPU to ~99%. Everyone concerned with bottlenecks knows that. Look it up on YouTube. Many testers did it with games at unlocked FPS and very high settings, which is a stress test too.
A stress test does not necessarily mean reaching 100% GPU. Just Google "Unigine Heaven bottleneck" and you will see: many results show held-back GPU usage. Heaven is GPU-intensive, alright, but no matter what, games still use the CPU and need some power from it.
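To make the "monitor both percentages" point concrete, here is a minimal, hypothetical Python sketch that classifies a logged run as CPU-bottlenecked when the GPU sits well below full usage while the CPU is pegged. The thresholds, function name, and sample data are all illustrative assumptions, not measured values or any tool's real output:

```python
# Hypothetical sketch: classify a bottleneck from sampled utilization logs.
# Thresholds (90% / 95%) and the sample data are illustrative assumptions.

def classify_bottleneck(samples, gpu_low=90.0, cpu_high=95.0):
    """samples: list of (cpu_pct, gpu_pct) pairs taken while the test runs."""
    avg_cpu = sum(c for c, _ in samples) / len(samples)
    avg_gpu = sum(g for _, g in samples) / len(samples)
    if avg_gpu >= gpu_low:
        return "GPU-bound (no CPU bottleneck)"  # GPU near 100%: working as intended
    if avg_cpu >= cpu_high:
        return "CPU bottleneck"  # GPU held back while the CPU is maxed out
    return "inconclusive (check settings/optimization)"

# Simulated run: GPU stuck around 60% while the CPU is maxed out.
log = [(97.0, 58.0), (99.0, 62.0), (98.0, 61.0)]
print(classify_bottleneck(log))  # → CPU bottleneck
```

In practice the samples would come from a monitoring tool (GPU-Z, Task Manager, etc.) rather than a hard-coded list; the point is only that you need both numbers to say anything about a bottleneck.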
I agree with number 1, in gaming.
I don't understand what you meant in number 2. Why does it have to be the lowest possible percentage?
To be clear, I meant "the 'No-bottleneck' indications and targets in gaming are."
The thread is about the RX Vega 56, a GPU, so I believe the whole thing is about gaming. But let's see what the OP says; I'm not sure whether he had a different initial thought in mind when this thread was started.
In no. 2 it doesn't have to be the lowest possible usage; it just has to be low enough not to bottleneck the GPU. I mean that those concerned with bottlenecks prefer lower CPU usage, and the lower the better, because this means the CPU will have more life left in it.
Exactly, the card is supposed to reach 100%. This only proves the benchmark is doing its job.
The benchmark is not what decides whether a build can reach 100% GPU usage; the specifications of the build are what do. A benchmark can only reach 100% if the build is capable of it in the first place. I referred to this above.
Yes, as the OP said in his post, there are other benchmarks of systems using the same card and a different processor, and the results are higher.
This only means that in this specific setup, i.e., VRMark at those settings, a bottleneck is causing those two different results. Only percentage monitoring would confirm that, and there is none. So this is not a good bottleneck test even though it does find a bottleneck. It could be an optimization issue if no records of CPU and GPU percentage usage are there. This is a VR test; it's known for being demanding.
Now, if Heaven is tested with the CPU that makes the GPU give better performance in the above-linked test, the performance will be the same as what the OP got, since he already reached 99% (full performance, actually, but GPUs don't like to go over 99%). No CPU can make a GPU go beyond what it can do at its fixed clock speeds.
My recommendation for the OP is not to bother with bottlenecks right off the bat, but to play normally and monitor the practical frames-per-second count (or performance in general) for any frame drops at the preferred game settings. Only if bothersome drops are noticed should one look for bottlenecks, if they want to clear up that performance drop.
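The "play normally and watch for drops" approach can be sketched as a tiny check over per-second FPS samples. This is a hypothetical illustration; the 20% drop threshold relative to the session average is an arbitrary assumption, not a standard:

```python
# Hypothetical sketch: flag noticeable frame drops in a per-second FPS log.
# The drop_ratio=0.8 threshold (20% below average) is an assumption.

def find_frame_drops(fps_samples, drop_ratio=0.8):
    """Return indices of samples falling below drop_ratio * average FPS."""
    avg = sum(fps_samples) / len(fps_samples)
    return [i for i, f in enumerate(fps_samples) if f < drop_ratio * avg]

# Simulated session: mostly ~60 FPS with one stutter at sample 3.
session = [60, 59, 61, 35, 60, 58]
print(find_frame_drops(session))  # → [3]
```

If a check like this keeps flagging drops at your preferred settings, that is the moment to start logging CPU and GPU percentages and hunt for the actual bottleneck.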