Do More Shots Make the Same Qubits Hotter?
Why 10,000 shots are not the same as running one physical circuit 10,000 times on a wire that keeps heating up.
The Question
A common first intuition is that a circuit with more shots must be harder on the hardware because the same qubits are being used again and again. If 1,000 shots already create some noise, then 10,000 shots should make the lines hotter, the qubits dirtier, and the result worse in a direct linear way.
This is a reasonable question because the interface looks repetitive: you submit one circuit and ask the machine to execute it many times. From the outside, it looks like a loop over the same physical object. That mental model is close enough to be useful, but wrong in the important places.
Why the Intuition Feels Right
Repeated shots do expose the experiment to more wall-clock time on the hardware, but not in the simple "the same qubit gets hotter with every shot" sense.
Where the confusion comes from
In classical benchmarking, repeating a workload can heat a CPU, saturate memory, or stress I/O. It is natural to transfer that picture to quantum hardware and imagine a similar build-up effect. There is also real time passing during a long job, and quantum hardware is sensitive to drift, calibration changes, and queue conditions.
So the underlying concern is legitimate: longer jobs can be more exposed to instability. The mistake is imagining that one shot leaves the qubit in a damaged state for the next shot in the same way a hot transistor stays hot after a compute-heavy batch.
What Actually Happens
A shot is a fresh preparation, evolution, and measurement cycle, not a cumulative walk through the same unrecovered quantum state.
Fresh runs, not one long continuous state
Each shot is meant to start from a prepared initial state, execute the circuit, and measure the result. After measurement, the system is reset so the next shot begins again from the prescribed starting point. In other words, shots are repeated experiments, not one evolving quantum state stretched over thousands of observations.
That is why more shots usually improve the statistical estimate of the output distribution. If the hardware were simply accumulating unrecoverable heat from shot to shot, the whole notion of repeated sampling would break down immediately.
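This can be made concrete with a toy simulation. Below is a minimal sketch (not real hardware code; `run_shots`, `p_one`, and the circuit it stands in for are all hypothetical) that models each shot as an independent prepare-and-measure sample. The spread of the estimate shrinks roughly like 1/sqrt(shots), which is exactly the behavior that would be impossible if shots were damaging each other cumulatively.

```python
import random
import statistics

def run_shots(n_shots, p_one=0.3, seed=0):
    """Simulate n_shots of a circuit whose ideal probability of
    measuring |1> is p_one. Each shot is a fresh, independent
    sample -- nothing carries over from one shot to the next."""
    rng = random.Random(seed)
    ones = sum(1 for _ in range(n_shots) if rng.random() < p_one)
    return ones / n_shots  # estimated P(|1>)

# Repeat the whole "job" 50 times at each shot count and look at
# the spread of the estimates: it shrinks roughly like 1/sqrt(N).
for n in (100, 1_000, 10_000):
    estimates = [run_shots(n, seed=s) for s in range(50)]
    print(n, round(statistics.stdev(estimates), 4))
```

The exact numbers depend on the seed, but the trend is the point: larger shot counts give tighter estimates of the same underlying distribution.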
The real risks are different: long jobs can see slow drift, imperfect resets, queue-dependent conditions, and calibration mismatch between when you compiled the circuit and when it actually ran. Those effects are about time, control quality, and hardware stability, not about a qubit's control line literally getting hotter with each shot in any way the user-facing shot model exposes.
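The difference between sampling noise and drift is worth seeing directly. The sketch below (a hypothetical linear-drift model, not a claim about any real backend) lets the effective measurement probability creep during a long job. More shots average away the sampling noise, but they cannot remove the bias that drift introduces: that is why the two failure modes need different fixes.

```python
import random

def run_job(n_shots, p_one=0.3, drift_per_shot=0.0, seed=0):
    """Simulate one job where the effective P(|1>) drifts slowly
    over the course of the job (toy linear drift model).
    Sampling noise shrinks with n_shots; drift bias does not."""
    rng = random.Random(seed)
    ones = 0
    for k in range(n_shots):
        p = p_one + drift_per_shot * k  # slow calibration drift
        ones += rng.random() < p
    return ones / n_shots

# Without drift, a big job converges on the true p = 0.3.
print(run_job(100_000))
# With a tiny per-shot drift, the estimate is biased no matter
# how many shots you take -- more shots just measure the drift.
print(run_job(100_000, drift_per_shot=1e-6))
```

With these numbers the drifting job lands visibly above 0.3, even though each individual shot is still a perfectly fresh sample. That is the "exposure to time" risk, distinct from statistical noise.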
What You Should Do in Practice
Treat shots as a trade-off between statistical precision and exposure to real-world hardware variability.
Practical guidance
- Use enough shots to estimate the quantity you care about, but do not assume that more is automatically better.
- If a result changes with shot count, separate statistical variance from drift by repeating the full job at different times.
- Watch reset quality, readout calibration, and backend calibration timestamps when runs look inconsistent.
- For debugging, compare hardware results with a simulator at the same circuit depth before blaming shots alone.
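The second bullet, separating statistical variance from drift, can be reduced to a small consistency check. The sketch below is one way to do it under a simple binomial-noise assumption (the function names and the three-sigma threshold are illustrative choices, not a standard API): if two runs of the same job disagree by much more than their combined sampling error bars, the difference is about the hardware or the schedule, not about the shot count.

```python
import math

def shot_error(p_hat, n_shots):
    """Binomial standard error on an estimated probability."""
    return math.sqrt(p_hat * (1 - p_hat) / n_shots)

def consistent(p1, p2, n_shots, n_sigma=3.0):
    """Are two runs of the same n_shots job consistent with
    sampling noise alone? If not, suspect drift or a calibration
    change rather than the shot count itself."""
    combined = math.sqrt(shot_error(p1, n_shots) ** 2 +
                         shot_error(p2, n_shots) ** 2)
    return abs(p1 - p2) <= n_sigma * combined

# Hypothetical results from the same 10,000-shot job run hours apart:
print(consistent(0.301, 0.304, 10_000))  # True: within sampling noise
print(consistent(0.301, 0.340, 10_000))  # False: look at the hardware
```

A check like this turns "the numbers moved" into a decision: re-run and average if the gap fits inside the error bars, investigate calibration and drift if it does not.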
The right model is: more shots reduce sampling noise, but longer or repeated jobs increase exposure to a non-ideal machine. That is a subtle systems problem, not a simple heat accumulation story.
Summary
More shots do not simply make the same qubits “hotter,” but they do increase your dependence on stable hardware over time.
A shot is a repeated preparation-and-measurement cycle. The machine is not supposed to carry forward one damaged quantum state from shot 1 to shot 10,000. What grows with larger jobs is your exposure to the hardware as a physical system with drift, resets, control error, and scheduling realities.
Continue the Quantum FAQ
The next story builds the error model first: why quantum machines make errors, in plain language and without assuming a hardware background.