Something else to consider: DAW audio is handled internally as 32-bit floating point (1 sign bit, 8 exponent bits, 23 fraction bits). Each order of magnitude (..., 10^-2, 10^-1, 10^0, 10^1, 10^2, ...) gets the same precision: 24 significant bits (the 23 stored fraction bits plus the implicit leading bit).
Examples:
- [0.1 to 1.0) has 24 bits of precision
- [0.01 to 0.1) has 24 bits of precision
- [-1.0 to -0.1) has 24 bits of precision
- Etc.
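A quick way to see the 1/8/23 split is to pull a value apart into its raw fields. This is just an illustrative sketch (the `float32_fields` helper is mine, not anything from a DAW): different magnitudes change only the exponent field, while the fraction field is always the same 23 bits.

```python
import struct

# Hypothetical helper: unpack a value into IEEE-754 single-precision
# fields (1 sign bit, 8 exponent bits, 23 fraction bits).
def float32_fields(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

# Moving between magnitudes only shifts the exponent; the fraction is
# always 23 bits wide (plus the implicit leading 1), so every range
# gets the same number of significant bits.
for x in [0.01, 0.1, 1.0, 10.0]:
    s, e, f = float32_fields(x)
    print(f"{x:>6}: sign={s} exponent={e:3d} fraction={f:023b}")
```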
So each smaller magnitude distributes the precision across a smaller range of values. In the reverse direction, each larger magnitude distributes the precision across a larger range of values.
Examples:
- [1.0 to 10.0) has 24 bits of precision
- [10.0 to 100.0) has 24 bits of precision
- [-10.0 to -1.0) has 24 bits of precision
- Etc.
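The effect in both directions shows up directly in the step size between adjacent representable values. A minimal check, assuming NumPy is available (`np.spacing` returns the gap to the next representable float): the 24 significant bits are fixed, so the absolute spacing scales with the magnitude.

```python
import numpy as np

# Gap between adjacent single-precision values at several magnitudes.
# Smaller magnitudes -> finer absolute steps; larger -> coarser.
for x in [0.01, 0.1, 1.0, 10.0, 100.0]:
    step = np.spacing(np.float32(x))
    print(f"near {x:>6}: step ~ {step:.3e}")
```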
Unfortunately, this means that when your audio exceeds 0 dB, each order of magnitude of volume increase spreads that fixed precision across a larger range of values.
Examples:
- [0 dB to 20 dB) has 24 bits of precision (amplitudes 1.0 to 10.0)
- [20 dB to 40 dB) has 24 bits of precision (amplitudes 10.0 to 100.0)
- [40 dB to 60 dB) has 24 bits of precision (amplitudes 100.0 to 1000.0)
- Etc.
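The 20 dB steps come from the amplitude dB formula, dB = 20 * log10(ratio), so one decade of amplitude (a factor of 10) is 20 dB. A tiny sketch (`db_to_ratio` is my own hypothetical helper):

```python
# For amplitude, dB = 20 * log10(ratio), so a factor of 10 in
# amplitude is 20 dB -- each 20 dB step lands in a new decade
# with 10x coarser absolute spacing.
def db_to_ratio(db):
    return 10.0 ** (db / 20.0)

for db in [0, 20, 40, 100]:
    print(f"{db:>3} dB -> amplitude x {db_to_ratio(db):g}")
```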
Push far enough above full scale (say +40 dB and beyond) and I'd expect the coarser absolute spacing to show up as an audibly degraded effective bit depth.
So essentially, when your volume exceeds 0 dB, any samples outside the -1.0 to 1.0 range have decreased absolute precision. At least, that's how I would expect it to work given my rudimentary understanding of IEEE single-precision floating point.
I am going to give it a test when I get home tonight:
Synth -> +100 dB Gain -> -100 dB Gain -> Master
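Out of curiosity, here is a rough simulation of that chain in NumPy (random samples standing in for the synth, everything kept in single precision, which is my assumption about how the DAW behaves):

```python
import numpy as np

# Simulated gain-stage round trip: boost by +100 dB, cut by -100 dB,
# all in 32-bit float, then measure the worst per-sample error.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 48000).astype(np.float32)  # 1 s of "audio"

gain = np.float32(10.0 ** (100 / 20))  # +100 dB as a linear factor (1e5)
y = x * gain / gain                     # stays float32 throughout

err = float(np.max(np.abs(x - y)))
print(f"max round-trip error: {err:.3e}")
```

If the DAW really does keep its entire signal path in 32-bit float, the error this prints should mirror what the Synth -> +100 dB -> -100 dB -> Master test measures.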