Cyrano de Maniac wrote:
> I take it this spectrogram was run against the entire song?

No. As pointed out following your post, Audacity will only profile about 20-ish seconds of a song. In the case of that graph, I selected about 16 beats of bass-heavy material from a modern pop song (the graph is labeled as such in my original post). The graph was posted as an example of what RTA output looks like.
This is NOT a graph of a 1930s song.
Cyrano de Maniac wrote:
> Unfortunately that won't likely show you what you're really interested in, as it's sort of the average across every moment of the song. The 48Hz peak might be from a kick drum or bass or just a recording technology artifact.

Actually, the 48 Hz peak is a bass guitar.
Cyrano de Maniac wrote:
> Even then, you run into the problem that loudness as perceived by human ears is not constant across the audio spectrum.

True. See the Equal Loudness Contour for more on this.
Cyrano de Maniac wrote:
> The higher frequencies require a lot less energy to be perceived as loud.

Semi-true. Human hearing is much more sensitive in the spoken-voice frequency range than at either low or high frequencies. And, in general, the power density of music falls off at roughly -3 dB/octave as frequency increases. So human hearing is not as sensitive to high frequencies, but it also takes a lot less energy to produce sounds in that range. This means that high frequencies need to be louder than mid frequencies to be perceived as "the same" volume level.
As an example, at 40 phon, a 10 kHz tone needs to be about 15 dB louder than a 1 kHz tone for both to be perceived as "equally" loud. Yet each tone could be produced with "the same" amount of energy.
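If you want a rough numerical feel for this frequency-dependent sensitivity, here's a small sketch (my own illustration, not from the thread) of the standard A-weighting curve from IEC 61672, which was loosely derived from the 40-phon equal loudness contour:

```python
import math

def a_weighting_db(f):
    """Approximate A-weighting in dB at frequency f (Hz), per IEC 61672.

    A-weighting is a simplification loosely based on the inverted
    40-phon equal loudness contour; it understates the sensitivity
    differences at high frequencies compared with the full ISO 226
    contours.
    """
    f2 = f * f
    ra = (12194.0**2 * f2 * f2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    # The +2.0 dB offset normalizes the curve to 0 dB at 1 kHz.
    return 20.0 * math.log10(ra) + 2.0

for f in (48, 1000, 10000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+.1f} dB")
```

This gives roughly -31 dB at 48 Hz, 0 dB at 1 kHz, and -2.5 dB at 10 kHz. Note the 10 kHz figure is much smaller than the ~15 dB mentioned above; that larger number comes from the full 40-phon ISO 226 contour, which A-weighting only approximates.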
Psychoacoustics can be fun.