'Old up!
No-one asked what sampling frequency and bit-depth (resolution) you're using to record your wav!
If you've got a nice pro sound card (or even an Audigy) you can record at 24bit, 96kHz. (Or even 192kHz)
CDs are 44.1kHz, 16bit.
To capture the highest frequency a human can hear, around 20kHz, you have to sample at more than double that rate - which is why CD uses 44.1kHz, a little over 2 x 20kHz. This is the Nyquist theorem. Anything above half the sampling rate doesn't just disappear: it "folds back" down into the audible range as aliasing.
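If you want to see the folding for yourself, here's a quick Python sketch (the frequencies are just for illustration): a 30kHz tone sampled at 44.1kHz produces exactly the same samples as an inverted 14.1kHz tone, so once it's sampled you can't tell them apart.

```python
import math

fs = 44100             # CD sample rate (Hz)
f_high = 30000         # a tone above the 22.05kHz Nyquist limit
f_alias = fs - f_high  # 14100 Hz: where the tone folds down to

# Sample both tones at fs. The 30kHz tone yields exactly the same
# sample values as an inverted 14.1kHz tone - it has aliased.
n_samples = 64
high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(n_samples)]
alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(n_samples)]

max_diff = max(abs(a - b) for a, b in zip(high, alias))
print(f"max sample difference: {max_diff:.2e}")  # effectively zero
```

This is why converters put an anti-aliasing filter in front of the sampler: once the fold-down has happened, no amount of processing can undo it.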
16bit refers to the number of 'steps' available to represent the curve that makes your sound's waveform - 2^16 = 65,536 levels. Fewer steps mean bigger quantization errors, which manifest as a gritty noise floor, most obvious on quiet material. Use a bit-crusher plug-in to get an idea of this noise - or bounce down a wav at very low volume and then normalize it.
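You can put a number on that noise. The sketch below (plain Python, a full-scale test sine as an assumption) quantizes a sine to a given bit depth and measures the signal-to-noise ratio; theory predicts roughly 6.02 x bits + 1.76 dB, so 16bit sits near 98dB while a 4bit "crushed" version is down around 26dB.

```python
import math

def quantize(x, bits):
    """Round a sample in [-1, 1] to the nearest of 2**bits levels."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

def snr_db(bits, n=10000):
    """SNR of a quantized full-scale sine, in dB."""
    sig = [math.sin(2 * math.pi * 997 * i / 44100) for i in range(n)]
    err = [quantize(s, bits) - s for s in sig]
    p_sig = sum(s * s for s in sig) / n
    p_err = sum(e * e for e in err) / n
    return 10 * math.log10(p_sig / p_err)

print(f"16-bit SNR: {snr_db(16):.1f} dB")  # ~98 dB, below audibility
print(f" 4-bit SNR: {snr_db(4):.1f} dB")   # ~26 dB, clearly audible grit
```

That's the same effect a bit-crusher gives you, just measured instead of heard.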
Increasing either property will drastically increase file size, for not much gain. If you can't hear better than CD quality then there's not much point!
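Quick back-of-envelope on the file sizes, if you're curious (uncompressed stereo PCM, MB taken as 10^6 bytes):

```python
# Rough uncompressed file sizes for one minute of stereo audio.
def mb_per_minute(sample_rate, bit_depth, channels=2):
    bytes_per_sec = sample_rate * (bit_depth // 8) * channels
    return bytes_per_sec * 60 / 1e6

cd = mb_per_minute(44100, 16)  # CD quality
hi = mb_per_minute(96000, 24)  # 'hi-res'
print(f"CD (44.1kHz/16bit):   {cd:.1f} MB/min")
print(f"Hi-res (96kHz/24bit): {hi:.1f} MB/min ({hi / cd:.1f}x bigger)")
```

Roughly 10.6 MB/min for CD quality against 34.6 MB/min for 96kHz/24bit - over three times the disk space for the same minute of audio.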
High bit depths and sample rates are useful for recording anything that will later be processed - samples, tracks bounced down during production, vocals, instrument recordings, etc. In that case, process everything at the high bit depth/sample rate and dither down to 16bit/44.1kHz at the very end of the process, so you lose as little quality as possible.
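Here's a minimal sketch of why you dither rather than just truncate, using TPDF dither (a standard choice - the post doesn't name one, so that's my assumption). A steady signal quieter than 1 LSB vanishes completely under plain rounding, but with dither it survives as the average of the output, traded for a benign noise floor:

```python
import random
random.seed(1)

step = 1.0           # one quantization step (1 LSB)
signal = 0.4 * step  # a constant level quieter than 1 LSB
n = 100000

# Plain rounding: every sample rounds to zero, the signal is gone.
plain = [round(signal / step) * step for _ in range(n)]

# TPDF dither: add triangular noise spanning +/-1 LSB before rounding.
# The signal survives in the average, at the cost of a noise floor.
def tpdf():
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

dithered = [round((signal + tpdf() * step) / step) * step for _ in range(n)]

print(f"plain mean:    {sum(plain) / n:.3f}")     # 0.000 - signal lost
print(f"dithered mean: {sum(dithered) / n:.3f}")  # ~0.400 - preserved
```

Same idea applies to reverb tails and fades: dithering keeps the low-level detail that straight truncation would chop off.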
(Most processing stages will reduce the quality of your wavs by a tiny amount - usually this shows up in the top end, or as quantization errors. This is especially true if you combine digital and analogue equipment.)
So basically, make sure you record at 44.1kHz, 16bit, unless you want to compress, limit or otherwise process your mix. Which IS a good idea - if you know what you're doing ;)
MP3 encoding IS a lossy process, especially if you don't use a genuine Fraunhofer codec. If you want to hear how your sound is being affected, take a shortish clip and repeatedly encode and decode it. After only a few generations you should hear the artifacts quite clearly.
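You can simulate that generation loss without an MP3 encoder at all. The sketch below uses crude dithered 8-bit requantization as a stand-in for a lossy codec (my assumption for illustration - MP3 is far more sophisticated, but the principle is the same: every encode pass adds error you can never remove):

```python
import math
import random
random.seed(42)

def lossy_pass(samples, bits=8):
    """Crude stand-in for one encode/decode cycle: dithered requantize."""
    levels = 2 ** (bits - 1)
    def tpdf():  # triangular dither spanning +/-1 LSB
        return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return [round(s * levels + tpdf()) / levels for s in samples]

# 0.1s of a 440Hz sine as the 'original' clip.
original = [math.sin(2 * math.pi * 440 * i / 44100) for i in range(4410)]

audio, errors = original, {}
for gen in range(1, 21):
    audio = lossy_pass(audio)
    rms = math.sqrt(sum((a - o) ** 2 for a, o in zip(audio, original))
                    / len(audio))
    errors[gen] = rms

print(f"RMS error after  1 pass:   {errors[1]:.4f}")
print(f"RMS error after 20 passes: {errors[20]:.4f}")  # keeps growing
```

The error keeps climbing with every generation, which is exactly what you hear when you round-trip a clip through an encoder a few times.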
Tequila