Why is that?
Honestly, latency/performance stuff. As in: how do VST synths make sure they synthesize in time to keep up with the audio buffer on whatever hardware the user has? I'm asking because I've seen (well, heard) countless VST synths fail at this and turn into a clicky mess, and I feel like if I understood how it's handled in code it would make more sense to me.
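If it helps, here's the general shape of it (a minimal sketch, not any real VST3/JUCE API; `SineVoice`, `render`, and the numbers are all made up for illustration). The host calls the plugin on a dedicated audio thread with a small buffer, and the plugin has roughly bufferSize/sampleRate seconds to fill it, e.g. 512 samples at 48 kHz is about 10.7 ms. Nothing "ensures" it finishes in time; if the DSP is too heavy for the hardware, the deadline is simply missed, the driver plays a stale or empty buffer, and you hear the click. That's why synths cap polyphony, steal voices, and keep the audio thread free of locks and allocations.

```cpp
// Hypothetical render callback for one synth voice -- a sketch of the
// pattern, not a real plugin API. The host hands the plugin a buffer of
// numSamples and expects it filled before the next hardware deadline.
#include <cstddef>
#include <cmath>

constexpr double kTwoPi = 6.283185307179586;

struct SineVoice {
    double phase = 0.0;
    double freqHz = 440.0;
    double sampleRate = 48000.0;

    // Must stay allocation-free and lock-free: any blocking on the audio
    // thread (malloc, mutex, file I/O) risks overrunning the deadline.
    void render(float* out, std::size_t numSamples) {
        const double inc = kTwoPi * freqHz / sampleRate;
        for (std::size_t i = 0; i < numSamples; ++i) {
            out[i] += static_cast<float>(0.2 * std::sin(phase)); // mix into buffer
            phase += inc;
            if (phase >= kTwoPi) phase -= kTwoPi; // keep phase bounded
        }
    }
};
```

The per-voice cost scales linearly, so a synth that's fine with 8 voices can blow its time budget at 32, which is exactly when you tend to hear the clicking start.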
Spectrogram*
That’s actually an accurate description of what is happening: an audio file turned into a 2D image, with the x axis being time, the y axis frequency, and color amplitude.
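For the curious, here's roughly how that image gets computed (a minimal sketch; the function name and parameters are made up, and I'm using a naive DFT for clarity where real code would use an FFT): slice the audio into overlapping windowed frames, transform each frame into frequency bins, and keep the log-magnitude of each bin.

```cpp
// Sketch of a spectrogram: each output column is one time frame (x axis),
// each row a frequency bin (y axis), each value a log-magnitude (color).
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

std::vector<std::vector<float>>
spectrogram(const std::vector<float>& signal,
            std::size_t frameSize = 1024, std::size_t hop = 256) {
    constexpr double kTwoPi = 6.283185307179586;
    std::vector<std::vector<float>> frames;
    for (std::size_t start = 0; start + frameSize <= signal.size(); start += hop) {
        std::vector<float> bins(frameSize / 2);
        for (std::size_t k = 0; k < bins.size(); ++k) {   // frequency bin
            std::complex<double> acc = 0.0;
            for (std::size_t n = 0; n < frameSize; ++n) { // time sample
                // Hann window tapers the frame edges to reduce leakage.
                double hann = 0.5 - 0.5 * std::cos(kTwoPi * n / (frameSize - 1));
                acc += hann * signal[start + n]
                       * std::polar(1.0, -kTwoPi * k * n / frameSize);
            }
            bins[k] = static_cast<float>(std::log10(std::abs(acc) + 1e-9));
        }
        frames.push_back(std::move(bins));
    }
    return frames; // frames[t][f]: index by time, then frequency
}
```

To draw the familiar picture you just transpose so frequency runs up the y axis and map each value to a color.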
I get where you’re coming from, but I also think it’s fair to say archaeologists have at least some insight into what happens to glass over long periods of time. Hopefully Microsoft has consulted with them.
Do you think that hamburgers come from Hamburg?
…perhaps?
You clearly made it a gender issue with your initial comment. I’m not the one bitching; I think the trailer looks amazing.
They are, but this is also clearly a budget issue, not a sexist conspiracy.
Would you volunteer to do all the extra work that would require, for free?
Good luck lmao