My situation is as follows: I am trying to generate a waveform the hard way, by constructing the samples one by one and then saving the result to a .wav file using Python.
When the frequency is constant, everything is fine: I use $y(t) = \sin(2 \pi \cdot f \cdot t)$. However, if I make the frequency a function of time, things go wrong. If the function is linear, of the form $f(t) = a + bt$, it still works. But if I choose, for example, $f(t) = 40 + 10 \sin(t)$, the frequency keeps increasing over time, ending up well above the expected maximum of 50 Hz.
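For context, here is a stripped-down sketch of how I build and save the samples (the sample rate, duration, 16-bit scaling and the use of the standard `wave` module are just placeholder choices on my part):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # placeholder sample rate
DURATION = 5.0        # placeholder duration in seconds

def f(t):
    # time-varying frequency in Hz; should stay between 30 and 50 Hz
    return 40 + 10 * math.sin(t)

samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    # naive substitution: y(t) = sin(2*pi*f(t)*t)
    y = math.sin(2 * math.pi * f(t) * t)
    samples.append(int(y * 32767))  # scale to signed 16-bit

with wave.open("out.wav", "w") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

With a constant frequency this sounds as expected; with $f(t)$ as above, the pitch keeps climbing as described.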
I have read something about instantaneous frequency, namely this question: Why does a wave continuously decreasing in frequency start increasing its frequency past the half of its length?. But evaluating the integral makes the sound even weirder, and the method I currently have works for linear functions of time, so I figured there must be something else wrong.
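In case I am misreading the instantaneous-frequency idea: what I understand it to mean is that the phase should be $\phi(t) = 2\pi \int_0^t f(\tau)\,d\tau$ rather than $2\pi f(t) t$, and my integral attempt looks roughly like this (the running sum over samples is my numerical stand-in for the integral; the constants are placeholders again):

```python
import math

SAMPLE_RATE = 44100
DURATION = 5.0

def f(t):
    return 40 + 10 * math.sin(t)

samples = []
phase = 0.0
dt = 1.0 / SAMPLE_RATE
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n * dt
    # accumulate phase: phi(t) = 2*pi * integral of f from 0 to t
    phase += 2 * math.pi * f(t) * dt
    samples.append(math.sin(phase))
```

(For $f(t) = 40 + 10 \sin(t)$ the closed-form integral is $40t + 10(1 - \cos t)$, so $\sin\!\big(2\pi(40t + 10 - 10\cos t)\big)$ should be equivalent.)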
I also tried generating chunks of sound at the frequency I need at each moment and then gluing them together. I calculated the period of the oscillation so that each chunk contains exactly as many samples as one whole period at that frequency, so that there are no "jumps" between different frequencies. But this produces a crackling sound in the sample.
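That attempt looks roughly like this (placeholder constants again; each chunk is one complete period at the frequency sampled at the chunk's start time):

```python
import math

SAMPLE_RATE = 44100
DURATION = 5.0

def f(t):
    return 40 + 10 * math.sin(t)

samples = []
t = 0.0
while t < DURATION:
    freq = f(t)                                 # frequency for this chunk
    period = 1.0 / freq                         # duration of one whole cycle
    n_samples = int(round(SAMPLE_RATE * period))
    for n in range(n_samples):
        # one full period, so the chunk starts and ends near zero
        samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    t += period
```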
Here is a JavaScript implementation of the kind of sound I need: http://jsfiddle.net/m7US6/4/.
Any help would be appreciated. Thanks.
