Yes, I have this problem too: when I stream a large amount of data through the serial cable and play the stream at the same time, the problem occurs continuously.
I suspect that is not possible…
Normally hardware IRQs cannot be ignored; maybe they are multiplexed, or they have different priorities? No idea.
If there is too much I/O load the problem occurs. I tried the CPU boost switch, but nothing changed.
Anyway, thanks for the question; I am interested too.
In my application it usually occurs at roughly 5-second intervals, although not exactly 5 s. I don't know whether there is a specific cause. I confirmed the issue by toggling a pin in the IRQ handler; before that, I had noticed it by monitoring the received data on another GSM module.
As far as I checked, the low-level IRQ has the highest priority, even higher than the Wavecom firmware. I have tried different ways to get rid of those lost frames, but nothing has worked yet.
I deactivated all the unnecessary sections in my code, leaving only call_subscribe() and the audio handler for playing the audio stream, but the problem still persists!
Unfortunately, I found that in the sample code (Pcm_Speak_and_Play), if I change the buffer size to hold 15 seconds and toggle a pin as below in the LowIRQHandler (audio play stream), the same frame-loss problem appears (in fact, missing toggle edges in the interrupt routine):
// Recorded sound duration in seconds
#define DURATION    15
#define BUFFER_SIZE (DURATION * 2 * 8000)   /* 16-bit mono at 8 kHz */

s32 OFGpioOut_Handle;
adl_ioDefs_t Gpio_OF_Config = ADL_IO_GPIO | 23 | ADL_IO_DIR_OUT | ADL_IO_LEV_LOW;

bool appPlayLowIrqHandler( adl_irqID_e Source, adl_irqNotificationLevel_e NotificationLevel, adl_irqEventData_t *Data )
{
    /* Toggle the test pin on every handler entry */
    if ( adl_ioReadSingle( OFGpioOut_Handle, &Gpio_OF_Config ) ) {
        adl_ioWriteSingle( OFGpioOut_Handle, &Gpio_OF_Config, FALSE );
    } else {
        adl_ioWriteSingle( OFGpioOut_Handle, &Gpio_OF_Config, TRUE );
    }

    // Copy samples from common buffer
    if ( !pcm_buf_read( &listen_play_buffer, BufferSizePlay, StreamBufferPlay ) )
    {
        TRACE (( 1, "appPlayLowIrqHandler: buffer underrun" ));
        // Call the high level handler
        return 1;
    }
    else
    {
        // Set BufferReady flag to TRUE
        *( ( adl_audioStream_t * )Data->SourceData )->BufferReady = TRUE;
        // Do NOT call the high level handler
        return 0;
    }
}
Hi,
yes, the file area is empty because I haven't uploaded any binary package.
You can still get or browse the sources by following the guide at sourceforge.net/projects/fastrack-voice/develop
svn co https://fastrack-voice.svn.sourceforge.net/svnroot/fastrack-voice fastrack-voice
I was referring to the method of recording the audio - i.e. in Audacity I selected 16-bit, signed, Mono PCM.
Yes, and when I convert the recorded data to the fixed arrays that I feed to the Audio Stream Play function, I store them as u8 (i.e. unsigned bytes); the audio functions appear to reassemble the PCM output correctly and send it to the voice call or speaker.
I use CoolEdit-200 (now Adobe Audition, I believe).
It doesn’t have a signed/unsigned option - so I just get whatever it gives me.
It has the option to save the PCM data to a text file, looking like this:
So its 16-bit data is signed.
I’ve made the above format into s16 arrays, and the PCM output seems to reproduce that faithfully in a voice call!
The sample code seems to be just using u8* in the old idiom (i.e., before void* was invented) to indicate a "generic" pointer?
My understanding is that 16-bit WAV files do indeed use signed data; eg,
I’ve had another look at my code (it was written a while ago!), and I am also feeding s16 data to the audioStream() functions. I’m using compression on my raw input PCM, which is why I store the compressed audio data as u8 arrays in program space. I decompress the appropriate amount of raw data in the audio clip event handler. Works fine for me.