play generated voice stream in a call

Yes, I have this problem too: when I stream a large amount of data through the serial cable and play the stream at the same time, the problem occurs continuously.

I suspect that it is not possible… :frowning:

Normally hardware IRQs cannot be ignored; maybe they are multiplexed, or they have different priorities? No idea.
The problem occurs when there is too much I/O load. I tried the CPU boost switch, but nothing changed.

Anyway thanks for the question, I am interested too :wink:

Do you remember how often it occurs?

In my application it usually occurs at roughly 5-second intervals, although not exactly 5 s! I don’t know whether there is a specific cause. I verified it by toggling a pin in the IRQ handler; before that, I had noticed the issue by monitoring the received data on another GSM module.

As far as I checked, the low-level IRQ has the highest priority, even higher than the Wavecom firmware. I have tried different ways to get rid of those lost frames, but nothing has worked yet :frowning:

You know, I deactivated all the unnecessary sections of my code so that only call_subscribe() and the audio handler for playing the audio stream remained, but the problem still persists!

I cannot give a period yet (I never checked); I can only say it happens more often than every 5 s.

Mmmhh… it seems we have a different problem… If I preload the audio stream into Wavecom memory, the problem does not occur.

Unfortunately, I found that in the sample code (Pcm_Speak_and_Play), if I change the buffer size to hold 15 seconds of audio and toggle a pin in the low-level IRQ handler (audio play stream) as shown below, the same missing-frame problem appears (in fact, missing toggle edges in the interrupt routine) :frowning:

// Recorded sound duration in seconds
#define DURATION    15
// 16-bit mono samples at 8 kHz: 2 bytes per sample
#define BUFFER_SIZE ( DURATION * 2 * 8000 )


s32          OFGpioOut_Handle;
adl_ioDefs_t Gpio_OF_Config = ADL_IO_GPIO | 23 | ADL_IO_DIR_OUT | ADL_IO_LEV_LOW;


bool appPlayLowIrqHandler( adl_irqID_e Source, adl_irqNotificationLevel_e NotificationLevel, adl_irqEventData_t *Data )
{
	// Toggle GPIO 23 on every interrupt so a scope can reveal missed frames
	if ( adl_ioReadSingle( OFGpioOut_Handle, &Gpio_OF_Config ) ) {
		adl_ioWriteSingle ( OFGpioOut_Handle, &Gpio_OF_Config, FALSE );
	} else {
		adl_ioWriteSingle ( OFGpioOut_Handle, &Gpio_OF_Config, TRUE );
	}


    // Copy samples from the common buffer
    if (!pcm_buf_read(&listen_play_buffer, BufferSizePlay, StreamBufferPlay))
    {
        TRACE (( 1, "appPlayLowIrqHandler: buffer underrun"));
        // Call the high-level handler
        return TRUE;
    }
    else
    {
        // Set the BufferReady flag to TRUE
        *( ( adl_audioStream_t * )Data->SourceData )->BufferReady = TRUE;

        // Do NOT call the high-level handler
        return FALSE;
    }
}

Any shared experience, any ideas?

The ‘Files’ area for that project appears to be empty:


Hi,
yes, the file area is empty because I haven’t uploaded any binary package.
You can still get or browse the sources by following the guide at sourceforge.net/projects/fastrack-voice/develop

svn co https://fastrack-voice.svn.sourceforge.net/svnroot/fastrack-voice fastrack-voice

(my emphasis)

But the example in the ADL User guide is un-signed

Hiya,

My Bad.

I was referring to the method of recording the audio - i.e. in Audacity I selected 16-bit, signed, mono PCM.

Yes, and when I convert the recorded data to the fixed arrays that I feed to the Audio Stream Play function, I store them as u8 (i.e. unsigned bytes) as it appears that the audio functions reassemble the PCM output correctly and output it to the voice call or speaker.

Sorry about the confusion.

ciao, Dave

I use CoolEdit-200 (now Adobe Audition, I believe).
It doesn’t have a signed/unsigned option - so I just get whatever it gives me.
It has the option to save the PCM data to a text file, looking like this:

So its 16-bit data is signed.

I’ve made the above format into s16 arrays, and the PCM output seems to reproduce that faithfully in a voice call!

The sample code seems to be just using u8* in the old idiom (i.e. from before void* existed) to indicate a “generic” pointer?

My understanding is that 16-bit WAV files do indeed use signed data; eg,

see also:
wotsit.org/list.asp?al=W
support.microsoft.com/kb/89879

None of this messing about would be necessary if Sierra Wireless would just document stuff properly in the first place! :angry: :unamused:

Hiya,

I’ve had another look at my code (it was written a while ago!), and I am also feeding *s16 data to the audioStream() functions. I’m using compression on my raw input PCM, which is why I’m storing the compressed audio data as u8 arrays in program space. I decompress the appropriate amount of raw data in the audio clip event handler. Works fine for me.

Absolutely.

ciao, Dave