Error Management: Backtrace


#1

Hello Everyone,

I don’t know if anyone here can answer these questions, but I have been waiting three days for a reply from my Wavecom support contact and thought I would poll the Wavecom Forum to see if I can get some answers. So here are my questions.

1.) Based upon my observations of the Bug sample and my own application, each backtrace seems to take approximately 588 bytes. Correct me if I am wrong, but this seems like a lot of data for a basic error description and the memory address of the error. Is this correct, or am I including extra information?

2.) Is there any way to reduce the size of the backtrace? I was considering compressing the result or omitting sections, since a lot of the data appears to be repeated.
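To gauge whether compression would pay off, here is a minimal sketch using Python’s `zlib` on a made-up 588-byte payload. The header bytes and the repeated 4-byte entries are invented purely to mimic the kind of repetition described above; the real backtrace layout may differ.

```python
import zlib

# Hypothetical 588-byte backtrace payload: an invented 8-byte header
# followed by repeated 4-byte entries, mimicking the repetition
# observed in the real backtrace (contents are illustrative only).
header = b"RTK_EXC\x00"
frames = b"\x00\x10\x02\x00" * 145  # 145 repeated 4-byte entries = 580 bytes
backtrace = header + frames
assert len(backtrace) == 588

# DEFLATE collapses the repetition dramatically.
compressed = zlib.compress(backtrace, 9)
print(len(backtrace), "->", len(compressed), "bytes")

# Round-trips losslessly, so nothing is discarded.
assert zlib.decompress(compressed) == backtrace
```

If the real data is as repetitive as it looks, even a simple general-purpose compressor should cut the over-the-air cost substantially; the module would still need a decompressor (or the server side would decompress) to recover the original bytes for the TMT.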

3.) Alternatively, has anyone managed to embed the backtrace decoding performed by the Target Monitoring Tool (TMT) into the application itself? This would eliminate the need to load a file into the TMT.

Thank you for your time.
-gcoakley


#2

That sounds about right from what I recall.

Why do you want to do this?

AIUI, this is deep within the Wavecom “core” - so it’s out of the user’s control.

I’d have thought that would take up more space than the raw backtrace…?


#3

Since this application will be sent out for testing by customers and the backtrace information will most likely be sent over the air rather than over the serial port, I would like to reduce the amount of data to the bare minimum. Also, if the backtrace information is corrupted, a smaller backtrace will limit the data used in re-requesting it.

Based upon my Wavecom contact’s description, the backtrace consists of the type of error (RTK, ARM, Watch Dog, ARM Data, or Null Pointer) and the address of the function that caused the error. Given this, I find it hard to believe that ~588 bytes are needed when fewer than 100 bytes should be able to hold it. Perhaps my contact is mistaken and there is additional information.
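For what it’s worth, if the backtrace really were only an error class plus one 32-bit address, it would pack into a handful of bytes. The sketch below is a hypothetical encoding (the error codes, field layout, and address are all invented for illustration, not the Wavecom format):

```python
import struct

# Hypothetical compact error record: 1-byte error class + 4-byte ARM
# address. Codes and layout are invented; the real format is unknown.
ERROR_CODES = {"RTK": 0, "ARM": 1, "WATCHDOG": 2, "ARM_DATA": 3, "NULL_PTR": 4}

def pack_error(error_type: str, address: int) -> bytes:
    """Pack as <class:uint8><address:uint32, little-endian>."""
    return struct.pack("<BI", ERROR_CODES[error_type], address)

record = pack_error("NULL_PTR", 0x00210F40)
print(len(record), "bytes")  # 5 bytes -- far below the observed 588
```

The gap between ~5 bytes and ~588 bytes is what makes me suspect there is more in the backtrace than the description suggests.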


#4

My understanding is that it’s a bit more than just that: it includes a “snapshot” of the call stack so that you also see what called the function that caused the error - going back a few calls.

Whether or not that justifies 600-odd bytes is another matter…
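Some rough arithmetic on that figure, assuming the frames are raw 32-bit ARM return addresses (an assumption on my part):

```python
# If most of the ~588 bytes were raw 4-byte ARM return addresses, the
# snapshot could hold on the order of 147 frames -- far more than "a few
# calls", which suggests the bulk is other metadata or text formatting.
BACKTRACE_BYTES = 588
ADDRESS_BYTES = 4  # assumed 32-bit ARM return address
max_frames = BACKTRACE_BYTES // ADDRESS_BYTES
print(max_frames)  # 147
```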