MC7430 stuck

Hi Community,

We are testing QMI support on the MC7430. When the MTU is configured below 1500 and the data traffic is larger than the interface MTU, the modem freezes after some time. It looks like it could be an issue with the MC7430's fragmentation handling. We checked the behaviour on MBPL r20 with the connection-manager and the issue is still seen. I have tried all the options mentioned on the freedesktop and Sierra forums (GobiNet was suggested, but it is EOL) and nothing worked. The only way to recover is to restart the box. Any help will be greatly appreciated.

Error log:
[11368.933001] qmi_wwan 2-2:1.8: nonzero urb status received: -71
[11368.950482] qmi_wwan 2-2:1.8: wdm_int_callback - 0 bytes
[11370.307002] qmi_wwan 2-2:1.8: nonzero urb status received: -71
[11370.324475] qmi_wwan 2-2:1.8: wdm_int_callback - 0 bytes
[11371.811010] qmi_wwan 2-2:1.8: nonzero urb status received: -71
[11371.828494] qmi_wwan 2-2:1.8: wdm_int_callback - 0 bytes

Note: qmi_wwan may hit a watchdog after the above logs appear.

Thanks
KK

Do you see this problem on an Ubuntu PC?
FYI, I found no problem running the connection manager sample on the MC7430 with FW SWI9X30C_02.36.00.00 on Ubuntu 18 with kernel version 4.13.
I have run an iperf UDP test for over 50 minutes.

Hi Jyijyi,

Thanks a lot for your quick response. We test it on Ubuntu 18 with the 4.15 kernel. We tested with 2.36 as well and still saw the issue. On 2.33 it was seen very frequently; on 2.36 it took us about 40 minutes to reproduce. We also tested with GobiNet and it looks stable.
Following are the steps we followed:

Preconfig
Client-side (a command sketch for these steps follows the list):

  1. Connect the MC7430 via qmicli or connection-manager.
  2. Change the interface MTU of wwan0 to 1100 (also tested with the carrier-provided MTU; if the MTU is below 1500, the chance of hitting the issue is high).
  3. Make sure the default gateway is via wwan.
  4. Check that a ping to Google works.
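
For reference, a minimal sketch of these client-side steps (the APN, device paths, and addressing method are assumptions and will differ per host/carrier; the connection-manager sample can replace the qmicli line):

sudo qmicli -d /dev/cdc-wdm0 --wds-start-network="apn=<apn>,ip-type=4" --client-no-release-cid
sudo ip link set dev wwan0 mtu 1100        # step 2: MTU below 1500
sudo ip route replace default dev wwan0    # step 3: default route via wwan
ping -c 4 8.8.8.8                          # step 4: connectivity check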

Server-side:

  1. Set the UDP payload size to 1450:
    iperf -s -u -l 1450

Steps to do after preconfig (a timed loop for the "keep repeating" steps is sketched after the list):

  1. Client-side:
     iperf -u -c <server-ip> -b 10M
  2. Client-side: Keep repeating the command at step 1 for about 10 minutes.
  3. Server-side: Change the packet size to 1500:
     iperf -s -u -l 1500
  4. Client-side: Keep repeating the command at step 1 for about 5 more minutes.
  5. Client-side:
     iperf -u -c <server-ip> -b 10M -l 1400
  6. Client-side: Keep repeating the command at step 5 for about 10 minutes.
  7. Server-side: Change the packet size to 1260:
     iperf -s -u -l 1260
  8. Client-side: Keep repeating the command at step 5 until the issue is seen (it happens in about 10 minutes).
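
For the "keep repeating" steps above, a timed loop along these lines can be used (the 600-second duration and <server-ip> are placeholders for the actual values):

# repeat the step-1 client command for roughly 10 minutes (bash)
end=$((SECONDS + 600))
while [ "$SECONDS" -lt "$end" ]; do
    iperf -u -c <server-ip> -b 10M
done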

Please note:
In the original testing, we lowered the MTU and connected a host behind the device. After a few minutes of browsing we hit the issue. We did traffic profiling and tested with various traffic types. With the above steps we can reproduce the issue consistently. We also observed that the throughput decreases with fragmentation and that the ping delay increases gradually over time. As a reference we ran the same testing on a SIM7600 for about a day and did not see any degradation in iperf throughput or ping response.

Thanks,
KK

I used the default MTU and did not change it.
I also used iperf to send UDP; no problem was found running the following script on the MC7430 with FW SWI9X30C_02.36.00.00 on Ubuntu 18 with kernel version 4.13.


while true; do
date
sudo iperf -u -c 116.66.221.43 -p 5050 -b 10M
done


Attached log2.txt is the log for this experiment.
log2.txt (115.9 KB)

Hi Jyijyi,

Is the default MTU 1500? In my test, setting the wwan interface MTU below 1500 is the key step; for example, the Airtel ISP gives an MTU of 1358. Can you try setting the wwan interface MTU to 1100? I also observed that the chance of occurrence is higher when fragmentation happens on both TX and RX, so please increase the iperf UDP payload length on both the server and the client. I will try to get the logs with the 2.36 firmware.
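
For reference, a rough sketch of the commands I mean (the server IP and interface name are placeholders):

sudo ip link set dev wwan0 mtu 1100       # wwan MTU below 1500
iperf -s -u -l 1450                       # server: payload larger than the MTU
iperf -u -c <server-ip> -b 10M -l 1450    # client: same payload, so both TX and RX fragment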

Thanks
KK

yes, MTU is 1500.

owner@ubuntu:~$ ifconfig wwan0
wwan0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.119.34.31  netmask 255.255.255.192  destination 10.119.34.31
        inet6 fe80::778:a221:f484:d5eb  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 4  bytes 336 (336.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 19  bytes 1208 (1.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Why don't you use GobiNet if it is working fine for you?

Following are our concerns about using the Sierra GobiNet driver:

  1. Our product supports a couple of other QMI modems and they use qmi_wwan. Maintaining both QMI drivers would be difficult.
  2. qmi_wwan supports newer QMI modems (Sierra and non-Sierra).
  3. Sierra GobiNet is EOL, so we may not get the needed support and might be asked to move to MBPL.

This issue seems to be specific to AMD CPUs and USB 3.0.

Then you can try at!usbspeed=0 to limit the modem to USB 2.0.
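
A rough sketch of how that could be entered, assuming the AT command port enumerates as /dev/ttyUSB2 (the port and the unlock step are assumptions and may differ per setup):

sudo minicom -D /dev/ttyUSB2
# in the terminal:
#   AT!ENTERCND="<password>"   (unlock extended commands if required)
#   AT!USBSPEED=0              (limit to USB 2.0; typically takes effect after a reset)
#   AT!RESET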

I tried this setting as well, but with it I am seeing an issue on the RX side. When the packet size is greater than the interface MTU, the overruns keep increasing. The first few packets make it in (20-40) and the later ones are dropped. It can be easily reproduced with a simple client-server socket program where the server always sends a packet greater than the interface MTU.
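
A minimal sketch of such a test using netcat (the flags follow BSD/OpenBSD netcat, and the addresses, port, and interface name are placeholders; other netcat variants need different options):

# server side: send a 1450-byte UDP datagram towards the modem every second
while true; do
    head -c 1450 /dev/zero | nc -u -w1 <client-ip> 5005
    sleep 1
done

# client side (behind the MC7430): listen, then watch the RX/overrun counters
nc -u -l 5005 > /dev/null &
watch -n1 ip -s -s link show dev <wwan-if>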

Logs
ip -s -s link show dev wwp3s0f3u2i8
2: wwp3s0f3u2i8: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1100 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
link/none
RX: bytes packets errors dropped overrun mcast
20492801 23633 92945 0 61964 0
RX errors: length crc frame fifo missed
0 0 0 0 0
TX: bytes packets errors dropped carrier collsns
19388011 36615 0 0 0 0
TX errors: aborted fifo window heartbeat transns
0 0 0 0 0 0

It looks like some part of the RX ring buffers is not being released. Can we dump the RX and TX buffers?

Why is the packet size greater than the interface MTU? It should be divided into two…

So the stuck problem is solved?

The modem stuck issue is resolved. But we have seen a few service providers where the published MTU is less than 1500, so this solution of putting the modem in USB 2.0 will be an issue for us. Unfortunately, we are not able to capture the packets in tcpdump; I suspect the packets are dropped on the host before the tcpdump hook.

OK, at least there is no more stuck issue when you lower the MTU to 1100.
So can you elaborate on what the issue is now with USB 2.0 and MTU 1100?

I made a client-server program where the two sides exchange packets of length 1450. When I freshly start the system, connect the modem, and start the client, the packet exchange works fine for a few packets. Later, I see that all packets of length 1450 are dropped on the receive side (RX). I also observed that packets smaller than the MTU are not dropped. On the transmit side, packets are transmitted and received on the server irrespective of their size.

You can capture a Wireshark log on both sides and see if there are packets dropped in the network.

When you set the MTU to 1100 on both the TX and RX sides, there should not be any 1450-byte packets…