Allow for "best effort" capturing, where occasional packet drops are allowed.

Logic's current behavior when it can't keep up with the capture stream is to stop, drop, and roll. The user's only options are to retry with the current settings or lower the sample rate.

I'm under the impression that Logic considers a single dropped packet to be enough to stop the capture. Being able to relax that and make the capture "best effort" might be a better end-user experience. Basically I want to be able to adjust how many packets can be dropped during a capture before Logic gives up. If I'm probing a 1 MHz signal at the full 100 MS/s, I've got enough oversampling that dropping a measurement won't affect my results. Additionally, unless a trigger occurs between when a packet is dropped and when its neighbors get evicted from the pre-roll, the dropped packet wouldn't have been relevant anyway.
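As a minimal sketch of the policy being requested: a configurable drop budget that only aborts the capture once the tolerance is exceeded. All names here (`DropBudget`, `on_packet_dropped`) are illustrative, not part of Logic's actual API.

```python
# Hypothetical sketch of a "best effort" capture policy: abort only after
# more than `max_drops` packets have been lost, instead of on the first one.

class DropBudget:
    """Tolerate up to `max_drops` lost packets before aborting a capture."""

    def __init__(self, max_drops):
        self.max_drops = max_drops
        self.dropped = 0

    def on_packet_dropped(self):
        """Record a drop; return True if the capture should keep running."""
        self.dropped += 1
        return self.dropped <= self.max_drops


budget = DropBudget(max_drops=2)
print(budget.on_packet_dropped())  # True  (1 of 2 drops used)
print(budget.on_packet_dropped())  # True  (2 of 2 drops used)
print(budget.on_packet_dropped())  # False (budget exhausted: stop capture)
```

Setting `max_drops=0` reproduces the current stop-on-first-drop behavior, so the existing strict mode is just one point on the same knob.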

  • jbradach
  • Jul 11 2018
  • Mark Garrison commented
    July 11, 2018 20:30

    Thanks for bringing this up, as it's a fairly important consideration. Ultimately I consider this to fall under the category of "how should the software handle unreliability".

    First, I believe we need to put our main focus into making the product as reliable as possible. In an ideal case, we would make the recording process perfectly reliable for 100% of captures. If that was possible, this issue wouldn't exist.

    And to that end, we've made improvements. The products we ship now have significantly better resistance to these capture failures, because we've added more memory to the hardware. On top of that, we've tuned the way we set up and run captures at the USB level, and have also added USB host controller vendor-specific code to handle quirks that are specific to some machines.

    However, it's still not perfect. On a small number of laptops, we've found that the original Logic isn't able to reliably sample past 8 MSPS, despite our best efforts.

    On top of that, if there is other USB traffic during a capture, our transfers will be delayed, causing unavoidable capture termination.

    So that leads us to ask, what should the user experience be when this happens? Our first thought was that the user should never get incorrect data on the screen. The most direct solution to this problem is to terminate the capture the moment data is lost, to prevent any data after the loss event from being appended to the capture.

    That's all we do right now. The next question is whether we should allow discontinuous captures, that is, captures where the data set contains gaps in which data was not recorded.

    Longer term, we do want to support discontinuous data captures. This will be essential to support hardware triggering and sample rates that exceed USB streaming bandwidth.
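One way to think about a discontinuous capture (purely illustrative, not a description of Logic's internal format) is a list of contiguous segments, each tagged with its starting sample index, so the gaps are implicit between segments:

```python
# Hypothetical representation of a discontinuous capture: a list of
# (start_sample, samples) segments; anything between segments is a gap.

segments = [
    (0, [1, 0, 1, 1]),   # samples 0..3
    # samples 4..5 lost during a USB stall -> gap
    (6, [0, 0, 1]),      # samples 6..8
]

def sample_at(segments, index):
    """Return the sample at `index`, or None if it falls in a gap."""
    for start, data in segments:
        if start <= index < start + len(data):
            return data[index - start]
    return None

print(sample_at(segments, 2))  # 1
print(sample_at(segments, 5))  # None (inside the gap)
```

The same shape works for hardware-triggered bursts: each trigger event simply produces a new segment at whatever sample index it fired.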

    It's an enormous undertaking. So big, in fact, that it's worth considering other options, such as:
    1. Shipping more RAM on future products, so much that USB bandwidth is not a necessity.
    2. Recommending customers add an extra USB host controller to their system if the issue persists (PCI Express and ExpressCard USB host controllers are very inexpensive).
    3. Having the software benchmark USB reliability and limit the maximum sample rate accordingly.
    4. Automatically detecting out-of-date drivers and monitoring bus bandwidth utilization to identify problematic devices. The software could walk you through moving all other USB devices to one host controller so the logic analyzer could exclusively use the other one.
    5. Switching from bulk USB transfers to isochronous USB transfers.
    6. Implementing hardware compression in future products.

    That's a quick summary of our perspective on the problem. We also need to do a better job of monitoring how many of our users experience this problem, through the opt-in usage statistics we started collecting in the last few releases.