If you need to trigger on a rare event, there is no reason why the software should be parsing the ongoing data looking for the trigger. The software should tell the hardware what event to look for, then let it handle the trigger.
With the current method I can only sample at 5 MS/s when triggering on one channel in continuous mode without the memory footprint growing forever. If the trigger were in the hardware, it could sit there for days at 100 MS/s waiting for the trigger.
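Roughly what I mean: right now something like this has to run on the PC for every block of samples (names made up, just to illustrate the work involved):

# Illustration only - not Saleae's actual code. A software trigger has to
# touch every sample: at 100 MS/s that's 100 million words per second the PC
# must scan, and everything has to stay buffered until the edge shows up.

def find_rising_edge(samples, channel_bit, prev_level=0):
    """Scan a block of 16-bit sample words for a 0->1 edge on one channel."""
    mask = 1 << channel_bit
    for i, word in enumerate(samples):
        level = 1 if (word & mask) else 0
        if level and not prev_level:      # rising edge found
            return i
        prev_level = level
    return None                           # no trigger in this block; keep buffering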
+1 on this, and having a stop trigger would also allow us, for example, to let it run and just grab the specific time frame we are looking at.
Ideally, there'd be a separate pin - SYNC_IN - that could be configured to start/stop the sample clock.
I'm using Logic Pro 16 with a 16-bit micro and I'd like to be able to synchronize the internal Logic Pro 16 sampling clock with a bunch of other Vector CANtech measurement IO stuff.
Having the SYNC_IN line be separate from the other 16 pins would be best, so that I can use the full 2 bytes' worth of pins to bit-bang my protocol.
Of course, the SYNC_IN line might as well be dual-purpose and be used as a normal digital input too.
And while you're at it, ... adding one more channel - for a total of 18 - would be good, so that I can use the 2 extra bits for the CLOCK and FRAME (chip select) signals.
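For reference, the decode I'd do on that capture is basically this (just a sketch - the channel assignment is whatever I'd pick, and I'm assuming an active-low chip select):

# Hypothetical channel assignment: channels 0-15 = 16-bit data bus,
# channel 16 = CLOCK, channel 17 = FRAME (chip select, assumed active low).
# Each sample is an 18-bit word holding all channel levels at one sample point.

CLOCK = 1 << 16
FRAME = 1 << 17
DATA_MASK = 0xFFFF

def decode_words(samples):
    """Latch the 16-bit bus on each CLOCK rising edge while FRAME is asserted."""
    words = []
    prev_clock = 0
    for s in samples:
        clock = 1 if (s & CLOCK) else 0
        frame_active = not (s & FRAME)    # active-low chip select
        if clock and not prev_clock and frame_active:
            words.append(s & DATA_MASK)
        prev_clock = clock
    return words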
+1 on this as well. It would also be nice if you had a live mode that only starts storing data once a trigger event has been met.
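Something like a rolling pre-trigger buffer: keep the live view running off a small circular buffer and only start committing samples once the trigger fires. Rough idea of what I mean (all names made up):

from collections import deque

# Sketch of a "live mode": keep only the last N samples in a ring buffer,
# and only start persisting data once the trigger condition is met.
PRETRIGGER_SAMPLES = 1_000_000            # arbitrary pre-trigger depth

ring = deque(maxlen=PRETRIGGER_SAMPLES)
stored = []
triggered = False

def on_sample(word, trigger_condition):
    global triggered
    if triggered:
        stored.append(word)               # post-trigger: store everything
    else:
        ring.append(word)                 # pre-trigger: memory use stays bounded
        if trigger_condition(word):
            triggered = True
            stored.extend(ring)           # keep the pre-trigger history too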
+1 on this. And adding state triggering to allow a sequence of events to trigger the analyzer would be a big plus.
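For example (just to sketch the idea - the conditions here are made up), only fire the capture after a set of conditions has matched in order:

# Sketch of a sequence ("state") trigger: the trigger fires only after each
# condition has been matched, in order. Conditions here are made-up checks on
# the raw sample word; a real UI would build them from user settings.

def make_sequence_trigger(conditions):
    """Return a per-sample function that fires once all conditions match in order."""
    state = {"index": 0}
    def step(word):
        if state["index"] < len(conditions) and conditions[state["index"]](word):
            state["index"] += 1
        return state["index"] == len(conditions)   # True once the full sequence matched
    return step

# Example sequence: channel 0 high, then channel 1 high, then channel 2 low.
trigger = make_sequence_trigger([
    lambda w: w & 0b001,
    lambda w: w & 0b010,
    lambda w: not (w & 0b100),
])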
Happy to sacrifice the use of 1 or 2 pins for a physical start/stop trigger function...
This is something I've thought a lot about. As we work on advanced, potentially arbitrary and user-defined triggers in the relatively near term (e.g. triggers that could include analog or protocol results), it becomes a much simpler problem space to do it all in software.
I imagine the only issue with doing it in software (that users would care about) is that on slower machines it might not work at all (not be able to keep up). And that is a problem to be sure...
The next generation of hardware should help a huge amount with slower computers, because as currently planned the data will come over already in its final, compressed format, and will therefore in most cases use much less USB bandwidth and processing power.
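As a rough illustration of why that helps (this is not the actual wire format, just the general idea of transition-based encoding), digital data that rarely changes shrinks down to almost nothing:

# Illustration only - not the real on-wire format. The general idea of
# run-length / transition encoding: instead of one word per sample point,
# send (channel_state, run_length) pairs. Idle channels cost almost nothing.

def rle_encode(samples):
    """Collapse a list of raw sample words into (word, run_length) pairs."""
    if not samples:
        return []
    encoded = []
    current, count = samples[0], 1
    for word in samples[1:]:
        if word == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = word, 1
    encoded.append((current, count))
    return encoded

# e.g. 1,000,000 identical samples collapse into a single pair -> tiny USB payload.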
Would love to get more feedback on this issue. Thanks!
A year ago I'd have really doubted a software approach would work, but given how well the real-time display/decode seems to work in the Beta software, it seems feasible to me, at least provided the analyzers can continue to keep up as they get more complex (like the separate idea for analog analyzers, etc).