Codice continuation packets #2569
base: dev
Conversation
All packets have (SHCOARSE, EVENT_DATA, CHKSUM) fields. To combine
the segmented packets, we only concatenate along the EVENT_DATA field
into the first packet of the group.
This is for my future self and maybe others. I don't want to lose the context of these changes if someone looks at this later, since those details are in our email. Can you break the lines where appropriate and make it clearer as needed?
All packets have (SHCOARSE, EVENT_DATA, CHKSUM) fields. To combine
the segmented packets, we only concatenate along the EVENT_DATA field
into the first packet of the group.

Direct event data are segmented into packets. CoDICE assembles these
segmented packets slightly differently than the standard. Onboard,
CoDICE direct event data are segmented as follows:

Standalone / unsegmented packets:
    Data are packed in the order defined in the telemetry definition in
    Excel, e.g. (SHCOARSE, many metadata fields, EVENT_DATA, CHECKSUM).

First segment:
    Data are packed in the order defined in the telemetry definition in
    Excel, e.g. (SHCOARSE, many metadata fields, EVENT_DATA, CHECKSUM).
    As shown here, the first segment packet contains the metadata defined
    in the telemetry definition. Those metadata are unpacked in later
    functions/steps because they always have fixed bit lengths and are
    guaranteed to exist.

Middle segment:
    Data are packed in the order (SHCOARSE, EVENT_DATA, CHECKSUM).
    There can be multiple middle packets in a given packet group.

Last segment:
    Data are packed in the order (SHCOARSE, EVENT_DATA, CHECKSUM).
    This last segment of event data can contain padded values.

Because of this behavior, we defined the XTCE to unpack the data using
the field order (SHCOARSE, EVENT_DATA, CHKSUM). This simplifies
XTCE-based unpacking and allows the remaining packet-specific details to
be handled in code. In this function, the segmented event data are
combined by concatenating the EVENT_DATA field across the packet group.
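The combination step described above can be sketched roughly as follows. This is only an illustration of the idea, not the actual implementation: the `packets` dict structure and the `combine_segmented_packets` name are assumptions made for this example.

```python
def combine_segmented_packets(packets):
    """Combine a group of segmented packets into a single event record.

    Illustrative sketch: each packet is assumed to be a dict containing
    an EVENT_DATA bytes field. Only EVENT_DATA is concatenated; the
    metadata fields carried by the first segment are kept as-is.
    """
    combined = dict(packets[0])  # first segment carries the metadata
    combined["EVENT_DATA"] = b"".join(p["EVENT_DATA"] for p in packets)
    return combined


# Example group: a first segment plus two continuation segments.
group = [
    {"SHCOARSE": 100, "EVENT_DATA": b"\x01\x02", "CHKSUM": 0},
    {"SHCOARSE": 101, "EVENT_DATA": b"\x03\x04", "CHKSUM": 0},
    {"SHCOARSE": 102, "EVENT_DATA": b"\x05\x00", "CHKSUM": 0},  # padded tail
]
combined = combine_segmented_packets(group)
print(combined["EVENT_DATA"])  # b'\x01\x02\x03\x04\x05\x00'
```

Note that the last segment's padding survives the concatenation here; any trimming of padded values would have to happen in a later step.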
I think this is too verbose. I made this a generic function in the ultra continuation packet PR so that multiple instruments can use it. Over there the packet definition isn't the same as here, so these details wouldn't make sense to add to the generic function.
Can we add it somewhere in codice_l1a_de.py? I don't mind where we put it in that file, but it would be nice to capture this for future reference.
The current code looks good to me. I think there are a few changes coming from yesterday's discussion; let me know when you want me to review this again.
Change Summary
Overview
This changes the CoDICE packet definition for direct events. Previously, we were reading many fields from each packet, but that was incorrect: those fields exist only in the first packet. As a result, we were extracting fields with XTCE, recombining them back into binary, and then extracting again. The issue surfaced when the second packet's binary payload was not long enough to unpack all the individual fields.
This is also a significant refactor. Rather than working with binary strings, we now work with bytes directly and apply bitshifts to numpy arrays. Thanks to good unit tests, the refactor could be done with the help of AI.
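As a rough illustration of the bytes-plus-bitshift approach (the field widths below are invented for this example, not the real CoDICE direct-event layout):

```python
import numpy as np

# Hypothetical layout: each 3-byte event encodes a 12-bit TOF,
# an 8-bit energy, and a 4-bit event ID, most significant bits first.
raw = bytes([0xAB, 0xCD, 0xEF, 0x12, 0x34, 0x56])

# View the raw bytes as a (n_events, 3) array, widened so shifts don't overflow.
events = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 3).astype(np.uint32)

# Pack each 3-byte event into one 24-bit word, then shift and mask fields out.
words = (events[:, 0] << 16) | (events[:, 1] << 8) | events[:, 2]
tof = (words >> 12) & 0xFFF   # top 12 bits
energy = (words >> 4) & 0xFF  # next 8 bits
event_id = words & 0xF        # bottom 4 bits

print(tof.tolist(), energy.tolist(), event_id.tolist())
# [2748, 291] [222, 69] [15, 6]
```

Compared with slicing `"0"`/`"1"` strings, this operates on whole arrays of events at once and avoids the cost of building an intermediate binary string per packet.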
closes #2568