chore: Add persistence store support for FDv2 #357
Conversation
Additional location flagged in the deadlock report below (FeatureStoreClientWrapper):

```python
self.__lock.unlock()

if modified:
    self.__listeners.notify(status)
```
What is the threading situation like for this dispatch? Is the caller thread always the same thread?
Yes, unfortunately that's the way the current listeners work in Python. The notify method itself catches all exceptions, which is why we aren't catching them here.
Ok. Well, it is good in a sense, because it helps prevent out-of-order handling from unfortunate thread context switching. If it were multi-threaded, I would be concerned that we would need some kind of async queue arrangement.
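For context, a minimal sketch of the dispatch pattern being discussed, assuming a single caller thread and a notify() that swallows listener exceptions; this is illustrative only, not the SDK's actual Listeners class:

```python
class ListenersSketch:
    """Illustrative single-threaded listener dispatch (not the SDK class)."""

    def __init__(self) -> None:
        self._listeners = []

    def add(self, listener) -> None:
        self._listeners.append(listener)

    def notify(self, payload) -> None:
        # Dispatch happens on the caller's thread, so ordering follows the
        # order of calls. Listener exceptions are swallowed so one bad
        # listener cannot break dispatch to the others or propagate upward,
        # which is why the caller does not wrap this call in try/except.
        for listener in self._listeners:
            try:
                listener(payload)
            except Exception:
                pass
```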
| """ | ||
| update_status is called from the data store to push a status update. | ||
| """ | ||
| self.__lock.lock() |
Is there a reason this has to use lock/unlock vs with?
This is the LD-provided RW lock. The class has no context-manager support.
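One way to get `with`-style usage, given that the lock only exposes lock()/unlock() and rlock()/runlock(), would be a small contextlib wrapper along these lines; the helper names below are hypothetical and not part of the SDK:

```python
from contextlib import contextmanager


@contextmanager
def write_locked(rw_lock):
    """Hypothetical helper: acquire the write side of an LD-style
    read/write lock and guarantee release, even if the body raises."""
    rw_lock.lock()
    try:
        yield
    finally:
        rw_lock.unlock()


@contextmanager
def read_locked(rw_lock):
    """Hypothetical helper for the read side (rlock()/runlock())."""
    rw_lock.rlock()
    try:
        yield
    finally:
        rw_lock.runlock()


# Usage sketch:
#     with write_locked(self.__lock):
#         self.__status = status
#         modified = True
```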
Force-pushed from 4e44909 to e88a705.
```python
self.__status = status
modified = True

self.__lock.unlock()
```
Bug: Deadlock Risk in FeatureStoreClientWrapper
In FeatureStoreClientWrapper.__update_availability, the ReadWriteLock may not be released if an exception occurs after acquisition. This happens in two places: when updating __last_available and when starting self.__poller, which could lead to a deadlock.
Additional Locations (1): the snippet at the top of this conversation.
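A minimal sketch of the shape of a fix, assuming the attribute names from the report (__last_available, __poller) and a stand-in lock class; this is not the PR's actual code:

```python
import threading


class _RWLockStub:
    """Stand-in for the SDK's read/write lock so the sketch is
    self-contained; only the write-side lock()/unlock() is mimicked."""

    def __init__(self) -> None:
        self._lock = threading.Lock()

    def lock(self) -> None:
        self._lock.acquire()

    def unlock(self) -> None:
        self._lock.release()


class _WrapperSketch:
    """Illustrative shape of the fix only, not the real
    FeatureStoreClientWrapper. The recovery-poller start is a placeholder."""

    def __init__(self) -> None:
        self.__lock = _RWLockStub()
        self.__last_available = True
        self.__poller = None

    def _update_availability(self, available: bool) -> None:
        self.__lock.lock()
        try:
            if available == self.__last_available:
                return
            self.__last_available = available
            if not available and self.__poller is None:
                # Placeholder for starting the recovery poller; if this
                # raised without the try/finally, the lock would stay held.
                self.__poller = object()
        finally:
            # Always release, even when the body above raises.
            self.__lock.unlock()
```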
```python
def status(self) -> DataStoreStatus:
    self.__lock.rlock()
    status = copy(self.__status)
    self.__lock.runlock()
```
Note

Adds FDv2 persistent store integration (read-only/read-write), data store status monitoring, and comprehensive tests.

FDv2 & Store:

- `FeatureStore` with `Store.with_persistence(...)`; respect `DataStoreMode.READ_ONLY` vs `READ_WRITE` for writes.
- `FeatureStoreClientWrapper` to sort `init` data and emit `DataStoreStatus` availability updates with recovery polling.
- `commit()`; replace prints with `log`.

Configuration:

- `DataSystemConfig` with `data_store_mode` and optional `data_store`; make `primary_synchronizer` optional.
- `ConfigBuilder.data_store(...)`; provide `daemon(...)` (read-only) and `persistent_store(...)` (read-write) helpers.
- `DataStoreMode` enum.

Status APIs:

- `DataStoreStatusProviderImpl` with listener support and monitoring detection.
- `DataStoreStatus` with equality semantics.

Tests:

- `test_fdv2_persistence.py` covering read-only/write modes, delta and delete persistence, and status provider/monitoring.

Written by Cursor Bugbot for commit e88a705. This will update automatically on new commits.
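To illustrate the READ_ONLY vs READ_WRITE distinction the summary describes, here is a minimal, hypothetical sketch of gating writes by mode; the enum member names mirror the summary, but the wrapper class, method shapes, and store interface are assumptions, not the PR's actual code:

```python
from enum import Enum


class DataStoreMode(Enum):
    # Member names mirror the summary above; the values are assumed.
    READ_ONLY = "read_only"
    READ_WRITE = "read_write"


class PersistentStoreWrapperSketch:
    """Illustrative only. In READ_ONLY (daemon) mode the external store is
    treated as a read-only source maintained elsewhere; in READ_WRITE mode
    received updates are mirrored into it."""

    def __init__(self, store, mode: DataStoreMode) -> None:
        self._store = store  # assumed to expose upsert(key, item)
        self._mode = mode

    def apply_update(self, key: str, item: dict) -> None:
        if self._mode is DataStoreMode.READ_WRITE:
            self._store.upsert(key, item)
        # In READ_ONLY mode the update is intentionally not persisted.
```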