FINERACT-2443: Add test cases for reports #5353
base: develop
Conversation
I have removed the unused import, which was causing the build error. The code should work fine now.
@Shivd131 Seems half done. Covering 1 report is not enough. The PR title and commit messages are incorrect and do not follow the required format.
Force-pushed from 53134d6 to 43a44b4
@Shivd131 Change the title from "[FINERACT-2443] Test: Add ReportExecutionIntegrationTest for end-to-end" to "FINERACT-2443: Add test cases for reports" and squash your changes: only 1 commit must be included in the PR, and the title of that commit must be "FINERACT-2443: Add test cases for reports".
Force-pushed from 9a11055 to b1dc17c
@IOhacker thank you so much for the help and directions :). I have updated my commits and PR to follow the conventions. Apologies for the oversight. @adamsaghy Thanks for the feedback. I thought it over and have expanded the test to cover a more encompassing scenario. Here is the updated flow I have implemented:
1. Client Listing
2. Logic Verification (Active Loans - Details)
Also, I would like to know if I have misunderstood your previous feedback. I would appreciate knowing exactly which additional scenarios or logic you expect to be tested so I can make this test robust.
adamsaghy left a comment
Please make sure to cover ALL reports!
@Shivd131 could you please work on the changes requested?
Force-pushed from ee111be to 0a5d7a8
Hi @adamsaghy and @IOhacker, I've updated the test to cover all reports. Please review it and let me know if there's anything else I should address. Thank you!
@Shivd131 Kindly make sure all reports are covered if you want to call this PR "Add test cases for reports". If you don't want to cover all reports, kindly rename the commit and PR title to reflect the actual scope. I would love it if ALL available reports were covered!
@adamsaghy I think I may be oversimplifying things on my end, so I'd like to clarify my understanding. I'm still getting familiar with the overall system, and I want to be careful not to misinterpret what is meant by "cover all the reports." At the moment, the test dynamically fetches the full list of installed reports and iterates through them to trigger the execution pipeline. The goal there was to verify that the observability instrumentation works in a global sense, even when specific report parameters are missing (hence the intentional try/catch to ignore parameter/validation errors). When you say "cover all reports," do you mean adding explicit data setup and assertions for each individual report (for example, General Ledger, Balance Sheet, etc.) to fully validate their execution? That would certainly be more comprehensive, but it would also increase the test runtime and overall complexity quite a bit. I would be glad if you could tell me the specifics of the requirement. Once I have that clarity, I'd be happy to refine or extend the tests accordingly to what's actually needed. I want to avoid this PR ending up as a check for only a subset of cases, which is what you are currently pointing out, so your elaboration would be very helpful :) Thanks for the help!
@Shivd131 Hi, I missed the code that was doing this; however, ignoring the errors is not too useful.
Force-pushed from 0a5d7a8 to 1113ed7
Hi @adamsaghy, I considered a few approaches. I initially thought about writing individual test cases for every single report, but I felt that approach would be brittle, time-consuming to maintain, and quite resource-intensive. I feel this is a better way to test it while keeping things simple: spin up a real Client/Loan/Office context and inject those IDs into every report call. This allows standard reports (like Loan Schedule or Client Listings) to actually resolve data and return 200s. Crucially, I removed the blind try-catch. The test now strictly asserts that no report throws a 500 (a minimal sketch follows below).
I believe this is a sweet spot: it dynamically verifies the execution pipeline for the entire report registry and proves stability, without flaking out on validation errors or creating an overly complex test suite. Let me know your take on this; I'm open to adjusting and working on it further.
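A minimal sketch of the approach described above, assuming REST-assured for the HTTP calls and Fineract's runreports endpoint; the credentials, parameter keys (such as R_officeId, R_clientId, R_loanId) and the way the report list is obtained are illustrative assumptions, not the actual PR code:

```java
// Sketch only: iterates over every installed report and fails hard on any 5xx.
// Assumes a client/loan/office have already been created and their IDs are
// passed as report parameters; names like R_clientId are illustrative.
import static io.restassured.RestAssured.given;

import io.restassured.response.Response;
import java.util.List;
import java.util.Map;

public class ReportExecutionSketch {

    void verifyNoReportReturnsServerError(List<String> reportNames, Map<String, Object> contextParams) {
        for (String reportName : reportNames) {
            Response response = given()
                    .auth().preemptive().basic("mifos", "password")  // assumed test credentials
                    .queryParams(contextParams)                      // e.g. R_officeId, R_clientId, R_loanId
                    .when()
                    .get("/fineract-provider/api/v1/runreports/" + reportName); // assumed endpoint path

            // Strict assertion: validation errors (4xx) are tolerated here, server errors are not.
            if (response.statusCode() >= 500) {
                throw new AssertionError("Report '" + reportName + "' returned HTTP " + response.statusCode());
            }
        }
    }
}
```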
Force-pushed from 1113ed7 to 38e2329
Description
Currently, the reporting module lacks end-to-end integration tests that verify data integrity in the generated output. Existing tests primarily check for HTTP 200 OK statuses but do not validate that created client/loan data actually appears in the report body.
Changes:
ReportExecutionIntegrationTest.java — verifyAllReportsExecution:
- 200 OK: validated as success.
- 400 Bad Request: logged as an expected validation error (for reports requiring specialized parameters).
- 500 Server Error: fails the build.
This ensures that the observability instrumentation and reporting engine are stable across all report types, paving the way for future refactoring of the reporting logic.
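A hedged sketch of the per-report status handling described above; the class name, method name, and logging are illustrative and not necessarily what the PR implements:

```java
// Sketch of the status classification: 200 passes, 400 is logged as an
// expected validation error, anything 500+ fails the test run.
import java.util.logging.Logger;

final class ReportStatusCheck {

    private static final Logger LOG = Logger.getLogger(ReportStatusCheck.class.getName());

    static void classify(String reportName, int statusCode) {
        if (statusCode == 200) {
            LOG.info(() -> reportName + ": executed successfully");
        } else if (statusCode == 400) {
            LOG.info(() -> reportName + ": requires specialized parameters (HTTP 400), treated as expected");
        } else if (statusCode >= 500) {
            throw new AssertionError(reportName + " returned a server error: HTTP " + statusCode);
        }
    }
}
```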
Checklist
Please make sure these boxes are checked before submitting your pull request - thanks!
Your assigned reviewer(s) will follow our guidelines for code reviews.