Conversation

@Shivd131
Contributor

Shivd131 commented Jan 20, 2026

Description

Currently, the reporting module lacks end-to-end integration tests that verify data integrity in the generated output. Existing tests primarily check for HTTP 200 OK statuses but do not validate that created client/loan data actually appears in the report body.

Changes:

  • Added ReportExecutionIntegrationTest.java.
  • The verifyAllReportsExecution test:
    1. Creates real Client/Loan entities and injects a "Super Map" of standard parameters (R_clientId, R_loanId, R_officeId, etc.) into every report call. This allows standard reports (like Loan Schedule or Client Listings) to resolve data and return 200s.
    2. Iterates through ALL installed reports and asserts that no report returns a 500 Server Error:
      • 200 OK: Validated as success.
      • 400 Bad Request: Logged as an expected validation error (for reports requiring specialized parameters).
      • 500 Server Error: Fails the build.
    3. Retained specific data integrity checks for "Client Listing" and "Active Loans" to verify format negotiation (CSV/JSON) and entity filtering.

This ensures that the observability instrumentation and reporting engine are stable across all report types, paving the way for future refactoring of the reporting logic.
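
For reference, a minimal sketch of the parameter map described above (hedged: the variable names below are placeholders for the IDs of the entities the test creates; this is not the exact code in the PR):

    // Sketch only: clientId, loanId and loanProductId are placeholders for the
    // IDs of the Client/Loan entities created during test setup.
    Map<String, String> commonParams = Map.of(
            "R_officeId", "1",
            "R_clientId", String.valueOf(clientId),
            "R_loanId", String.valueOf(loanId),
            "R_loanProductId", String.valueOf(loanProductId));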

Checklist

Please make sure these boxes are checked before submitting your pull request - thanks!

  • Write the commit message as per our guidelines
  • Acknowledge that we will not review PRs that are not passing the build ("green") - it is your responsibility to get a proposed PR to pass the build, not primarily the project's maintainers.
  • Create/update unit or integration tests for verifying the changes made.
  • Follow our coding conventions.
  • Add required Swagger annotation and update API documentation at fineract-provider/src/main/resources/static/legacy-docs/apiLive.htm with details of any API changes
  • This PR must not be a "code dump". Large changes can be made in a branch, with assistance. Ask for help on the developer mailing list.

Your assigned reviewer(s) will follow our guidelines for code reviews.

@Shivd131
Contributor Author

Shivd131 commented Jan 20, 2026

I have removed the unused import, which was causing the build error. The code should work fine now.
Thanks

@adamsaghy
Contributor

@Shivd131 Seems half done. Covering 1 report is not enough.

PR title and commit messages are incorrect and do not follow the required format:
FINERACT-2443: Add test cases for reports

@Shivd131 force-pushed the feature/report-integration-test branch 2 times, most recently from 53134d6 to 43a44b4 on January 20, 2026 at 10:54
@IOhacker
Contributor

@Shivd131 change the title from "[FINERACT-2443] Test: Add ReportExecutionIntegrationTest for end-to-end" to "FINERACT-2443: Add test cases for reports", then squash your changes. Only 1 commit must be included in the PR, and the title of that commit must be "FINERACT-2443: Add test cases for reports".

@Shivd131 changed the title from "[FINERACT-2443] Test: Add ReportExecutionIntegrationTest for end-to-end verification" to "FINERACT-2443: Add test cases for reports" on Jan 20, 2026
@Shivd131 force-pushed the feature/report-integration-test branch from 9a11055 to b1dc17c on January 20, 2026 at 16:04
@Shivd131
Contributor Author

Shivd131 commented Jan 20, 2026

@IOhacker thank you so much for the help and direction :). I have updated my commits and PR to follow the conventions. Apologies for the oversight.

@adamsaghy Thanks for the feedback. I have thought it over and expanded the test to cover a more comprehensive scenario.

Here is the updated flow I have implemented:

1. Client Listing

  • Tests simple entity retrieval (m_client).
  • Verifies format negotiation (CSV, JSON outputs).

2. Logic Verification (Active Loans - Details)

  • Data Setup: Creates Client → Loan Product → Loan Application.
  • Lifecycle Testing: Explicitly moves the loan from Approved (Status 200) → Disbursed (Status 300).
  • Validation: Runs the "Active Loans" report (see the sketch after this list). This proves the engine correctly handles:
    • SQL Joins: (m_loan + m_client + m_product_loan)
    • State Filtering: Verifying the engine respects the WHERE loan_status_id = 300 predicate (ensuring undisbursed loans don't appear).
    • Parameter Mapping: Validating dynamic parameters like R_officeId and R_loanProductId.
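
To make the state-filtering step concrete, here is a hedged sketch (the call pattern mirrors the snippets later in this thread; how individual rows are inspected depends on what RunReportsResponse exposes, so only the status-code assertion is shown):

    // Sketch: run the stock "Active Loans - Details" report with the IDs from setup
    // (loanProductId is a placeholder for the product created during setup).
    Map<String, String> params = Map.of(
            "R_officeId", "1",
            "R_loanProductId", String.valueOf(loanProductId));

    Response<RunReportsResponse> response = fineractClient().reportsRun
            .runReportGetData("Active Loans - Details", params, false).execute();

    // The disbursed loan (status 300) should resolve; an approved-but-undisbursed
    // loan must not appear, because the report filters on loan_status_id = 300.
    assertThat(response.code()).isEqualTo(200);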

Also I would like to know if I have misunderstood your previous feedback. I would appreciate knowing exactly which additional scenarios or logic you expect to be tested so I can make this test robust.

@adamsaghy
Contributor

adamsaghy left a comment

Please make sure to cover ALL reports!

@IOhacker
Contributor

@Shivd131 could you please work on the changes requested?

@Shivd131 force-pushed the feature/report-integration-test branch from ee111be to 0a5d7a8 on January 25, 2026 at 18:54
@Shivd131
Contributor Author

Hi @adamsaghy and @IOhacker, I've updated the test to cover all reports. Please review it, and let me know if there's anything else I should address.

Thank you

@adamsaghy
Contributor

@Shivd131 Kindly make sure all reports are covered if you want to call this PR "Add test cases for reports".

If you don't want to cover all reports, kindly rename the commit and PR title to reflect the actual scope.

I would love it if ALL available reports were covered!

@Shivd131
Contributor Author

@adamsaghy I think I may be oversimplifying things on my end, so I'd like to clarify my understanding. I'm still getting familiar with the overall system, and I want to be careful not to misinterpret what is meant by "cover all the reports."

At the moment, the test dynamically fetches the full list of installed reports and iterates through them to trigger the execution pipeline. The goal was to verify that the observability instrumentation works in a global sense, even when specific report parameters are missing (hence the intentional try/catch that ignores parameter/validation errors).

When you say “cover all reports,” do you mean adding explicit data setup and assertions for each individual report (for example, General Ledger, Balance Sheet, etc.) to fully validate their execution? That would certainly be more comprehensive, but it would also increase the test runtime and overall complexity quite a bit.

I would be glad if you could give me the specifics of the requirement. Once I have that clarity, I'd be happy to refine or extend the tests to cover what's actually needed. I want to avoid this PR ending up as a check for only a subset of cases, which is what you are currently pointing towards, so your elaboration would be very helpful :)

Thanks for the help!

@adamsaghy
Contributor

@Shivd131 Hi

I missed the code which is doing this; however, ignoring the errors is not too useful:

for (GetReportsResponse report : allReports) {
    try {
        fineractClient().reportsRun.runReportGetData(report.getReportName(), Map.of("R_officeId", "1"), false).execute();
    } catch (Exception ignored) {
        // Ignore network/parsing errors; the focus is only that the attempted
        // execution is recorded by metrics.
    }
}
Would it be possible to ensure each report is called and there are no errors?

@Shivd131 force-pushed the feature/report-integration-test branch from 0a5d7a8 to 1113ed7 on January 27, 2026 at 18:12
@Shivd131
Contributor Author

Hi @adamsaghy,

I considered a few approaches. I initially thought about writing individual test cases for every single report, but I felt that approach would be brittle, time-consuming to maintain, and quite resource-intensive. Instead, I went with a simpler approach that I believe tests this better:

Spin up a real Client/Loan/Office context and inject those IDs into every report call. This allows standard reports (like Loan Schedule or Client Listings) to actually resolve data and return 200s.

Crucially, I removed the blind try-catch. The test now strictly asserts that no report returns a 500:

  • 200 OK: Success.

  • 400 Bad Request: Logged as expected (since we can't guess custom params for every obscure report, but this confirms the validation engine is working).

  • 500 Server Error: Fails the build.

// Iterate and execute
for (GetReportsResponse report : allReports) {
    String reportName = report.getReportName();

    Response<RunReportsResponse> response = fineractClient().reportsRun.runReportGetData(reportName, commonParams, false).execute();

    // Strict validation:
    // allow 200 (Success) or 400 (Bad Request - controlled validation error);
    // fail on 500 (Server Error) or any other unexpected code.
    if (response.code() == 200) {
        successCount++;
    } else if (response.code() == 400) {
        // A controlled validation error, not a crash.
        validationErrorCount++;
        log.info("Report '{}' returned 400 (expected for missing specialized params).", reportName);
    } else {
        // Any other status (e.g. 500 or 404) fails the test.
        fail(String.format("Report '%s' failed with unexpected status code %d", reportName, response.code()));
    }
}

I believe this is a sweet spot: it dynamically verifies the execution pipeline for the entire report registry and proves stability, without flaking on validation errors or creating an overly complex test suite.
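
One extra guard that could be added on top of this (sketch only, not in the current diff) is to assert that at least one report actually returned 200, so the run cannot pass on validation errors alone:

    // Sketch: fail the test if every report fell into the 400 bucket.
    assertThat(successCount).as("Expected at least one report to execute successfully").isGreaterThan(0);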

Let me know your take on this; I'm open to adjusting and working on it further.

@Shivd131 force-pushed the feature/report-integration-test branch from 1113ed7 to 38e2329 on January 27, 2026 at 18:31