Conversation

@vinitkumar vinitkumar commented Jun 10, 2025

Adds comprehensive pytest coverage for the json2xml.utils module, including custom exceptions and utility functions for reading JSON from files, URLs, and strings, plus end-to-end integration with the XML converter.

  • Introduces TestExceptions to verify custom error messages.
  • Adds unit tests for readfromjson, readfromurl, and readfromstring, covering success and failure modes.
  • Provides integration tests chaining read utilities with dicttoxml.
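A minimal sketch of the read-then-convert pattern these tests exercise (the `readfromstring` stand-in and the toy `dict_to_xml` converter below are illustrative only, not the json2xml implementation):

```python
import json

class StringReadError(Exception):
    """Stand-in for json2xml.utils.StringReadError."""

def readfromstring(jsondata):
    # Hypothetical stand-in mirroring the utility's contract:
    # accept only str input, wrap parse failures in a custom error.
    if not isinstance(jsondata, str):
        raise StringReadError("Input is not a proper JSON string")
    try:
        return json.loads(jsondata)
    except ValueError as exc:
        raise StringReadError("Input is not a proper JSON string") from exc

def dict_to_xml(data, root="all"):
    # Toy converter standing in for dicttoxml, for illustration only.
    body = "".join(f"<{k}>{v}</{k}>" for k, v in data.items())
    return f"<{root}>{body}</{root}>"

xml = dict_to_xml(readfromstring('{"name": "json2xml"}'))
```

The integration tests chain the real utilities the same way: parse JSON with a reader, then feed the resulting dict to the converter.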

@vinitkumar vinitkumar requested a review from Copilot June 10, 2025 19:25

codecov bot commented Jun 10, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 99.30%. Comparing base (818f76b) to head (c006170).
⚠️ Report is 12 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #241      +/-   ##
==========================================
+ Coverage   97.56%   99.30%   +1.73%     
==========================================
  Files           3        3              
  Lines         288      288              
==========================================
+ Hits          281      286       +5     
+ Misses          7        2       -5     
Flag       Coverage Δ
unittests  99.30% <ø> (+1.73%) ⬆️

Flags with carried forward coverage won't be shown.



if TYPE_CHECKING:
from _pytest.capture import CaptureFixture

Check notice

Code scanning / CodeQL

Unused import Note test

Import of 'CaptureFixture' is not used.

Copilot Autofix

AI 7 months ago

To fix the issue, we will remove the unused import of CaptureFixture from the TYPE_CHECKING block. This will clean up the code and eliminate the unnecessary dependency. No other changes are required, as this does not affect the functionality of the code.
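For context, TYPE_CHECKING-guarded imports exist only for static analyzers; at runtime the block is skipped entirely, which is why an unused name there is pure dead weight. A minimal illustration (the `Decimal` import and `to_float` helper are hypothetical examples, not project code):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by type checkers only; never imported at runtime.
    from decimal import Decimal  # noqa: F401

def to_float(value: "Decimal | float") -> float:
    # The string annotation means the runtime never needs the import.
    return float(value)
```

Removing an unused name from such a block can never change runtime behavior, which is why these autofixes are safe.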


Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -19,3 +19,2 @@
 if TYPE_CHECKING:
-    from _pytest.capture import CaptureFixture
     from _pytest.fixtures import FixtureRequest
EOF
Copilot is powered by AI and may make mistakes. Always verify output.

if TYPE_CHECKING:
from _pytest.capture import CaptureFixture
from _pytest.fixtures import FixtureRequest

Check notice

Code scanning / CodeQL

Unused import Note test

Import of 'FixtureRequest' is not used.

Copilot Autofix

AI 7 months ago

To fix the issue, the unused import of FixtureRequest should be removed from the TYPE_CHECKING block. This will eliminate the unnecessary dependency and improve code readability. The change is localized to the TYPE_CHECKING block, and no other parts of the code need to be modified.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -20,3 +20,2 @@
     from _pytest.capture import CaptureFixture
-    from _pytest.fixtures import FixtureRequest
     from _pytest.logging import LogCaptureFixture
EOF
if TYPE_CHECKING:
from _pytest.capture import CaptureFixture
from _pytest.fixtures import FixtureRequest
from _pytest.logging import LogCaptureFixture

Check notice

Code scanning / CodeQL

Unused import Note test

Import of 'LogCaptureFixture' is not used.

Copilot Autofix

AI 7 months ago

To fix the problem, the unused import statement for LogCaptureFixture should be removed from the file. This involves deleting the specific line where LogCaptureFixture is imported. No other changes are necessary since this import is not referenced anywhere else in the file.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -21,3 +21,3 @@
     from _pytest.fixtures import FixtureRequest
-    from _pytest.logging import LogCaptureFixture
+
     from _pytest.monkeypatch import MonkeyPatch
EOF
from _pytest.capture import CaptureFixture
from _pytest.fixtures import FixtureRequest
from _pytest.logging import LogCaptureFixture
from _pytest.monkeypatch import MonkeyPatch

Check notice

Code scanning / CodeQL

Unused import Note test

Import of 'MonkeyPatch' is not used.

Copilot Autofix

AI 7 months ago

To fix the issue, the unused import statement for MonkeyPatch should be removed from the file. This involves deleting the line from _pytest.monkeypatch import MonkeyPatch (line 23). No other changes are required since this import is not referenced anywhere in the code.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -22,3 +22,3 @@
     from _pytest.logging import LogCaptureFixture
-    from _pytest.monkeypatch import MonkeyPatch
+
     from pytest_mock.plugin import MockerFixture
EOF
from _pytest.fixtures import FixtureRequest
from _pytest.logging import LogCaptureFixture
from _pytest.monkeypatch import MonkeyPatch
from pytest_mock.plugin import MockerFixture

Check notice

Code scanning / CodeQL

Unused import Note test

Import of 'MockerFixture' is not used.

Copilot Autofix

AI 7 months ago

To fix the problem, we should remove the unused import from pytest_mock.plugin import MockerFixture on line 24. This will eliminate the unnecessary dependency and improve the code's readability. No other changes are required since this import is not used anywhere in the provided code.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -23,3 +23,3 @@
     from _pytest.monkeypatch import MonkeyPatch
-    from pytest_mock.plugin import MockerFixture
+
 
EOF
    """Test JSONReadError exception."""
    with pytest.raises(JSONReadError) as exc_info:
        raise JSONReadError("Test error message")
    assert str(exc_info.value) == "Test error message"

Check warning

Code scanning / CodeQL

Unreachable code Warning test

This statement is unreachable.

Copilot Autofix

AI 7 months ago

To fix the issue, the assert statement should be moved into the pytest.raises context manager block. The pytest.raises block is designed to catch the raised exception, and the assert statement can then validate the exception's message. This ensures that the assert statement is executed and serves its intended purpose of verifying the exception's message.


Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -33,3 +33,3 @@
             raise JSONReadError("Test error message")
-        assert str(exc_info.value) == "Test error message"
+            assert str(exc_info.value) == "Test error message"
 
EOF
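Note that an assertion placed after a raise inside the context-manager block never executes; the reachable form puts the assertion after the block, once the manager has caught the exception. The same semantics can be demonstrated with the stdlib unittest equivalent (a sketch, not the project's test code):

```python
import unittest

class CustomError(Exception):
    pass

class TestErrorMessage(unittest.TestCase):
    def test_message(self):
        with self.assertRaises(CustomError) as cm:
            raise CustomError("Test error message")
            # Anything here would be unreachable: the raise exits the block.
        # Reachable: the context manager has caught the exception by now.
        self.assertEqual(str(cm.exception), "Test error message")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestErrorMessage)
)
```

pytest.raises follows the identical pattern: assert on `exc_info.value` after the `with` block, never inside it.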
    """Test InvalidDataError exception."""
    with pytest.raises(InvalidDataError) as exc_info:
        raise InvalidDataError("Invalid data")
    assert str(exc_info.value) == "Invalid data"

Check warning

Code scanning / CodeQL

Unreachable code Warning test

This statement is unreachable.

Copilot Autofix

AI 7 months ago

To fix the issue, the unreachable assert statement should be moved into the with pytest.raises context manager block. This ensures that the assert statement is executed after the exception is raised and caught, allowing the test to verify the exception's message as intended. This change preserves the functionality of the test while eliminating the unreachable code.


Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -39,3 +39,3 @@
             raise InvalidDataError("Invalid data")
-        assert str(exc_info.value) == "Invalid data"
+            assert str(exc_info.value) == "Invalid data"
 
EOF
    """Test URLReadError exception."""
    with pytest.raises(URLReadError) as exc_info:
        raise URLReadError("URL error")
    assert str(exc_info.value) == "URL error"

Check warning

Code scanning / CodeQL

Unreachable code Warning test

This statement is unreachable.

Copilot Autofix

AI 7 months ago

To fix the issue, the unreachable assert statement should be moved outside the raise block while preserving its functionality. The pytest.raises context manager already captures the exception, allowing the assert statement to verify the exception message after the exception is raised. This ensures the test remains valid and functional.

The fix involves:

  1. Moving the assert statement to follow the raise statement within the pytest.raises context manager.
  2. Ensuring the test logic remains intact and correctly verifies the exception message.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -45,2 +45,3 @@
             raise URLReadError("URL error")
+        # Verify the exception message after it is raised
         assert str(exc_info.value) == "URL error"
EOF
    """Test StringReadError exception."""
    with pytest.raises(StringReadError) as exc_info:
        raise StringReadError("String error")
    assert str(exc_info.value) == "String error"

Check warning

Code scanning / CodeQL

Unreachable code Warning test

This statement is unreachable.

Copilot Autofix

AI 7 months ago

To fix the issue, the unreachable assertion on line 52 should be moved inside the pytest.raises block. This ensures that the assertion is executed after the exception is raised and captured by pytest.raises. The fix preserves the functionality of the test while eliminating unreachable code.

Steps:

  1. Move the assertion assert str(exc_info.value) == "String error" inside the pytest.raises block.
  2. Ensure the indentation and formatting align with the rest of the code.

Suggested changeset 1
tests/test_utils.py

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/tests/test_utils.py b/tests/test_utils.py
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -51,3 +51,3 @@
             raise StringReadError("String error")
-        assert str(exc_info.value) == "String error"
+            assert str(exc_info.value) == "String error"
 
EOF
Copilot AI left a comment

Pull Request Overview

Adds comprehensive pytest coverage for the json2xml.utils module, including custom exceptions and utility functions for reading JSON from files, URLs, and strings, plus end-to-end integration with the XML converter.

  • Introduces TestExceptions to verify custom error messages.
  • Adds unit tests for readfromjson, readfromurl, and readfromstring, covering success and failure modes.
  • Provides integration tests chaining read utilities with dicttoxml.
Comments suppressed due to low confidence (1)

tests/test_utils.py:199

  • All other parse failures are wrapped in JSONReadError; expecting a raw JSONDecodeError here is inconsistent. Either update the function to wrap errors uniformly or adjust the test to expect JSONReadError.
with pytest.raises(json.JSONDecodeError):

Comment on lines +58 to +84
def test_readfromjson_valid_file(self) -> None:
    """Test reading a valid JSON file."""
    test_data = {"key": "value", "number": 42}

    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
        json.dump(test_data, f)
        temp_filename = f.name

    try:
        result = readfromjson(temp_filename)
        assert result == test_data
    finally:
        import os
        os.unlink(temp_filename)

def test_readfromjson_invalid_json_content(self) -> None:
    """Test reading a file with invalid JSON content."""
    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
        f.write('{"invalid": json content}')  # Invalid JSON
        temp_filename = f.name

    try:
        with pytest.raises(JSONReadError, match="Invalid JSON File"):
            readfromjson(temp_filename)
    finally:
        import os
        os.unlink(temp_filename)
Copilot AI Jun 10, 2025

[nitpick] Consider using pytest's tmp_path fixture instead of manual tempfile and cleanup to simplify file management and avoid repetitive os.unlink calls.

Suggested change
-def test_readfromjson_valid_file(self) -> None:
-    """Test reading a valid JSON file."""
-    test_data = {"key": "value", "number": 42}
-    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
-        json.dump(test_data, f)
-        temp_filename = f.name
-    try:
-        result = readfromjson(temp_filename)
-        assert result == test_data
-    finally:
-        import os
-        os.unlink(temp_filename)
-def test_readfromjson_invalid_json_content(self) -> None:
-    """Test reading a file with invalid JSON content."""
-    with tempfile.NamedTemporaryFile(mode='w', suffix='.json', delete=False) as f:
-        f.write('{"invalid": json content}')  # Invalid JSON
-        temp_filename = f.name
-    try:
-        with pytest.raises(JSONReadError, match="Invalid JSON File"):
-            readfromjson(temp_filename)
-    finally:
-        import os
-        os.unlink(temp_filename)
+def test_readfromjson_valid_file(self, tmp_path) -> None:
+    """Test reading a valid JSON file."""
+    test_data = {"key": "value", "number": 42}
+    temp_file = tmp_path / "test.json"
+    temp_file.write_text(json.dumps(test_data))
+    result = readfromjson(str(temp_file))
+    assert result == test_data
+def test_readfromjson_invalid_json_content(self, tmp_path) -> None:
+    """Test reading a file with invalid JSON content."""
+    temp_file = tmp_path / "invalid.json"
+    temp_file.write_text('{"invalid": json content}')  # Invalid JSON
+    with pytest.raises(JSONReadError, match="Invalid JSON File"):
+        readfromjson(str(temp_file))

Copilot uses AI. Check for mistakes.
Comment on lines 99 to 109
import os
# Make file unreadable (this might not work on all systems)
os.chmod(temp_filename, 0o000)

with pytest.raises(JSONReadError, match="Invalid JSON File"):
readfromjson(temp_filename)
finally:
# Restore permissions and cleanup
import os
try:
os.chmod(temp_filename, 0o644)
Copilot AI Jun 10, 2025

Changing file permissions to 0o000 can be platform-dependent (e.g., Windows); consider marking this test as xfail/skip on unsupported platforms or use pytest's monkeypatch to simulate permission errors.

Suggested change
-    import os
-    # Make file unreadable (this might not work on all systems)
-    os.chmod(temp_filename, 0o000)
-
-    with pytest.raises(JSONReadError, match="Invalid JSON File"):
-        readfromjson(temp_filename)
+    from unittest.mock import patch
+    # Simulate a PermissionError when attempting to open the file
+    with patch("builtins.open", side_effect=PermissionError):
+        with pytest.raises(JSONReadError, match="Invalid JSON File"):
+            readfromjson(temp_filename)
 finally:
-    # Restore permissions and cleanup
     import os
     try:
-        os.chmod(temp_filename, 0o644)
@vinitkumar vinitkumar merged commit 6cbdea8 into master Jun 10, 2025
31 checks passed
sourcery-ai bot commented Jun 11, 2025

Reviewer's Guide

This PR introduces a new test suite for the json2xml.utils module, adding unit tests for custom exceptions and the readfromjson, readfromurl, and readfromstring utilities—covering success and various failure modes—and integration tests chaining these readers with the dicttoxml converter.

File-Level Changes

Change Details Files
Added exception tests to validate custom error classes
  • Verify JSONReadError message
  • Verify InvalidDataError message
  • Verify URLReadError message
  • Verify StringReadError message
tests/test_utils.py
Added unit tests for readfromjson covering file I/O and parsing
  • Reading a valid temporary JSON file
  • Handling invalid JSON content
  • Handling missing file
  • Simulating PermissionError
  • Simulating generic OSError
tests/test_utils.py
Added unit tests for readfromurl covering HTTP success and errors
  • Successful GET request with and without query params
  • HTTP error statuses (404, 500) raising URLReadError
  • Handling invalid JSON payload from URL
tests/test_utils.py
Added unit tests for readfromstring validating input types and JSON syntax
  • Parsing valid, empty, and complex JSON strings
  • Rejecting non-string inputs (int, list, dict, None)
  • Rejecting various malformed JSON syntaxes
tests/test_utils.py
Added integration tests chaining readers with dicttoxml
  • Converting JSON string to XML
  • Converting JSON from URL to XML
tests/test_utils.py
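The readfromurl tests described above follow a standard unittest.mock pattern: stub the HTTP client, then assert on status handling. A self-contained sketch (read_json_url is a hypothetical stand-in for readfromurl, assuming a urllib3-style client object):

```python
import json
from unittest.mock import Mock

class URLReadError(Exception):
    """Stand-in for json2xml.utils.URLReadError."""

def read_json_url(http, url):
    # Hypothetical reader mirroring the tested contract:
    # non-200 statuses raise URLReadError, bodies are parsed as JSON.
    response = http.request("GET", url)
    if response.status != 200:
        raise URLReadError("URL is not returning correct response")
    return json.loads(response.data.decode("utf-8"))

# Success path: a mocked client returning a 200 with a JSON body.
ok_http = Mock()
ok_http.request.return_value = Mock(status=200, data=b'{"ok": true}')

# Failure path: a mocked 404.
bad_http = Mock()
bad_http.request.return_value = Mock(status=404, data=b"")
```

Mocking at the client boundary keeps these tests hermetic: no network access, and both the 404 and 500 branches are exercised deterministically.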


@sourcery-ai sourcery-ai bot left a comment


Hey @vinitkumar - I've reviewed your changes - here's some feedback:

  • In TestReadFromString, you can consolidate the many invalid-JSON cases into a single pytest.mark.parametrize to reduce boilerplate and improve readability.
  • Instead of managing temp files and cleanup manually in TestReadFromJson, consider using pytest’s tmp_path fixture to simplify file creation and automatic teardown.
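A sketch of the parametrize consolidation suggested in the first bullet (readfromstring here is a hypothetical minimal stand-in, and the pytest decorator is shown only in a comment so the sketch stays stdlib-runnable):

```python
import json

class StringReadError(Exception):
    """Stand-in for the library's exception."""

def readfromstring(jsondata):
    # Hypothetical minimal stand-in for json2xml.utils.readfromstring.
    if not isinstance(jsondata, str):
        raise StringReadError("Input is not a proper JSON string")
    try:
        return json.loads(jsondata)
    except ValueError as exc:
        raise StringReadError("Input is not a proper JSON string") from exc

# Under pytest, this table would drive a single test via
# @pytest.mark.parametrize("bad", INVALID_INPUTS)
INVALID_INPUTS = ['{"a": }', '{a: 1}', '[1, 2', 'plain text', '{"a": 1,}']

def all_rejected(cases):
    # Every malformed input must raise StringReadError.
    for bad in cases:
        try:
            readfromstring(bad)
            return False
        except StringReadError:
            continue
    return True
```

One parametrized test then replaces a dozen near-identical test methods, and each failing case is reported individually by pytest.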
Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟡 Testing: 4 issues found
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good


Comment on lines +55 to +56
class TestReadFromJson:
    """Test readfromjson function."""
suggestion (testing): Add tests for readfromjson with an empty file or a file containing non-object JSON.

Add tests for empty files (should raise JSONReadError) and for files with valid non-object JSON (should raise an appropriate error, given the function's dict return type).

Suggested change
 class TestReadFromJson:
     """Test readfromjson function."""
+
+    def test_readfromjson_empty_file(self, tmp_path):
+        """Test that reading from an empty file raises JSONReadError."""
+        empty_file = tmp_path / "empty.json"
+        empty_file.write_text("")
+        with pytest.raises(JSONReadError):
+            readfromjson(str(empty_file))
+
+    def test_readfromjson_non_object_json(self, tmp_path):
+        """Test that reading from a file with non-object JSON raises InvalidDataError."""
+        # JSON array
+        array_file = tmp_path / "array.json"
+        array_file.write_text('[1, 2, 3]')
+        with pytest.raises(InvalidDataError):
+            readfromjson(str(array_file))
+        # JSON string
+        string_file = tmp_path / "string.json"
+        string_file.write_text('"hello"')
+        with pytest.raises(InvalidDataError):
+            readfromjson(str(string_file))

        mock_http.request.return_value = mock_response
        mock_pool_manager.return_value = mock_http

        with pytest.raises(json.JSONDecodeError):
suggestion: Consider consistent error wrapping for JSONDecodeError in readfromurl.

Decide if readfromurl should wrap json.JSONDecodeError in a custom error for consistency with TestReadFromJson. If so, update this test to expect the custom error instead of the raw exception.

Suggested implementation:

        with pytest.raises(JSONReadError):
            readfromurl("http://example.com/invalid.json")

You must also update the implementation of readfromurl (likely in json2xml/utils.py) to catch json.JSONDecodeError and raise JSONReadError instead. For example:

import json

def readfromurl(url):
    ...
    try:
        data = json.loads(response.data.decode("utf-8"))
    except json.JSONDecodeError as e:
        raise JSONReadError("Invalid JSON data") from e
    ...

Comment on lines +110 to +112
class TestReadFromUrl:
    """Test readfromurl function."""


suggestion (testing): Enhance TestReadFromUrl with tests for network-level errors.

Add tests that mock mock_http.request to raise exceptions like TimeoutError and NewConnectionError, and verify that readfromurl responds appropriately, such as by raising URLReadError.

Suggested change
+from requests.exceptions import Timeout, ConnectionError
+import urllib3
+
 class TestReadFromUrl:
     """Test readfromurl function."""
+
+    @patch("json2xml.utils.http")
+    def test_readfromurl_timeout_error(self, mock_http: Mock) -> None:
+        """Test readfromurl raises URLReadError on TimeoutError."""
+        mock_http.request.side_effect = Timeout("Request timed out")
+        with pytest.raises(URLReadError, match="URL Read Error"):
+            readfromurl("http://example.com")
+
+    @patch("json2xml.utils.http")
+    def test_readfromurl_new_connection_error(self, mock_http: Mock) -> None:
+        """Test readfromurl raises URLReadError on NewConnectionError."""
+        mock_http.request.side_effect = urllib3.exceptions.NewConnectionError(
+            None, "Failed to establish a new connection"
+        )
+        with pytest.raises(URLReadError, match="URL Read Error"):
+            readfromurl("http://example.com")

Comment on lines +198 to +200
class TestReadFromString:
    """Test readfromstring function."""

suggestion (testing): Test readfromstring with valid JSON strings that are not dictionary objects.

Add tests for valid JSON inputs that are not dictionaries (e.g., arrays, null, booleans, numbers, strings) to ensure readfromstring raises an appropriate error for type mismatches.

Suggested change
 class TestReadFromString:
     """Test readfromstring function."""
+
+    @pytest.mark.parametrize(
+        "json_input",
+        [
+            '["array", "of", "values"]',
+            'null',
+            'true',
+            'false',
+            '123',
+            '"just a string"',
+            '3.14',
+        ],
+    )
+    def test_readfromstring_non_dict_types(self, json_input):
+        """readfromstring should raise InvalidDataError for non-dict JSON inputs."""
+        with pytest.raises(InvalidDataError):
+            readfromstring(json_input)

Comment on lines +274 to +279
        readfromstring("this is just plain text")


class TestIntegration:
    """Integration tests combining multiple utilities."""


suggestion (testing): Add an integration test for readfromjson chained with dicttoxml.

An integration test for readfromjson with dicttoxml should be added to match the coverage described in the PR.

Suggested change
         readfromstring("this is just plain text")


 class TestIntegration:
     """Integration tests combining multiple utilities."""
+
+    def test_readfromjson_chained_with_dicttoxml(self):
+        """Test chaining readfromjson with dicttoxml."""
+        from json2xml.utils import readfromjson
+        from json2xml import dicttoxml
+        json_input = '{"foo": "bar", "baz": 123}'
+        data = readfromjson(json_input)
+        xml_output = dicttoxml(data)
+        assert "<foo>bar</foo>" in xml_output
+        assert "<baz>123</baz>" in xml_output
