The Safety Net: Automated Testing in tests/
The Developer's Diary
Building with confidence and ensuring our application remains robust and reliable over time.
We've meticulously designed our SpLD assessment application with a clean, decoupled architecture. But how do we prove that it works correctly? And more importantly, how do we ensure that future changes don't break existing functionality? This is the crucial role of the `tests/` directory, which houses our automated test suite.
Why Automated Testing is Non-Negotiable
For many developers, especially when starting out, writing tests can feel like extra work for no immediate payoff. The application "works" when you run it, so why bother? This mindset is a trap that leads to fragile, unmaintainable software.
Novice Coder (NC): "Tests? Are you kidding? My code works. I know because I run the app and click on stuff. That's my test."
Experienced Coder (EC): "That's manual testing, and it's brittle and doesn't scale. An automated test suite is our professional safety net. In `test_services.py`, we'll write unit tests using `pytest`. We'll test our service layer in complete isolation by injecting a mock repository using a library like `unittest.mock`. This ensures our logic is correct without touching a database. In `test_repository.py`, we'll write integration tests. These will run against a real, but temporary, in-memory SQLite database to verify our SQL is correct. A good test suite is the single best indicator of a healthy codebase."
Automated tests provide three key benefits:
- Confidence: They allow you to refactor and add new features with the confidence that you haven't broken anything.
- Documentation: Well-written tests describe exactly how your code is intended to be used.
- Design: The act of writing a test forces you to think about your code's design and its dependencies, often leading to a cleaner architecture.
Deep Dive into Good Testing: The Why, What, and How
Our testing strategy will focus on two primary types of tests, each targeting a different layer of our architecture.
Unit Testing the Service Layer (The Logic)
Why: To verify that our core application logic (the use cases in the service layer) works correctly in isolation.
What: We test the methods on our `HistoryService`. We do not want this test to depend on the database or the UI.
How: We use a testing framework like `pytest` and a "mocking" library like `unittest.mock`. We create a "mock" repository: a fake object that pretends to be our real repository. This allows us to control its behavior (e.g., what it returns when a method is called) and check that our service interacts with it correctly.
```python
# tests/test_services.py
from datetime import date
from unittest.mock import Mock

from app.core.models import Assessment
from app.core.services import HistoryService


def test_create_new_assessment_use_case():
    # Arrange: set up the test conditions
    mock_repo = Mock()
    history_service = HistoryService(repo=mock_repo)

    # Define how our mock repository should behave
    expected_assessment = Assessment(
        assessment_id=1,
        client_id=123,
        assessment_date=date.today(),
        hypotheses=["dyslexia"],
    )
    mock_repo.add_assessment.return_value = expected_assessment

    # Act: execute the code we are testing
    result = history_service.create_new_assessment(
        client_id=123,
        assessment_date=date.today(),
        hypotheses=["dyslexia"],
    )

    # Assert: verify the outcome.
    # Was the repository's add_assessment method called exactly once?
    mock_repo.add_assessment.assert_called_once()
    # Does the result from the service match what we expect?
    assert result.assessment_id == 1
    assert result.client_id == 123
```
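A mock can also verify *how* it was called, not just that it was called. The sketch below is self-contained (the tiny `HistoryService` stand-in and its `delete_assessment` method are hypothetical, defined inline so the example runs on its own) and shows `assert_called_once_with`, which fails if either the call count or the exact arguments differ from what we expect:

```python
from unittest.mock import Mock


# Hypothetical stand-in for the real service, defined inline so
# this sketch runs on its own.
class HistoryService:
    def __init__(self, repo):
        self.repo = repo

    def delete_assessment(self, assessment_id: int) -> None:
        # The service's only job here is to delegate to the repository.
        self.repo.delete_assessment(assessment_id)


def test_delete_passes_id_through():
    mock_repo = Mock()
    service = HistoryService(repo=mock_repo)

    service.delete_assessment(42)

    # Fails if delete_assessment was called more than once,
    # never called, or called with different arguments.
    mock_repo.delete_assessment.assert_called_once_with(42)
```

Argument-level assertions like this catch a common class of bug: the service calls the right method but passes the wrong value through.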
Integration Testing the Repository (The Bridge)
Why: To ensure that our application can correctly integrate with an external dependency—in this case, the SQLite database.
What: We test the methods on our `SQLiteRepository` to make sure our SQL queries are correct and that data is saved and retrieved as expected.
How: We use a real SQLite database for this test, but we run it in-memory (`:memory:`) so that it's fast and gets destroyed after each test, ensuring tests don't interfere with each other. A `pytest.fixture` is a great way to set this up.
```python
# tests/test_repository.py
from datetime import date

import pytest

from app.core.models import Assessment
from app.data.repository import SQLiteRepository


@pytest.fixture
def in_memory_repo():
    """A pytest fixture that creates a fresh in-memory database for each test."""
    return SQLiteRepository(db_path=":memory:")


def test_add_and_retrieve_assessment(in_memory_repo):
    # Arrange
    new_assessment = Assessment(
        assessment_id=None,
        client_id=456,
        assessment_date=date.today(),
        hypotheses=["adhd", "working memory"],
    )

    # Act
    created_assessment = in_memory_repo.add_assessment(new_assessment)

    # Assert: the database should assign an ID
    assert created_assessment.assessment_id is not None

    # To fully test this, we would add a get_assessment method to our repo
    # and then assert that the retrieved object matches the one we saved:
    # retrieved = in_memory_repo.get_assessment(created_assessment.assessment_id)
    # assert retrieved == created_assessment
```
By combining these two testing strategies, we create a robust safety net. Unit tests quickly and efficiently validate our core logic, while integration tests ensure our bridge to the outside world is solid. This disciplined approach is the hallmark of a professional, maintainable, and successful software project.