
Testing UNet2D AutoChunking Implementation in ColossalAI

This test suite validates ColossalAI's automatic chunking (autochunk) functionality for UNet2D models, a core component of diffusion pipelines. It verifies that the model can be traced and executed correctly under several memory budgets.
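
Conceptually, autochunking rewrites a traced computation graph so that memory-heavy operators run over slices of their inputs instead of all at once. The fragment below is a framework-agnostic sketch of that idea, not ColossalAI's actual codegen; chunked_forward is a hypothetical helper.

import torch

def chunked_forward(fn, x, chunks=4, dim=0):
    # Run fn over slices of x and stitch the results back together. Peak
    # activation memory drops roughly by a factor of chunks, at the cost
    # of launching fn several times.
    return torch.cat([fn(part) for part in x.chunk(chunks, dim=dim)], dim=dim)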

Test Coverage Overview

The test suite covers UNet2D model functionality with different memory configurations and batch sizes.

Key areas tested include:
  • Memory management with various max_memory settings (see the sketch after this list)
  • Model behavior with different input shapes
  • Integration with diffusers library components
  • Version compatibility checks
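
As a concrete reading of the first two items, the sweep below mirrors the configurations the file under test defines: one latent shape derived from a 448x448 input, and three memory budgets in MB (treating None as "unconstrained" is an assumption).

BATCH_SIZE, IN_CHANNELS, HEIGHT, WIDTH = 1, 3, 448, 448
LATENTS_SHAPE = (BATCH_SIZE, IN_CHANNELS, HEIGHT // 7, WIDTH // 7)  # (1, 3, 64, 64)

for max_memory in (None, 150, 300):  # budgets in MB; None assumed to mean no limit
    print(LATENTS_SHAPE, max_memory)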

Implementation Analysis

The testing approach utilizes parametrized testing to evaluate multiple configurations systematically.

Technical implementation features:
  • Dynamic test parameterization using the @parameterize decorator (see the sketch after this list)
  • Spawn-based parallel execution
  • Memory cleanup using clear_cache_before_run
  • Version-specific test skipping logic
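
The sketch below illustrates the fan-out behavior of stacked @parameterize decorators from colossalai.testing, which the test relies on: each decorator re-invokes the wrapped function once per value, so stacking yields the cartesian product of configurations. The values here are illustrative.

from colossalai.testing import parameterize

@parameterize("shape", [(1, 3, 64, 64)])
@parameterize("max_memory", [None, 150, 300])
def check_config(shape, max_memory):
    # Invoked once per (shape, max_memory) pair: 1 x 3 = 3 calls in total.
    print(f"shape={shape}, max_memory={max_memory}")

check_config()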

Technical Details

Testing infrastructure includes:
  • PyTest framework integration
  • Torch tensor operations
  • Diffusers library compatibility checks (see the version sketch after this list)
  • Memory management utilities
  • Custom test utilities (run_test function)
  • Automated cache clearing mechanisms
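
Both compatibility gates reduce to packaging-style version comparisons. A minimal sketch of how such flags can be computed; the diffusers ceiling comes from the test itself, while the torch 1.12.0 floor is an assumption taken from the skip reason.

import torch
import diffusers  # assumed to be installed for this sketch
from packaging import version

SKIP_UNET_TEST = version.parse(diffusers.__version__) > version.parse("0.10.2")
TORCH_OK = version.parse(torch.__version__) >= version.parse("1.12.0")  # assumed floor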

Best Practices Demonstrated

The test implementation showcases several testing best practices:

  • Proper test isolation and resource cleanup
  • Conditional test execution based on environment
  • Parameterized test cases for comprehensive coverage
  • Clear separation of test data generation and execution (see the sketch after this list)
  • Explicit version compatibility handling
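
The separation point is worth spelling out: input construction stays side-effect free while execution concerns live elsewhere. A minimal sketch with hypothetical names build_inputs and execute:

import torch

def build_inputs(shape):
    # Pure builder: no device or process state, easy to reuse across cases.
    return torch.randn(shape)

def execute(model_fn, inputs):
    # Execution-only concerns (here just inference mode) are kept separate.
    with torch.no_grad():
        return model_fn(inputs)

out = execute(torch.nn.Identity(), build_inputs((1, 3, 64, 64)))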

hpcaitech/colossalai

tests/test_autochunk/test_autochunk_diffuser/test_autochunk_unet.py

from typing import List, Tuple

import pytest
import torch

try:
    import diffusers

    MODELS = [diffusers.UNet2DModel]
    HAS_REPO = True
    from packaging import version

    # Diffusers releases newer than 0.10.2 are not supported by this test.
    SKIP_UNET_TEST = version.parse(diffusers.__version__) > version.parse("0.10.2")
except ImportError:
    MODELS = []
    HAS_REPO = False
    SKIP_UNET_TEST = False

from test_autochunk_diffuser_utils import run_test

from colossalai.autochunk.autochunk_codegen import AUTOCHUNK_AVAILABLE
from colossalai.testing import clear_cache_before_run, parameterize, spawn

BATCH_SIZE = 1
HEIGHT = 448
WIDTH = 448
IN_CHANNELS = 3
LATENTS_SHAPE = (BATCH_SIZE, IN_CHANNELS, HEIGHT // 7, WIDTH // 7)  # (1, 3, 64, 64)


def get_data(shape: tuple) -> Tuple[List, List]:
    sample = torch.randn(shape)
    # Tensor inputs are passed as meta args (traced by shape/dtype); the
    # timestep is supplied as a concrete arg, fixed to a constant for tracing.
    meta_args = [
        ("sample", sample),
    ]
    concrete_args = [("timestep", 50)]
    return meta_args, concrete_args


@pytest.mark.skipif(
    SKIP_UNET_TEST,
    reason="diffusers version > 0.10.2",
)
@pytest.mark.skipif(
    not (AUTOCHUNK_AVAILABLE and HAS_REPO),
    reason="torch version is lower than 1.12.0 or diffusers is not installed",
)
@clear_cache_before_run()
@parameterize("model", MODELS)
@parameterize("shape", [LATENTS_SHAPE])
@parameterize("max_memory", [None, 150, 300])
def test_autochunk_unet(model, shape, max_memory):
    # Launch a single worker process for each parameterized configuration.
    spawn(
        run_test,
        1,
        max_memory=max_memory,
        model=model,
        data=get_data(shape),
    )


if __name__ == "__main__":
    test_autochunk_unet()