
Testing ChatInterface Streaming with Multimodal Messages in gradio-app/gradio

This test suite validates the streaming functionality of Gradio’s ChatInterface component with multimodal message support. The demo implements a slow echo handler that yields progressively longer responses, and it maintains a global run counter so tests can verify how many times the handler executed.

Test Coverage Overview

The test coverage focuses on the ChatInterface’s streaming capabilities and multimodal message handling.

  • Tests progressive text rendering through character-by-character echo
  • Verifies multimodal message support
  • Tracks execution runs using global state
  • Tests demo launch and unload functionality
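In multimodal mode, the handler receives a dict with `text` and `files` keys rather than a plain string. A minimal sketch of the character-by-character echo under test (the run counter is omitted here for brevity):

```python
def slow_echo(message, history):
    # In multimodal mode, message is a dict: {"text": str, "files": list}
    for i in range(len(message["text"])):
        # Yield an ever-longer prefix; the UI re-renders each partial in place
        yield "You typed: " + message["text"][: i + 1]

# A test can collect the partial responses by exhausting the generator
partials = list(slow_echo({"text": "hi", "files": []}, history=[]))
```

Because the handler is an ordinary generator, the progressive-rendering behavior can be asserted without launching the UI at all.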

Implementation Analysis

The implementation uses a slow_echo function that yields progressively longer prefixes of the echoed message to simulate streaming behavior. The approach demonstrates Gradio’s streaming support through generator functions, combined with global state management for run tracking.

Key patterns include:
  • Generator-based streaming response
  • Global run counter for test state
  • Blocks context for component rendering
  • Unload handler for test cleanup
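The global-counter pattern can be exercised in isolation. This sketch mirrors the `runs`/`reset_runs` pair from the demo code below; `bump_runs` is a hypothetical helper standing in for the increment performed inside `slow_echo`:

```python
runs = 0

def reset_runs():
    # Unload handler: restore the counter so the next run starts clean
    global runs
    runs = 0

def bump_runs():
    # Stand-in for the increment performed inside slow_echo
    global runs
    runs += 1
    return runs
```

Resetting inside an unload handler keeps sequential CI runs independent; parallel runs would still race on the module-level variable, which is why the demo’s comments point to `gr.State` as the eventual fix.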

Technical Details

Testing tools and configuration:

  • Gradio ChatInterface with multimodal=True
  • Blocks API for component organization
  • Global state management for run tracking
  • Demo launch configuration for manual testing
  • Unload handler for test state reset

Best Practices Demonstrated

The test implementation showcases several testing best practices for Gradio applications.

  • Proper cleanup through unload handlers
  • Isolation of test state
  • Progressive response testing
  • Clear separation of interface setup and testing logic
  • Modular component configuration

gradio-app/gradio

demo/test_chatinterface_streaming_echo/multimodal_messages_testcase.py

import gradio as gr

runs = 0

def reset_runs():
    global runs
    runs = 0

def slow_echo(message, history):
    global runs  # I didn't want to add state or anything to this demo
    runs += 1
    for i in range(len(message['text'])):
        yield f"Run {runs} - You typed: " + message['text'][: i + 1]

chat = gr.ChatInterface(slow_echo, multimodal=True, type="messages")

with gr.Blocks() as demo:
    chat.render()
    # We reset the global variable to minimize flakes
    # this works because CI runs only one test at a time
    # need to use gr.State if we want to parallelize this test
    # currently chatinterface does not support that
    demo.unload(reset_runs)

if __name__ == "__main__":
    demo.launch()