Testing ChatInterface Streaming with Multimodal Support in gradio-app/gradio

This test suite validates the streaming functionality of Gradio's ChatInterface component with multimodal input in the tuples message format. It implements a simple echo service that demonstrates progressive text generation and maintains a global run counter so tests can verify how many chat interactions have occurred.

Test Coverage Overview

The test provides coverage for Gradio’s ChatInterface streaming capabilities with multimodal input handling.

Key areas tested include:
  • Progressive text streaming response generation
  • Multimodal message handling with tuple support
  • Global state management across chat interactions
  • Run counter reset functionality

Implementation Analysis

The implementation uses a straightforward approach with a global counter to track interaction runs. The slow_echo function demonstrates streaming by yielding progressively longer text responses character by character, while maintaining the run count state. The test leverages Gradio's ChatInterface with multimodal=True and type="tuples" configuration for enhanced message handling.

Notable patterns include:
  • Generator-based streaming response
  • Global state management for run tracking
  • Unload event handler for state reset
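The generator-based streaming pattern can be sketched independently of Gradio as a plain Python generator. The function name `stream_echo` here is illustrative, not from the demo file:

```python
def stream_echo(text: str):
    """Yield progressively longer prefixes of the input,
    simulating token-by-token streaming output."""
    for i in range(len(text)):
        yield "You typed: " + text[: i + 1]

# In a ChatInterface, each yielded value replaces the previously
# displayed message, producing a typing effect.
chunks = list(stream_echo("hi"))
# chunks == ["You typed: h", "You typed: hi"]
```

Gradio treats any chat function that yields (rather than returns) as a streaming response, which is why the demo needs no extra streaming configuration.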

Technical Details

Testing components include:
  • Gradio ChatInterface with multimodal support
  • Blocks context for component rendering
  • Global variable for state management
  • Generator function for streaming responses
  • Unload event handler for cleanup
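The global-counter-with-reset pattern used for state management can be sketched in isolation. The helper name `bump` is hypothetical; in the demo the increment happens inside slow_echo and `reset_runs` is wired to `demo.unload()`:

```python
runs = 0  # module-level counter shared across all chat turns

def bump() -> int:
    """Increment the shared counter, as the demo does once per chat run."""
    global runs
    runs += 1
    return runs

def reset_runs():
    """Reset the counter; attaching this to an unload event lets each
    test session start from a clean state."""
    global runs
    runs = 0

bump()
bump()
reset_runs()
# runs is now 0 again, so the next session starts counting from 1
```

This only works when sessions run one at a time; concurrent sessions would race on the module-level variable, which is why the demo's comments point to gr.State for parallel tests.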

Best Practices Demonstrated

The test exemplifies several testing best practices in Gradio application development.

Notable practices include:
  • Isolation of test state through reset functionality
  • Clear separation of interface setup and launch logic
  • Proper cleanup handling with unload events
  • Progressive response generation for realistic chat simulation

gradio-app/gradio

demo/test_chatinterface_streaming_echo/multimodal_tuples_testcase.py

import gradio as gr

runs = 0

def reset_runs():
    global runs
    runs = 0

def slow_echo(message, history):
    global runs  # I didn't want to add state or anything to this demo
    runs = runs + 1
    for i in range(len(message['text'])):
        yield f"Run {runs} - You typed: " + message['text'][: i + 1]

chat = gr.ChatInterface(slow_echo, multimodal=True, type="tuples")

with gr.Blocks() as demo:
    chat.render()
    # We reset the global variable to minimize flakes
    # this works because CI runs only one test at a time
    # need to use gr.State if we want to parallelize this test
    # currently chatinterface does not support that
    demo.unload(reset_runs)

if __name__ == "__main__":
    demo.launch()