Testing OpenAI Chat Completion Stream Handling in gpt4free

This test suite validates the OpenAI API integration functionality, specifically focusing on chat completion streaming capabilities. It tests both streaming and non-streaming responses using a local API endpoint.

Test Coverage Overview

The suite validates interaction with the OpenAI ChatCompletion API, with particular attention to how responses are handled.

Key areas tested include:
  • Stream vs non-stream response handling
  • Message content extraction
  • Local API endpoint integration
  • Response format validation

Implementation Analysis

The test implements a dual-path verification strategy for ChatCompletion responses: conditional logic branches on the response type to handle both streamed and non-streamed payloads, ensuring proper content extraction and output formatting. It relies on Python's native isinstance type checking and the OpenAI client library's response objects.
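The dual-path extraction described above can be factored into a standalone helper, sketched here without the live endpoint (the function name `extract_content` and the literal payloads are illustrative; the field names follow the OpenAI chat completion format):

```python
from typing import Iterator, Union


def extract_content(response: Union[dict, Iterator[dict]]) -> str:
    """Collect message content from either response shape."""
    if isinstance(response, dict):
        # Non-streaming: the full message arrives in a single payload.
        return response["choices"][0]["message"]["content"]
    # Streaming: concatenate the incremental delta chunks, skipping
    # chunks that carry no "content" key (e.g. role or finish markers).
    parts = []
    for token in response:
        content = token["choices"][0]["delta"].get("content")
        if content is not None:
            parts.append(content)
    return "".join(parts)
```

Because both branches return a plain string, the same assertion logic can cover streamed and non-streamed fixtures.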

Technical Details

Testing tools and configuration:
  • OpenAI Python client library
  • Local API endpoint (port 1337)
  • GPT-3.5-turbo model integration
  • Stream parameter testing
  • Dictionary vs iterator response handling
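The two response shapes the test distinguishes can be sketched as hypothetical payloads (field names mirror the OpenAI chat completion format; the literal values are invented for illustration):

```python
# Non-streaming: one payload containing the complete assistant message.
non_stream_response = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}]
}

# Streaming: a sequence of chunks, each carrying an incremental delta.
# Later chunks may omit "content" entirely, which is why the test uses
# .get("content") rather than direct key access.
stream_chunk = {
    "choices": [{"delta": {"content": "Hel"}}]
}
```

The structural difference (a `message` object versus a `delta` fragment) is what drives the dictionary-vs-iterator branching in the test.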

Best Practices Demonstrated

The test demonstrates response type validation and defensive content handling. Notable practices include proper stream handling with flush control, null checking before printing extracted content, and a clear separation of concerns between API interaction and output handling. The code organization follows a clean main() function pattern with conditional logic for the different response types.

xtekky/gpt4free

etc/testing/test_interference.py

# type: ignore
import openai

# point the client at the local interference API; no key is required
openai.api_key = ""
openai.api_base = "http://localhost:1337"


def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # not stream: the full message arrives in a single payload
        print(chat_completion["choices"][0]["message"]["content"])
    else:
        # stream: print each content delta as it arrives
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)


if __name__ == "__main__":
    main()