Testing OpenAI Chat Completion API Integration in gpt4free

This test suite validates the OpenAI API integration in the gpt4free repository, covering both streaming and non-streaming chat completion responses. It exercises the core flow of issuing OpenAI-style API calls against a configurable endpoint and handling each response mode correctly.

Test Coverage Overview

The test coverage encompasses essential API interaction scenarios for the gpt4free project.

Key areas tested include:
  • API configuration with custom tokens and base URLs
  • Chat completion request formatting and execution
  • Response handling for both streaming and non-streaming modes
  • Error handling and response validation
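The dual-mode response handling listed above can be factored into a small helper. The sketch below is illustrative only; the `extract_content` name and shape are assumptions, not part of the repository file:

```python
def extract_content(response):
    """Return the full completion text from either response mode.

    A non-streaming response is a single dict with a "message" field;
    a streaming response is an iterable of chunks carrying "delta"s.
    """
    if isinstance(response, dict):
        return response["choices"][0]["message"]["content"]
    parts = []
    for chunk in response:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta is not None:
            parts.append(delta)
    return "".join(parts)
```

Keeping this logic in one place means both branches of the integration test can assert against the same extraction path.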

Implementation Analysis

The testing approach uses a plain Python script rather than a formal unittest suite: it configures the client, issues a request, and validates the response. The implementation follows a straightforward pattern of setting up the API configuration, making the request, and parsing the result, leveraging OpenAI's ChatCompletion interface with a configurable streaming option and a separate handling branch for each response mode.
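The request-formatting half of that pattern can be isolated into a pure function so it is testable without any network access. This is a sketch; the `build_chat_request` helper is an assumption for illustration, not code from the repository:

```python
def build_chat_request(prompt, model="gpt-3.5-turbo", stream=False):
    """Assemble the keyword arguments passed to openai.ChatCompletion.create."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }
```

Because the function returns plain data, assertions on the request shape (model name, message roles, stream flag) need no running server.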

Technical Details

Testing components include:
  • A plain Python script with a main() entry point, executed directly
  • OpenAI Python client library (legacy ChatCompletion interface)
  • Custom api_base configuration pointing at a local gpt4free server (http://localhost:1337/v1) in place of OpenAI's endpoint
  • Hugging Face token supplied as the API key, required only when using embeddings
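The script as shipped needs a running server at the configured base URL. For fully isolated runs, the client call can be replaced with a stand-in using the standard library's unittest.mock; this sketch is an extension for illustration, not part of the repository file:

```python
from unittest.mock import MagicMock

# Stand-in for the openai module, so no network or local server is needed.
fake_openai = MagicMock()
fake_openai.ChatCompletion.create.return_value = {
    "choices": [{"message": {"content": "a poem about a tree"}}]
}

response = fake_openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=False,
)
assert response["choices"][0]["message"]["content"] == "a poem about a tree"

# The mock also records how it was called, enabling request-format checks.
_, kwargs = fake_openai.ChatCompletion.create.call_args
assert kwargs["model"] == "gpt-3.5-turbo"
assert kwargs["stream"] is False
```

Swapping the real module for a mock like this keeps the response-handling logic under test while removing the dependency on a live endpoint.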

Best Practices Demonstrated

The test implementation showcases several testing best practices: proper API configuration management, separation of concerns between setup and execution, and handling of both response types. The code is cleanly organized into distinct sections for configuration, request execution, and response processing.

xtekky/gpt4free

etc/testing/test_api.py

import openai

# Set your Hugging Face token as the API key if you use embeddings
# If you don't use embeddings, leave it empty
openai.api_key = "YOUR_HUGGING_FACE_TOKEN"  # Replace with your actual token

# Set the API base URL if needed, e.g., for a local development environment
openai.api_base = "http://localhost:1337/v1"

def main():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )
    if isinstance(response, dict):
        # Not streaming: the full message is available immediately
        print(response["choices"][0]["message"]["content"])
    else:
        # Streaming
        for token in response:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()