Testing Llama Chat Model Integration in ruoyi-vue-pro

This test suite validates the integration of OllamaChatModel with Spring AI, covering both synchronous and streaming chat interactions. It shows how to configure a Llama-backed chat model and drive it with a custom system prompt and a user message.

Test Coverage Overview

The test suite covers the fundamental chat model operations, with one test case each for the synchronous and streaming paths (a minimal sketch of the two call styles follows this list).

  • Synchronous chat response testing via testCall()
  • Streaming response handling through testStream()
  • System message and user message integration
  • Custom prompt configuration with a Chinese-language system prompt
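
For orientation, here is a minimal sketch of the two call styles the tests exercise. The method names come from Spring AI's ChatModel and StreamingChatModel contracts, which OllamaChatModel implements; the English prompt is illustrative only, and the full test class appears at the end of this page.

import java.util.List;

import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.messages.SystemMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.OllamaChatModel;
import reactor.core.publisher.Flux;

public class CallStyles {

    static void demo(OllamaChatModel chatModel) {
        List<Message> messages = List.of(
                new SystemMessage("You are a helpful assistant."),
                new UserMessage("1 + 1 = ?"));

        // Synchronous: blocks until one complete ChatResponse is returned
        ChatResponse response = chatModel.call(new Prompt(messages));

        // Streaming: partial responses arrive incrementally as a reactive Flux
        Flux<ChatResponse> stream = chatModel.stream(new Prompt(messages));
    }
}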

Implementation Analysis

The testing approach utilizes JUnit 5 with Spring AI’s chat model abstractions.

Key implementation patterns include:
  • Direct OllamaApi configuration against a local endpoint
  • Custom OllamaOptions setup pinning the model to LLAMA3 (both condensed in the sketch after this list)
  • Reactive programming with Reactor’s Flux for streaming responses
  • Message chain construction using SystemMessage and UserMessage
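
Condensed from the test class below, the client and model wiring looks like this (imports omitted here; they appear in the full file):

// Low-level client pointed at a local Ollama server, then a ChatModel
// whose default options pin the model to LLAMA3
OllamaApi ollamaApi = new OllamaApi("http://127.0.0.1:11434");
OllamaChatModel chatModel = new OllamaChatModel(ollamaApi,
        OllamaOptions.create().withModel(OllamaModel.LLAMA3.getModelName()));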

Technical Details

Testing infrastructure includes:

  • JUnit Jupiter test framework
  • Spring AI chat model interfaces
  • Ollama API integration at http://127.0.0.1:11434
  • Reactor Core for reactive streams (an assertion-oriented alternative is sketched after this list)
  • Custom OllamaOptions configuration
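
The tests print streamed chunks and block until the stream completes. If assertions were wanted instead, reactor-test's StepVerifier could drive the stream; the following is a sketch under the assumption that the reactor-test dependency is available (it is not used in the original test):

import reactor.test.StepVerifier;

Flux<ChatResponse> flux = chatModel.stream(new Prompt(messages));

// Consume every chunk, asserting each carries a result, and fail
// if the stream terminates with an error instead of completing
StepVerifier.create(flux)
        .thenConsumeWhile(response -> response.getResult() != null)
        .verifyComplete();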

Best Practices Demonstrated

The test suite exemplifies several testing best practices:

  • Clear test method naming conventions
  • Use of the @Disabled annotation so the tests run only when executed manually, since they require a local Ollama server (an alternative gating approach is sketched after this list)
  • Separation of concerns between synchronous and streaming tests
  • Explicit parameter preparation and response handling
  • Structured prompt construction with system and user messages
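
As an alternative to @Disabled, JUnit 5's conditional-execution annotations can gate such tests on the environment. A sketch, where the OLLAMA_TESTS variable name is a hypothetical choice rather than anything from the original code:

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledIfEnvironmentVariable;

// Runs only when the environment opts in (e.g. OLLAMA_TESTS=true),
// instead of being skipped unconditionally via @Disabled
@Test
@EnabledIfEnvironmentVariable(named = "OLLAMA_TESTS", matches = "true")
void testCallWhenOllamaAvailable() {
    // ... same body as testCall() below ...
}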

yunaiv/ruoyi-vue-pro

yudao-module-ai/yudao-spring-boot-starter-ai/src/test/java/cn/iocoder/yudao/framework/ai/chat/LlamaChatModelTests.java

package cn.iocoder.yudao.framework.ai.chat;

import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.messages.SystemMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.OllamaChatModel;
import org.springframework.ai.ollama.api.OllamaApi;
import org.springframework.ai.ollama.api.OllamaModel;
import org.springframework.ai.ollama.api.OllamaOptions;
import reactor.core.publisher.Flux;

import java.util.ArrayList;
import java.util.List;

/**
 * Integration tests for {@link OllamaChatModel}
 *
 * @author 芋道源码
 */
public class LlamaChatModelTests {

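    // Points at a locally running Ollama server on its default port (11434);
    // the ChatModel's default options pin the model to LLAMA3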
    private final OllamaApi ollamaApi = new OllamaApi(
            "http://127.0.0.1:11434");
    private final OllamaChatModel chatModel = new OllamaChatModel(ollamaApi,
            OllamaOptions.create().withModel(OllamaModel.LLAMA3.getModelName()));

    @Test
    @Disabled
    public void testCall() {
        // Prepare the messages: a Chinese system prompt (roughly, "You are a skilled
        // classical-Chinese writer, describing each city's culture and scenery in
        // classical Chinese") followed by a simple user question
        List<Message> messages = new ArrayList<>();
        messages.add(new SystemMessage("你是一个优质的文言文作者,用文言文描述着各城市的人文风景。"));
        messages.add(new UserMessage("1 + 1 = ?"));

        // Invoke the model synchronously
        ChatResponse response = chatModel.call(new Prompt(messages));
        // Print the full response and the assistant's output
        System.out.println(response);
        System.out.println(response.getResult().getOutput());
    }

    @Test
    @Disabled
    public void testStream() {
        // Prepare the messages (same Chinese system prompt and user question as above)
        List<Message> messages = new ArrayList<>();
        messages.add(new SystemMessage("你是一个优质的文言文作者,用文言文描述着各城市的人文风景。"));
        messages.add(new UserMessage("1 + 1 = ?"));

        // Invoke the model as a reactive stream
        Flux<ChatResponse> flux = chatModel.stream(new Prompt(messages));
        // Print each streamed chunk; block() waits until the stream completes
        flux.doOnNext(response -> {
//            System.out.println(response);
            System.out.println(response.getResult().getOutput());
        }).then().block();
    }

}
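
To run these tests locally, the typical steps would be: start an Ollama server (ollama serve), pull the model (ollama pull llama3), and then remove or comment out the @Disabled annotations before executing the class from an IDE or build tool.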