
[Bug] LLM test from the prompt page does not return complete results #2142

Open
@adogwangwang

Description

Search before asking

  • I had searched in the issues and found no similar issues.

Operating system information

Linux

Python version information

3.11

DB-GPT version

main

Related scenes

  • Chat Data
  • Chat Excel
  • Chat DB
  • Chat Knowledge
  • Model Management
  • Dashboard
  • Plugins

Installation Information

Device information

GPU V100

Models information

LLM: qwen2.5-72b

What happened

  1. When I use a prompt, enter my input, and click "LLM test", the backend produces a complete answer, but the prompt page only shows the first sentence. What could be the cause? See the screenshots below: the text highlighted in black is part of my backend output, which is far longer than what the LLM OUT panel displays.
    (screenshot: full backend log output)
    (screenshot: truncated LLM OUT panel)

  2. What does the "output verification" button in the lower-right corner of the prompt page do? After clicking it, a red message appears in LLM OUT: "No available prompt template found for the current scene, chat_with_db_qa".

What you expected to happen

An explanation of these two questions.

How to reproduce

Connect to a database and run the prompt test as described above.

Additional context

No response

Are you willing to submit PR?

  • Yes I am willing to submit a PR!
