The "spell" is written into the system parameter so that the hypnotized character settings are never forgotten during chat; spells can be added, deleted, queried, and modified. The bot offers many functions, but a Telegram bot's interaction capabilities are limited: unlike desktop software or a web page, it cannot display large amounts of information at once or lay out an operation UI for many features. The bot is therefore built on a finite state machine underneath, which keeps the front-end UI simple and also suits mobile use (a minimal sketch of the idea is given below).
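As a rough illustration of the finite-state-machine design (the state names, helpers, and dispatch layout here are hypothetical, not taken from the project), each chat is kept in exactly one state and every incoming message is handled according to that state:

# A minimal, library-agnostic sketch of the finite-state-machine idea.
# State names and helper functions are illustrative, not the project's actual code.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()            # normal chat
    EDITING_SPELL = auto()   # waiting for the user to send a new spell
    CHOOSING_SPELL = auto()  # waiting for the user to pick a spell from a list

user_state: dict[int, State] = {}  # chat_id -> current state

def save_spell(chat_id: int, text: str) -> None: ...      # hypothetical persistence helper
def activate_spell(chat_id: int, name: str) -> None: ...  # hypothetical helper
def chat_with_model(chat_id: int, text: str) -> str: return "..."  # hypothetical ChatGPT call

def handle_update(chat_id: int, text: str) -> str:
    """Dispatch an incoming message according to the chat's current state."""
    state = user_state.get(chat_id, State.IDLE)
    if state is State.EDITING_SPELL:
        save_spell(chat_id, text)
        user_state[chat_id] = State.IDLE
        return "Spell saved."
    if state is State.CHOOSING_SPELL:
        activate_spell(chat_id, text)
        user_state[chat_id] = State.IDLE
        return f"Now role-playing with spell '{text}'."
    return chat_with_model(chat_id, text)

Because only the commands valid in the current state ever need to be shown, short reply menus are enough and no elaborate UI layout is required.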
The bot's operation menu and some of its control screens are shown below.
CREATE TABLE IF NOT EXISTS user_info (
    id INT NOT NULL AUTO_INCREMENT,
    user_id VARCHAR(190) NOT NULL,
    user_key VARCHAR(190) NOT NULL,
    user_img_key VARCHAR(190) NOT NULL,
    prompts TEXT,
    PRIMARY KEY (id),
    UNIQUE KEY (user_id)
)
The prompts field stores the spell text in JSON format.
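As an illustration of what such a value might look like (the exact schema is an assumption, not taken from the project), the column could hold a mapping from spell names to spell text, serialized with the standard json module:

import json

# Hypothetical layout of the prompts column: spell name -> spell text.
spells = {
    "catgirl": "You are a playful catgirl; end every sentence with 'nya'.",
    "translator": "You are a professional Chinese-English translator.",
}

prompts_column = json.dumps(spells, ensure_ascii=False)  # value written to user_info.prompts
restored = json.loads(prompts_column)                    # value read back before a chat starts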
A minimal call to the ChatGPT API looks like this:

from openai import OpenAI

client = OpenAI(api_key='XXX')  # Fill in your API key.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
print(response.choices[0].message.content)
The messages parameter has to be maintained by the developer: at any moment, the model's memory covers only what is inside messages. The system field can be used to set the model's behavior, for example to give the model a personality or specific instructions. This bot passes the user's spell directly as the system parameter, and when assembling the multi-turn messages it always appends the suffix ",扮演指定角色回答。" ("answer in the specified role") to the user's last reply, so the model never forgets the character setting (a sketch of this assembly is given below).
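A minimal sketch of that assembly, assuming the spell text and chat history are already loaded (variable and function names here are illustrative, not the project's):

# Compose the messages list: the spell is the system prompt and the role-reminder
# suffix is appended to the user's latest message.
ROLE_SUFFIX = ",扮演指定角色回答。"  # "answer in the specified role"

def build_messages(spell: str, history: list[dict], latest_user_text: str) -> list[dict]:
    messages = [{"role": "system", "content": spell}]
    messages.extend(history)  # earlier {"role": ..., "content": ...} turns kept by the bot
    messages.append({"role": "user", "content": latest_user_text + ROLE_SUFFIX})
    return messages

# response = client.chat.completions.create(
#     model="gpt-3.5-turbo", messages=build_messages(spell, history, text))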
The stream parameter asks the model to stream back its reply, which lets the bot show the answer incrementally by editing its reply message several times; see the open-source code for details (a simplified sketch follows).
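A simplified sketch of that streaming loop; edit_telegram_message() stands in for whichever Telegram API wrapper the bot uses, and the edit threshold is an assumption:

# Accumulate streamed deltas and periodically edit the bot's reply message.
def stream_reply(client, messages, chat_id, message_id, edit_telegram_message, every=40):
    stream = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages, stream=True
    )
    text, last_shown = "", 0
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            text += chunk.choices[0].delta.content
            if len(text) - last_shown >= every:  # throttle edits to respect Telegram rate limits
                edit_telegram_message(chat_id, message_id, text)
                last_shown = len(text)
    edit_telegram_message(chat_id, message_id, text)  # final, complete reply
    return text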
The response_format={ "type": "json_object" } parameter asks the model to return its answer as JSON. This bot does not use that feature; see the official documentation for details.

This project uses stability.ai's stable-diffusion-xl-1024-v1-0 model to generate images; a minimal example follows.
import os
import io
import warnings
from PIL import Image
from stability_sdk import client
import stability_sdk.interfaces.gooseai.generation.generation_pb2 as generation

# Our Host URL should not be prepended with "https" nor should it have a trailing slash.
os.environ['STABILITY_HOST'] = 'grpc.stability.ai:443'

# Sign up for an account at the following link to get an API Key.
# Click on the following link once you have created an account to be taken to your API Key.
# Paste your API Key below.

# Set up our connection to the API.
stability_api = client.StabilityInference(
    key='XXX',  # Fill in your API key.
    verbose=True,  # Print debug messages.
    engine="stable-diffusion-xl-1024-v1-0",  # Set the engine to use for generation.
    # Check out the following link for a list of available engines:
)

# Set up our initial generation parameters.
answers = stability_api.generate(
    prompt="expansive landscape rolling greens with gargantuan yggdrasil, intricate world-spanning roots towering under a blue alien sky, masterful, ghibli",
    seed=4253978046,  # If a seed is provided, the resulting generated image will be deterministic.
    # What this means is that as long as all generation parameters remain the same, you can always recall the same image simply by generating it again.
    # Note: This isn't quite the case for Clip Guided generations, which we'll tackle in a future example notebook.
    steps=50,  # Amount of inference steps performed on image generation. Defaults to 30.
    cfg_scale=8.0,  # Influences how strongly your generation is guided to match your prompt.
    # Setting this value higher increases the strength in which it tries to match your prompt.
    # Defaults to 7.0 if not specified.
    width=1024,  # Generation width, defaults to 512 if not included.
    height=1024,  # Generation height, defaults to 512 if not included.
    samples=1,  # Number of images to generate, defaults to 1 if not included.
    sampler=generation.SAMPLER_K_DPMPP_2M  # Choose which sampler we want to denoise our generation with.
    # Defaults to k_dpmpp_2m if not specified. Clip Guidance only supports ancestral samplers.
    # (Available Samplers: ddim, plms, k_euler, k_euler_ancestral, k_heun, k_dpm_2, k_dpm_2_ancestral, k_dpmpp_2s_ancestral, k_lms, k_dpmpp_2m, k_dpmpp_sde)
)

# Set up our warning to print to the console if the adult content classifier is tripped.
# If adult content classifier is not tripped, save generated images.
for resp in answers:
    for artifact in resp.artifacts:
        if artifact.finish_reason == generation.FILTER:
            warnings.warn(
                "Your request activated the API's safety filters and could not be processed."
                "Please modify the prompt and try again.")
        if artifact.type == generation.ARTIFACT_IMAGE:
            img = Image.open(io.BytesIO(artifact.binary))
            img.save(str(artifact.seed) + ".png")  # Save our generated images with their seed number as the filename.
A few points are worth noting here. In particular, the bot does not feed the user's description to Stable Diffusion as-is: by default it first asks ChatGPT to rewrite the description into an English image prompt, using the few-shot examples below as a prefix.
IMGPROMPT = "A prompt example for 一个童话般的宁静小镇,鸟瞰视角,动漫风格 is “a painting of a fairy tale town, serene landscape, a bird's eye view, anime style, Highly detailed, Vivid Colors.” "
IMGPROMPT += "Another prompt example for 双马尾动漫少女,蓝黑色头发,颜色鲜艳 is “a painting of 1girl, blue | black hair, low twintails, anime style, with bright colors, Highly detailed.” "
IMGPROMPT += "Another prompt example for 拟人化的兔子肖像,油画,史诗电影风格 is “a oil portrait of the bunny, Octane rendering, anthropomorphic creature, reddit moderator, epic, cinematic, elegant, highly detailed, featured on artstation.” "
IMGPROMPT += "Another prompt example for 黄昏下,大雨中,两个持刀的海盗在海盗船上决斗 is “Two knife-wielding pirates dueling on a pirate ship, dusk, heavy rain, unreal engine, 8k, high-definition, by Alphonse Mucha and Wayne Barlowe.” "
IMGPROMPT += "Now write a prompts for "
Of course, the bot also provides a command that uses the user's input directly as the image-generation prompt, so users familiar with AI image generation can supply high-quality image prompt sequences themselves.

For voice, the bot relies on OpenAI's tts-1 (text-to-speech) and whisper-1 (speech-to-text) models; a minimal example follows (the audio filenames are placeholders).

from pathlib import Path
from openai import OpenAI
client = OpenAI(api_key='XXX')  # Fill in your API key.

# text2voice
speech_file_path = Path(__file__).parent / "speech.opus"  # Placeholder output filename.
response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello, World! 你好世界!",
    response_format='opus'
)
response.stream_to_file(speech_file_path)

# voice2text
file_path = Path(__file__).parent / "speech.opus"  # Placeholder input filename.
audio_file = open(file_path, "rb")
transcript = client.audio.transcriptions.create(
    model="whisper-1", file=audio_file, response_format="text"
)
print(transcript)