Qwen (Tongyi Qianwen)-7B Pretrained Model (ModelScope model hub)
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig
import datetime

print("Start time: " + str(datetime.datetime.now()))
tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-7B-Chat", revision='v1.0.5', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", revision='v1.0.5', device_map="auto", offload_folder="offload_folder", trust_remote_code=True, fp16=True).eval()
model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", revision='v1.0.5', trust_remote_code=True)  # generation length, top_p, and other hyperparameters can be set here
model.float()
print("Execution starts: " + str(datetime.datetime.now()))
response, history = model.chat(tokenizer, "你好", history=None)  # "Hello"
print(response)
print("First question done: " + str(datetime.datetime.now()))
response, history = model.chat(tokenizer, "浙江的省会在哪里?", history=history)  # "What is the capital of Zhejiang?"
print(response)
print("Second question done: " + str(datetime.datetime.now()))
response, history = model.chat(tokenizer, "它有什么好玩的景点", history=history)  # "What fun attractions does it have?"
print(response)
print("Third question done: " + str(datetime.datetime.now()))
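Each call above passes the returned `history` back in, so every turn sees the earlier turns. A minimal sketch of that accumulation pattern (`fake_chat` is a hypothetical stand-in for `model.chat`, not the real model; Qwen's actual history format may differ in detail):

```python
# Sketch of how multi-turn chat history threads through repeated calls.
# fake_chat is a hypothetical stand-in for model.chat: it appends the
# (query, response) pair to the history and returns both.
def fake_chat(tokenizer, query, history=None):
    history = list(history or [])      # start fresh when history is None
    response = "echo: " + query        # a real model would generate text here
    history.append((query, response))  # remember this turn for the next call
    return response, history

response, history = fake_chat(None, "你好", history=None)
response, history = fake_chat(None, "浙江的省会在哪里?", history=history)
print(len(history))  # 2 turns accumulated
```

This is why dropping `history=history` from any call makes the model "forget" the earlier questions.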
The run may stop with an error telling you to install transformers_stream_generator.

Solution:

pip install transformers_stream_generator

That fixes it; just re-run.
ValueError: The current device_map had weights offloaded to the disk. Please provide an offload_folder for them. Alternatively, make sure you have safetensors installed if the model you are using offers the weights in this format.
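This error means some weights were spilled to disk and the loader needs a directory for them. The script above already passes `offload_folder`; as a small sketch of the fix, the load arguments can be assembled like this (`build_load_kwargs` and the folder name are illustrative helpers, not part of any API):

```python
import os

def build_load_kwargs(offload_dir="offload_folder"):
    """Illustrative helper: ensure the offload directory exists and
    return the keyword arguments that satisfy the ValueError above."""
    os.makedirs(offload_dir, exist_ok=True)  # disk-offloaded weights are written here
    return dict(
        device_map="auto",           # split layers across available devices
        offload_folder=offload_dir,  # directory the error message asks for
        trust_remote_code=True,
        fp16=True,
    )

# Usage (downloads the model, so not run here):
# model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-7B-Chat", **build_load_kwargs())
```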
Following this guy's approach:
git clone .git
pip install -
python Qwen_demo.py
Pull the code:
git clone .git
apt-get update
apt-get install git-lfs
git clone .git
Initialize:
git init
git lfs install
For convenience, I moved the model into the folder from the beginning.
pip install -
You can also install the web demo dependencies:
pip install -r requirements_
Then press Ctrl+S to save.
python web_demo.py
pip install bitsandbytes
Add the imports:
from transformers import BitsAndBytesConfig
import torch
Then add:
quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_compute_dtype=torch.bfloat16)
This article was published on 2024-02-02 00:02:30. Thanks for your support!
Link: https://www.4u4v.net/it/170680920840031.html