It's an AI originally built for writing novels, so it ought to be strong at storytelling. And since it's made in Japan on top of that, the great sensei was quick to give it a try. I followed suit and tried it out myself, but used as-is it only produces fairly short passages; it seems you have to put some thought into how you use it. So I wrote the following code:

```python
def b(prompt):
    # Tokenize the prompt and move it onto the GPU
    input_ids = tokenizer.encode(
        prompt,
        add_special_tokens=False,
        return_tensors="pt",
    ).cuda()
    # Sample a continuation from the model
    tokens = model.generate(
        input_ids.to(device=model.device),
        max_new_tokens=320,
        temperature=0.6,
        top_p=0.9,
        repetition_penalty=1.2,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
    )
    # Decode the generated ids (prompt + continuation) back into text
    return tokenizer.decode(tokens[0], skip_special_tokens=True)
```
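This assumes `model` and `tokenizer` have already been loaded as a Hugging Face causal language model. A minimal sketch of that setup might look like the following; the repository name is only a placeholder, not necessarily the model used here.

```python
# Minimal setup sketch (assumption): load a causal LM with Hugging Face
# transformers. The repository name below is a placeholder, not necessarily
# the model this post is about.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "rinna/japanese-gpt-neox-3.6b"  # placeholder model name
# rinna tokenizers are typically loaded as slow (SentencePiece) tokenizers
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16
).cuda()
model.eval()

# Feed the opening of a story and print the continuation
print(b("昔々、あるところに"))  # "Once upon a time, in a certain place..."
```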