LINE has released a large language model (LLM) with 3.6 billion (3.6B) parameters, so I tried it out right away. To be precise, I actually tried it on yesterday's Daily AI News, but it was interesting, so I'm reposting it here. For the detailed steps, refer to the original page. As usual, I wrote a function like this:

```python
def line(prompt):
    # Run inference on the given prompt
    input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    tokens = model.generate(
        input_ids.to(device=model.device),
        min_length=50,
        max_length=300,
        temperature=1.0,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
    )
    # The original snippet was cut off above; decoding and returning the text is an assumed completion
    output = tokenizer.decode(tokens[0], skip_special_tokens=True)
    return output
```
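The function assumes that `tokenizer` and `model` have already been loaded. A minimal setup sketch using the standard `transformers` API, assuming the Hugging Face model id `line-corporation/japanese-large-lm-3.6b` (not shown in the snippet above), would look like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id for LINE's 3.6B model; check the official announcement for the exact name
model_name = "line-corporation/japanese-large-lm-3.6b"

# The model card recommends the slow (SentencePiece) tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to fit the 3.6B weights on a single GPU
    device_map="auto",          # requires the accelerate package
)
```

With this in place, calling `line()` with a Japanese prompt should return a sampled continuation, capped at 300 tokens including the prompt.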