How does the Tibetan pretrained model Tibetan-BERT-wwm tokenize the sentence 「à½à½¼à½à¼à½à½à½à¼à½à½à¼à½à½à½²à½à½¦à¼à½¦à¾à¾±à½¼à½à¼à½à¾±à½ºà½à¼à½¡à½´à½£à¼à½à¾±à½²à¼à½à¾²à½²à½à½¦à¼à½à½à½´à½à¼à½à½ºà¼à½à½à½à¼à½£à¼à½¦à¾²à½à¼à½¦à¾à¾±à½¼à½à¼à½à¾±à½ºà½à¼à½à¼」? Glancing at Yatao Liang, Hui Lv, Yan Li, La Duo, Chuanyi Liu, and Qingguo Zhou, "Tibetan-BERT-wwm: A Tibetan Pretrained Model With Whole Word Masking for Text Classification", I tried running the same sentence through the Tibetan-BERT-wwm tokenizer on Google Colaboratory.
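Since Tibetan-BERT-wwm is a BERT-family model, its tokenizer presumably segments each word greedily, longest-match-first, into WordPiece units before any whole-word masking is applied. Below is a minimal self-contained sketch of that longest-match segmentation; the toy vocabulary and the example words (e.g. བོད་ཡིག, "Tibetan writing") are hypothetical illustrations, not the model's real vocabulary.

```python
def wordpiece(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece segmentation of one word.

    Non-initial pieces carry the "##" continuation prefix, as in BERT.
    If no prefix of the remainder is in the vocabulary, the whole word
    becomes the unknown token.
    """
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                cur = sub
                break
            end -= 1  # shrink the candidate and retry
        if cur is None:
            return [unk]  # no piece matched: emit [UNK] for the word
        pieces.append(cur)
        start = end
    return pieces

# Hypothetical toy vocabulary (NOT Tibetan-BERT-wwm's real one).
vocab = {"བོད", "##་", "ཡིག", "བོད་ཡིག"}

print(wordpiece("བོད་ཡིག", vocab))  # → ['བོད་ཡིག']  (whole word is in the vocab)
print(wordpiece("བོད་", vocab))     # → ['བོད', '##་']  (falls back to pieces)
```

Whether a syllable survives as one token or splinters into pieces like this is exactly what the Colaboratory experiment above probes.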