Shisa.AI Develops Multilingual LLM with Industry-Leading Performance
Releasing an open-source 405B parameter model that surpasses GPT-4 in Japanese language tasks.
Shisa.AI is pleased to announce the official release of "Llama 3.1 Shisa V2 405B," a multilingual LLM that sets a new standard for domestically produced models in Japan (press release). The model demonstrates Japanese language performance that surpasses GPT-4, establishing Japanese AI research as highly competitive on the global stage.
Leveraging our extensive expertise in LLM development, we have created new, high-quality Japanese training datasets optimized through rigorous experimentation. These datasets, along with the models themselves, are now available for download on Hugging Face under open-source licenses like Apache 2.0, permitting commercial use.
This release underscores Japan’s world-class capability in developing advanced language models. Shisa.AI remains committed to advancing LLM performance and efficiency, contributing to a robust, Japanese-centered multilingual AI infrastructure.
More from the newsroom
Shisa 7B released
A bilingual general-purpose chat model using a synthetic-data driven approach.
Read more
Shisa-Gamma-7b-v1 Surpasses 1 Million Downloads
One year after its role in pioneering evolutionary model merging, our model reaches a significant milestone.
Read more