What is the size of DeepSeek-V2's pre-training corpus?
Question
Answers ( 1 )
DeepSeek-V2 is pre-trained on a corpus of 8.1 trillion tokens. This large-scale corpus includes a higher proportion of Chinese data than earlier DeepSeek models, which enhances the model's performance on Chinese-language tasks.