What is the training process of LLaVA-NeXT?
Question
Answers (1)
LLaVA-NeXT's training involves two stages:
- Stage 1: Trains the connector using 558,000 data samples.
- Stage 2: Trains the full model using 760,000 data samples.
The training is efficient: roughly one day on 32 GPUs, with a total of about 1.318 million training samples (558K + 760K). A rough sketch of the two-stage setup is shown below.
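To make the two stages concrete, here is a minimal PyTorch-style sketch of how the trainable parameters might be switched between stages. The class and attribute names (`vision_tower`, `connector`, `llm`, `configure_stage`) are illustrative assumptions, not the actual LLaVA-NeXT repository API; the sketch only reflects the idea from the answer above that stage 1 updates the connector alone while stage 2 updates the full model.

```python
import torch
import torch.nn as nn


# Hypothetical stand-ins for the real components; names are assumptions.
class LLaVANeXTLike(nn.Module):
    def __init__(self, vision_tower: nn.Module, connector: nn.Module, llm: nn.Module):
        super().__init__()
        self.vision_tower = vision_tower  # image encoder
        self.connector = connector        # projector from image features to LLM token space
        self.llm = llm                     # language model backbone


def set_trainable(module: nn.Module, trainable: bool) -> None:
    for p in module.parameters():
        p.requires_grad = trainable


def configure_stage(model: LLaVANeXTLike, stage: int) -> list:
    """Stage 1: train only the connector. Stage 2: train the full model."""
    train_all = (stage == 2)
    set_trainable(model.vision_tower, train_all)
    set_trainable(model.llm, train_all)
    set_trainable(model.connector, True)   # connector is trained in both stages
    return [p for p in model.parameters() if p.requires_grad]


# Usage sketch (learning rates and optimizer choice are placeholders):
# stage1_params = configure_stage(model, stage=1)   # ~558K samples
# optimizer = torch.optim.AdamW(stage1_params, lr=1e-3)
# stage2_params = configure_stage(model, stage=2)   # ~760K samples
# optimizer = torch.optim.AdamW(stage2_params, lr=2e-5)
```

The point of the split is that the cheap stage aligns the connector to the frozen backbones first, so the expensive full-model stage starts from already-aligned visual features.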