Description
Hi, thank you for the great work — it has been very helpful.
I have a question regarding model versions. I noticed that there are no reported experiments or evaluations on LLaMA 3 in the repository. Could you share whether this was a deliberate choice, and if so, what the main considerations were?
I personally tried loading the LLaMA3-8B weights and adapting the code with only minor modifications. In my experiments, however, performance did not improve and even degraded slightly compared to the baseline. At this point I am not sure whether this is caused by issues in my implementation, or by some incompatibility between LLaMA 3 and the llama-adapter approach.
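For context, the mechanism I kept is the zero-init gated attention over learnable adaption prompts. Below is a minimal, self-contained sketch of that mechanism as I understand it; the module name, the shared key/value tensor, and the omission of the layer's own key/value projections are my simplifications for brevity, not the repository's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdapterAttention(nn.Module):
    """Zero-init attention over learnable adaption prompts; its output is
    added to the frozen layer's attention output (the LLaMA-Adapter idea)."""

    def __init__(self, dim: int, n_heads: int, prompt_len: int = 10):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Zero-initialized gate: at the start of training the adapter
        # contributes nothing, so training begins from the unmodified model.
        self.gate = nn.Parameter(torch.zeros(1, n_heads, 1, 1))

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # q: (batch, seq, dim), queries taken from the frozen attention layer.
        b, s, d = q.shape
        p = self.prompt.unsqueeze(0).expand(b, -1, -1)           # (b, P, d)
        q = q.reshape(b, s, self.n_heads, self.head_dim).transpose(1, 2)
        k = p.reshape(b, -1, self.n_heads, self.head_dim).transpose(1, 2)
        v = k  # simplification: share keys and values, skip projections
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (b, h, s, P)
        out = (torch.tanh(self.gate) * F.softmax(scores, dim=-1)) @ v
        return out.transpose(1, 2).reshape(b, s, d)

# Shapes matching LLaMA3-8B's query side (4096 hidden dim, 32 query heads):
adapter = AdapterAttention(dim=4096, n_heads=32)
delta = adapter(torch.randn(2, 16, 4096))  # added onto the frozen output
```

Since the gate starts at zero, training begins from the unmodified pretrained model, so I would expect the baseline to be matched at worst; the degradation I observe is what puzzles me.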
May I ask if the authors have experimented with LLaMA 3, or if you have any insights or experience regarding its compatibility or expected performance with llama-adapter?
Thank you very much for your time.