Learn and Burn
Running an LLM on a small customizable chip
Unbox Research
Oct 30
[Paper: LlamaF: An Efficient Llama2 Architecture Accelerator on Embedded FPGAs]
Read →