llama.go is like llama.cpp, but in pure Golang. The project is based on the legendary ggml.cpp framework by Georgi Gerganov, written in C++, and follows the same attitude toward performance and elegance. Both models store FP32 weights, so you'll need at least 32 GB of RAM (not VRAM or GPU RAM) for LLaMA-7B, and double that, 64 GB, for LLaMA-13B.
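A quick back-of-the-envelope check of those numbers (a minimal sketch; the parameter counts below are rough, not exact model sizes): each FP32 weight takes 4 bytes, so the weights alone for roughly 7 billion parameters come to about 26 GB, before activations and runtime overhead.

```go
package main

import "fmt"

func main() {
	// Rough RAM estimate for dense FP32 weights: parameter count × 4 bytes.
	// Parameter counts are approximate.
	models := []struct {
		name   string
		params float64
	}{
		{"LLaMA-7B", 7e9},
		{"LLaMA-13B", 13e9},
	}
	for _, m := range models {
		gb := m.params * 4 / (1 << 30) // bytes → GiB
		fmt.Printf("%-10s ~%.0f GB for weights alone\n", m.name, gb)
	}
}
```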
Features
- Tensor math in pure Golang
- Implement LLaMA neural net architecture and model loading
- Test with the smaller LLaMA-7B model
- Be sure Go inference works exactly the same way as C++
- Let Go shine! Enable multi-threading and messaging to boost performance (see the sketch after this list)
- Cross-platform compatibility with Mac, Linux and Windows
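As a rough illustration of the multi-threading point above (a minimal sketch, not the project's actual implementation), a row-major matrix-vector product can be split across goroutines by row ranges and joined with a sync.WaitGroup:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// matVec computes out = m * v for a rows×cols FP32 matrix stored row-major,
// splitting the row range across worker goroutines.
func matVec(m, v []float32, rows, cols int) []float32 {
	out := make([]float32, rows)
	workers := runtime.NumCPU()
	chunk := (rows + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start := w * chunk
		end := start + chunk
		if end > rows {
			end = rows
		}
		if start >= end {
			break
		}
		wg.Add(1)
		go func(start, end int) {
			defer wg.Done()
			// Each goroutine owns a disjoint slice of output rows,
			// so no locking is needed on out.
			for r := start; r < end; r++ {
				var sum float32
				row := m[r*cols : (r+1)*cols]
				for c, x := range row {
					sum += x * v[c]
				}
				out[r] = sum
			}
		}(start, end)
	}
	wg.Wait()
	return out
}

func main() {
	// Tiny 2x3 example: [[1 2 3], [4 5 6]] * [1 1 1] = [6 15]
	m := []float32{1, 2, 3, 4, 5, 6}
	v := []float32{1, 1, 1}
	fmt.Println(matVec(m, v, 2, 3))
}
```

The fan-out/fan-in pattern shown here (goroutines over row chunks, joined with a WaitGroup) is a common way to put Go's concurrency to work on tensor math; a real kernel would add blocking and cache-friendly layouts on top.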
License
MIT License