TabbyML/tabby is a self-hosted AI coding assistant written in Rust. It is an active project with 1,080 commits, 36 branches, and 79 open issues, and it has gained substantial interest: 13,395 stars, 79 watchers, and 482 forks. The project is well documented, with a detailed README, and it is regularly updated.
Issues primarily concern bugs, feature requests, and compatibility problems. Notable among the older unresolved issues are requests for platform and language support, such as a Vulkan backend (#124), Sublime Text editor support (#219), and C# (#239), along with requests for performance and functionality improvements.
There are 19 open pull requests. Common threads include active discussions, frequent commits, and changes spanning many files and lines of code. Concerns include potential breaking changes (#902), the need for standardized code formatting, security implications (#905), and stale pull requests (#314, #621, #661). The major uncertainties involve hardware compatibility (#902, #895) and the trade-off between security and user experience (#905). A worrying anomaly is the large number of file changes in PR #902, along with a potential privacy issue.
The recently opened issues primarily revolve around bugs, feature requests, and compatibility problems. Notably, several issues concern compatibility with particular systems and platforms, such as Windows (#939), PyCharm (#917), and the Apple M3 Max (#943). There are also feature requests for the ability to configure model parameters (#915), a Windows build of Tabby (#909), and model selection from client plugins (#900). Issue #889, which reports that the Tabby server fails to scale with increasing connections, is particularly concerning because it directly impacts the software's performance and usability.
The older open issues, which remain unresolved, cover a wide range of topics. These include requests for support for different platforms and languages, such as a Vulkan backend (#124), Sublime Text editor support (#219), and C# (#239). There are also issues related to performance and functionality, such as fine-tuning the model for a specific project's codebase (#354) and improving TabbyML's performance on personal projects (#463). Recently closed issues primarily consist of bug fixes and feature implementations, including the addition of generation stats to the TextGeneration trait (#855) and support for exllama/self-hosted inference engines (#854). A common theme among the open and recently closed issues is the need for improved compatibility and functionality across platforms and programming languages.
There are 19 open pull requests in total. The most recent ones are #941, #916, #905, #902, #895, #887, and #886. The oldest open pull requests are #314, #621, and #661.
One open pull request proposes replacing the cuda_devices field with a more general gpu_devices field. The maintainers are concerned about the impact on downstream components that rely on the existing cuda_devices field.
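A rename like cuda_devices to gpu_devices is commonly smoothed over by keeping a deprecated alias for the old name during a transition period. The sketch below is a hypothetical Python illustration of that pattern; the DeviceConfig class is invented for illustration, and only the cuda_devices/gpu_devices names come from the pull request discussed above:

```python
import warnings


class DeviceConfig:
    """Hypothetical config object illustrating a backward-compatible rename.

    New code reads `gpu_devices`; existing callers that still read the
    old `cuda_devices` name keep working but receive a deprecation warning.
    """

    def __init__(self, gpu_devices=None):
        self.gpu_devices = list(gpu_devices or [])

    @property
    def cuda_devices(self):
        # Deprecated alias preserved so downstream components don't break.
        warnings.warn(
            "cuda_devices is deprecated; use gpu_devices instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.gpu_devices


config = DeviceConfig(gpu_devices=[0, 1])
print(config.gpu_devices)   # new field
print(config.cuda_devices)  # old field still readable, with a warning
```

This keeps the breaking change opt-in: downstream code continues to run while the warning signals that it should migrate to the new field name.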
The TabbyML/tabby project is a self-hosted AI coding assistant developed by the organization TabbyML. It is designed to offer an open-source, on-premises alternative to GitHub Copilot. The software is self-contained, eliminating the need for a DBMS or cloud service, and features an OpenAPI interface for easy integration with existing infrastructure. It also supports consumer-grade GPUs. The project is written in Rust and is distributed under a license that GitHub classifies as "Other".
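As a concrete illustration of the OpenAPI-style integration, the sketch below builds a code-completion request in Python using only the standard library. The endpoint path (/v1/completions), default port, and payload shape (a language plus a segments.prefix field) are assumptions about the server's interface for the purposes of this sketch, not details taken from this report:

```python
import json
from urllib import request

# Assumed default address of a locally running Tabby server.
TABBY_URL = "http://localhost:8080/v1/completions"


def build_completion_request(prefix: str, language: str = "python") -> request.Request:
    """Build a POST request asking the server to complete `prefix`.

    The JSON body shape (language + segments.prefix) is an assumption
    about the server's OpenAPI interface.
    """
    payload = json.dumps({
        "language": language,
        "segments": {"prefix": prefix},
    }).encode("utf-8")
    return request.Request(
        TABBY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_completion_request("def fib(n):\n    ")
    # Requires a running Tabby server; the response is expected to be JSON.
    with request.urlopen(req) as resp:
        print(json.load(resp))
```

Because the interface is plain HTTP with JSON bodies, the same request can be issued from any language or tool in an existing infrastructure, which is the integration point the OpenAPI interface is meant to provide.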
The repository is quite active and mature, with 1,080 total commits, 36 branches, and 79 open issues. It has garnered significant interest, as indicated by its 13,395 stars and 79 watchers. The repository size is approximately 20 MB, and it has been forked 482 times. The project's technical architecture is based on Rust, and it uses Docker for containerization. The README file includes detailed instructions for getting started, contributing to the project, and using the software.
The repository has several notable aspects. The README features a detailed "What's New" section that keeps users informed of the latest releases and major updates, as well as a comprehensive "Getting Started" guide with installation instructions and configuration details. The software's ability to run on consumer-grade GPUs is a significant feature, making it accessible to a wider range of users, and its use of Docker simplifies setup and enhances portability. The project is in active development, with regular updates and improvements.