TabbyML raises $3.2 million as an open-source GitHub Copilot competitor

The race to create AI assistants that help programmers is on. TabbyML, founded by two ex-Googlers, received $3.2 million in seed funding for its open-source code generator.

TabbyML, a self-hosted coding assistant, is more customizable than GitHub’s Copilot, according to startup co-founder Meng Zhang. “We believe in a future where all companies will have some sort of customization demand in software development,” he said.

“There are probably more mature and complete products in the proprietary software space, but GitHub’s OpenAI-powered tool has more limitations,” he said.

Zhang’s co-founder Lucy Gao said open source software suits larger companies. Independent developers may use open source code, but enterprise engineers often use proprietary code that Copilot cannot access.

“If my colleague just wrote a line of code, I can quote it immediately [by using TabbyML],” Gao said.

Code generators, like other AI copilots, can be buggy. Gao said a self-hosted solution makes that challenge “relatively easy to address”: when users ignore TabbyML’s suggestions or edit its auto-filled code, the AI model is fine-tuned on that feedback.
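
In practice, such a feedback loop amounts to logging whether each suggestion was accepted or edited and turning the corrections into new training examples. The sketch below only illustrates that idea under assumed names (the `CompletionFeedback` record and `record_feedback` helper are hypothetical, not TabbyML’s actual schema or pipeline); it appends feedback to a JSONL file that a periodic fine-tuning job could consume.

```python
import json
from dataclasses import dataclass, asdict
from pathlib import Path

# Hypothetical feedback record; field names are illustrative, not TabbyML's schema.
@dataclass
class CompletionFeedback:
    prompt: str       # code context that was sent to the model
    suggestion: str   # completion the model proposed
    final_code: str   # code the developer actually kept
    accepted: bool    # True if the suggestion was kept unchanged

FEEDBACK_FILE = Path("finetune_dataset.jsonl")

def record_feedback(prompt: str, suggestion: str, final_code: str) -> None:
    """Append one example; edited or rejected suggestions become
    (prompt -> final_code) pairs for a later fine-tuning run."""
    feedback = CompletionFeedback(
        prompt=prompt,
        suggestion=suggestion,
        final_code=final_code,
        accepted=(suggestion == final_code),
    )
    with FEEDBACK_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(feedback)) + "\n")

# Example: the developer fixed the model's suggestion before committing it.
record_feedback(
    prompt="def add(a, b):",
    suggestion="    return a - b",
    final_code="    return a + b",
)
```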

Code generators aid programmers rather than replace them, and the results are promising. GitHub reported in June that developers accepted roughly 30% of Copilot’s suggestions. At a developer event, Google announced that 24% of its software engineers had more than five “assistive moments” a day using Cider, its AI-augmented internal code editor. Zhang found the latter figure more revealing.

Zhang said “it’s not that simple” for a company to cut engineers just because it adopts a code generator. “Coding is not a production line,” he said.

TabbyML, launched in April, has 11,000 GitHub stars. Yunqi Partners and ZooCap invested in its latest round.

Zhang said the advantage OpenAI gives Copilot, the Goliath of the space, will fade as other AI models improve and the cost of computing power drops.

Zhang said GitHub and OpenAI’s advantage lies in their ability to deploy AI models with tens of billions of parameters in the cloud. Serving such large models is more expensive, but Copilot has mitigated the cost through request batching.
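
Request batching means grouping many concurrent completion requests into a single model call so that GPU time is amortized across users. The sketch below is a generic illustration of that technique, not Copilot’s implementation; the queue, the batch size, and the `generate_batch` placeholder are assumptions made for the example.

```python
import queue
import threading
import time
from typing import Callable, List, Tuple

MAX_BATCH = 8        # largest number of prompts folded into one model call
WAIT_SECONDS = 0.02  # how long to wait for more requests to arrive

# Each request carries its prompt and a callback that receives the completion.
RequestItem = Tuple[str, Callable[[str], None]]
request_queue: "queue.Queue[RequestItem]" = queue.Queue()

def generate_batch(prompts: List[str]) -> List[str]:
    """Placeholder for one batched model call (e.g. a single GPU forward pass)."""
    return [f"completion for: {p!r}" for p in prompts]

def batching_worker() -> None:
    while True:
        first = request_queue.get()                 # block until work arrives
        batch = [first]
        try:
            while len(batch) < MAX_BATCH:           # opportunistically fill the batch
                batch.append(request_queue.get(timeout=WAIT_SECONDS))
        except queue.Empty:
            pass
        results = generate_batch([prompt for prompt, _ in batch])
        for (_, callback), result in zip(batch, results):
            callback(result)                        # hand each user their completion

threading.Thread(target=batching_worker, daemon=True).start()
request_queue.put(("def fib(n):", print))           # enqueue one request
time.sleep(0.5)                                     # give the worker time to respond
```

The larger the batch, the better the hardware utilization per request, which is why the approach helps with models this expensive to serve.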

The strategy has shown its limits: according to the Wall Street Journal, Microsoft lost more than $20 a month per GitHub Copilot user in the first few months of this year.

To lower the deployment barrier, Tabby recommends models with 1 to 3 billion parameters, which sacrifices some output quality in the short term.

“However, as computing power costs drop and open source models improve, GitHub and OpenAI’s competitive edge will eventually diminish,” Zhang said.
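
For a sense of what the smaller models Tabby recommends look like in practice, the sketch below loads a roughly 1-billion-parameter open code model with Hugging Face Transformers and generates a completion locally. It is a generic self-hosting illustration, not Tabby’s serving stack, and the model name and generation settings are assumptions chosen for the example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative small open code model (~1B parameters); any similarly sized
# model that fits your hardware and licensing requirements would do.
MODEL_NAME = "bigcode/starcoderbase-1b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic; a real server would tune
# sampling, stop tokens, and batching for latency and quality.
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```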
