GitHub today announced the general availability of Copilot Enterprise, the $39/month version of its code completion tool and developer-centric chatbot for large businesses. Copilot Enterprise includes all the features of the existing Business plan, including IP indemnity, but extends it with several key features for larger teams. The highlight here is the ability to reference a company's internal code and knowledge base. Copilot is now also integrated with Microsoft's Bing search engine (currently in beta) and soon, users will also be able to fine-tune Copilot's models based on a team's existing codebase.
With that, new developers on a team can, for instance, ask Copilot how to deploy a container image to the cloud and get an answer that is specific to their organization's processes. For many developers, after all, it's not necessarily understanding the codebase that is a roadblock to being productive when changing companies, but understanding the various processes — though Copilot can obviously help with understanding the code, too.
Many teams already keep their documentation in GitHub repositories today, making it relatively easy for Copilot to reason over it. Indeed, as GitHub CEO Thomas Dohmke told me, since GitHub itself stores virtually all of its internal documents on the service — and recently gave all of its employees access to these new features — some people have begun using it for non-engineering questions, too, asking Copilot about vacation policies, for instance.
Dohmke told me that customers had been asking for features to reference internal information since the earliest days of Copilot. "A lot of the things that developers do inside organizations are different from what they do at home or in open source, in the sense that organizations have a process or a certain library to use — and many of them have internal tools, systems and dependencies that don't exist like that on the outside," he noted.
As for the Bing integration, Dohmke noted that this can be useful for asking Copilot about things that may have changed since the model was originally trained (think open source libraries or APIs). For now, this feature is only available in the Enterprise version, and while Dohmke wouldn't say much about whether it will come to other editions as well, I wouldn't be surprised if GitHub brought this capability to the other tiers at a later point, too.
One feature that will likely remain an enterprise feature — partly due to its associated cost — is fine-tuning, which will launch soon. "We let companies pick a set of repositories in their GitHub organization and then fine-tune the model on those repositories," Dohmke explained. "We're abstracting the complexity of generative AI and fine-tuning away from the customer and letting them leverage their codebase to generate an optimized model for them that is then used across the Copilot scenarios." He did note that this also means the model can't be as up-to-date as when using embeddings, skills and agents (like the new Bing agent). He argues that all of this is complementary, though, and the customers who are already testing this feature are seeing significant improvements. That's especially true for teams that are working with codebases in languages that aren't as widely used as the likes of Python and JavaScript, or with internal libraries that don't really exist outside of an organization.
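The trade-off Dohmke describes — a fine-tuned model frozen at training time versus embeddings that stay current — can be illustrated with a toy sketch. This is not GitHub's implementation; it uses a stand-in bag-of-words "embedding" purely to show why retrieval over an index picks up documentation changes immediately, while a fine-tuned model would need another training run:

```python
# Illustrative sketch (not GitHub's actual pipeline): embedding-style
# retrieval over internal docs. Updating the knowledge base only means
# re-indexing; no model retraining is required.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real systems use
    # learned dense embeddings, but the update story is the same.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical internal docs; edits to them show up on the next query.
docs = {
    "deploy.md": "deploy a container image with our internal deploy tool",
    "vacation.md": "vacation policy: request time off in the HR portal",
}
index = {name: embed(body) for name, body in docs.items()}

def retrieve(query: str) -> str:
    # Return the most relevant document; a real pipeline would feed it
    # to the model as context rather than returning it directly.
    q = embed(query)
    return max(index, key=lambda name: cosine(q, index[name]))

print(retrieve("how do I deploy a container image"))  # deploy.md
```

In a fine-tuning setup, by contrast, the repositories are baked into the model's weights, which is why Dohmke notes that approach can't be as up-to-date.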
On top of talking about today's release, I also asked Dohmke about his high-level thinking on where Copilot goes next. The answer is essentially "more Copilot in more places. I think, in the next year, we're going to see an increasing focus on that end-to-end experience of putting Copilots where you already do the work versus creating a new destination to go and copy and paste stuff there. I think that's where we at GitHub are incredibly excited about the opportunity that we have by putting Copilot on github.com, by having Copilot available in the place where developers are already collaborating, where they're already building the world's software."
Talking about the underlying technology and where it's going, Dohmke noted that the auto-completion feature currently runs on GPT-3.5 Turbo. Due to its latency requirements, GitHub never moved that feature to GPT-4, but Dohmke also noted the team has updated the model "more than half a dozen times" since the launch of Copilot Business.
As of now, it doesn't seem like GitHub will follow the Google model of differentiating its pricing tiers by the size of the models that power those experiences. "Different use cases require different models. Different optimizations — latency, accuracy, quality of the result, responsible AI — for each model version play a big role in making sure that the output is ethical, compliant and secure and doesn't generate lower-quality code than what our customers expect. We will continue going down that path of using the best models for different pieces of the Copilot experience," Dohmke said.