Unlocking the Power of GitLab Duo: Navigating Self-Hosted Models
As an experienced IT professional, you understand the importance of having complete control over your organization’s data privacy, security, and the deployment of large language models (LLMs) within your infrastructure. GitLab Duo Self-Hosted Models let you manage the entire lifecycle of requests made to LLM backends for GitLab Duo features, so every request stays within your enterprise network and you avoid external dependencies.
Embrace the Benefits of Self-Hosted Models
By deploying self-hosted models, you can unlock a world of advantages that cater to the specific needs of your organization:
- Choose Any GitLab-Approved LLM: With self-hosted models, you have the flexibility to select any GitLab-approved LLM that aligns with your infrastructure requirements and performance needs.
- Retain Full Control Over Data: By keeping all request-response logs within your domain, you can ensure complete privacy and security, eliminating the need for external API calls.
- Isolate the Environment: Self-hosting allows you to isolate the GitLab instance, AI Gateway, and models within your own environment, further enhancing the security and control over your data.
- Tailor GitLab Duo Features: Customize the GitLab Duo features to meet the specific needs of your users, ensuring that your team has access to the tools and capabilities they require.
- Eliminate Reliance on Shared Infrastructure: By deploying your own AI Gateway, you can avoid dependency on the shared GitLab AI Gateway, providing your organization with greater flexibility and independence.
This setup ensures enterprise-level privacy and flexibility, seamlessly integrating your LLMs with GitLab Duo features to unlock the full potential of your software development lifecycle.
Preparing for a Self-Hosted Model Infrastructure
Before setting up a self-hosted model infrastructure, there are a few key prerequisites you’ll need to address:
- Supported Model: Ensure that you have access to a supported LLM, either cloud-based or on-premises.
- Supported Serving Platform: Identify a supported serving platform, such as vLLM, AWS Bedrock, or Azure OpenAI, to host and serve your LLMs.
- AI Gateway: Set up a locally hosted AI Gateway (or use the GitLab.com AI Gateway) to route GitLab Duo requests to your models.
- Licensing: Verify that you have a valid GitLab Ultimate + Duo Enterprise license, which is required for self-hosted model deployment.
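Before going further, it is worth confirming that your serving platform is actually reachable. As a sketch, assuming a vLLM server listening on localhost port 8000 and a Mistral model (both assumptions, substitute your own host and model), you could smoke-test the OpenAI-compatible endpoints like this:

```shell
# List the models the server is serving (host and port are assumptions):
curl -s http://localhost:8000/v1/models

# Send a minimal completion request to confirm the model returns tokens.
# The model name is illustrative -- use the one you actually deployed.
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mistralai/Mistral-7B-Instruct-v0.2", "prompt": "Hello", "max_tokens": 8}'
```

If both calls return JSON rather than connection errors, the serving platform is ready for the AI Gateway to connect to.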
Choosing a Configuration Type
When setting up a self-hosted model infrastructure, you’ll need to decide between two configuration options:
- GitLab.com AI Gateway with Default GitLab External Vendor LLMs:
- This is the default Enterprise offering and is not fully self-hosted.
- In this configuration, you connect your self-managed GitLab instance to the GitLab-hosted AI Gateway, which integrates with external vendor LLM providers, such as Google Vertex or Anthropic.
- The LLMs communicate through the GitLab Cloud Connector, providing a ready-to-use AI solution without the need for on-premise infrastructure.
- For licensing, you’ll need a GitLab Premium or Ultimate subscription and GitLab Duo Enterprise.
- Self-Hosted AI Gateway and LLMs:
- In this configuration, you deploy your own AI Gateway and LLMs within your infrastructure, without relying on external public services.
- This setup gives you full control over your data and security.
- For licensing, you’ll need a valid GitLab license, which you can request through the Customers Portal.
Before moving forward, carefully consider which configuration best aligns with your organization’s specific requirements and priorities.
Setting Up a Self-Hosted Model Infrastructure
To set up a fully isolated self-hosted model infrastructure, follow these steps:
- Install a Large Language Model (LLM) Serving Infrastructure:
- GitLab supports various platforms for serving and hosting your LLMs, including vLLM, AWS Bedrock, and Azure OpenAI.
- Refer to the supported LLM platforms documentation to explore the features and capabilities of each option, and choose the most suitable platform for your needs.
- Additionally, review the supported models and hardware requirements documentation to select models that best align with your infrastructure and performance goals.
- Install the GitLab AI Gateway:
- Deploy the AI Gateway within your own infrastructure so that GitLab Duo requests are routed to your self-hosted models without leaving your network.
- Configure GitLab Duo Features:
- Refer to the Configure GitLab Duo features documentation for step-by-step instructions on customizing your self-hosted model setup to meet your operational needs.
- Enable Logging:
- Consult the logging documentation to find the configuration details for enabling logging within your environment.
- Utilize the logs to track and manage the performance of your self-hosted model infrastructure effectively.
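The first two steps above can be sketched as shell commands. This is a minimal sketch, not a production deployment: the model name, ports, container image path, and environment variable names are assumptions based on GitLab's documentation at the time of writing, so verify each against the current docs before relying on them.

```shell
# Step 1 (sketch): serve a supported model with vLLM's OpenAI-compatible
# server. Model and port are assumptions -- substitute your chosen model.
python -m vllm.entrypoints.openai.api_server \
  --model mistralai/Mistral-7B-Instruct-v0.2 \
  --port 8000

# Step 2 (sketch): run the GitLab AI Gateway container and point it at your
# GitLab instance. Image path, port 5052, and AIGW_* variables are taken
# from GitLab's docs but may change between releases -- double-check them.
docker run -d -p 5052:5052 \
  -e AIGW_GITLAB_URL=https://gitlab.example.com \
  -e AIGW_GITLAB_API_URL=https://gitlab.example.com/api/v4/ \
  registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:latest

# Confirm the gateway container is up (health endpoint path is an assumption):
curl -s http://localhost:5052/monitoring/healthz
```

With the gateway healthy, you can proceed to point your GitLab instance at it and configure the GitLab Duo features as described in step 3.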
By following these steps, you can establish a fully isolated self-hosted model infrastructure that aligns with your organization’s unique requirements, ensuring enterprise-level privacy, security, and flexibility.
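To act on the logs mentioned in step 4, a small script can surface slow requests for performance tracking. This sketch assumes the AI Gateway is configured to emit one JSON object per log line, and the field names `path` and `duration_ms` are hypothetical, so map them to whatever fields your logging configuration actually produces.

```python
import json

def slow_requests(log_lines, threshold_ms=1000):
    """Return (path, duration_ms) pairs for entries slower than the threshold.

    Field names 'path' and 'duration_ms' are illustrative -- match them to
    the fields your AI Gateway logging configuration actually emits.
    """
    hits = []
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners, tracebacks)
        if entry.get("duration_ms", 0) > threshold_ms:
            hits.append((entry.get("path"), entry["duration_ms"]))
    return hits

logs = [
    '{"path": "/v1/chat", "duration_ms": 2400}',
    '{"path": "/v1/completions", "duration_ms": 300}',
    'not json',
]
print(slow_requests(logs))  # → [('/v1/chat', 2400)]
```

Run against a log file (for example, `slow_requests(open("gateway.log"))`), this gives a quick view of which GitLab Duo requests your models are serving slowly.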
Conclusion
GitLab Duo Self-Hosted Models offer a powerful solution for organizations seeking complete control over their data, security, and AI infrastructure. By leveraging self-hosted models, you can choose from a wide range of GitLab-approved LLMs, retain full control over request-response logs, and tailor the GitLab Duo features to meet the specific needs of your users.
This comprehensive approach to self-hosting ensures enterprise-level privacy and flexibility, allowing you to seamlessly integrate your LLMs with GitLab Duo features and unlock the full potential of your software development lifecycle. As an experienced IT professional, you can now confidently navigate the world of self-hosted models and empower your organization to thrive in the dynamic landscape of technology and innovation.
For any questions or support related to setting up or using this feature, be sure to consult the GitLab documentation or reach out to the GitLab support team through the GitLab forum.