JFrog Ltd, known for its "Liquid Software" vision, has unveiled a new addition to its suite: the Model Context Protocol (MCP) Server. Designed to enhance developer productivity and streamline workflows, the server enables large language models (LLMs) and AI agents to securely interact with tools and data sources within the JFrog Platform. By letting developers integrate AI tools and coding agents with JFrog, it supports a shift toward self-service AI across the entire development cycle, helping teams build smarter, more secure applications at a faster pace.
MCP is an open, industry-standard integration protocol. Through natural language commands such as "Create a new local repository," developers can now operate the JFrog Platform directly from their Integrated Development Environments (IDEs) or AI assistants. This eliminates context switching and gives teams immediate visibility into open-source vulnerabilities and software package usage. AI automation simplifies previously complex queries, empowering development teams to work faster and more intelligently.
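Under the hood, an AI assistant or IDE agent brokers such natural-language requests by calling tools exposed by an MCP server. The sketch below shows the general shape of that interaction using the open-source MCP Python SDK; the JFrog endpoint URL, tool name, and arguments are placeholders for illustration, not JFrog's actual tool catalog.

```python
# Minimal sketch of an MCP client calling a remote MCP server over HTTPS.
# Assumes the open-source "mcp" Python SDK; the URL, tool name, and
# arguments below are hypothetical placeholders.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    # Connect to a remote MCP server over a trusted HTTPS endpoint (placeholder URL).
    async with sse_client("https://mycompany.jfrog.io/mcp/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover which operations the server exposes as tools.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Invoke a tool -- the name and arguments here are hypothetical,
            # standing in for a "create a new local repository" request.
            result = await session.call_tool(
                "create_local_repository",
                arguments={"key": "my-npm-local", "package_type": "npm"},
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```

In a typical setup the developer never writes this code; the IDE's AI assistant translates the natural-language command into an equivalent tool call on their behalf.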
While the deployment of remote MCP servers facilitates faster code iteration and improves software reliability, security remains paramount. The JFrog Security Research Team has identified vulnerabilities, such as CVE-2025-6514, that could be exploited to compromise MCP clients. JFrog's MCP Server therefore emphasizes security, relying on trusted connections such as HTTPS to keep operations secure and sustainable.

Key Features of JFrog's MCP Server