Software supply chain company JFrog Ltd. today announced a new integration with Amazon SageMaker to enable developers and data scientists to collaborate efficiently on building, training and deploying machine learning models.
SageMaker is a cloud-based machine learning platform for creating, training and deploying ML models in the cloud, which developers also use to deploy models on embedded systems and edge devices. The new pairing with JFrog’s Artifactory allows those models to be incorporated into a modern software development lifecycle, making each model immutable, traceable, secure and validated as it matures toward release.
The integration is designed to address concerns around artificial intelligence and machine learning. A recent Forrester Consulting survey found that 50% of data decision-makers cited applying governance policies within AI and machine learning as the biggest challenge to widespread usage, while 45% cited data and model security as the gating factor.
JFrog’s SageMaker integration addresses these concerns by bringing DevSecOps best practices into ML model management, allowing developers and data scientists to expand, secure and grow their ML projects while ensuring security, regulatory and organizational compliance.
“As more companies begin managing big data in the cloud, DevOps team leaders are asking how they can scale data science and ML capabilities to accelerate software delivery without introducing risk and complexity,” said Kelly Hartman, senior vice president of global channels and alliances at JFrog. “Working with AWS, we’ve been able to design a workflow that applies DevSecOps best practices to ML model development in the cloud, delivering flexibility, speed, security and peace of mind.”
Key features of the integration bring machine learning closer to standard software development and production lifecycles, ensuring enhanced protection against deletion or modification of models. The integration enables the development, training, securing and deployment of models in a more streamlined manner.
The integration also provides capabilities to detect and block the use of malicious models across the organization, along with tools for scanning model licenses to ensure they comply with company policies and regulatory requirements.
For improved transparency and control, the integration supports storing homegrown and internally augmented models with robust access controls and detailed versioning history. It also simplifies the process of bundling and distributing models as part of standard software releases, aligning machine learning development more closely with traditional software deployment processes.
Alongside its SageMaker integration, JFrog today also unveiled new versioning capabilities for its ML Model Management solution that help bring model development into an organization’s secure and compliant SDLC. The new versioning capabilities increase transparency around each model version, allowing developers, DevOps teams and data scientists to ensure that the right version is being used in the right place at the right time, and that it’s secure.
Image: JFrog