In a global marketplace where decisions must be made with ever-increasing velocity, data science teams often need not only to speed up model deployment but also to do it at scale across the entire enterprise. They are frequently doing this with smaller teams than they need, given the ongoing shortage of data scientists. It’s no wonder, then, that it takes a typical firm an average of 17 months to break even on its returns from AI projects.
At DataRobot, we understand that the ability to move quickly is an essential component of any AI project. Our platform allows data science teams to do in mere minutes or hours what previously would have taken days or weeks, giving large enterprises the ability to make faster and more accurate decisions based on real-time data.
We have long aspired to be the preeminent leader in MLOps, with our focus on Applied AI and our strategy of getting every model into production and then managing and continuously updating those models to ensure peak performance over time. For this reason, Algorithmia is the perfect complement to DataRobot’s full end-to-end solution.
Algorithmia automates machine learning deployment, provides maximum tooling flexibility, optimizes collaboration between operations and development, and leverages existing software development lifecycle (SDLC) and continuous integration/continuous delivery (CI/CD) practices. More than 130,000 engineers and data scientists have used Algorithmia’s platform to date, including teams at Fortune 500 companies, non-governmental organizations, and government agencies.
DataRobot MLOps Augmented with Algorithmia’s GPU Acceleration
DataRobot was already a leader in the MLOps space, with capabilities shaped by data science best practices, including model management and monitoring for data drift and accuracy, combined with strong governance capabilities for the business. From a technology perspective, we do an excellent job serving high-volume, low-latency predictions from our own models and managing and monitoring external models with our highly differentiated MLOps agent approach, which allows customers to leave their existing models in place. To this, the Algorithmia acquisition will add a host of complementary capabilities that significantly enhance our MLOps offering and further bolster our AI platform, including robust GPU acceleration and a solid IT backbone.
Algorithmia lines up perfectly with our quest to bring MLOps and augmented intelligence to humans with efficiency, accuracy, and speed, allowing machine learning teams to operate more effectively. Its technology runs on preconfigured CPUs and GPUs to provide enhanced support for deep learning, natural language processing (NLP), and computer vision workloads, taking advantage of the latest advances in cloud, on-premise, and hybrid environments.
Well over 50% of Algorithmia’s workloads run on GPUs today, and its custom scheduler on Kubernetes hyper-optimizes those workloads at scale. This makes it possible to pipeline incredibly complex inference models and to run tasks across hundreds of GPUs simultaneously, creating efficiencies far greater than simply dumping models into Kubernetes, which is critical to getting models into production and to customers seeing a return on their investment.
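For context, here is a minimal sketch of the baseline approach that paragraph contrasts with: deploying a single GPU-backed model server directly through the official Kubernetes Python client. The image name, labels, namespace, and resource figures are hypothetical, and this only illustrates plain Kubernetes GPU scheduling, not Algorithmia’s own scheduler or API.

```python
from kubernetes import client, config

# Hypothetical container image for a model-serving workload.
MODEL_IMAGE = "registry.example.com/churn-model-server:1.0"

config.load_kube_config()  # use load_incluster_config() when running inside a cluster

container = client.V1Container(
    name="model-server",
    image=MODEL_IMAGE,
    ports=[client.V1ContainerPort(container_port=8080)],
    resources=client.V1ResourceRequirements(
        # Ask the scheduler for a node with one GPU; requires the NVIDIA device plugin.
        limits={"nvidia.com/gpu": "1", "cpu": "2", "memory": "4Gi"},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="churn-model"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "churn-model"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "churn-model"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Create the deployment in a hypothetical "models" namespace.
client.AppsV1Api().create_namespaced_deployment(namespace="models", body=deployment)
```

Each replica here pins a whole GPU whether it is busy or not, which is exactly the inefficiency a smarter scheduling layer on top of Kubernetes is meant to avoid.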
From a technical point of view, you need specialized hardware such as GPUs to harness deep learning and neural networks, but GPUs can be difficult to deploy in certain environments. Additionally, Algorithmia has invested heavily in deploying models in extremely secure environments with the strict processes and authentication requirements often seen at large financial institutions and government agencies. With this acquisition, DataRobot is supercharging the already robust governance capabilities in our MLOps offering.
Algorithmia’s CTO, Kenny Daniel, recently said, “GPUs don’t play nicely with certain containerized workloads, which are extremely ubiquitous in the industry” and that “solving the problems of running containerized ML workloads in any environment is the short version of the technical problem that Algorithmia solves.” We couldn’t agree more.
In the ever-evolving landscape of machine learning technology, plug-and-play MLOps integrations with other systems rarely exist. The machine learning community has learned that GPUs excel at parallel processing of simple operations, often showing massive improvements over CPUs when training models, sometimes to the tune of ten times the speed.
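As a rough illustration of where that speedup comes from, the sketch below times the same large matrix multiplication, the core operation in neural network training, on CPU and GPU using PyTorch. The matrix size and repeat count are arbitrary, and the actual ratio depends entirely on your hardware and workload.

```python
import time
import torch

def time_matmul(device: torch.device, size: int = 4096, repeats: int = 10) -> float:
    """Time repeated large matrix multiplications on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)  # warm-up so one-time initialization is not counted
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU kernels to finish
    return time.perf_counter() - start

cpu_time = time_matmul(torch.device("cpu"))
if torch.cuda.is_available():
    gpu_time = time_matmul(torch.device("cuda"))
    print(f"CPU: {cpu_time:.2f}s  GPU: {gpu_time:.2f}s  speedup: {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU: {cpu_time:.2f}s (no GPU available for comparison)")
```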
A New Dimension to the DataRobot Story
As a part of the DataRobot family, Algorithmia adds a new dimension to our story by giving us the power to serve models with much more complex payloads. Algorithmia has focused maniacally on the later stages of the machine learning lifecycle and on demystifying complex problems that don’t arise in traditional software. By allowing teams to attack CPU and GPU workloads efficiently, its MLOps expertise perfectly complements our own existing strength in MLOps model monitoring and management, delivering best-in-class infrastructure that enables Continuous AI for our customers.
As organizational AI strategies mature, we are more frequently encountering customers that have far more than just a handful of models in production, which is driving the need for endorsed, enterprise-class infrastructure. Unlike do-it-yourself MLOps solutions that lock users into specific technology stacks, Algorithmia enables machine learning leaders to put models into production faster within their existing operational processes. Algorithmia delivers every model to production with enterprise-grade security and governance, ensuring business continuity with zero downtime.
DataRobot and Algorithmia Navigate the Intricacies of IT and DevOps Teams
Very often, data scientists are left to navigate the intricacies of IT and DevOps on their own, without clear communication with the IT team about what technology needs to be in place before they can deploy a model: source code management, testing, monitoring, and alerting. Across all industries, compliance and regulation around machine learning are on the rise, and it is increasingly important for IT leaders to prioritize security and governance in their processes. Being able to manage those services securely and effectively is critical in machine learning, where models may be retrained and updated on a constant basis.
With this pedigree and track record, we are delighted to welcome the Algorithmia team and technology to the DataRobot family.
To learn more about the exciting road ahead for DataRobot MLOps and Algorithmia, schedule a demo today.
About the authors
Nenshad Bardoliwalla
Chief Product Officer, DataRobot
Nenshad Bardoliwalla is Chief Product Officer of DataRobot, where he leads product strategy, product management, product design, and customer experience across the company. Bardoliwalla joined DataRobot in 2019 through the acquisition of Paxata, where he was the Co-Founder and Chief Product Officer. He formerly served as vice president for product management, product development, and technology for Enterprise Performance Management at SAP, after many years at Hyperion and Siebel Systems. He lives in the San Francisco Bay Area with his family.
Kenny Daniel
CTO of MLOps, DataRobot
Kenny Daniel is co-founder and CTO of Algorithmia, where he brings his combination of industry and academic experience to shape the direction and development of the Algorithmia enterprise MLOps platform. Previously, he worked in the industry building deep learning and image-based recognition systems. Kenny holds degrees from Carnegie Mellon University and the University of Southern California, where he studied Artificial Intelligence and Mechanism Design.