Google expands Vertex, its managed AI service, with new features



About a year ago, Google announced the launch of Vertex AI, a managed AI platform designed to help companies accelerate the deployment of AI models. To mark the service’s anniversary and the start of the Google Applied ML Summit, Google announced new Vertex features this morning, including a dedicated AI training server and “example-based” explanations.


“A year ago, we launched Vertex AI with the goal of creating the next generation of AI tools that enable data scientists and engineers to do meaningful and creative work,” Henry Tappen, Group Product Manager at Google Cloud, told TechCrunch via email. “The new Vertex AI features we’re launching today will continue to accelerate the deployment of machine learning models across organizations and democratize AI, so that more people can deploy models to production, continuously monitor them, and drive business impact with AI.”


As Google has historically stated, the advantage of Vertex is that it integrates Google Cloud services for AI under a unified user interface and API. Google says customers such as Ford, Seagate, Wayfair, Cash App, Cruise and Lowe’s are using the service to build, train and deploy machine learning models in a single environment, taking models from experimentation to production.

Vertex competes with managed AI platforms from cloud providers such as Amazon Web Services and Microsoft Azure. Technically, it fits into the category of platforms known as MLOps, a set of best practices for building, deploying and maintaining AI systems in production. Deloitte predicts the MLOps market will be worth $4 billion by 2025, an increase of almost 12 times over 2019.


Gartner projects that the advent of managed services such as Vertex will lead to 18.4% growth in the cloud computing market in 2021, with cloud computing predicted to account for 14.2% of total global IT spending. “As enterprises increase investment in mobility, collaboration, and other remote work technologies and infrastructure, the growth [will be] supported through 2024,” Gartner wrote in a November 2020 study.

New opportunities

Among the new Vertex features is the AI Training Reduction Server, a technology that Google says optimizes the throughput and latency of multi-node distributed training on Nvidia GPUs. In machine learning, “distributed training” refers to spreading the work of training a system across multiple machines, GPUs, CPUs, or specialized chips, reducing the time and resources needed to complete training.
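The idea behind distributed training with a reduction server can be sketched in a few lines: each worker computes a gradient on its own shard of the data, and the gradients are averaged (an “all-reduce”) before the shared model parameters are updated. The sketch below is a conceptual illustration in plain Python, not the Vertex AI Reduction Server API; all function names are invented for illustration.

```python
# Conceptual sketch of data-parallel distributed training: workers
# compute gradients on data shards, and a reduction step averages
# them before a single shared update. Hypothetical names throughout.

def shard(data, num_workers):
    """Split the dataset into one shard per worker."""
    return [data[i::num_workers] for i in range(num_workers)]

def local_gradient(weight, data_shard):
    """Gradient of mean squared error for the toy model y = weight * x."""
    return sum(2 * (weight * x - y) * x for x, y in data_shard) / len(data_shard)

def all_reduce_mean(gradients):
    """Average gradients across workers, as a reduction server would."""
    return sum(gradients) / len(gradients)

def training_step(weight, data, num_workers=4, lr=0.01):
    shards = shard(data, num_workers)
    grads = [local_gradient(weight, s) for s in shards]
    return weight - lr * all_reduce_mean(grads)

# Toy data from y = 3x; repeated steps move the weight toward 3.
data = [(x, 3 * x) for x in range(1, 9)]
w = 0.0
for _ in range(50):
    w = training_step(w, data)
print(round(w, 2))  # prints 3.0
```

In a real system the workers run on separate machines and the all-reduce is the communication bottleneck, which is exactly the step Google says the Reduction Server is designed to speed up.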

“This greatly reduces the training time required for large language workloads, such as BERT, and enables cost parity across approaches,” said Andrew Moore, vice president and general manager of Cloud AI at Google, in a Google Cloud blog post today. “In many mission-critical business scenarios, a shortened training cycle allows data scientists to train a model with higher predictive performance within the constraints of the deployment window.”

Vertex also now includes, in preview, Tabular Workflows, which aim to make the model-creation process more customizable. As Moore explained, Tabular Workflows let users choose which parts of the workflow they want Google’s “AutoML” technology to handle and which parts they want to engineer themselves. AutoML, or automated machine learning, is not unique to Google Cloud or Vertex; it covers any technology that automates aspects of AI development, from ingesting a raw dataset to producing a deployable machine learning model. AutoML can save time, but it can’t always beat a human expert, especially where precision is required.
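At its simplest, the automation AutoML provides amounts to fitting several candidate models and keeping whichever performs best on held-out validation data. The toy sketch below illustrates only that selection loop, in plain Python with invented names; real AutoML systems (including Google’s) also automate feature engineering, architecture search, and hyperparameter tuning.

```python
# Toy AutoML-style model selection: fit each candidate on the training
# split, then keep the one with the lowest validation error.
# All names here are illustrative, not any real AutoML API.

def fit_constant(train):
    """Baseline model: always predict the mean of the training targets."""
    mean = sum(y for _, y in train) / len(train)
    return lambda x: mean

def fit_linear(train):
    """One-feature least-squares fit: y = a * x + b."""
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def validation_error(model, val):
    """Mean squared error on held-out data."""
    return sum((model(x) - y) ** 2 for x, y in val) / len(val)

def auto_select(train, val, candidates):
    """Fit every candidate and return the (name, model) pair that wins."""
    fitted = [(name, fit(train)) for name, fit in candidates]
    return min(fitted, key=lambda nm: validation_error(nm[1], val))

train = [(x, 2 * x + 1) for x in range(10)]
val = [(x, 2 * x + 1) for x in range(10, 15)]
name, model = auto_select(train, val,
                          [("constant", fit_constant), ("linear", fit_linear)])
print(name)  # prints "linear" on this linear dataset
```

Tabular Workflows, as described, lets users take over any stage of a pipeline like this, swapping their own hand-built component in place of the automated one.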

“Elements of Tabular Workflows can also be integrated into existing Vertex AI pipelines,” said Moore. “We have added new managed algorithms, including advanced models like TabNet, new algorithms for feature selection, model distillation, and… much more.”

In addition to development pipelines, Vertex is also getting integration (in preview) with Serverless Spark, a serverless version of the Apache-backed open source analytics engine for data processing. Vertex users can now launch a Serverless Spark session for interactive code development.

Elsewhere, customers can analyze data features on the Neo4j platform and then deploy models with Vertex through a new partnership with Neo4j. And, thanks to a collaboration between Google and Labelbox, it’s now easier to access Labelbox’s image, text, audio, and video data labeling services from the Vertex dashboard. Labels are necessary for most AI models to learn how to make predictions; models are trained to learn relationships between labels, also called annotations, and sample data (for example, the label “frog” and a photo of a frog).
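The label–sample pairing described above is the basic unit of supervised learning. A minimal sketch, with invented filenames and labels purely for illustration:

```python
# Minimal illustration of labeled training data: each sample is paired
# with an annotation (label) that a supervised model learns to predict.
# Filenames and labels are invented for illustration.

labeled_examples = [
    {"data": "images/frog_001.jpg", "label": "frog"},
    {"data": "images/toad_014.jpg", "label": "toad"},
    {"data": "images/frog_022.jpg", "label": "frog"},
]

# A supervised trainer consumes (sample, label) pairs...
pairs = [(ex["data"], ex["label"]) for ex in labeled_examples]

# ...and the set of distinct labels defines what the model can predict.
labels = sorted({label for _, label in pairs})
print(labels)  # prints ['frog', 'toad']
```

Mislabeled examples in a set like this corrupt what the model learns, which is the failure mode the next feature is aimed at.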

In cases of data mislabeling, Moore points to Example-Based Explanations as a solution. Available in preview, the new Vertex feature uses example-based explanations to help diagnose and troubleshoot data issues. Of course, no explainable AI technique can catch every bug; computational linguist Vagrant Gautam warns against over-reliance on the tools and methods used to explain AI.

“Google has some documentation on the limitations and a more detailed white paper on explainable AI, but none of that is mentioned anywhere in [today’s Vertex AI announcement],” they told TechCrunch via email. “The announcement emphasizes that ‘skill ownership should not be a selection criterion for participation’ and that the new features can ‘scale AI to non-software experts.’ What worries me is that non-specialists already trust AI and its explainability more than they should, and now various Google customers can build and deploy models faster without stopping to ask whether the problem really needs to be solved with machine learning, while calling their models explainable (and therefore credible and good) without knowing the full extent of the limitations of doing so for their specific cases.”

However, Moore suggests that example-based explanations can be a useful tool when used in tandem with other model auditing techniques.

“Data scientists don’t need to be infrastructure or operations engineers for models to be accurate, explainable, scalable, fault-tolerant, and secure in an ever-changing environment,” Moore added. “Our customers require tools to easily manage and maintain their machine learning models.”


Credit: techcrunch.com
