Machine learning can give companies a competitive advantage by using the data they collect, such as shopping patterns, to generate predictions that improve product profitability (e-commerce recommendations, for example). But it's hard for any employee to keep up with, much less manage, the huge volumes of data being generated. That's a problem, because AI systems tend to produce better predictions when given the most up-to-date data; systems that are not regularly retrained on new data risk becoming stale and less accurate over time.
Fortunately, a new set of methods, dubbed "MLOps," promises to simplify the process of feeding data into these systems by abstracting away the complexities. One proponent is Mike Del Balso, CEO of Tecton. Del Balso co-founded Tecton after seeing firsthand, while at Uber, how the company struggled to build and deploy new machine learning models.
"Models equipped with highly accurate real-time features can make much more accurate predictions. But building the data pipelines that produce these features is difficult, demands significant effort from data scientists, and can add weeks or months to project timelines," Del Balso told TechCrunch via email.
Del Balso, who previously led the search ads machine learning teams at Google, launched Tecton in 2019 with Jeremy Hermann and Kevin Stumpf, two former Uber colleagues. While at Uber, the trio created Michelangelo, an artificial intelligence platform that Uber used internally to generate market forecasts, calculate expected arrival times, and automatically detect fraud, among other use cases.
The success of Michelangelo inspired Del Balso, Hermann and Stumpf to build a commercial version of the technology, which became Tecton. Investors took notice: Tecton today announced that it has raised $100 million in a Series C round, bringing the company's total raised to $160 million. The tranche was led by Kleiner Perkins with contributions from Databricks, Snowflake, Andreessen Horowitz, Sequoia Capital, Bain Capital Ventures and Tiger Global. Del Balso says the money will be used to scale Tecton's engineering and marketing teams.
“We expect the software we use today to be highly personalized and intelligent,” said Kleiner Perkins partner Bucky Moore in a statement provided to TechCrunch. “While machine learning makes this possible, it remains far from reality as building the supporting infrastructure is prohibitively difficult for all but the most advanced companies. Tecton makes this infrastructure available to any team, enabling them to build machine learning applications faster.”
At a high level, Tecton automates the process of building features from real-time data sources. "Features" in machine learning are the individual independent variables that serve as inputs to an AI system; systems use these features to make their predictions.
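To make the idea concrete, here is a minimal illustrative sketch of feature engineering, not Tecton's actual API; the function and field names (`build_features`, `txn_count`, and so on) are hypothetical. It turns a raw log of transactions into the kind of independent variables a model would consume:

```python
def build_features(user_transactions):
    """Turn a raw list of transaction records into model-ready features.

    Each key in the returned dict is one independent variable ("feature")
    that an ML model would take as input.
    """
    amounts = [t["amount"] for t in user_transactions]
    return {
        "txn_count": len(amounts),                 # how active the user is
        "txn_total": sum(amounts),                 # total spend
        "txn_avg": sum(amounts) / len(amounts) if amounts else 0.0,
        "distinct_merchants": len({t["merchant"] for t in user_transactions}),
    }

txns = [
    {"amount": 20.0, "merchant": "grocer"},
    {"amount": 5.0, "merchant": "cafe"},
    {"amount": 75.0, "merchant": "grocer"},
]
features = build_features(txns)
```

In production, pipelines like this must run continuously over fresh data, which is the part Tecton aims to automate.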
"[Automation] allows companies to deploy real-time machine learning models much faster with less data processing effort," Del Balso said. "It also allows companies to generate more accurate predictions. This, in turn, can have a direct impact on the bottom line, for example by improving fraud detection or providing better product recommendations."
In addition to orchestrating data pipelines, Tecton can store feature values for AI training and serving environments. The platform can also monitor data pipelines, tracking latency and processing costs, and can extract historical feature values so that systems are trained consistently with production.
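The core idea of storing feature values for both training and serving can be sketched in a few lines. This is a toy, dict-backed illustration of the concept, not Tecton's implementation; the class and method names are hypothetical:

```python
class FeatureStore:
    """Toy feature store: a single write path feeds both the offline
    (training) log and the online (serving) view, so models train and
    predict on identically computed values."""

    def __init__(self):
        self.offline_log = []   # append-only history used to build training sets
        self.online = {}        # latest value per entity, read at prediction time

    def write(self, entity_id, feature_name, value, timestamp):
        self.offline_log.append((timestamp, entity_id, feature_name, value))
        self.online[(entity_id, feature_name)] = value

    def get_online(self, entity_id, feature_name):
        # Low-latency lookup used when serving a live prediction.
        return self.online.get((entity_id, feature_name))

    def training_rows(self, feature_name):
        # Full history used to assemble training data.
        return [(t, e, v) for (t, e, f, v) in self.offline_log if f == feature_name]

store = FeatureStore()
store.write("user_42", "txn_count_1h", 3, timestamp=1)
store.write("user_42", "txn_count_1h", 5, timestamp=2)
```

Keeping both views behind one write path is what prevents training/serving skew, the mismatch that arises when offline and online features are computed differently.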
Tecton also hosts an open source feature store platform, Feast, which does not require dedicated infrastructure. Instead, Feast reuses existing cloud or on-premises hardware, spinning up new resources as needed.
“Tecton’s typical use cases are machine learning applications that benefit from real-time inference. Some examples include fraud detection, recommender systems, search, underwriting, personalization, and real-time pricing,” Del Balso said. “Many of these machine learning models perform much better when they make real-time predictions using real-time data. For example, fraud detection models are significantly more accurate when using data about user behavior from just seconds before, such as the number, size, and geographic location of transactions.”
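The seconds-fresh signals Del Balso describes (transaction count, size and geography) are typically computed over a sliding time window. Here is a small self-contained sketch of that technique; it is illustrative only, with hypothetical names, and is not how Tecton implements it:

```python
from collections import deque

class RecentTransactionFeatures:
    """Maintain sliding-window fraud features over a user's most recent
    transactions: count, total amount, and distinct locations."""

    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()  # (timestamp, amount, location), oldest first

    def add(self, ts, amount, location):
        self.events.append((ts, amount, location))
        self._evict(ts)

    def _evict(self, now):
        # Drop events that have fallen out of the time window.
        while self.events and self.events[0][0] < now - self.window:
            self.events.popleft()

    def features(self, now):
        self._evict(now)
        return {
            "recent_txn_count": len(self.events),
            "recent_txn_total": sum(a for _, a, _ in self.events),
            "recent_distinct_locations": len({loc for _, _, loc in self.events}),
        }

w = RecentTransactionFeatures(window_seconds=60)
w.add(0, 50.0, "US")
w.add(10, 500.0, "BR")
f = w.features(now=30)  # both transactions still inside the 60s window
```

A fraud model fed `recent_distinct_locations` would see the two-country burst immediately, which is the kind of real-time signal a batch pipeline updated hourly would miss.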
According to Cognilytica, the global MLOps platform market will be worth $4 billion by 2025, up from $350 million in 2019. Tecton isn't the only startup chasing it. Rivals include Comet, Weights & Biases, Iterative, InfuseAI, Arrikto and Continual, to name a few. On the feature store front, Tecton competes with Rasgo and Molecula, as well as bigger names such as Splice Machine, Google and AWS.
Del Balso cites several advantages Tecton has, such as strategic partnerships and integrations with Databricks, Snowflake, and Redis. Tecton has hundreds of active users, though Del Balso declined to name customers beyond noting that the base has grown fivefold in the last year, and he said gross margin (net sales less cost of goods sold) exceeds 80%. Annual recurring revenue reportedly tripled from 2021 to 2022, but Del Balso declined to provide exact figures.
"We are still at the beginning of the MLOps journey. This is a difficult transition for businesses. Their data science teams need to act like data engineers and start building production-quality code. They need a whole set of new tools to support this transition, and they need to integrate these tools into consistent machine learning platforms. The ecosystem of MLOps tools is still highly fragmented, making it difficult for enterprises to build these machine learning platforms," said Del Balso. "The pandemic has accelerated the transition to digital experiences, and with it the importance of deploying operational machine learning to support those experiences. We believe the pandemic has accelerated the adoption of new MLOps tools, including feature repositories and feature platforms."
Based in San Francisco, Tecton currently has 80 employees. The company plans to hire about 20 people over the next six months.
Credit: techcrunch.com