Bobidi Launches Developer Rewards for Testing Companies’ AI Models



In the rush to build, test, and deploy AI systems, businesses often lack the resources and time to fully validate their systems and ensure they are error-free. In a 2018 report, Gartner predicted that 85% of AI projects would produce erroneous results due to biases in the data, the algorithms, or the teams responsible for managing them. Even big tech companies aren't immune to pitfalls: IBM ultimately failed to deliver an AI-based cancer-diagnosis system for one client, at a cost of $62 million over four years.


Inspired by "reward for mistakes" programs, Jung-Suh Choi and Suhyun Bae founded Bobidi, a platform designed to help companies validate their AI systems by exposing those systems to the global data science community. With Bobidi, Bae and Choi aimed to create a product that lets customers connect their AI systems to the bug-finding community in a "secure" way via an API.


The idea is to let developers test AI systems for biases — that is, edge cases where the systems perform poorly — and thereby reduce the time testing takes, Choi explained in an email interview. Bae was previously a senior engineer at Google and led augmented reality mapping at Niantic, while Choi was a senior manager at eBay and led the "human engineering" team at Facebook. They met at a tech event about 10 years ago.

"By the time bias or flaws are identified in the model, the damage is already irreversible," Choi said. "For example, natural language processing algorithms [like OpenAI's GPT-3] often turn out to make problematic comments, or to respond incorrectly to comments, involving hate speech, discrimination, and insults. By using Bobidi, the community can 'pre-test' the algorithm and find those loopholes, which is actually very effective because you can test the algorithm with a lot of people under specific conditions that reflect social and political contexts that are constantly changing."


To test the models, the Bobidi developer community creates a validation dataset for a given system. As developers try to find loopholes in the system, clients receive analysis that includes false negative and positive patterns and their associated metadata (such as the number of edge cases).

Handing confidential systems and models to outsiders might give some companies pause, but Choi says that Bobidi "automatically expires" models after a certain number of days so they can't be reverse engineered. Customers pay for the service based on the number of "legitimate" attempts made by the community, at a rate of $0.99 per 10 attempts.

Choi points out that the amount of money developers can make with Bobidi — $10 to $20 an hour — is well above the minimum wage in many parts of the world. Assuming Choi's estimates hold up, Bobidi counters a trend in the data science industry of paying data validators and annotators poorly. One study found that annotators of the widely used ImageNet computer vision dataset earned a median wage of $2 an hour, with only 4% earning more than $7.25 an hour.

Fee structure aside, crowdsourced validation is not a new idea. In 2017, the University of Maryland's Computational Linguistics and Information Processing Lab launched a platform called Break It Up, which allowed researchers to submit models to users tasked with coming up with examples to defeat them. Elsewhere, Meta maintains a platform called Dynabench that challenges users to trick models designed to analyze sentiment, answer questions, detect hate speech, and more.

But Bae and Choi believe their "gamified" approach will help Bobidi stand out from the crowd. While it's still early days, the vendor claims clients among augmented reality and computer vision startups, including Seerslab, Deepixel and Gunsens.

That was enough to convince several investors to back the venture. Today, Bobidi closed a $5.5 million seed round with participation from Y Combinator, We Ventures, Hyundai Motor Group, Scrum Ventures, Meta's New Product Experimentation (NPE) division, Lotte Ventures, Atlas Pac Capital and several unnamed angel investors.

Notably, Bobidi is one of the first investments for NPE, which shifted gears last year from building consumer-facing apps to making seed-stage investments in AI-focused startups. Reached for comment, NPE Head of Investment Sunita Parasuraman said via email: "We are thrilled to support the talented founders of Bobidi, who are helping companies better validate AI models with an innovative solution powered by people around the world."

"Bobidi is a hybrid of community and AI, a unique combination of the experiences we share," Choi added. "We believe the era of big data is coming to an end and we are about to enter a new era of quality data. This means we are moving from an era focused on building the best model for a given dataset to a new era in which people are challenged to find the best dataset for a given model, the opposite approach."

Choi said the proceeds from the seed round will go toward hiring — Bobidi currently has 12 employees — and toward building "customer insight experiences" and various "core machine learning technologies." The company hopes to triple the size of its team by the end of the year despite economic headwinds.


Credit: techcrunch.com
