Building better startups with responsible AI

Founders often feel that implementing responsible AI practices is challenging and will slow down their business. They jump to mature examples like Salesforce's Office of Ethical and Humane Use and conclude that the only way to avoid building harmful products is to stand up a big team. The truth is much simpler.

I set out to learn how founders were thinking about responsible AI practices on the ground by talking with some of the most successful early-stage AI founders, and I found that many of them were already implementing responsible AI practices.

They just didn't call it that. They call it "good business."

It turns out that simple practices which make business sense and result in better products go a long way toward reducing the risk of unanticipated societal harm. These practices rest on the insight that people, not data, are at the heart of deploying AI solutions successfully. If you keep in mind that human beings are always in the loop, you can build a better business and a more responsible one.

Think of AI as bureaucracy

Like a bureaucracy, AI depends on having some general policy (a "model") that produces appropriate decisions in most cases. But that general policy can never account for every possible scenario a bureaucracy needs to handle, much as an AI model cannot be trained to anticipate every possible input.

When these general policies (or models) fail, those who are already marginalized are disproportionately affected. A classic algorithmic example: Somali immigrants being flagged for fraud because of their unusual communal shopping habits.

Bureaucracies address this problem with "street-level bureaucrats" like judges, DMV agents and even teachers, who can handle unique cases or decide not to enforce the policy. For example, a teacher may waive a course prerequisite under extenuating circumstances, and a judge may be more or less lenient in sentencing.

If an AI will inevitably fail, then – like a bureaucracy – we must keep humans in the loop and design with them in mind. As one founder told me, "If I were a Martian coming to Earth for the first time, I would think: humans are processing machines, I should use them."

Whether it is operators stepping in for AI systems when they are uncertain, or users choosing whether to reject, accept or manipulate a model's output, these people determine how well any AI-based solution will work in the real world.

Here are five practical tips, shared by founders of AI companies, for keeping humans in the loop – and even taking advantage of them – to build more responsible AI that is also good for business:

Introduce only as much AI as you need

Today, many companies rush to launch services with end-to-end AI-powered processes. When those processes struggle to handle a wide range of use cases, the people who suffer most are those who are already marginalized.

When diagnosing failures, engineers isolate one component at a time. Founders hoping to automate as much as possible should consider the same discipline in reverse: introduce one AI component at a time.

There are many processes – even with all the wonders of AI – that are still cheaper and more reliable to run with humans in the loop. If you launch an end-to-end system with many components coming online at once, you may find it difficult to identify which components are actually best suited to AI.

Many founders saw AI as a way to delegate the most time-consuming, low-stakes tasks away from humans, and they began with entirely human-powered systems in order to identify which tasks those were.

This "AI second" approach also lets founders enter areas where data is not immediately available: the people operating parts of the system generate exactly the data you would need to automate those tasks. One founder told us that, without the advice to introduce AI slowly – and only where it was clearly more accurate than an operator – they would never have gotten off the ground.
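To make that concrete, here is a minimal sketch of what an operator-first pipeline could look like; the Task schema, the log format and the accuracy gate are hypothetical illustrations, not any particular founder's system:

```python
# A minimal sketch of an "AI second" pipeline: every task goes to a human
# operator first, and each decision is logged as a labeled example, so the
# data needed to automate a task accumulates as a byproduct of operations.
# The Task schema, log path and accuracy gate are illustrative assumptions.
import csv
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    payload: str  # whatever the operator sees, serialized

def handle_with_operator(task: Task, operator_decision: str,
                         log_path: str = "decisions.csv") -> str:
    # Log (input, human label) pairs: this file is tomorrow's training set.
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([task.task_id, task.payload, operator_decision])
    return operator_decision

def ready_to_automate(model_accuracy: float, operator_accuracy: float) -> bool:
    # Automate a component only once the model clearly beats the human
    # baseline on held-out logged decisions.
    return model_accuracy > operator_accuracy

# Usage: route everything through people first, then measure.
handle_with_operator(Task("t-1", "refund request, $42"), "approve")
print(ready_to_automate(model_accuracy=0.97, operator_accuracy=0.94))
```

The design choice worth noting is that the humans are not a stopgap: their everyday decisions are the labeled dataset that makes later automation possible.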

Create some friction

Many founders believe that to be successful, a product needs to work out of the box with as little user input as possible.

Since AI is typically used to automate part of an existing workflow – complete with preconceptions about how much to rely on that workflow's output – a completely seamless approach can be disastrous.

For example, an ACLU audit showed that Amazon's facial recognition tool misidentified 28 members of Congress (a disproportionate number of them Black) as criminals, and loose default settings were at the heart of the problem: out of the box, the confidence threshold was set at just 80%, an obviously wrong setting if a user takes a positive match at face value.

Prompting users to engage with a product's strengths and weaknesses before deploying it can offset the potential for harmful mismatches in perception. It can also leave customers happier with the product's eventual performance.
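As an illustration, here is a minimal sketch of what such deliberate friction could look like in code. The FaceSearch wrapper and its warning are hypothetical; the 80% and 99% figures come from the ACLU example above:

```python
from dataclasses import dataclass

@dataclass
class Match:
    identity: str
    confidence: float  # model's similarity score, 0.0-1.0

class FaceSearch:
    """Hypothetical wrapper around a face-matching model.

    Deliberately ships with NO default confidence threshold:
    callers must choose one consciously for their use case.
    """

    def __init__(self, min_confidence: float):
        if not 0.0 < min_confidence < 1.0:
            raise ValueError("min_confidence must be between 0 and 1")
        # For high-stakes uses, thresholds of 99% are commonly advised,
        # not the 80% the ACLU found as an out-of-the-box default.
        if min_confidence < 0.99:
            print(f"WARNING: threshold {min_confidence:.2f} is below 0.99; "
                  "expect false positives. Treat matches as leads, not IDs.")
        self.min_confidence = min_confidence

    def search(self, candidates: list[Match]) -> list[Match]:
        # Return only matches above the caller-chosen threshold,
        # strongest lead first.
        hits = [m for m in candidates if m.confidence >= self.min_confidence]
        return sorted(hits, key=lambda m: m.confidence, reverse=True)

# The friction is the point: FaceSearch() with no argument raises a
# TypeError, forcing a deliberate decision before the first query.
search = FaceSearch(min_confidence=0.99)
```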

One founder we spoke with found that customers ultimately use their product more effectively when they have to configure it before they can use it. The founder sees this as a key component of a "design-first" approach and finds that it helps users play to the product's strengths in a context-specific way. While this approach took more time up front, it translated into revenue gains for customers.

Refer, don't answer

Many AI-based solutions focus on producing a recommendation as their output. Once those recommendations are made, humans must act on them.

Without context, poor recommendations are blindly followed, leading to downstream harm. Similarly, even good recommendations can be rejected if the humans in the loop do not trust the system and context is lacking.

Instead of taking decisions away from users, consider giving them the tools to make those decisions. This approach harnesses the power of humans in the loop to catch problematic model outputs while securing the user buy-in necessary for a successful product.

One founder shared that when their AI made recommendations directly, users didn't trust it. Their customers were pleased with the accuracy of the model's predictions, but individual users simply ignored the recommendations. So the team removed the recommendation feature and instead used the model to surface resources that could inform a user's decision – for example, "this case is similar to these five previous cases, and here is how those turned out." Adoption and revenue both increased.
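Here is a minimal sketch of that "refer, don't answer" pattern: retrieve the most similar past cases so the human decides with context, rather than emitting a verdict. The feature vectors, case IDs and outcomes are invented for the example:

```python
# Instead of a recommendation, surface precedents: the nearest past cases
# and how they were resolved. All data below is hypothetical.
import numpy as np

past_cases = {
    # case_id: (feature_vector, "how it was resolved")
    "case-101": (np.array([0.90, 0.10, 0.40]), "approved after manual review"),
    "case-102": (np.array([0.20, 0.80, 0.50]), "rejected: missing documents"),
    "case-103": (np.array([0.85, 0.15, 0.35]), "approved"),
}

def similar_cases(query: np.ndarray, k: int = 2):
    """Return the k past cases closest to the query (cosine similarity)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = [(cosine(query, vec), case_id, outcome)
              for case_id, (vec, outcome) in past_cases.items()]
    return sorted(scored, reverse=True)[:k]

# The UI shows precedents, not a verdict: the user stays the decider.
for score, case_id, outcome in similar_cases(np.array([0.88, 0.12, 0.40])):
    print(f"{case_id} (similarity {score:.2f}): {outcome}")
```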

Consider Your Non-Users and Non-Buyers

It is a known problem in enterprise technology that products can easily end up serving the CEO who buys them rather than the end user. This is even more problematic in the AI space, where a solution is often part of a larger system that interfaces with a few direct users and many indirect ones.

Take, for example, the controversy that arose when Starbucks began using automated scheduling software to allocate shifts. The scheduler optimized for efficiency while completely disregarding working conditions. After a successful labor petition and a high-profile New York Times article, baristas' input was taken into account, and morale and productivity improved.

Instead of taking a customer literally on what they ask you to solve, consider mapping out all the stakeholders involved and understanding their needs before you decide what your AI will optimize for. That way, you'll avoid inadvertently building a product that is needlessly harmful, and you may even uncover a better business opportunity.

One founder we spoke with took this approach to heart, camping out next to their users to understand their needs before deciding what their product would optimize. The founder then worked with both customers and union representatives to figure out how to create a product that worked for both.

While customers originally wanted a product that would let each user take on a greater workload, those conversations revealed an opportunity to unlock savings for their customers by optimizing existing workloads instead.

This insight allowed the founder to develop a product that empowered the humans in the loop and saved management more money than the solution they thought they wanted.

Be clear about what is AI theater

If you limit the extent to which you promote what your AI can do, you can both avoid irresponsible results and sell your product more effectively.

Yes, hype around AI helps sell products. But it's important to keep those buzzwords from getting in the way of accuracy. Talking up your product's autonomous capabilities may be good for sales, but it can backfire if you apply that rhetoric indiscriminately.

For example, one founder we spoke to found that playing up the power of their AI also raised their customers' privacy concerns. The concern persisted even after the founder explained that the parts of the product in question depend not on data, but on human judgment.

Language choice can help align expectations and build trust in a product. Rather than using the language of autonomy with their users, some of the founders we spoke to found that words like "augmentation" and "assistance" were more likely to drive adoption. This "AI as a tool" framing was also less likely to create the blind trust that can lead to poor outcomes down the line. Being clear can reduce overconfidence in AI and help you sell.

These are just some of the practical lessons real AI founders have learned about reducing the risk of unexpected harms and building more successful products that last. We also believe there is an opportunity for new startups to offer services that make it easier to build ethical AI that is good for business too. So here are a couple of requests for startups:

  • Support the humans in the loop: We need startups that solve the "human in the loop" attention problem. Delegating to humans requires ensuring that they notice when the AI is uncertain so they can intervene meaningfully (see the sketch after this list). If the AI is correct 95% of the time, research shows that people grow complacent and are unlikely to catch the 5% of cases where it is wrong. The solution requires more than just technology; just as social media was more of a psychological innovation than a technological one, we think startups in this space can (and should) grow out of social insight.
  • Standards compliance for responsible AI: There are opportunities for startups that consolidate existing standards around responsible AI and measure compliance with them. The publication of AI standards has surged over the past two years as public pressure for AI regulation keeps growing. A recent survey showed that 84% of Americans think AI should be managed carefully and rate it as a top priority. Companies want to signal that they take this seriously, and demonstrating that they follow standards set by IEEE, CSET and others would be one useful way to do so. Meanwhile, the current draft of the sweeping EU AI Act (AIA) leans heavily on industry standards. If the AIA is passed, compliance will become imperative. Given the market that has grown up around GDPR compliance, we think this is a space to watch.
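On the first request, here is a minimal sketch of the escalation pattern at the heart of the attention problem; the Prediction type, the 95% threshold and the blocking ask_human callback are illustrative assumptions, not a prescription:

```python
# Route model outputs so that low-confidence cases demand explicit human
# review instead of silently shipping a default answer. Thresholds and
# names are hypothetical and would be tuned per deployment.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0-1.0

def route(pred: Prediction,
          ask_human: Callable[[Prediction], str],
          auto_threshold: float = 0.95) -> str:
    """Auto-apply confident predictions; escalate the rest.

    Below the threshold there is NO default: the human must supply a
    decision, which fights the complacency that sets in when a model
    is right 95% of the time.
    """
    if pred.confidence >= auto_threshold:
        return pred.label
    return ask_human(pred)  # blocking: no answer until a human decides

# Usage: in production, ask_human would enqueue a review task.
decision = route(
    Prediction(label="fraud", confidence=0.62),
    ask_human=lambda p: input(f"Model unsure ({p.confidence:.0%}) - decide: "),
)
```

The interesting startup problem is everything around this function: making sure the escalated 5% actually gets a human's full attention rather than a reflexive click.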

Whether you're trying out one of these tips or starting one of these companies, simple responsible AI practices can help you unlock immense business opportunities. To avoid building harmful products, you need to be thoughtful in how you deploy AI.

Fortunately, this thoughtfulness will pay dividends when it comes to the long-term success of your business.
