All technology carries risk. Some technology is worth that risk—when it's managed.
Algorithmic risk, which includes machine learning (ML) narrowly and artificial intelligence (AI) broadly, is new and, relative to other engineering-incurred risks such as seismic readiness, difficult to manage well. Regulators, policymakers, engineers, and academics alike struggle to understand how to manage algorithmic risk, and even which kinds of risk are worth managing.
How should you begin?
Teach your organization the basics
If your organization has a team of policymakers and data scientists, they can learn enough to get started quickly. We can teach your organization the basics of algorithmic bias, including how to identify and mitigate it at a conceptual level, in only a few hours, and we can tailor our materials to your organization's specific needs. With more time, we can even develop hands-on labs focused on the topic areas most relevant to your business.
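To give a flavor of what such a hands-on lab might cover, here is a minimal, purely illustrative sketch of one common first exercise: computing the demographic parity gap, the difference in positive-prediction rates between two groups, for a classifier's outputs. All names and data below are invented for illustration; a real engagement would use your own models and metrics chosen for your context.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between groups 0 and 1.

    predictions: list of 0/1 model outputs (e.g., 1 = loan approved)
    groups: list of 0/1 group labels, aligned with predictions
    """
    rate = {}
    for g in (0, 1):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate[0] - rate[1])

# Hypothetical outputs: group 0 is approved 75% of the time, group 1 only 25%.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = [0, 0, 0, 0, 1, 1, 1, 1]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # → 0.50
```

A gap of zero means both groups receive positive predictions at the same rate; a large gap is a signal to investigate, not by itself proof of unfairness. Which metric is appropriate, and what threshold matters, depends on the application and applicable law, which is exactly what training covers.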
Reach out to an auditing firm
External audits can be helpful, and increasingly they are required by law. However, not all auditing firms are equal, and even among excellent firms, specialties differ. Building internal capacity to understand algorithmic bias and fairness can help your organization evaluate potential auditing firms. As needed, we can refer you to suitable firms and help you evaluate quotes and bids to find the right fit for your problem and models.
Get in touch
Your organization can learn enough about algorithmic bias to help itself. How many hours should you spend training your team, and how much money should you spend on an auditing firm? What specifically should your team learn, and what sort of audit would be most useful? These are questions we can help answer. Get in touch for a free consultation.