Overview

Below is an overview of the current discussion topics within the LF AI Ethics Committee. Further updates will follow as the committee's work develops.
  • The committee focuses on policies, guidelines, tooling, and use cases by industry
  • Survey and contact current open source Trusted AI-related projects to join LF AI efforts
  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI
  • Create a document that describes the basic concepts and definitions related to Trusted AI and standardizes the vocabulary/terminology

Current Participants

  • AT&T, Amdocs, Ericsson, IBM, Orange, TechM, Tencent

Chairs

Sub Categories:

- Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations (see the illustrative sketch after this list)

- Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks

- Explainability: Methods to enhance the understandability/interpretability of AI model outcomes/decision recommendations for the personas/roles involved in the process, including ranking and debating results/decision options

- Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
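
To make the Fairness category concrete, below is a minimal Python sketch of one commonly used group-fairness metric, statistical parity difference (the gap in favorable-outcome rates between an unprivileged and a privileged group). It is illustrative only: the function name, group labels, and toy data are hypothetical and are not part of any LF AI policy, guideline, or project API.

    from typing import Sequence


    def statistical_parity_difference(
        outcomes: Sequence[int], groups: Sequence[str], privileged: str
    ) -> float:
        """Return P(favorable | unprivileged) - P(favorable | privileged).

        outcomes: 1 = favorable decision, 0 = unfavorable decision.
        groups:   group membership label for each individual.
        """
        priv = [o for o, g in zip(outcomes, groups) if g == privileged]
        unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
        return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)


    if __name__ == "__main__":
        # Toy loan-approval data: 1 = approved, 0 = denied; "A" is the privileged group.
        outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
        groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
        spd = statistical_parity_difference(outcomes, groups, privileged="A")
        print(f"Statistical parity difference: {spd:.2f}")  # values near 0 indicate parity

Open source fairness toolkits generally provide metrics of this kind alongside bias-mitigation algorithms that adjust the data or model when the measured gap is large.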

Working Group:

Name               Organization   Email
Jim Spohrer        IBM
Maureen McElaney   IBM
Susan Malaika      IBM
Getting Involved

If you are interested in getting involved, please email info@lfai.foundation for more information.