Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee's focus is on policies, guidelines, tooling, and use cases by industry

  • Survey and contact current open source Trusted AI-related projects to join the LF AI efforts

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI

  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology

Mail List

Please self subscribe to the mail list here: https://lists.lfai.foundation/g/trustedai-committee

Committee


Participants

  • Initial Organizations Participating: AT&T, Amdocs, Ericsson, IBM, Orange, Tech Mahindra, Tencent

Committee Chairs

Name          | Region        | Organization | Email Address                | LF ID
Animesh Singh | North America | IBM          | singhan@us.ibm.com           |
Souad Ouali   | Europe        | Orange       | souad.ouali@orange.com       |
Jeff Cao      | Asia          | Tencent      | jeffcao@tencent.com          |

Committee Participants
Name              | Organization  | Email Address                           | LF ID
Ofer Hermoni      | Amdocs        | ofer.hermoni@amdocs.com                 |
Mazin Gilbert     | AT&T          | mazin@research.att.com                  |
Alka Roy          | AT&T          | AR6705@att.com                          |
Mikael Anneroth   | Ericsson      | mikael.anneroth@ericsson.com            |
Jim Spohrer       | IBM           | spohrer@us.ibm.com                      |
Maureen McElaney  | IBM           | mmcelaney@us.ibm.com                    |
Susan Malaika     | IBM           | malaika@us.ibm.com                      |
Romeo Kienzler    | IBM           | romeo.kienzler@ch.ibm.com               |
Francois Jezequel | Orange        | francois.jezequel@orange.com            |
Nat Subramanian   | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com  |
Han Xiao          | Tencent       | hanhxiao@tencent.com                    |

Assets

- All the assets being 

Sub Categories

- Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations

- Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks

- Explainability: Methods to make AI model outcomes and decision recommendations understandable and interpretable to the personas/roles involved, including ranking and debating results and decision options

- Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models

Projects


Meetings

How to Join: Contact trustedai-committee@lists.lfai.foundation for more information about how to join. 

Meeting Content (minutes / recording / slides / other):

Date | Minutes

Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.
  • Goals defined for the meeting:
    • Assign chairs to the two working groups:
      1. AI Principles Working Group
      2. AI Use Cases Working Group
    • Possible discussion about a third working group

  • Discussion about LF AI Day in Paris

  • Further next steps:
    • Will begin recording meetings in future calls.