
Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee's focus is on policies, guidelines, tooling, and use cases by industry

  • Survey and contact existing open source Trusted AI projects and invite them to join LF AI efforts

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI

  • Create a document that describes the basic concepts and definitions related to Trusted AI and standardizes the vocabulary/terminology

Mail List

Please self-subscribe to the mailing list at https://lists.lfai.foundation/g/trustedai-committee

Or email trustedai-committee@lists.lfai.foundation for more information. 

Participants

Initial Organizations Participating: AT&T, Amdocs, Ericsson, IBM, Orange, TechM, Tencent

Committee Chairs

Name          | Region        | Organization | Email Address          | LF ID
Animesh Singh | North America | IBM          | singhan@us.ibm.com     |
Souad Ouali   | Europe        | Orange       | souad.ouali@orange.com |
Jeff Cao      | Asia          | Tencent      | jeffcao@tencent.com    |

Committee Participants
Name              | Organization  | Email Address                          | LF ID
Ofer Hermoni      | Amdocs        | ofer.hermoni@amdocs.com                |
Mazin Gilbert     | AT&T          | mazin@research.att.com                 |
Alka Roy          | AT&T          | AR6705@att.com                         |
Mikael Anneroth   | Ericsson      | mikael.anneroth@ericsson.com           |
Jim Spohrer       | IBM           | spohrer@us.ibm.com                     |
Maureen McElaney  | IBM           | mmcelaney@us.ibm.com                   |
Susan Malaika     | IBM           | malaika@us.ibm.com                     |
Romeo Kienzler    | IBM           | romeo.kienzler@ch.ibm.com              |
Francois Jezequel | Orange        | francois.jezequel@orange.com           |
Nat Subramanian   | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com |
Han Xiao          | Tencent       | hanhxiao@tencent.com                   |

Assets

- All the assets being 

Sub Categories

- Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations (see the illustrative sketch after this list)

- Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks

- Explainability: Methods to make AI model outcomes and decision recommendations understandable/interpretable to the relevant personas/roles, including ranking and debating results/decision options

- Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
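
As a concrete illustration of the Fairness sub-category, the sketch below computes a simple disparate impact ratio over a toy dataset. It is a minimal example that assumes pandas, made-up column names, and the conventional 0.8 ("four-fifths") threshold; it is not committee-defined tooling or policy.

    # Illustrative only: a minimal disparate impact check on a toy dataset.
    # The column names and the 0.8 threshold are assumptions for this sketch.
    import pandas as pd

    def disparate_impact(df: pd.DataFrame, protected: str, outcome: str) -> float:
        """Ratio of favorable-outcome rates: least-favored group / most-favored group."""
        rates = df.groupby(protected)[outcome].mean()
        return rates.min() / rates.max()

    # Toy data: 'group' is the protected attribute, 'hired' is the favorable outcome.
    data = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],
    })

    ratio = disparate_impact(data, protected="group", outcome="hired")
    print(f"Disparate impact ratio: {ratio:.2f}")  # values well below 0.8 suggest possible bias

A ratio well below 0.8 would flag the dataset or model outcomes for closer review; mitigation methods (for example, reweighing the training data or adjusting decision thresholds) would then apply.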

Projects

Meetings

How to Join: Visit the Trusted AI Committee Group Calendar to self-subscribe to meetings.

Or email trustedai-committee@lists.lfai.foundation for more information. 

Meeting Content (minutes / recording / slides / other):

Date | Minutes
Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.
  • Goals defined for the meeting:
    • Assign chairs to the two working groups:
      1. AI Principles Working Group
      2. AI Use Cases Working Group
    • Possible discussion about a third working group

  • Discussion about LF AI Day in Paris

  • Next steps:
    • Begin recording meetings in future calls.