22.02.2024 The Transformation of the Trusted AI Committee to Responsible AI as a Generative AI Commons Workstream – LFAI & Data (lfaidata.foundation)

We switched to the new LFX system for meeting scheduling and recording. All previous mailing list subscribers should be listed as LFX Members of the Committee. At https://openprofile.dev/ you should be able to see all the recordings.



(Screenshot of the openprofile.dev calendar)



Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops. 

  • Focus of the committee is on policies, guidelines, tooling and use cases by industry

  • Survey and contact current open source Trusted AI related projects to join LF AI & Data efforts 

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data

  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology




Assets




Meetings



Trusted AI Committee Monthly Meeting - 4th Thursday of the month (additional meetings as needed)

  • 10 AM ET USA (reference time; for all other time zones, check the conversion for daylight saving changes)
  • 10 PM Shenzhen, China
  • 7:30 PM India
  • 4 PM Paris
  • 7 AM PT USA (updated for daylight saving time as needed)

Zoom channel:

https://zoom-lfx.platform.linuxfoundation.org/meeting/94505370068?password=bde61b75-05ae-468f-9107-7383d8f3e449







Committee Chairs

Name | Region | Organization | Email Address | LF ID | LinkedIn
Andreas Fehlner | Europe | ONNX | fehlner@arcor.de | | https://www.linkedin.com/in/andreas-fehlner-60499971
Susan Malaika | America | IBM | malaika@us.ibm.com | Susan Malaika (but different email address) | https://www.linkedin.com/in/susanmalaika
Suparna Bhattacharya | Asia | HPE | suparna.bhattacharya@hpe.com | Suparna Bhattacharya | https://www.linkedin.com/in/suparna-bhattacharya-5a7798b
Adrian Gonzalez Sanchez | Europe | HEC Montreal / Microsoft / OdiseIA | adrian.gonzalez-sanchez@hec.ca | Adrian Gonzalez Sanchez (but different email address) | https://www.linkedin.com/in/adriangs86




Participants

Initial Organizations Participating: IBM, Orange, AT&T, Amdocs, Ericsson, TechM, Tencent


Name | Organization | Email Address | LF ID
Ofer Hermoni | PieEye | oferher@gmail.com | Ofer Hermoni
Mazin Gilbert | ATT | mazin@research.att.com | ...
Alka Roy | Responsible Innovation Project | alka@responsibleproject.com | ...
Mikael Anneroth | Ericsson | mikael.anneroth@ericsson.com | ...
Alejandro Saucedo | The Institute for Ethical AI and Machine Learning | a@ethical.institute | Alejandro Saucedo
Jim Spohrer | Retired IBM, ISSIP.org | spohrer@gmail.com | Jim Spohrer
Saishruthi Swaminathan | IBM | saishruthi.tn@ibm.com | Saishruthi Swaminathan
Susan Malaika | IBM | malaika@us.ibm.com | sumalaika (but different email address)
Romeo Kienzler | IBM | romeo.kienzler@ch.ibm.com | Romeo Kienzler
Francois Jezequel | Orange | francois.jezequel@orange.com | Francois Jezequel
Nat Subramanian | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com | Natarajan Subramanian
Han Xiao | Tencent | hanhxiao@tencent.com | ...
Wenjing Chu | Futurewei | chu.wenjing@gmail.com | Wenjing Chu
Yassi Moghaddam | ISSIP | yassi@issip.org | Yassi Moghaddam
Animesh Singh | IBM | singhan@us.ibm.com | Animesh Singh
Souad Ouali | Orange | souad.ouali@orange.com | Souad Ouali
Jeff Cao | Tencent | jeffcao@tencent.com | ...
Ron Doyle | Broadcom | ron.doyle@broadcom.com |




Sub Categories

  • Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations (a minimal fairness-metric sketch follows this list)
  • Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks
  • Explainability: Methods to make AI model outcomes and decision recommendations understandable/interpretable for the relevant personas/roles, including ranking and debating results/decision options
  • Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
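
To make the Fairness sub-category concrete, here is a minimal sketch of computing dataset-level bias metrics with AI Fairness 360 (an LF AI & Data project). It assumes the aif360 and pandas packages are installed; the toy dataframe, column names, and group definitions are illustrative only and are not taken from committee materials.

```python
# Minimal sketch, assuming aif360 and pandas are installed.
# Toy data: "sex" is the protected attribute (1 = privileged), "approved" is the label.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "sex":      [1, 1, 1, 0, 0, 0, 1, 0],
    "income":   [50, 60, 55, 40, 42, 38, 70, 45],
    "approved": [1, 1, 0, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Two common dataset-level fairness metrics.
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```

A value of 0 for statistical parity difference (or 1 for disparate impact) would indicate parity between the two groups on this toy dataset; mitigation algorithms in the same library can then be applied if the gap is unacceptable.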




Projects




Meeting Content (minutes / recording / slides / other)


Date

Agenda/Minutes

Thu, Sep 28 @ 4:00 pm

Zoom Recording Link: https://zoom.us/rec/play/9wmVWdg8wlCuv3E8CVNfKI4uxZA-lHC5RCdwZikVHj4zb3cvvQVw7sE0DQ2vw7XgXT2UgmrFelOa3FEW._gQfopF1nWY820Pf?canPlayFromShare=true&from=share_recording_detail&continueMode=true&componentName=rec-play&originRequestUrl=https%3A%2F%2Fzoom.us%2Frec%2Fshare%2FWCKI--kYX2WJWEm_pa39jWYw8YCxMxxpFc5nHXqzXXaE6Uo_6SUQMLuX1rznqX8s.LBD6697U22Q-rk55


The recording can also be accessed at http://openprofile.dev

Trusted AI Committee


 

* Preparing for September 7 TAC session
* Migration to LFX
* Anything on Generative AI

GMT20230824-140219_Recording_3686x2304.mp4

 

Friday August 11, at 10am US Eastern 

 Working Session to prepare for the TAC Trusted AI Committee presentation on Thursday September 7 at 9am US Eastern

 

  • Agenda
  • Suparna @Suparna Bhattacharya & Gabe @Rodolfo (Gabe) Esteves etc. report on CMF developments with ONNX - 10 mins
  • Suparna @Suparna Bhattacharya & Vijay @Vijay Arya etc report on CMF developments with AI-Explainability-360  - 10 mins
  • Vijay @Vijay Arya on what's new in AI-Explainability-360 https://github.com/Trusted-AI/AIX360/releases including the time series feature - 10 mins
  • Question to @Vijay Arya from @Jen Shelby : Is this something we want to create a social post for?
  • Andreas's guest @Andreas Fehlner (Martin Nocker)- Homomorphically Encrypted Machine learning with ONNX models - 15 mins
  • Review the TAC materials - 10 mins (Initial draft agenda attached in slack) - We'll discuss the date and the content
  • Adrian @Adrian Gonzalez Sanchez, Ofer @Ofer Hermoni, Ali @Ali Hashmi, Phaedra @Phaedra Boinodiris - Blog news and anything else
    --------------
    Presentation (Martin Nocker, 15min): HE-MAN – Homomorphically Encrypted MAchine learning with oNnx models. Machine learning (ML) algorithms play a crucial role in the success of products and services, especially with the abundance of data available. Fully homomorphic encryption (FHE) is a promising technique that enables individuals to use ML services without sacrificing privacy. However, integrating FHE into ML applications remains challenging. Existing implementations lack easy integration with ML frameworks and often support only specific models. To address these challenges, we present HE-MAN, an open-source two-party machine learning toolset. HE-MAN facilitates privacy-preserving inference with ONNX models and homomorphically encrypted data. With HE-MAN, both the model and input data remain undisclosed. Notably, HE-MAN offers seamless support for a wide range of ML models in the ONNX format out of the box. We evaluate the performance of HE-MAN on various network architectures and provide accuracy and latency metrics for homomorphically encrypted inference. (A conceptual sketch of encrypted inference follows below.)
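
The following is a conceptual sketch of the privacy-preserving inference idea described in the abstract above. It does not use HE-MAN's actual API (which is not shown in these notes); instead it uses the TenSEAL library, assumed to be installed, to evaluate a single linear scoring step on CKKS-encrypted input. All values are made up for illustration.

```python
# Conceptual sketch only (not HE-MAN's API): inference on homomorphically
# encrypted data using TenSEAL, for a single linear layer.
import tenseal as ts

# Client side: create a CKKS context and encrypt the input features.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

features = [0.2, 0.5, 0.1, 0.7]            # plaintext input stays with the client
enc_features = ts.ckks_vector(context, features)

# Server side: evaluate a linear layer (weights in the clear, data encrypted).
weights = [0.4, -0.3, 0.8, 0.1]
enc_score = enc_features.dot(weights)      # computed entirely on ciphertext

# Client side: decrypt the result; the server never sees the raw features.
print("decrypted score:", enc_score.decrypt())
```

In HE-MAN the same idea is applied to full ONNX graphs rather than a single hand-written layer, which is what makes the toolset described above notable.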

Trusted_AI_Committee_2023_07_27_Handout_ONNX_Nocker.pdf

TrustedAI_20230727.mp4 - Topics included CMF Developments with ONNX and Homomorphically Encrypted Machine learning with ONNX models


 

CMF and AI Explainability led by Suparna Bhattacharya and Vijay Arya - with MaryAnn, Gabe and Soumi Das
ACTION for the Committee - identify 2 use cases that drive the integration of ONNX, CMF, Explainability - illustrating the benefits 
Trusted AI Working Session-CMF and AI Explainability-20230713.mp4

 


MarkTechPost,  Jean-marc Mommessin

Active and Continuous Learning for Trusted AI, Martin Foltin 

AI Explainability, Vijay Arya

Recording (video) Zoom

 

  • Open Voice Network Follow-On - 20 minutes @Lucy Hyde invites John Stine & Open Voice Network folks e.g., Nathan Southern

Recording (video) Zoom

Recording (video) confluence

Slides - ONNX 

 

Part 0 - Metadata / Lineage / Provenance topic from Suparna Bhattacharya & Aalap Tripathy & Ann Mary Roy & Professor Soranghsu Bhattacharya & Team
Part 1 - Open Voice Network - Introductions https://openvoicenetwork.org  
  • Open Voice Network - Voice assistance worthy of user trust, created in the inclusive, open-source style you’d expect from a community of The Linux Foundation.

Part 2 - Identify small steps/publications to motivate concrete actions over 2023 in the context of these pillars:
Technology | Education | Regulations | Shifting power : Librarians / Ontologies / Tools
Possible Publications / Blogs
  • Interplay of Big Dreams and Small Steps | Inventory of trustworthy tools and how they fit into the areas of ACT | Metadata, Lineage and Provenance tools in particular
  • Giving power to people who don't have it - Phaedra @Phaedra Boinodiris and Ofer @Ofer Hermoni
  • Why give power; the vision - including why it is important to everyone, including companies
  • More small steps to take / blogs-articles to write
Part 3 - Review goals of committee taken from https://wiki.lfaidata.foundation/display/DL/Trusted+AI+Committee - including whether we want to go ahead with badges
  • Overview
  • Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops.
  • Focus of the committee is on policies, guidelines, tooling and use cases by industry
  • Survey and contact current open source Trusted AI related projects to join LF AI & Data efforts
  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data
  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology
Part 4 - Any highlights from the US Senate Subcommittee on the Judiciary - Oversight on AI hearing

Part 5 - Any Other Business


Recording (video)

 

Join the Trusted AI Committee at the LF-AI for the upcoming session on April 27 at 10am Eastern where you will hear from:

  1. Adrian Gonzalez Sanchez: From Regulation to Realization – Linking ACT (European Union AI Act) to internal governance in companies
  2. Phaedra Boinodiris: Risks of generative AI and strategies to mitigate
  3. All: Explore what was presented and suggest next steps
  4. All : Update the Trusted AI Committee list https://wiki.lfaidata.foundation/display/DL/Trusted+AI+Committee
  5. Suparna Bhattacharya: Call to Action 

------------------------------------------------------------------------------------
We all have prework to do! Please listen to these videos:


We look forward to your contributions

Recording (video)

Recording (audio)

 

Proposed agenda (ET)

  • 10am - Kick off Meeting
    • Housekeeping items:
      • Wiki page update
      • Online recording
      • Others
  • 10:05 - Generative AI and New Regulations - Adrian Gonzalez Sanchez 
    • Presentation (PDF) 


  • 10:15 - Discussion
  • 10:30 - Formulate any next steps
  • 10:35 - News from the open source Trusted AI projects
  • 10:45 - Any other business

Call Lead: Susan Malaika 

 

Invitees: Beat Buesser; Phaedra Boinodiris ; Alexy Khrabov, David Radley, Adrian Gonzalez Sanchez

Optional: Ofer Hermoni , Nancy Rausch, Alejandro Saucedo, Sri Krishnamurthy, Andreas Fehlner, Suparna Bhattacharya

Attendees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, Adrian Gonzalez Sanchez, Ofer Hermoni, Andreas Fehlner


Discussion

  • Phaedra - Consider: Large Language Models, their opportunities and risks, in the context of trusted AI, and how to mitigate risk
  • Adrian: European Union AI Act https://artificialintelligenceact.eu/
  • Suparna - What this means for foundation models in general, where language models are one example. Another related area in this context is data-centric trustworthy AI
  • Alexy - Science - More work on understanding in a scientific way (e.g., validation in a medical context); software engineering is ad hoc, driven by practice
  • Fast Forward - what’s next for ChatGPT

  • Andreas: File formats for models - additional needs for Trustworthy AI, in addition to lineage (see the metadata sketch after this list)
  • Idea: Create a PoV - Trustworthy AI for Generative applications - take an AI Act approach
  • Gaps in the EU AI Act: https://venturebeat.com/ai/coming-ai-regulation-may-not-protect-us-from-dangerous-ai/ is a useful source
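
As a small illustration of the file-format point above: ONNX models can carry free-form key/value metadata that could hold lineage or provenance information alongside the graph. This is a minimal sketch assuming the onnx Python package is installed and a model.onnx file exists; the metadata keys are hypothetical examples, not an agreed standard.

```python
# Minimal sketch, assuming the onnx package and an existing "model.onnx" file.
# The metadata keys below are illustrative, not a standard.
import onnx

model = onnx.load("model.onnx")

# ONNX models can carry free-form key/value metadata alongside the graph,
# which can record provenance/lineage information with the model file.
for key, value in {
    "training_dataset": "dataset-name@v1.2",
    "training_commit": "abc1234",
    "license": "Apache-2.0",
}.items():
    entry = model.metadata_props.add()
    entry.key = key
    entry.value = value

onnx.save(model, "model_with_metadata.onnx")

# Reading the metadata back:
for prop in onnx.load("model_with_metadata.onnx").metadata_props:
    print(prop.key, "=", prop.value)
```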


Next steps

  • Set up a series of calls through the LF-AI Trusted AI mechanisms to have the following presenters
  • Run 3 sessions with presentations
  • Then create a presentation and/or document
  • Create the synthesis - A Point of View on: Trustworthy AI for Generative Applications

  • Occasionally the open source project leaders are invited to the call …
  • ACTION: Adrian will schedule next meeting

 

malaika@us.ibm.com has scheduled a call on Monday October 31, 2022 to determine next steps for the committee due to a change in leadership - please connect with Susan if you would like to be added to the call

The group met once a month - on the third Thursday of each month at 10am US Eastern. See notes below for prior calls. Activities of the committee included:

  • Reviewing all trusted AI related projects at the LF-AI and making suggestions - e.g.,
  • AI Fairness 360
  • AI Explainability
  • Adversarial Robustness Toolbox
  • Related projects such as Egeria, Open Lineage etc
  • Reviewing the activities of the subgroups - known as working groups - and making suggestions
  • MLSecOps WG
  • Principles WG (completed)
  • Highlighting new projects that should/could be suitable for the LF-AI
  • Identifying trends in the industry in Trusted AI that should be of interest to the LF-AI
  • Initiating Working Groups within the Trusted AI Committee at the LF-AI to address particular issues


Reporting to:

  • The LF-AI Board of Governors on the activities of the Committee and taking guidance from the board - next meeting on Nov 1, 2022
  • The LF-AI TAC - making suggestions to the TAC and taking guidance


Questions:

  • Should the Trusted AI Committee continue to meet once a month with similar goals?
  • Who will:
    • Identify the overall program and approach for 2023 - should that be the subject of the next Trusted AI Committee call?
    • Host the meetings?
    • Identify the speakers?
    • Make sure all is set with speakers and community?
  • Should the Trusted AI Committee take an interest in the activities of the PyTorch Consortium?


Invitees and interested parties on the call on October 31, 2022

  • HPE Suparna Bhattacharya
  • IBM Beat Buesser, David Radley, Christian Kadner, Ruchi Mahindru, Susan Malaika, Cheranellore (Vasu) Vasu, William Bittles
  • Beat leads the Adversarial Robustness Toolbox - a graduated project at the LF-AI
  • David works on Egeria Project - a graduated project at the LF-AI
  • William is involved in open lineage
  • Susan co-led Principles WG - a subgroup of Trusted AI Committee - work completed
  • Institute for Ethical AI Alejandro Saucedo Alejandro is also at Seldon - Leads MLSec Working Group - a subgroup of Trusted AI Committee
  • QuantUniversity Sri Krishnamurthy
  • SAS Nancy Rausch Currently chair of LF-AI and Data TAC
  • Trumpf Andreas Fehlner

 

Recording (video)

Recording (audio)

Principles report (repeats)

Recording (audio)

Recording (video)

LFAI Trusted AI Committee Structure and Schedule: Animesh Singh

Real World Trusted AI Usecase and Implementation in Financial Industry: Stacey Ronaghan

AIF360 Update: Samuel Hoffman

AIX360 Update: Vijay Arya

ART Update: Beat Buesser

 

Recording (audio)

Recording (video)

Setting up Trusted AI TSC

Principles Update

Coursera Course Update

Calendar discussion - Europe and Asia Friendly

Recording

Walkthrough of LFAI Trusted AI Website and github location of projects

Trusted AI Video Series

Trusted AI Course in collaboration with University of Pennsylvania

Recording

Z-Inspection: A holistic and analytic process to assess Ethical AI - Roberto Zicari - University of Frankfurt, Germany

Age-At-Home - the exemplar of TrustedAI, David Martin, Hacker in Charge at motion-ai.com

Plotly Demo with SHAP and AIX360 - Xing Han, Plot.ly (a minimal SHAP sketch follows the list below)

  • Explain the Tips dataset with SHAP: Article, Demo
  • Heart Disease Classification with AIX360: Post, Demo
  • Community-made SHAP-to-dashboard API: Post
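
For readers who have not used SHAP before, the following minimal sketch shows the basic workflow behind the demos linked above. It assumes shap, scikit-learn, and numpy are installed and uses a tiny synthetic dataset rather than the Tips or heart disease data.

```python
# Minimal SHAP sketch, assuming shap, scikit-learn, and numpy are installed.
# The synthetic data below is illustrative only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # three made-up features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values (per-feature contributions) for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])     # contributions for the first 5 rows

print(shap_values.shape)                       # (5, 3): one value per row and feature
```

The linked dashboards essentially plot these per-feature contribution values for each prediction.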

Swiss Digital Trust Label - short summary - Romeo Kienzler, IBM

  • Announced by Swiss President Doris Leuthard at WEF 2nd Sept. 2019
  • Geneva based initiative for sustainable and fair treatment of data
  • Among others, these companies and organizations are already involved: Google, Uber, IBM, Microsoft, Facebook, Roche, Mozilla, Booking.com, UBS, Credit Suisse, Zurich, Siemens, IKRK, EPFL, ETH, UNO
  • Booking.com, Credit Suisse, IBM, Swiss Re, SBB, Kudelski and the Canton of Waadt will deliver a pilot

  • Montreal AI Ethics Institute Presentation

  • Status on Trusted AI projects in Open Governance

  • Principles Working Group update - Susan Malaika

  • Trusted AI Committee activities summarization for Governing board - Animesh Singh

  • Swaminathan Chandrasekaran, KPMG Managing Director would be talking about how they are working with practitioners in the field on their AI Governance and Trusted AI needs.

  • Susan Malaika from IBM will be giving an update from the Principles Working Group, and progress there.

  • Saishruthi Swaminathan to do a presentation on AI Transparency in Marketplace

  • Francois Jezequel to present on Orange Responsible AI initiative.

  • Andrew and Tommy did a deep dive in Kubeflow Serving and Trusted AI Integration

  • Principles Working Group discussion

  • AI for People is focused on the intersection of AI and Society with a lot of commonality with the focus areas of our committee. Marta will be joining to present their organization and what they are working on.

  • Proposal of Use Case to be tested by AT&T using Apache Nifi and AIF360  (romeo)

  • Introduction to baseline data set for AI Bias detection (romeo)

  • Exemplar walk-through: retrospective bias detection with Apache Nifi and AIF360 (romeo)

  • Principles Working Group Status Update (Susan)

Discuss AIF360 work around SKLearn community (Samuel Hoffman, IBM Research demo)

Discuss "Many organizations have principles documents, and a bit of backlash - for not enough practical examples."

Resource

  • Watch updates on production ML with Alejandro Saucedo done with Susan Malaika on the Cognitive Systems Institute call:

Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20200123.md

Proposed Agenda

  • Meeting notes are now in GitHub here: https://github.com/lfai/trusted-ai/tree/master/committee-meeting-notes

  • Since we don't record and share our committee meetings, should our committee channel in Slack be made private for asynchronous conversation outside these calls?

  • Introduction of MLOps  in IBM Trusted AI projects

  • Design thinking around integrating Trusted AI projects in Kubeflow Serving

Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191212.md

Proposed Agenda:

  • Jim to get feedback from LFAI Board meeting

  • Romeo to demo AIF360 - NiFi integration + feedback from his talk at OSS Lyon

  • Alka to present AT&T Working doc

  • Discuss holiday week meeting potential conflicts (28 Nov - US Holiday, 26 Dec - Day after Christmas)

Notes from call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191114.md

Attendees:

Ofer, Alka, Francois, Nat, Han, Animesh, Jim, Maureen, Susan, Alejandro

Summary

  • Animesh walked through the draft slides (to be presented in Lyon to LFAI governing board about TAIC)

  • Discussion of changes to make

  • Discussion of members, processes, and schedules

Detail

  • Jim will put slides in Google Doc and share with all participants

  • Susan is exploring a slack channel for communications

  • Trust and Responsibility, Color, Icons to add Amdocs, Alejandro's Institute

  • Next call Cancelled (31 October) as many committee members will be at OSS EU and TensorFlow World

Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191017.md

Attendees:

Animesh Singh (IBM), Maureen McElaney (IBM), Han Xiao (Tencent), Alejandro Saucedo, Mikael Anneroth (Ericsson), Ofer Hermoni (Amdocs)

Animesh will check with Souad Ouali to ensure Orange wants to lead the Principles working group and host regular meetings. Committee members on the call were not included in the email chains that occurred so we need to confirm who is in charge and how communication will occur.

The Technical working group has made progress but nothing concrete to report.

A possible third working group could form around AI Standards.

Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191003.md

 

Attendees: Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.

  • Goals defined for the meeting:

Working Group Names and Leads have been confirmed:

  • Principles, lead: Souad Ouali (Orange France) with members from Orange, AT&T, Tech Mahindra, Tencent, IBM, Ericsson, Amdocs.
  • Technical, lead: Romeo Kienzler (IBM Switzerland) with members from IBM, AT&T, Tech Mahindra, Tencent, Ericsson, Amdocs, Orange.
  • Working groups will have a weekly meeting to make progress.  First read out to LF AI governing board will be Oct 31 in Lyon France.
  • The Principles team will study the existing material from companies, governments, and professional associations (IEEE), and come up with a set that can be shared with the technical team for feedback as a first step. We need to identify and compile the existing materials.
  • The Technical team is working on an Acumos + Angel + AIF360 integration demonstration.

Possible Discussion about third working group

Discussion about LFAI day in Paris


More next steps

Will begin recording meetings in future calls.


Notes from call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20190919.md


6 Comments

  1. Here are some additional notes taken by Jim Spohrer (IBM) <spohrer@us.ibm.com> - https://docs.google.com/document/d/1QJ3t0YD3mzOa-gajjbHtgjXEO7_LPY1gmRW1SHcoNo4/edit

  2. This document: https://docs.google.com/document/d/1QJ3t0YD3mzOa-gajjbHtgjXEO7_LPY1gmRW1SHcoNo4/edit?usp=sharing

    LF AI - Trusted AI Committee Call - Every other week
    (may change)

    Ibrahim Haddad (LF): I will set up "Trusted AI" Zoom to record future calls
    Ibrahim: Attendees should go to Wiki for agenda: https://wiki.lfai.foundation/display/DL/LF+AI+Trusted+AI+Committee
    
Jim Spohrer (IBM): We should rename ourselves when we join zoom - so people can easily see your company - I just did that (a good point to add to meeting start when we review the agenda)

    
Nat Subramanian (Tech Mahindra): Wiki todo: Need a section of wiki to highlight guest speakers

    
Ibrahim: Zoom todo: will set up a zoom for Trusted AI - and in future co-chairs will be able to record

    Animesh Singh (IBM): Showed a Google Doc with agenda - need to get link
    Animesh: Two working groups (WG) will each have a lead, core members, and optional members, and will meet separately
    Animesh: WG Principles and WG Technical
    Animesh: WG Technical focus - landscape projects in the Trusted AI area, and specifically those that are coming into LF AI to support Trusted AI work
    Animesh: Which OS projects will come into LF AI and which are being integrated into LF AI projects
    Animesh: Face-to-face meetings for WGs will need to be planned by WG leads
    Animesh: WGs should have an outcome and timetable
    Animesh: Sharing a Google Doc with the above information

    Maureen McElaney (IBM): I’ve added a loose agenda for today’s call to the wiki. Since I’m on mobile can someone take on the responsibility of adding any further notes there?

    Jim Spohrer (IBM): I am taking notes and can clean them up and upload them, link from Wiki to notes

    
Animesh: Introducing Romeo Kienzler (IBM Switzerland) who added LF AI meeting in Paris hosted by Orange
    Animesh: Romeo has created top data science and AI Coursera courses

    Alka Roy (AT&T Innovation Center) - 7:15 AM PT - Had trouble with new link - just joining.
    
Romeo Kienzler (IBM Switzerland): Working on pipelines for technical working group

    Animesh: Can we use Slack to keep an ongoing work cadence?
    Nat: Not Slack because of licensing complications. ICRC is being used instead. Communication channel gap.
    Nat: Acumos has SCRUM meetings almost every day
    Nat: Acumos meetings - we can re-use the time slots and leverage them for the Trusted AI WGs.
    Nat: Tencent Angel team is interested in onboarding a model

    
Nat: two working groups - Principles and Technical
    
Nat: Synergies between Acumos + Angel + AIF360
    
Nat: Principles works on policies and strategies


    Alka Roy (AT&T Innovation Center) - 7:30 AM PT: Good morning!

    Alka: How to get to a set of criteria and guidelines?
    Animesh: job of WG Principles
    
Alka: Principles team creates, and technical team provides feedback/adopts
    
Animesh: yes, criteria evolve with feedback from technical


    Nat Subramanian (Tech Mahindra): Romeo is IBM in Switzerland?

    Nat: China has most difficulty joining with times we have proposed so far (early California)

    Animesh: Romeo’s time might be better for China and India meetings

    
Animesh: Let’s identify the right subset of people, and let them find a good weekly time; since Nat knows the folks, he can best find a way to loop in Romeo. I am OK with calls after 10pm PT

    Nat: I will get a time for Technical WG

    
Animesh: I will add Romeo's email to the Wiki - his email is romeo.kienzler@ch.ibm.com

    Francois Jezequel (Orange) - 7:32 AM PT: Souad is willing to continue, just not available today

    Animesh: Souad
    Francois: “Sue-wad” rhymes with “Squad”
    Alka: IEEE has good work we can borrow from

    
Alka: how to apply principles, practical technical and principle WG interactions

    Alka: going beyond theory into practice will be helpful
    
Animesh: Required people and optional people

    Animesh: Lineage -> Certification -> Badging

    
Animesh: Can help connect with IBM Research and ATT and rest of Principles committee

    
Animesh: Factsheets project is relevant - Factsheets: Increasing Trust in AI Services Through Supplier Documents of Conformity - Certifications - https://arxiv.org/pdf/1808.07261.pdf and https://www.ibm.com/blogs/research/2018/08/factsheets-ai/
    
Alka: Please add me to the Principles (optional), and I will find the right ATT required

    Animesh: Alka is in the Bay Area - for a visit sometime to discuss
    Alka: Rueben (AT&T)
    
Alka: I will make sure AT&T is engaged

    Nat: I have met with Rueben in NJ

    
Jeff Cao (Tencent) - 7:44 AM PT: I have an idea for a technical person - Fitz Wang (Tencent)
    
Nat: I will reach out to Fitz, and please share name of others


    Mikael Anneroth (Ericsson) - 7:45 AM PT: please add Ericsson to Principles work group - would like to share and discuss - I am in Stockholm Sweden

    Jeff Cao (Tencent): Han Xiao is on the list

    
Animesh: Susan Malaika (IBM) is also interested

    Animesh: If Orange will drive Principles

    Jeff Cao (Tencent): Tencent can join the principles group


    
Francois Jezequel (Orange): Yes, Souad is interested in leading Principles WG
    
Alka: identify gaps since a lot of work has already been done

    
Francois: Minimum intersection that can be operationalized (how to make it practical in operation)

    Alka: Please let’s make the goal in writing, and when to have meeting to share each perspective, consolidate, and agree

    
Animesh: Francois or Souad can send a kick-off email, and we can have each group add their members (e.g., IBM would include Susan, Maureen, Jim, etc.)

    Animesh: Shared several Google doc links and will upload them to Wiki

    
Alka: Strategy meeting - f2f being planned - in October - by that time converged on principles - progress to report

    Ofer: End of October, OSS Oct 31, full day governing board meeting
    
Nat: details readout in Oct 31
    
Alka: good deadline

    
Alka: Trusted and Responsible AI added to the mission for LF AI - any objections?
    All: No objections noted

  3. Working Group Names and Leads:

    Principles, lead: Souad Ouali (Orange France) with members from Orange, AT&T, Tech Mahindra, Tencent, IBM, Ericsson, Amdocs.


    Technical, lead: Romeo Kienzler (IBM Switzerland) with members from IBM, AT&T, Tech Mahindra, Tencent, Ericsson, Amdocs, Orange.


    Working groups will have a weekly meeting to make progress.  First read out to LF AI governing board will be Oct 31 in Lyon France.

    The Principles team will study the existing material from companies, governments, and professional associations (IEEE), and come up with a set that can be shared with the technical team for feedback as a first step. We need to identify and compile the existing materials.

    The Technical team is working on an Acumos + Angel + AIF360 integration demonstration.

  4. Thanks Souad for your note below to committee members - just re-reading it...

    (1) Great to keep a global balance on the committee

    (2) Look forward to definitions you are drafting

    (3) Collecting material - see some links below.

    (4) Cross analysis - Harvard study good starting point.

    (5) Pragmatic approach - what real-world enterprise use cases?

    (6) Open source tools - how tools and Trusted AI Workflows relate to above use cases?

    (7) Watch on LFAI landscape projects - which projects have been checked against principles?

    (8) How to ensure trust in open source package - how to organize audit to ensure files not corrupted? See Factsheets link below for a start.


    Some useful links:

    Globally Diverse - LFAI Trusted AI Committee: https://wiki.lfai.foundation/display/DL/Trusted+AI+Committee

    University: Harvard AI Principles meta-analysis: 32 sets of principles side by side, with 8 themes: https://ai-hr.cyber.harvard.edu/primp-viz.htm

    Company: AT&T (Tom Moore, Chief Privacy Officer): https://about.att.com/innovationblog/2019/05/our_guiding_principles.html

    Company: IBM Everyday Ethics for AI: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf

    Company: IBM NIST AI Standards response: https://www.nist.gov/sites/default/files/documents/2019/06/06/nist-ai-rfi-ibm-001.pdf

    Company: IBM Factsheets: Increasing Trust in AI Services Through Supplier Documents of Conformity - Certifications - https://arxiv.org/pdf/1808.07261.pdf

    Company: Tencent sent some files - but I do not have a URL to the documents.

    Trusted AI group calendar: https://wiki.lfai.foundation/pages/viewpage.action?pageId=12091895 (Please subscribe)
    Mailing list: https://lists.lfai.foundation/g/trustedai-committee (Please subscribe)
    GitHub: https://github.com/lfai/trusted-ai/


    Q: Where is the best place for us to upload documents to share with the Trusted AI Committee/Principles WG?

  5. Some notes from Oct 17, 2019 committee call:


    Summary
    - Animesh walked through the draft slides
    (to be presented in Lyon to LFAI governing board about TAIC)
    - Discussion of changes to make
    - Discussion of members, processes, and schedules

    Detail
    - Jim will put slides in Google Doc and share with all participants
    (https://drive.google.com/drive/folders/1RSHBzTj7SRpbioR31JP9ASEjvuV9kqnw)

    - Susan is exploring a slack channel for communications
    - Trust and Responsibility, Color, Icons to add Amdocs, Alejandro's Institute
    


    From chat:
    + @Ofer to PWG

    Alka: helped draft these - and acknowledges need to get everyone’s perspective - asked about feedback loop with Usecases Group

    
Francois: These slides will be a good resource - thank you; now we have to organize the way we contribute to each part of the document - Souad will lead getting consensus and finalizing this draft

    
Francois: Still need to define connection between principles and tooling - and make it concrete
Animesh: Members of UWG?
Animesh: Need members who will create use cases - software and actual industry use cases
Animesh: What are the telco use cases?

    
Alka: personal focus on PWG definition. Will work to gather AT&T use cases and try to find the right active person

    
Nat: interest yes. However, current focus on the release - in 3-4 weeks from now release will be complete for Acumos. Technical people tied up on release.
Nat: Reuben can give guidance, and Nat working on next level of technical experts

    
Han: Jeff and I are working to get more awareness and attract others in Tencent; the open source office and developer teams to get involved. Find real-world use cases

    Francois: Discussion of when PWG will meet - off-line and maybe two meetings. (Alka, Francois, Jeff, Susan)
    Alka: I have material to share with everyone when we can schedule that
    Animesh: Alejandro has great material on GitHub.


  6. January 23 meeting notes:

    Attendees: Animesh Singh (IBM), Samuel Hoffman (IBM), Nat (TechM), Susan Malaika (IBM), Alejandro Saucedo (Ethical AI Institute and Seldon), Maureen McElaney (IBM), Ofer Hermoni, Parag Ved, Yassi Moghaddam (ISSIP), Jim Spohrer (IBM)


    Animesh: Asked Nat for Acumos update

    Nat: Acumos update

    Animesh: Introduced Samuel Hoffman (IBM Research) to give a Trusted AI and scikit-learn demo

    Nat: Will the IBM materials be donated to LF AI? Aligned with IBM Product OpenScale?

    Animesh: Contributed if there is LF AI pull and community adoption

    Jim: Correct, product team at IBM wants to see community pull for it to be contributed (actual use and contributions from community)
    Nat: Mentioned Acumos Gaya - and will send links

    Ofer: Introduce Yassi yourself

    Yassi: introduced herself as exec director of professional association (Jim on Board, Susan lead on AI speaker series)

    Nat: gave history of the LF AI Trusted AI Committee - companies from Asia, USA, Europe involved - two working groups, use cases and principles

    Nat: constantly evolving area

    Ofer: discuss it