Below are titles and descriptions for each of the approved IEEE P7000™ standards projects. Join any of the IEEE P7000™ standards Working Groups already in motion. Your insights can literally set the standards for the future of ethical intelligent and autonomous technologies. Join today!

IEEE P7000™ – Model Process for Addressing Ethical Concerns During System Design outlines an approach for identifying and analyzing potential ethical issues in a system or software program from the outset of the effort. The values-based system design method addresses ethical considerations at each stage of development to help avoid negative unintended consequences while increasing innovation.
Learn More | Join The Working Group


IEEE P7001™ – Transparency of Autonomous Systems provides a standard for developing autonomous technologies that can assess their own actions and help users understand why a technology makes certain decisions in different situations. The project also offers ways to provide transparency and accountability for a system to help guide and improve it, such as incorporating an event data recorder in a self-driving car or accessing data from a device’s sensors.
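To make the event data recorder idea concrete, here is a minimal sketch (illustrative only, not defined by P7001) of a recorder that logs each decision alongside the sensor readings and stated rationale behind it, so the system's behavior can later be explained and audited. All class, field, and file names are assumptions.

```python
import json
import time

class EventDataRecorder:
    """Illustrative decision log for an autonomous system (not a P7001 API)."""

    def __init__(self, path):
        self.path = path

    def record(self, decision, sensor_snapshot, rationale):
        # Append one timestamped entry per decision so the sequence of
        # actions, inputs, and stated reasons can be reviewed later.
        entry = {
            "timestamp": time.time(),
            "decision": decision,
            "sensors": sensor_snapshot,
            "rationale": rationale,
        }
        with open(self.path, "a") as log:
            log.write(json.dumps(entry) + "\n")

# Example: a self-driving car logging a braking decision.
recorder = EventDataRecorder("decisions.log")
recorder.record(
    decision="brake",
    sensor_snapshot={"lidar_min_distance_m": 4.2, "speed_kph": 38},
    rationale="obstacle detected within stopping distance",
)
```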
Learn More | Join The Working Group


IEEE P7002™ – Data Privacy Process specifies how to manage privacy issues for systems or software that collect personal data. It will do so by defining requirements that cover corporate data collection policies and quality assurance. It also includes a use case and data model for organizations developing applications involving personal information. The standard will help designers by providing ways to identify and measure privacy controls in their systems, using privacy impact assessments.
Learn More | Join The Working Group


IEEE P7003™ – Algorithmic Bias Considerations provides developers of algorithms for autonomous or intelligent systems with protocols to avoid negative bias in their code. Bias could include the use of subjective or incorrect interpretations of data, such as mistaking correlation for causation. The project offers specific steps for eliminating negative bias in the creation of algorithms. The standard will also include benchmarking procedures and criteria for selecting validation data sets, establishing and communicating the application boundaries for which the algorithm has been designed, and guarding against unintended consequences.
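As a rough illustration of such benchmarking (the actual procedures and criteria will be defined by the working group), the sketch below computes a classifier's false-positive rate separately for each group in a hypothetical validation set and reports the gap between groups; a large gap is one signal of negative bias.

```python
from collections import defaultdict

def false_positive_rates_by_group(records):
    """Per-group false-positive rates from (group, predicted, actual) records with 0/1 labels."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted, actual in records:
        if actual == 0:
            counts[group]["negatives"] += 1
            if predicted == 1:
                counts[group]["fp"] += 1
    return {
        group: c["fp"] / c["negatives"]
        for group, c in counts.items()
        if c["negatives"] > 0
    }

# Hypothetical validation records: (group, predicted label, true label).
validation = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 0),
]
rates = false_positive_rates_by_group(validation)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))  # a large gap flags potential negative bias
```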
Learn More | Join The Working Group


IEEE P7004™ – Standard on Child and Student Data Governance provides processes and certifications for transparency and accountability for educational institutions that handle child and student data, with the aim of ensuring the safety of students. The standard defines how to access, collect, share, and remove data related to children and students in any educational or institutional setting where their information will be accessed, stored, or shared.
Learn More | Join The Working Group


IEEE P7005™ – Standard on Employer Data Governance provides guidelines and certifications on storing, protecting, and using employee data in an ethical and transparent way. The project recommends tools and services that help employees make informed decisions about their personal information. The standard will help provide clarity and recommendations both for how employees can share their information in a safe and trusted environment and for how employers can align with employees in this process while still using the information needed for regular workflows.
Learn More | Join The Working Group


IEEE P7006™ – Standard on Personal Data AI Agent addresses concerns raised about machines making decisions without human input. The standard aims to educate government and industry on why mechanisms should be put in place to enable the design of systems that mitigate ethical concerns when AI systems can organize and share personal information on their own. Designed as a tool that allows any individual to essentially create their own personal “terms and conditions” for their data, the AI agent will give individuals a technological means to manage and control their identity in the digital and virtual world.
Learn More | Join The Working Group


IEEE P7007™ – Ontological Standard for Ethically Driven Robotics and Automation Systems establishes a set of ontologies, at different abstraction levels, containing the concepts, definitions, and axioms necessary to develop ethically driven methodologies for the design of robots and automation systems.
Learn More | Join The Working Group


IEEE P7008™ – Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems establishes a delineation of typical nudges (those currently in use or that could be created), together with the concepts, functions, and benefits necessary to establish and ensure ethically driven methodologies for the design of the robotic, intelligent, and autonomous systems that incorporate them. “Nudges,” as exhibited by robotic, intelligent, or autonomous systems, are defined as overt or hidden suggestions or manipulations designed to influence the behavior or emotions of a user.
Learn More | Join The Working Group


IEEE P7009™ – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems establishes a practical, technical baseline of specific methodologies and tools for the development, implementation, and use of effective fail-safe mechanisms in autonomous and semi-autonomous systems. The standard includes (but is not limited to): clear procedures for measuring, testing, and certifying a system’s ability to fail safely on a scale from weak to strong, and instructions for improvement in the case of unsatisfactory performance. The standard serves as the basis for developers, as well as users and regulators, to design fail-safe mechanisms in a robust, transparent, and accountable manner.
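One familiar fail-safe pattern consistent with the description above is a watchdog that drives the system to a predefined safe state when the control loop stops reporting healthy operation. The sketch below is only an illustration under assumed names and timing values; P7009's actual methodologies, tests, and certification scale are set by the working group.

```python
import time

SAFE_STATE = "controlled_stop"   # predefined safe state (assumed for illustration)
HEARTBEAT_TIMEOUT_S = 0.5        # assumed maximum gap between health signals

class Watchdog:
    """Force a transition to a safe state when heartbeats stop arriving."""

    def __init__(self, enter_safe_state):
        self.enter_safe_state = enter_safe_state
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called by the control loop after each healthy cycle.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Called periodically by an independent supervisor.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.enter_safe_state(SAFE_STATE)

def enter_safe_state(state):
    print(f"Entering safe state: {state}")

watchdog = Watchdog(enter_safe_state)
watchdog.heartbeat()   # normal operation
time.sleep(0.6)        # simulate a stalled control loop
watchdog.check()       # the watchdog forces the controlled stop
```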
Learn More | Join The Working Group


IEEE P7010™ – Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems will establish wellbeing metrics relating to human factors directly affected by intelligent and autonomous systems and establish a baseline for the types of objective and subjective data these systems should analyze and include (in their programming and functioning) to proactively increase human wellbeing.
Learn More | Join The Working Group


IEEE P7011™ – Standard for the Process of Identifying & Rating the Trustworthiness of News Sources will address the negative impacts of the unchecked proliferation of fake news by providing an open system of easy-to-understand ratings. In so doing, it shall assist in the restoration of trust in some purveyors, appropriately discredit other purveyors, provide a disincentive for the publication of fake news, and promote a path of improvement for purveyors wishing to improve. The standard shall target a representative sample set of news stories in order to provide a meaningful and accurate rating scorecard.
Learn More | Join The Working Group

IEEE P7012™ – Standard for Machine Readable Personal Privacy Terms will provide individuals with means to proffer their own terms respecting personal privacy, in ways that can be read, acknowledged, and agreed to by machines operated by others in the networked world. In a more formal sense, the purpose of the standard is to enable individuals to operate as first parties in agreements with others (mostly companies) operating as second parties. Note that the purpose of this standard is not to address privacy policies, since these are one-sided and need no agreement. (Terms require agreement; privacy policies do not.)
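To make the idea concrete, the sketch below shows what an individual's machine-readable terms might look like and how a second party's service could check a proposed data use against them before agreeing. The field names and vocabulary are invented for illustration; the actual machine-readable format is what P7012 will define.

```python
# Hypothetical machine-readable terms proffered by an individual (first party).
my_terms = {
    "retention_days_max": 30,
    "allowed_purposes": ["order_fulfillment", "customer_support"],
    "third_party_sharing": False,
}

def request_is_acceptable(request, terms):
    """Check a second party's proposed data use against the individual's terms."""
    return (
        request["purpose"] in terms["allowed_purposes"]
        and request["retention_days"] <= terms["retention_days_max"]
        and (terms["third_party_sharing"] or not request["shares_with_third_parties"])
    )

# A company's service evaluates its proposed use before agreeing to the terms.
proposed_use = {
    "purpose": "order_fulfillment",
    "retention_days": 14,
    "shares_with_third_parties": False,
}
print(request_is_acceptable(proposed_use, my_terms))  # True -> agreement can proceed
```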

Learn More | Join The Working Group


IEEE P7013™ – Inclusion and Application Standards for Automated Facial Analysis Technology responds to research showing that artificial intelligence used for automated facial analysis is susceptible to bias that can exacerbate human prejudice and systematically disadvantage individuals based on gender, ethnicity, age, and other factors. The purpose of the standard is to provide inclusion guidelines for developing and benchmarking automated facial analysis technology to mitigate demographic and phenotypic bias and discrimination. The reporting rubrics/protocols established in this standard serve to increase the transparency of this automated technology so that developers and decision makers can compare available options and choose the most appropriate technology based on target populations and intended use cases. Given the sensitivity of the biometric data derived from a human face, the standard also delineates appropriate and inappropriate uses of automated facial analysis based on accuracy and on values established by a global community.
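As an illustration of the kind of reporting rubric described above (the standard's actual rubrics and protocols are defined by the working group), the sketch below compares two hypothetical facial-analysis models by overall accuracy and by their worst-performing demographic subgroup, since a small overall difference can hide a large subgroup gap.

```python
# Hypothetical per-subgroup accuracy for two candidate models (invented numbers).
results = {
    "model_a": {"subgroup_1": 0.79, "subgroup_2": 0.88, "subgroup_3": 0.93, "subgroup_4": 0.95},
    "model_b": {"subgroup_1": 0.91, "subgroup_2": 0.92, "subgroup_3": 0.94, "subgroup_4": 0.93},
}

def report(results):
    """Print overall accuracy, the weakest subgroup, and the subgroup gap for each model."""
    for model, by_group in results.items():
        overall = sum(by_group.values()) / len(by_group)
        worst = min(by_group, key=by_group.get)
        gap = max(by_group.values()) - min(by_group.values())
        print(f"{model}: overall={overall:.2f}, worst subgroup={worst}, gap={gap:.2f}")

report(results)
```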

Learn More | Join The Working Group