The Reliable, Safe, Secure, and Time-Deterministic Intelligent Systems Special Technical Community (STC) will bring together experts in all the technologies relevant to this multi-topic field, with the goal of moving toward an autonomous and intelligent world. https://www.computer.org/communities/special-technical-communities
The European Commission High-Level Group on Artificial Intelligence (AI HLEG) invites all stakeholders to provide feedback on the Trustworthy AI assessment list of the ‘Ethics Guidelines for Trustworthy AI’. The piloting phase will be open until 1 December 2019. The guidelines were published in April […]
On 26 June 2019, the European Commission High-Level Group on Artificial Intelligence (AI HLEG) published a report on Policy and Investment Recommendations for Trustworthy Artificial Intelligence, putting forward 33 recommendations to guide Trustworthy AI towards sustainability, growth, competitiveness, and inclusion, while protecting human beings. This comes pursuant to the first publication of the AI […]
The National Institute of Standards and Technology (NIST) has issued a Request for Information (RFI) on the current state, plans, challenges, and opportunities related to artificial intelligence (AI) technical standards, as well as priority areas for federal involvement in AI standards-related activities. NIST’s request comes pursuant to the February 11, 2019, Executive Order (EO) on Maintaining American Leadership in Artificial […]
The Ethics Guidelines on Artificial Intelligence (AI) developed by the European Commission’s High-Level Expert Group (HLEG) on AI were published on 9 April, together with a European Commission Communication presenting the guidelines. Relevant documents can be found at the following links: AI Ethics Guidelines – https://ec.europa.eu/digital-single-market/news-redirect/648305 AI new definition – https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines AI communication – https://ec.europa.eu/digital-single-market/news-redirect/648304
IEEE, the world’s largest technical professional organization dedicated to advancing technology for humanity, and the IEEE Standards Association (IEEE SA) today announced the launch of the first edition of Ethically Aligned Design (EAD1e), “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.” EAD1e is available from The IEEE Global Initiative on Ethics of […]
The IEC released a new white paper on artificial intelligence developed by its Market Strategy Board. This white paper sets the scene for understanding where artificial intelligence stands today and the outlook for the next 5 to 10 years. Taking an industrial perspective, it discusses in more detail: smart homes, intelligent manufacturing, smart transportation/self-driving vehicles, […]
The IEC has set up a new Standardization Evaluation Group (SEG 10) tasked with assessing the impact of ethical issues and societal concerns on artificial intelligence applications. The group is open to all interested experts with knowledge and expertise in these fields. More information (including registration) is available from www.iec.ch/seg10.
“When developing new standards in robotics, social scientists and ethicists need to be involved even more strongly in standardization work on an international and European level, in order to evaluate advantages and disadvantages of automation in an interdisciplinary way”, Austria’s Minister of Labour, Social Affairs, Health and Consumer Protection, Beate Hartinger-Klein, stated at an informal […]
The OCEANIS community is open to interested organizations from around the world. Terms of Reference will be developed and the community will be overseen by a steering committee consisting of one representative per member organization, supported by a small secretariat. If your organization is interested in participating in OCEANIS, please email firstname.lastname@example.org.