ISO/IEC JTC 1 SC 42 Artificial Intelligence - Working Group 4
Use Cases & Applications 04/18/2024
Editor's comments and enhancements are shown in green.
The quality of use case submissions will be evaluated for inclusion in the Working Group's Technical Report based on the application area, relevant AI technologies, credible reference sources (see References section), and the following characteristics:
[1] Data Focus & Learning: Use cases for AI systems that utilize Machine Learning, and those that use a fixed a priori knowledge base.
[2] Level of Autonomy: Use cases demonstrating several degrees (dependent, autonomous, human/critic in the loop, etc.) of AI system autonomy.
[3] Verifiability & Transparency: Use cases demonstrating several types and levels of verifiability and transparency, including approaches for explainable AI, accountability, etc.
[4] Impact: Use cases demonstrating the impact of AI systems on society, the environment, etc.
[5] Architecture: Use cases demonstrating several architectural paradigms for AI systems (e.g., cloud, distributed AI, crowdsourcing, swarm intelligence).
[6] Functional aspects, trustworthiness, and societal concerns.
[7] AI life cycle components include acquire/process/apply.
These characteristics are identified in red in the use case.
Batch/Continuous/Discrete Manufacturing (deployed on 75+ manufacturing lines in 10+ countries; specifically identified the contributors to quality; predicts potential quality failures).
The Cerebra IoT signal intelligence platform provides a holistic perspective on, and understanding of, the sensitivity of the key parameters affecting output quality, together with the ability to monitor and control the process in real time. This avoids yield variations, inventory build-up, and missed customer deadlines.
Complete Description
The Cerebra IoT signal intelligence platform ingested 3+ years of process and sensor data on plant operations from temperature, rpm, torque, and pressure sensors strapped onto industrial mixers; these are the mandatory sensors for plant operations. Cerebra used its deep-learning episode detection algorithms to separate signal from noise and specifically identify the contributors to quality (anomaly signatures), which can then be used as signals to predict quality. It used its proprietary N-dimensional Euclidean distance-based scoring algorithms to normalize these signals and present a unified score to the business team. This unified health score gave the process team a different lens through which to benchmark, specifically target, and radically improve process efficiencies. Cerebra then leveraged its ensemble models to predict potential quality failures, allowing the operations team to take real-time actions to control process deviations. The signals identified in the earlier steps provide model explainability to the end user, i.e., the reasons behind a quality deviation.
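Cerebra's scoring algorithm is proprietary and not disclosed in the source. Purely as an illustration, the sketch below shows one way an N-dimensional Euclidean distance over normalized sensor channels could be mapped to a unified health score; the function name, baseline statistics, and the 0-100 scale are all assumptions, not Cerebra's actual method.

```python
import numpy as np

def health_score(reading, baseline_mean, baseline_std, max_z=6.0):
    """Map an N-dimensional sensor reading to a unified 0-100 health score.

    Each channel (e.g. temperature, rpm, torque, pressure) is z-normalized
    against baseline ("healthy") statistics; the Euclidean distance of the
    normalized vector from the baseline centre is then scaled to 0-100,
    where 100 means "at baseline" and 0 means "far outside normal operation".
    """
    z = (np.asarray(reading, dtype=float) - baseline_mean) / baseline_std
    dist = np.linalg.norm(z) / np.sqrt(len(z))  # per-channel RMS deviation
    return float(max(0.0, 100.0 * (1.0 - min(dist, max_z) / max_z)))

# Hypothetical baseline statistics, as might be learned from years of
# healthy process data: temperature (degC), rpm, torque (Nm), pressure (bar).
baseline_mean = np.array([78.0, 1450.0, 32.0, 2.1])
baseline_std = np.array([2.0, 40.0, 1.5, 0.1])

print(health_score([78.5, 1460, 32.4, 2.12], baseline_mean, baseline_std))  # near 100
print(health_score([95.0, 1200, 45.0, 2.9], baseline_mean, baseline_std))   # degraded
```

A single normalized score like this lets operations teams compare dissimilar assets on one scale, which is the benchmarking role the unified health score plays in the description above.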
Mandate of key sensors based on the type of equipment. Based on the type of equipment, makers need to embed a basic set of sensors in the system; e.g., for a pump it is important to measure the input and output flow rates, vibration, rotation speed, and lube oil temperature and pressure. This would guide equipment manufacturers in equipping their customers, and their data products, to capture the minimum required data and understand equipment performance.
Mandate for organizations to expose the minimum key parameters. Equipment owners need to enable the basic set of sensors for equipment health and performance that are required to monitor the asset for failures.
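A mandated minimum sensor set per equipment type could be checked programmatically. The sketch below is hypothetical (the registry contents and function name are assumptions); the pump entry follows the sensor list given above.

```python
# Hypothetical minimum-sensor registry per equipment type.
REQUIRED_SENSORS = {
    "pump": {"inlet_flow", "outlet_flow", "vibration", "rotation_speed",
             "lube_oil_temperature", "lube_oil_pressure"},
    "industrial_mixer": {"temperature", "rpm", "torque", "pressure"},
}

def missing_sensors(equipment_type, installed):
    """Return the mandated sensors an asset is missing for its type."""
    return REQUIRED_SENSORS[equipment_type] - set(installed)

# An under-instrumented pump fails the minimum-sensor check:
print(missing_sensors("pump", ["inlet_flow", "outlet_flow", "vibration"]))
```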
Standards for data formats. Each organization captures data in its own way and stores it in different formats. As a result, solutions are not scalable across organizations even though the underlying product is the same, and customized effort is required each time.
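One minimal sketch of what a shared interchange format for sensor readings might look like (the field names and record shape are assumptions, not an existing standard):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class SensorReading:
    """A hypothetical minimal, organization-neutral sensor record."""
    asset_id: str     # globally unique equipment identifier
    sensor_type: str  # e.g. "temperature", "vibration"
    unit: str         # unit string, e.g. "degC", "mm/s"
    timestamp: str    # ISO 8601, UTC
    value: float

reading = SensorReading("MIXER-07", "temperature", "degC",
                        "2024-04-18T09:30:00Z", 78.4)
print(json.dumps(asdict(reading)))  # serializes to a common wire format
```

If every plant emitted records in one agreed shape like this, the same analytics product could be deployed across organizations without per-site customization.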
Guidelines for deciding the sampling frequency based on the type of data. We see a need for a specific set of guidelines on capturing data at a minimum required sampling frequency; e.g., a vibration sensor should sample at an interval of 1 ms or less (i.e., at 1 kHz or higher).
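Such a guideline could be derived from the highest signal frequency of interest: Nyquist requires sampling above twice that frequency, and practice adds a safety margin. A small sketch (the margin value and the 400 Hz example are assumptions):

```python
def min_sampling_interval_ms(max_freq_hz, margin=2.5):
    """Minimum sampling interval (ms) to capture a signal component.

    Nyquist requires sampling faster than twice the highest frequency
    of interest; a safety margin (here 2.5x) is applied in practice.
    """
    return 1000.0 / (margin * max_freq_hz)

# If vibration signatures of interest reach ~400 Hz, a 1 ms (1 kHz)
# sampling interval is the minimum sensible rate with this margin.
print(min_sampling_interval_ms(400))  # → 1.0
```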
Guidelines for feature engineering. There must be guidelines on how features are engineered for AI models. Without them, we end up with more black-box models that cannot explain why they behave the way they do.
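One practice such guidelines might encourage is computing named, physically interpretable features per time window, so a model's inputs can be explained to operators. A minimal sketch (the feature set chosen here is an assumption, not a prescribed standard):

```python
import statistics

def window_features(samples):
    """Named, documented features over one time window of sensor samples.

    Explicit, human-readable features (rather than opaque learned
    embeddings) are one way to keep downstream models explainable.
    """
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "peak_to_peak": max(samples) - min(samples),
        "rms": (sum(s * s for s in samples) / len(samples)) ** 0.5,
    }

print(window_features([2.0, 2.2, 1.9, 2.1]))
```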
Guidelines for standardization of event types and codes. Many kinds of events occur for an asset or in a manufacturing plant. Guidelines would help organizations capture this data in a consistent fashion, enabling benchmarking across the industry and an industry-level understanding of which events are the most critical.
Guidelines for standardization of fault and error codes for an equipment or process. As with events, it is useful to capture fault, failure, and error codes in a standard way.
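Standardized event and fault vocabularies are essentially shared enumerations. The sketch below illustrates the idea; every code, name, and field here is hypothetical, not an existing standard.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class EventType(Enum):
    """Illustrative shared event categories (codes are assumptions)."""
    PLANNED_STOP = "E01"
    UNPLANNED_STOP = "E02"
    QUALITY_DEVIATION = "E03"
    MAINTENANCE = "E04"

class FaultCode(Enum):
    """Illustrative standardized fault codes for rotating equipment."""
    OVERHEAT = "F101"
    EXCESS_VIBRATION = "F102"
    LUBE_PRESSURE_LOW = "F103"

@dataclass
class AssetEvent:
    """An event record every plant could emit in the same shape."""
    asset_id: str
    event: EventType
    fault: Optional[FaultCode]
    timestamp: datetime

evt = AssetEvent("MIXER-07", EventType.UNPLANNED_STOP,
                 FaultCode.EXCESS_VIBRATION, datetime(2024, 4, 18, 9, 30))
print(evt.event.value, evt.fault.value)  # E02 F102
```

With a shared vocabulary like this, event counts become directly comparable across plants and organizations, which is what makes industry-level benchmarking possible.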
Process guidelines for event-related data (maintenance and work orders). Consistent capture of maintenance and work-order records would likewise enable cross-industry benchmarking of this event-related data.
Guidelines for training AI models. A defined set of guidelines for training AI models would be useful for data scientists to follow. It would also help consumers of AI models understand how an outcome was derived.
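One concrete form such guidelines could take is a mandatory training record logged alongside every model, in the spirit of a model card. The field names and values below are hypothetical; the "ensemble" model type echoes the ensemble models mentioned in the use-case description.

```python
# A minimal, hypothetical training record ("model card") sketch: logging
# fields like these with every trained model helps consumers of the model
# understand how an outcome was derived.
model_card = {
    "model_name": "quality-failure-predictor",
    "model_type": "ensemble (gradient boosting + random forest)",
    "training_window": "2021-01-01/2024-01-01",
    "features": ["temperature_mean", "rpm_std", "torque_peak_to_peak"],
    "target": "quality_failure_within_24h",
    "validation_metric": {"name": "PR-AUC", "value": None},  # set after evaluation
    "known_limitations": "trained on mixer data only; not valid for pumps",
}
print(sorted(model_card))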
Peer-reviewed scientific/technical publications on AI applications (e.g. [1]).
Patent documents describing AI solutions (e.g. [2], [3]).
Technical reports or presentations by renowned AI experts (e.g. [4]).
High-quality company whitepapers and presentations.
Publicly accessible sources with sufficient detail.
This list is not exhaustive. Other credible sources may be acceptable as well.
Examples of credible sources:
[1] B. Du Boulay. "Artificial Intelligence as an Effective Classroom Assistant". IEEE Intelligent Systems, vol. 31, pp. 76-81, 2016.
[2] S. Hong. "Artificial intelligence audio apparatus and operation method thereof". US Patent 9,948,764. Available at: https://patents.google.com/patent/US20150120618A1/en. 2018.
[3] M.R. Sumner, B.J. Newendorp and R.M. Orr. "Structured dictation using intelligent automated assistants". US Patent 9,865,280. 2018.
[4] J. Hendler, S. Ellis, K. McGuire, N. Negedley, A. Weinstock, M. Klawonn and D. Burns. "WATSON@RPI, Technical Project Review". URL: https://www.slideshare.net/jahendler/watson-summer-review82013final. 2013.