ISO/IEC JTC 1 SC 42 Artificial Intelligence - Working Group 4
Use Cases & Applications 02/18/2020
Editor's comments and enhancements are shown in green.
The quality of use case submissions will be evaluated for inclusion in the Working Group's Technical Report based on the application area, relevant AI technologies, credible reference sources (see References section), and the following characteristics:
 Data Focus & Learning: Use cases for AI systems that utilize machine learning, as well as those that use a fixed a priori knowledge base.
 Level of Autonomy: Use cases demonstrating several degrees (dependent, autonomous, human/critic in the loop, etc.) of AI system autonomy.
 Verifiability & Transparency: Use cases demonstrating several types and levels of verifiability and transparency, including approaches for explainable AI, accountability, etc.
 Impact: Use cases demonstrating the impact of AI systems to society, environment, etc.
 Architecture: Use cases demonstrating several architectural paradigms for AI systems (e.g., cloud, distributed AI, crowdsourcing, swarm intelligence, etc.)
 Functional aspects, trustworthiness, and societal concerns
 AI life cycle components include acquire/process/apply.
These characteristics are identified in red in the use case.
Simple programming/instruction and flexibility in usage
Automation of tasks lacking analytic description
Reliability and efficiency
Short Description (up to 150 words)
Assembly processes often include steps in which two parts must be matched and connected to each other through force exertion. In the ideal case, perfectly formed parts can be matched and assembled with a predefined amount of force. Due to imperfections in production steps, surface imperfections, and other factors such as part flexibility, this procedure can become complex and unpredictable. In such cases a human operator can be instructed in simple terms and with demonstrations and then perform the task easily, while a robotic system needs very detailed and extensive program instructions to perform the task, including the required adaptation to the physical world. The need for such complex program instructions makes the use of automation cumbersome or uneconomical. Control algorithms based on machine learning, especially those including reinforcement learning, can become alternative solutions that increase and extend the level of automation in manufacturing.
The case described here is a common step in assembly processes in the manufacturing industry: matching and properly connecting two parts when one must be inserted into the other. Successful and efficient insertion usually requires acting by feel. This is difficult to describe in terms of mathematical algorithms and therefore difficult to program. Complexity in programming, or a high rate of operational failure, makes the use of robots or automation unattractive. Machine learning and artificial intelligence are among the promising methods for overcoming such difficulties.
As will be described below, there are several different phases in the process, in which different methodologies can and should be used. To be usable in a practical case, the methodology should be applicable by operators without deep technical knowledge, with an effort that is acceptable on a production line. Ultimately, such methods must remove the need for programming completely.
The assumption here is that the parts to be assembled are properly localized, such that they can be manipulated by a robot in the desired way. The problem concerns the following steps:
Identification and picking of the first part (A).
Moving A to the vicinity of the second part (B).
Alignment of the two parts.
Exertion of force with simultaneous movement for smooth insertion.
Termination of the task when insertion is complete.
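The five steps above can be sketched as a simple control loop. The `Pose` type, the `SimulatedRobot` stand-in, and all of its method names are hypothetical placeholders for illustration only, not an actual robot API:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Minimal position-only pose placeholder (millimetres)."""
    x: float
    y: float
    z: float

class SimulatedRobot:
    """Toy stand-in for a robot controller; every method is hypothetical."""
    def __init__(self, insertion_depth_mm: float = 10.0):
        self.remaining = insertion_depth_mm
    def pick(self, pose: Pose):           # step 1: identify and pick part A
        pass
    def move_near(self, pose: Pose):      # step 2: move A to the vicinity of B
        pass
    def align(self, pose: Pose):          # step 3: align the two parts
        pass
    def push(self, max_force: float):     # step 4: force-limited insertion move
        self.remaining -= 1.0             # each push advances 1 mm in this toy
    def insertion_complete(self) -> bool:  # step 5: termination condition
        return self.remaining <= 0.0

def insertion_task(robot, part_a: Pose, part_b: Pose,
                   force_limit: float = 20.0) -> bool:
    """Skeleton of the insertion task; each call maps to one step above."""
    robot.pick(part_a)
    robot.move_near(part_b)
    robot.align(part_b)
    while not robot.insertion_complete():
        robot.push(max_force=force_limit)
    return True
```

In a real cell, each of these primitives would be realized by the learned components discussed in the following sections rather than by fixed program instructions.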
The above task, with all its possible challenges, can easily be performed by a human operator, who in the majority of cases needs only a very limited amount of information. Using prior knowledge, experience, and the sensory system, the operator can complete the task and handle all possible exceptions. With time, a human operator becomes steadily more efficient and performs the task faster and more reliably.
The topics handled in this use case are how a machine can be instructed and trained to perform the task, and then improve, to a high level of reliability and efficiency. The process can be divided into the following steps:
Localization of parts: Image processing, object identification, classification and localization.
Alignment of parts: Control and optimization with (mainly) vision inputs.
Insertion through exertion of forces: Control and optimization with (at least) vision and force sensor feedback.
Sensing the termination of the process: Pattern recognition in time series.
Continuous improvement: Reinforcement learning.
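The termination-sensing step (pattern recognition in a time series) can be sketched as a simple change detector on the axial force signal: in a peg-in-hole task, full insertion typically shows up as a sharp force jump when the part bottoms out. The window size and threshold below are illustrative defaults, not values from the use case:

```python
def detect_termination(force_series, window: int = 5,
                       jump_threshold: float = 10.0) -> int:
    """Return the index at which the force jumps sharply above the recent
    baseline, taken here as the sign that insertion is complete.

    A jump is declared when a sample exceeds the mean of the previous
    `window` samples by more than `jump_threshold` (both illustrative).
    Returns -1 if no such jump is found.
    """
    for i in range(window, len(force_series)):
        baseline = sum(force_series[i - window:i]) / window
        if force_series[i] - baseline > jump_threshold:
            return i
    return -1
```

A production system would likely use a learned classifier over several sensor channels instead of a single threshold, but the structure of the decision is the same.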
Vision and force sensors are the sensors most commonly used in such processes. The objects and environment need to be observed at moderate as well as very close distances. Force sensors are needed but have the weakness of providing no signal before contact is established. Therefore, the use of other sensors could be helpful.
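Because force sensors provide no useful signal before the parts touch, the controller has to switch feedback sources during the task. A minimal sketch of that gating decision, with an illustrative contact threshold that would need tuning per setup:

```python
def select_feedback(force_norm: float, contact_threshold: float = 1.0) -> str:
    """Pick the feedback source for the controller.

    During the approach phase the force reading stays near zero, so the
    controller relies on vision; once the measured force norm exceeds a
    small contact threshold (illustrative value), it switches to force
    feedback for the insertion phase.
    """
    return "force" if force_norm >= contact_threshold else "vision"
```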
The method is used for assembly tasks with the aim of reducing programming effort and increasing flexibility. To achieve this, the effort needed to teach, train, and use the system should be minimal, and reliability should become high within a short time. This implicitly means that the system should become useful with a limited amount of data and in a limited amount of time. After an initial, relatively stable state is reached, reinforcement learning can be used to improve the efficiency of the system.
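The reinforcement-learning improvement step can be illustrated with a deliberately tiny tabular Q-learning sketch. The 1-D grid, reward, and hyperparameters are all illustrative assumptions, far simpler than a real insertion controller, but the update rule is the standard Q-learning one:

```python
import random

def train_alignment_policy(hole: int = 3, width: int = 7,
                           episodes: int = 500, seed: int = 0):
    """Toy Q-learning sketch of the 'continuous improvement' step.

    State: lateral peg position on a 1-D grid of `width` cells.
    Actions: 0 = move left, 1 = move right. Reward +1 on reaching the
    hole. All numbers are illustrative; a real cell would learn over
    poses and force readings.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(width)]   # Q[state][action]
    alpha, gamma, eps = 0.5, 0.9, 0.2        # illustrative hyperparameters
    for _ in range(episodes):
        s = rng.randrange(width)             # random start position
        for _ in range(2 * width):           # cap episode length
            if rng.random() < eps:           # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = max(0, min(width - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == hole else 0.0
            # Standard Q-learning update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if r > 0.0:
                break
    # Greedy policy: which way to step from each lateral position.
    return [0 if q[s][0] >= q[s][1] else 1 for s in range(width)]
```

After training, the greedy policy steps toward the hole from either side, which is the toy analogue of the system becoming faster and more reliable with experience.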
The solution will become more attractive if transfer learning is utilized to further reduce the initial training time.
For benchmarking purposes, a specific set of objects to be assembled together should be defined, and the performance of the methods can be measured by the required training time, the required computing power and memory, and the time to complete the task. The objects in the tests can be geometrically relatively simple. Special features such as rough surfaces, tight fits, or flexibility of the objects can be considered for different classes of problems.
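The benchmark measures named above can be aggregated from per-trial records. The record field names (`train_s`, `task_s`, `success`) are illustrative, not part of any standardized schema:

```python
from statistics import mean

def summarize_benchmark(trials):
    """Aggregate benchmark measures from a list of trial records.

    Each record is a dict with hypothetical keys: 'train_s' (training
    time in seconds), 'task_s' (task completion time in seconds), and
    'success' (bool). Completion time is averaged over successful
    trials only.
    """
    return {
        "mean_train_s": mean(t["train_s"] for t in trials),
        "mean_task_s": mean(t["task_s"] for t in trials if t["success"]),
        "success_rate": sum(t["success"] for t in trials) / len(trials),
    }
```

Memory and compute requirements would be recorded the same way, one field per measure, so that methods can be compared on a common object set.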
Te Tang, Hsien-Chun Lin, Masayoshi Tomizuka, A learning-based framework for robot peg-hole-insertion, Proceedings of the ASME 2015 Dynamic Systems and Control Conference, October 28-30, 2015, Columbus, Ohio, USA
Fares J. Abu-Dakka, Bojan Nemec, Aljaž Kramberger, Anders Glent Buch, Norbert Krüger and Aleš Ude, Solving peg-in-hole tasks by human demonstration and exception strategies, Industrial Robot: An International Journal 41/6 (2014) 575–584
Cited to support the detailed description
Jožef Stefan Institute, Dept. of Automatics, Biocybernetics, and Robotics, Slovenia; Maersk Mc-Kinney Moller Institute, University of Southern Denmark
Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin Riedmiller, Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards, arXiv:1707.08817v2 [cs.AI] 8 Oct 2018
Mel Vecerik, Oleg Sushkov, David Barker, Thomas Rothörl, Todd Hester, Jon Scholz, A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning, arXiv:1810.01531v2 [cs.RO] 8 Oct 2018
Peer-reviewed scientific/technical publications on AI applications.
Patent documents describing AI solutions.
Technical reports or presentations by renowned AI experts.
High quality company whitepapers and presentations
Publicly accessible sources with sufficient detail
This list is not exhaustive. Other credible sources may be acceptable as well.
Examples of credible sources:
 B. Du Boulay. "Artificial Intelligence as an Effective Classroom Assistant". IEEE Intelligent Systems, V 31, p.76-81. 2016.
 S. Hong. "Artificial intelligence audio apparatus and operation method thereof". US Patent 9,948,764, Available at: https://patents.google.com/patent/US20150120618A1/en. 2018.
 M.R. Sumner, B.J. Newendorp and R.M. Orr. "Structured dictation using intelligent automated assistants". US Patent 9,865,280, 2018.
 J. Hendler, S. Ellis, K. McGuire, N. Negedley, A. Weinstock, M. Klawonn and D. Burns. "WATSON@RPI, Technical Project Review". URL: https://www.slideshare.net/jahendler/watson-summer-review82013final. 2013.