ISO/IEC JTC 1 SC 42 Artificial Intelligence - Working Group 4
Use Cases & Applications

The quality of use case submissions will be evaluated for inclusion in the Working Group's Technical Report based on the application area, relevant AI technologies, credible reference sources (see References section), and the following characteristics:

  • Data Focus & Learning: Use cases for AI systems that utilize machine learning, as well as those that use a fixed a priori knowledge base.
  • Level of Autonomy: Use cases demonstrating several degrees (dependent, autonomous, human/critic in the loop, etc.) of AI system autonomy.
  • Verifiability & Transparency: Use cases demonstrating several types and levels of verifiability and transparency, including approaches for explainable AI, accountability, etc.
  • Impact: Use cases demonstrating the impact of AI systems on society, the environment, etc.
  • Architecture: Use cases demonstrating several architectural paradigms for AI systems (e.g., cloud, distributed AI, crowdsourcing, swarm intelligence, etc.)
ID: 34
Use Case Name: Robotic task automation: Insertion
Deployment Model: Embedded systems – Cloud service
Scope: Robotic assembly
Objective(s):
  1. Simple programming/instruction and flexibility in usage
  2. Automation of tasks lacking analytic description
  3. Reliability and efficiency
Short Description (up to 150 words):
Assembly processes often include steps in which two parts must be matched and connected to each other through force exertion. In the ideal case, perfectly formed parts can be matched and assembled with a predefined amount of force. Due to imperfections in production steps, surface imperfections, and other factors such as part flexibility, this procedure can become complex and unpredictable. In such cases a human operator, instructed in simple terms and through demonstrations, can perform the task easily, while a robotic system needs very detailed and extensive program instructions, including the required adaptation to the physical world. The need for such complex program instructions makes automation cumbersome or uneconomical. Control algorithms based on machine learning, especially those involving reinforcement learning, can become alternative solutions that increase and extend the level of automation in manufacturing.
Complete Description: The case described here is a common step in assembly processes in the manufacturing industry: matching and properly connecting two parts when one needs to be inserted into the other. Successful and efficient insertion usually requires acting by feel, which is difficult to describe in terms of mathematical algorithms and is therefore difficult to program. Complexity in programming, or a high degree of operational failure, makes the use of robots or automation unattractive. The use of machine learning and artificial intelligence is one of the promising methods for overcoming such difficulties.

As described below, there are several phases in the process where different methodologies can and should be used. To make the methodology usable in practice, it should be applicable by operators without deep technical knowledge, with an effort that is acceptable on a production line. Ultimately, such methods must remove the need for programming completely.

The assumption here is that the parts to be assembled are properly localized, such that they can be manipulated by a robot in the desired way. The problem concerns the following steps:

  1. Identification and picking of the first part (A).
  2. Moving A to the vicinity of the second part (B).
  3. Alignment of the two parts.
  4. Exertion of force with simultaneous movement for smooth insertion.
  5. Termination of the task when insertion is complete.
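The five steps above can be read as a sequential skill pipeline. The following is a minimal sketch, not an implementation of any particular robot controller; all skill callbacks are hypothetical stand-ins for perception and motion primitives:

```python
from enum import Enum, auto

class Step(Enum):
    PICK = auto()     # steps 1-2: identify, pick, and move part A
    ALIGN = auto()    # step 3: align part A with part B
    INSERT = auto()   # step 4: force-controlled insertion
    DONE = auto()     # step 5: task terminated after complete insertion

def run_insertion(skills):
    """Execute the skills in order; each returns True on success.
    On failure, report the step that failed so it can be retried."""
    for step in (Step.PICK, Step.ALIGN, Step.INSERT):
        if not skills[step]():
            return step
    return Step.DONE

# Stand-in skills that always succeed, for illustration only.
demo_skills = {s: (lambda: True) for s in (Step.PICK, Step.ALIGN, Step.INSERT)}
result = run_insertion(demo_skills)
```

Structuring the task this way lets each phase be replaced independently, which matches the observation below that different phases call for different methodologies.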

The above task, with all its possible challenges, can easily be performed by a human operator. In the majority of cases, an operator needs only a very limited amount of information. Using prior knowledge, experience, and the sensory system, the operator can complete the task and handle all possible exceptions. With time, a human operator becomes steadily more efficient and performs the task faster and more reliably.

The topics to be handled in this use case are how a machine can be instructed and trained, and how it can perform and improve to a high level of reliability and efficiency. The process can be divided into the following steps:

  1. Localization of parts: Image processing, object identification, classification and localization.
  2. Alignment of parts: Control and optimization with (mainly) vision inputs.
  3. Insertion through exertion of forces: Control and optimization with (at least) vision and force sensor feedback.
  4. Sensing the termination of the process: Pattern recognition in time series.
  5. Continuous improvement: Reinforcement learning.
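Step 4 above (sensing termination) is essentially pattern recognition in a force time series: insertion is complete when the axial force is high (the part is seated) and nearly constant (no further motion). A minimal sketch, with illustrative rather than calibrated thresholds:

```python
def insertion_complete(forces, window=5, f_min=8.0, var_max=0.05):
    """Detect task termination from an axial-force time series:
    the mean force over the last window is high (part seated) and
    its variance is low (no residual motion). Thresholds are
    illustrative assumptions, not values from a real cell."""
    if len(forces) < window:
        return False
    w = forces[-window:]
    mean = sum(w) / window
    var = sum((f - mean) ** 2 for f in w) / window
    return mean >= f_min and var <= var_max

# Simulated trace: free motion, contact ramp-up, then a seated plateau.
trace = [0.1, 0.2, 3.0, 6.5, 9.0, 9.1, 9.0, 9.1, 9.0]
done = insertion_complete(trace)
```

A learned classifier over the same sliding window would replace the hand-set thresholds, which is where the pattern-recognition methods named above come in.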

Vision and force sensors are the most commonly used sensors in such processes. The objects and the environment need to be observed at moderate as well as very close distances. Force sensors are needed, but have the weakness of providing no signal before contact is established. Therefore, the use of additional sensors could be helpful.

The method is used for assembly tasks with the target of reducing the programming effort and increasing flexibility. For that to be achieved, the effort necessary to teach, train, and use the system should be minimal, and reliability should become high within a short time. This implicitly means that the system should become useful with a limited amount of data and within a limited amount of time. After an initial, relatively stable state is reached, reinforcement learning can be used to improve the efficiency of the system.
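The improvement phase can be illustrated with the simplest possible reinforcement learner: an epsilon-greedy bandit that tunes one hypothetical parameter (insertion speed) against a reward of negative cycle time. A real system would use a full RL formulation with vision and force state; this sketch only shows the improve-by-trial loop:

```python
import random

random.seed(0)

# Candidate insertion-speed settings (hypothetical values, mm/s).
speeds = [2.0, 5.0, 10.0]

def completion_time(speed):
    """Toy stand-in for running one real insertion trial;
    here the middle speed is best by construction."""
    base = {2.0: 6.0, 5.0: 3.0, 10.0: 4.5}[speed]
    return base + random.gauss(0, 0.2)

q = {s: 0.0 for s in speeds}   # running value estimate per setting
n = {s: 0 for s in speeds}     # trial count per setting
for trial in range(300):
    # epsilon-greedy: mostly exploit the current best, sometimes explore
    if random.random() < 0.1:
        s = random.choice(speeds)
    else:
        s = max(q, key=q.get)
    r = -completion_time(s)          # reward: faster cycles are better
    n[s] += 1
    q[s] += (r - q[s]) / n[s]        # incremental mean update

best = max(q, key=q.get)
```

Because rewards come only from completed trials, this kind of improvement can run on the production line after the initial stable state is reached.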

The solution will become more attractive if transfer learning is utilized to further reduce the initial training time.
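One common form of transfer learning here is warm-starting: reuse the feature layers of a policy trained on one part geometry and re-initialize only the task-specific head for the new part. A sketch with hypothetical layer shapes and no real training involved:

```python
import copy
import random

random.seed(0)

def make_layer(rows, cols, scale=1.0):
    """Random weight matrix as nested lists (stand-in for a real net)."""
    return [[random.gauss(0, scale) for _ in range(cols)] for _ in range(rows)]

# Hypothetical source policy trained on connector type A.
source = {"hidden": make_layer(6, 16), "head": make_layer(16, 2)}

def transfer(src):
    """Warm-start a policy for connector type B: copy the hidden
    layer (shared visuomotor features), re-initialize the head."""
    return {
        "hidden": copy.deepcopy(src["hidden"]),  # transferred weights
        "head": make_layer(16, 2, scale=0.01),   # fresh task-specific head
        "frozen": ["hidden"],                    # optionally kept fixed
    }

policy_b = transfer(source)
```

Only the small head then needs training on the new part, which is the mechanism behind the reduced initial training time.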

For benchmarking purposes, a specific set of objects to be assembled should be defined, and the performance of the methods can then be measured by the necessary training time, the need for computing power and memory, and the time for completion of the task. The objects in the tests can be geometrically relatively simple. Special features such as rough surfaces, tight fits, or flexibility of the objects can be considered for different classes of problems.
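The benchmark metrics named above can be collected in a small record per trial and aggregated; field names and values below are illustrative, not part of any defined benchmark:

```python
from dataclasses import dataclass

@dataclass
class BenchmarkRun:
    """One benchmark trial on the agreed object set."""
    training_time_s: float   # wall-clock time to reach a stable policy
    peak_memory_mb: float    # peak memory during training
    task_time_s: float       # time to complete one insertion
    success: bool            # whether the insertion completed

def summarize(runs):
    """Aggregate the KPIs named in the text over a set of runs."""
    ok = [r for r in runs if r.success]
    return {
        "success_rate": len(ok) / len(runs),
        "mean_task_time_s": sum(r.task_time_s for r in ok) / len(ok),
        "mean_training_time_s": sum(r.training_time_s for r in runs) / len(runs),
    }

runs = [
    BenchmarkRun(1200.0, 850.0, 4.2, True),
    BenchmarkRun(1100.0, 900.0, 3.8, True),
    BenchmarkRun(1300.0, 870.0, 9.9, False),
]
report = summarize(runs)
```

Reporting task time only over successful runs, alongside a separate success rate, keeps the two failure modes (slow vs. unreliable) distinguishable.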

Stakeholders: Discrete manufacturing industries; operators
Threats & Vulnerabilities: Incorrect AI system use; new security threats
Key Performance Indicators (KPIs):
  1. Ease of use: Simplicity and efficiency during initial learning; the teaching process should be easy.
AI Features:
  • Task(s): Recognition, classification, control, optimization
  • Method(s): Deep learning, image processing, control, optimization
  • Hardware: PC equipped with GPU accelerators
Terms & Concepts Used: Reinforcement learning
Challenges & Issues:
  • Complex and unpredictable assembly process due to imperfections in production steps, surface imperfections, and other factors such as part flexibility
  • Accuracy of sensing
  • Coworking with humans
Societal Concerns:
  Description: Promoting sustainable industries and investing in scientific research and innovation are important ways to facilitate sustainable development.
  SDGs to be achieved: Industry, Innovation, and Infrastructure
References:
  1. Conference: Fan Dai, Arne Wahrburg, Björn Matthias, Hao Ding, "Robot Assembly Skills Based on Compliant Motion", Proceedings of the 47th International Symposium on Robotics (ISR 2016), Munich, Germany. Published. Cited to support the detailed description. ABB.
  2. Conference: Te Tang, Hsien-Chun Lin, Masayoshi Tomizuka, "A learning-based framework for robot peg-hole-insertion", Proceedings of the ASME 2015 Dynamic Systems and Control Conference, October 28-30, 2015, Columbus, Ohio, USA. Published. Cited to support the detailed description. University of California.
  3. Publication: Fares J. Abu-Dakka, Bojan Nemec, Aljaž Kramberger, Anders Glent Buch, Norbert Krüger and Aleš Ude, "Solving peg-in-hole tasks by human demonstration and exception strategies", Industrial Robot: An International Journal 41/6 (2014) 575–584. Published. Cited to support the detailed description. Jožef Stefan Institute, Dept. of Automatics, Biocybernetics, and Robotics, Slovenia; Maersk Mc-Kinney Moller Institute, University of Southern Denmark.
  4. Publication: Mel Vecerik, Todd Hester, Jonathan Scholz, Fumin Wang, Olivier Pietquin, Bilal Piot, Nicolas Heess, Thomas Rothörl, Thomas Lampe, Martin Riedmiller, "Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards", arXiv:1707.08817v2 [cs.AI], 8 Oct 2018. Published. Cited to support the detailed description. DeepMind.
  5. Publication: Mel Vecerik, Oleg Sushkov, David Barker, Thomas Rothörl, Todd Hester, Jon Scholz, "A Practical Approach to Insertion with Variable Socket Position Using Deep Reinforcement Learning", arXiv:1810.01531v2 [cs.RO], 8 Oct 2018. Published. Cited to support the detailed description. DeepMind.

  • Peer-reviewed scientific/technical publications on AI applications (e.g. [1]).
  • Patent documents describing AI solutions (e.g. [2], [3]).
  • Technical reports or presentations by renowned AI experts (e.g. [4]).
  • High quality company whitepapers and presentations
  • Publicly accessible sources with sufficient detail

    This list is not exhaustive. Other credible sources may be acceptable as well.

    Examples of credible sources:

    [1] B. Du Boulay. "Artificial Intelligence as an Effective Classroom Assistant". IEEE Intelligent Systems, vol. 31, pp. 76-81, 2016.

    [2] S. Hong. "Artificial intelligence audio apparatus and operation method thereof". US Patent 9,948,764, 2018.

    [3] M.R. Sumner, B.J. Newendorp and R.M. Orr. "Structured dictation using intelligent automated assistants". US Patent 9,865,280, 2018.

    [4] J. Hendler, S. Ellis, K. McGuire, N. Negedley, A. Weinstock, M. Klawonn and D. Burns. "WATSON@RPI, Technical Project Review". 2013.