The quality of use case submissions will be evaluated for inclusion in the Working Group's Technical Report based on the application area, relevant AI technologies, credible reference sources (see References section), and the following characteristics:
Data Focus & Learning: Use cases for AI systems that utilize machine learning, as well as those that rely on a fixed a priori knowledge base.
Level of Autonomy: Use cases demonstrating various degrees of AI system autonomy (dependent, autonomous, human/critic in the loop, etc.).
Verifiability & Transparency: Use cases demonstrating several types and levels of verifiability and transparency, including approaches for explainable AI, accountability, etc.
Impact: Use cases demonstrating the impact of AI systems on society, the environment, etc.
Architecture: Use cases demonstrating several architectural paradigms for AI systems (e.g., cloud, distributed AI, crowdsourcing, swarm intelligence).
To design an efficient solution for detecting customer sentiment and its intensity, especially when the training dataset is limited.
Short Description (up to 150 words)
The emotion-sensitive AI customer service system of JD.com Inc. is supported by AI technology and deep learning methods. It was developed to improve the accuracy of customer sentiment and intensity detection. In sentiment classification it has achieved 74% accuracy and a 90% recall score, while in intensity detection it has reached 85% accuracy and 85% recall. During the special "618" sale, it increased customer satisfaction by 57%.
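To make the reported figures concrete, the sketch below computes accuracy and macro-averaged recall on toy labels. The labels and values here are illustrative assumptions, not JD's production data.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_recall(y_true, y_pred):
    """Average per-class recall: of each true class, what fraction was found."""
    per_class = []
    for c in set(y_true):
        preds_for_c = [p for t, p in zip(y_true, y_pred) if t == c]
        per_class.append(sum(p == c for p in preds_for_c) / len(preds_for_c))
    return sum(per_class) / len(per_class)

# Toy labels over three of the seven sentiment categories (hypothetical data)
y_true = ["happy", "angry", "anxious", "angry", "happy"]
y_pred = ["happy", "angry", "happy",   "angry", "anxious"]
print(accuracy(y_true, y_pred))      # -> 0.6
print(macro_recall(y_true, y_pred))  # -> 0.5
```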
JD's customer service representatives need to handle millions of requests on a daily basis. Regular AI customer service systems, online 24/7, are capable of offering instant assistance, which greatly reduces the demand on human labor. However, it is quite challenging, if not impossible, for those systems to interpret emotions from customer input and respond as warmly as a human agent. Against this background, drawing on a huge dataset of customer comments and rich experience in Natural Language Processing, our system can automatically detect sentiments such as happy, angry, and anxious. Moreover, the system can also detect the intensity of customer sentiment. Furthermore, we adapt Convolutional Neural Networks, a technique widely used in visual computing, to interpret the semantic meaning of customers' expressions, which improves the system's performance in sentiment classification and intensity detection. With the adoption of transfer learning, the system can also be applied to various types of data. To overcome the difficulty of limited training data, we also use data augmentation methods such as back-translation and data noising to increase the variability of the training data. To date, the system has reached a 90% recall and 74% accuracy rate for sentiment classification over 7 categories. The overall recall and accuracy for sentiment intensity are both around 85%, and the system has increased customer satisfaction by 57%.
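The CNN-over-text idea described above can be sketched as a 1-D convolution over token embeddings followed by max-over-time pooling. Everything below (vocabulary size, filter sizes, random weights) is an illustrative assumption standing in for a trained model, not JD's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameters (not JD's actual settings)
vocab_size, embed_dim, num_classes = 50, 8, 7   # 7 sentiment categories
filter_width, num_filters = 3, 4

# Randomly initialised parameters stand in for trained weights
embeddings = rng.normal(size=(vocab_size, embed_dim))
conv_w = rng.normal(size=(num_filters, filter_width, embed_dim))
conv_b = np.zeros(num_filters)
out_w = rng.normal(size=(num_filters, num_classes))
out_b = np.zeros(num_classes)

def text_cnn(token_ids):
    """Classify a token sequence: 1-D convolution + max-over-time pooling."""
    x = embeddings[token_ids]                      # (seq_len, embed_dim)
    seq_len = len(token_ids)
    # Slide each filter over every window of `filter_width` tokens
    feats = np.array([
        [np.sum(x[i:i + filter_width] * conv_w[f]) + conv_b[f]
         for i in range(seq_len - filter_width + 1)]
        for f in range(num_filters)
    ])                                             # (num_filters, positions)
    pooled = feats.max(axis=1)                     # max-over-time pooling
    logits = pooled @ out_w + out_b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                     # softmax over 7 sentiments

probs = text_cnn([3, 17, 42, 7, 29])  # probabilities over the 7 categories
```

The max-over-time pooling is what lets a fixed-size classifier handle variable-length customer utterances.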
Deep learning: a class of machine learning algorithms that use a cascade of multiple layers of nonlinear processing units for feature extraction and transformation.
Transfer learning: we adopt a multi-task learning method in this system. By jointly training on different annotated data in the same domain, this method improves the model's performance on classification problems.
Data augmentation: we apply back-translation, first translating the Chinese text into English and then translating it back into Chinese. We also use data noising to improve data diversity.
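The two augmentation techniques can be sketched as follows. The noising function (random token deletion and adjacent swaps) is one common choice of "data noise"; the back-translation helper takes two translator callables as placeholders, since a real MT service is outside this sketch. All function names and parameters are illustrative assumptions.

```python
import random

def noise_augment(tokens, p_drop=0.1, p_swap=0.1, seed=0):
    """Increase training-data variability by randomly dropping and swapping tokens."""
    rng = random.Random(seed)
    # Random deletion: drop each token with probability p_drop (keep at least one)
    kept = [t for t in tokens if rng.random() > p_drop] or tokens[:1]
    # Random swap: exchange adjacent tokens with probability p_swap
    out = kept[:]
    for i in range(len(out) - 1):
        if rng.random() < p_swap:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def back_translate(sentence, translate_zh_en, translate_en_zh):
    """Back-translation: Chinese -> English -> Chinese yields a paraphrase.
    The two translator arguments are placeholders for a real MT system."""
    return translate_en_zh(translate_zh_en(sentence))

aug = noise_augment(["delivery", "was", "very", "slow", "today"])
```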
Standardization Opportunities & Requirements
Challenges & Issues
Challenge: the system's performance should be as good as that of a human customer service representative. Issues: 1) limited training data; 2) sentiment classification among seven categories.
For sentiment classification: conversation data from after-sales customer services, annotated by professional annotators into 7 categories of sentiment. For sentiment intensity: only sentiment data labeled "anger" and "anxious" are included; they are annotated into 3 degrees of intensity: "low", "medium", "high".
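A hypothetical record layout for the two annotation tasks described above; the field names and the example utterance are assumptions for illustration, not the actual JD annotation schema.

```python
# Sentiment classification task: every utterance gets one of 7 sentiment labels
sentiment_example = {
    "utterance": "My order still hasn't arrived!",  # after-sales conversation turn
    "sentiment": "anger",        # one of the 7 annotated sentiment categories
}

# Intensity task: only "anger"/"anxious" utterances, with a 3-level intensity label
intensity_example = {
    "utterance": "My order still hasn't arrived!",
    "sentiment": "anger",
    "intensity": "high",         # one of "low", "medium", "high"
}
```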
Peer-reviewed scientific/technical publications on AI applications.
Patent documents describing AI solutions.
Technical reports or presentations by renowned AI experts.
High-quality company whitepapers and presentations.
Publicly accessible sources with sufficient detail.
This list is not exhaustive. Other credible sources may be acceptable as well.
Examples of credible sources:
B. Du Boulay, "Artificial Intelligence as an Effective Classroom Assistant," IEEE Intelligent Systems, vol. 31, pp. 76-81, 2016.
S. Hong, "Artificial intelligence audio apparatus and operation method thereof," US Patent 9,948,764, 2018. Available at: https://patents.google.com/patent/US20150120618A1/en.
M.R. Sumner, B.J. Newendorp and R.M. Orr, "Structured dictation using intelligent automated assistants," US Patent 9,865,280, 2018.
J. Hendler, S. Ellis, K. McGuire, N. Negedley, A. Weinstock, M. Klawonn and D. Burns, "WATSON@RPI, Technical Project Review," 2013. URL: https://www.slideshare.net/jahendler/watson-summer-review82013final.