The 21st International Conference on Human-Computer Interaction (HCII), and a Summary of Highlights of the 1st International Conference on Adaptive Instructional Systems (AIS)

October 04, 2019

Press Release

The 21st International Conference on Human-Computer Interaction (HCI International 2019) was held in Orlando, Florida, USA from 26-31 July, 2019.

SHANGHAI, China, Oct. 02, 2019 (GLOBE NEWSWIRE) -- The 21st International Conference on Human-Computer Interaction (HCI International 2019) was held in Orlando, Florida, USA from 26-31 July, 2019. It is noteworthy that HCII also held the 1st International Conference on Adaptive Instructional Systems (AIS) at this time.

A subsidiary conference of HCII, the AIS conference aims to advance the theory of adaptive instructional systems and to share the latest technologies, tools, and methods. This year's conference focused on instructional customization, emphasizing the importance of accurately modeling learners to accelerate their learning, and on making adaptive instructional systems effective enough to accurately reflect learners' long-term abilities across a variety of teaching fields.

The AIS conference invited the industry’s top experts to share their vision and discoveries on adaptive instructional system technologies (such as intelligent tutoring systems, intelligent tutors and personal assistants for learning), and to propose standards so as to improve the portability, scalability and interoperability of adaptive instructional system technologies and other instructional technologies. Experts from China’s artificial intelligence education unicorn Squirrel AI Learning were also invited to deliver speeches.

Robert Sottilare: Exploring Ways to Promote Interoperability of Adaptive Instructional Systems

Robert Sottilare is the Scientific Director of Intelligent Training at Soar Technologies, a company specializing in the design and development of military and civilian software solutions. The theme of Sottilare's speech was "Exploring Ways to Promote Interoperability of Adaptive Instructional Systems".

What is an adaptive instructional system? An adaptive instructional system is an artificial-intelligence-based computer system that guides the learning experience, customizes instruction, and offers recommendations in its learning domain according to the goals, needs, and preferences of each learner or team.

What is interoperability? According to the late renowned American computer scientist Peter Wegner, interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform; it is an extensible form of reusability.

Sottilare explained that the purpose of studying the interoperability of adaptive instructional systems is to enable a system to share information among its components and with other systems, and to optimize students' learning, performance, retention, and generalization of training.

Adaptive instructional systems usually have four common components, including:

1. Domain models, which provide goals, content, interactions and feedback with learners, as well as common errors; these are the most difficult components to standardize;

2. Learner or team models, which provide goals, preferences, interests, learning gaps relative to learning goals, and states (e.g., cognitive, physical, learning, and performance states), and represent all aspects of the learner or team;

3. Teaching models, which provide best teaching practices and intervention measures in the relevant context, as well as strategies and suggestions in the professional design field;

4. Interface models, which provide multimodal interaction with learners and establish standard interface specifications for sensor and system interaction.
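As an illustration, the four common components could be sketched as a minimal conceptual model. All class and field names below are hypothetical, chosen only to mirror the description above; they do not come from any IEEE standard or existing system:

```python
from dataclasses import dataclass, field

@dataclass
class DomainModel:
    """Goals, content, feedback rules, and common learner errors."""
    goals: list[str] = field(default_factory=list)
    common_errors: list[str] = field(default_factory=list)

@dataclass
class LearnerModel:
    """States and preferences representing an individual learner or team."""
    preferences: dict[str, str] = field(default_factory=dict)
    mastery: dict[str, float] = field(default_factory=dict)  # goal -> 0..1

@dataclass
class TeachingModel:
    """Maps learner state onto an instructional intervention."""
    def next_intervention(self, learner: LearnerModel, domain: DomainModel) -> str:
        # Toy strategy: focus on the least-mastered domain goal.
        gaps = {g: learner.mastery.get(g, 0.0) for g in domain.goals}
        return min(gaps, key=gaps.get) if gaps else "review"

@dataclass
class InterfaceModel:
    """Standardized multimodal interaction with the learner."""
    modalities: list[str] = field(default_factory=lambda: ["text"])
```

Drawing the component boundaries this explicitly is what makes interoperability discussions concrete: each dataclass marks information that a standard could require systems to expose.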

On this basis, Sottilare put forward the following suggestions:

1. Review existing standards and teaching systems to identify the information shared by the four common components, including: the Generalized Intelligent Framework for Tutoring (GIFT) developed by the US Army; AutoTutor at the University of Memphis; Cognitive Tutor at Carnegie Mellon University; the Digital Tutor designed by DARPA in the United States; and the ASPIRE authoring platform for intelligent educational resources at the University of Canterbury, New Zealand;

2. Develop a conceptual model dedicated to adaptive instructional systems (including four common components);

3. Consider and enable interoperability at all levels;

4. Consider learners in interoperability standards.

IEEE Interoperability Standard for Adaptive Instructional Systems 2247.2

Richard Tong is the Chief Architect of Squirrel AI Learning. He has served as the head of Greater China for Knewton and the Director of Solution Architecture for Amplify Education. In addition, he is a member of the IEEE AIS (Adaptive Instructional Systems) Standards Working Group and the chair of the Interoperability Group (IEEE 2247.2). His speech covered the progress of IEEE 2247.2.

As mentioned above, the adaptive instructional system is a computer-based artificial intelligence system, which customizes teaching and suggestions according to the goals, needs and preferences of each learner or team in the context of domain learning goals. The IEEE AIS working group mainly supports the conceptual model, interoperability standards and evaluation practices for the adaptive instructional system.

IEEE 2247.2 is an interoperability standards group under IEEE AIS. Its main work is divided into vertical integration (outer loop → inner loop; LMS → engine → model → data; self-improvement; standardization of process integration) and horizontal integration (data; ontology and content; model integration).

Currently, the working group's higher-priority work includes outer-loop/inner-loop integration. The working group hopes that, through integration, systems can achieve interoperability from the "Domain Independent Adaptive Framework" down to "Domain Tasks and Activities", and can reuse specialized content.

Exchange at the ontology layer is equally important. Through integration, the working group hopes to set reasonable learning goals and to obtain a broader context of domain knowledge. Metadata plays a key role in adaptive instructional systems, and the reasoning behind search, measurement, and recommendation depends on the domain ontology.

AutoTutor at the University of Memphis: A Conversational Intelligent Tutoring System

Zhiqiang Cai is a research assistant professor at the Institute for Intelligent Systems at the University of Memphis. His research interests include algorithm design and software development for tutoring systems and natural language processing. The theme of his speech was "Writing a Conversational Intelligent Tutoring System - AutoTutor".

AutoTutor is an intelligent tutoring system developed by the Institute for Intelligent Systems at the University of Memphis. It helps students learn physics, computer literacy, and critical thinking through natural-language tutoring dialogues. Unlike other popular intelligent tutoring systems, such as Cognitive Tutor, AutoTutor focuses on natural-language dialogue.

The system takes human speech or text as input. To handle this input, AutoTutor uses computational linguistic algorithms, including latent semantic analysis, regular expression matching, and speech act classifiers. These complementary techniques focus, respectively, on the general meaning of the input, the precise diction or keywords, and the functional purpose of the expression. In addition to natural language input, AutoTutor can also accept transient events such as mouse clicks, learner emotions inferred from emotion sensors, and estimates of prior knowledge from student models. Based on these inputs, AutoTutor determines when to reply and with what language.
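To make the combination of techniques concrete, here is a rough sketch of how semantic similarity and regular-expression matching might jointly decide a tutor's response. This is not AutoTutor's actual implementation: the bag-of-words cosine below is only a crude stand-in for latent semantic analysis, and the thresholds and function names are invented for illustration:

```python
import math
import re
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity (a crude stand-in for latent semantic analysis)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def keyword_match(answer: str, patterns: list[str]) -> bool:
    """Regular-expression matching targets precise diction or keywords."""
    return any(re.search(p, answer, re.IGNORECASE) for p in patterns)

def assess(answer: str, ideal: str, patterns: list[str]) -> str:
    """Combine both signals to choose how the tutor replies."""
    sim = cosine_similarity(answer, ideal)
    if sim > 0.6 and keyword_match(answer, patterns):
        return "positive feedback"
    if sim > 0.3:
        return "hint"    # partial understanding: nudge the learner
    return "prompt"      # little overlap: ask a guiding question
```

The point of using complementary signals is that semantic similarity tolerates paraphrase while the keyword patterns catch whether the learner produced the critical terms.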

AutoTutor poses a series of challenging open questions that require students to explain and reason verbally in their answers. It delivers its content through an animated conversational agent with a speech engine, facial expressions, and basic gestures. Some topics also include images with text, animations, or interactive simulation environments. AutoTutor traces the learner's cognitive state by analyzing the content of the dialogue history. The latest version of the system can also adapt to the learner's emotional state.

AutoTutor has demonstrated learning gains in more than a dozen experiments with college students, especially on deep reasoning questions, on topics including introductory computer literacy and conceptual physics. In these tests AutoTutor produced effect sizes averaging 0.8 (ranging from 0.4 to 1.5), depending in particular on the measure of learning. How large is 0.8? An effect size of 1.0 is roughly equivalent to raising a grade by one full letter (from C to B, for example).

Of course, the time and cost of creating AutoTutor content are significantly higher than those of non-interactive educational materials, a common problem in intelligent tutoring systems. Accelerating the production of intelligent tutoring systems remains an open challenge.

Vasile Rus: Standardizing Unstructured Interaction Data in Adaptive Instructional Systems

Also from the University of Memphis, Dr. Vasile Rus is a William Dunavant Professor who joined the Department of Computer Science there in 2004. He is also a member of the Institute for Intelligent Systems at the University of Memphis. The theme of his speech was "Standardizing Unstructured Interaction Data in Adaptive Instructional Systems".

Simply put, unstructured learning data is mainly the free text that learners generate during the learning process, such as answers to fill-in-the-blank questions, short-answer questions, and essays.

Unstructured learning data has both advantages and disadvantages. The advantage is that it reflects learners' thinking and enables authentic assessment: learners have the opportunity to produce novel and creative answers, and freely generated self-explanations are beneficial to learning. The disadvantage is that such data is difficult to scale and can be very expensive to assess manually by experts; the same problem arises in standardization.

Such data is often difficult to process, especially learning data collected on the Internet. When answering questions, learners produce various kinds of problems: spelling and grammatical errors; incomplete, vague, or formulaic sentences; heavy dependence on context; and so on.

What researchers need to do is map unstructured data to structured data for analysis, exchange, and alignment/fusion. Dr. Rus suggested two methods for this: one is on-the-fly mapping, recommended when the student model must be updated continuously; the other is offline mapping, which records the interaction between the learners and the system and then extracts knowledge components, behavioral elements, and so on from the log files.

The key to the offline method is how to standardize logs and verbal behaviors. When standardizing logs, researchers should record as much as possible, because every detail matters, while also considering practical factors such as privacy and security. Research requires machine-readable formats (XML or XML-like), which make data extraction, fusion, and exchange easier. Meanwhile, appropriate IDs should link records to tasks, configuration files, and dialog strategies so that all content can be cross-referenced when needed (a data-provenance requirement). Finally, researchers should be able to extract parts of the logs and present them to the learners themselves in a user-friendly format (such as HTML).
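A small sketch of what such a machine-readable log record might look like, following the recommendations above. The element names, attribute names, and IDs here are hypothetical examples, not part of any proposed standard:

```python
import xml.etree.ElementTree as ET

def log_interaction(learner_id: str, task_id: str, utterance: str,
                    dialog_strategy: str) -> str:
    """Serialize one tutor-learner interaction as a machine-readable XML record.

    The IDs link the record back to the learner profile, the task, and the
    dialog strategy, so records from different sessions can later be
    extracted, fused, and exchanged.
    """
    entry = ET.Element("interaction", learner=learner_id, task=task_id,
                       strategy=dialog_strategy)
    ET.SubElement(entry, "utterance").text = utterance
    return ET.tostring(entry, encoding="unicode")

# An XML-like record is trivially machine-readable: it can be parsed back
# for offline extraction of knowledge components and behavioral elements.
record = log_interaction("L042", "physics-07", "the ball falls faster?", "socratic")
parsed = ET.fromstring(record)
```

The same record could also be rendered into a user-friendly format such as HTML for the learners themselves, satisfying the last requirement above.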

Verbal behaviors help to understand the interaction between learners and tutors (systems), and standardizing them will allow for data exchange and meaningful cross-platform analysis and comparison. One challenge in standardizing verbal behavior information is that various research and development groups use different taxonomies.

Dr. Rus offered several suggestions for standardizing verbal behaviors: a common taxonomy, where the various groups agree on one standard taxonomy; an alignment method, which preserves each working group's own taxonomy; or a hybrid approach, agreeing only on the upper levels of the taxonomy.

Keith Brawner: Standards are Needed - Capability Modeling and Recommendation Systems

Keith Brawner is the founder and CEO of Iterati Technologies, a Florida-based technology company. Brawner is also a senior engineer at the Simulation and Training Technology Center of the US Army Combat Capabilities Development Command. The theme of his speech was "Standards are Needed: Capability Modeling and Recommendation Systems".

His department, the Learning in Intelligent Tutoring Environments (LITE) Lab, is mainly engaged in research and development of adaptive instructional systems that support one-on-one and one-to-many tutoring environments for tailored, self-regulated learning. The lab supports US Army training outcomes.

Brawner introduced some of the training tools currently used by US Army soldiers, such as the Synthetic Training Environment (STE), which integrates virtual, constructive, and game-based training environments into a single environment to provide simulated training services for the US Army. He also mentioned Sailor 2025, a naval program aimed at improving and modernizing personnel management and training systems to more effectively recruit, develop, manage, reward, and retain future US naval forces.