Can Artificial Intelligence Make You Feel Safe?

We have been listening to the commotion and breathless hyperbole around Artificial Intelligence (AI), finding ourselves asking the same questions other people do:

Is AI going to take my job away?

As AI advances, will we be living in a world where we can’t believe any visual communication?

The answers to those questions can be speculative, unsettling or even frightening, depending on who you ask.

The mission and values of The Mandt System drive our curiosity in some other directions, however. 

And our curiosity led us to ask:

What is the role of Artificial Intelligence in de-escalation training?

We asked that of ChatGPT, and here is the answer we got:

Artificial learning, or machine learning, can play a significant role in de-escalation training. De-escalation training is an essential component of many professions, including law enforcement, healthcare, social work, and customer service. The goal of de-escalation training is to equip individuals with the skills to calm tense situations and prevent them from escalating into violence or conflict.

Artificial learning can be used to simulate scenarios where de-escalation techniques may be necessary, and provide trainees with a safe and controlled environment to practice their skills. By using machine learning algorithms, these simulations can adapt to the trainee’s behavior, allowing for a personalized learning experience.

As you might imagine, an answer like that gets our attention. 

To be fair, the sort of learning described in that answer is not a recent development. Adaptive learning, where the learning experience changes depending on the learner’s choices, has been around for a long time, and in fact was one of the original and highly touted possibilities of interactive training.

In the case of training developed to support certification, however, an adaptive learning strategy is a problem, because the nature of certification often requires that all learners be presented with the same learning experience.

Beyond that need for consistency, the possibilities suggested by AI are interesting. And at The Mandt System, we are no strangers to the use of technology in facilitating and presenting learning experiences. We rely heavily on customer relationship management tools to allow Instructors to enroll in training and manage their own training courses. The pandemic ushered in an entirely new era in the use of eLearning to teach non-technical skills, and we have recently released Augmented Reality tools to assist in reviewing and refreshing technical skills. And we are planning more enhancements to take the Mandt learning experience beyond the classroom.

If the Brave New World of AI tools offers possibilities to enhance and improve the learning experience The Mandt System provides, we will explore those possibilities.

After all, we used AI tools to help write this blog post!

The Mandt System, however, is focused on helping people understand and manage human behavior at its most variable and volatile. We work to understand the variability of human behavior in challenging situations precisely so that we can mitigate volatility and, whenever possible, prevent it from reaching harmful levels.

We are reminded of some of the challenges encountered in the early days of machine learning. When engineers were trying to teach computers to play chess, for example, the machines ran into trouble because they could not predict the ability of human beings to do the unexpected. Attempts to use technology to replicate the paintings of master artists showed that computers have no trouble with accurate photographic reproduction of a subject, but cannot replicate the subtleties that give the work of Gauguin or Vermeer its character. Computers might regard those subtleties as mistakes rather than style.

Which brings us to the heart of the problem with AI, from the perspective of the work we partner with you to perform:

Managing and de-escalating challenging behavior requires that we understand and anticipate the complexities and unpredictability of human behavior.

So our curiosity led us to another question:

Can Artificial Intelligence understand or predict human behavior?

Here’s what ChatGPT had to say:

AI systems can analyze vast amounts of data about human behavior, such as social media posts, online searches, and purchasing history, and use machine learning algorithms to identify patterns and make predictions about future behavior. For example, AI-powered recommendation systems can suggest products or services based on an individual’s previous behavior and preferences.

However, AI’s ability to understand and predict human behavior is limited by the quality and quantity of data available, as well as the complexity and unpredictability of human behavior. AI models may struggle to accurately predict human behavior in novel situations or when faced with incomplete or conflicting data.

The fact is, we couldn’t have said it better ourselves. AI does not have the capacity to evaluate the circumstances and challenges of the individuals in your care. AI cannot take into consideration trauma histories and myriad other complexities that impact all human behavior, or help us improve relationships and interactions so that everybody can say:

“In this place and with these people, I feel safe.”

Ralph Metzner – Chief Innovation Officer 
