Among the multitude of issues that impact global affairs today, advancements in machine-learning algorithms, predictive analytical technologies, and artificial intelligence (AI) are critical developments that seem at once overhyped and understated. The outcomes most often touted are, on one hand, truly autonomous AI that will undoubtedly help mankind and, on the other, self-induced extinction in a Skynet-like scenario.
At a more nuanced level, history suggests that some second- or third-order effect of these advancements will impact human evolution in a previously unconsidered manner. Often overlooked, though, is the role that human bias plays in the interaction between technology and mankind, consequently redirecting our evolution. The assembly line, for example, revolutionized manufacturing, but it also helped fuel the rise of densely populated cities, shifting the population from an agricultural focus to an urban one as technology both pulled people to the cities and allowed machines to do more of the work.
As technologists Igor Tulchinsky and Paul Daugherty point out, the continued advancement of two interdependent technologies, predictive analytics and AI, holds great potential to amplify human capacities for solving problems. Yet these advancements are not a panacea; they will also amplify the challenges endemic to the human condition. Coherent integration of machine-learning algorithms into daily life will not only profoundly affect the average user making everyday choices but also significantly enhance the cognitive abilities of experts zeroing in on their respective foci. Negative outcomes produced by ordinary human biases and cognitive dissonance can and will increase as the convergence between man and AI intensifies and humans become more adept at using these technologies to satisfy their immediate needs.
While history shows that no two events can ever be fully compared due to differences in context, they can indeed be examined for common aspects. As Will and Ariel Durant emphasize in “The Lessons of History,” human behavior may change, but human nature will not. The advancement of machine-learning algorithms and AI may have no direct analog in history, but we can glean some lessons from the rise of the internet. Easy access to the world’s knowledge is now possible, but with it came ubiquitous access to knowledge of dangerous weapons. In other words, there will always be some unintended outcome of evolutionary development.
The theory of self-organized criticality holds that the components of a complex system organize themselves into critical states, where small perturbations can trigger cascades of any size. Combining the human societal system with improved predictive analytical tools and AI will produce a cascade of events that both help and hurt humanity. These technologies will interact with human biases and cognitive dissonance, inadvertently leading to catastrophes. In the example of the internet, the cascade of interconnectedness, with its spread of knowledge and ideas, was in many ways beneficial. But the internet also spawned the dark web as a parallel network built to ensure online privacy, greatly enabling illicit activity.
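The canonical illustration of self-organized criticality is the Bak–Tang–Wiesenfeld sandpile model, in which identical small inputs sometimes trigger disproportionately large cascades. A minimal sketch of that model follows; the grid size, number of drops, and random seed are arbitrary illustrative choices, not parameters drawn from the text:

```python
import random

# Bak-Tang-Wiesenfeld sandpile: drop single grains onto a grid; any
# cell holding 4 or more grains topples, sending one grain to each
# neighbor (grains that fall off the edge are lost). The system
# settles into a critical state on its own, where identical drops
# produce avalanches of wildly different sizes.

N = 20  # grid size (arbitrary, for illustration)
grid = [[0] * N for _ in range(N)]

def topple():
    """Relax the grid; return the avalanche size (number of topplings)."""
    size = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    size += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < N and 0 <= nj < N:
                            grid[ni][nj] += 1
    return size

random.seed(0)
avalanches = []
for _ in range(5000):
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1          # one small, identical perturbation
    avalanches.append(topple())

# Same input every time, yet avalanche sizes span orders of magnitude.
print("largest avalanche:", max(avalanches))
print("drops causing no avalanche:", sum(a == 0 for a in avalanches))
```

The point of the sketch is the distribution of outcomes: no single drop is special, yet a few drops set off cascades vastly larger than the rest, which is the dynamic the essay attributes to technologies rippling through the societal system.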
To date, the most pressing tangible concern about the integration of predictive analytics and AI with human bias has been the impact on the workforce. As simple tasks become automated, the picture of the future of work blurs, and we come to question what role human contributions will play. Looking to AI specifically, Stuart Russell points out that we will also need to embed uncertainty as a basic principle in AI in order to prevent a “Skynet” scenario. By doing so, we will avoid the trap of designing AI that assesses humans as inefficient and renders us obsolete. In the coming years and beyond, companies and governments will need to formalize how they will ethically employ these technologies and account for human imperfection, because that imperfection will remain constant and produce outcomes that we never predicted.