5 Minute Read - There's a lot of deliberation around whether the rise and development of full artificial intelligence will threaten human existence (more on which can be read in my article ‘Are Humans the Next Horse? The Rise of the Robots’). Only time will tell whether this is true, but we can say with certainty that most advancements in technology will pose security risks as a result of poorly designed, misused, or hacked systems with little or no integrated regulation.
A key area touched upon in many AI conferences and articles is the importance of keeping a human element within the process of developing AI solutions. A healthy combination of automation and human collaboration helps build trust in these systems, especially ones that are prone to change, carry inherent uncertainty, require governance oversight, or have critical consequences attached to the decisions being made. One interesting consideration from Jean-François Gagné, Head of AI Product Management and Strategy at ServiceNow, concerns the difference between designing ‘human in the loop’ and ‘human on the loop’ systems.
Depending on the type of application and the expected outcomes, it is important to choose the correct type of system. In a recent defence article, Terrence J. O’Shaughnessy, commander of the United States Northern Command and of the North American Aerospace Defense Command (NORAD), stated that the military should move from a human in the loop model (where a human still has control over stopping and starting actions) to a human on the loop model (still allowing oversight, but not requiring pre-approval, pushing human control farther from the centre of automated decision-making) (FedScoop, 2020).
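To make the distinction concrete, here is a minimal Python sketch of the two models (entirely illustrative; the Action class and the approve/veto callbacks are hypothetical names I've chosen, not taken from any real system). In the first model, execution waits on explicit human pre-approval; in the second, the system acts on its own and the human can only intervene after the fact.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    description: str

def human_in_the_loop(action: Action, approve: Callable[[Action], bool]) -> str:
    # Execution blocks until a human grants explicit pre-approval.
    if approve(action):
        return f"Executed: {action.description}"
    return f"Blocked by human: {action.description}"

def human_on_the_loop(action: Action, veto: Callable[[Action], bool]) -> str:
    # The system acts autonomously; the human supervises and may
    # intervene afterwards, but execution never waits for approval.
    result = f"Executed: {action.description}"
    if veto(action):
        result += " (overridden by human supervisor)"
    return result

# The same action flows through both models.
action = Action("re-route network traffic")
print(human_in_the_loop(action, approve=lambda a: False))  # Blocked by human: ...
print(human_on_the_loop(action, veto=lambda a: True))      # Executed: ... (overridden ...)
```

The design choice comes down to where human control sits: in the first model it gates every action, while in the second it trails the automation, which is exactly the shift O’Shaughnessy describes.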
Almost all conversations around AI eventually make their way towards ethics. I have written a number of articles touching on and focusing on ethical AI, including 'Are Humans the Next Horse? The Rise of the Robots' and my three-part series starting with 'The Rise of Ethics in Artificial Intelligence (Part 1: Privacy)'. There are a number of important factors that need to be considered when building within a collaborative environment.
In a recent article, a Google engineer claimed that the company's AI had become sentient (CBC News, 2022). The sentience debate touches on technology, ethics, and philosophy in understanding what it means to be alive and whether we will ever fully know if AI has gained consciousness. Either way, a huge amount of progress needs to be made before AI can be fully autonomous - if it ever will be - and it will require a huge amount of human effort to shadow it along the way. Do you think AI will ever become fully sentient? Until next time, I hope you enjoyed the read. GB.
#AI #ArtificialIntelligence #DataScience #MachineLearning #ML #Trustworthy #Ethics #Policy #Regulations #Governance #Data #DecisionMaking #DocumentIntelligence #DueDiligence #Compliance