In my previous article ‘The Rise of Ethics in Artificial Intelligence (Part 2: Content Ownership)’, I highlighted some interesting stories that blur the lines between technology and humanity. In this final article of the series, I will outline what some industry leaders are doing to ensure trustworthy AI, along with some interesting areas to keep an eye on. All thoughts, views and comments are my own.
In 2018, at a Special Town Hall Event with Google and YouTube (Eventbrite, 2018), Google CEO Sundar Pichai said that “AI will have a bigger impact on the world than some of the most ubiquitous innovations in history” and that it is “more profound than electricity or fire” (Cranz, 2018). The exact ‘impact’ and ‘profoundness’ remain under constant debate amongst industry leaders and public figures. Some of the predictions of AI taking over the world, creating a new wave of cyberattacks, superhuman hacking, and even autonomous weapon systems, sound like a dystopian nightmare taken straight from an episode of Black Mirror. In September 2017, Russian President Vladimir Putin said that AI will create “colossal opportunities, but also threats that are difficult to predict” and that whoever becomes the leader in artificial intelligence “will become the ruler of the world” (James, 2017). Elon Musk replied to these comments on Twitter, stating that the “competition for AI superiority at national level will most likely be the cause of WW3” (Lant, 2017). Even the late theoretical physicist Stephen Hawking advised the creators of AI to “employ best practice and effective management”, as the “success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know” (Kharpal, 2017). ‘We just don’t know’ sums up the collective uncertainty as we move towards the inevitable progression of AI. Now let us look at some of the safeguards being put in place to try to ensure that AI is created for good and not evil.
Figure 1: AI Army
The rapid rise of AI and machine learning has led to growing calls to examine its impact on society, with experts warning of the technology's potential misuse if it isn't controlled.
One of the ways in which this is being examined is by the High-Level Expert Group on AI (HLEG-AI). This independent expert group was created by the European Commission in 2018 to define a European strategy on AI, focusing on human-centred policies around ethical, legal, and societal issues. Towards the end of 2018, the group released the first draft of its ‘Ethics Guidelines for Trustworthy Artificial Intelligence’, which was then published in April 2019 (European Commission, 2019a). These guidelines outline the following seven essential requirements for achieving trustworthy AI.
Figure 2: Ethics Guidelines for Trustworthy AI
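For teams that want to act on these guidelines rather than just read them, a lightweight self-assessment is one practical starting point. Below is a minimal sketch of what that could look like in code; the seven requirement names are taken from the published guidelines, but the checklist structure and yes/no scoring are my own illustration, not anything prescribed by the HLEG-AI.

```python
# Illustrative only: requirement names come from the published HLEG-AI
# guidelines; the checklist structure and scoring are a hypothetical sketch.
from dataclasses import dataclass, field

REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class TrustworthyAIAssessment:
    """Tracks a simple yes/no self-assessment per requirement."""
    results: dict = field(default_factory=lambda: {r: None for r in REQUIREMENTS})

    def record(self, requirement: str, satisfied: bool) -> None:
        if requirement not in self.results:
            raise KeyError(f"Unknown requirement: {requirement}")
        self.results[requirement] = satisfied

    def unmet(self) -> list:
        """Requirements not yet marked as satisfied."""
        return [r for r, ok in self.results.items() if not ok]

assessment = TrustworthyAIAssessment()
assessment.record("Transparency", True)
print(assessment.unmet())  # the six requirements still to address
```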
Complementing this document, the group followed up in June 2019 by publishing the ‘Policy and Investment Recommendations for Trustworthy Artificial Intelligence’ (European Commission, 2019b). Professor Barry O’Sullivan, Vice Chair of the HLEG-AI, President of the European Artificial Intelligence Association (EurAI), and founding Director of the Insight Centre for Data Analytics at UCC, is holding a Trustworthy AI event this coming Wednesday, 19th February, at the Aviva Stadium, Dublin (Eventbrite, 2020). The event will discuss these recent developments and guidelines related to trustworthy AI, as well as the emerging regulations to come, more of which can be read in the recent Silicon Republic article, ‘Trustworthy artificial intelligence – is new EU regulation coming for AI?’ (Silicon Republic, 2020).
In addition to expert groups looking into these ethical guidelines, a number of research initiatives have also been underway. In 2018, Stephen Schwarzman, the 82nd richest person in the world, donated $350 million to the Massachusetts Institute of Technology to set up the MIT Schwarzman College of Computing, focusing on the opportunities, challenges, and ethical and policy implications of the rise of AI. In June 2019, he then donated $188 million to the University of Oxford, the university’s largest single donation in hundreds of years, to help fund research into the ethical implications of AI. Schwarzman stated, “What motivates me, among other things, is to have the core of humanities, the basic values of people, be considered in the context of technological development. Technology left unaffected would trample over certain aspects of human behaviour and human opportunities” (Carpani, 2019). Tim Berners-Lee, the inventor of the world wide web, voiced his support for Schwarzman’s donation to Oxford: “It is essential that philosophy and ethics engages with those disciplines developing and using AI”, he said in a statement shared by the university. “If AI is to benefit humanity we must understand its moral and ethical implications”, he added (Rishi, 2019).
The following are some areas that I feel will be worth keeping an eye on as we work to ensure that ethics is at the forefront of AI development. I may write a future article expanding on these areas and other emerging trends.
We are all becoming more familiar with biometrics thanks to the advancing technology in our mobile devices. It is estimated that the global biometrics market will grow from $33 billion to $65.3 billion by 2024 (Markets and Markets, 2018). Both governments and businesses are adopting this technology at an alarming rate. But is this progress, or an invasion of our privacy and an erosion of our freedom?
Identification: In 2016, Saudi Arabia enforced new regulations requiring all telecommunication subscribers to register their fingerprints (Arab News, 2016). Also in 2016, both Hungary and Turkey started distributing biometric identity cards containing a chip that can store the holder’s fingerprints, electronic signature, social security, and tax information (Mayhew, 2016a)(Mayhew, 2016b). In 2017, Pakistan introduced biometric passports to reduce the risk of forgery and human trafficking by authenticating the identity of each traveller (Tribune, 2016). In 2018, India decided to continue with the world’s largest biometric identification programme, in which almost the entire population of India (over 1 billion people) has their name, gender, date of birth, fingerprints, iris scans, and photo linked to a 12-digit number and registration card that they can use for a range of government, banking, and telephony services (Ayyar, 2018).
Authentication: In 2017, Dubai airport announced that it would use a virtual aquarium-styled tunnel, fitted with 80 cameras, to scan passengers’ faces as they walk through security clearance (Ong, 2017). Last year, Barclays, in collaboration with Hitachi, developed a finger vein scanner that uses infrared technology to identify a person’s unique vein patterns in an attempt to further secure transactions (Barclays, 2019).
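Under the hood, most of these authentication systems follow the same pattern: convert a biometric sample into a numerical template, then compare it against an enrolled template using a similarity threshold. Below is a minimal sketch of that matching step, assuming the templates are embedding vectors already produced by some upstream model; the vectors, dimensions, and threshold here are invented purely for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric template vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept the probe only if it is sufficiently close to the enrolled template."""
    return cosine_similarity(probe, enrolled) >= threshold

# Toy example: random stand-ins for templates from a face or vein scanner.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.05, size=128)   # small sensor noise
different_person = rng.normal(size=128)

print(authenticate(same_person, enrolled))       # True
print(authenticate(different_person, enrolled))  # False
```

In a real deployment the threshold is tuned against false-accept and false-reject rates, which is exactly where many of the privacy and fairness questions raised above come into play.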
In one of my previous articles ‘Are Humans the Next Horse? The Rise of the Robots’, I discussed the impact of automation on the workforce. But what could this mean from a societal point of view?
Robot Tax: It’s clear that no one knows for certain the exact impact that AI will have on the workforce, but inevitably, certain tasks will be automated. How will the displaced workforce find meaning if their careers have come to a premature conclusion? And what will happen if companies simply no longer need human workers? One idea, suggested by Bill Gates, is that robots should be taxed. Not only would this give society time to adjust to the adoption of the technology, but it could also pay for further education opportunities and employment in soft-skilled careers. The idea needs a lot of consideration, and some of the advantages and disadvantages of the approach are outlined quite nicely in the Emerj article ‘Robot Tax – A Summary of Arguments For and Against’ (Walker, 2019).
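To put a rough shape on the revenue side of the idea, here is a toy back-of-the-envelope model in the spirit of Gates’s suggestion: tax each automated role at roughly the income tax its displaced worker would have paid. All figures and the flat-rate assumption are hypothetical; real proposals differ widely on what exactly to tax and at what rate.

```python
def robot_tax_revenue(displaced_jobs: int,
                      avg_salary: float,
                      income_tax_rate: float) -> float:
    """Annual revenue if each automated role is taxed at the income tax
    its displaced worker would have paid (a flat-rate simplification)."""
    return displaced_jobs * avg_salary * income_tax_rate

# Hypothetical numbers: 10,000 roles automated, $40k average salary, 20% tax.
revenue = robot_tax_revenue(10_000, 40_000, 0.20)
print(f"${revenue:,.0f} per year")  # $80,000,000 per year
```

Even this crude sum shows why the idea is attractive as a funding mechanism for retraining, and also why critics worry it simply raises the cost of automating.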
Citizenship: In 2017, Sophia the Robot from Hanson Robotics was famously granted citizenship of Saudi Arabia (Stone, 2017). This is a really interesting concept that has caused a lot of controversy. Regardless of how woke Sophia has been programmed to be, does she deserve recognition with the full set of citizens’ rights?
It almost feels like every day there is a new advancement in technology, with companies racing to be the first to achieve AI dominance. The areas above outline how we are trying to put guardrails in place to ensure that this technology is used to complement and augment the way we live in a positive manner.
With Sophia, is the prioritisation of a technological utopia over the rights and lives of humans the right thing to do? Does it grant permission to remove human responsibility should something go wrong? There are a lot of questions to answer before we understand which direction we are heading in. We are constantly pushing the boundary between technology and human interaction, a boundary that is becoming thinner and thinner. But is it possible to control the genie once it is out of the bottle? Should we hold onto our three wishes just in case?
Until next time, I hope you enjoyed the read. GB