It is predicted that robots and Artificial Intelligence (AI) will play a large part in our lives by 2025, but as anyone who has seen a Terminator film knows, these are exactly the kinds of technology prone to misuse or mishap. It should come as no great surprise, then, that a group of authors spanning academia, civil society and industry – perhaps forewarned by such fiction – has written a report highlighting the potential use of AI for malicious ends. The hope is that the report will prompt legislation to mitigate risks to our physical, digital and political security.
What is AI?
Artificial Intelligence can be described as computer systems able to perform tasks that normally require human intelligence. There are different types of intelligence – including intellectual, social and emotional – and the kind a given AI exhibits depends on how it is programmed. For example, an AI might make a decision based purely on logic, but that will not always produce the most moral outcome.
Smart cities, smart homes, the Internet of Things and technologies such as drones and driverless cars may all be considered forms of AI.
How will AI affect us?
If used incorrectly, AI could make our cities very unsafe. One thing to consider is who will be programming these systems: how can we trust that they will use AI ethically? Who has access to an AI’s data and controls? As the technology becomes increasingly integrated into society, there is the risk that it could be commandeered; in the hands of someone intent on doing harm, smart cities could become security nightmares, even weapons of terrorism.
If used correctly, however, AI could make our cities safer. For instance, the well-known start-up ShotSpotter uses networks of acoustic sensors and AI to detect gunfire, alerting police and other authorities within 45 seconds of a shot being fired. In this way, Artificial Intelligence helps cities react faster, with greater accuracy and better decision-making.
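To make the underlying idea concrete, the sketch below shows time-difference-of-arrival (TDOA) localisation, the kind of acoustic technique that gunshot-detection systems build on: several sensors hear the same bang at slightly different times, and those offsets pin down where it came from. The sensor layout, shot position and grid search here are illustrative assumptions, not ShotSpotter’s actual implementation.

```python
# A minimal sketch of time-difference-of-arrival (TDOA) localisation,
# assuming four microphones at known positions and a ~343 m/s speed of
# sound. Real systems add filtering, classification and uncertainty
# handling; this shows only the geometric core.
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 °C

def locate_source(sensors, arrival_times, size=1000.0, step=1.0):
    """Grid-search the point whose predicted arrival-time differences
    best match the observed ones (least-squares residual)."""
    xs = np.arange(0.0, size, step)
    gx, gy = np.meshgrid(xs, xs)  # candidate source positions
    # Differencing against sensor 0 cancels the unknown emission time.
    observed = arrival_times - arrival_times[0]
    dists = [np.hypot(gx - sx, gy - sy) for sx, sy in sensors]
    err = sum(((d - dists[0]) / SPEED_OF_SOUND - o) ** 2
              for d, o in zip(dists, observed))
    iy, ix = np.unravel_index(np.argmin(err), err.shape)
    return gx[iy, ix], gy[iy, ix]

# Hypothetical scene: four sensors on street corners hear the same shot.
sensors = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0], [800.0, 800.0]])
true_source = np.array([310.0, 540.0])
arrival_times = np.hypot(*(sensors - true_source).T) / SPEED_OF_SOUND
print(locate_source(sensors, arrival_times))  # ~ (310.0, 540.0)
```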
The report covers the following ways in which AI can be used to mitigate harm:
- Vandalism and crime prevention
- Active shooter response
- Riot and mob control
- Natural disaster and fire response
- Recognition and prevention of terrorism
- Safety of citizens in schools, government buildings and on public transport
How can we cut down on the risks of AI?
As the recent report makes clear, the first step to mitigating potential problems with AI is to identify them. It makes four high-level recommendations:
- Policymakers should collaborate closely with technical researchers to investigate, prevent and mitigate possible malicious uses of AI.
- Researchers and engineers in Artificial Intelligence should take the dual-use nature of their work seriously, allowing considerations of potential misuse to influence research priorities and norms, and proactively reach out to relevant parties when harmful applications are foreseeable.
- Best practices should be identified in research areas with more mature methods for addressing dual-use concerns (such as computer security), and imported where applicable to AI.
- The AI community should actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.
The report suggests that bias in AI programming might be reduced by building multidisciplinary and diverse teams, and in particular by involving citizens. It also recommends that AI initially be used for analysis and process improvement rather than for important decision-making, which should remain subject to human oversight.
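As a concrete illustration of what routine bias analysis can look like, the sketch below compares a model’s positive-decision rate across demographic groups (a check known as demographic parity). The data and the 0.8 review threshold (the informal four-fifths rule used in some fairness audits) are illustrative assumptions, not prescriptions from the report.

```python
# A minimal sketch of one routine bias check: comparing a model's
# positive-decision rate across groups ("demographic parity"). The
# data and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions for applicants in two groups.
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 35 + [("B", False)] * 65)
rates = selection_rates(decisions)
print(rates)                # {'A': 0.6, 'B': 0.35}
print(parity_ratio(rates))  # ~0.58 -- below 0.8, so flag for human review
```

A check like this is deliberately simple: it does not prove a system is fair, but a low ratio is a cheap, early signal that a human should review how the model was trained and deployed.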
Data privacy is another concern: if the recent Facebook/Cambridge Analytica scandal is anything to go by, AI systems must also have robust data-privacy controls to prevent misuse.
What is being done?
Based on this report, it is hoped that policymakers will consider new laws to offset these potential threats. Best practices are also being borrowed from other industries that face similar ‘dual-use’ risks – particularly cybersecurity, where the same tools can have both positive and negative applications.
Meanwhile, UNESCO is aiming to launch a new project that investigates how to embed values of well-being, peace and human rights into AI systems.
The good news is that society is being preventative rather than reactive: discussions, trials and tests surrounding AI are already underway to minimise future dangers. Can the risks posed by AI be eliminated completely? No – but they are manageable, and as such need not be a major cause for concern.