Artificial Intelligence

Despite the wide range of existing use cases for AI, adoption across and within sectors remains uneven. Increasingly, a divergence is emerging, with sector participants split between early adopters, those increasing their adoption, and laggards falling behind. While it is unsurprising that sectors such as government, wholesale trade and natural resources are least advanced in AI deployment, the greater insight is that even among the most engaged sectors, such as insurance and software and IT services, fewer than half of participants are actively investing in AI, leaving substantial room for market penetration and growth.

AI in the Future

Looking ahead, AI applications will become hard to distinguish from science fiction; the film Minority Report comes to mind. Set in 2054, with predictive policing as its main theme, the film features self-driving cars, personalised and location-based ads, voice automation in the home, robotic insects and gesture-controlled computers. Interestingly, all of these already existed, in some shape or form, by 2017. Rather than look to the film industry for inspiration, we have outlined the most optimistic and societally beneficial applications of AI over the next decade in Table 2.

The Dark Side of AI

While the advantages of AI include efficiency gains, increased scalability and innovation-led growth, the greatest risks include job obsolescence, biased results and fake news2. According to the McKinsey Global Institute, in approximately 60% of occupations at least 30% of basic activities could be automated by adapting current AI technologies. In other roles, AI will supplement workflows but still displace some workers in more complex occupations. While some believe AI will create more jobs than it destroys, we should expect a period during which many workers are displaced. This will likely cause social upheaval and have political ramifications.
In theory, AI can remove human decision-making and its associated biases. In practice, however, many datasets preserve systemic historical biases, including those of gender and race. In work on AI and bias3, researchers found that an AI model trained on a large dataset of online material closely associated the word "women" with occupations in the humanities and the home, while "man" was associated with science and technology. Bias also creeps into racial representation. Although the identification techniques employed by AI may be accurate, when trained on imbalanced data (with more representation of one racial group and/or gender than another) a system can deliver biased results. For example, AI-enabled facial recognition software that classifies gender has been found to misclassify 1% of lighter-skinned males but 12% of darker-skinned males and 35% of darker-skinned females4. Being aware of the potential for bias is the starting point for avoiding inaccurate output.

An emerging AI technique, generative adversarial networks (GANs), can produce highly realistic media that is almost impossible to distinguish from real content. The same software is also used to modify video by re-mapping a person's lips to different audio. With the proliferation of AI technology, falling costs and an ever-increasing supply of source material (e.g. from smartphones and videoconferencing), the ability to create fake news and fictitious factual content becomes far more accessible. The long-term implication of artificial media will be diminishing trust; however, with increased awareness, people will become more accustomed to questioning whether what they see is true.

2 https://www.stateofai2019.com/chapter-8-the-implications-of-ai
3 https://www.researchgate.net/publication/316973825_semantics_derived_automatically_from_language_corpora_contain_human-like_biases
4 http://proceedings.mlr.press/v81/buolamwini18a.html
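The word-association finding discussed above can be illustrated with a toy sketch of the kind of embedding-association measure used in the cited research3: comparing how close a target word sits to two attribute words in vector space. The word vectors and function names below are invented for illustration only; real experiments use embeddings learned from large text corpora.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hand-made toy vectors standing in for learned embeddings
# (illustrative values only, not real embedding data).
vectors = {
    "woman":    np.array([0.9, 0.1, 0.3]),
    "man":      np.array([0.1, 0.9, 0.3]),
    "nurse":    np.array([0.8, 0.2, 0.4]),
    "engineer": np.array([0.2, 0.8, 0.4]),
}

def association_gap(target, attr_a, attr_b):
    """Positive when `target` is closer to `attr_a` than to `attr_b`."""
    t = vectors[target]
    return cosine(t, vectors[attr_a]) - cosine(t, vectors[attr_b])

# With these toy vectors, "nurse" associates more strongly with "woman",
# and "engineer" with "man" -- the shape of bias the study measured.
print(association_gap("nurse", "woman", "man"))
print(association_gap("engineer", "man", "woman"))
```

Because the bias lives in the training corpus rather than the algorithm, an accurate similarity computation faithfully reproduces it, which is why awareness of the data is the necessary starting point.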