The future of artificial intelligence

Artificial intelligence (AI) refers to machines performing tasks that we associate with the human brain – things like problem-solving, perceiving, learning, reasoning, and even creativity. The use of AI has grown rapidly in recent years. The Covid-19 pandemic, in particular, highlighted the need for AI systems and automation that could respond swiftly when fewer workers were available.

For organisations that had gone through a digital transformation, AI and associated emerging technologies were already being integrated into business processes. For many others, however, Covid was the turning point that pushed them to build AI solutions into their business models. The AI cloud – a combination of cloud computing and shared infrastructure for AI use cases – is an emerging approach that promises to make AI software more accessible to businesses.

Healthcare offers many successful AI case studies, most recently in diagnosing and tracking Covid-19 using rapidly gathered big data, but also increasingly in areas like cancer diagnostics and detecting the development of psychotic disorders. Other sectors with real-world AI applications include the military, agriculture, manufacturing, telecommunications, IT and cybersecurity, and finance. AI art, or neural network art, is a genre in its own right. Holly Herndon, who holds a PhD from Stanford’s Center for Computer Research in Music and Acoustics, uses AI technology in her work.

What are the risks of AI?

Science fiction writers have long been fascinated by the idea of AI taking over. From Blade Runner to The Terminator, the fear is that the machines will start to think for themselves and rise up against humans. This moment is known as the ‘singularity’: the hypothetical point at which machine intelligence overtakes human intelligence, allowing self-directed computers to build an ever-improving superintelligence. Some people believe that this moment is nearer than we think.

In reality, AI offers many benefits, but the most obvious risks it currently poses relate to personal data privacy. For deep learning to take place, AI needs to draw on large amounts of data generated by tracking people’s behaviours – their personal data. The Data Protection Act 2018, which enacted the General Data Protection Regulation (GDPR) in UK law, was brought in to ensure that people must opt in to having their data gathered and stored, rather than having to request to opt out. Previously, businesses and organisations were able simply to use their customers’ data without permission.

Some of us may feel suspicious about our data being collected, and yet many of the applications we use are constantly gathering information about us, from the music we like and the books we read to the number of hours we sleep at night and the number of steps we walk in the day. When Amazon makes suggestions for what you might like to read next, it’s based on your purchasing and browsing history. A McKinsey & Company report from 2013 estimated that 35% of what consumers purchase on Amazon comes from its recommendation engine. AI is also instrumental in the way LinkedIn helps people find jobs and helps companies find candidates with the right skill set.
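
To make that concrete, the sketch below shows one classic recommendation technique: user-based collaborative filtering over a purchase matrix, where items are suggested because similar users bought them. The data and the recommend function are invented for illustration – a production system like Amazon’s is vastly more sophisticated.

```python
# A toy user-based recommender: suggest items bought by users whose
# purchase histories resemble yours. All data here is illustrative.
import numpy as np

# Rows = users, columns = items; 1 = purchased, 0 = not.
purchases = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
])

def recommend(user_idx, matrix, top_n=2):
    """Score unseen items by the purchase patterns of similar users."""
    user = matrix[user_idx]
    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(matrix, axis=1) * np.linalg.norm(user)
    sims = matrix @ user / np.where(norms == 0, 1, norms)
    sims[user_idx] = 0           # ignore self-similarity
    # Weight every item by how similar its buyers are to this user.
    scores = sims @ matrix
    scores[user > 0] = -1        # don't recommend items already bought
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, purchases))   # indices of suggested items, e.g. [3 2]
```

Items the user already owns are masked out, so the top-scoring remaining items become the suggestions.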

The more we allow our actions to be tracked, in theory, the more accurately our behaviours can be predicted and catered to, leading to easier decision-making. New technologies like the Internet of Things (IoT) could make this data even more interconnected and useful – a fridge that has already placed a shopping order based on what you have run out of, for example.
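
As a light-hearted sketch of that fridge scenario, the snippet below drafts a shopping order by comparing current stock against a household’s usual levels. Everything in it – the stock data and the draft_order helper – is invented for illustration.

```python
# A playful sketch of the IoT fridge idea: compare current stock to a
# household's usual levels and draft a shopping order to top them up.
usual_stock = {"milk": 2, "eggs": 12, "butter": 1}
current_stock = {"milk": 0, "eggs": 3, "butter": 1}

def draft_order(usual, current):
    """List items that have run low, with quantities to restore them."""
    return {item: usual[item] - current.get(item, 0)
            for item in usual if current.get(item, 0) < usual[item]}

print(draft_order(usual_stock, current_stock))  # {'milk': 2, 'eggs': 9}
```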

Can AI be ethical?

There are certainly big questions around ethics and AI. For example, artificial neural networks (ANNs) are a type of AI that uses interconnected processing nodes which mimic the human brain’s neurons. The algorithm an ANN follows is not determined by human input: the machine learns and develops its own decision rules, which are usually not easily traceable by humans. This is known as black box AI because of its lack of transparency, which can have legal as well as ethical implications. In healthcare, for instance, who would be liable for a missed or incorrect diagnosis? In self-driving car insurance, who would be liable for a wrong turn of the wheel in a crash?
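
To illustrate the point on a toy scale, here is a small neural network written from scratch that learns the XOR function. Its predictions are correct, yet the ‘rules’ it has learned are nothing more than arrays of numbers that no human can read as logic. The architecture, seed and training settings below are arbitrary choices for this sketch.

```python
# A tiny neural network trained on XOR, showing why ANN decisions are
# hard to trace: the learned "rules" are opaque numeric weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Biases are folded in as weights on a constant 1 input.
Xb = np.hstack([X, np.ones((4, 1))])   # inputs plus bias column
W1 = rng.normal(size=(3, 4))           # input layer -> 4 hidden units
W2 = rng.normal(size=(5, 1))           # hidden units (+ bias) -> output

for _ in range(20000):                 # plain gradient descent
    h = np.hstack([sigmoid(Xb @ W1), np.ones((4, 1))])
    out = sigmoid(h @ W2)
    g_out = (out - y) * out * (1 - out)                     # output error
    g_hid = (g_out @ W2[:4].T) * h[:, :4] * (1 - h[:, :4])  # backprop
    W2 -= 0.5 * h.T @ g_out
    W1 -= 0.5 * Xb.T @ g_hid

h = np.hstack([sigmoid(Xb @ W1), np.ones((4, 1))])
print(sigmoid(h @ W2).round(2).ravel())  # should approach [0, 1, 1, 0]
print(W1)  # the learned "rules": just numbers, opaque to inspection
```

Scale this opacity up to millions of weights and the black box problem becomes clear.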

When it comes to data analytics, there is also the issue of bias: because human programmers select the datasets and write the algorithms, both can encode human bias. Historically, the field of data science has not been very diverse, which can lead to some demographics being underrepresented and even inadvertently discriminated against. The more diverse the programming community, the less biased the algorithms – and the more accurate and useful AI applications – can become.
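
One concrete check a data team can run is to compare a model’s outcomes across demographic groups. The sketch below, using invented data and column names, measures the gap in approval rates between two groups – a simple version of what fairness audits call demographic parity.

```python
# A simple fairness check: compare a model's approval rate per group.
# The data and column names here are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1,   0],
})

# Demographic parity: approval rates per group should be similar.
rates = df.groupby("group")["approved"].mean()
print(rates)                                  # A: 0.67, B: 0.25
print("disparity:", rates.max() - rates.min())  # large gap = warning sign
```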

A well-known example of problematic use of AI is deepfakes: imagery that has been manipulated or animated so that it appears that someone (often a politician) has said or done something they haven’t. Deepfakes are linked to fake news and hoaxes that spread via social media. Ironically, just as AI software can clone a human voice or recreate the characteristic facial expressions of an individual, it is also key to combating fake news, because it can detect when footage is a deepfake.

What are the challenges in using artificial intelligence?

Machine learning relies on data input from humans; a machine cannot simply start thinking for itself. A human – or a team of humans – has to pinpoint and define the problem first, then present it in a computable form.

A common example of what an AI robot cannot do – but most humans can – is to enter an unfamiliar kitchen and work out where all the items needed to make a cup of tea or coffee are kept. This kind of task requires the brain to adapt its decision-making and improvise based on previous experience of other kitchens. AI systems currently cannot improvise in this way, but it is a situation that the neural networks of a human brain respond to naturally.

What problems can AI solve?

AI is particularly suited to deep learning, which involves scanning and sifting through vast amounts of data in search of patterns. The models developed through deep learning can, in turn, make predictions. For instance, AI can help a city understand its traffic flow throughout the day and synchronise traffic lights in real time. AI can also strategise: one of the milestones in machine learning was Google DeepMind’s AlphaGo beating Ke Jie, the world’s number one Go player, in 2017. Go is considered particularly complex and much harder for machines to learn than chess.
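
As a hedged illustration of the traffic example, the sketch below fits an ordinary least-squares model that forecasts the next hour’s vehicle count from the previous three hours. The data is synthetic, and real traffic systems draw on far richer inputs (sensor networks, weather, events), but the pattern-then-predict workflow is the same.

```python
# Learn a city's daily traffic pattern from past counts and forecast
# the next hour. All data here is synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(42)
hours = np.arange(24 * 14)  # two weeks of hourly vehicle counts
# Synthetic daily rush-hour cycle plus random noise.
counts = (500 + 300 * np.sin(2 * np.pi * (hours % 24) / 24)
          + rng.normal(0, 30, hours.size))

# Features: the counts from the previous 3 hours predict the next one.
window = 3
X = np.stack([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]

# Ordinary least-squares fit, with an intercept column appended.
Xb = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

latest = np.append(counts[-window:], 1.0)
print("forecast for the next hour:", latest @ coef)
```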

On the practical side, AI can help reduce errors and carry out repetitive or laborious tasks that would take humans much longer. To encourage the responsible use of AI, the UK government announced the National AI Strategy in March 2021 to help grow the economy through AI technologies. Among the challenges it hopes to address are tackling climate change and improving public services.

In conclusion, AI has huge potential, but ethical, safe and trustworthy AI development is reliant on direction from humans. 

If you’re interested in understanding more about artificial intelligence, our MSc Computer Science with Artificial Intelligence at the University of York is for you. Find out how to apply for the 100% online course.