In recent years, artificial intelligence (AI) has woven itself into our daily lives so thoroughly that many of us remain unaware of both its impact and our reliance upon it.
From morning to night, AI technology drives much of what we do as we go about our everyday routines. When we wake, many of us reach for a mobile phone or laptop to start the day. Doing so has become automatic, integral to how we make decisions, plan and seek information.
Once we’ve switched on our devices, we instantly plug into AI functionality such as:
- face ID and image recognition
- social media
- Google search
- digital voice assistants like Apple’s Siri and Amazon’s Alexa
- online banking
- driving aids – route mapping, traffic updates, weather conditions
- leisure and entertainment – streaming services such as Netflix and Amazon Prime Video for films and programmes
AI touches every aspect of our personal and professional online lives today. Global communication and business interconnectivity continue to grow in importance, and organisations that capitalise on artificial intelligence and data science stand to benefit from a growth trajectory that shows no sign of slowing.
Whilst AI is accepted as almost commonplace, what exactly is it and how did it originate?
What is artificial intelligence?
AI is the intelligence demonstrated by machines, as opposed to the natural intelligence displayed by both animals and humans.
The human brain is the body's most complex organ, controlling all of its functions and interpreting information from the outside world. Its neural networks comprise approximately 86 billion neurons, woven together by an estimated 100 trillion synapses. Even now, neuroscientists have yet to unravel many of its workings and capabilities.
The human being is constantly evolving and learning; this mirrors how AI functions at its core. Human intelligence, creativity, knowledge, experience and innovation are the drivers for expansion in current, and future, machine intelligence technologies.
When was artificial intelligence invented?
During the Second World War, Alan Turing's work at Bletchley Park on breaking coded German messages marked a seminal scientific turning point. His groundbreaking work helped develop some of the foundations of computer science.
By the 1950s, Turing was asking whether machines could think for themselves. This radical idea, together with the growing implications of machine learning for problem solving, led to many breakthroughs in the field. Research explored whether machines could be directed and instructed to apply their own ‘intelligence’ to solve problems as humans do.
Computer and cognitive scientists, such as Marvin Minsky and John McCarthy, recognised this potential in the 1950s. Their research, which built on Turing’s, fuelled exponential growth in the area. Attendees at a 1956 workshop, held at Dartmouth College – one of the USA’s most prestigious research institutions – laid the foundations for what we now consider the field of AI, and many of those present went on to become artificial intelligence leaders and innovators over the following decades.
As a testament to Turing’s groundbreaking research, the Turing Test – in updated forms – is still applied in today’s AI research and is used to gauge the success of AI developments and projects.
This infographic detailing the history of AI offers a useful snapshot of these main events.
How does artificial intelligence work?
AI is built upon the acquisition of vast amounts of data, which can then be analysed to extract knowledge, patterns and insights. The aim is to create and build upon these blocks, applying the results to new and unfamiliar scenarios.
Such technology relies on advanced machine learning algorithms and extremely high-level programming, datasets, databases and computer architecture. The success of specific tasks is, amongst other things, down to computational thinking, software engineering and a focus on problem solving.
Artificial intelligence comes in many forms, ranging from simple tools like chatbots in customer services applications, through to complex machine learning systems for huge business organisations. The field is vast, incorporating technologies such as:
- Machine Learning (ML). Using algorithms and statistical models, ML refers to computer systems which are able to learn and adapt without following explicit instructions. In ML, inferences and analysis are discerned in data patterns, split into three main types: supervised, unsupervised and reinforcement learning.
- Narrow AI. This is integral to modern computer systems, referring to those which have been taught, or have learned, to undertake specific tasks without being explicitly programmed to do so. Examples of narrow AI include virtual assistants on mobile phones, such as Siri on Apple’s iPhone and Google Assistant on Android devices, and recommendation engines which make suggestions based on search or purchase history.
- Artificial General Intelligence (AGI). At times, the worlds of science fiction and reality appear to blur. Hypothetically, AGI – exemplified by the robots in programmes such as Westworld, The Matrix, and Star Trek – refers to intelligent machines that could understand and learn any task or process usually undertaken by a human being.
- Strong AI. This term is often used interchangeably with AGI. However, some artificial intelligence academics and researchers believe it should apply only once machines achieve sentience or consciousness.
- Natural Language Processing (NLP). This is a challenging area of AI within computer science, as it requires enormous amounts of data. Expert systems and data interpretation are required to teach intelligent machines how to understand the way in which humans write and speak. NLP applications are increasingly used, for example, within healthcare and call centre settings.
- DeepMind and Watson. As major technology organisations seek to capture the machine learning market, they are developing cloud services and flagship systems to tap into sectors such as leisure and recreation. Google’s DeepMind, for example, created AlphaGo, a computer program that plays the board game Go, while IBM’s Watson is a supercomputer which famously competed against human champions on the televised quiz show Jeopardy!. Using NLP, Watson interpreted and answered questions posed in natural language, causing a stir in public awareness regarding the potential future of AI.
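To make the idea of supervised machine learning described above concrete, here is a minimal sketch in plain Python: a 1-nearest-neighbour classifier that “learns” from labelled examples and generalises to unseen data. The data points, labels and function names are invented purely for illustration – real systems use far larger datasets and libraries such as scikit-learn.

```python
# Minimal supervised learning sketch: a 1-nearest-neighbour classifier.
# Given labelled training examples, it labels a new point by finding
# the closest known example and reusing its label.

def nearest_neighbour(train, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        # Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(train, key=lambda example: distance(example[0], new_point))
    return closest[1]

# Labelled examples: (features, label). Here the (invented) features
# might be (height_cm, weight_kg) of a pet.
training_data = [
    ((25, 4), "cat"),
    ((30, 6), "cat"),
    ((60, 25), "dog"),
    ((70, 30), "dog"),
]

# The model generalises from labelled data to unseen cases.
print(nearest_neighbour(training_data, (28, 5)))   # prints "cat"
print(nearest_neighbour(training_data, (65, 28)))  # prints "dog"
```

This is the supervised flavour of ML: the patterns are discerned from data that already carries the correct answers. Unsupervised learning would instead look for structure in unlabelled data, and reinforcement learning would learn from rewards earned through trial and error.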
Artificial intelligence career prospects
Automation, data science and the use of AI will only continue to expand. In The Global Big Data Analytics Forecast to 2023, Frost & Sullivan project annual growth of 29.7% for the big data analytics market, putting it at a value of $40.6 billion.
As such, there exists much as-yet-untapped potential, with growing career prospects. Many top employers seek professionals with the skills, expertise and knowledge to propel their organisational aims forward. Career pathways may include:
- Robotics and self-driving/autonomous cars (such as Waymo, Nissan, Renault)
- Healthcare (for instance, multiple applications in genetic sequencing research, treating tumours, and developing tools to speed up diagnoses including Alzheimer’s disease)
- Academia (leading universities in AI research include MIT, Stanford, Harvard and Cambridge)
- Retail (AmazonGo shops and other innovative shopping options)
What is certain is that with every technological shift, new jobs and careers will be created to replace those lost.
Gain the qualifications to succeed in the data science and artificial intelligence sector
Are you ready to take the next step towards a challenging and professionally rewarding career?
The University of York’s online MSc Computer Science with Data Analytics programme will give you the theoretical and practical knowledge needed to succeed in this growing field.