AI search and recommendation algorithms

Powered by artificial intelligence (AI), search and recommendation algorithms shape our interaction (and satisfaction) with online platforms. Developed to predict user choices, preferences and behaviours, their purpose is to improve the overall user experience of websites, apps, smart assistants and other types of computer program.

From Google to Amazon and Netflix, today’s biggest online retailers and service providers are making use of this class of machine learning to improve business conversion and retention rates: pushing products, boosting repeat sales and keeping customers happy and engaged.

How do search and recommendation algorithms work?

Where search algorithms work to retrieve relevant data, information and services in reaction to a user query, recommendation algorithms suggest similar alternate results based upon the user’s search (or purchase) history. 

Put simply, search algorithms assist users in finding exactly what they want, while recommendation algorithms help users find more of what they like.

Search algorithms

A search algorithm locates specific data within a larger collection of data. According to Internet Live Stats, Google processes over 100,000 search queries per second on average. That’s an immense demand on a system, and with 98% of all internet users turning to a search engine each month, it’s imperative that search engines are built to produce accurate results quickly and efficiently.

Basic site-search has quickly become an essential feature of almost any website, while the search function is considered a fundamental procedure in computing overall (extending to coding, development and data science). Intended to keep all types of users happy and informed, search algorithms step in to get the right resources in front of the right people.

All search algorithms operate on a search key (the query entered via the search bar) and return a success or failure status depending on whether matching information is found. They break a query down into separate words and, using text-matching, link those words to matching titles and descriptions in the data sets.
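
To make the text-matching idea concrete, here is a minimal, illustrative sketch in Python (the catalogue and query are invented for the example, and real search engines use far more sophisticated indexing and ranking):

```python
def keyword_search(query, items):
    """Return item titles whose title or description matches any word in the query."""
    query_words = set(query.lower().split())
    results = []
    for item in items:
        text = (item["title"] + " " + item["description"]).lower()
        # Score = how many query words appear in the item's text
        score = sum(1 for word in query_words if word in text)
        if score > 0:
            results.append((score, item["title"]))
    # Rank matches by the number of query words they contain
    return [title for score, title in sorted(results, reverse=True)]

catalogue = [
    {"title": "Red running shoes", "description": "Lightweight trainers for road running"},
    {"title": "Walking boots", "description": "Waterproof leather boots for hiking"},
]
print(keyword_search("red shoes", catalogue))  # ['Red running shoes']
```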

Different search algorithms vary in terms of performance and efficiency, depending on how they are used and the available data. Some of the more commonly used search algorithms include:

  • linear search algorithm
  • binary search algorithm
  • depth-first search algorithm
  • breadth-first search algorithm
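
As a concrete example of one of these, below is a minimal sketch of binary search, which repeatedly halves a sorted collection until it finds the target – returning a success or failure status in the way described above (the example data is invented):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid            # success: target found
        elif sorted_items[mid] < target:
            low = mid + 1         # discard the lower half
        else:
            high = mid - 1        # discard the upper half
    return -1                     # failure: target not present

prices = [5, 9, 12, 20, 31, 47]
print(binary_search(prices, 20))  # 3
print(binary_search(prices, 8))   # -1
```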

More complex algorithms can identify and auto-correct typing mistakes, as well as offer synonym recognition. Advanced algorithms can produce more refined results, factoring in popular answers, product rankings and other key metrics.

Google’s search algorithm

Google attributes its success to meticulous testing and complex search experiments. A combination of its famous crawling “spider” bots, data-driven indexing and rigorous ranking system enables the search engine to meet its exemplary standards of relevance and quality.

Analysing everything from sitemaps to content, images and URLs, Google is able to identify the best pages to signpost for your query in a fraction of a second. The search engine even boasts a freshness algorithm that, in response to trending topics and keywords, shows users the most up-to-date online articles available in real time.

Good search engines boast another important feature: related results. This can make the difference between a bounce and purchase as customers are encouraged to keep browsing the site. This is where recommendation algorithms become useful.

Recommendation algorithms

Recommendation algorithms rely on data science to filter and recommend personalised suggestions (whether that be related search results or product recommendations) based on a user’s previous actions. Recommendation algorithms can generally be separated into two types: content-based filtering and collaborative filtering.

Content-based filtering

These algorithms factor in information (such as keywords and attributes) of both the user and the chosen item or product profile to generate recommendations. By utilising the customer’s personal data (such as gender, occupation and more), content-based filtering algorithms are able to assess popular products by age group or locale, for example. Similarly, by analysing the product characteristics, the system can recommend other items with similar attributes. The more people that use the platform, the more data can be mined to improve the specificity of the suggestions.
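
A minimal sketch of this idea (with invented product attributes) might score each item by how many attributes it shares with products the user has already liked:

```python
def recommend_by_content(liked_items, catalogue, top_n=2):
    """Recommend items whose attributes overlap most with the user's liked items."""
    liked_attributes = set()
    for item in liked_items:
        liked_attributes.update(catalogue[item])
    scores = {}
    for item, attributes in catalogue.items():
        if item in liked_items:
            continue
        # Score = number of attributes shared with previously liked items
        scores[item] = len(liked_attributes & attributes)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalogue = {
    "trail shoes": {"outdoor", "running", "waterproof"},
    "road shoes":  {"running", "lightweight"},
    "rain jacket": {"outdoor", "waterproof"},
    "dress shoes": {"formal", "leather"},
}
print(recommend_by_content({"trail shoes"}, catalogue))  # ['rain jacket', 'road shoes']
```

Production systems obviously use far richer profiles than a handful of attribute tags, but the principle is the same.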

The Netflix recommendation engine

Netflix states that recommendation algorithms are at the core of its product. In fact, 80% of viewer activity is driven by personalised recommendations from its engine. Netflix began experimenting with data as early as 2006 to improve the accuracy of its preference algorithms. As a result, the Netflix recommendation engine tracks numerous data points, from browsing behaviours to binge-watching habits, and filters over 3,000 titles at a time using 1,300 recommendation clusters based on user preferences. The platform has taken data beyond rating prediction and into personalised ranking, page generation, search, image selection, messaging, marketing, and more.

Collaborative filtering

These algorithms accumulate data from all users on a platform and work like a word-of-mouth recommendation. By comparing datasets, such as purchase or rating information, the algorithms help the platform identify kindred customer profiles and recommend other products or services favoured by these ‘similar users’.  

InData Labs notes the greatest merits of collaborative filtering systems as:

  • capability of accurately recommending complex items (such as films, books or clothing) without requiring an “understanding” of the item itself
  • basing recommendations on more personalised ‘similar’ users, without needing a more comprehensive knowledge of all products or all users of the platform
  • ability to be applied to any domain and provide more versatile cross-domain recommendations
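
As a rough illustration of the ‘similar users’ idea (using invented ratings), the sketch below finds the user whose ratings most closely agree with the target user’s and recommends items that user rated highly:

```python
def similarity(ratings_a, ratings_b):
    """A deliberately crude measure: how many shared items two users rated identically."""
    shared = set(ratings_a) & set(ratings_b)
    return sum(1 for item in shared if ratings_a[item] == ratings_b[item])

def recommend_collaborative(target, all_ratings):
    """Recommend items liked by the most similar other user but unseen by the target."""
    others = [user for user in all_ratings if user != target]
    best_match = max(others, key=lambda user: similarity(all_ratings[target], all_ratings[user]))
    seen = set(all_ratings[target])
    return [item for item, rating in all_ratings[best_match].items()
            if rating >= 4 and item not in seen]

ratings = {
    "alice": {"film A": 5, "film B": 4, "film C": 2},
    "bob":   {"film A": 5, "film B": 4, "film D": 5},
    "carol": {"film C": 5, "film D": 1},
}
print(recommend_collaborative("alice", ratings))  # ['film D']
```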

Why are search and recommendation algorithms so essential?

The number of digital purchases continues to climb each year, cementing e-commerce as an indispensable function of the global retail framework. And, in the world of online shopping, customers want accuracy, ease of use and appropriate suggestions.

For online businesses and service providers, some of the key benefits to using a search or recommendation algorithm include:

  • improve the relevance of search results and reduce the time it takes to find specific products and services
  • boost key metrics, including web visits and purchase rate, plus improve overall user loyalty and customer satisfaction
  • aid in the selection process for an undecided customer, encouraging them to interact with more products and bringing other potential purchases into their field of vision
  • obtain data to target the right people with personalised ads and other digital marketing strategies to encourage users to frequent the website or platform

The quality of search and recommendation systems can significantly impact key business conversions, such as lead generation, customer sentiment scores and closed sales.

Amazon’s AI algorithms

As the leader of the global e-commerce market, Amazon is an almost-unrivalled product discovery and purchase platform, thanks to its optimal machine learning model. Built upon comprehensive ranking systems, the company’s A9 search algorithm analyses sales data, observes historical traffic patterns and indexes all product description text before a customer search query even begins, ensuring the best products are placed in front of the most likely buyers. 

The platform’s combination of intelligent recommendation tools forms the personalised shopper experience that has become so popular with consumers.

Get ahead with AI

Develop specialist skills spanning data analytics, neural networks, machine learning and more with the University of York’s 100% online MSc Computer Science with Artificial Intelligence.

This intensive online course, flexibly designed for remote learning, is geared to equip you for a range of roles in computer science and software development. Computer science is one of the biggest trending careers in today’s jobs market, so secure your space in this highly skilled, in-demand and lucrative field.

Tech basics: An introduction to text editors

Autocorrect: the maker or breaker of an awkward situation. As smart device users, we’re certainly au fait with the ways in which software like spell checkers can protect against common (and costly) linguistic mistakes. In our technological age, most of our digital practice involves using platforms built on text editors – but, if a conversation on coding still leaves you in a cold sweat, read on.

What is a text editor?

A text editor refers to any form of computer program that enables users to create, change, edit, open and view plain text files. They come already installed on most operating systems, but their dominant application has evolved from note-taking and creating documents to crafting complex code. Today, text editors are a core part of a developer’s toolbox and are most commonly used to create computer programs, edit hypertext markup language (HTML), and build and design web pages.

Examples of commonly used text editors include:

  • Android Studio
  • Atom
  • Notepad++
  • Sublime Text
  • VS Code

Text editors typically fall into two distinct categories: line editors and screen-oriented editors. The latter allow greater flexibility for making modifications.

What’s the difference between a text editor and a word processor?

Text editors deal in plain text, which consists solely of character representations. Each character is represented by a sequence of one or more bytes, in accordance with specific character encoding conventions (such as ASCII, ISO/IEC 2022 and the Unicode encodings, including UTF-8). These conventions define many printable characters, as well as non-printing characters that control the flow of the text, such as the space, line break, and page break.
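
These encoding conventions are easy to see in practice. The short sketch below uses Python’s built-in string encoding (the example strings are arbitrary):

```python
# The same text can occupy different numbers of bytes depending on the encoding used.
plain = "cafe"
accented = "café"

print(plain.encode("ascii"))            # b'cafe' – one byte per character
print(accented.encode("utf-8"))         # b'caf\xc3\xa9' – 'é' needs two bytes in UTF-8
print(len(accented.encode("utf-16")))   # 10 bytes – a 2-byte marker plus 2 bytes per character

# Non-printing control characters, such as the line break, are part of the encoding too.
print("line one\nline two".encode("ascii"))
```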

Text editors should not be confused with word processors – such as Microsoft Word – which enable users to edit rich text too. Rich text is more complex, consisting of metadata such as character formatting data (typeface, size and style), paragraph formatting data (indentation and alignment commands) and page specification data (margins). Word processors are what we use to produce streamlined, formatted documents such as letters, essays or articles.

Features and functions of a text editor

Basic features of a text editor include the ability to cut, paste and copy text, find and replace words or characters, create bulleted lists, line-wrap text, and undo or redo a last command. They’re also equipped to open very large files (too big for a computer’s main memory to process) and read them at speed. Whether you’re coding with Linux or text editing with a Windows PC or a Mac device, the software should be functional, reliable and easy to use.

Other platforms (preferred by software developers) offer advanced features for more complex source code editing, including:

Syntax highlighting

Reading through endless reams of code can be overwhelming and time-consuming, not to mention messy. This feature allows users to colour-code text based on the programming or markup language it is written in (such as HTML and JavaScript) for ease of reference.

Intelligent code completion

A context-aware feature that speeds up the coding process by reducing typos, correcting common mistakes, flagging syntax errors and offering auto-completion suggestions.

Snippets

An essential feature that enables users to quickly substitute a shortcut phrase for longer pieces of content or code – great for creating forms, formatting articles or replicating chunks of information that you’re likely to repeat in your day-to-day workload.

Code folding

Also called expand and collapse, the code folding feature hides or displays certain sections of code or text, allowing for a streamlined and decluttered display – great if you’re working on a long document.

Vertical selection editing

A useful tool that enables users to select, edit or add to multiple lines of code simultaneously, which is great for making repeat small changes (such as adding the same character to the end of every line, or deleting recurring errors).

Where and how are text editors used?

Most of us use text editors without realising it. Almost everyone has a text or code editor built into their workflow, as they’re the engines that drive businesses all over the world.

Developers and user experience (UX) designers use text editors to customise and enhance company web pages, ensuring they meet the needs of customers and clients. IT departments and other site administrators use this form of tech to keep internal systems running smoothly, while editors and creators use these applications to produce programs and content to funnel out to their global audiences.

Going mobile: text editors and smartphones

So, where does autocorrect come in? Text editors appeal to the needs of the average tech user too, with forms of the software built into our iPhone and Android devices. 

The autocorrect feature (a checker and suggestion tool for misspelt words) is a prime example, combining machine-learning algorithms and a built-in dynamic dictionary to correct typos and offer replacement words in texts and Google searches.
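
A toy version of the suggestion step can be sketched with Python’s standard difflib module, which ranks dictionary words by their similarity to a misspelt one (the word list here is invented and far smaller than the dynamic dictionary a real keyboard maintains):

```python
import difflib

# A tiny stand-in for the dictionary a phone keyboard would maintain.
dictionary = ["the", "there", "their", "editor", "text", "predictive"]

def suggest(word, max_suggestions=3):
    """Return the closest dictionary words to a (possibly misspelt) word."""
    return difflib.get_close_matches(word.lower(), dictionary, n=max_suggestions, cutoff=0.6)

print(suggest("teh"))    # ['the']
print(suggest("editr"))  # ['editor']
```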

A sophisticated mode of artificial intelligence, the autocorrect algorithm weighs up a considerable number of factors every time you type a single character – from the placement of your fingers on the keyboard to the grammar of the other words in the sentence – while also accounting for the phrases you use most often. The machine-learning algorithms also absorb, and reflect back, patterns in the language documented across the internet.

Or perhaps not. To side-step the well-cited irritations of predictive text, you may have found yourself fiddling with your settings, creating your own shortcuts and abbreviations for words commonly used in your communications. If that’s the case, congratulations: you may be more familiar with text editors than you first thought, as you’ve accidentally tapped into an intelligent code completion tool!

Get to grips with text editors and more as part of our MSc Computer Science with Artificial Intelligence

This 100% online computer science and artificial intelligence programme will equip you for a range of sought-after roles in software development. 

Develop core abilities of computational thinking, computational problem solving and software development, while acquiring specialist knowledge across increasingly sought-after skill sets spanning neural networks, genetic algorithms and data analytics. You’ll even undertake your own independent artificial intelligence project.

With employer demand for this expertise at an all-time high, enrol now and be part of this thrillingly fast-paced, far-reaching and ground-breaking field.

The real world impact of facial detection and recognition

From visual confirmation of rare diseases to securing smartphones, facial detection and recognition technologies have become embedded in both the background of our daily lives and the forefront of solving real-world problems. 

But is the resulting impact an invasive appropriation of personal data, or a benchmark in life-saving security and surveillance? Wherever you stand on the deep-learning divide, there is no denying the ways in which this ground-breaking biometric development is influencing the landscape of artificial intelligence (AI) application.

What is facial detection and recognition technology?

Facial detection and recognition systems are forms of AI that use algorithms to identify human faces in digital images. Trained to capture more detail than the human eye, they fall under the category of ‘neural networks’: aptly named computer software modelled on the human brain, built to recognise relationships and patterns in given datasets.

Key differences to note

Face detection is the broader term, given to any system that can identify the presence of a human face in a visual image. Face detection has numerous applications, including people-counting, online marketing, and even the auto-focus of a camera lens. Its core purpose is to flag the presence of a face. Facial recognition, however, is more specialised, and relates specifically to software primed for individual authentication. Its job is to identify whose face is present.

How does it work?

Facial recognition software follows a three-part process. Here’s a more granular overview, according to Toolbox:

Detection

A face is detected and extracted from a digital image. By marking a vast array of facial features (such as eye distance, nose shape, ethnicity and demographic data, and even facial expressions), the system creates a unique code, called a ‘faceprint’, that identifies the assigned individual.

Matching

This faceprint is then fed through a database, which utilises several layers of technology to match against other templates stored on the system. The algorithms are trained to capture nuance and consider differences in lighting, angle and human emotion.

Identification

This step depends on what the facial recognition software is used for — surveillance or authentication. The technology should ideally produce a one-to-one match for the subject, passing through various complex layers to narrow down options. (For example, some software providers even analyse skin texture along with facial recognition algorithms to increase accuracy.)
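
To make the outline above more tangible, here is a minimal sketch using the open-source face_recognition library – an assumption made purely for illustration, not the software used by any provider mentioned here. The image filenames are placeholders and each image is assumed to contain at least one face:

```python
import face_recognition

# Detection: load the images and compute a numerical encoding (a simple stand-in for a 'faceprint').
known_image = face_recognition.load_image_file("stored_user.jpg")       # placeholder path
unknown_image = face_recognition.load_image_file("camera_capture.jpg")  # placeholder path

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Matching and identification: compare each captured faceprint against the stored template.
for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print("Match" if match else "No match", f"(distance {distance:.2f})")
```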

Biometrics in action

If you’re an iPhone X user, you’ll be familiar with Apple’s Face ID authentication system as an example of this process. The gadget’s camera captures a face map using specific data points, allowing the stored user to unlock their device with a simple glance.

Some other notable face recognition systems include:

  • Amazon Rekognition: features include user verification, people counting and content moderation, often used by media houses, market analytics firms, ecommerce sites and credit solutions
  • BioID: GDPR-compliant solution used to prevent online fraud and identity theft
  • Cognitec: recognises faces in live video streams, with clients ranging from law enforcement to border control
  • FaceFirst: a security solution which aims to use DigitalID to replace cards and passwords
  • Trueface.ai: services extend to weapon detection, and the platform is utilised by numerous sectors including education and security

Real-world applications

As outlined in the list above, reliance on this mode of machine learning has permeated almost all areas of society, extending further still to healthcare and law enforcement agencies. This illustrates a prominent reliance on harvesting biometric data to solve large-scale global problems, including – at the extreme – those that are life-threatening and severe.

Medical diagnoses

We are beginning to see documented cases of physicians using these AI algorithms to detect the presence of rare and compromising diseases in children. According to The UK Rare Diseases Framework, 75% of rare diseases affect children, while more than 30% of children with a rare disease die before their fifth birthday. With 6% of people expected to be affected by a difficult-to-diagnose condition in their lifetime, this particular application of deep learning is vital.

Criminal capture

It was recently reported that the Metropolitan Police deployed facial recognition technology in Westminster, resulting in the arrests of four people. The force announced that this was part of a ‘wider operation to tackle serious and violent crime’ in the London borough. The software used was a vehicle-mounted live facial recognition (LFR) system, which enables police departments to identify passers-by in real time by scanning their faces and matching them against a database of stored facial images. According to the Met Police website, other applications of face identification include locating individuals on their ‘watchlist’ and providing essential information when there is an unconscious, non-communicative or seriously injured party on the scene.

Surveillance and compliance

A less intensive example, but one that could prove essential to our pandemic reality: surveillance cameras equipped with facial detection were used to monitor face mask compliance at a school in Atlanta, while similar technology has been applied elsewhere to help detect weapons.

Implications of procuring biometric information

Of course, no form of emerging or evolving technology comes without pitfalls. According to Analytics Insight, the accuracy rates of facial recognition algorithms are notably low for minorities, women and children, which is dangerously problematic. Controversy surrounding data protection, public monitoring and user privacy persists, while the generation of deepfake media (and software like it), used to replicate, transpose and project one individual’s face in place of another’s, gives rise to damaging – and potentially dangerous – authentication implications. Returning to the aforementioned Met Police arrests, even in this isolated sample, reports of false positives were made, sparking outcry from civil rights groups.

At the centre of this debate, however, one truth is abundantly clear: as a society, we are becoming rapidly reliant on artificial intelligence to function, and the inception of these recognition algorithms is certainly creating an all-new norm for interacting with technology.

Want to learn more about facial detection software?

Dive deeper into the helps and harms and real-world applications of this mode of machine learning (and more) as part of our MSc Computer Science with Artificial Intelligence

On this course, you’ll develop core abilities of computational thinking, computational problem solving and software development, while acquiring specialist knowledge across increasingly sought-after skill sets spanning neural networks, genetic algorithms and data analytics. You’ll even undertake your own independent artificial intelligence project.

With employer demand for this expertise at an all-time high, enrol now and be part of this thrillingly fast-paced, far-reaching and ground-breaking field.

The Internet of Things in the age of interconnectivity

Global online interconnectivity has woven itself seamlessly into our lives. How many of us can conceive of a modern life without the internet?

Going about our daily lives both personally and professionally, we reach for our mobile phones and devices for news, information, entertainment, and to communicate with each other. The ease and expectation of accessing information online 24/7 is taken as a matter of course. What most people may not consider, however, is how all this information technology is delivered to us. Digital transformation, due to emerging technologies, continues to grow exponentially. The Internet of Things (IoT) is an essential, and integral, element in ensuring current and future business success.

What is the Internet of Things and how did it evolve?

Simply put, the IoT is the concept of networking connected devices so that they can collect and transmit data. Nowadays, it enables digital technology to be embedded in our physical world, such as in our homes, cars, and buildings, via vast networks connecting to computers.

Historically, the concept of IoT devices has an interesting timeline, and its early pioneers are names that remain well-known to many of us today:

  • 1832. Baron Schilling creates the electromagnetic telegraph.
  • 1833. Carl Friedrich Gauss and Wilhelm Weber invent a code enabling telegraphic communication.
  • 1844. Samuel Morse transmits the first Morse code public message from Washington D.C. to Baltimore.
  • 1926. Nikola Tesla conceives of a time when what we know as a mobile phone will become a reality.
  • 1950. Alan Turing foresees the advent of artificial intelligence.
  • 1989. Tim Berners-Lee develops the concept of the World Wide Web.

Even ordinary physical objects became the subject of IoT applications:

  • 1982. Carnegie Mellon University students install micro-switches in a Coca-Cola vending machine to check its inventory levels and to see whether the drinks inside were cold enough.
  • 1990. John Romkey and Simon Hackett connect a toaster to the internet.

As the technology and research grew exponentially from the 1960s onwards, the term ‘Internet of Things’ itself was coined in 1999 by Procter & Gamble’s Kevin Ashton. By 2008, the first international conference on the IoT had been held in Switzerland. By 2021, it was reported that there were 35.82 billion IoT devices installed globally, with projections of 75.44 billion worldwide by 2025.

Real-world application

Given the huge potential of IoT technology, the scale of its cross-sector assimilation is unsurprising. For example, it impacts:

  • The consumer market. Designed to make life easier, consider the sheer number of internet-enabled smart devices – including wearables and other goods – that are in daily use. Common examples include smartphones and smartwatches, fitness trackers, home assistants, kitchen appliances, boilers, and home security cameras. We interact with internet connectivity every day; increasingly, many of us are already living in ‘smart’ homes. Optimising the customer experience is key to business success. Whether related to data-collecting thermostats which monitor energy consumption, or wifi providers which supply the best Bluetooth packages, all are driven by IoT systems.
  • Physical world infrastructure. On a grander scale, IoT technology is now focused on developing smart buildings and, in the long run, smart cities. In buildings, elements such as heating, lighting, lifts and security are already directed by automation. In the outside world, real-time, data-gathering traffic systems and networks rely on IoT to share data using machine learning and artificial intelligence.
  • Industrial and domestic sectors. Where, previously, many items and goods were manufactured and serviced off-grid, everything is now internet-connected. Items used domestically include washing machines, doorbells, thermostats, and gadgets and virtual assistant technology such as Alexa and Siri. Amazon distribution centres, factories and international mail delivery systems are all examples of environments that are reliant on IoT platforms.
  • Transportation. In an area of highly complex logistics, keeping the supply chain moving and reaching its destination is critical. The same can be applied to all other modes of transport, such as aeroplanes, ships, trains and vehicles. For the individual, connected cars are already a reality. Many vehicles have the ability to communicate with other systems and devices, sharing both internet access and data.
  • Healthcare. The impact of the global Covid pandemic has taken a huge toll on our lives. The stresses on worldwide healthcare and medical business models have become ever more pressing. The need for strategies and solutions to deliver optimal healthcare, as modelled on IoT, is being researched by many organisations including Microsoft. Devices such as pacemakers, cochlear implants, digital pills, and wearable tech such as diabetic control sensors, are making invaluable contributions to patients across the sector.

The technology behind the Internet of Things

IoT technology presents immeasurable benefits in our lives, and its scope is seemingly limitless. IoT platforms are interconnected networks of devices which constantly source, exchange, gather and share big data using cloud computing or physical databases. They consist of:

  • Devices. These connect to the internet and incorporate sensors and software which connect with other devices. For example, the Apple watch connects to the internet, uses cloud computing, and also connects with the Apple iPhone.
  • Communications. Examples include Bluetooth, MQTT, wifi and Zigbee (a brief MQTT example follows this list).
  • Cloud computing. This refers to the internet-based network on which data from IoT devices and applications is stored.
  • Edge computing. Tools such as IBM’s edge computing offerings use artificial intelligence to help solve business problems, increase security, and enhance both capacity and resilience.
  • Maintenance and monitoring. Monitoring and troubleshooting these devices and communications is essential to ensure optimum functionality.
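
As a small illustration of the communications layer mentioned above, the sketch below publishes a sensor reading over MQTT using the open-source paho-mqtt client; the broker address, topic and reading are placeholders invented for the example:

```python
import json
import paho.mqtt.publish as publish

# A reading that a thermostat or similar smart device might report (placeholder values).
reading = {"device_id": "thermostat-01", "temperature_c": 21.5}

# Publish the reading to an MQTT broker; the hostname and topic are placeholders.
publish.single(
    topic="home/livingroom/temperature",
    payload=json.dumps(reading),
    hostname="broker.example.com",
    port=1883,   # the standard (unencrypted) MQTT port
    qos=1,       # at-least-once delivery
)
```

A subscriber elsewhere on the platform – a dashboard, a cloud ingestion service or another device – would listen on the same topic to receive the reading.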

Inevitably, while the benefits to both international businesses and organisations are immense, IoT technology also attracts cybercrime and hackers. Cyber security threats target all areas of IoT – from businesses to individual users.

IoT has been hailed as the fourth Industrial Revolution. Future technology is already blending artificial intelligence with IoT, with the aim of enabling our personal and professional lives to become simpler, safer and more personalised. In fact, in terms of IoT security, artificial intelligence can be used for:

  • Evaluating information for optimisation
  • Learning previous routines
  • Decreasing downtime
  • Increasing the efficiency and efficacy of procedures
  • Creating solutions to ward off potential threats, thus enhancing security

Career prospects

Career opportunities within the computer science and artificial intelligence fields may include, but are not limited to:

  • Natural language processing
  • Machine learning engineering
  • Semantic technology
  • Data science
  • Business intelligence development
  • Research science
  • Big data engineering/architecture

Choosing the right AI and computer science course for you

If you’re looking for the qualifications to help you succeed in the fast-paced and highly rewarding field of IoT, then choose the University of York’s online MSc Computer Science with Artificial Intelligence programme.

Digital influences on the way we live and work

The exponential growth of digital connection in our world is all-pervasive and touches on every aspect of our daily lives, both personally and professionally.

Today, many people would be hard-pressed to imagine life before the advent of digital technology. It has blended and integrated seamlessly into everyday living. Global interconnectivity and the ability to communicate and network instantly is now a ‘given’. This expectation has markedly transformed the human experience across all areas, and the ‘always on’ culture has led to the creation of a vast online and computerised world. 

Creatively, this has inevitably resulted in phenomenally new ways of working and interpreting the world through data. Artificial intelligence (AI), and its attendant strands of computational science, are a vital link in the chain of twenty-first century life.

Whose choice is it anyway?

As the everyday world becomes saturated with digital information, choice and decision-making become harder to navigate. The sheer amount of data available has led to the development of programs that help end-users make these choices in a way that is bespoke to them. Examples range from the simple – choosing the best shampoo for your hair type, or which restaurant to pick for a special night out – to the more complex, such as looking for a new home in a different area.

Recommender systems are built into many technological platforms and are used for both individual and ecommerce purposes. Although choice availability appears straightforward, the process behind it is remarkably elaborate and sophisticated.

The science behind the experience

The recommender system is an information filtering system run by machine learning algorithms programmed to anticipate and predict user interest, user preferences and ratings in relation to products and/or information browsed online.

Currently, there are three main types of recommender systems:

  1. Content-based filtering. This is driven and influenced by user behaviour, picking up on what has been searched for previously or is being searched for currently. Keyword-dependent, it looks for patterns in the attributes of items a user has engaged with in order to inform decision making.
  2. Collaborative filtering recommender. This uses a more advanced approach in which similar users are identified based on their choices of similar items. Collaborative filtering methods centre on analysing user interactions with like-for-like items and common selections, enabling comparisons to be made between users.
  3. Hybrid recommender. An amalgam of the two previous types, this system blends content-based and collaborative signals once candidate recommendations have been generated.

The optimal functionality of recommendation engines depends upon information and raw data extracted from user experience and user ratings. When combined, these facilitate the building of user profiles to inform ecommerce targets and aims.

Multiple commonly accessed corporations and e-markets are highly visible and instantly recognisable on the online stage. Household names such as Amazon and Netflix are brands that immediately spring to mind. These platforms invest massively in state-of-the-art operations and big data collection to constantly improve, evolve and calibrate their commercial aims and marketing.

Computer architecture and system software are predicated on a myriad of sources and needs, and rely heavily on machine learning and deep learning. These two terms are often treated as interchangeable buzzwords, but deep learning is an evolution of machine learning. Using programmable neural networks, machines are able to make accurate and precise decisions without human intervention. Within the machine learning environment, the term ‘nearest neighbour’ refers to an essential classification algorithm – not to be confused with its traditional association in the pre-computer era.
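
For instance, a minimal nearest-neighbour classifier can be sketched with the widely used scikit-learn library; the features and labels below are invented purely for illustration:

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented example: guess whether a viewer is a film fan or a documentary fan
# from two simple features (hours watched per week, average session length in hours).
features = [[10, 1.5], [12, 2.0], [2, 0.5], [3, 0.4], [11, 1.8]]
labels = ["film fan", "film fan", "documentary fan", "documentary fan", "film fan"]

model = KNeighborsClassifier(n_neighbors=3)  # each prediction looks at the 3 nearest viewers
model.fit(features, labels)

print(model.predict([[9, 1.6]]))  # ['film fan'] – the new viewer sits closest to the film fans
```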

Servicing enabling protocols, technologies and real-world applications requires in-depth skills and knowledge across multiple disciplines. By no means an exhaustive list, familiarity with, and indeed specialist awareness of, the following terms are integral to the optimisation of recommendation algorithms and the different types of recommendation models:

  • Matrix factorization. This refers to a family of collaborative filtering algorithms used in recommender systems, in which the user-item interaction matrix is decomposed into the product of two lower-dimensional rectangular matrices. The decomposition uncovers the latent features that explain the interactions between different users and items, and the learned factors are then used to predict missing ratings and generate product recommendations (a minimal sketch follows this list).
  • Cold-start problem. This is the difficulty of making good recommendations for new users or new items for which little or no interaction data yet exists – an issue which presents in both supervised and unsupervised machine learning and is frequently addressed in production systems.
  • Cosine similarity. An approach to measuring the similarity between two non-zero vectors, used, for example, when the system needs to determine the nearest users in order to provide recommendations.
  • Data sparsity. Many commercial recommender systems are built around large datasets, so the user-item matrices used in collaborative filtering can be large and sparse. This sparsity can present a challenge for optimal recommendation performance.
  • Data science. IBM’s overview offers a comprehensive explanation, and introduction to, the employment of data science within its use of data mining and complex metadata.
  • Programming languages. Globally used programming languages include Scala, Perl, SQL, C++ and Python, with Python one of the foremost languages in this field. Managed by GroupLens Research at the University of Minnesota, MovieLens makes use of Python in collaborative filtering. Its program predicts film ratings based on user profiles, user ratings and overall user experience.
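
To ground the first of those terms, here is a minimal matrix factorization sketch in plain NumPy; the rating matrix is invented, and real systems work with far larger, sparser data and more refined optimisers:

```python
import numpy as np

# Invented user-item rating matrix; 0 means 'not yet rated'.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                # number of latent features
rng = np.random.default_rng(0)
P = rng.random((n_users, k)) * 0.1   # user factors
Q = rng.random((n_items, k)) * 0.1   # item factors

lr, reg = 0.01, 0.02                 # learning rate and regularisation strength
for _ in range(5000):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] > 0:                  # learn only from observed ratings
                error = R[u, i] - P[u] @ Q[i]
                p_u = P[u].copy()
                P[u] += lr * (error * Q[i] - reg * P[u])
                Q[i] += lr * (error * p_u - reg * Q[i])

predicted = P @ Q.T
print(np.round(predicted, 1))  # the zero entries now hold predicted ratings
```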

What’s happening with social media?

In recent years, recommender systems have become integral to the continued growth of social media. Due to the nature of the interconnected online community across locations and social demographics, a higher volume of traffic is both generated and triggered by recommendations reinforced by likes and shares.

Online shopping has exploded as a result of the global pandemic. Platforms such as Meta (formerly Facebook) and Etsy have been joined by new e-businesses and ‘shop fronts’, all of which incorporate the latest recommender technology. Targeted focus centres on growing user profiles by analysing purchase history and the browsing of new items, with the aim of both attracting new users and retaining existing ones. These capabilities are made possible through the use of recommender systems.

Careers in artificial intelligence and computer science      

Professionally relevant associations such as the Institute of Electrical and Electronics Engineers (IEEE), and digital libraries such as the Association for Computing Machinery (ACM) Digital Library, exist to provide further knowledge and support to those working in this fascinating field.

Whichever specialisation appeals – computer science, software development, programming, AI-oriented solutions development – there are many pathways that can be leveraged to build a rewarding career. There are plenty of in-demand roles and no shortage of successful and creative organisations in which to work, as evidenced in 50 Artificial Intelligence Companies to Watch in 2022.

Further your learning in this fast-paced field

If you’re looking for a university course offering up-to-date theoretical and practical knowledge with holistic, pedagogical and real-world expertise, then choose the University of York’s online MSc Computer Science with Artificial Intelligence course and take your next step towards a fulfilling and stimulating career.

What is computer vision?

Research has shown that 84% of UK adults own a smartphone. As a result, taking a photo or recording a video and sharing it with friends has never been easier. Whether sharing directly with friends on the popular messaging app WhatsApp, or uploading to booming social media platforms such as Instagram, TikTok or YouTube, the digital world is more visual than ever before.

Internet algorithms index and search text with ease. When you use Google to search for something, chances are the results are fairly accurate or answer your question. However, images and videos aren’t indexed or searchable in the same way. 

When uploading an image or video, the owner has the option to add a meta description: a text string which isn’t visible on screen but which tells algorithms what is in that particular piece of media. However, not all rich media has associated meta descriptions, and they aren’t always accurate.

Computer vision is the field of study focused on the problem of making computers see: developing methods that reproduce the capability of human vision and enable computers to understand the content of digital images. It is a multidisciplinary field encompassing artificial intelligence, machine learning, statistical methods, and other engineering and computer science disciplines.

How computer vision applications operate

Many computer vision applications involve trying to identify and classify objects from image data. They do this using the following methods to answer certain questions.

  • Object classification: What broad category of object is in this photograph?
  • Object identification: Which type of a given object is in this photograph?
  • Object verification: Is the object in the photograph?
  • Object detection: Where are the objects in the photograph?
  • Object landmark detection: What are the key points for the object in the photograph?
  • Object segmentation: What pixels belong to the object in the image?
  • Object recognition: What objects are in this photograph and where are they?
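
As a small, practical example of object classification, the sketch below runs a photograph through a network pretrained on ImageNet, using the open-source PyTorch and torchvision libraries; the image path is a placeholder, and the weights API shown assumes a reasonably recent torchvision release:

```python
import torch
from PIL import Image
from torchvision import models

# Load a network pretrained on ImageNet (assumes torchvision >= 0.13 for this weights API).
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# The weights object bundles the matching preprocessing: resize, crop and normalise.
preprocess = weights.transforms()

image = Image.open("photo.jpg").convert("RGB")   # placeholder path
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():
    probabilities = torch.softmax(model(batch)[0], dim=0)

top_prob, top_class = torch.topk(probabilities, 3)
for p, c in zip(top_prob, top_class):
    # Print the network's three most confident broad-category guesses.
    print(f"{weights.meta['categories'][int(c)]}: {float(p):.2%}")
```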

Other methods of analysis used in computer vision include:

  • video motion analysis to estimate the velocity of objects in a video or the camera itself;
  • image segmentation where algorithms partition images into multiple sets of views;
  • scene reconstruction, which creates a 3D model of a scene input as images or video; and
  • image restoration where blurring is removed from photos using machine learning filters.

Why computer vision is difficult to solve

The earliest experiments in computer vision began in the 1950s. Since then, its applications have spanned robotics and mobile robot navigation, military intelligence, human-computer interaction, image retrieval in digital libraries, and the rendering of realistic scenes in computer graphics.

Despite decades of research, computer vision remains an unsolved problem. While some strides have been made, specialists are yet to reach the same level of success in computers as is innate in humans.

For fully-sighted humans, seeing and understanding what we’re looking at is effortless. Because of this ease, computer vision engineers originally believed that reproducing this behaviour within machines would also be a fairly simple problem to solve. That, it turns out, has not been the case.

While human vision feels simple to us, psychologists and biologists don’t yet have a complete understanding of why and how it is so simple. There is still a knowledge gap in explaining the complete workings of our eyes and how our brains interpret what our eyes see.

As humans, we are also able to interpret what we see under a variety of different conditions – different lighting, angles, and distances. With a range of variables, we can still reach the same conclusion and correctly identify an object. 

Without understanding the complexities of human vision as a whole, it’s difficult to replicate or adapt for success in computer vision.

Recent progress in computer vision

While the problem of computer vision doesn’t yet have an entire solution, progress has been made in the field due to innovations in artificial intelligence – particularly in deep learning and neural networks. 

As the amount of data generated every day continues to grow, so do the capabilities in computer vision. Visual data is booming, with over three billion images being shared online per day, and computer science advancements mean the computing power to analyse this data is now available. Computer vision algorithms and hardware have evolved in their complexity, resulting in higher accuracy rates for object identification.

Facial recognition in smartphones has become a key feature of unlocking our mobile devices in recent years, a success which is down to computer vision. 

Other problems which have been solved in this vast field also include:

  • optical character recognition (OCR) which allows software to read the text from within an image, PDF, or a handwritten scanned document
  • 3D model building, or photogrammetry, which may be a stepping stone towards identifying objects from different angles
  • safety in autonomous vehicles, or self-driving cars, where lane line and object detection has been developed
  • revolutionising healthcare with image analysis features to detect symptoms in medical imaging and X-rays
  • augmented reality and mixed reality, which uses object tracking in the real world to determine the location of a virtual object on the device’s display 

The ultra-fast computing machines available today, along with quick and reliable internet connections and cloud networks, make the process of deciphering an image using computer vision much faster than when this field was first being investigated. Plus, with companies like Meta, Google, IBM and Microsoft also sharing their artificial intelligence research through open sourcing, it’s certain that computer vision research and discoveries will progress at a quicker pace than in the past.

The computer vision and hardware market is expected to be worth $48.6 billion, making it a lucrative industry where the pace of change is accelerating.

Specialise in artificial intelligence

If you have an interest in computer vision, expanding your skills and knowledge in artificial intelligence is the place to start. With this grounding, you could be the key that solves many unanswered questions in computer vision – a field with potential for huge growth.

The University of York’s online MSc Computer Science with Artificial Intelligence will set you up for success. Study entirely online and part-time around your current commitments. Whether you already have experience in computer science or you’re looking to change career and move into this exciting industry, this master’s degree is for you.

Where robotics and artificial intelligence meet

Long before the fully autonomous robots that appeared in the second half of the 20th century, human beings were fascinated by automata. From Da Vinci’s mechanical knight (designed around 1495) to Vaucanson’s “digesting duck” of 1739, and John Dee’s flying beetle of 1543 (which caused him to be charged with sorcery), the history of robotics stretches back well before 1941, when Isaac Asimov first coined the term in one of his short stories.

Robotics combines computer science and mechanical engineering to construct robots that are able to assist humans in various tasks. We’re used to seeing industrial robots in manufacturing, for example in the construction of cars. And robots have been, and continue to be, particularly useful in heavy industry, helping with processes that could be dangerous for humans and result in injury or even death. Robotic arms, sometimes known as manipulators, were originally utilised in the handling of radioactive or biohazardous materials which would damage human tissues and organs on exposure.

ABB Robotics is one of the leading multinational companies that deals in service robotics for manufacturing. Robotics applications include welding, heavy lifting, assembly, painting and coating, bonding and sealing, drilling, polishing, and palletising and packaging. These are all heavy-duty tasks that require a variety of end effectors, but robotics technology has progressed considerably and can also be seen in healthcare carrying out sophisticated medical procedures.

With the global pandemic, advances in robotics for surgery have been invaluable, allowing surgeons to remotely control the procedure from a safe distance. Now this technology is being developed further to allow surgeons to log in to operating theatres anywhere in the world, using remote control systems on a tablet or a laptop. They can then carry out surgery with the assistance of medical robotics on site. The technology was created by Nadine Hachach-Haram, who grew up in war-torn Lebanon, and there is no doubt it will be put to good use in locations affected by medical inequality, conflict, or both.

Mobile robots are also fairly common in various industries but we have yet to see them take over domestic chores in a way that perhaps the inventors of Roomba (the autonomous robotic vacuum cleaner) may have hoped they would. And yet, research has shown that people can tell what kind of personality a Neato Botvac has simply by the way that it moves. The study has provided interesting insights into human-robot interaction. Another study in 2017 demonstrated that anthropomorphic robots actually made people feel less lonely. People who work alongside collaborative robots, whether in the military or in manufacturing, also tend to express affection for their “cobots”.

Building on this affection that it seems people can and do develop for robots, Amazon has created Astro, a rolling droid that incorporates AI and is powered by Alexa, Amazon’s voice assistant. Astro also offers a detachable cup holder if your hands are too full to carry your coffee into the next room. Amazon has revolutionised automation with the use of Kiva robots in its warehouses and data science in its online commerce. So, it will be interesting to see if it can succeed where others have failed in making home robots popular.

What’s the difference between artificial intelligence and robotics?

While artificial intelligence refers to the computer programs that process immense amounts of information in order to “think”, robotics refers to machines designed to carry out assistive tasks that don’t necessarily require intelligence. Artificial intelligence has given us neural networks built on machine learning, which mimic the neural pathways of the human brain. It seems an obvious next step to integrate these powerful developments with robotic systems to create intelligent robots.

Tesla’s self-driving cars contain neural networks that use autonomy algorithms to support the car’s real-time object detection, motion planning, and decision-making in complicated real-world situations. This level of advanced architectures and programming in the use of robotics is now being proposed with the Tesla Bot, a humanoid robot that Elon Musk says is designed to eliminate dangerous, repetitive, and boring tasks. Andrew Maynard, Associate Dean at the College of Global Futures, Arizona State University voices caution with regard to “a future that, judging by Musk’s various endeavours, will be built on a set of underlying interconnected technologies that include sensors, actuators, energy and data infrastructures, systems integration and substantial advances in computer power.” He adds that “before technology can become superhuman, it first needs to be human – or at least be designed to thrive in a human-designed world.”

It’s an interesting perspective, and one that goes beyond the usual moral and ethical concerns of whether robots could be used for ill, a theme often explored in science fiction films like The Terminator, Chappie, and Robot and Frank. Science fiction has always provided inspiration for actual technologies but it also serves to reflect back some of our own moral conundrums. If, as Elon Musk has indicated, the Tesla Bot “has the potential to be a generalised substitute for human labour over time,” what does this mean for artificially intelligent robotics? Would we essentially be enslaving robots? And, of course, this is built on the belief that the foundation of the economy will continue to be labour.

The Tesla Bot hasn’t yet reached the prototype stage, so whether Musk’s vision becomes a reality remains to be seen. The kinematics required to support bipedal humanoid robots, though, are extremely complex. When it comes to bipedal robot design, LEONARDO (LEgs ONboARD drOne) is a quadcopter with legs. Only 76cm tall with an extremely light structure, even with an exoskeleton, LEONARDO looks far from the humanoids that the robotics industry may believe will become embedded in our everyday lives. Yet its multimodal locomotion system solves (or perhaps simply avoids) some of the issues real-world bipedal robots experience related to weight and centre of gravity. Mechatronics is an interdisciplinary branch of engineering that concerns itself with these kinds of issues. Some would say that mechatronics is where control systems, computers, electronic systems, and mechanical systems overlap. However, others would say that it’s simply a buzzword which is interchangeable with automation, robotics, and electromechanical engineering.

Build your knowledge in artificial intelligence with an MSc

Artificial intelligence is vital to take robotics research to the next level and enable robots to go beyond the relatively simple tasks they can complete on their own, as well as the more complex tasks they support humans with. Research in areas such as machine learning, distributed artificial intelligence, computer vision, and human-machine interaction will all be key to the future of robotics. 

Inspired to discover more about how you can specialise in AI? Study a 100% online and part-time MSc Computer Science with Artificial Intelligence with the University of York.

 

What is reinforcement learning?

Reinforcement learning (RL) is a subset of machine learning that allows an AI-driven system (sometimes referred to as an agent) to learn through trial and error using feedback from its actions. This feedback is either negative or positive, signalled as punishment or reward, with the aim, of course, of maximising the reward function. RL learns from its mistakes and offers artificial intelligence that mimics natural intelligence as closely as is currently possible.

In terms of learning methods, RL is similar to supervised learning only in that it uses a mapping between input and output, but that is where the similarity ends. In supervised learning, the feedback contains the correct set of actions for the agent to follow; in RL there is no such answer key, and the agent must decide for itself how to perform the task correctly. Compared with unsupervised learning, RL has different goals: the goal of unsupervised learning is to find similarities or differences between data points, while RL’s goal is to find the most suitable action model to maximise the agent’s total cumulative reward. With no training dataset, the RL problem is solved by the agent’s own actions with input from the environment.

RL methods like Monte Carlo, state–action–reward–state–action (SARSA), and Q-learning offer a more dynamic approach than traditional machine learning, and so are breaking new ground in the field.

There are three types of RL implementations: 

  • Policy-based RL uses a policy or deterministic strategy that maximises cumulative reward
  • Value-based RL tries to maximise an arbitrary value function
  • Model-based RL creates a virtual model for a certain environment and the agent learns to perform within those constraints

How does RL work?

Describing fully how reinforcement learning works in one article is no easy task. To get a good grounding in the subject, the book Reinforcement Learning: An Introduction by Andrew Barto and Richard S. Sutton is a good resource.

The best way to understand reinforcement learning is through video games, which follow a reward and punishment mechanism. Because of this, classic Atari games have been used as a test bed for reinforcement learning algorithms. In a game, you play a character who is the agent that exists within a particular environment. The scenarios they encounter are analogous to a state. Your character or agent reacts by performing an action, which takes them from one state to a new state. After this transition, they may receive a reward or punishment. The policy is the strategy which dictates the actions the agent takes as a function of the agent’s state as well as the environment.

To build an optimal policy, the RL agent faces a dilemma: should it explore new states, or stick with what it already knows to maximise its reward? This is known as the exploration versus exploitation trade-off. The aim is not to chase immediate reward, but to optimise for maximum cumulative reward over the length of training. Time is also important – the reward doesn’t depend solely on the current state, but on the entire history of states. Policy iteration is an algorithm that helps find the optimal policy for given states and actions.

The environment in a reinforcement learning algorithm is commonly expressed as a Markov decision process (MDP), and almost all RL problems are formalised using MDPs. SARSA is an algorithm for learning a policy in a Markov decision process, and is a slight variation on the popular Q-learning algorithm. SARSA and Q-learning are the two most commonly used RL algorithms.
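
As a rough illustration of how these pieces fit together, the sketch below implements tabular Q-learning with an epsilon-greedy exploration strategy on a tiny, invented corridor environment (real problems use much richer environments, such as the Atari games mentioned above):

```python
import random

# A tiny corridor environment: states 0..4, start at 0, reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or move right

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table: one estimated value per (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: occasionally explore, otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate towards reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The greedy policy learned for each non-terminal state should be +1 (move right towards the goal).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```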

Some other frequently used methods include Actor-Critic, which is a Temporal Difference version of Policy Gradient methods. It’s similar to an algorithm called REINFORCE with baseline. The Bellman equation is one of the central elements of many reinforcement learning algorithms. It usually refers to the dynamic programming equation associated with discrete-time optimisation problems.

The Asynchronous Advantage Actor-Critic (A3C) algorithm is one of the newest developments in the field of deep reinforcement learning. Unlike other popular deep RL algorithms, such as deep Q-networks (DQN), which use a single agent and a single environment, A3C uses multiple agents, each with its own network parameters and a copy of the environment. The agents interact with their environments asynchronously, learning with every interaction and contributing to the total knowledge of a global network. The global network also gives the agents more diversified training data. This mimics the real-life environment in which humans gain knowledge from the experiences of others, allowing the entire global network to benefit.

Does RL need data?

In RL, data is accumulated through trial and error as the system interacts with its environment; a prepared dataset is not part of the input in the way it would be in supervised or unsupervised machine learning.

Temporal difference (TD) learning is a class of model-free RL methods that learn via bootstrapping from a current estimate of the value function. The name “temporal difference” comes from the fact that it uses changes – or differences – in predictions over successive time steps to push the learning process forward. At any given time step, the prediction is updated, bringing it closer to the prediction of the same quantity at the next time step. Often used to predict the total amount of future reward, TD learning is a combination of Monte Carlo ideas and Dynamic Programming. However, whereas learning takes place at the end of any Monte Carlo method, learning takes place after each interaction in TD.
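
A minimal sketch of that bootstrapping idea, using a simple random walk invented for the example: the value estimate for each state is nudged after every single step, using the current estimate of the next state rather than waiting for the episode to end.

```python
import random

# Random walk over states 0..6, starting in the middle (state 3).
# Stepping into state 6 ends the episode with reward 1; stepping into state 0 ends it with reward 0.
N_STATES, START = 7, 3
TERMINALS = (0, N_STATES - 1)
V = [0.5] * N_STATES           # initial value estimates
V[0] = V[N_STATES - 1] = 0.0   # terminal states carry no future value
alpha = 0.1                    # step size

for episode in range(2000):
    state = START
    while state not in TERMINALS:
        next_state = state + random.choice([-1, 1])
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # TD(0) update: move V(state) towards reward + current estimate of V(next_state),
        # i.e. learn from the very next step instead of waiting for the episode to finish.
        V[state] += alpha * (reward + V[next_state] - V[state])
        state = next_state

print([round(v, 2) for v in V[1:-1]])  # drifts towards roughly [0.17, 0.33, 0.5, 0.67, 0.83]
```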

TD Gammon is a computer backgammon program that was developed in 1992 by Gerald Tesauro at IBM’s Thomas J. Watson Research Center. It used RL and, specifically, a non-linear form of the TD algorithm to train computers to play backgammon to the level of grandmasters. It was an instrumental step in teaching machines how to play complex games.

Monte Carlo methods represent a broad class of algorithms that rely on repeated random sampling to obtain numerical estimates of probabilities and other quantities. Monte Carlo methods can be used to estimate the probability of:

  • an opponent’s move in a game like chess
  • a weather event occurring in the future
  • the chances of a car crash under specific conditions

Named after the casino in the city of the same name in Monaco, Monte Carlo methods first arose within the field of particle physics and contributed to the development of the first computers. Monte Carlo simulations allow people to account for risk in quantitative analysis and decision making. It’s a technique used in a wide variety of fields including finance, project management, manufacturing, engineering, research and development, insurance, transportation, and the environment.

In machine learning or robotics, Monte Carlo methods provide a basis for estimating the likelihood of outcomes in artificial intelligence problems using simulation. The bootstrap method is built upon Monte Carlo methods, and is a resampling technique for estimating a quantity, such as the accuracy of a model on a limited dataset.
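As a small illustration of the Monte Carlo idea behind the bootstrap, the sketch below resamples a made-up dataset with replacement many times to estimate how much its mean might vary. All values are invented for illustration.

```python
import random
import statistics

data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.3, 4.4]  # a small, made-up sample

bootstrap_means = []
for _ in range(10_000):
    resample = [random.choice(data) for _ in data]  # sample with replacement
    bootstrap_means.append(statistics.mean(resample))

print("estimated mean:", statistics.mean(bootstrap_means))
print("estimated standard error:", statistics.stdev(bootstrap_means))
```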

Applications of RL

RL is the method DeepMind used to teach artificial intelligence how to play complex games like chess, Go, and shogi (Japanese chess). It was used in the building of AlphaGo, the first computer program to beat a professional human Go player. From this grew the deep neural network agent AlphaZero, which taught itself to play chess well enough to beat the chess engine Stockfish after just four hours of training.

AlphaZero has only two parts: a neural network, and an algorithm called Monte Carlo Tree Search. Compare this with the brute-force computing power of Deep Blue, which, even in 1997 when it beat world chess champion Garry Kasparov, could consider 200 million possible chess positions per second. The representations learned by deep neural networks like those used by AlphaZero, however, are opaque, so our understanding of their decisions is restricted. The paper Acquisition of Chess Knowledge in AlphaZero explores this conundrum.

Deep RL has also been proposed as a way for unmanned spacecraft to navigate new environments, whether that’s Mars or the Moon. MarsExplorer is an OpenAI Gym-compatible environment developed by a group of Greek scientists. The team trained four deep reinforcement learning algorithms on the MarsExplorer environment – A3C, Rainbow, PPO, and SAC – with PPO performing best. MarsExplorer is the first OpenAI Gym-compatible reinforcement learning framework optimised for the exploration of unknown terrain.

Reinforcement learning is also used in self-driving cars, in trading and finance to predict stock prices, and in healthcare for diagnosing rare diseases.

Deepen your learning with a Master’s

These complex learning systems created by reinforcement learning are just one facet of the fascinating and ever-expanding world of artificial intelligence. Studying for a Master’s degree can allow you to contribute to this field, which offers numerous possibilities and solutions to societal problems and the challenges of the future.

The University of York offers a 100% online MSc Computer Science with Artificial Intelligence to expand your learning, and your career progression.

The role of natural language processing in AI

What is natural language processing?

Natural language processing (NLP) is a branch of artificial intelligence within computer science that focuses on helping computers to understand the way that humans write and speak. This is a difficult task because it involves a lot of unstructured data. The style in which people talk and write (sometimes referred to as ‘tone of voice’) is unique to individuals, and constantly evolving to reflect popular usage.

Understanding context is also an issue – something that requires semantic analysis for machine learning to get a handle on. Natural language understanding (NLU) is a sub-branch of NLP that deals with these nuances via machine reading comprehension rather than simply understanding literal meanings. The aim of NLP and NLU is to help computers understand human language well enough that they can converse in a natural way.

Real-world applications and use cases of NLP include:

  • Voice-controlled assistants like Siri and Alexa.
  • Natural language generation for question answering by customer service chatbots.
  • Streamlining the recruiting process on sites like LinkedIn by scanning through people’s listed skills and experience.
  • Tools like Grammarly which use NLP to help correct errors and make suggestions for simplifying complex writing.
  • Language models like autocomplete which are trained to predict the next words in a text, based on what has already been typed.

All these functions improve the more we write, speak, and converse with computers: they are learning all the time. A good example of this iterative learning is Google Translate, which uses a system called Google Neural Machine Translation (GNMT). GNMT operates using a large artificial neural network to increase fluency and accuracy across languages. Rather than translating one piece of text at a time, GNMT attempts to translate whole sentences. Because it scours millions of examples, GNMT uses broader context to deduce the most relevant translation. It also finds commonality between many languages rather than relying on a single bridging language. Unlike the original Google Translate, which went through the lengthy process of translating from the source language into English before translating into the target language, GNMT uses “zero-shot translation” – translating directly from source to target.

Google Translate may not yet be good enough for medical instructions, but NLP is widely used in healthcare. It is particularly useful for aggregating information from electronic health record systems, which are full of unstructured data. Not only is the data unstructured, but because of the challenges of using sometimes clunky platforms, doctors’ case notes may be inconsistent and will naturally use lots of different keywords. NLP can help discover previously missed or improperly coded conditions.

How does natural language processing work?

Natural language processing can be structured in many different ways using different machine learning methods according to what is being analysed. It could be something simple like frequency of use or sentiment attached, or something more complex. Whatever the use case, an algorithm will need to be formulated. The Natural Language Toolkit (NLTK) is a suite of libraries and programs that can be used for symbolic and statistical natural language processing in English, written in Python. It can help with all kinds of NLP tasks like tokenising (also known as word segmentation), part-of-speech tagging, creating text classification datasets, and much more.
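As a small, hedged example of the kind of word-level tasks NLTK handles, the sketch below tokenises a sentence and tags each word with its part of speech. The example sentence is made up, and the resource downloads are only needed once.

```python
import nltk

# Download the tokeniser and tagger resources once (names may vary slightly by NLTK version)
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Natural language processing helps computers understand people."
tokens = nltk.word_tokenize(sentence)   # tokenising / word segmentation
tags = nltk.pos_tag(tokens)             # part-of-speech tagging
print(tags)                             # e.g. [('Natural', 'JJ'), ('language', 'NN'), ...]
```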

These initial tasks in word-level analysis are used for sorting, helping to refine the problem and the coding needed to solve it. Syntax analysis, or parsing, is the process that follows, drawing out exact meaning based on the structure of the sentence using the rules of formal grammar. Semantic analysis then helps the computer learn the less literal meanings that go beyond the standard lexicon. This is often linked to sentiment analysis.

Sentiment analysis is a way of measuring tone and intent in social media comments or reviews. It is often used on text data by businesses so that they can monitor their customers’ feelings towards them and better understand customer needs. In 2005 when blogging was really becoming part of the fabric of everyday life, a computer scientist called Jonathan Harris started tracking how people were saying they felt. The result was We Feel Fine, part infographic, part work of art, part data science. This kind of experiment was a precursor to how valuable deep learning and big data would become when used by search engines and large organisations to gauge public opinion.

Simple emotion detection systems use lexicons – lists of words and the emotions they convey from positive to negative. More advanced systems use complex machine learning algorithms for accuracy. This is because lexicons may class a word like “killing” as negative and so wouldn’t recognise the positive connotations from a phrase like, “you guys are killing it”. Word sense disambiguation (WSD) is used in computational linguistics to ascertain which sense of a word is being used in a sentence.
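A minimal sketch of the lexicon approach – and its limitation – might look like the following Python snippet. The tiny word lists are invented for illustration; real systems use far larger lexicons or trained models.

```python
# A naive lexicon marks "killing" as negative, so slang like "killing it" is misread.
POSITIVE = {"good", "love", "great"}
NEGATIVE = {"bad", "hate", "killing"}

def lexicon_score(text):
    """Count positive words minus negative words in a piece of text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(lexicon_score("you guys are killing it"))  # -1: the positive slang sense is missed,
# which is why more advanced systems use machine learning and word sense disambiguation.
```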

Other algorithms that help with understanding of words are lemmatisation and stemming. These are text normalisation techniques often used by search engines and chatbots. Stemming algorithms work by using the end or the beginning of a word (a stem of the word) to identify the common root form of the word. This technique is very fast but can lack accuracy. For example, the stem of “caring” would be “car” rather than the correct base form of “care”. Lemmatisation uses the context in which the word is being used and refers back to the base form according to the dictionary. So, a lemmatisation algorithm would understand that the word “better” has “good” as its lemma.
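The difference is easy to see with NLTK’s built-in stemmer and lemmatiser, as in the hedged sketch below. The example words are illustrative, and exact outputs depend on the stemmer and dictionary used.

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# The WordNet dictionary is needed for lemmatisation (download once)
nltk.download("wordnet")
nltk.download("omw-1.4")

stemmer = PorterStemmer()
lemmatiser = WordNetLemmatizer()

print(stemmer.stem("flies"))                    # crude suffix stripping, e.g. 'fli'
print(stemmer.stem("running"))                  # 'run'
print(lemmatiser.lemmatize("better", pos="a"))  # 'good' - the dictionary base form (lemma)
```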

Summarisation is an NLP task that is often used in journalism and on the many newspaper sites that need to summarise news stories. Named entity recognition (NER) is also used on these sites to help with tagging and displaying related stories in a hierarchical order on the web page.

How does AI relate to natural language processing?

Natural language processing – understanding humans – is key to AI being able to justify its claim to intelligence. New deep learning models are constantly improving AI’s performance in Turing tests. Google’s Director of Engineering Ray Kurzweil predicts that AIs will “achieve human levels of intelligence” by 2029.

What humans say is sometimes very different to what humans do though, and understanding human nature is not so easy. More intelligent AIs raise the prospect of artificial consciousness, which has created a new field of philosophical and applied research.

Interested in specialising in NLP?

Whether your interest is in data science or artificial intelligence, the world of natural language processing offers solutions to real-world problems all the time. This fascinating and growing area of computer science has the potential to change the face of many industries and sectors and you could be at the forefront. 

Find out more about NLP with an MSc Computer Science with Artificial Intelligence from the University of York.

What you need to know about blockchain

Blockchain technology is best known for its role in fintech and making cryptocurrency a reality, but what is it? 

Blockchain is a database that stores information in a chain of blocks rather than in tables, and which can be decentralised by being made public. Bitcoin, one of the most talked-about and unpredictable cryptocurrencies, uses blockchain, as does Ether, the currency of Ethereum.

Although cryptocurrencies have been linked with criminal activity, blockchain’s mechanism of storing time-stamped data offers transparency and traceability. And although central banks and financial institutions have been wary of the lack of regulation, retailers are increasingly accepting Bitcoin transactions. It’s said that Bitcoin’s founder, Satoshi Nakamoto, created the cryptocurrency as a response to the 2008 financial crash: a way of circumventing financial institutions by saving and transferring digital currency in a peer-to-peer network without the involvement of a central authority.

Ethereum is a blockchain network that helped shift the focus away from cryptocurrencies when it launched in 2015 by offering a general-purpose blockchain that can be used in different ways. In a white paper written in 2013, Ethereum’s founder, Vitalik Buterin, wrote about the need for application development beyond the blockchain technology of Bitcoin, which would allow blockchains to be attached to real-world assets such as stocks and property. The Ethereum blockchain has also made it possible to create and exchange non-fungible tokens (NFTs). NFTs are mainly known as digital artworks but can also be other digital assets, such as viral video clips, gifs, music, or avatars. They’re attractive because once bought, the owner has exclusive rights to the content. They also protect the intellectual property of the artist by being tamper-proof.

There has recently been a lot of hype around NFTs because the piece Everydays: The First 5000 Days by digital artist Beeple (Mike Winkelmann) sold for a record-breaking $69,346,250 at auction. That’s the equivalent of 42,329 Ether, which was what Vignesh Sundaresan, owner of Metapurse, used to purchase the piece that combines 5,000 images created and collated over 13 years. NFTs may seem like a new technology but they’ve actually been around since 2014.

IOTA is the first cryptocurrency to make free micro-transactions between Internet of Things (IoT) objects possible. While Ethereum moved the focus away from cryptocurrency, IOTA is looking to move cryptocurrency beyond blockchain. By using a directed acyclic graph called the Tangle, IOTA removes the need for miners, allows for near-infinite scaling, and removes fees entirely.

How blockchain works

Blockchain applications are many and varied including the decentralisation of financial services, healthcare, internet browsing, real estate, government, voting, music, art, and video games. Blockchain solutions are increasingly utilised across industries, for example, to provide transparency in the supply chain, or in lowering administrative overheads with smart contracts.  

But how does it actually work? Blockchain uses several technologies – including distributed ledger technology, digital signatures, distributed networks and encryption methods – to link the blocks of the ledger for record-keeping. Information is collected in groups which make up the blocks. Each block has a certain capacity; once filled, it is chained to the previously filled block. This creates a timeline, because each block is given a timestamp which cannot be overwritten.
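A toy sketch in Python can illustrate the chaining idea: each block stores a timestamp, some data and the hash of the previous block, so tampering with any block breaks every link after it. This is purely illustrative and omits consensus, mining and networking.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash covers its timestamp, data and the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    block_bytes = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block

genesis = make_block("genesis block", previous_hash="0" * 64)
block_1 = make_block({"from": "alice", "to": "bob", "amount": 5}, genesis["hash"])
block_2 = make_block({"from": "bob", "to": "carol", "amount": 2}, block_1["hash"])

print(block_2["previous_hash"] == block_1["hash"])  # True: each block links back to the last
```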

The benefits of blockchain are seen not just in cryptocurrencies but in legal contracts and stock inventories as well as in the sourcing of products such as coffee beans. There are notoriously many steps between coffee leaving the farm where it was grown and reaching your coffee cup. Because of the complexity of the coffee market, coffee farmers often only receive a fraction of what the end-product is worth. Consumers also increasingly want to know where their coffee has come from and that the farmer received a fair price. Initially used as an effective way to cut out the various middlemen and streamline operations, blockchain is now being used as an added reassurance for supermarket customers. In 2020, Farmer Connect partnered with Folger’s coffee in using the IBM blockchain platform to connect producers with customers. A simple QR code helps consumers see how the coffee they hold in their hand was brought to the shelf. Walmart is another big name providing one of many case studies for offering transparency with blockchain by using distributed ledger software called Hyperledger Fabric.

Are blockchains hackable?

In theory, blockchains are hackable; in practice, however, the time and resources – including a vast network of computers – needed to achieve a successful hack are beyond the average hacker. Even if a hacker did manage to simultaneously control and alter 51% of the copies of the blockchain in order to make their own copy the majority copy, the altered blocks would have different timestamps and hash codes (the outputs of a cryptographic hash function). The deliberate design of blockchain – using decentralisation, consensus, and cryptography – makes it practically impossible to alter the chain without the change being noticed by others and irreversibly changing the data along the whole chain.

Blockchain is not invulnerable to cybersecurity attacks such as phishing and ransomware, but it is currently one of the most secure forms of data storage. Permissioned blockchains add an additional access-control layer: only identifiable, authorised users can access the network and perform actions. These blockchains are different to both public blockchains and private blockchains.

Are blockchains good investments?

Currencies like Bitcoin and Ether have so far proved to be strong performers over both the short term and the long term; NFTs are slightly different, though. A good way to think about NFTs is as collector’s items in digital form. Like anything collectable, it’s best to buy something because you truly admire it rather than because it’s valuable, especially in the volatile cryptocurrency ecosystem. It’s also worth bearing in mind that the values of NFTs are based entirely on what someone is prepared to pay rather than on any history of worth – demand drives price.

Anyone can start investing, but as most digital assets like NFTs can only be bought with cryptocurrency, you’ll need to purchase some, which you can easily do with a credit card on any of the crypto platforms. You will also need a digital wallet in which to store your cryptocurrency and assets. You’ll be issued with a public key, which works like an email address when sending and receiving funds, and a private key, which is like a password that unlocks your virtual vault. Your public key is generated from your private key, which makes them a pair and adds to the security of your wallet. Some digital wallets, like Coinbase, also serve as crypto bank accounts for savings. Although banks occasionally freeze accounts in relation to Bitcoin transactions, they are becoming more accustomed to cryptocurrencies. Investment banks such as JP Morgan and Barclays have even shown interest in the asset class, despite the New York attorney general declaring “Play by the rules or we will shut you down” in March 2021.
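The public/private key relationship can be sketched with the third-party ecdsa package (pip install ecdsa) and the SECP256k1 curve that Bitcoin uses. The message and workflow below are simplified assumptions; real wallets also derive addresses and manage keys far more carefully.

```python
from ecdsa import SigningKey, SECP256k1, BadSignatureError

private_key = SigningKey.generate(curve=SECP256k1)  # the "password" that unlocks your funds
public_key = private_key.get_verifying_key()        # shared openly, like an address

message = b"send 0.1 BTC to alice"                  # an illustrative message, not a real transaction
signature = private_key.sign(message)               # only the private key holder can produce this

try:
    print(public_key.verify(signature, message))    # True: anyone can check it with the public key
except BadSignatureError:
    print("signature invalid")
```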

Are blockchain transactions traceable?

In a blockchain, each node (a computer connected to the network) has a complete record of the data that has been stored on the blockchain since it began. For Bitcoin, that data is the entire history of the currency’s transactions. If one node presents an error in its data, the thousands of other nodes provide a reference point so the error can be corrected. This architecture means that no single node in the network has the power to alter the information held within it. It also means that the record of transactions in the blocks making up Bitcoin’s blockchain is irreversible, and that any Bitcoins extracted by a hacker can be easily traced through the transactions that appear in the wake of the hack.

Blockchain explorers allow anyone to see transactions happening in real-time.

Learn more about cryptocurrencies and blockchain

Whether you’re interested in improving cybersecurity or becoming a blockchain developer, or you’re looking for enhanced expertise in data science or artificial intelligence, specialist online Master’s degrees from the University of York cover some of the hottest topics in these areas.

Discover more and get a step ahead with the MSc Computer Science with Data Analytics or the MSc Computer Science with Artificial Intelligence.

The next step in machine learning: deep learning

What is deep learning?

Deep learning is a sector of artificial intelligence (AI) concerned with creating computer structures that mimic the highly complex neural networks of the human brain. Because of this, it is also sometimes referred to as deep neural learning or deep neural networks (DNNs). 

A subset of machine learning, the artificial neural networks utilised in deep learning are capable of sorting much more information from large data sets to learn and consequently use in making decisions. These vast amounts of information that DNNs scour for patterns are sometimes referred to as big data.

Is deep learning machine learning?

The technology used in deep learning means that computers are closer to thinking for themselves without support or input from humans (and all the associated benefits and potential dangers of this). 

Traditional machine learning requires rules-based programming and a lot of raw data preprocessing by data scientists and analysts. This is prone to human bias and is limited by what we are able to observe and mentally compute ourselves before handing over the data to the machine. Supervised learning, unsupervised learning, and semi-supervised learning are all ways that computers become familiar with data and learn what to do with it. 

Artificial neural networks (sometimes called neural nets for short) use layer upon layer of neurons so that they can process a large amount of data quickly. As a result, they have the “brain power” to start noticing other patterns and create their own algorithms based on what they are “seeing”. This is unsupervised learning and leads to technological advances that would take humans a lot longer to achieve. Generative modelling is an example of unsupervised learning.

Real-world examples of deep learning

Deep learning applications are used (and built upon) every time you do a Google search. They are also used in more complicated scenarios like in self-driving cars and in cancer diagnosis. In these scenarios, the machine is almost always looking for irregularities. The decisions the machine makes are based on probability in order to predict the most likely outcome. Obviously, in the case of automated driving or medical testing, accuracy is more crucial, so computers are rigorously tested on training data and learning techniques.

Everyday examples of deep learning are augmented by computer vision for object recognition and natural language processing for things like voice activation. Speech recognition is a function that we are familiar with through use of voice-activated assistants like Siri or Alexa, but a machine’s ability to recognise natural language can help in surprising ways. Replika, also referred to as “My AI Friend”, is essentially a chatbot that gets to know a user through questioning. It uses a neural network to have an ongoing one-to-one conversation with the user to gather information. Over time, Replika begins to speak like the user, giving the impression of emotion and empathy. In April 2020, at the height of the pandemic, half a million people downloaded Replika, suggesting curiosity about AI but also a need for AI, even if it does simply mirror back human traits. This is not a new idea as in 1966, computer scientist Joseph Weizenbaum created what was a precursor to the chatbot with the program ELIZA, the computer therapist.

How does deep learning work?

Deep learning algorithms make use of very large datasets of labelled data – such as images, text, audio, and video – in order to build knowledge. As the machine computes the content, scanning through and becoming familiar with it, it begins to recognise what to look for. Like a neuron in the human brain, each artificial neuron plays a role in processing data: it applies its part of the algorithm to the input it receives and produces an output. Groups of these neurons are organised into hidden layers.
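As a rough sketch of what a single artificial neuron does, the snippet below computes a weighted sum of its inputs plus a bias and passes the result through a sigmoid activation. All the numbers are made up for illustration.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, then a non-linearity."""
    z = np.dot(inputs, weights) + bias   # weighted sum of the inputs
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation squashes the result into (0, 1)

x = np.array([0.5, 0.8, 0.2])            # one input example
w = np.array([0.4, -0.6, 0.9])           # learned weights
print(neuron(x, w, bias=0.1))            # the neuron's output

# A hidden layer is simply many such neurons applied to the same inputs,
# which in practice is computed as a single matrix multiplication.
```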

At the heart of machine learning algorithms is automated optimisation. The goal is to achieve the most accurate output so we need the speed of machines to efficiently assess all the information they have and to begin detecting patterns which we may have missed. This is also core to deep learning and how artificial neural networks are trained.

TensorFlow is an open-source platform created by Google and most commonly used through its Python API. A symbolic maths library, it can be used for many tasks, but primarily for training, transfer learning, and developing deep neural networks with many layers. It’s particularly useful for reinforcement learning because it can calculate large numbers of gradients. The gradient measures how the error changes as the model’s parameters change – the slope of the error function. The gradient descent algorithm, for example, repeatedly adjusts the parameters in the direction that reduces the error, moving towards the lowest point of the error surface. The algorithm used to calculate the gradient of the error function across the layers of a network is “backpropagation”, short for “backward propagation of errors”.
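A minimal sketch of gradient descent in TensorFlow, assuming some made-up data generated from y = 2x + 1, might look like this: the tape records the computation, backpropagation supplies the gradients, and each step moves the parameters against the gradient.

```python
import tensorflow as tf

# Toy data drawn from y = 2x + 1 (values invented for illustration)
x = tf.constant([1.0, 2.0, 3.0, 4.0])
y = tf.constant([3.0, 5.0, 7.0, 9.0])

w = tf.Variable(0.0)
b = tf.Variable(0.0)
learning_rate = 0.01

for step in range(2000):
    with tf.GradientTape() as tape:
        error = tf.reduce_mean((w * x + b - y) ** 2)   # mean squared error
    grad_w, grad_b = tape.gradient(error, [w, b])       # gradients via backpropagation
    w.assign_sub(learning_rate * grad_w)                # step against the gradient
    b.assign_sub(learning_rate * grad_b)

print(w.numpy(), b.numpy())   # should approach w ≈ 2, b ≈ 1
```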

Convolutional Neural Networks (CNNs) are among the most widely used deep learning models, particularly for image recognition, and they also appear in deep reinforcement learning agents that learn from visual input. CNNs learn increasingly abstract features through their deeper layers. They can be accelerated by using Graphics Processing Units (GPUs) because many pieces of data can be processed simultaneously. They perform feature extraction by analysing pixel colour and brightness, or simply intensity values in the case of greyscale images.
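A small convolutional network can be defined in a few lines with Keras, as in the sketch below. The layer sizes, the 28×28 greyscale input and the ten-class output are assumptions chosen for illustration.

```python
import tensorflow as tf

# A compact CNN for 28x28 greyscale images (e.g. handwritten digits)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),  # deeper layers capture more abstract features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),         # one score per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```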

Recurrent Neural Networks (RNNs) stand out because they were the first architecture of their kind able to remember their previous inputs. Because of this, RNNs are used in speech recognition and natural language processing in applications like Google Translate.

Can deep learning be used for regression?

Neural networks can be used for both classification and regression. However, regression models only work well if they’re the right fit for the data, and that can affect the network architecture. Classifiers in something like image recognition have more of a compositional nature compared with the many variables that can make up a regression problem. Regression offers more insight than simply “Can we predict Y given X?”, because it explores the relationship between variables. Most regression models don’t fit the data perfectly, but neural networks are flexible enough to pick an appropriate form of regression, and hidden layers can always be added to improve prediction.

Knowing when to use regression or not to solve a problem may take some research. Luckily, there are lots of tutorials online to help, such as How to Fit Regression Data with CNN Model in Python.

Ready to discover more about deep learning?

The University of York’s online MSc Computer Science with Artificial Intelligence is the ideal next step if your career ambitions lie in this exciting and fast-paced sector.

Whether you already have knowledge of machine learning algorithms or want to immerse yourself in deep learning methods, this master’s degree will equip you with the knowledge you need to get ahead.

What is machine learning?

Machine learning is considered to be a branch of both artificial intelligence (AI) and computer science. It uses algorithms to replicate the way that humans learn but can also analyse vast amounts of data in a short amount of time. 

Machine learning algorithms are usually written to look for recurring themes (pattern recognition) and spot anomalies, which can help computers make predictions with more accuracy. This kind of predictive modelling can be for something as basic as a chatbot anticipating what your question may be about, to something quite complex, like a self-driving car knowing when to make an emergency stop.

It was an IBM employee, Arthur Samuel, who is credited with creating the phrase “machine learning” in his 1959 research paper, “Some studies in machine learning using the game of checkers”. It’s amazing to think that machine learning models were being studied as early as 1959 given that computers now contribute to society in important areas as diverse as healthcare and fraud detection.

Is machine learning AI?

Machine learning represents just one part of AI’s capabilities. Three major areas of interest within AI are machine learning, deep learning, and artificial neural networks. Deep learning is a field within machine learning, and neural networks are a field within deep learning. Traditionally, machine learning is very structured and requires more human intervention in order for the machine to start learning, via supervised learning algorithms. Training data is chosen by data scientists to help the machine determine the features it needs to look for within labelled datasets. Validation datasets are then used to ensure an unbiased evaluation of how well the model fits the training dataset. Lastly, test datasets are used to finalise the model fit.
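A hedged sketch of that three-way split with scikit-learn is shown below. The 100 data points and the 60/20/20 split are made up for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Made-up features and labels
X = np.arange(100).reshape(-1, 1)
y = (np.arange(100) > 50).astype(int)

# First carve off 40% for validation + test, then split that in half
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 60, 20, 20
```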

Unsupervised learning also needs training data, but the data points are unlabelled. The machine begins by looking at unstructured or unlabelled data and becomes familiar with what it is looking for (for example, cat faces). This then starts to inform the algorithm, and in turn helps sort through new data as it comes in. Once the machine begins this feedback loop to refine information, it can more accurately identify images (computer vision) and even carry out natural language processing. It’s this kind of deep learning that also gives us features like speech recognition. 

Currently, machines can tell whether what they’re listening to or reading was spoken or written by humans. The question is, could machines then write and speak in a way that is human? There have already been experiments to explore this, including a computer writing music as though it were Bach.

Semi-supervised learning is another learning technique that combines a small amount of labelled data within a large group of unlabelled data. This technique helps the machine to improve its learning accuracy.

As well as supervised and unsupervised learning (or a combination of the two), reinforcement learning is used to train a machine to make a sequence of decisions with many factors and variables involved, but no labelling. The machine learns by following a gaming model in which there are penalties for wrong decisions and rewards for correct decisions. This is the kind of learning carried out to provide the technology for self-driving cars.

Is clustering machine learning?

Clustering, also known as cluster analysis, is a form of unsupervised machine learning. This is when the machine is left to its own devices to discover what it perceives as natural grouping or clusters. Clustering is helpful in data analysis to learn more about the problem domain or understand arising patterns, for example, customer segmentation. In the past, segmentation was done manually and helped construct classification structures such as the phylogenetic tree, a tree diagram that shows how all species on earth are interconnected. From this example alone, we can see how what we now call big data could take years for humans to sort and compile. AI can manage this kind of data mining in a much quicker time frame and spot things that we may not, thereby helping us to understand the world around us. Real-world use cases include clustering DNA patterns in genetics studies, and finding anomalies in fraud detection.

Clusters can overlap, with data points belonging to multiple clusters; this is called soft or fuzzy clustering. In other cases, the data points in clusters are exclusive – they can exist in only one cluster (also known as hard clustering). K-means clustering is an exclusive clustering method in which data points are placed into K groups. K is the number of centroids (cluster centres) defined for the algorithm, and each data point is allocated to its nearest centroid. The “means” in K-means refers to the average of the data points in a cluster, which is used to find its centroid. A larger K value produces many smaller groups, whereas a smaller K value produces larger, broader groups of data.
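A hedged sketch of K-means with scikit-learn is shown below: three clusters of made-up 2D points, K = 3 centroids, and each point assigned to its nearest centroid.

```python
import numpy as np
from sklearn.cluster import KMeans

# Three made-up clusters of 2D points
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # the three centroids (cluster means)
print(kmeans.labels_[:10])       # the cluster each of the first ten points was assigned to
```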

Other unsupervised machine learning methods include hierarchical clustering, probabilistic clustering (including the Gaussian Mixture Model), association rules, and dimensionality reduction.

Principal component analysis (PCA) is an example of dimensionality reduction – reducing a larger set of variables in the input data to a smaller set while retaining as much of the variance as possible. It is also a useful method for visualising high-dimensional data because it ranks principal components according to how much they contribute to patterns in the data. Although more data is generally helpful for more accurate results, a large number of input variables can lead to overfitting, which is when the machine starts picking up on noise or granular detail in its training dataset.
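The sketch below applies scikit-learn’s PCA to some made-up five-dimensional data, keeping two principal components and reporting how much of the variance each one retains. The data and the component count are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up 5-dimensional data with two deliberately correlated features
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
X[:, 1] = X[:, 0] * 2 + rng.normal(scale=0.1, size=200)

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                 # (200, 2) - fewer dimensions to work with
print(pca.explained_variance_ratio_)   # how much variance each component retains
```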

The most common use of association rules is for recommendation engines on sites like Amazon, Netflix, LinkedIn, and Spotify to offer you products, films, jobs, or music similar to those that you have already browsed. The Apriori algorithm is the most commonly used for this function.

How does machine learning work?

Machine learning starts with an algorithm for predictive modelling – either learnt by the machine itself or programmed by hand – which leads to automation. Data science is the means through which we discover the problems that need solving and express them as algorithms the machine can work with. Supervised machine learning problems are framed as either classification or regression problems.

On a basic level, classification predicts a discrete class label and regression predicts a continuous quantity. There can be an overlap in the two in that a classification algorithm can also predict a continuous value. However, the continuous value will be in the form of a probability for a class label. We often see algorithms that can be utilised for both classification and regression with minor modification in deep neural networks.

Linear regression is when the output is predicted to be continuous with a constant slope. This can help predict values within a continuous range such as sales and price rather than trying to classify them into categories. Logistic regression can be confusing because it is actually used for classification problems. The algorithm is based on the concept of probability and helps with predictive analysis.
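The contrast is easy to see with scikit-learn, as in the hedged sketch below: linear regression returns a continuous value, while logistic regression returns a class label and the probability behind it. The numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous price from a single made-up feature
X = np.array([[1], [2], [3], [4], [5]])
prices = np.array([10.0, 20.5, 29.0, 41.0, 50.5])
linear = LinearRegression().fit(X, prices)
print(linear.predict([[6]]))            # a continuous prediction

# Classification: predict a 0/1 label, with an associated probability
labels = np.array([0, 0, 0, 1, 1])
logistic = LogisticRegression().fit(X, labels)
print(logistic.predict([[2.5]]))        # the class label
print(logistic.predict_proba([[2.5]]))  # the probability behind it
```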

Support Vector Machines (SVMs) are fast, much-used algorithms that can be applied to both classification and regression problems, but are most commonly used in classification. The algorithm is favoured because it can analyse and classify data even when only a limited amount is available. It can separate data into classes even when the boundaries are not immediately clear, because it can map the data into a higher-dimensional space and use a hyperplane, rather than a simple line, to separate it. SVMs can be used for tasks like helping your mailbox detect spam.
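A toy sketch of an SVM-based spam filter with scikit-learn might look like the following. The handful of example messages is invented, and a real filter would need far more data and proper evaluation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# A tiny, made-up training set of spam and legitimate ("ham") messages
messages = [
    "win a free prize now", "cheap loans click here", "limited offer act fast",
    "meeting moved to 3pm", "see you at lunch", "minutes from yesterday's call",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Turn each message into word counts, then separate the classes with a linear SVM
model = make_pipeline(CountVectorizer(), SVC(kernel="linear"))
model.fit(messages, labels)

print(model.predict(["free prize if you click now"]))   # likely 'spam'
print(model.predict(["can we move the meeting"]))        # likely 'ham'
```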

How to learn machine learning

With an online MSc Computer Science with Data Analytics or an online MSc Computer Science with Artificial Intelligence from the University of York, you’ll get an introduction to machine learning systems and how they are transforming the data science landscape.

From big data to how artificial neurons work, you’ll understand the fundamentals of this exciting area of technological advances. Find out more and secure your place on one of our cutting-edge master’s courses.