What is neuromorphic computing?

Compared with first-generation artificial intelligence (AI), neuromorphic computing allows AI learning and decision-making to become more autonomous. Today's systems rely on deep learning for sensing and perception tasks such as speech recognition and playing complex strategy games like chess and Go. Next-generation AI will mimic the human brain in its ability to interpret and adapt to situations rather than simply working from formulaic algorithms.

Rather than simply looking for patterns, neuromorphic computing systems will be able to apply common sense and context to what they process. Google famously demonstrated the limitations of purely algorithmic pattern-matching when its DeepDream AI, trained to look for dog faces, ended up turning any imagery that vaguely resembled a dog face into one.

How does neuromorphic computing work?

This third generation of AI computation aims to imitate the complex network of neurons in the human brain. It requires AI to compute and analyse unstructured data with an efficiency that rivals the biological brain, which runs on less than 20 watts of power yet still outperforms supercomputers at many tasks. The AI equivalent of our biological network of neurons and synapses is the spiking neural network (SNN). Artificial neurons are arranged in layers, and each spiking neuron can fire independently and communicate with the others, setting in motion a cascade of change in response to stimuli.
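
To make that firing behaviour concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The threshold, leak factor and input values are illustrative assumptions, not the parameters of any particular neuromorphic chip.

```python
# A minimal leaky integrate-and-fire (LIF) neuron in plain Python.
# Threshold, leak and inputs are invented, illustrative values.

def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Accumulate input over time and emit a spike (1) whenever the membrane
    potential crosses the threshold, then reset - so the neuron only 'fires'
    in response to accumulated stimuli, as a spiking neuron does."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # fire...
            potential = reset                    # ...and reset
        else:
            spikes.append(0)
    return spikes

# A burst of stronger input produces event-driven spikes; weak input does not.
print(simulate_lif([0.1, 0.1, 0.1, 0.6, 0.6, 0.6, 0.1, 0.1, 0.6, 0.6]))
```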

Most AI systems today run on hardware based on what is known as the von Neumann architecture, in which memory and processing sit in separate units. Such computers work by retrieving data from memory, moving it to the processing unit, processing it, and then moving the results back to memory. This back and forth costs both time and energy, creating a bottleneck that becomes more pronounced as datasets grow larger.

In 2017, IBM demonstrated in-memory computing using one million phase change memory (PCM) devices, which both stored and processed information. This was a natural progression from IBM’s TrueNorth neuromorphic chip, unveiled in 2014. A major step in reducing neuromorphic computers’ power consumption, the massively parallel SNN chip uses one million programmable neurons and 256 million programmable synapses. Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing, described it as “literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid.”

An analogue revolution was triggered by the successful building of nanoscale memristive devices, also known as memristors. They offer the possibility of building neuromorphic hardware that performs computational tasks in place and at scale. Unlike silicon complementary metal-oxide-semiconductor (CMOS) circuitry, memristors are switches that store information in their resistance/conductance states. They can also modulate their conductivity based on their programming history, which they retain even when they lose power. In this, their function is similar to that of human synapses.
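
The analogy can be sketched in code. Below is a toy Python model of a memristive synapse whose conductance is nudged by programming pulses and retained between reads; the update rule and constants are illustrative assumptions rather than a physical device model.

```python
# A toy model of a memristive synapse: its conductance (its "weight") is
# nudged by programming pulses and retained between calls, loosely
# analogous to synaptic plasticity. All constants are illustrative.
class MemristiveSynapse:
    def __init__(self, conductance=0.5, g_min=0.0, g_max=1.0, step=0.05):
        self.g = conductance          # stored state persists between reads
        self.g_min, self.g_max, self.step = g_min, g_max, step

    def program(self, pulse):
        """Positive pulses potentiate, negative pulses depress."""
        self.g = min(self.g_max, max(self.g_min, self.g + self.step * pulse))

    def read(self, voltage):
        """Ohm's-law style read-out: current = conductance * voltage."""
        return self.g * voltage

syn = MemristiveSynapse()
for _ in range(5):
    syn.program(+1)                   # repeated potentiation
print(round(syn.read(0.2), 3))        # read-out reflects programming history
```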

Memristive devices need to demonstrate synaptic efficacy and plasticity. Synaptic efficacy here refers to carrying out the task with very low power consumption, as biological synapses do. Synaptic plasticity is similar to brain plasticity as we understand it through neuroscience: the brain’s ability to forge new pathways based on new learning or, in the case of memristors, new information.

These devices contribute to the realisation of massively parallel, many-core supercomputer architectures such as SpiNNaker (spiking neural network architecture). SpiNNaker is the largest artificial neural network platform of its kind, built from a million general-purpose processors. Despite the high number of processors, it is a low-power, low-latency architecture and, more importantly, highly scalable: to save energy, individual chips and whole boards can be switched off. The project is supported by the European Human Brain Project (HBP), and its creators hope to model up to a billion biological neurons in real time. To understand the scale, one billion neurons is still only around 1% of the human brain. The HBP’s neuromorphic work grew out of BrainScaleS, an EU-funded research project that began in 2011 and brought together 19 research groups from 10 European countries. With neuromorphic technology evolving fast, the race is now on: in 2020, Intel Corp announced a three-year project with Sandia National Laboratories to build a brain-based computer of one billion or more artificial neurons.

We will see neuromorphic devices increasingly used to complement and enhance CPUs (central processing units), GPUs (graphics processing units) and FPGAs (field programmable gate arrays). Neuromorphic devices can carry out complex, high-performance tasks – for example, learning, searching and sensing – at extremely low power. A real-world example would be instant voice recognition on a mobile phone without the processor having to communicate with the cloud.

Why do we need neuromorphic computing?

Neuromorphic architectures, although informed by the workings of the brain, may also help uncover the many things we don’t know about the brain by letting us watch the behaviour of synapses in action. This could lead to huge strides in neuroscience and medicine. Although advances in the neuromorphic processors that power supercomputers continue apace, there is still some way to go in achieving the full potential of neuromorphic technology.

A project like SpiNNaker, although large-scale, can only simulate relatively small regions of the brain. However, even with its current capabilities, it has been able to simulate a part of the brain known as the basal ganglia, a region that we know is affected in Parkinson’s disease. Further study of the simulated activity, with the assistance of machine learning, could provide scientific breakthroughs in understanding why and how Parkinson’s happens.

Intel Labs is a key player in neuromorphic computer science. Researchers from Intel Labs and Cornell University used Intel’s neuromorphic chip, known as Loihi, to teach AI to recognise the odours of hazardous chemicals. Loihi chips use an asynchronous spiking neural network to implement adaptive, fine-grained, event-driven computations in parallel that modify themselves as they run. By imitating the architecture of the human olfactory bulb, this kind of computation allows odours to be recognised even when surrounded by ‘noise’. The neuroscience of smell is notoriously complex, so this is a huge first for AI and would not be possible with conventional processors. Discoveries like this could lead to further understanding of memory and of illnesses like Alzheimer’s, which has been linked to loss of smell.

Learn more about neuromorphic computing and its applications

Artificial intelligence is already helping us to make strides in everyday life from e-commerce to medicine, finance to security. There is so much more that supercomputers could potentially unlock to help us with society’s biggest challenges. 

Interested to know more? Find out about the University of York’s MSc Computer Science with Artificial Intelligence.

 

Why big data is so important to data science

What is big data?

Big data is the term for the ever-increasing volumes of data collected for analysis. Every day, vast amounts of unsorted data are drawn from apps and social media, all of which require processing before they can be used.

Creating data sets for such a volume of data is more complex than creating those used in traditional data sorting. This is because the value of the data needs defining; without a definition it is just a mass of detail with no real meaning. Although the term has only relatively recently come into everyday usage, big data has its roots in the 1960s and the development of the first databases. It was the exponential rise in the amount and speed of data being gathered through sites like Facebook and YouTube that created the drive for big data analytics among tech companies. The ‘Three Vs’ model characterises big data by volume, variety and velocity (with veracity and variability sometimes added as fourth and fifth Vs). Hadoop appeared in 2005, offering an open-source framework for storing and analysing big data, and NoSQL databases, designed for data without a defined structure, rose in prominence around the same time. Since then, big data has been a major focus of data science.

What is big data analytics?

Big data analytics is the sorting of data to uncover valuable insights. Before we had the technology to sift through huge volumes of data using artificial intelligence, this was a far slower and more laborious task. The deep learning now used in data mining is made possible by advances in machine learning. Data management is much more streamlined as a result, but it still needs data analysts to define inputs and make sense of outputs. Advances like natural language processing (NLP) may offer the next leap for data analytics: NLP allows machines to simulate the human ability to understand language, which means they can read content and understand sentences rather than simply scanning for keywords and phrases.

In 2016, Cisco estimated that annual internet traffic had, for the first time, surpassed one zettabyte (1000⁷, or 1,000,000,000,000,000,000,000 bytes) of data. Big data analysis can involve data sets reaching into the terabytes (1000⁴ bytes) and petabytes (1000⁵ bytes). Organisations store these huge amounts of data in what are known as data lakes and data warehouses. Data warehouses hold structured data – data points that relate to one another and have been filtered for a specific purpose – and can answer fast SQL (Structured Query Language) queries, which stakeholders use for things like operational reporting. Data lakes contain raw, as-yet-undefined data drawn from apps, social media and Internet of Things devices, which awaits cataloguing before it can be analysed.

The flow of usable data usually involves capture, pre-processing, storage, retrieval, post-processing, analysis and visualisation. Data visualisation matters because people tend to grasp concepts more quickly through representations like graphs, diagrams and tables.
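
As a rough illustration of that flow, the following Python sketch captures a few records, cleans them, stores them in an in-memory SQLite table, retrieves an aggregate with SQL and prints the result. The field names and figures are hypothetical.

```python
# A minimal sketch of the usable-data flow: capture, pre-processing,
# storage, retrieval and analysis, using pandas and in-memory SQLite.
import sqlite3
import pandas as pd

# Capture / pre-process: load raw (invented) records and clean them.
raw = pd.DataFrame({
    "user": ["a", "b", "a", None],
    "steps": [4200, 8100, 3900, 5000],
})
clean = raw.dropna(subset=["user"])           # drop records with no user

# Store and retrieve: persist to a warehouse-like SQL table, then query it.
conn = sqlite3.connect(":memory:")
clean.to_sql("activity", conn, index=False)
daily = pd.read_sql("SELECT user, AVG(steps) AS avg_steps "
                    "FROM activity GROUP BY user", conn)

# Analyse / visualise: summarise the result (printed as a table for brevity).
print(daily)
```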

What is Spark in big data?

Apache Spark is a leading big data platform for large-scale data processing, with built-in modules for SQL, streaming and machine learning. Like Hadoop before it, Spark is a data processing framework, but it works faster and supports stream processing (real-time processing) as well as batch processing. Because Spark processes data in memory, it can run some workloads up to 100 times faster than Hadoop’s disk-based MapReduce. Whereas Hadoop is written in Java, Spark is written largely in Scala (with APIs in Scala, Java, Python and R), and Scala’s concision means jobs can often be expressed in far fewer lines of code.
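
As a flavour of the Python API, here is a minimal PySpark sketch that reads a hypothetical CSV of sales records into a distributed DataFrame and aggregates it; the file name and column names are assumptions made for the example.

```python
# A minimal PySpark sketch, assuming a local Spark installation and a
# hypothetical sales.csv with 'region' and 'revenue' columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-summary").getOrCreate()

# Read the (hypothetical) structured file into a distributed DataFrame.
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Aggregations are planned lazily and executed in memory across the cluster.
summary = (sales.groupBy("region")
                .agg(F.sum("revenue").alias("total_revenue")))

summary.show()
spark.stop()
```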

Both Hadoop and Spark are Apache Software Foundation projects; Spark was originally developed at the University of California, Berkeley’s AMPLab before being donated to Apache. Using the two in tandem often gives the best results – Spark for speed, and Hadoop for its distributed storage and security capabilities, among others.

How is big data used?

Big data is important because it provides business value that can help companies lead in their sector – it gives a competitive advantage when used correctly.

Increasingly, big data is being used across a wide range of sectors including e-commerce, healthcare, and media and entertainment. An everyday example is eBay using a customer’s purchase history to target them with relevant discounts and offers. As an online retailer, eBay’s use of big data is not new. Yet, within the retail sphere, McKinsey & Company estimates that up to 30% of retailers’ pricing decisions fail to deliver the best price. On average, what feels like a small price increase of just 1% translates to an impressive 8.7% increase in operating profit (assuming no loss in sales volume). By not using big data technologies for price analysis and optimisation, retailers are missing out on the profits that a relatively small adjustment could deliver.
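
To see where a figure like that comes from, here is the back-of-the-envelope arithmetic in Python. The roughly 11.5% operating margin is an assumed value chosen so the numbers work out; the McKinsey estimate is an average across many retailers.

```python
# Illustrative arithmetic behind the "1% price rise -> ~8.7% operating
# profit rise" claim, assuming no loss in sales volume and an assumed
# operating margin of about 11.5%.
revenue = 100.0
operating_margin = 0.115
profit = revenue * operating_margin            # 11.5

new_revenue = revenue * 1.01                   # 1% price increase, same volume
new_profit = profit + (new_revenue - revenue)  # extra revenue is pure margin
uplift = (new_profit - profit) / profit

print(f"Operating profit uplift: {uplift:.1%}")  # ~8.7%
```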

In healthcare, apps on mobile devices and fitness trackers can track movement, sleep, diet and hormones, creating new data sources. All of this personal data can be fed into big data analysis for further insights into behaviours and habits related to health. Big data can also drive huge strides in some of healthcare’s biggest challenges, such as treating cancer. During his time as President of the United States, Barack Obama set up the Cancer Moonshot program, which pools data from genetically sequenced cancer tissue samples in its aim of investigating, learning about, and perhaps one day curing cancer. One of the unexpected results of working with these data has been the discovery that the antidepressant desipramine may help treat certain types of lung cancer.

Within the home, energy consumption can be managed more efficiently with the predictive analytics that a smart meter provides. Smart meters are part of the larger Internet of Things (IoT) – an interconnected system of objects embedded with sensors and software that feed data back and forth; this data is referred to as sensor data. As more ‘things’ become connected to one another, the IoT can, in theory, optimise everything from shopping to travel. Some buildings are designed as smart ecosystems, where devices throughout are connected and feed back data to create a more efficient environment. This is already seen in offices, where data collection helps manage lighting, heating, storage, meeting room scheduling and parking.

Which companies use big data?

Jeff Bezos, the founder of Amazon, has become the richest man in the world by making sure big data was core to the Amazon business model from the start. Through this initial investment in machine learning, Amazon has come to dominate the market by getting its prices right for the company and the customer, and managing its supply chains in the leanest way possible.

Netflix, the popular streaming service, takes a successful big data approach to content curation. It uses algorithms to suggest films and shows you might like to watch based on your viewing history, and to decide which productions the company should fund. Once a humble DVD-rental service, Netflix enjoyed 35 leading nominations at the 2021 Academy Awards, and in 2020 it overtook Disney as the world’s most valuable media company.

These are just some of the many examples of harnessing the value of big data across entertainment, energy, insurance, finance, and telecommunications.

How to become a big data engineer

With so much potential for big data in business, there is great interest in professionals like big data engineers and data scientists who can guide an organisation with its data strategy. 

Gaining a master’s that focuses on data science is the perfect first step to a career in data science. Find out more about getting started in this field with University of York’s MSc Computer Science with Data Analytics. You don’t need a background in computer science and the course is 100% online so you can fit it around your current commitments. 

The future of artificial intelligence

Artificial intelligence (AI) is the use of machines to carry out tasks that we associate with the human brain – things like problem-solving, perceiving, learning, reasoning, and even creativity. AI has grown exponentially in recent years. The Covid-19 pandemic, in particular, highlighted the need for AI systems and automation that could respond swiftly to reduced numbers of workers.

For organisations that had gone through a digital transformation, AI and associated emerging technologies were already being integrated into business processes. However, for many, Covid was the turning point that highlighted the need for AI solutions to be included in their business models. The AI cloud is a cutting-edge concept that will help make AI software more accessible to businesses by bringing together cloud computing and a shared infrastructure for AI use cases.

Healthcare offers many successful AI case studies, most recently for diagnosing and tracking Covid-19 using rapidly gathered big data, but also increasingly in areas like cancer diagnostics and detecting the development of psychotic disorders. Other sectors that use real-world AI applications include the military, agriculture, manufacturing, telecommunications, IT and cybersecurity, and finance. AI art, or neural network art, has become a genre in its own right: the musician Holly Herndon, who holds a PhD from Stanford’s Center for Computer Research in Music and Acoustics, uses AI technology in her work.

What are the risks of AI?

Science fiction writers have long been fascinated by the idea of AI taking over. From Blade Runner to The Terminator, the fear is that the machines will start to think for themselves and rise up against humans. This moment is known as the ‘singularity’, defined as the point in time when technological growth overtakes human intelligence, creating a superintelligence developed by self-directed computers. Some people believe that this moment is nearer than we think.

In reality, AI offers many benefits, but the most obvious risks it currently poses relate to personal data privacy. For deep learning to take place, AI needs to draw on large amounts of data generated by tracking people’s behaviour – their personal data. The Data Protection Act 2018, which implements the EU’s General Data Protection Regulation (GDPR) in UK law, was brought in to ensure that people have to opt in to having their data gathered and stored, rather than having to request to opt out. Previously, businesses and organisations had far more latitude to use their customers’ data without explicit consent.

Some of us may feel suspicious about our data being collected, and yet many of the applications we use constantly gather information about us, from the music we like and the books we read to the number of hours we sleep at night and the number of steps we walk in the day. When Amazon suggests what you might like to read next, the suggestion is based on your purchasing and browsing history. A McKinsey & Company report from 2013 stated that 35% of Amazon’s revenue comes from AI-generated recommendations. AI is also instrumental in the way LinkedIn helps people find jobs and companies find candidates with the right skill set.

The more we allow our actions to be tracked, in theory, the more accurately our behaviours can be predicted and catered to, leading to easier decision making. New technologies like the Internet of Things (IoT) could help make this data even more interconnected and useful – a fridge that has already made a shopping order based on what you have run out of, for example.

Can AI be ethical?

There are certainly big questions around ethics and AI. For example, artificial neural networks (ANNs) are a type of AI built from interconnected nodes that mimic the human brain’s neurons. The decision rules an ANN uses are not written by a human programmer: the machine learns and develops its own rules, which are usually not easily traceable by humans. This is known as black box AI because of its lack of transparency, and it can have legal as well as ethical implications. In healthcare, for instance, who would be liable for a missed or incorrect diagnosis? In a self-driving car, who would be liable for a wrong turn of the wheel in a crash?

When it comes to data analytics, there is also the issue of bias: because human programmers define datasets and write algorithms, both can be prone to bias. Historically, the field of data science has not been very diverse, which can lead to some demographics being underrepresented or even inadvertently discriminated against. The more diverse the programming community, the less biased the algorithms – and the more accurate and useful AI applications can become.

A popular example of problematic use of AI is deepfakes, imagery that has been manipulated or animated so that it appears that someone (usually a politician) has said or done something they haven’t. Deepfakes are linked to fake news and hoaxes which spread via social media. Ironically, just as AI software can clone a human voice or recreate the characteristic facial expressions of an individual, it is also key in combating fake news because it can detect footage that is a deepfake.

What are the challenges in using artificial intelligence?

Machine learning relies on data input from humans. A machine cannot simply start to think for itself. A human – or a team of humans – has to pinpoint and define the problem first, then present it in a computable way.

A common example of something an AI robot cannot do – but most humans can – is to walk into an unfamiliar kitchen, work out where the items needed for a cup of tea or coffee are kept, and make a hot drink. This kind of task requires the brain to improvise, adapting its decision-making to a new setting based on previous experience of other kitchens. AI currently lacks the data processing systems to do this spontaneously, but it is a situation that the neural networks of a human brain respond to naturally.

What problems can AI solve?

Artificial intelligence is particularly suited to deep learning, which involves scanning and sifting through vast amounts of data in search of patterns. The algorithms developed through deep learning can, in turn, help with predictions: for instance, AI can learn a city’s traffic flow throughout the day and synchronise traffic lights in real time. AI can also strategise. One of the milestones in machine learning was Google DeepMind’s AlphaGo beating the world’s number one Go player, Ke Jie, in 2017. Go is considered particularly complex and much harder for machines to learn than chess.
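
To make the prediction idea concrete, here is a toy Python sketch that ‘learns’ a daily traffic pattern from hourly counts and scales a green-light duration with the predicted flow. The counts, smoothing and timing rule are all invented for illustration; real systems use far richer data and models.

```python
# A toy illustration of pattern-based prediction for traffic management.
import numpy as np

# Hypothetical vehicles-per-hour counts for one junction over a day.
counts = np.array([12, 8, 6, 5, 7, 20, 55, 90, 85, 60, 50, 52,
                   58, 55, 50, 55, 70, 95, 88, 60, 40, 30, 22, 15])

# "Learn" the daily pattern as a smoothed average of the observations.
pattern = np.convolve(counts, np.ones(3) / 3, mode="same")

def green_seconds(hour, base=20, per_vehicle=0.3):
    """Scale green-light time with the predicted flow for that hour."""
    return base + per_vehicle * pattern[hour]

print(f"Predicted 08:00 flow: {pattern[8]:.0f} vehicles/hour")
print(f"Suggested green time: {green_seconds(8):.0f} seconds")
```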

On the practical side, AI can help reduce errors and carry out repetitive or laborious tasks far more quickly than humans could. To encourage responsible adoption, the UK government announced its National AI Strategy in 2021 to help the economy grow through AI technologies. Among the challenges it hopes to address are tackling climate change and improving public services.

In conclusion, AI has huge potential, but ethical, safe and trustworthy AI development is reliant on direction from humans. 

If you’re interested in understanding more about artificial intelligence, our MSc Computer Science with Artificial Intelligence at the University of York is for you. Find out how to apply for the 100% online course.