A guide to corporate social responsibility

Responsible business practices and a commitment to global social citizenship are needed to safeguard our shared future – and pave the way for a better world.

Some of the biggest issues facing our planet – including climate change, poverty, social inequality, food insecurity and human rights abuses – are ones that cannot be tackled without critical change within the world of business.

Businesses must play an integral part in shaping what happens next. From their environmental impact, to their work within local communities, to who’s involved in their decision making, any business model should be examined to identify where and how sustainability efforts could be supported.

What is corporate social responsibility?

Corporate social responsibility (CSR) is the idea that a business has a responsibility to the wider world. It’s a management concept whereby companies integrate social and environmental concerns in both their business operations and their interactions with stakeholders, offering a way for companies to achieve a balance of environmental, philanthropic, ethical and economic practices.

The triple bottom line (TBL) is the idea that businesses should prepare three distinct bottom-line measurements, also known as the three Ps: people, planet and profit. The TBL highlights the relationship between business and a ‘green mindset’; it attempts to align organisations with the goal of sustainable development, and positive impact, on a global scale. Ultimately, it offers a more rounded, comprehensive set of working objectives than simply profit-above-all.

CSR issues are wide ranging. They include environmental management, human rights, eco-efficiency, responsible sourcing and production, diversity and inclusion, labour standards and working conditions, social equity, stakeholder engagement, employee and community relations, governance, and anti-corruption policies.

Why is CSR important to businesses?

According to Impact, a leading social value measurement platform, CSR is good for business. They note that:

  • 77% of consumers are more likely to use companies that are committed to making the world a better place
  • 49% of consumers assume that companies who don’t speak on social issues don’t care
  • 25% of consumers and 22% of investors cite a “zero tolerance” policy toward companies that embrace questionable ethical practices
  • Consumers are four times as likely to purchase from a brand with a strong purpose
  • 66% of global consumers are willing to pay more for sustainable goods

On top of this, it’s estimated that CSR initiatives can help companies to avoid losses of roughly 7%. More and more businesses are publishing annual sustainability reports, both to be transparent about their efforts and operations and to benefit from these wider advantages.

CSR is integral to the development of a more sustainable future. For stakeholders wondering whether they can afford to spend time and energy implementing CSR strategies, the better question is whether they can afford not to.

How can a business demonstrate CSR?

The United Nations Global Compact calls upon organisations to “align their strategies and operations with universal principles on human rights, labour, environment and anti-corruption, and take actions that advance societal goals”.

In addition to supporting businesses in working towards the United Nations Sustainable Development Goals, it asks them to adhere to ten Principles. The Principles outline measures across each of the key areas listed above. Examples of the measures include: the effective abolition of forced, compulsory and child labour; initiatives to promote greater environmental responsibility; the elimination of discrimination in respect of employment and occupation; and working against corruption in all its forms, including extortion and bribery. Together they offer a framework and starting point for the minimum businesses must do in order to operate responsibly.

Similarly, in 2010, the International Organization for Standardization (ISO) launched new guidance: ISO 26000. Designed for businesses that are committed to operating in a socially responsible way, it helps organisations translate social responsibility principles into effective action and shares relevant best practice. Increasingly, a company’s adherence to ISO 26000 is regarded as a marker of both its commitment to sustainability and its overall performance.

Where CSR should be implemented in a business strategy depends on where improvement is required. If a business is energy-intensive, could that energy come from renewable sources? Where there’s a lack of diversity and inclusivity among employees, could human resource policies be revised? Could a multinational team of frequent flyers reduce their travel or offset their emissions?

The need for authenticity

Underscoring any CSR efforts is the need for authenticity.

In today’s world, the most respected brands don’t rely on virtue signalling – they live and breathe their values. A brand that is consistent in its actions is more likely to gain loyal followers and cultivate long-term corporate sustainability.

Modern consumers, particularly Millennials and Gen Z, are advocates for positive change. They demand more from brands and companies, and are increasingly wise to those whose claims ring false. One example is prominent fast fashion brands that launch ‘sustainable’ or ‘recycled’ clothing lines while, behind the scenes, their predominantly female garment workers are paid less than a living wage and endure deplorable working conditions. Similarly, many businesses incorporate the rainbow flag in marketing efforts during Pride Month while failing to support the LGBT+ community in any meaningful way.

Public relations activities fare better when brands are founded on an authentic, purposeful sustainability strategy.

The benefits of CSR

CSR programmes can be a powerful marketing tool. They can help a business to position itself favourably in the eyes of consumers, regulators and investors, boosting brand reputation. By commanding respect in the marketplace and gaining competitive advantage, CSR can result in better financial performance.

Business leaders who focus on improving their social impact will, as a matter of course, scrutinise practices across their value chain, consumer offerings, employee relations and other operational areas. This can result in new, innovative solutions which may also have cost-saving benefits. A business may reconfigure its manufacturing process to consume less energy and produce less waste; as well as being more environmentally friendly, it may also reduce its overheads.

CSR practices can boost employee engagement and satisfaction. Increasingly, people view their work as an extension of their own identities and convictions. When a brand invites them to share in its objectives, it can drive employee retention and attract quality candidates to roles.

Companies are embracing social responsibility due to moral convictions as well as profit – and reaping the benefits. All these effects of CSR can help to ensure that a company remains profitable and sustainable in the long term.

Champion CSR in your business sector

Are you passionate about environmental sustainability? Want to develop the skills and knowledge to pioneer global corporate citizenship? Interested in learning more about CSR activities?

The University of York’s online MSc International Business, Leadership and Management programme places particular emphasis on the challenges associated with global trade, marketing and sales, together with an overview of relevant management disciplines. You’ll be supported to build your knowledge of practice whilst developing an advanced theoretical understanding of the international business environment.

The next step in machine learning: deep learning

What is deep learning?

Deep learning is a branch of artificial intelligence (AI) concerned with creating computer structures that mimic the highly complex neural networks of the human brain. Because of this, it is also sometimes referred to as deep neural learning, and its models as deep neural networks (DNNs).

A subset of machine learning, deep learning uses artificial neural networks that can sift far more information from large data sets, learn from it, and apply what they learn when making decisions. The vast amounts of information that DNNs scour for patterns are often referred to as big data.

Is deep learning machine learning?

The technology used in deep learning means that computers are closer to thinking for themselves without support or input from humans (and all the associated benefits and potential dangers of this). 

Traditional machine learning requires rules-based programming and a lot of raw data preprocessing by data scientists and analysts. This is prone to human bias and is limited by what we are able to observe and mentally compute ourselves before handing over the data to the machine. Supervised learning, unsupervised learning, and semi-supervised learning are all ways that computers become familiar with data and learn what to do with it. 

Artificial neural networks (sometimes called neural nets for short) use layer upon layer of neurons so that they can process a large amount of data quickly. As a result, they have the “brain power” to start noticing other patterns and create their own algorithms based on what they are “seeing”. This is unsupervised learning and leads to technological advances that would take humans a lot longer to achieve. Generative modelling is an example of unsupervised learning.

Real-world examples of deep learning

Deep learning applications are used (and built upon) every time you do a Google search. They are also used in more complicated scenarios such as self-driving cars and cancer diagnosis. In these scenarios, the machine is almost always looking for irregularities, and the decisions it makes are based on probability in order to predict the most likely outcome. In the case of automated driving or medical testing, accuracy is critical, so the training data and learning techniques behind these systems are rigorously tested before deployment.

Everyday examples of deep learning include computer vision for object recognition and natural language processing for things like voice activation. Speech recognition is a function that we are familiar with through use of voice-activated assistants like Siri or Alexa, but a machine’s ability to recognise natural language can help in surprising ways. Replika, also referred to as “My AI Friend”, is essentially a chatbot that gets to know a user through questioning. It uses a neural network to have an ongoing one-to-one conversation with the user to gather information. Over time, Replika begins to speak like the user, giving the impression of emotion and empathy. In April 2020, at the height of the pandemic, half a million people downloaded Replika, suggesting curiosity about AI but also a need for AI, even if it does simply mirror back human traits. This is not a new idea: in 1966, computer scientist Joseph Weizenbaum created a precursor to the chatbot with ELIZA, his computer-therapist program.

How does deep learning work?

Deep learning algorithms make use of very large datasets of labelled data such as images, text, audio, and video in order to build knowledge. In its computation of the content – scanning through and becoming familiar with it – the machine begins to recognise and know what to look for. As in the human brain, each artificial neuron has a role in processing data: it produces an output by applying a function to the inputs it receives. Groups of these neurons are organised into hidden layers.
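To make the idea of layered neurons concrete, here is a minimal sketch in Python (using NumPy, with randomly chosen illustrative weights rather than a trained model) of how one input passes through a hidden layer of neurons to produce an output:

```python
import numpy as np

# Illustrative forward pass only: each "neuron" computes a weighted sum of its
# inputs, applies a non-linear activation, and passes the result to the next layer.

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

x = rng.normal(size=(4,))        # one input example with 4 features
W1 = rng.normal(size=(4, 8))     # weights for a hidden layer of 8 neurons
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))     # weights for a single output neuron
b2 = np.zeros(1)

hidden = relu(x @ W1 + b1)       # hidden layer: weighted sum + activation
output = hidden @ W2 + b2        # output neuron: weighted sum of hidden values

print(output)
```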

At the heart of machine learning algorithms is automated optimisation. The goal is to achieve the most accurate output possible, so we need the speed of machines to assess all the available information efficiently and begin detecting patterns which we may have missed. This is also core to deep learning and to how artificial neural networks are trained.

TensorFlow is an open source platform created by Google, most commonly used through its Python API. A symbolic maths library, it can be utilised for many tasks, but primarily for training, transfer learning, and developing deep neural networks with many layers, and it can automatically calculate gradients for very large numbers of parameters – something that is also useful in areas such as reinforcement learning. A gradient here is the slope of the error (loss) function with respect to the model’s parameters. The gradient descent algorithm, for example, minimises the error function by repeatedly stepping in the direction of steepest descent, which graphically corresponds to moving towards the lowest point on the error surface. The algorithm used to calculate the gradients of an error function across a network’s layers is “backpropagation”, short for “backward propagation of errors”.
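As a rough illustration of gradient descent and automatic differentiation in practice, the short sketch below uses TensorFlow to fit a single weight; it assumes TensorFlow 2.x is installed and uses made-up data purely for demonstration:

```python
import tensorflow as tf

# Fit a single weight w so that w * x approximates y = 2 * x by repeatedly
# computing the gradient of the error function and stepping "downhill".
x = tf.constant([1.0, 2.0, 3.0, 4.0])
y = tf.constant([2.0, 4.0, 6.0, 8.0])

w = tf.Variable(0.0)                                  # the parameter to learn
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        predictions = w * x
        loss = tf.reduce_mean(tf.square(predictions - y))   # mean squared error
    gradient = tape.gradient(loss, [w])               # gradient of the loss w.r.t. w
    optimizer.apply_gradients(zip(gradient, [w]))     # move w downhill

print(float(w))   # should be close to 2.0
```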

One of the most widely used deep learning models, particularly for image recognition, the convolutional neural network (CNN) learns increasingly abstract features in its deeper layers. CNNs can be accelerated by using graphics processing units (GPUs) because GPUs can process many pieces of data simultaneously. CNNs perform feature extraction by analysing pixel colour and brightness, or a single intensity value per pixel in the case of grayscale images.
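The sketch below shows what a small CNN of this kind might look like in Keras (TensorFlow 2.x assumed); the layer sizes, input shape and ten output classes are illustrative assumptions rather than a recipe for any particular task:

```python
import tensorflow as tf

# A small convolutional network: early convolutional layers pick up simple
# features such as edges, while deeper layers combine them into more abstract ones.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                      # e.g. 28x28 grayscale images
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),  # low-level features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),  # more abstract features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),        # e.g. 10 image classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```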

Recurrent neural networks (RNNs) were a major advance because they maintain an internal state that lets them remember previous inputs. Because of this, RNNs are used in speech recognition and natural language processing, in applications like Google Translate.

Can deep learning be used for regression?

Neural networks can be used for both classification and regression. However, a regression model only works well if it suits the data, and that choice affects the network architecture. Classifiers in something like image recognition have more of a compositional nature compared with the many interacting variables that can make up a regression problem. Regression offers a lot more insight than simply “Can we predict Y given X?”, because it explores the relationship between variables. Most regression models don’t fit the data perfectly, but neural networks are flexible enough to approximate a wide range of regression functions, and hidden layers can be added to improve predictions.
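As a simple illustration of a neural network used for regression, the sketch below fits a small Keras model (TensorFlow 2.x assumed) to synthetic data; the single linear output unit and mean squared error loss are what make it a regression model rather than a classifier:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data: a noisy quadratic relationship, for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = 0.5 * X[:, 0] ** 2 + rng.normal(scale=0.1, size=500)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="relu"),   # hidden layer
    tf.keras.layers.Dense(1),                       # single linear output unit
])
model.compile(optimizer="adam", loss="mse")         # mean squared error for regression
model.fit(X, y, epochs=50, verbose=0)

print(model.predict(np.array([[2.0]])))   # should be roughly 0.5 * 2**2 = 2.0
```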

Knowing when to use regression or not to solve a problem may take some research. Luckily, there are lots of tutorials online to help, such as How to Fit Regression Data with CNN Model in Python.

Ready to discover more about deep learning?

The University of York’s online MSc Computer Science with Artificial Intelligence is the ideal next step if your career ambitions lie in this exciting and fast-paced sector.

Whether you already have knowledge of machine learning algorithms or want to immerse yourself in deep learning methods, this master’s degree will equip you with the knowledge you need to get ahead.

What is machine learning?

Machine learning is considered to be a branch of both artificial intelligence (AI) and computer science. It uses algorithms to replicate the way that humans learn but can also analyse vast amounts of data in a short amount of time. 

Machine learning algorithms are usually written to look for recurring themes (pattern recognition) and spot anomalies, which can help computers make predictions with more accuracy. This kind of predictive modelling can range from something as basic as a chatbot anticipating what your question may be about to something quite complex, like a self-driving car knowing when to make an emergency stop.

It was an IBM employee, Arthur Samuel, who is credited with creating the phrase “machine learning” in his 1959 research paper, “Some studies in machine learning using the game of checkers”. It’s amazing to think that machine learning models were being studied as early as 1959 given that computers now contribute to society in important areas as diverse as healthcare and fraud detection.

Is machine learning AI?

Machine learning represents just one part of AI’s capabilities. There are three major, closely related areas of interest within AI – machine learning, deep learning, and artificial neural networks. Deep learning is a field within machine learning, and it is built on artificial neural networks. Traditionally, machine learning is very structured and requires more human intervention in order for the machine to start learning, via supervised learning algorithms. Training data is chosen by data scientists to help the machine determine the features it needs to look for within labelled datasets. Validation datasets are then used to provide an unbiased evaluation of how well the model fits the training data, and test datasets are used for the final assessment of the model.
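In practice, the training, validation and test datasets described above are often created by splitting one labelled dataset. The sketch below shows one way to do this with scikit-learn (an assumption; the iris dataset simply stands in for any labelled data):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load a small labelled dataset: X holds the features, y the labels.
X, y = load_iris(return_X_y=True)

# First hold out a test set, then carve a validation set out of what remains.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # roughly a 60% / 20% / 20% split
```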

Unsupervised learning also needs training data, but the data points are unlabelled. The machine begins by looking at unstructured or unlabelled data and becomes familiar with what it is looking for (for example, cat faces). This then starts to inform the algorithm, and in turn helps sort through new data as it comes in. Once the machine begins this feedback loop to refine information, it can more accurately identify images (computer vision) and even carry out natural language processing. It’s this kind of deep learning that also gives us features like speech recognition. 

Currently, machines can tell whether what they’re listening to or reading was spoken or written by humans. The question is, could machines then write and speak in a way that is human? There have already been experiments to explore this, including a computer writing music as though it were Bach.

Semi-supervised learning is another learning technique that combines a small amount of labelled data within a large group of unlabelled data. This technique helps the machine to improve its learning accuracy.

As well as supervised and unsupervised learning (or a combination of the two), reinforcement learning is used to train a machine to make a sequence of decisions with many factors and variables involved, but no labelling. The machine learns by following a gaming model in which there are penalties for wrong decisions and rewards for correct decisions. This is the kind of learning carried out to provide the technology for self-driving cars.
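The sketch below illustrates the reward-and-penalty idea with tabular Q-learning on a made-up toy problem; the environment, rewards and parameter values are illustrative assumptions, far simpler than anything used for self-driving cars:

```python
import numpy as np

# Toy problem: an agent on a row of 5 squares earns +1 for reaching the
# right-hand end and a small penalty for every other step it takes.
n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # the agent's learned value estimates
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally, otherwise exploit the best known action.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else -0.01   # reward vs penalty
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])
        state = next_state

print(np.argmax(Q[:-1], axis=1))   # learned policy for states 0-3: 1 means "move right"
```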

Is clustering machine learning?

Clustering, also known as cluster analysis, is a form of unsupervised machine learning. This is when the machine is left to its own devices to discover what it perceives as natural groupings, or clusters. Clustering is helpful in data analysis for learning more about a problem domain and spotting emerging patterns – customer segmentation, for example. In the past, segmentation was done manually and helped construct classification structures such as the phylogenetic tree, a tree diagram that shows how all species on Earth are interconnected. From this example alone, we can see how what we now call big data could take years for humans to sort and compile. AI can manage this kind of data mining in a much quicker time frame and spot things that we may not, thereby helping us to understand the world around us. Real-world use cases include clustering DNA patterns in genetics studies and finding anomalies in fraud detection.

Clusters can overlap, where data points belong to multiple clusters. This is called soft or fuzzy clustering. In other cases, the data points in clusters are exclusive – they can exist only in one cluster (also known as hard clustering). K-means clustering is an exclusive clustering method where data points are placed into various K groups. K is defined in the algorithm by the number of centroids (centre of a cluster) in a set, which it then uses to allocate each data point to the nearest cluster. The “means” in K-means refers to the average, which is worked out from the data in order to find the centroid. A larger K value is an indication of many, smaller groups, whereas a small K value shows larger, broader groups of data.
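The sketch below shows K-means in action using scikit-learn (an assumption) on synthetic data containing three obvious blobs, with K set to 3:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic data: three well-separated blobs of points, for illustration only.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

# K = 3 centroids; each point is assigned to its nearest centroid.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)

print(kmeans.cluster_centers_)   # should sit near (0,0), (5,5) and (0,5)
print(kmeans.labels_[:10])       # the cluster each of the first 10 points belongs to
```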

Other unsupervised machine learning methods include hierarchical clustering, probabilistic clustering (including the Gaussian Mixture Model), association rules, and dimensionality reduction.

Principal component analysis (PCA) is an example of dimensionality reduction – reducing a larger set of variables in the input data while preserving as much of the variance as possible. It is also a useful method for the visualisation of high-dimensional data because it ranks principal components according to how much of the variation in the data each one explains. Although more data is generally helpful for more accurate results, a very large number of input variables can lead to overfitting, which is when the machine starts picking up on noise or granular detail from its training data set.
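A minimal sketch of PCA, again using scikit-learn (an assumption), reduces the four iris measurements to two principal components and reports how much variance each one explains:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Four measurements per flower are projected onto two principal components.
X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)    # (150, 4) -> (150, 2)
print(pca.explained_variance_ratio_)     # share of variance captured by each component
```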

The most common use of association rules is for recommendation engines on sites like Amazon, Netflix, LinkedIn, and Spotify to offer you products, films, jobs, or music similar to those that you have already browsed. The Apriori algorithm is the most commonly used for this function.

How does machine learning work?

Machine learning starts with an algorithm for predictive modelling – either learnt from data or explicitly programmed – that leads to automation. Data science is the means through which we discover which problems need solving and how a problem can be expressed as a computable algorithm. Supervised machine learning problems are framed as either classification or regression.

On a basic level, classification predicts a discrete class label and regression predicts a continuous quantity. There can be an overlap in the two in that a classification algorithm can also predict a continuous value. However, the continuous value will be in the form of a probability for a class label. We often see algorithms that can be utilised for both classification and regression with minor modification in deep neural networks.

Linear regression is when the output is predicted to be continuous with a constant slope. This can help predict values within a continuous range such as sales and price rather than trying to classify them into categories. Logistic regression can be confusing because it is actually used for classification problems. The algorithm is based on the concept of probability and helps with predictive analysis.
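The short sketch below contrasts the two, using scikit-learn and synthetic data (both assumptions): linear regression returns a continuous value, while logistic regression returns the probability of a class label.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Linear regression: predict a price-like continuous quantity from one feature.
X = rng.uniform(0, 10, size=(100, 1))
price = 3.0 * X[:, 0] + 5.0 + rng.normal(scale=0.5, size=100)
linear = LinearRegression().fit(X, price)
print(linear.predict([[4.0]]))            # a continuous prediction, near 17

# Logistic regression: predict a yes/no label (here, whether the feature > 5).
label = (X[:, 0] > 5).astype(int)
logistic = LogisticRegression().fit(X, label)
print(logistic.predict_proba([[6.0]]))    # probabilities for classes 0 and 1
```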

The support vector machine (SVM) is a fast and widely used algorithm that can be applied to both classification and regression problems, but is most commonly used in classification. The algorithm is favoured because it can analyse and classify data even when only a limited amount is available. It groups data into classes even when the classes are not immediately clear, because it can map the data into a higher-dimensional space and separate it with a hyperplane rather than a simple line. SVMs can be used for functions like helping your mailbox to detect spam.
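A minimal sketch of an SVM classifier with scikit-learn (an assumption) is shown below; the two made-up features loosely stand in for properties of an email, such as counts of suspicious words and links:

```python
import numpy as np
from sklearn.svm import SVC

# Tiny, invented dataset: the SVM finds a separating hyperplane even with little data.
X = np.array([[0, 1], [1, 0], [1, 1], [2, 1],      # "not spam" examples
              [7, 5], [8, 6], [9, 8], [6, 7]])     # "spam" examples
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

classifier = SVC(kernel="linear").fit(X, y)
print(classifier.predict([[1, 2], [8, 7]]))        # expected: [0 1]
```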

How to learn machine learning

With an online MSc Computer Science with Data Analytics or an online MSc Computer Science with Artificial Intelligence from the University of York, you’ll get an introduction to machine learning systems and how they are transforming the data science landscape.

From big data to how artificial neurons work, you’ll understand the fundamentals of this exciting area of technological advances. Find out more and secure your place on one of our cutting-edge master’s courses.

 

The importance of branding

The word branding originally came from a time when cattle farmers branded their animals with a hot iron to mark their ownership. Each farm or ranch would have its own brand mark usually made up of initials to identify its animals. Although branding and commerce have both grown significantly since then, the idea of the brand logo has not changed much: a simple, bold image that stays in the mind. But a logo is just part of a wider branding exercise that every company should carefully consider.

What is branding in marketing?

Branding is the way a company communicates itself both visually and verbally so that it becomes instantly recognisable to customers. A brand is an intangible concept and yet it forms a very clear idea of what a company does based on its values and identity. Brand identity comprises all visual communications, from the logo design to the typography and colour palette a brand uses on things like its packaging and website. By creating a cohesive identity, the brand experience becomes seamless, and the customer can identify the brand through cues like colour and style before they have even read the brand name – although the name itself is equally important in creating a memorable brand. For instance, the name Amazon recalls the diversity of the rainforest: Amazon.com wanted to become the number one destination for a wide variety of products – in fact, anything that the customer could possibly want – despite originally selling only books. The ambition to expand was always there, and it was evident in the name.

Once a visual identity has been decided, a brand guidelines document is usually created to communicate it across the business. By following these guidelines when designing marketing materials, employees and third parties (like a branding agency) can keep the brand’s aesthetics intact across all touchpoints. As well as the brand’s visual style, tone of voice is also very important, representing how the brand speaks. Tone of voice can either be part of the brand guidelines or a separate keystone document.

Why is branding important?

Branding can be the difference between success and failure, all depending on how well it is executed. It may be an area that isn’t given much thought, particularly if a company is hastily created or because founding members feel that things like graphic design are an unnecessary expense. And yet, without the consistency that branding provides, new products or services can easily get lost among competitors with a stronger brand. A powerful brand can create loyal customers, so it’s vital for a new company to think about how it wants to be seen before it launches.

Apple is often cited as a strong brand. It broke the mould on many fronts, from its name and logo having nothing to do with computers (although originally called Apple Computer Company) to its founder’s attitude and personality having a strong effect on the brand’s identity. Although there were originally three co-founders, Steve Jobs eventually drove the brand to prominence, and it was his desire for precision and minimalism that became inextricably linked with the brand. When he died, these design principles were maintained by Chief Design Officer, Jony Ive. Apple was also a change-maker in that its slogan, “Think Different” was purely inspirational and again, did not refer to the product. Other brands which have managed this successfully include Nike, with their “Just Do It” strapline. Both are short, snappy, and aspirational, so that the customer then connects the brand’s products with the chance to live and make real that philosophy for themselves. This creates brand equity, meaning that the brand’s value increases as people begin to perceive the products as being better and more desirable than other brands because of how they make them feel.

Part of brand management is assessing when a company may need a rebrand. This is not uncommon – it can be just a new logo, or it can be an entire rebranding exercise, changing the look and feel of a brand completely. This can be because the brand feels dated, or because the brand’s values have changed. If, for example, a brand promise no longer feels relevant or true, updating that one aspect of the branding requires all aspects to be reviewed.

What are branding strategies?

A brand strategy is a document used by all stakeholders in planning the operations of a company. It is a plan that outlines the company’s goals for the brand, one year, three years, five years or further down the line. Activities are planned within the timeline to raise brand awareness among existing and new customers. These tend to be milestone events like new product launches and associated campaigns on social media or through more traditional advertising. Within the brand strategy there will likely be other strategies such as the content strategy, outlining the marketing assets which will be required like blogs, design templates, and copy for social media. The focus of these brand-building activities, especially for marketers, is on creating a brand experience, gaining competitive advantage, and improving financial performance.

As a company expands, it may have multiple products, ranges, and potential sub-brands, all with their own brand strategies. Large, global brands that have expanded over the years sometimes take the decision to unify the brand strategy and simplify communications. This may come after market research shows, for example, that the customer is confused by the differences between the various products, what they offer, and which one is right for them. Coca Cola’s One Brand strategy unites its various products like Diet Coke and Coca Cola Zero Sugar. The products have their own brands and target audiences, but at their core they are all part of the same family.

What is personal branding?

With the rise of social media has emerged the concept of personal branding. Your personal brand is how you present yourself to the world – particularly as a public figure or an expert in your field. 

Increasingly, with social media offering a platform for comment and opinion, many people are public figures whether they intended it or not. Either way, it’s important to think about how you appear to others, what you communicate, and how you communicate it in order to establish a strong personal brand. Someone’s business persona can become a brand in itself or a brand can grow out of a person’s popularity.

What is corporate branding?

Corporate branding is more about pushing the brand as a whole, rather than focusing on products or services. What are a company’s brand values? Does the company have a mission statement that resonates with the times in which we live? These are questions that investors or potential employees may ask. But it also applies to customers who buy into the brand as a whole and are most likely to be early adopters when it comes to any new products the company launches. 

Things that may come under corporate branding that are increasingly important to customers include Corporate Social Responsibility (CSR) and the company’s HR policies. In fact, how a company is perceived by potential employees is down to employer branding, another arm of corporate branding. This includes how the company nurtures its internal culture, how it treats its employees, and how this is communicated.

Learn more about branding in international business

Branding is key to all business and is particularly important to international businesses operating in different territories. Understanding the message that certain logos or words send, as well as the symbolism of particular colours in different cultures, is crucial when operating globally. 

Add to your knowledge and expertise with an MSc International Business, Leadership and Management from University of York.

What is neuromorphic computing?

Compared with first-generation artificial intelligence (AI), neuromorphic computing allows AI learning and decision-making to become more autonomous. Currently, neuromorphic systems build on deep learning for the sense-and-perceive skills used in, for example, speech recognition and complex strategy games like chess and Go. Next-generation AI will mimic the human brain in its ability to interpret and adapt to situations, rather than simply working from formulaic algorithms.

Rather than simply looking for patterns, neuromorphic computing systems will be able to apply common sense and context to what they are reading. Google famously demonstrated the limitations of computer systems that simply use algorithms when its Deep Dream AI was trained to look for dog faces. It ended up converting any imagery that looked like it might contain dog faces into dog faces.

How does neuromorphic computing work?

This third generation of AI computation aims to imitate the complex network of neurons in the human brain. It requires hardware that can compute and analyse unstructured data with an efficiency approaching that of the biological brain, which can consume less than 20 watts of power and still outperform supercomputers at many tasks. The AI equivalent of our neural network of synapses is the spiking neural network (SNN). Artificial neurons are arranged in layers, and each of the spiking neurons can fire independently and communicate with the others, setting in motion a cascade of change in response to stimuli.
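To give a flavour of how a single spiking neuron behaves, the sketch below simulates a leaky integrate-and-fire neuron (a common simplification) in plain Python; all constants are illustrative assumptions and are not drawn from any particular neuromorphic chip:

```python
import numpy as np

# The neuron integrates incoming current, "leaks" charge over time, and emits a
# spike when its membrane potential crosses a threshold, after which it resets.
dt = 1.0                  # time step in ms
tau = 20.0                # leak time constant in ms
threshold, reset = 1.0, 0.0

potential = 0.0
spikes = []
rng = np.random.default_rng(0)

for t in range(100):
    input_current = rng.uniform(0.0, 0.12)                # random incoming stimulus
    potential += dt * (-potential / tau + input_current)  # leak + integrate
    if potential >= threshold:                            # fire a spike...
        spikes.append(t)
        potential = reset                                 # ...and reset

print("spike times (ms):", spikes)
```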

Most AI systems run on what is known as von Neumann architecture – meaning the hardware uses separate memory and processing units. Currently, computers work by retrieving data from memory, moving it to the processing unit, processing it, and then moving the result back to memory. This back and forth is both time-consuming and energy-consuming, and it creates a bottleneck that becomes more pronounced when large datasets need processing.

In 2017, IBM demonstrated in-memory computing using one million phase change memory (PCM) devices, which both stored and processed information. This was a natural progression from IBM’s TrueNorth neuromorphic chip which they unveiled in 2014. A major step in reducing neuromorphic computers’ power consumption, the massively parallel SNN chip uses one million programmable neurons and 256 million programmable synapses. Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing, described it as “literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid.”

An analogue revolution was triggered by the successful building of nanoscale memristive devices also known as memristors. They offer the possibility of building neuromorphic hardware that performs computational tasks in place and at scale. Unlike silicon complementary metal oxide semiconductors (CMOS) circuitry, memristors are switches that store information in their resistance/conductance states. They can also modulate conductivity based on their programming history, which they can recall even if they lose power. Their function is similar to that of human synapses.

Memristive devices need to demonstrate synaptic efficacy and plasticity. Synaptic efficacy refers to the need for low power consumption to carry out the task. Synaptic plasticity is similar to brain plasticity, which we understand through neuroscience. This is the brain’s ability to forge new pathways based on new learnings or, in the case of memristors, new information.

These devices contribute to the realisation of what is known as a massively parallel, manycore supercomputer architecture like SpiNNaker (spiking neural network architecture). SpiNNaker is the largest artificial neural network using a million general purpose processors. Despite the high number of processors, it is a low-power, low-latency architecture and, more importantly, highly scalable. To save energy, chips and whole boards can be switched off. The project is supported by the European Human Brain Project (HBP) and its creators hope to model up to a billion biological neurons in real time. To understand the scale, one billion neurons is just 1% of the scale of the human brain. The HBP grew out of BrainScaleS, an EU-funded research project, which began in 2011. It has benefitted from the collaboration of 19 research groups from 10 European companies. Now with neuromorphic tech evolving fast, it seems the race is on. In 2020, Intel Corp announced that it was working on a three-year project with Sandia National Laboratories to build a brain-based computer of one billion or more artificial neurons.

We will see neuromorphic devices used more and more to complement and enhance the use of CPUs (central processing units), GPUs (graphics processing units) and FPGA (field programmable gate arrays) technologies. Neuromorphic devices can carry out complex and high-performance tasks – for example, learning, searching, sensing – using extremely low power. A real-world example would be instant voice recognition in mobile phones without the processor having to communicate with the cloud.

Why do we need neuromorphic computing?

Neuromorphic architectures, although informed by the workings of the brain, may help uncover the many things we don’t know about the brain by allowing us to see the behaviour of synapses in action. This could lead to huge strides in neuroscience and medicine. Although advances in neuromorphic processors that power supercomputers continue at unprecedented levels, there is still some way to go in achieving the full potential of neuromorphic technology.

A project like SpiNNaker, although large-scale, can only simulate relatively small regions of the brain. However, even with its current capabilities, it has been able to simulate a part of the brain known as the basal ganglia, a region that we know is affected in Parkinson’s disease. Further study of the simulated activity, with the assistance of machine learning, could provide scientific breakthroughs in understanding why and how Parkinson’s happens.

Intel Labs is a key player in neuromorphic computer science. Researchers from Intel Labs and Cornell University were able to use Intel’s neuromorphic chip, known as Loihi, so that AI could recognise the odour of hazardous chemicals. Loihi chips use an asynchronous spiking neural network to implement adaptive fine-grained computations in parallel that are self-modifying, and event driven. This kind of computation allows this level of odour recognition even when surrounded by ‘noise’ by imitating the architecture of the human olfactory bulb. The neuroscience involved in the sense of smell is notoriously complex, so this is a huge first for AI and wouldn’t be possible with the old-style transistors used in processing. This kind of discovery could lead to further understanding around memory and illnesses like Alzheimer’s, which has been linked to loss of smell.

Learn more about neuromorphic computing and its applications

Artificial intelligence is already helping us to make strides in everyday life from e-commerce to medicine, finance to security. There is so much more that supercomputers could potentially unlock to help us with society’s biggest challenges. 

Interested to know more? Find out about the University of York’s MSc Computer Science with Artificial Intelligence.

 

Why big data is so important to data science

What is big data?

Big data is the term for the increasing amount of data collected for analysis. Every day, vast amounts of unsorted data are drawn from various apps and social media, requiring data processing.

Creating data sets for such a volume of data is more complex than creating those used in traditional data sorting. This is because the value of the data needs defining; without a definition it is just a lot of detail with no real meaning. Although the term has only relatively recently come into everyday usage, big data has its roots in the 1960s and 70s, when the first data centres and relational databases were developed. It was the exponential rise in the amount and speed of data being gathered through sites like Facebook and YouTube that created the drive for big data analytics amongst tech companies. The ‘Three Vs’ model characterises big data by volume, variety, and velocity (with veracity and variability sometimes added as fourth and fifth Vs). Hadoop appeared in 2005, offering an open-source framework to store and analyse big data. NoSQL databases, designed for data without a defined structure, also rose to prominence around this time. Since then, big data has been a major focus of data science.

What is big data analytics?

Big data analytics is the sorting of data to uncover valuable insights. Before we had the technology to sort through huge volumes of large data sets using artificial intelligence, this would have been a much more laborious and slower task. The kind of deep learning we can now access through data mining is thanks to machine learning. Data management is much more streamlined now, but it still needs data analysts to define inputs and make sense of outputs. Advances like natural language processing (NLP) may offer the next leap for data analytics: NLP allows machines to simulate the ability to understand language in the way that humans do, which means machines can read content and understand sentences rather than simply scanning for keywords and phrases.

In 2016, Cisco estimated that annual internet traffic had, for the first time, surpassed one zettabyte (1,000⁷, or 1,000,000,000,000,000,000,000 bytes) of data. Big data analysis can involve data sets reaching into terabytes (1,000⁴ bytes) and petabytes (1,000⁵ bytes). Organisations store these huge amounts of data in what are known as data lakes and data warehouses. Data warehouses store structured data – data points relating to one another that have been filtered for a specific purpose – and offer fast answers to SQL (structured query language) queries, which stakeholders can use for things like operational reporting. Data lakes contain raw data drawn from apps, social media, and Internet of Things devices, which awaits definition and cataloguing before it can be analysed.

The data flow of usable data usually involves capture, pre-processing, storage, retrieval, post-processing, analysis, and visualisation. Data visualisation is important because people tend to grasp concepts quicker through representations like graphs, diagrams, and tables.

What is Spark in big data?

Spark is a leading big data processing platform that supports large-scale SQL queries, stream processing, and machine learning. Like Hadoop before it, Spark is a data processing framework, but it works faster and allows stream processing (or real-time processing) as opposed to just batch processing. Spark uses in-memory processing, which can make it up to 100 times faster than Hadoop’s disk-based approach for some workloads. Whereas Hadoop is written in Java, Spark’s core is written in Scala, and the same job can often be expressed in far fewer lines of code, which speeds up development significantly.

Both Hadoop and Spark are now Apache Software Foundation projects; Spark was originally developed at the University of California, Berkeley’s AMPLab before being donated to Apache. Using the two in tandem often gives the best results – Spark for speed and Hadoop for resilient, secure storage, amongst other capabilities.
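A minimal sketch of Spark’s DataFrame API through PySpark (assuming PySpark is installed locally; the data is invented) shows the kind of aggregation Spark keeps in memory and distributes across a cluster’s executors:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session and build a tiny, made-up DataFrame of user events.
spark = SparkSession.builder.appName("BigDataSketch").getOrCreate()

events = spark.createDataFrame(
    [("alice", "click"), ("bob", "purchase"), ("alice", "click"), ("carol", "click")],
    ["user", "event_type"],
)

# Count events per type - the same kind of aggregation a SQL query would express.
events.groupBy("event_type").agg(F.count("*").alias("total")).show()

spark.stop()
```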

How is big data used?

Big data is important because it provides business value that can help companies lead in their sector – it gives a competitive advantage when used correctly.

Increasingly, big data is being used across a wide range of sectors including e-commerce, healthcare, and media and entertainment. Everyday big data uses include eBay using a customer’s purchase history to target them with relevant discounts and offers. As an online retailer, eBay’s use of big data is not new. Yet, within the retail sphere, McKinsey & Company estimate that up to 30% of retailers’ decision-making when it comes to pricing fails to deliver the best price. On average, what feels like a small increase in price of just 1% translates to an impressive 8.7% increase in operating profits (when we assume no loss in volume). Retailers are missing out on these kinds of profits based on a relatively small adjustment by not using big data technologies for price analysis and optimisation.
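The arithmetic behind that 8.7% figure can be reproduced with a quick calculation; the 11.5% operating margin below is an assumption chosen to illustrate the effect, not a number from the McKinsey report:

```python
# A 1% price rise with no loss of volume flows straight through to operating profit.
revenue = 100.0
operating_margin = 0.115
profit_before = revenue * operating_margin                  # 11.5

revenue_after = revenue * 1.01                              # 1% price rise, volume unchanged
profit_after = profit_before + (revenue_after - revenue)    # extra revenue is pure margin

uplift = (profit_after - profit_before) / profit_before
print(f"{uplift:.1%}")   # roughly 8.7%
```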

In healthcare, apps on mobile devices and fitness trackers can track movement, sleep, diet, and hormones, creating rich data sources. All this personal data is fed into big data analysis for further insights into behaviours and habits related to health. Big data can also provide huge strides in some of healthcare’s biggest challenges, like treating cancer. During his time as President of the United States, Barack Obama set up the Cancer Moonshot program. Pooling data from genetically sequenced cancer tissue samples is key to its aim of investigating, learning, and maybe finding a cure for cancer. One unexpected result of using these types of data is the discovery that the antidepressant desipramine may help in treating certain types of lung cancer.

Within the home, energy consumption can certainly be managed more efficiently with the predictive analytics that a smart meter can provide. Smart meters are potentially part of a larger Internet of Things (IoT) – an interconnected system of objects, which are embedded with sensors and software that feeds data back and forth. This data is specifically referred to as sensor data. As more ‘Things’ become connected to one another, in theory, the IoT can optimise everything from shopping to travel. Some buildings are designed to be smart ecosystems, where devices throughout are connected and feeding back data to make a more efficient environment. This is already seen in offices where data collection helps manage lighting, heating, storage, meeting room scheduling, and parking.

Which companies use big data?

Jeff Bezos, the founder of Amazon, has become the richest man in the world by making sure big data was core to the Amazon business model from the start. Through this initial investment in machine learning, Amazon has come to dominate the market by getting its prices right for the company and the customer, and managing its supply chains in the leanest way possible.

Netflix, the popular streaming service, takes a successful big data approach to content curation. It uses algorithms to suggest films and shows you might like to watch based on your viewing history, as well as understanding what film productions the company should fund. Once a humble DVD-rental service, Netflix enjoyed 35 leading nominations at the 2021 Academy Awards. In 2020, Netflix overtook Disney as the world’s most valuable media company. 

These are just some of the many examples of harnessing the value of big data across entertainment, energy, insurance, finance, and telecommunications.

How to become a big data engineer

With so much potential for big data in business, there is great interest in professionals like big data engineers and data scientists who can guide an organisation with its data strategy. 

Gaining a master’s that focuses on data analytics is the perfect first step towards a career in data science. Find out more about getting started in this field with the University of York’s MSc Computer Science with Data Analytics. You don’t need a background in computer science, and the course is 100% online so you can fit it around your current commitments.

What you need to know about mergers and acquisitions

Mergers and acquisitions (M&A) is the term used for the business of merging or acquiring limited companies (also known as private companies) and public limited companies (PLCs). It’s considered a specialism of corporate finance.

What is the difference between a merger and an acquisition?

A merger is when two separate entities combine. The most common structures are either a vertical merger or a horizontal merger. A vertical merger is when two or more companies that operate at different stages of the same supply chain, contributing to the same end product or service, come together. This kind of merger often results in synergies, leading to reduced costs and increased productivity through greater control of the supply chain.

A horizontal merger usually happens between competitors operating in the same space that want to increase their market share by joining forces and becoming one entity. A joint venture is slightly different – it involves two companies creating a new entity in which they both invest and share profit, loss and control. This business is entirely separate from both parties’ other companies.

Acquisition is when a larger acquiring company selects a target company to acquire through a buyout. Usually these are friendly acquisitions, but there can be what is known as a hostile takeover. This is when the acquirer aims to buy controlling interest directly from a company’s shareholders without the consent of its directors. It is a completely legal M&A process but because of the ‘unfriendly’ nature of it, it can affect morale and damage company culture.

Consolidation can refer specifically to an amalgamation, which is the acquiring – and sometimes merging – of many smaller companies that then become part of a larger holding group. This is often seen in the creative industries or with startups.

Are mergers and acquisitions good for the economy?

Mergers and acquisitions tend to be good for the economy because they stimulate business growth, create new jobs and offer investment opportunities for all. Cross-border transactions can also help brands and businesses grow in new territories. However, if a company is looking to grow with the intention of merging or acquiring, it needs working capital to do this. A way of increasing capital is to offer shares on the stock market via an Initial Public Offering (IPO).

IPOs and the rise of SPACs

Famous IPOs in recent decades include Facebook becoming a public company in 2012 and Alibaba’s record-breaking IPO in 2014. Facebook’s IPO was one of the most anticipated in history, with the expected share price steadily revised upwards ahead of the opening day on May 18, and some investors suggesting a valuation of $40 per share in the build-up. The year before, LinkedIn’s stock had doubled in value on its first day of trading, from $45 to around $90. Yet on the day, numerous factors – including computer glitches on the part of the Nasdaq stock exchange – meant Facebook’s shares fell well short of expectations, and the price dropped considerably over the following weeks, with the stock closing at $27.72 on June 1. Other tech companies took a hit and investment firms faced considerable losses. Nasdaq offered reimbursements, which its rival, the New York Stock Exchange, called a “harmful precedent”. Despite these issues, the stock set a new record for the trading volume of an IPO, at 460 million shares.

Increasingly seen in global M&A, particularly in the US, is the Special Purpose Acquisition Company (SPAC). SPACs are created specifically to raise capital through an IPO and then merge with another company. They’re not new – they’ve been around since the 1990s – but SPACs have gained popularity with blue-chip private equity firms, investment banks like Goldman Sachs, and leading entrepreneurs. In turn, this kind of backing encourages more private companies to consider going public. 2020 saw the highest global IPO activity in a decade for the USA, as well as the largest SPAC IPO in history.

The role of private equity

Private equity is capital which is not listed on a public stock exchange. Private equity firms are major players when it comes to M&A deals because they are powerful enough to keep on investing capital over an extended period of time. They have a pool of money accrued from previous M&A transactions, which then feeds into further deals. They also receive private equity from limited partners, pension funds, and capital from other companies. The firm, or fund as it is sometimes known, has good cash flow because of this.

Real estate is a form of private equity. There are private equity firms that specialise solely in the purchase of real estate, and by building a property portfolio they create further investment capital. Property is then improved and rented out, or sold on at a higher price, keeping the fund topped-up. Private investors see a return on their investment, and the money that’s left becomes working capital for the fund’s next venture.

Due diligence and pre-M&A analysis

One of the key steps in any merger or acquisition is the due diligence process. This is when investigations and audits are carried out to verify that all financial information provided by the target company is correct and that the purchase price is justified. Discounted cash flow (DCF) analysis is part of due diligence: it is a method used to estimate the value of an investment based on its predicted future cash flows.
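A minimal sketch of the DCF calculation is shown below; the forecast cash flows and the 10% discount rate are invented for illustration, not real deal figures:

```python
# Each forecast year's cash flow is discounted back to today's value.
forecast_cash_flows = [1_000_000, 1_200_000, 1_400_000, 1_600_000, 1_800_000]
discount_rate = 0.10

present_value = sum(
    cash_flow / (1 + discount_rate) ** year
    for year, cash_flow in enumerate(forecast_cash_flows, start=1)
)
print(f"Estimated value of the forecast cash flows: {present_value:,.0f}")
```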

Another important tool is Accretion Dilution Analysis. This is a basic test carried out before an offer is even made to determine whether a merger or acquisition will increase (accretion) or decrease (dilution) the Earnings per Share (EPS) once completed.
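A simplified accretion dilution test for an all-share deal can be sketched as follows; every figure is invented for illustration, and real analyses layer in financing costs, synergies and tax effects:

```python
# Compare the acquirer's standalone EPS with the combined EPS after the deal.
acquirer_earnings = 500_000_000
acquirer_shares = 250_000_000
target_earnings = 80_000_000
new_shares_issued = 30_000_000          # shares issued to pay for the target

standalone_eps = acquirer_earnings / acquirer_shares
combined_eps = (acquirer_earnings + target_earnings) / (acquirer_shares + new_shares_issued)

change = (combined_eps - standalone_eps) / standalone_eps
print("accretive" if change > 0 else "dilutive", f"({change:.1%})")
```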

Intellectual Property (IP) must be taken into account as well. Acquisitions with an interest in gaining IP assets can have transaction values of billions. A thorough understanding of the complexities of such high-stake transactions is needed in order to derive precise valuation numbers when negotiating a deal.

Why work in mergers and acquisitions?

Global M&A is seeing growth in all sectors, even as the pandemic has seen some major companies fold. The way that we do business continues to be reshaped by world events, and the flux means that there are many business opportunities to take advantage of through mergers and acquisitions. Global M&A in financial services is seeing a boom with the start of 2021 being the busiest since 1980. The predominance of SPACs is set to spread outside of North America, and brings with it a demand for experienced managers and management teams.

Diversification acquisition will see larger companies offering size and scale to smaller companies which perhaps do not have the capital or resources to adapt in their offering, but which are otherwise doing well. At the other end of the spectrum, specialism may be needed, for example in healthcare, which is seeing a spike in demand for home healthcare solutions. Either way, business continues to seek a competitive advantage and mergers and acquisitions continue to provide this.

If you’re interested in learning more about the world of mergers and acquisitions, there are numerous finance-focused podcasts which look specifically at global M&A activity.

Learn more about mergers and acquisitions 

Mergers and acquisitions are a cornerstone of international businesses. Find out how you can sharpen your expertise in international business and mergers and acquisitions with the University of York’s MSc International Business Leadership and Management.

The future of artificial intelligence

Artificial intelligence (AI) is the use of machines to perform tasks that we associate with the human brain – things like problem-solving, perceiving, learning, reasoning, and even creativity. AI has grown exponentially in recent years. The Covid-19 pandemic, in particular, highlighted the need for AI systems and automation that could respond swiftly to reduced numbers of workers.

For organisations that had gone through a digital transformation, AI and associated emerging technologies were already being integrated into business processes. However, for many, Covid was the turning point that highlighted the need for AI solutions to be included in their business models. The AI cloud is a cutting-edge concept that will help make AI software more accessible to businesses by bringing together cloud computing and a shared infrastructure for AI use cases.

Healthcare offers many successful AI case studies, most recently for diagnosing and tracking Covid-19 using rapidly gathered big data, but also increasingly in areas like cancer diagnostics or detecting the development of psychotic disorders. Other sectors that use real-world AI applications include the military, agriculture, manufacturing, telecommunications, IT and cybersecurity, and finance. AI art, or neural network art, is a genre in its own right. Holly Herndon, who has a PhD from Stanford’s Center for Computer Research in Music and Acoustics, uses AI technology in her work.

What are the risks of AI?

Science fiction writers have long been fascinated by the idea of AI taking over. From Blade Runner to The Terminator, the fear is that the machines will start to think for themselves and rise up against humans. This moment is known as the ‘singularity’, defined as the point in time when technological growth overtakes human intelligence, creating a superintelligence developed by self-directed computers. Some people believe that this moment is nearer than we think.

In reality, AI offers many benefits, but the most obvious risks it currently poses are in relation to personal data privacy. In order for deep learning to take place, AI needs to draw information from large amounts of data, and much of that data comes from people’s behaviours being tracked – their personal data. The Data Protection Act 2018, which implemented the General Data Protection Regulation (GDPR) in the UK, was brought in to ensure that people have to opt in to having their data gathered and stored, rather than having to request to opt out. Previously, businesses and organisations had far more latitude to use their customers’ data without explicit consent.

Some of us may feel suspicious about our data being collected and yet, many of the applications we use are constantly gathering information about us, from the music we like and the books we read to the number of hours we sleep at night and the number of steps we walk in the day. When Amazon makes suggestions for what you might like to read next, it’s based on your purchasing and browsing history. A McKinsey & Company report from 2013 stated that 35% of Amazon’s revenue comes from recommendations generated by AI. AI is also instrumental in the way that LinkedIn helps both people to find jobs and companies to find people with the right skill set.

The more we allow our actions to be tracked, in theory, the more accurately our behaviours can be predicted and catered to, leading to easier decision making. New technologies like the Internet of Things (IoT) could help make this data even more interconnected and useful – a fridge that has already made a shopping order based on what you have run out of, for example.

Can AI be ethical?

There are certainly big questions around ethics and AI. For example, artificial neural networks (ANNs) are a type of AI that uses layers of interconnected nodes which mimic the human brain's neurons. The algorithm for an ANN is not determined by human input: the machine learns and develops its own rules with which to make decisions, and these rules are usually not easily traceable by humans. This is known as 'black box' AI because of its lack of transparency, which can have legal as well as ethical implications. In healthcare, for instance, who would be liable for a missed or incorrect diagnosis? In a self-driving car, who would be liable for a wrong turn of the wheel that causes a crash?
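To make the 'black box' point concrete, here is a minimal sketch – illustrative only, using made-up XOR data and plain NumPy rather than any real medical or automotive system – of a tiny neural network. Its learned 'rules' are just matrices of numbers, which is exactly why they are hard for humans to trace.

```python
# A minimal, illustrative sketch (plain NumPy, toy data): a tiny neural network
# trained on the XOR problem. The "rules" it learns are just numeric weight
# matrices, which is why models like this are hard for humans to interpret.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a problem no single linear rule can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, starting from random weights.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagate the squared error and nudge the weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # predictions, ideally close to [0, 1, 1, 0]
print(W1)             # the learned "rules": numbers, not explanations
```

Even in this toy case, the weight matrix says little about why the network answers as it does; in the far larger ANNs used in healthcare or autonomous driving, that opacity becomes a serious legal and ethical question.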

When it comes to data analytics, there is also the issue of bias: because humans select the datasets and write the algorithms, both can carry bias. Historically, the field of data science has not been very diverse, which can lead to some demographics being underrepresented and even inadvertently discriminated against. The more diverse the programming community, the less biased the algorithms – and the more accurate and useful AI applications can become.
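As a hedged illustration of how such bias might be checked in practice, the sketch below compares a hypothetical model's positive predictions across two invented demographic groups; the column names, figures and the 80% threshold (the so-called 'four-fifths' rule of thumb) are assumptions for the example, not a complete fairness audit.

```python
# A simple, hypothetical fairness check (not a complete audit): compare how a
# model's positive predictions are distributed across demographic groups.
import pandas as pd

# Illustrative data only - column names and values are invented.
df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Selection rate per group: the share of each group given a positive prediction.
rates = df.groupby("group")["predicted"].mean()
print(rates)

# A common rule of thumb (the "four-fifths" rule) flags a concern when one
# group's rate falls below 80% of the highest group's rate.
if (rates.min() / rates.max()) < 0.8:
    print("Warning: possible disparate impact - investigate the training data.")
```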

A well-known example of problematic AI use is deepfakes: imagery that has been manipulated or animated so that it appears someone (often a politician) has said or done something they haven't. Deepfakes are linked to fake news and hoaxes that spread via social media. Ironically, just as AI software can clone a human voice or recreate an individual's characteristic facial expressions, it is also key to combating fake news, because it can detect footage that is a deepfake.

What are the challenges in using artificial intelligence?

Machine learning relies on data input from humans. A machine cannot simply start thinking for itself. A human – or a team of humans – has to pinpoint and define the problem first, then present it in a computable way.

A common example of something an AI robot cannot do – which most humans can – is to enter an unfamiliar kitchen and work out where all the items needed to make a cup of tea or coffee are kept. This kind of task requires the brain to adapt its decision-making and improvise based on previous experience of other kitchens. Current AI cannot spontaneously build the data processing needed for this, yet it is a situation the neural networks of a human brain respond to naturally.

What problems can AI solve?

Artificial intelligence is particularly suited to deep learning, which involves scanning and sifting through vast amounts of data in search of patterns. The algorithms developed through deep learning can, in turn, help with predictions. For instance, understanding a city's traffic flow throughout the day and synchronising traffic lights in real time can be facilitated through AI. AI can also strategise: one of the milestones in machine learning was Google DeepMind's AlphaGo beating the world's number one Go player, Ke Jie, in 2017. Go is considered particularly complex and much harder for machines to learn than chess.
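A minimal sketch of the traffic example might look like the following. The junction, counts and peak times are all invented; the point is simply that the 'pattern' being learned can be as plain as an average profile per hour of the day, which can then inform decisions such as extending green phases.

```python
# An illustrative sketch (toy numbers, not a real traffic system): learn the
# typical traffic pattern per hour from historical counts, then use it to
# decide when lights might need longer green phases.
import numpy as np
import pandas as pd

# Hypothetical vehicle counts at one junction, recorded hourly over seven days.
rng = np.random.default_rng(1)
hours = np.tile(np.arange(24), 7)                       # 7 days of hourly slots
base = (200
        + 300 * np.exp(-((hours - 8) ** 2) / 8)         # morning peak
        + 350 * np.exp(-((hours - 17) ** 2) / 8))       # evening peak
counts = base + rng.normal(0, 20, size=hours.size)      # add measurement noise

history = pd.DataFrame({"hour": hours, "vehicles": counts})

# The learned "pattern" here is simply the average count per hour of the day.
pattern = history.groupby("hour")["vehicles"].mean()

# Flag tomorrow's likely busiest hours for longer green phases.
busiest = pattern.nlargest(3)
print(busiest.round(0))
```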

On the practical side, AI can help reduce errors and carry out repetitive or laborious tasks that would take humans much longer. To increase the responsible use of AI, the UK government launched the National AI Strategy in 2021 to help the economy grow through AI technologies. Among the challenges it hopes to address are tackling climate change and improving public services.

In conclusion, AI has huge potential, but ethical, safe and trustworthy AI development is reliant on direction from humans. 

If you’re interested in understanding more about artificial intelligence, our MSc Computer Science with Artificial Intelligence at the University of York is for you. Find out how to apply for the 100% online course. 

Everything you need to know about data analytics

Data analytics is a key component of most business operations, from marketing to supply chain. But what does data analytics mean, and why are so many organisations utilising it for business growth and success?

What is data analytics?

Data analytics is all about studying data – and increasingly big data – to uncover patterns and trends through analysis that leads to insight and predictability. Data analytics emerged from mathematics, statistics and computer programming before becoming a field in its own right. It is closely related to data science, and it's a skill that is in high demand.

We live in a world full of data gleaned from our various devices, which track our habits in order to understand and predict behaviours as well as help decision-making. Algorithms are created based upon the patterns that arise from our usage. Data can be extracted from almost any activity, whether it's tracking sleep patterns or measuring traffic flow through a city – all you need are defined metrics. Although much data extraction is automated, the role of data analysts is to define subsets, look at the data and make sense of it, thereby providing insight that can improve everyday life.
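As a small, hypothetical example of a defined metric, the sketch below turns raw step-count events from a wearable device into a daily total and an average per day using pandas; the timestamps and figures are invented.

```python
# A minimal, hypothetical example of a "defined metric": turning raw step-count
# events from a wearable into a daily total and an average per day.
import pandas as pd

# Illustrative raw data - in practice this would come from a device's export.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-03-04 08:10", "2024-03-04 12:30", "2024-03-04 18:45",
        "2024-03-05 07:55", "2024-03-05 19:20",
    ]),
    "steps": [3200, 4100, 2600, 5000, 3800],
})

daily = events.set_index("timestamp")["steps"].resample("D").sum()
print(daily)                                        # the defined metric: steps per day
print("Average steps per day:", round(daily.mean()))
```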

Why is data analytics important?

Data analytics is particularly important in providing business intelligence that helps with problem-solving across organisations. This is known as business analytics, and it's become a key skill and requirement for many companies when making business decisions. Data mining, statistical modelling and machine learning are all major elements of predictive analytics, which uses historical data. Rather than simply looking at what happened in the past, businesses can get a good idea of what will happen in the future through analysis and modelling of different types of data. This can then help them assess risk and opportunity when planning ahead.
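The sketch below illustrates predictive analytics in miniature: a simple model is fitted to invented historical customer records and then used to estimate the risk that a new customer will leave. The column names, figures and the choice of logistic regression are assumptions made for the example, not a recommended setup.

```python
# A hedged sketch of predictive analytics: fit a simple model to hypothetical
# historical records, then estimate the risk of a future event.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "orders_last_year": [2, 15, 1, 22, 8, 30, 3, 12],
    "complaints":       [3,  0, 2,  1, 1,  0, 4,  0],
    "churned":          [1,  0, 1,  0, 0,  0, 1,  0],   # what happened in the past
})

model = LogisticRegression()
model.fit(history[["orders_last_year", "complaints"]], history["churned"])

# Use the learned relationship to look forward rather than backward.
new_customer = pd.DataFrame({"orders_last_year": [4], "complaints": [2]})
print("Estimated churn risk:", model.predict_proba(new_customer)[0, 1].round(2))
```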

In healthcare, for example, data analytics helps streamline operations and reduce waiting times, so patients are seen more quickly. During the pandemic, data analysis was crucial in analysing figures related to the rate of infection, which helped in identifying hotspots and in forecasting increases or decreases in infections.

Becoming qualified as a data analyst can lead to work in almost any sector. Data analysis is essential for managing global supply chains and for planning in banking, insurance, healthcare, retail and telecommunications.

The difference between data analytics and data analysis

Although it may seem like data analytics and data analysis are the same, they are understood slightly differently. Data analytics is the overarching term that defines the practice, while data analysis is one part of the whole process. Once datasets have been prepared, usually using machines to speed up the sorting of unstructured data, data analysts use techniques such as data cleansing, data transformation and data modelling to build insightful statistical information. This insight is then used, within data analytics as a whole, to help improve and optimise everyday processes.
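A small, hypothetical example of cleansing and transformation with pandas is shown below; the customer records are invented, and the steps (removing duplicates, fixing types, tidying labels and filling gaps) are typical rather than prescriptive.

```python
# A small, hypothetical example of data cleansing and transformation with
# pandas before any modelling takes place.
import pandas as pd

raw = pd.DataFrame({
    "customer": ["A01", "A02", "A02", "A03", "A04"],
    "spend":    ["120", "85", "85", None, "not recorded"],
    "region":   ["North", "north ", "north ", "South", "South"],
})

clean = (
    raw.drop_duplicates()                                            # remove repeated rows
       .assign(
           spend=lambda d: pd.to_numeric(d["spend"], errors="coerce"),  # fix types
           region=lambda d: d["region"].str.strip().str.title(),        # tidy labels
       )
)
clean["spend"] = clean["spend"].fillna(clean["spend"].median())      # fill gaps

print(clean)
print(clean.groupby("region")["spend"].describe())                   # simple summary
```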

What is machine learning?

Machine learning – a form of artificial intelligence – is a method of data analysis that uses automation for analytical model building. Once the machine has learnt to identify patterns through algorithms, it can make informed decisions without the need for human input. Machine learning speeds up data analysis considerably, but this relies on data and parameters being accurate and unbiased, which still needs human intervention and moderation. It is a current area of interest because the way data analysis progresses and supports us depends on more diverse representation amongst data analysts.
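The following sketch shows what 'automated model building' can mean in practice, using invented browsing data and a small decision tree: no human writes the decision thresholds – the model learns them from the examples and can then make decisions on new cases.

```python
# A minimal sketch of automated model building: rather than hand-writing rules,
# we let a decision tree learn them from (invented) example data.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: [hours_active, pages_viewed] -> bought (1) or not (0)
X = [[0.5, 2], [1.0, 3], [4.0, 25], [5.5, 30], [0.2, 1], [6.0, 40]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rules can be printed - no human wrote these thresholds.
print(export_text(tree, feature_names=["hours_active", "pages_viewed"]))
print(tree.predict([[3.0, 20]]))   # an informed decision on a new case
```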

Currently, most automated machine learning is applied to simple, well-defined problems. More complex problems still require teams of people to work on them, so artificial intelligence is not going to take over any time soon. Human consciousness is still a mystery to us, but it is what makes the human brain's ability to analyse unique.

What are data analytics tools?

There are a number of tools that help with analysis and overall analytics, and many businesses utilise at least some of them in their day-to-day operations. Here are some of the more popular ones, which you may have heard of:

  • Microsoft Excel is one of the most well-known and useful tools for tabular data.
  • Tableau is business intelligence software that helps to make data analysis fast and easy by connecting to sources such as Excel spreadsheets and databases.
  • Python is a programming language used by data analysts and developers which makes it easy to collaborate on machine learning and data visualisation, amongst other things.
  • SQL (Structured Query Language) is a domain-specific language used to query and manage data held in relational databases.
  • Hadoop is a distributed file system that can store and process large volumes of data.

Analysts also work with databases that store relational (SQL) and non-relational (NoSQL) data. Learning these tools and becoming fluent in how to use them is a necessary step towards becoming a data analyst.
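To show how some of these tools fit together, here is a brief, hypothetical example in which Python queries a relational (SQL) database and hands the result to pandas for further analysis; the table and figures are invented, and an in-memory SQLite database stands in for a production system.

```python
# A small illustration of how the tools above combine: Python querying a
# relational (SQL) database and loading the result into pandas for analysis.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")                 # throwaway in-memory database
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 120.0), ("North", 95.5), ("South", 210.0), ("South", 80.0)],
)

# SQL does the filtering and aggregation; pandas takes over for further analysis.
df = pd.read_sql_query(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", conn
)
print(df)
conn.close()
```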

How to get into data analytics

Working in data analytics requires a head for numbers and statistical techniques. But it also requires the ability to spot problems that need solving, and an understanding of the criteria by which data should be measured and analysed to provide solutions.

You need to become familiar with the wide range of methods used by analysts, such as regression analysis (investigating the relationship between variables), Monte Carlo simulation (frequently used for risk analysis) and cluster analysis (classifying relative groups). In a way, you are telling a story through statistical data, so you need to be a good interpreter of data and a clear communicator of your findings. You will also need patience because, in order to start your investigations, it's important to have good-quality data. This is where the human eye is needed to spot things like coding errors and to transform data into something meaningful.
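As a flavour of one of these methods, the sketch below runs a simple Monte Carlo simulation of project costs; the distributions and figures are invented, and the point is that the output is a range of likely outcomes rather than a single number.

```python
# A hedged sketch of a Monte Carlo simulation: estimate the range of possible
# project costs when the inputs are uncertain. All figures are invented.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                        # number of simulated scenarios

labour_hours = rng.normal(500, 50, n)              # uncertain effort
hourly_rate = rng.uniform(40, 60, n)               # uncertain rate
materials = rng.normal(10_000, 1_500, n)           # uncertain materials cost

total_cost = labour_hours * hourly_rate + materials

# Summarise the simulated distribution rather than quoting a single figure.
p5, p50, p95 = np.percentile(total_cost, [5, 50, 95])
print(f"Median cost: £{p50:,.0f}")
print(f"90% of scenarios fall between £{p5:,.0f} and £{p95:,.0f}")
```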

Studying for an MSc Computer Science with Data Analytics online

You can become a data analyst with the postgraduate course MSc Computer Science with Data Analytics from the University of York. The course is 100% online with six start dates per year, so you can study anywhere, any time.

You can also pay per module. Topics covered include Big Data Analytics, Data Mining and Text Analysis, and Artificial Intelligence and Operating Systems. Once you've completed the learning modules, you can embark on an Individual Research Project in a field of your choice.

Take the next step in your career by mastering the science of data analytics.

Can leaders be flawless?

One of the key traits that many people identify in leaders is confidence and an aura of strength, whether in making tough choices or in guiding a team and company through a challenging business landscape. There is a perception that this confidence and strength means that leaders are unerring and never falter in their actions, their attitude or their assuredness in their own abilities. We expect our leaders to be flawless.

Can a leader ever be perfect?

While nobody likes to make mistakes, everybody does. It's admitting to errors, taking responsibility and owning the solution that makes people seem open, honest and transparent. In a leader, this can appear more 'human' and therefore relatable. Employees working for a leader who is seen as a real person may find them more approachable, meaning that teams are more cohesive, can resolve problems faster and communicate and collaborate more effectively.

In contrast, leaders who are perceived as being too ‘perfect’ may find that their employees feel less able to approach them when things aren’t going well. On top of this, the weight of expectation placed on a ‘perfect’ leader may cause stress, hamper their ability to seek assistance and increase feelings of isolation and loneliness.

Should leaders aim for perfection?

While leaders should strive to exemplify the highest standards and inspire employees to do the same, an aversion to being seen as anything less than perfect can be very restrictive. At its worst, perfectionism can prevent positive actions being taken, just in case they go wrong. Fear of failure can be a key limiting factor to the success of the company.

Beyond striving for perfection, leaders who become over-confident can be a liability. Believing success to be assured, they can fail to accurately assess risks and, when things go wrong, may seek to shift the blame on to others. You can’t be perfect if you don’t recognise your own limitations.

Is there a balance between confidence and humanity?

Leaders who can balance their confidence and assuredness with approachability and humility could find that their role is easier.

Candidates who can demonstrate their leadership credentials with critical skills – such as effective communication and the ability to critically analyse and solve workplace problems, including what to do when mistakes occur – often find themselves in high demand. This is where the University of York's 100% online Masters degree courses in Leadership and Management come in. Gaining a prestigious postgraduate degree from a Russell Group university could help you to differentiate yourself from other aspiring leaders. As all learning materials are delivered digitally, you can study online when it suits you. There's no need to take an extended study break: you can keep your current role and apply what you learn as you go.

As you can keep earning while you’re learning, it minimises the financial impact of study. It’s also possible to pay-per-module and you may be able to apply for a government-backed loan to assist with course fees. There are six start dates per year, meaning that you can start studying within weeks.

Find out more and begin your application.

Can we ever eliminate cyber security threats?

High profile data breaches have frequently made the headlines over the last few years, with household names and respected tech brands like Facebook and Uber falling victim to large scale attacks. The fact that some of the biggest and most profitable companies in the world have been duped by such attacks highlights just how difficult the situation is.

Cyber-attacks may not be a new concept, but they're certainly increasing in volume. The growing sophistication of attacks means that measures which once prevented or minimised damage are no longer effective.

The problem

As the volume of attacks, alerts and threats increases, IT teams are put under increasing pressure. Each potential cyber security threat flagged by the system needs to be explored to determine its credibility and the impact it could have on the business. If a serious threat is identified then the team must take further action to prevent or minimise damage.

Across the world there's a chronic computer science skills shortage, and the picture is no different in the UK. Businesses are already stretched, with the number of unfilled tech roles predicted to grow from around 600,000 to 1 million by 2020. Couple that with an increase in workload due to the proliferation of cyber security threats, and it is easy to see why so many businesses are struggling to fend off attacks.

The solution

There are many technology options that can help prevent attacks, and researchers are developing new ways to fend off threats all the time. More and more frequently, companies are deploying artificial intelligence (AI) to support IT teams and free up some of the time it takes to identify legitimate threats. A plethora of cost-effective products now use AI, data and machine learning to help IT teams detect breaches faster and more accurately, minimising their frequency and severity.
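The sketch below gives a sense of the kind of anomaly detection such products perform, using invented session data and a standard isolation forest; it is an illustration of the technique, not a description of any particular vendor's system.

```python
# An illustrative sketch (invented numbers, not a real security product): flag
# log-in activity that looks unlike the normal pattern so analysts only review
# the unusual cases.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Features per session: [failed log-in attempts, megabytes downloaded]
normal = np.column_stack([rng.poisson(1, 500), rng.normal(20, 5, 500)])
suspicious = np.array([[30, 400.0], [25, 350.0]])        # brute force + bulk export
sessions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = detector.predict(sessions)                       # -1 marks an anomaly

print("Flagged sessions:", np.where(flags == -1)[0])     # analysts review these
```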

Taking this time-consuming work away from IT departments frees up more time to shore up cyber defences: ensuring employees know how they could be used as a conduit for an attack through phishing scams; reviewing the security of legacy software; and checking old code for weaknesses that could be exploited.

It is unlikely that we’ll ever be able to entirely eliminate the threat of cyber-attacks, but with an increased use of AI, businesses are able to manage the threats more effectively. It takes skill and an in-depth understanding of cyber security issues to implement and maintain these systems. This is why the University of York has introduced the 100% online MSc Computer Science with Cyber Security, for ambitious individuals looking to move into computer science roles.

The course covers specific topics such as cyber security threats and security engineering. It also covers key areas of computer science expertise, including advanced programming and artificial intelligence, giving ambitious students the skills required to pursue a career in cyber security.

There’s no need to take a career break or juggle family commitments as the course is delivered 100% online, with all programme materials accessible from a wide variety of devices at any time. There’s also a choice of six start dates per year and a pay per module option which eliminates the need for a large upfront payment. All this means you can earn a prestigious Masters degree from a Russell Group University in a flexible way that suits you.

Find out more and begin your application.

What is an Entrepreneurial Leader?

Expectations placed upon leadership are understandably high. We expect our leaders to have perfect strategy and superhuman decision-making skills. The truth is, some of the most important attributes for a leader are curiosity, learning and a constant desire to iterate and improve. Without the ability to listen to others, to embrace new ideas and to change course in the face of new information, success will always be out of reach.

An enquiring nature and an open mind should be true of any leader, but for entrepreneurial leaders it is of paramount importance. Being able to absorb and assimilate new information gives the greatest chance of success. Entrepreneurs thrive on the new and the innovative and are highly desirable to companies of all sizes to help them generate and test out new ideas and to keep fresh and disruptive thinking at the forefront of the business.

The willingness to change

Being able to take on board new concepts and ideas isn't always enough to enable effective leadership – sometimes it takes 'unlearning' what you think you know. There are many examples of new, exciting ideas being rejected by those who couldn't accept something radical. For example, engineer Steven Sasson of Kodak invented a digital camera in 1975, but the company was not convinced by the new technology. They couldn't understand why consumers would want to view images on a TV screen when film was so inexpensive. Kodak eventually made the move to digital 18 years later.

Fostering an open culture

Another skill that can be highly beneficial to leadership is a level of empathy that allows you to pick up on how employees are feeling. Approachable and understanding leaders who balance their skills and expertise with a forthright and open attitude may find that their employees are harder-working and more loyal.

In new markets, start-up environments and areas of business that are being created from scratch, the ability to bring together a cohesive team can make all the difference. As technologies and businesses develop, teams must communicate closely and react quickly to cope with the pace of change.

Confidence is key

Having the confidence to look at a range of options, make sound judgements and be decisive is a key business skill. Experience in business can give you a level of assuredness, but it’s also important to have confidence in the people around you. Entrepreneurial leaders rely on their colleagues to fill gaps in skills and knowledge. By using sound decision-making processes, they increase the chances of success in new fields.

An entrepreneurial mindset isn’t just something certain people have and others don’t. It’s a learned skillset which you can develop, given the right environment.

The University of York has a suite of 100% online Masters degrees in Leadership and Management to develop these skills in aspiring business leaders. As all learning materials are delivered completely online, you can study around work or family commitments whenever it suits you. This means there's no need to take an extended study break, and you can apply what you learn as you go, keeping your current role and salary. Pay-per-module options are available to reduce large up-front costs, and a government-backed loan to assist with course fees is available to those who are eligible. With six start dates per year, you can begin your studies and personal growth as soon as you're ready.

Find out more and begin your application.