Intellectual capital: driving business growth and innovation

How can a business maximise its growth and development? What can be done to increase competitive advantage? Are businesses making the best possible use of all their assets?

In an increasingly crowded global economy, all businesses must work hard to remain relevant, competitive and profitable. Innovation is key to maximising business growth, and many businesses already possess the means to achieve it. Alongside this, developing customer-focused, personalised experiences – and adding value throughout the customer journey – is equally important. An organisation’s intellectual capital has the potential to achieve both aims and add significant economic benefit – but what is it, and how is it best utilised?

What is intellectual capital?

Intellectual capital (IC) refers to the value of an organisation’s collective knowledge and resources that can provide it with some form of economic benefit. It encompasses employee knowledge, skill sets and professional training, as well as information and data.

In this way, IC identifies intangible assets, separating them into distinct, meaningful categories. Although not accounted for on a balance sheet, these non-monetary assets remain central to decision making and can have a profound impact on a company’s bottom line. More than ever, IC is recognised as one of the most critical strategic assets for businesses.

Broadly speaking, there are three main categories:

  • Human capital: the ‘engine’ and ‘brain’ of a company is its workforce. Human capital is an umbrella term, referring to the skills, expertise, education and knowledge of an organisation’s staff – including how effectively such resources are used by those in management and leadership positions. A pool of talented employees, with a wealth of both professional and personal skills, adds significant value to a workplace. Companies who prioritise investing in the training, development and wellbeing of their teams are actively investing in their human capital. It can bring a host of benefits, including increased productivity and profitability.
  • Relational capital: this category refers to any useful relationships an organisation maintains – for example, with suppliers, customers, business partners and other stakeholders – as well as brand, reputation and trademarks. Customer capital is adjacent to this, and refers to current and future revenues from customer relationships.
  • Structural capital: structural capital relates to system functionality. It encompasses the processes, organisation and operations by which human and relational capital are supported. This may include intellectual property and innovation capital, data and databases, culture, hierarchy, non-physical infrastructure and more.

Each area offers the means for value creation – which is integral to increasing competitiveness. As such, business leaders should prioritise intellectual capital, and its role within operational strategy, in both short-term and long-term planning.

How is intellectual capital measured?

As stated, while IC is counted among a company’s assets, it is not included in its balance sheet. While there are various ways to measure intellectual capital, there isn’t one widely agreed, consistent method for doing so. Together, these aspects mean that quantifying it can be challenging.

Three main methods are generally used to measure IC:

  • The balanced scorecard method examines four key areas of a business to identify whether they are ‘balanced’. They are:
    1. customer perspective – how customers view the business; 
    2. internal perspective – how a company perceives its own strengths; 
    3. innovation and learning perspective – examining growth, development and shortfalls;
    4. financial perspective – whether shareholder commitments are being met. 

A visual tool which communicates organisational structure and strategic metrics, the scorecard provides a detailed overview without overwhelming leaders with information.

  • The Skandia Navigator method uses a series of markers to develop a well-rounded overview of organisational performance. It focuses on five key areas: 
    1. financial focus – referring to overall financial health; 
    2. customer focus – including aspects such as returning customers and satisfaction scores; 
    3. process focus – how efficient and fit-for-purpose business processes are;
    4. renewal and development focus – which looks at long-term business strategy and sustainability;
    5. human focus – sitting at the centre of the others, human focus encompasses employee wellbeing, experience, expertise and skills.
  • Market value-to-book value ratio is calculated by dividing a company’s market value by its book value, and aims to identify both undervalued and overvalued assets. A ratio above one suggests the market values the business at more than its recorded assets – often a sign of intangible assets, such as intellectual capital, that do not appear on the balance sheet; a ratio below one suggests that recorded assets may be overvalued or underperforming, and that action could be taken to strengthen them. A short worked example follows this list.
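
As a minimal worked example, the short Python sketch below calculates the ratio for a hypothetical company; all figures are invented for illustration.

# Market value-to-book value ratio; all figures are hypothetical.
market_value = 120_000_000   # market capitalisation: share price x shares in issue (£)
book_value = 75_000_000      # net assets recorded on the balance sheet (£)

ratio = market_value / book_value
print(f"Market-to-book ratio: {ratio:.2f}")   # 1.60

# A ratio of 1.60 implies roughly £45m of value that the market recognises
# but the balance sheet does not, much of it likely to be intellectual capital.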

How can a business increase its intellectual capital?

Intellectual capital acts as a value-driver in our twenty-first-century economy. As such, it’s no surprise that many businesses are pivoting to focus on human, relational and structural assets over others. Given both its relative importance and the returns an organisation can expect, finding ways to increase IC could be central to achieving business goals.

For Forbes, efforts to increase IC mean adopting either a solution-focused or perspective-focused approach. The first refers to the methods by which specific results can be achieved – the what, when, why and where. The second refers to how IC can utilise industry and marketplace trends, forecasts and insights to seize opportunities. Whichever approach a business opts for, there are a number of ways to boost intellectual capital. These include:

  • Improving employee satisfaction to increase retention rates
  • Recruiting individuals with specific knowledge, competencies and skill sets that are currently lacking among the existing workforce
  • Auditing and enhancing systems and processes
  • Gathering research and data to inform decision making
  • Investing in training and development opportunities for employees
  • Improving employer branding to both attract and retain the best talent
  • Creating new products, services and initiatives through innovation

Influential contributors and further reading

Early and current proponents and authors of intellectual capital thinking include:

  • Patrick H Sullivan, who wrote ‘A Brief History of the Intellectual Capital Movement’, presented a concise overview of the beginnings of the discipline, tracing it back to three origins. These were: Hiroyuki Itami, who studied invisible assets pertaining to Japanese operational management; the work of various economists (Penrose, Rumelt, Wernerfelt et al), which was drawn on in Dr David J Teece’s 1986 article on commercialising technology; and Karl-Erik Sveiby, who focused on human capital in terms of employee competences and knowledge base. Sveiby’s model of intellectual capital, published in 1997, was a seminal contribution to the field.
  • Dr David J Teece published ‘Managing Intellectual Capital’ in 2002, and further publications by him are available on Google Scholar.
  • Leif Edvinsson’s 2002 book, ‘Corporate Longitude’, concerned itself with the measurement, valuation and economic impact of the knowledge economy.
  • Thomas A Stewart, a pioneer in the field, authored ‘Intellectual Capital: The New Wealth of Organizations’ in 1997. He delved into areas such as unlocking hidden assets, spotting and mentoring talented employees, and methods for identifying and retaining customer and brand loyalty.

The field of intellectual capital continues to expand and evolve globally. Many well-known international figures such as Johan Roos and Nick Bontis continue to explore both its ramifications and applications.

Develop the specialist skills to succeed in fast-paced, global business environments

Become adept at the management of intellectual capital – alongside a wide variety of other business and leadership skills – with the University of York’s 100% online MSc International Business Leadership and Management programme.

You’ll gain in-depth, real-world know-how and tools to navigate the global business marketplace, exploring the challenges and opportunities associated with leadership and business management. Supported by our experts, you’ll further your knowledge in marketing, operations, strategy, project management, finance, people management and more. 

As well as providing a broad overview of management disciplines, this flexible programme will develop vital decision-making, critical-thinking, problem-solving and communication skills.

The importance of innovation management in business

In a constantly changing commercial world, the challenge is to not be left behind. Gaining and sustaining a competitive edge is key to thriving in today’s global marketplace. Innovation management has become an essential component in navigating this increasingly complex and international business environment.

What is innovation management?

Applied to business, innovation is all about generating new ways of solving problems, using different models, theories and frameworks. It is a creative process which uses techniques such as brainstorming and prototyping, and plays a critical role in the design thinking process. 

There are as many ways to innovate as there are problems to solve. The goal is to introduce new or improved products or services in order to gain competitive advantage. By developing a sustainable, ongoing innovation process, a company’s brand image and advancement are set on an upward trajectory.

How innovation management happens

Coming up with innovative ideas, products and services is directly down to the pool of talent available in the workforce. Traditionally, companies would generate ideas in-house, but many are now turning to open innovation. This refers to companies and organisations working with external agencies such as academic and research institutions, suppliers and clients. It fosters a working model very different from the traditional one, but is advantageous to all parties.

Initiatives are carried out by an organisation with the aim of identifying and creating new business openings through:

  • generating ideas
  • exploring future areas of growth
  • modelling products and services
  • experimenting and testing new concepts.

Not everything needs to start from scratch. Many existing products or services may already work well, and simply need to be approached differently – for example, through adaptation and modification.

Successful innovation in business relies on certain criteria, including:

  • Business models. Your company must be flexible enough to rethink the business and find new revenue streams. Companies may resist looking at new ways of managing existing systems and operations. However, actively challenging current and long-held assumptions is important in order to discover potential opportunities. 
  • Employee engagement. The human resources element available to businesses is invaluable. By tapping innovative ideas directly from the workforce, and engaging employees in showcasing skills and knowledge, ideation and innovation can be disseminated to everyone’s benefit.
  • Use of technology. Most of us have accepted the seamless integration of technological innovation in our professional lives. Although not every innovative idea will involve costly technological input and outlay, in today’s global, fast-moving market many will. Much of the world’s commercial thrust is reliant on the acquisition of data and knowledge. Google, for example, invests heavily in managing the innovation process.
  • Marketing. Brand awareness and visibility are a vital part of a company’s profile. There is no point in developing or producing a product or service if people are unaware of it. Marketing is one of the major factors in driving international sales and profitability.

Key aspects of innovation management

Different types of innovation have been identified within the innovation management process:

  • Incremental innovation. As its name suggests, in this strategy an existing product or service is subject to continual improvement and updates. Although such changes may be small or large, they still require defined methodologies and strategies to ensure continuous improvement. Starting out with its prototype in the early twentieth century, Gillette is a high-profile brand which continually upgrades its razors with new features while retaining its core design. Likewise, in the current mobile phone market, innovation is delivered through frequent small updates to software.
  • Disruptive innovation. This occurs when product development results in a paradigm shift which has a radical impact on a business market. It can take a long time to get to the creation stage – often months and years in planning and execution. A great deal of project management, research, testing and evaluation is required. A classic example of disruptive innovation is demonstrated by Apple. When the iPhone was introduced in June 2007, it was an instant global success. It wasn’t the first mobile phone, but it overtook the existing competition and effectively launched the smartphone revolution.  
  • Architectural innovation. Introduced by Professor Rebecca Henderson and Dean Kim Clark of Harvard Business School in 1990, architectural innovation involves reconfiguring components of in-use products or services. Whether seeking a new target audience, or adding value to the existing market, it makes changes without radically altering either technologies or parts. As with the other innovation strategies, alterations must be questioned, evaluated and tested to determine whether clients and customers would value any changes.
  • Major innovation. This business process is arguably the ultimate achievement: it seeks to introduce a brand new sector or industry. Inventions such as the printing press, the telephone and the internal combustion engine have literally changed the world. Forbes has listed some of the top innovation companies in recent times. All are focused on attaining the pinnacle of product and service provision.

Ways of participating in innovation management

Opportunities in innovation culture are limitless; global trade, marketing and sales continue to grow exponentially. The sector is populated not only by more ‘traditional’ business models – small or large organisations with employees – but also by those who are motivated by entrepreneurship and prefer to launch start-ups.

Global commerce both fosters and demands cross-cultural management and organisation. Addressing contemporary issues in international business requires professionals with the knowledge and skill set to tackle a wide range of situations. Areas of interest may include:

  • Providing consultancy services
  • Sourcing new business and sales
  • Forming partnerships with external organisations using open innovation
  • Working with stakeholders, including shareholders, customers, suppliers and employees
  • Dealing with legal matters such as intellectual property and ethical concerns
  • Portfolio management

Choosing the right course for you

Gain the qualifications to help you succeed in the international business sector with the University of York’s online MSc International Business Leadership and Management programme. All practical information regarding the MSc programme – such as modules, topics, entry requirements, tuition fees and English language qualifications – can be found on our course page.

Artificial intelligence and its impact on everyday life

In recent years, artificial intelligence (AI) has woven itself into our daily lives in ways we may not even notice. It has become so pervasive that many of us remain unaware of both its impact and our reliance upon it.

From morning to night, going about our everyday routines, AI technology drives much of what we do. When we wake, many of us reach for our mobile phone or laptop to start our day. Doing so has become automatic, and integral to how we function in terms of our decision-making, planning and information-seeking.

Once we’ve switched on our devices, we instantly plug into AI functionality such as:

  • face ID and image recognition
  • emails
  • apps
  • social media
  • Google search
  • digital voice assistants like Apple’s Siri and Amazon’s Alexa
  • online banking
  • driving aids – route mapping, traffic updates, weather conditions
  • shopping
  • leisure downtime – such as Netflix and Amazon for films and programmes

AI touches every aspect of our personal and professional online lives today. Global communication and interconnectivity in business is, and will continue to be, hugely important. Capitalising on artificial intelligence and data science is essential, and the potential growth trajectory of these technologies is limitless.

Whilst AI is accepted as almost commonplace, what exactly is it and how did it originate?

What is artificial intelligence?

AI is the intelligence demonstrated by machines, as opposed to the natural intelligence displayed by both animals and humans. 

The human brain is the most complex organ we know of, controlling all functions of the body and interpreting information from the outside world. Its neural networks comprise approximately 86 billion neurons, woven together by an estimated 100 trillion synapses. Even now, neuroscientists have yet to unravel many of its mechanisms and capabilities.

The human being is constantly evolving and learning; this mirrors how AI functions at its core. Human intelligence, creativity, knowledge, experience and innovation are the drivers for expansion in current, and future, machine intelligence technologies.

When was artificial intelligence invented?

During the Second World War, work by Alan Turing at Bletchley Park on code-breaking German messages heralded a seminal scientific turning point. His groundbreaking work helped develop some of the basics of computer science. 

By the 1950s, Turing was asking whether machines could think for themselves. This radical idea, together with the growing implications of machine learning in problem solving, led to many breakthroughs in the field. Research explored the fundamental question of whether machines could be directed and instructed to:

  • think
  • understand
  • learn
  • apply their own ‘intelligence’ in solving problems like humans.

Computer and cognitive scientists, such as Marvin Minsky and John McCarthy, recognised this potential in the 1950s. Their research, which built on Turing’s, fuelled exponential growth in this area. Attendees at a 1956 workshop held at Dartmouth College in the USA laid the foundations for what we now consider the field of AI, and many of those present went on to become artificial intelligence leaders and innovators over the following decades.

In testimony to his groundbreaking research, the Turing Test – in its updated form – is still applied in today’s AI research, and is used to gauge the success of AI development and projects.

This infographic detailing the history of AI offers a useful snapshot of these main events.

How does artificial intelligence work?

AI is built upon acquiring vast amounts of data. This data can then be analysed to extract knowledge, patterns and insights. The aim is to build on these blocks of knowledge and apply the results to new and unfamiliar scenarios.

Such technology relies on advanced machine learning algorithms and extremely high-level programming, datasets, databases and computer architecture. The success of specific tasks is, amongst other things, down to computational thinking, software engineering and a focus on problem solving.

Artificial intelligence comes in many forms, ranging from simple tools like chatbots in customer services applications, through to complex machine learning systems for huge business organisations. The field is vast, incorporating technologies such as:

  • Machine Learning (ML). Using algorithms and statistical models, ML refers to computer systems which are able to learn and adapt without following explicit instructions. ML models draw inferences from patterns in data, and approaches are generally split into three main types: supervised, unsupervised and reinforcement learning (a brief supervised-learning sketch follows this list).
  • Narrow AI. This is integral to modern computer systems, referring to those which have been taught, or have learned, to undertake specific tasks without being explicitly programmed to do so. Examples of narrow AI include virtual assistants on mobile phones, such as Siri on Apple’s iPhone and Google Assistant on Android devices, and recommendation engines which make suggestions based on search or buying history.
  • Artificial General Intelligence (AGI). At times, the worlds of science fiction and reality appear to blur. Hypothetically, AGI – exemplified by the robots in programmes such as Westworld, The Matrix and Star Trek – refers to intelligent machines that can understand and learn any task or process usually undertaken by a human being.
  • Strong AI. This term is often used interchangeably with AGI. However, some artificial intelligence academics and researchers believe it should apply only once machines achieve sentience or consciousness.
  • Natural Language Processing (NLP). This is a challenging area of AI within computer science, as it requires enormous amounts of data. Expert systems and data interpretation are required to teach intelligent machines how to understand the way in which humans write and speak. NLP applications are increasingly used, for example, within healthcare and call centre settings.
  • DeepMind. As major technology organisations seek to capture the machine learning market, they are developing cloud services to tap into sectors such as leisure and recreation. For example, Google’s DeepMind created AlphaGo, a computer program that plays the board game Go, while IBM’s Watson is a supercomputer which famously competed on the television quiz show Jeopardy!. Using NLP, Watson recognised and responded to spoken questions, causing a stir in public awareness regarding the potential future of AI.
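
To make the supervised branch of machine learning concrete, here is a minimal sketch using the open-source scikit-learn library and its built-in Iris dataset; the model and parameter choices are illustrative only.

# Minimal supervised learning sketch using the open-source scikit-learn library.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# A small labelled dataset: flower measurements and their species.
X, y = load_iris(return_X_y=True)

# Hold back a quarter of the data to test how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 'Supervised' learning: the model is shown inputs together with the correct labels.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2f}")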

Artificial intelligence career prospects

Automation, data science and the use of AI will only continue to expand. Forecasts for the data analytics industry up to 2023 predict exponential expansion in the big data gathering sector. In The Global Big Data Analytics Forecast to 2023, Frost & Sullivan project growth of 29.7%, worth a staggering $40.6 billion.

As such, there exists much as-yet-untapped potential, with growing career prospects. Many top employers seek professionals with the skills, expertise and knowledge to propel their organisational aims forward. Career pathways may include:

  • Robotics and self-driving/autonomous cars (such as Waymo, Nissan, Renault)
  • Healthcare (for instance, multiple applications in genetic sequencing research, treating tumours, and developing tools to speed up diagnoses including Alzheimer’s disease)
  • Academia (leading universities in AI research include MIT, Stanford, Harvard and Cambridge)
  • Retail (AmazonGo shops and other innovative shopping options)
  • Banking
  • Finance

What is certain is that with every technological shift, new jobs and careers will be created to replace those lost.

Gain the qualifications to succeed in the data science and artificial intelligence sector

Are you ready to take your next step towards a challenging, and professionally rewarding, career?

The University of York’s online MSc Computer Science with Data Analytics programme will give you the theoretical and practical knowledge needed to succeed in this growing field.

Digital influences on the way we live and work

The exponential growth of digital connection in our world is all-pervasive and touches on every aspect of our daily lives, both personally and professionally.

Today, many people would be hard-pressed to imagine life before the advent of digital technology. It has blended and integrated seamlessly into everyday living. Global interconnectivity and the ability to communicate and network instantly is now a ‘given’. This expectation has markedly transformed the human experience across all areas, and the ‘always on’ culture has led to the creation of a vast online and computerised world. 

Creatively, this has inevitably resulted in entirely new ways of working and of interpreting the world through data. Artificial intelligence (AI), with its attendant strands of computational science, is a vital link in the chain of twenty-first century life.

Whose choice is it anyway?

As the everyday world becomes saturated with digital information, choice and decision-making become harder to navigate. The sheer amount of data available has led to the development of programs that help end-users make choices tailored to them. Examples can be simple – choosing the best shampoo for your hair type, or which restaurant to book for a special night out – or more complex, such as looking for a new home in a different area.

Recommender systems are built into many technological platforms and are used for both individual and ecommerce purposes. Although choice availability appears straightforward, the process behind it is remarkably elaborate and sophisticated.

The science behind the experience

A recommender system is an information filtering system run by machine learning algorithms programmed to predict user interest, preferences and ratings in relation to products and/or information browsed online.

Currently, there are three main types of recommender systems:

  1. Content-based filtering. This is driven by the user’s own behaviour, picking up what has been searched for previously or is being searched for currently. Keyword-dependent, it identifies patterns in the attributes of items a user has engaged with in order to recommend similar items.
  2. Collaborative filtering recommender. This uses a more advanced approach: users are treated as similar if they have chosen similar items. Collaborative filtering methods centre on analysing interactions across many users and items, so that like-for-like comparisons can be made and items liked by one user can be recommended to another.
  3. Hybrid recommender. An amalgam of the two previous types, this system combines content-based and collaborative signals – for example, generating candidate items with one method and ranking them with the other. (A small collaborative-filtering sketch follows this list.)
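
A toy example helps to illustrate the collaborative filtering idea. The sketch below, written with NumPy and entirely hypothetical ratings, treats users with similar rating patterns as ‘neighbours’ and predicts a missing rating as a similarity-weighted average.

# Toy user-based collaborative filtering sketch (hypothetical ratings, NumPy only).
import numpy as np

# Rows are users, columns are items; 0 means 'not yet rated'.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine_similarity(a, b):
    # How closely two users' rating patterns align (1.0 = identical direction).
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target_user = 0
sims = np.array([cosine_similarity(ratings[target_user], other) for other in ratings])

# Predict the target user's rating for item 2 as a similarity-weighted average
# over the other users who have rated that item.
item = 2
others = (ratings[:, item] > 0) & (np.arange(len(ratings)) != target_user)
prediction = np.average(ratings[others, item], weights=sims[others])
print(f"Predicted rating of user {target_user} for item {item}: {prediction:.2f}")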

The optimal functionality of recommendation engines depends upon information and raw data extracted from user experience and user ratings. When combined, these facilitate the building of user profiles to inform ecommerce targets and aims.

Multiple commonly accessed corporations and e-markets are highly visible and instantly recognisable on the online stage. Household names such as Amazon and Netflix are brands that immediately spring to mind. These platforms invest massively in state-of-the-art operations and big data collection to constantly improve, evolve and calibrate their commercial aims and marketing.

Computer architecture and system software are predicated on a myriad of sources and needs, and rely heavily on machine learning and deep learning. These two terms are often treated as interchangeable buzzwords, but deep learning is an evolution of machine learning. Using programmable neural networks, machines are able to make accurate decisions without human intervention. Within the machine learning environment, ‘nearest neighbour’ refers to an essential classification algorithm – not to be confused with its traditional, pre-computer-era meaning.

Working with the enabling protocols, technologies and real-world applications requires in-depth skills and knowledge across multiple disciplines. By no means an exhaustive list, familiarity with – and indeed specialist awareness of – the following terms is integral to optimising recommendation algorithms and the different types of recommendation models:

  • Matrix factorization. This refers to a family of collaborative filtering algorithms used in recommender systems. The user-item interaction matrix is decomposed into the product of two lower-dimensional rectangular matrices, whose entries capture the latent features that explain interactions between different users and items. These factors are then used to predict missing ratings and generate product recommendations (a minimal sketch follows this list).
  • Cold-start problem. This arises when a new user or item has little or no interaction history for the system to learn from, making useful recommendations difficult; it is a well-known issue in recommender systems and is frequently addressed through hybrid or content-based techniques.
  • Cosine similarity. A measure of the similarity between two non-zero vectors, often used to determine the ‘nearest’ users or items when generating recommendations.
  • Data sparsity. Many commercial recommender systems are built around large datasets, so the user-item matrices used in collaborative filtering are typically large and sparse. Such sparsity can present a challenge for achieving optimal recommendation performance.
  • Data science. IBM’s overview offers a comprehensive explanation of, and introduction to, data science, including its use of data mining and complex metadata.
  • Programming languages. Globally used programming languages include Scala, Perl, SQL, C++ and Python, with Python among the foremost. MovieLens, managed by GroupLens Research at the University of Minnesota, makes use of Python in its collaborative filtering: its system predicts film ratings based on user profiles, user ratings and overall user experience.
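
To show what matrix factorization looks like in practice, here is a minimal NumPy sketch that factorises a tiny, hypothetical user-item rating matrix with gradient descent; real systems use far larger matrices and more sophisticated training.

# Minimal matrix factorisation sketch for collaborative filtering (NumPy only, illustrative).
import numpy as np

rng = np.random.default_rng(0)

# A tiny user-item rating matrix; 0 marks an unknown rating to be predicted.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                           # number of latent features
P = rng.normal(scale=0.1, size=(n_users, k))    # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))    # item factor matrix

learning_rate, regularisation = 0.01, 0.02
known = R > 0                                   # train only on observed ratings

for _ in range(5000):
    error = np.where(known, R - P @ Q.T, 0.0)
    # Gradient steps with simple L2 regularisation.
    P += learning_rate * (error @ Q - regularisation * P)
    Q += learning_rate * (error.T @ P - regularisation * Q)

print(np.round(P @ Q.T, 2))   # reconstructed matrix, including predictions for the zeros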

What’s happening with social media?

In recent years, recommender systems have become integral to the continued growth of social media. Because the online community is interconnected across locations and social demographics, a higher volume of traffic is generated and triggered by recommendations, reinforced by likes and shares.

Online shopping has exploded as a result of the global pandemic. Platforms such as Facebook (now part of Meta) and Etsy have been joined by new e-businesses and ‘shop fronts’, all of which incorporate the latest recommender technology. Targeted effort centres on growing user profiles by analysing purchase history and the browsing of new items, with the aim of both attracting new users and retaining existing ones. These capabilities are made possible through the use of recommender systems.

Careers in artificial intelligence and computer science      

Professionally relevant associations such as the Institute of Electrical and Electronics Engineers (IEEE), and digital libraries such as that of the Association for Computing Machinery (ACM), exist to provide further knowledge and support to those working in this fascinating field.

Whichever specialisation appeals – computer science, software development, programming or AI-oriented solutions development – there are many pathways to a rewarding career. In-demand roles abound, and there is no shortage of successful, creative organisations in which to work, as evidenced in 50 Artificial Intelligence Companies to Watch in 2022.

Further your learning in this fast-paced field

If you’re looking for a university course offering up-to-date theoretical and practical knowledge with holistic, pedagogical and real-world expertise, then choose the University of York’s online MSc Computer Science with Artificial Intelligence course and take your next step towards a fulfilling and stimulating career.

What is data visualisation?

Data visualisation, sometimes abbreviated to dataviz, is a step in the data science process. Once data has been collected, processed, and modelled, it must be visualised for patterns, trends, and conclusions to be identified from large data sets.

Used interchangeably with the terms ‘information graphics’, ‘information visualisation’ and ‘statistical graphs’, data visualisation translates raw data into a visual element. This could be in a variety of ways, including charts, graphs, or maps.

The use of big data is on the rise, and many businesses across all sectors use data to drive efficient decision making in their operations. As the use of data continues to grow in popularity, so too does the need to be able to clearly communicate data findings to stakeholders across a company.

The importance of effective data visualisation

When data is presented to us in a spreadsheet or in its raw form, it can be hard to draw quick conclusions without spending time and patience on a deep dive into the numbers. However, when information is presented to us visually, we can quickly see trends and outliers.

A visual representation of data allows us to internalise it, and be able to understand the story that the numbers tell us. This is why data visualisation is important in business – the visual art communicates clearly, grabs our interest quickly, and tells us what we need to know instantly.

In order for data visualisation to work effectively, the data and the visual must work in tandem. Rather than choosing a stimulating visual which fails to convey the right message, or a plain graph which doesn’t show the full extent of the data findings, a balance must be found. 

Every data analysis is unique, and so a one-size-fits-all approach doesn’t work for data visualisation. Choosing the right visual method to communicate a particular dataset is important.

Choosing the right data visualisation method

There are many different types of data visualisation methods, so there is something to suit every type of data. While your knowledge of some of these methods may date back to your school days, there may be some you have yet to encounter.

There are also many different data visualisation tools available, with free options including Google Charts and Tableau Public.

Examples of data visualisation methods (a short chart-plotting sketch follows the list):

  • Charts: data is represented by symbols – such as bars in a bar chart, lines in a line chart, or slices in a pie chart. 
  • Tables: data is held in a table format within a database, consisting of columns and rows – this format is seen most commonly in Microsoft Excel sheets.
  • Graphs: diagrams which show the relation between two variable quantities which are measured along two axes (usually x-axis and y-axis) at right angles.
  • Maps: used most often to display location data; advancements in technology mean that maps are often digital and interactive, which offers more valuable context for the data.
  • Infographics: a visual representation of information, infographics can include a variety of elements – including images, icons, text and charts – which convey more than one key piece of information quickly and clearly.
  • Dashboards: graphical user interfaces which provide at-a-glance views of key performance indicators relevant to a particular objective or business process.
  • Scatter plots: represent values for two different numerical variables, using dots to indicate values for individual data points on a graph with a horizontal and a vertical axis.
  • Bubble charts: an extension of scatter plots which display three dimensions of data – two values through dot placement, and a third through dot size.
  • Histograms: a graphical representation which looks similar to a bar graph but condenses large data sets by grouping data points into logical ranges.
  • Heat maps: show the magnitude of a phenomenon through variation in colour, giving cues about how the phenomenon is clustered or varies over physical space.
  • Treemaps: use nested figures – typically rectangles – to display large amounts of hierarchical data.
  • Gantt charts: a type of bar chart which illustrates a project schedule, showing the dependency relationships between activities and current schedule status.
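
As a brief illustration of two of the methods above, the sketch below uses the open-source matplotlib library to draw a bar chart and a scatter plot from invented figures.

# Minimal data visualisation sketch using the open-source matplotlib library.
# All figures are invented for illustration.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
revenue = [120, 135, 150, 180]     # hypothetical revenue in £k
customers = [200, 230, 260, 310]   # hypothetical customer counts

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar chart: comparing a value across categories.
ax1.bar(quarters, revenue)
ax1.set_title("Revenue by quarter")
ax1.set_ylabel("Revenue (£k)")

# Scatter plot: the relationship between two numerical variables.
ax2.scatter(customers, revenue)
ax2.set_title("Revenue vs customers")
ax2.set_xlabel("Customers")
ax2.set_ylabel("Revenue (£k)")

plt.tight_layout()
plt.show()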

Data visualisation and the Covid-19 pandemic

The Covid-19 outbreak was an event unprecedented in our lifetimes. Because of the scale of the virus, its impact on our daily lives and the abruptness of the changes it brought, public health messages and evolving information about the situation were often communicated through data visualisation.

Being able to visually see the effects of Covid-19 enabled us to try to make sense of a situation we weren’t prepared for. 

As Eye Magazine outlines in the article ‘The pandemic that launched a thousand visualisations’: ‘Covid-19 has generated a growth in information design and an opportunity to compare different ways of visualising data’. 

The Johns Hopkins University (JHU) Covid-19 Dashboard included key statistics alongside a bubble map to indicate the spread of the virus. A diagram from the Imperial College London Covid-19 Response Team was influential in communicating the need to ‘flatten the curve’. Line graphs from the Financial Times created visual representations of how values such as case numbers by country changed from the start of the outbreak to the present day.

On top of this, data scientists within the NHS digital team built their capabilities in data and analytics, business intelligence, and data dashboards quickly to evaluate the rates of shielded patients, e-Referrals, and Covid-19 testing across the UK. 

The use of data visualisation during the pandemic is a case study which will likely hold a place in history. Not only did these visualisations capture new data as it emerged and translate it for the rest of the world, they will also live on as scientists continue to make sense of the outbreak and how to prevent it happening again.

Make your mark with data visualisation

If you have ambitions to become a data analyst who could play an important role in influencing decision making within a business, an online MSc Computer Science with Data Analytics will give you the skills you need to take a step into this exciting industry.

This University of York Masters programme is studied part-time around your current commitments, and you’ll gain the knowledge you need to succeed. Skilled data analytics professionals are in high demand as big data continues to boom, and we’ll prepare you for a successful future.

What is computer vision?

Research has shown that 84% of UK adults own a smartphone. As a result, taking a photo or recording a video and sharing it with friends has never been easier. Whether sharing directly with friends on the popular messaging app WhatsApp, or uploading to booming social media platforms such as Instagram, TikTok or YouTube, the digital world is more visual than ever before.

Internet algorithms index and search text with ease. When you use Google to search for something, chances are the results are fairly accurate or answer your question. However, images and videos aren’t indexed or searchable in the same way. 

When uploading an image or video, the owner has the option to add meta descriptions. This is a text string which isn’t visible on screen but which tells algorithms what is in that particular piece of media. However, not all rich media has associated meta descriptions and they aren’t always accurate.

Computer vision is the field of study focused on the problem of making computers ‘see’: it develops methods that reproduce the capability of human vision, and aims to enable computers to understand the content of digital images. It is a multidisciplinary field encompassing artificial intelligence, machine learning, statistical methods, and other engineering and computer science fields.

How computer vision applications operate

Many computer vision applications involve trying to identify and classify objects from image data. They do this using the following methods to answer certain questions.

  • Object classification: What broad category of object is in this photograph?
  • Object identification: Which type of a given object is in this photograph?
  • Object verification: Is the object in the photograph?
  • Object detection: Where are the objects in the photograph?
  • Object landmark detection: What are the key points for the object in the photograph?
  • Object segmentation: What pixels belong to the object in the image?
  • Object recognition: What objects are in this photograph and where are they?

Other methods of analysis used in computer vision include the following (a brief OpenCV sketch follows this list):

  • video motion analysis to estimate the velocity of objects in a video or the camera itself;
  • image segmentation where algorithms partition images into multiple sets of views;
  • scene reconstruction which creates a 3D model of a scene inputted through image or video; and
  • image restoration where blurring is removed from photos using machine learning filters.
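
For a flavour of how such operations look in code, the sketch below uses the open-source OpenCV library (cv2) to run edge detection and a crude threshold-based segmentation on an image; the filename and threshold values are placeholders.

# Minimal computer vision sketch using the open-source OpenCV library (cv2).
# Assumes an image file named 'photo.jpg' exists in the working directory (placeholder name).
import cv2

image = cv2.imread("photo.jpg")                  # load the image as a BGR pixel array
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # reduce to a single channel of intensity values

# Edge detection: a classic low-level operation used in segmentation and object detection.
edges = cv2.Canny(grey, 100, 200)

# Simple thresholding: a crude form of image segmentation separating bright regions from dark ones.
_, segmented = cv2.threshold(grey, 127, 255, cv2.THRESH_BINARY)

cv2.imwrite("edges.png", edges)
cv2.imwrite("segmented.png", segmented)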

Why computer vision is difficult to solve

The early experiments of computer vision began in the 1950s. Since then it has spanned robotics and mobile robot navigation, military intelligence, human computer interaction, image retrieval in digital libraries, and the rendering of realistic scenes in computer graphics.

Despite decades of research, computer vision remains an unsolved problem. While some strides have been made, specialists are yet to reach the same level of success in computers as is innate in humans.

For fully-sighted humans, seeing and understanding what we’re looking at is effortless. Because of this ease, computer vision engineers originally believed that reproducing this behaviour within machines would also be a fairly simple problem to solve. That, it turns out, has not been the case.

While we know that human vision is simple for us, psychologists and biologists don’t yet have a complete understanding as to why and how it’s so simple. There is still a knowledge gap in being able to explain the complete workings of our eyes and the interpretation of what our eyes see within our brains. 

As humans, we are also able to interpret what we see under a variety of different conditions – different lighting, angles, and distances. With a range of variables, we can still reach the same conclusion and correctly identify an object. 

Without understanding the complexities of human vision as a whole, it’s difficult to replicate or adapt for success in computer vision.

Recent progress in computer vision

While the problem of computer vision doesn’t yet have an entire solution, progress has been made in the field due to innovations in artificial intelligence – particularly in deep learning and neural networks. 

As the amount of data generated every day continues to grow, so do the capabilities in computer vision. Visual data is booming, with over three billion images being shared online per day, and computer science advancements mean the computing power to analyse this data is now available. Computer vision algorithms and hardware have evolved in their complexity, resulting in higher accuracy rates for object identification.

Facial recognition in smartphones has become a key feature of unlocking our mobile devices in recent years, a success which is down to computer vision. 

Other problems which have been solved in this vast field also include:

  • optical character recognition (OCR) which allows software to read the text from within an image, PDF, or a handwritten scanned document
  • 3D model building, or photogrammetry, which may be a stepping stone to recognising objects from different angles
  • safety in autonomous vehicles, or self-driving cars, where lane line and object detection has been developed
  • revolutionising healthcare with image analysis features to detect symptoms in medical imaging and X-rays
  • augmented reality and mixed reality, which use object tracking in the real world to determine the location of a virtual object on the device’s display

The ultra-fast computing machines available today, along with quick and reliable internet connections and cloud networks, make the process of deciphering an image using computer vision much faster than when the field was first investigated. And with companies like Meta, Google, IBM and Microsoft sharing their artificial intelligence research through open sourcing, computer vision research and discoveries are likely to progress more quickly than in the past.

The computer vision and hardware market is expected to be worth $48.6 billion, making it a lucrative industry where the pace of change is accelerating.

Specialise in artificial intelligence

If you have an interest in computer vision, expanding your skills and knowledge in artificial intelligence is the place to start. With this grounding, you could be the key that solves many unanswered questions in computer vision – a field with potential for huge growth.

The University of York’s online MSc Computer Science with Artificial Intelligence will set you up for success. Study entirely online and part-time around your current commitments. Whether you already have experience in computer science or you’re looking to move your career into this exciting industry, this master’s degree is for you.

What is a franchise?

Franchises are a good option for people who want to be their own boss and run their own business, but lack the knowledge or resources to launch a new product or service on their own. Franchising is also a fairly financially safe way of being your own boss, as franchises have a much higher survival rate than new independent businesses and startups.

At its core, a franchise is a partnership between an individual (the franchisee) and an existing organisation (the franchisor).

There are three types of franchise systems:

  • Product: This is when a franchisor gives a franchisee permission to sell a product using their logo, trademark, and brand name.
  • Manufacturing: This is when a franchisor partners with a franchisee to manufacture and sell their products using their logo, trademark, and brand name.
  • Business: This is when a franchisor licences their brand to a franchisee and provides regulations around how the business operates and is managed.

How franchises work

A franchisor grants a franchisee the right to market and/or trade their products and services. When purchasing a franchise, the franchisee pays an initial fee to the corporation they’re going into partnership with, and will usually also pay regular royalties to cover the cost of initial and/or ongoing training, business support and marketing.

By paying for the organisation to manage these areas of the business, the franchisee can concentrate on the day-to-day running of their business. They also avoid the cost of organising these services in-house.

The franchisee agreement is a contract which governs the partnership between franchisee and franchisor. Within this, the partnership is tied in for a set period of time – generally between five and twenty years. Once the period of time is up, the contract tends to be renewable.

The history of franchises

The franchising model was created in the 1800s by Isaac Singer – inventor of the widely used Singer sewing machine. After the US Civil War in the 1860s, he was mass-producing his famous machines, but needed a system that would enable repairs and maintenance to cover the whole country. Initially, local merchants across the US were sold licences that permitted them to service the machines. This then grew to enable the merchants to sell the machines, too. The contract used was the earliest form of franchise agreement.

During the Second World War, companies like Coca-Cola and Pepsi looked to expand quickly and began franchising. As the 1950s and 1960s saw growth in population, economic output and social change, franchising grew in popularity in the UK, especially amongst food retailers such as the fast-food chains Wimpy, McDonald’s and KFC, and ice cream brands Lyons Maid and Mr Softee.

Today, franchising is a function of many established brands across multiple sectors, with franchise opportunities in food, pet grooming, homecare agencies, beauty salons, recruitment companies and many more.

How popular are franchises?

The British Franchise Association’s 2018 bfa NatWest Franchise Survey found that the franchise industry is growing more than ever before. At the time of the survey, there were 48,600 franchise units in the UK and 935 business format franchise systems – around double the number that existed twenty years earlier.

It’s been widely reported that Millennials are turning to self-employment at a faster rate than any previous generation, so it is no surprise that the survey shows franchising to be an attractive option for this group: 18% of franchisees were found to be under the age of 30 – a significant rise in recent years.

With 4 in 10 franchise systems operational from a home office, following the work from home orders issued throughout the recent Covid-19 pandemic, is it possible that this way of working could continue to thrive?

Are franchises a good investment?

The cost of owning a franchise can vary wildly, with the initial fee ranging from £1,000 to £500,000. On top of this, you also need to have budgeted for start-up costs, working capital, monthly rent, salaries, inventory, software and utilities.
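
As a rough illustration of how these costs add up, the short calculation below totals a hypothetical first-year budget; every figure is invented and will vary widely between franchises.

# Illustrative first-year franchise budget; every figure is hypothetical.
initial_fee = 25_000        # one-off fee paid to the franchisor (£)
startup_costs = 15_000      # fit-out, equipment and initial inventory (£)
working_capital = 20_000    # cash buffer for the first months of trading (£)
monthly_overheads = 4_000   # rent, salaries, software and utilities (£ per month)

first_year_outlay = initial_fee + startup_costs + working_capital + monthly_overheads * 12
print(f"Estimated first-year outlay: £{first_year_outlay:,}")   # £108,000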

The franchise industry contributes £17.2 billion per annum to UK GDP, and employs 710,000 people. The 2018 bfa NatWest Franchise Survey also found that franchises are a largely successful business model, with 93% of franchisees claiming profit. 60% have an annual turnover of more than £250,000.

To ensure franchise business success, a franchisee must first do their due diligence: making sure there is room for expansion in the territory they’ll be working in, understanding how much training and ongoing support will be offered, researching the success of other franchisees, and budgeting and planning for fee payments. With this knowledge, a solid business plan can be written, and a successful future awaits.

Prepare for success in business

If you have been considering starting your own franchise and are looking to increase your business acumen, the University of York’s 100% online MSc International Business Leadership and Management could equip you with the skills and knowledge you need to succeed. 

This online master’s degree will give you a thorough grounding of multiple areas of business, so you will be prepared to take your career to the next level – whether your ambitions lie in opening a franchise or progressing at an existing company.

With us you’ll develop an understanding of business strategy, operations management, finance, leading and managing people, marketing and sales. As you study part-time, you can continue to earn while you learn, applying the knowledge you gain to your existing role. You’ll connect with a global network of peers as you study alongside professionals from all over the world.

What is the difference between leadership and management?

In the past couple of decades, leadership skills have been under the magnifying glass as society and people’s expectations of work have changed. Even in April 2019, less than a year before Covid-19 impacted the world, Deloitte was highlighting the new challenges that leadership faced. Those at the top need to be inspiring leaders who can guide the business, while their management teams are hands-on in facilitating the processes that direct the business towards its organisational goals. The difference between leadership and management is that leaders tend to be big picture thinkers, while managers implement their leader’s vision in realistic and practical ways that result in measurable success.

Traditionally, leadership teams may have had very little interaction with employees, leaving that to line managers. However, as hierarchies have flattened, senior leaders have had to become more visible and available to teams in the working environment. While our expectations of leaders may be greater, there is still a gap between that expectation and the reality in most organisations. How leaders engage with stakeholders and progress with their leadership development is a hot topic that continues to evolve.

Deloitte’s 2019 Global Human Capital Trends report lists perennial leadership skills including the ability to manage operations, supervise teams, make decisions, prioritise investments, and manage the bottom line. It also recognises vital new management skills such as leading through ambiguity, managing increasing complexity, being tech-savvy, managing changing customer and talent demographics, and handling national and cultural differences. Some of these competencies can be taught, but others come only from experience.

The age of the outspoken CEO

As CEOs have felt the need to be more demonstrably involved with the day-to-day lives of team members, they have also felt the pressures of taking a stance on political, environmental, and cultural issues. 

Previously, there was a feeling that getting involved in current affairs could be detrimental to the reputation (and the share price) of companies. However, those companies and brands that have taken strong stances which authentically reflect their core values have increased their reach and traction with key audiences. An example of this was Oreo’s rainbow cookie image, posted on Facebook to support Pride. At the time, this was one of the most overt demonstrations of support for the LGBTQ+ community from a global corporation. Responses were both positive and negative, with a lot of debate erupting on social media. Despite the controversy, parent company Kraft set a precedent, and many other brands followed suit.

Patagonia is an American outdoor clothing company renowned for its environmental and political activism. The company has also led on family-friendly human resources policies, including paternity and maternity leave, as well as providing on-site childcare for parents. 

While many companies rely heavily on their social media teams to plan and strategise their messaging, some CEOs are becoming bolder in voicing their beliefs and goals. This includes Ryan Gellert, CEO of Patagonia, who has stated that there is “a special place in hell” for those corporations that claim to be going “all in” on climate change and yet do not back this up with their actions.

What is a servant leader?

Servant leadership has been popularised through agile working methods in which scrum masters support teams in organising themselves rather than telling them what to do. 

The servant leader is different to a traditional leader in senior management; they are seen to put their teams first and themselves second. It is a democratic leadership style that has similarities with transformational leadership. Leading by serving is a mentality that can be effectively adopted at all management levels.  

The circle of influence

With so much on the to-do list of today’s inspiring leaders, how do they manage to stay focused? Many issues that are important to CEOs straddle the line between personal development and professional development, so learning resources such as podcasts or books on topics like emotional intelligence all count as legitimate continued professional development (CPD). Great leaders are usually lifelong learners who are interested in constantly upping their game and remaining relevant.

The 1989 book The 7 Habits of Highly Effective People by Stephen Covey is still much read, quoted and referenced by leaders. People’s appetite for understanding what makes someone effective, and therefore successful, has also fuelled many articles on the habits of CEOs. Morning routines have proved a particularly popular point of discussion on LinkedIn, with high achievers and their waking times frequently cited. Tim Cook (CEO of Apple) apparently rises at 3:45am, Anna Wintour (Vogue editor) plays tennis at 5:45am and Howard Schultz (former Starbucks CEO) reportedly gets up at 4:30am.

Whether you wake before 6am or not, Stephen Covey’s Circle of Influence is almost certainly a contributing factor to the high productivity of many successful CEOs. Covey states that proactive people focus on what they can do and who they can influence in any given situation. This focus on their immediate circle of influence actually causes the circle to increase. Those who are reactive focus their energy on things which are beyond their control, which only acts to shrink their circle of influence.

Putting your energy into the things you can change is undoubtedly a route to productivity. It results in a sense of satisfaction in what you can achieve, rather than frustration in what you can’t. This seemingly simple approach can be applied to everything from problem-solving and decision-making to mentoring and staffing.

Learn how to be a leader, not just a manager

The landscape of international business is rapidly changing with new developments and challenges emerging every day. 

Whether you’re already in a management role or have set your sights on moving up through the ranks, the 100% online MSc International Business, Leadership and Management from the University of York is a first-class approach to improving both your knowledge and standing.

Where robotics and artificial intelligence meet

Long before the fully autonomous robots that appeared in the second half of the 20th century, human beings were fascinated by automata. From Da Vinci’s mechanical knight (designed around 1495) to Vaucanson’s “digesting duck” of 1739, and John Dee’s flying beetle of 1543 (which caused him to be charged with sorcery), the history of robotics stretches back well before 1941, when Isaac Asimov first coined the term in one of his short stories.

Robotics combines computer science and mechanical engineering to construct robots that are able to assist humans in various tasks. We’re used to seeing industrial robots in manufacturing, for example in the construction of cars. Robots have been, and continue to be, particularly useful in heavy industry, taking on processes that could cause humans injury or even death. Robotic arms, sometimes known as manipulators, were originally utilised in the handling of radioactive or biohazardous materials that would damage human tissues and organs on exposure.

ABB Robotics is one of the leading multinational companies that deals in service robotics for manufacturing. Robotics applications include welding, heavy lifting, assembly, painting and coating, bonding and sealing, drilling, polishing, and palletising and packaging. These are all heavy-duty tasks that require a variety of end effectors, but robotics technology has progressed considerably and can also be seen in healthcare carrying out sophisticated medical procedures.

During the global pandemic, advances in robotics for surgery proved invaluable, allowing surgeons to control procedures remotely from a safe distance. Now this technology is being developed further to allow surgeons to log in to operating theatres anywhere in the world, using remote control systems on a tablet or a laptop, and carry out surgery with the assistance of medical robotics on site. The technology was created by Nadine Hachach-Haram, who grew up in war-torn Lebanon, and there is little doubt it will be put to good use in locations affected by medical inequality, conflict, or both.

Mobile robots are also fairly common in various industries but we have yet to see them take over domestic chores in a way that perhaps the inventors of Roomba (the autonomous robotic vacuum cleaner) may have hoped they would. And yet, research has shown that people can tell what kind of personality a Neato Botvac has simply by the way that it moves. The study has provided interesting insights into human-robot interaction. Another study in 2017 demonstrated that anthropomorphic robots actually made people feel less lonely. People who work alongside collaborative robots, whether in the military or in manufacturing, also tend to express affection for their “cobots”.

Building on this affection that it seems people can and do develop for robots, Amazon has created Astro, a rolling droid that incorporates AI and is powered by Alexa, Amazon’s voice assistant. Astro also offers a detachable cup holder if your hands are too full to carry your coffee into the next room. Amazon has revolutionised automation with the use of Kiva robots in its warehouses and data science in its online commerce. So, it will be interesting to see if it can succeed where others have failed in making home robots popular.

What’s the difference between artificial intelligence and robotics?

While artificial intelligence refers only to the computer programs that process immense amounts of information in order to “think”, robotics refers to machines designed to carry out assistive tasks that don’t necessarily require intelligence. Artificial intelligence has given us neural networks built on machine learning, which mimic the neural pathways of the human brain. An obvious next step is to integrate these powerful developments with robotic systems to create intelligent robots.

Tesla’s self-driving cars contain neural networks that use autonomy algorithms to support the car’s real-time object detection, motion planning, and decision-making in complicated real-world situations. Similarly advanced architectures and programming are now being proposed for the Tesla Bot, a humanoid robot that Elon Musk says is designed to eliminate dangerous, repetitive, and boring tasks. Andrew Maynard, Associate Dean at the College of Global Futures, Arizona State University, urges caution with regard to “a future that, judging by Musk’s various endeavours, will be built on a set of underlying interconnected technologies that include sensors, actuators, energy and data infrastructures, systems integration and substantial advances in computer power.” He adds that “before technology can become superhuman, it first needs to be human – or at least be designed to thrive in a human-designed world.”

It’s an interesting perspective, and one that goes beyond the usual moral and ethical concerns of whether robots could be used for ill, a theme often explored in science fiction films like The Terminator, Chappie, and Robot and Frank. Science fiction has always provided inspiration for actual technologies but it also serves to reflect back some of our own moral conundrums. If, as Elon Musk has indicated, the Tesla Bot “has the potential to be a generalised substitute for human labour over time,” what does this mean for artificially intelligent robotics? Would we essentially be enslaving robots? And, of course, this is built on the belief that the foundation of the economy will continue to be labour.

Tesla Bot hasn’t yet reached prototype stage, so whether Musk’s vision becomes a reality remains to be seen. The kinematics required to support bipedal humanoid robots, though, are extremely complex. One alternative approach to bipedal robot design is LEONARDO (LEgs ONboARD drOne), essentially a quadcopter with legs. Only 76cm tall and with an extremely light structure, even with an exoskeleton, LEONARDO looks far from the humanoids that the robotics industry may believe will become embedded in our everyday lives. Yet its multimodal locomotion system solves (or perhaps simply avoids) some of the issues real-world bipedal robots experience related to weight and centre of gravity. Mechatronics is an interdisciplinary branch of engineering that concerns itself with these kinds of issues. Some would say that mechatronics is where control systems, computers, electronic systems, and mechanical systems overlap. Others would say it’s simply a buzzword, interchangeable with automation, robotics, and electromechanical engineering.

Build your knowledge in artificial intelligence with an MSc

Artificial intelligence is vital to take robotics research to the next level and enable robots to go beyond the relatively simple tasks they can complete on their own, as well as the more complex tasks they support humans with. Research in areas such as machine learning, distributed artificial intelligence, computer vision, and human-machine interaction will all be key to the future of robotics. 

Inspired to discover more about how you can specialise in AI? Study a 100% online and part-time MSc Computer Science with Artificial Intelligence with the University of York.

 

The use of statistics in data science

Statistics is the study of data. It’s considered a mathematical science and it involves the collecting, organising, and analysing of data with the intent of deriving meaning, which can then be actioned. Our everyday usage of the internet and apps across our phones, laptops, and fitness trackers has created an explosion of information that can be grouped into data sets and offer insights through statistical analysis. Add to this an estimated 5.6 billion searches a day on Google alone, and big data analytics becomes big business.

Although we may hear the phrase data analytics more than we hear reference to statistics nowadays, for data scientists, data analysis is underpinned by knowledge of statistical methods. Machine learning removes the need for much of the manual statistical methodology that statisticians would usually use. However, a foundational understanding of some statistical basics still supports strategy in exercises like hypothesis testing. Statistical methods contribute to technologies like data mining, speech recognition, vision and image analysis, data compression, artificial intelligence, and network and traffic modelling.

When analysing data, probability is one of the most widely used statistical concepts. Being able to predict the likelihood of something happening is important in numerous scenarios, from understanding how a self-driving car should react in a collision to recognising the signs of an upcoming stock market crash. A common use of probability in predictive modelling is forecasting the weather, a practice which has been refined since it first arose in the 19th century. For data-driven companies like Spotify or Netflix, probability can help predict what kind of music you might like to listen to or what film you might enjoy watching next.
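
As an illustration of how simple probability estimates can be in practice, here is a minimal Python sketch of a frequency-based estimate of the chance of rain given high humidity. All the numbers are invented purely for illustration.

```python
# Hypothetical historical weather records (invented figures).
rainy_days_when_humid = 63   # humid days that turned out rainy
total_humid_days = 90        # all humid days observed

# Empirical (frequentist) estimate of P(rain | high humidity)
p_rain_given_humidity = rainy_days_when_humid / total_humid_days
print(f"Estimated P(rain | high humidity) = {p_rain_given_humidity:.2f}")  # 0.70
```

Real forecasting models are far more sophisticated, but they ultimately rest on the same idea of estimating probabilities from observed data.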

Aside from our preferences in entertainment, research has recently been focused on the ability to predict seemingly unpredictable events such as a pandemic, an earthquake, or an asteroid strike. Because of their rarity, these events have historically been difficult to study through the lens of statistical inference – the sample size can be so small that the variance is pushed towards infinity. However, “black swan theory” could help us navigate unstable conditions in sectors like finance, insurance, healthcare, or agriculture, by knowing when a rare but high-impact event is likely to occur. 

The black swan theory was developed by Nassim Nicholas Taleb, who is a critic of the widespread use of the normal distribution model in financial engineering. In finance, the coefficient of variation is often used in investment to assess volatility and risk, which may appeal more to someone looking out for a black swan. In computer science, though, normal distributions, standard deviation, and z-scores can all be useful to derive meaning and support predictions.
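
To make the z-score idea concrete, here is a minimal Python sketch that standardises a small sample of invented daily returns; under a normal-distribution assumption, values with a large absolute z-score are often flagged as unusual.

```python
import numpy as np

# Invented daily returns, purely for illustration.
returns = np.array([0.012, -0.004, 0.020, 0.003, -0.015, 0.007, 0.001])

mean = returns.mean()
std = returns.std(ddof=1)            # sample standard deviation
z_scores = (returns - mean) / std    # distance from the mean in standard deviations

print(np.round(z_scores, 2))         # |z| > 2 is a common rule of thumb for outliers
```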

Some computer science-based methods that overlap with elements of statistical principles include:

  • Time series, ARMA (auto-regressive moving average) processes, correlograms
  • Survival models
  • Markov processes
  • Spatial and cluster processes
  • Bayesian statistics
  • Some statistical distributions
  • Goodness-of-fit techniques
  • Experimental design
  • Analysis of variance (ANOVA)
  • A/B and multivariate testing
  • Random variables
  • Simulation using Markov Chain Monte-Carlo methods
  • Imputation techniques
  • Cross validation
  • Rank statistics, percentiles, outliers detection
  • Sampling
  • Statistical significance

While statisticians tend to incorporate theory from the outset into solving problems of uncertainty, computer scientists tend to focus on the acquisition of data to solve real-world problems. 

As an example, descriptive statistics aims to quantitatively describe or summarise a sample rather than use the data to learn about the population that the data sample represents. A computer scientist may perhaps find this approach to be reductive, but, at the same time, could learn from the clearer consideration of objectives. Equally, a statistician’s experience of working on regression and classification could potentially inform the creation of neural networks. Both statisticians and computer scientists can benefit from working together in order to get the most out of their complementary skills.
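
As a rough sketch of what that quantitative description looks like in practice, the following Python snippet summarises a small, invented sample without making any claims about the wider population it was drawn from.

```python
import numpy as np

# An invented sample, purely for illustration.
sample = np.array([23, 29, 31, 35, 35, 40, 44, 51, 58, 62])

summary = {
    "count": sample.size,
    "mean": sample.mean(),
    "median": np.median(sample),
    "std": sample.std(ddof=1),
    "25%": np.percentile(sample, 25),
    "75%": np.percentile(sample, 75),
}
for name, value in summary.items():
    print(f"{name:>6}: {float(value):.2f}")
```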

In creating data visualisations, statistical modelling, such as regression modelling, is often used. Regression analysis is typically used to determine the strength of predictors, forecast trends, and forecast an effect, all of which can be represented in graphs. Simple linear regression relates two variables (X and Y) with a straight line. Nonlinear regression relates the two variables through a nonlinear relationship, represented by a curve. In data analysis, scatter plots are often used to show various forms of regression. Matplotlib allows you to build scatter plots using Python; Plotly allows the construction of an interactive version.
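
As a minimal illustration of this, the Python sketch below fits a straight line to a small set of invented figures with NumPy and draws the scatter plot and fitted line with Matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented figures: advertising spend vs. sales, purely for illustration.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([2.1, 2.9, 3.8, 4.2, 5.1, 5.8, 6.9, 7.4])

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares straight line

plt.scatter(x, y, label="observations")
plt.plot(x, slope * x + intercept, label=f"fit: y = {slope:.2f}x + {intercept:.2f}")
plt.xlabel("Ad spend")
plt.ylabel("Sales")
plt.legend()
plt.show()
```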

Traditionally, statistical analysis has been key in helping us understand demographics through a census – a survey through which citizens of a country offer up information about themselves and their households. From the United Kingdom, where we have the Office for National Statistics, to New Zealand, where the equivalent public service department is called StatsNZ, these official statistics allow governments to calculate figures such as gross domestic product (GDP). In contrast, Bhutan famously measures Gross National Happiness (GNH).

This mass data collection – mandatory for every household in the UK, and with roots going back to the Domesday Book in England – could be said to hold the origins of statistics as a scientific field. But it wasn’t until the early 19th century that the census was really used statistically to offer insights into populations, economies, and moral actions. It’s why statisticians still refer to an aggregate of objects, events or observations as the population and use formulae like the population mean, even when the dataset in question doesn’t represent the citizens of a country.

Coronavirus has been consistently monitored through statistics since the pandemic began in early 2020. The chi-square test is a statistical method often used in understanding disease because it allows the comparison of two variables in a contingency table to see if they are related. This can show which existing health issues could cause a more life-threatening case of Covid-19, for example.
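
A minimal sketch of such a test in Python, using SciPy and an invented 2x2 contingency table (these are not real patient figures), might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: with / without a pre-existing condition; columns: severe / mild outcome.
# All counts are invented, purely for illustration.
contingency = np.array([
    [30, 70],
    [15, 185],
])

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value suggests the two variables are related.
```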

Observational studies have also been used to understand the effectiveness of vaccines six months after a second dose. These studies have shown that effectiveness wanes. Even more ground-breaking initiatives are seeking to use the technology that most of us hold in our hands every day to support data analysis. The project EAR asks members of the public to use their mobile phones to record the sound of their coughs, breathing, and voices for analysis. Listening to the breath and coughs to catch an indication of illness is not new – it’s what doctors have practised with stethoscopes for decades. What is new is the use of machine learning and artificial intelligence to pick up on what the human ear might miss. There are currently not enough large data sets of the sort needed to train machine learning algorithms for this project. However, as the number of audio files increases, there will hopefully be valuable data and statistical information to share with the world. 

A career that’s more than just a statistic

Studying data science could make you one of the most in-demand specialists in the job market. Data scientists and data analysts have skills that are consistently valued across different sectors, whether you desire a career purely in tech or want to work in finance, healthcare, climate change research, or space exploration.

Take the first step to upgrading your career options and find out more about starting a part-time, 100% online MSc Computer Science with Data Analytics today.

What is reinforcement learning?

Reinforcement learning (RL) is a subset of machine learning that allows an AI-driven system (sometimes referred to as an agent) to learn through trial and error using feedback from its actions. This feedback is either negative or positive, signalled as punishment or reward, with the aim of maximising the reward function. RL learns from its mistakes and offers artificial intelligence that mimics natural intelligence as closely as is currently possible.

In terms of learning methods, RL is similar to supervised learning only in that it uses a mapping between input and output. In supervised learning, however, the feedback given to the agent contains the correct set of actions to follow; in RL there is no such answer key, and the agent must decide for itself how to perform the task correctly. Compared with unsupervised learning, RL has different goals. The goal of unsupervised learning is to find similarities or differences between data points, whereas RL’s goal is to find the most suitable action model to maximise total cumulative reward for the agent. With no training dataset, the RL problem is solved by the agent’s own actions with input from the environment.

RL methods like Monte Carlo, state–action–reward–state–action (SARSA), and Q-learning offer a more dynamic approach than traditional machine learning, and so are breaking new ground in the field.

There are three types of RL implementations: 

  • Policy-based RL uses a policy or deterministic strategy that maximises cumulative reward
  • Value-based RL tries to maximise an arbitrary value function
  • Model-based RL creates a virtual model for a certain environment and the agent learns to perform within those constraints

How does RL work?

Describing fully how reinforcement learning works in one article is no easy task. To get a good grounding in the subject, the book Reinforcement Learning: An Introduction by Andrew Barto and Richard S. Sutton is a good resource.

The best way to understand reinforcement learning is through video games, which follow a reward and punishment mechanism. Because of this, classic Atari games have been used as a test bed for reinforcement learning algorithms. In a game, you play a character who is the agent that exists within a particular environment. The scenarios they encounter are analogous to a state. Your character or agent reacts by performing an action, which takes them from one state to a new state. After this transition, they may receive a reward or punishment. The policy is the strategy which dictates the actions the agent takes as a function of the agent’s state as well as the environment.
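
The loop below is a deliberately tiny, self-contained sketch of that agent-environment cycle – a toy corridor rather than a real game, with a purely random policy – just to show states, actions and rewards in code.

```python
import random

GOAL = 5          # reaching position 5 ends the episode with a reward
state = 0         # starting state
total_reward = 0

for step in range(20):
    action = random.choice([-1, +1])         # policy: pick a move at random
    next_state = max(0, state + action)      # environment transition
    reward = 1 if next_state == GOAL else 0  # reward signal
    total_reward += reward
    state = next_state
    if state == GOAL:                        # episode ends at the goal
        break

print(f"Finished after {step + 1} steps, total reward = {total_reward}")
```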

To build an optimal policy, the RL agent faces the dilemma of whether to explore new states or to maximise its current reward. This is known as the exploration versus exploitation trade-off. The aim is not to chase immediate reward, but to optimise for maximum cumulative reward over the length of training. Time is also important – the reward doesn’t just depend on the current state, but on the entire history of states. Policy iteration is an algorithm that helps find the optimal policy for given states and actions.
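
One common way of handling this trade-off is an epsilon-greedy rule: most of the time the agent exploits the action with the highest estimated value, but with a small probability it explores a random one. A minimal sketch, where the value estimates are placeholders:

```python
import random

def epsilon_greedy(q_values, epsilon=0.1):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

print(epsilon_greedy([0.2, 0.5, 0.1]))   # usually returns 1, occasionally 0 or 2
```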

The environment in a reinforcement learning algorithm is commonly expressed as a Markov decision process (MDP), and almost all RL problems are formalised using MDPs. SARSA is an algorithm for learning a Markov decision process policy, and is a slight variation of the popular Q-learning algorithm. SARSA and Q-learning are the two most commonly used RL algorithms.
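
To show how close the two algorithms are, here is a minimal sketch of their tabular update rules, with Q stored as a dictionary of dictionaries (Q[state][action]); alpha is the learning rate and gamma the discount factor, and the code is illustrative rather than any particular library’s API.

```python
def q_learning_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.99):
    # Off-policy: bootstrap from the best available action in the next state.
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, reward, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: bootstrap from the action the current policy actually takes next.
    Q[s][a] += alpha * (reward + gamma * Q[s_next][a_next] - Q[s][a])
```

The only difference is the bootstrap target: Q-learning looks at the best next action, SARSA at the next action actually chosen.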

Some other frequently used methods include Actor-Critic, which is a Temporal Difference version of Policy Gradient methods. It’s similar to an algorithm called REINFORCE with baseline. The Bellman equation is one of the central elements of many reinforcement learning algorithms. It usually refers to the dynamic programming equation associated with discrete-time optimisation problems.
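
For reference, the Bellman optimality equation for the state-value function is commonly written as follows, where R is the reward, P the transition probabilities and γ the discount factor:

```latex
V^{*}(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big]
```

In words: the value of a state is the best immediate reward available plus the discounted value of wherever that action is expected to lead.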

The Asynchronous Advantage Actor-Critic (A3C) algorithm is one of the newest developments in the field of deep reinforcement learning. Unlike other popular deep RL algorithms such as Deep Q-Learning (DQN), which use a single agent and a single environment, A3C uses multiple agents, each with its own network parameters and copy of the environment. The agents interact with their environments asynchronously, learning with every interaction and contributing to the total knowledge of a global network. The global network also gives agents more diversified training data. This mimics the real-life way humans gain knowledge from the experiences of others, allowing the entire global network to benefit.

Does RL need data?

In RL, data is accumulated through the agent’s own trial and error rather than supplied up front. A prepared dataset of the kind used as input in supervised or unsupervised machine learning is not required.

Temporal difference (TD) learning is a class of model-free RL methods that learn via bootstrapping from a current estimate of the value function. The name “temporal difference” comes from the fact that it uses changes – or differences – in predictions over successive time steps to push the learning process forward. At any given time step, the prediction is updated, bringing it closer to the prediction of the same quantity at the next time step. Often used to predict the total amount of future reward, TD learning is a combination of Monte Carlo ideas and Dynamic Programming. However, whereas learning takes place at the end of any Monte Carlo method, learning takes place after each interaction in TD.
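
A minimal sketch of a tabular TD(0) update, where V is a dictionary of state-value estimates, alpha the learning rate and gamma the discount factor (all illustrative):

```python
def td0_update(V, s, reward, s_next, alpha=0.1, gamma=0.99):
    td_target = reward + gamma * V[s_next]   # bootstrapped estimate of the return
    td_error = td_target - V[s]              # the "temporal difference"
    V[s] += alpha * td_error
    return td_error
```

The update happens after every single interaction, which is exactly what distinguishes TD from Monte Carlo methods that wait until the end of an episode.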

TD Gammon is a computer backgammon program that was developed in 1992 by Gerald Tesauro at IBM’s Thomas J. Watson Research Center. It used RL and, specifically, a non-linear form of the TD algorithm to train computers to play backgammon to the level of grandmasters. It was an instrumental step in teaching machines how to play complex games.

Monte Carlo methods represent a broad class of algorithms that rely on repeated random sampling in order to gain numerical results that point to probability. Monte Carlo methods can be used to calculate the probability of:

  • an opponent’s move in a game like chess
  • a weather event occurring in the future
  • the chances of a car crash under specific conditions

Named after the famous casino in Monaco, Monte Carlo methods first arose within the field of particle physics and contributed to the development of the first computers. Monte Carlo simulations allow people to account for risk in quantitative analysis and decision making. It’s a technique used in a wide variety of fields including finance, project management, manufacturing, engineering, research and development, insurance, transportation, and the environment.

In machine learning or robotics, Monte Carlo methods provide a basis for estimating the likelihood of outcomes in artificial intelligence problems using simulation. The bootstrap method is built upon Monte Carlo methods, and is a resampling technique for estimating a quantity, such as the accuracy of a model on a limited dataset.
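
As a minimal sketch of that idea, the snippet below bootstraps a small set of invented per-fold accuracy scores – resampling with replacement many times – to estimate how uncertain the mean accuracy is.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
scores = np.array([0.81, 0.78, 0.84, 0.79, 0.83, 0.80])  # hypothetical accuracies

# Resample with replacement and record the mean of each resample.
boot_means = [
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(10_000)
]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"Mean accuracy {scores.mean():.3f}, 95% bootstrap CI ({low:.3f}, {high:.3f})")
```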

Applications of RL

RL is the method used by DeepMind to teach artificial intelligence how to play complex games like chess, Go, and shogi (Japanese chess). It was used in the building of AlphaGo, the first computer program to beat a professional human Go player. From this grew the deep neural network agent AlphaZero, which taught itself to play chess well enough to beat the chess engine Stockfish in just four hours.

AlphaZero has only two parts: a neural network, and an algorithm called Monte Carlo Tree Search. Compare this with the brute force computing power of Deep Blue, which, even in 1997 when it beat world chess champion Garry Kasparov, allowed the consideration of 200 million possible chess positions per second. The representations of deep neural networks like those used by AlphaZero, however, are opaque, so our understanding of their decisions is restricted. The paper Acquisition of Chess Knowledge in AlphaZero explores this conundrum.

Deep RL is also being proposed for unmanned spacecraft navigating new environments, whether that’s Mars or the Moon. MarsExplorer is an OpenAI Gym-compatible environment developed by a group of Greek scientists. The team trained four deep reinforcement learning algorithms on the MarsExplorer environment – A3C, Rainbow, PPO, and SAC – with PPO performing best. MarsExplorer is the first OpenAI Gym-compatible reinforcement learning framework optimised for the exploration of unknown terrain.

Reinforcement learning is also used in self-driving cars, in trading and finance to predict stock prices, and in healthcare for diagnosing rare diseases.

Deepen your learning with a Masters

These complex learning systems created by reinforcement learning are just one facet of the fascinating and ever-expanding world of artificial intelligence. Studying a Masters degree can allow you to contribute to this field, which offers numerous possibilities and solutions to societal problems and the challenges of the future. 

The University of York offers a 100% online MSc Computer Science with Artificial Intelligence to expand your learning, and your career progression.

Corporate culture: the building blocks of a business

It’s not always easy to put a finger on what makes a workplace feel the way it does. Is it rooted in the work that takes place there? The other employees? The work environment itself?

In fact, it’s all of this – and more. The unique culture of a business is the golden thread that runs through every aspect of its operations.

Increasingly, people are seeking to address their work-life balance – acutely evidenced by what has been dubbed “The Great Resignation” witnessed throughout the pandemic. An organisation’s culture is often at the heart of decisions to leave or join an employer. With many now viewing work as more than a paycheck – in a world where going solo is seen as less of a risk – businesses can’t afford to ignore substandard cultures. To retain talented individuals, they need to create environments in which people can thrive.

What is corporate culture?

Corporate culture describes and governs the ways in which a business operates. It refers to its personality and character: shared values, beliefs and assumptions about how people should act; how decisions should be made; and how work activities should be carried out. Culture denotes the particular ideas and customs that make each organisation unique. For example, its leadership, job roles, company values, workspace, pay, initiatives and perks, rewards and recognition. Cumulatively, it should be the foundation upon which people can work to the best of their ability.

The core elements that make up a company’s culture include:

  • Leadership
  • Vision and values
  • Recognition
  • Operations
  • Learning and development
  • Environment
  • Communication
  • Pay and benefits
  • Wellbeing

From small start-ups to established, global brands, all businesses have a workplace culture – and they vary dramatically. The various types include conventional, clan, progressive, market, adhocracy, authority organisation, and more.

Take the retail brand Zappos, a company where creating an inclusive culture is the top priority. Their core value lies in celebrating every employee’s diversity and individuality – and they’re famous for it. However, this approach is starkly different to that taken by countless other businesses.

Why is corporate culture important?

Experts in the company culture space, Liberty Mind, make a compelling case for why improving corporate culture should be prioritised:

  • 88% of employees believe a distinct workplace culture is important to company success
  • Companies with strong cultures saw a 4x increase in revenue growth
  • 58% of people say that they trust strangers more than their own boss
  • 78% of executives included culture among the top 5 things that add value to their company
  • Job turnover in organisations with positive cultures is 13.9%, whereas in organisations with poor cultures it’s 48.4%
  • Only 54% of employees recommend their company as a good place to work
  • More than 87% of the global workforce is not engaged, yet engaged workplaces are 21% more profitable

However, it seems many organisations are struggling to get it right; 87% of organisations cite culture and employee engagement as one of their top challenges.

Strengths and weaknesses of culture

Clearly, culture matters to employees – and therefore has a direct impact on a business. Workplaces with poor or non-existent cultures are more likely to encounter low morale, brand reputation issues and decreased productivity. As people leave their roles, poor employee retention necessitates costs associated with recruitment, training and – at least in the short-term – an increased workload for already-beleaguered employees. Worse still, workplaces with toxic cultures breed resentment, fear, frustration and poor mental health among their employees. If staff do not simply leave, as many will, they are likely to take more sick days and be less productive.

In contrast, companies with nurturing, strong cultures can expect to reap the rewards. They are likely to feature good teamwork: teams working towards shared goals are more driven and productive, with the ability to resolve issues more quickly. Brand reputation will soar as employees – who have belief in company leaders and their shared values – spread the word, acting as brand ambassadors. These businesses are in a better position to weather change, attract and retain high quality applicants, and to take risks and make decisions. Together, these positive, culture-building aspects are likely to improve a company’s bottom line.

Improving company culture: more than a mission statement

By first assessing the current cultural status, a company is in a stronger position to identify – and design a roadmap to achieve – its desired culture. With input from stakeholders, leaders should examine the current culture, including core values, strengths, and organisational impact. Harvard Business Review designed a tool to understand an organisation’s cultural profile, supporting this investigative work. It guides leaders to examine cultural styles and types of cultures, the prominence of company culture, and demographic aspects of how culture operates.

Next, leaders must understand how strategy and business environment impact the culture. Are there any current or future external conditions or strategic decisions that will influence cultural styles? If so, how can the styles respond? Any robust culture target will need to support, or respond to, future changes.

It’s critical to ground the target in business realities. Leaders should frame any culture targets in response to real-world problems and value-adding solutions – far more practical and effective than selling them as shiny, culture change initiatives.

Leaders must be prepared to drive cultural change through every area of the business. Job site Indeed suggests further actions to support a company-wide cultural improvement:

  • Hire the right people
  • Appoint a cultural ambassador
  • Set specific, achievable goals with clear metrics
  • Encourage open communication
  • Reward success and offer incentives
  • Organise meaningful team-building and social events

Boosting employee engagement through company culture is not about making people happy. Instead, leaders should focus on making them feel connected to the business and motivated to help achieve its goals, even during times of adversity.

Master the international business environment

How strong is your company’s organisational culture? Is a culture change overdue?

Advance your practical knowledge and gain a solid theoretical understanding of the global business environment with the University of York’s online MSc International Business Leadership and Management programme. As well as the challenges associated with global trade, your studies will encompass marketing, sales, and a detailed overview of relevant management disciplines, including human resource management. Develop the skills to succeed in the fast-paced world of business and learn in a flexible way that suits you, supported by experts.