Software systems: an explainer

Software is what most people interact with when they use a computer or mobile device. Written in programming code, software is the set of instructions that enables computer hardware, systems, programs, and applications to operate and perform tasks. 

Types of software 

There are several different types of software, all serving different functions. The software systems used today can largely be categorised into a few key areas:

  • system software
  • application software
  • utility software
  • programming software.

System software

System software manages a computer system’s resources. It works in partnership with computer hardware and other software – such as applications – to provide the end-user interface.

Examples of system software include:

Operating systems

An operating system, or OS, is one of the most important pieces of software on a computer, managing all of the other programs installed on the device. Popular examples include Microsoft Windows, macOS, Android, and Linux.

Application software typically uses an application programming interface (API) to interact with the OS. Users, meanwhile, interact with the operating system itself through one of two user interfaces:

  • a command-line interface (CLI), such as MS-DOS
  • more commonly, a graphical user interface (GUI), such as Windows

Device drivers

Driver software manages any device or peripherals that connect to a computer. For example, a printer connected to a computer will need an appropriate driver in order to work as expected. 

Other devices that require drivers include:

  • computer mice
  • keyboards
  • speakers and headphones
  • modems and routers
  • sound cards
  • USB storage devices.


Firmware

Firmware is an essential piece of software because it ensures that hardware works as it’s intended, and manages some of the most basic functions of a machine. Firmware is typically embedded and installed directly into a piece of hardware by its manufacturer. Once a device is switched on, firmware is what boots up the computer by initialising its hardware components and loading its operating system. 

Application software

Application software is responsible for performing specific tasks and functions for users. Rather than managing how the computer or device operates, this type of software is designed and developed according to the specifications and needs of people using the machines.

Examples of application software include:

Web browsers

Web browsers, such as Google Chrome or Apple’s Safari, are software applications that allow people to access and use the web. Anyone accessing a standard website uses a web browser to do so.

Word processors

Word processors, used to write and edit text, are among the oldest computer applications. Examples include Microsoft Word and Google Docs.

Multimedia software

Multimedia applications are used to view, create, edit, and manage media content. This includes:

  • images
  • videos
  • audio.

Windows Media Player, iTunes, and Adobe Photoshop are all examples of multimedia software. When referring specifically to graphics content, such as videos, images, infographics, and so on, the term graphics software is also applicable.

Communication software

Any computer program that’s used to communicate with other people – including through text, audio, and video – is an example of communication software. This includes:

  • Microsoft Outlook and Teams
  • Skype
  • Zoom.

Utility software

Utility software is often considered a subtype of system software, but its focus is specifically on helping to configure, maintain, or optimise a computer’s hardware and software architecture.

Examples of utility software include:

Security software

Security software, such as antivirus software, protects a device’s hardware and software from viruses and other threats. Ideally, it monitors a computer in real time, scanning existing programs, incoming files and downloads, and shielding against attacks by cybercriminals.

File compression software

Compression software helps condense files and other data so it takes up less storage space on a device. These tools also ensure condensed data can be safely managed and restored to its original format when required.
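
As a concrete (purely illustrative) example, Python’s standard-library gzip module can round-trip data through compression, showing both the space saving and the lossless restore described above:

```python
import gzip

# Repetitive data compresses well; gzip shrinks it and then restores it
# losslessly, which is the guarantee compression tools depend on.
text = b"quarterly report data " * 100

compressed = gzip.compress(text)
restored = gzip.decompress(compressed)

assert len(compressed) < len(text)   # takes up less storage space
assert restored == text              # safely restored to its original form
```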


Middleware

Middleware straddles system and application software, and effectively enables different computer programs to interact with one another. According to IBM, middleware “enables developers to build applications without having to create a custom integration every time they need to connect to application components (services or microservices), data sources, computing resources or devices.”

Programming software

Programming software is what programmers and developers use to write code and develop new software programs.

Examples of programming software include:

Programming language translators

Computing or programming language translators can translate one form of code into other programming languages. There are three main types:

  1. Compilers, which convert whole programs written in programming languages, such as Java or C++, into machine language.
  2. Assemblers, which convert code in assembly languages into machine language.
  3. Interpreters, which execute instructions written in a programming or scripting language directly, rather than requiring them to be compiled into machine code first. This enables rapid program development, ease of use, portability, and safety.
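
To make the compiler/interpreter distinction concrete, here is a toy sketch (not any real language tool) of an interpreter: it executes source text directly, token by token, with no separate translation step to machine code:

```python
# A toy interpreter for a tiny arithmetic language. It reads tokens and
# executes them immediately, left to right (no operator precedence),
# in contrast to a compiler, which would first translate the whole
# program into machine code before anything runs.
def interpret(source: str) -> float:
    tokens = source.split()
    result = float(tokens[0])
    for i in range(1, len(tokens), 2):
        op, operand = tokens[i], float(tokens[i + 1])
        if op == "+":
            result += operand
        elif op == "-":
            result -= operand
        elif op == "*":
            result *= operand
        else:
            raise ValueError(f"unknown operator: {op}")
    return result

print(interpret("2 + 3 * 4"))   # evaluated left to right: 20.0
```

Real interpreters parse into syntax trees rather than scanning tokens, but the execute-as-you-go model is the same.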


Debugging tools

Debugging tools are used by programmers to test for – and then resolve – errors and issues within programs. 

Design software and start your career in computer science

Deepen your understanding of software systems, as well as computer science more broadly, with the 100% online MSc Computer Science from the University of York. This Masters degree includes a module in software engineering, allowing you to focus on designing and building software systems. You will look at principles and patterns in software design, where to apply them, and how they inform design choices, and learn techniques for ensuring the systems you build behave correctly.

You will also study other key areas required for software development, such as advanced programming, computer architecture and operating systems, algorithms and data structures, and artificial intelligence and machine learning.

This flexible degree has been designed for working professionals and graduates who may not currently have a computer science background but want to launch a career in the cutting-edge field. You can learn part-time around your current work and home commitments, and because the degree is taught exclusively online, you can study whenever and wherever you want. 

For further information about tuition fees, English language requirements, coursework, and other courses available with the Department of Computer Science at the University of York, please visit the University of York website.

Building interconnected worlds with network architecture

Today’s internet users – from global businesses to individuals with network devices – have come to rely on instant, seamless, reliable and flexible methods of connecting. As such, our digital age is founded on the design and maintenance of network operating systems that enable us to live, work and communicate with ease – wherever we happen to be in the world.

Organisations that rely on technological advancements – for example, telecommunications, shared networks, algorithms, and software that exposes application programming interfaces (APIs) – depend on individuals with specialist computer science skills such as network design. Computing is an ever-growing sector, and expertise in it – along with the ability to apply it to achieve business goals – is in increasing demand.

What is network architecture?

Computer networks are built and designed to serve the needs of clients and users. Network architecture, therefore, is the way in which these computer networks are structured to meet device connectivity requirements. In this context, ‘devices’ refers to servers, end-user devices and smart technologies.

There are different types of network architecture that are used for various purposes and applications. Some common examples of networks include:

  • access networks and local-area networks (LANs) – used to support, connect and on-board users and share systems within a distinct geographical area via a central server, such as a workforce within an office building
  • wide-area networks (WANs) – used to connect users, often over long distances, such as healthcare professionals to health systems and applications
  • data centres – used to connect servers where data and applications are hosted and make them accessible to users
  • intranets – used to connect computers for a certain group of users across a network
  • cloud computing – used to meet the on-demand delivery of resources over the Internet, including private clouds, public clouds, multi-clouds and hybrid clouds.

Systems are set up in a variety of ways, depending on need. For example, businesses can choose between options such as peer-to-peer (P2P) architecture – where all devices on the system have the same capabilities, used by platforms such as Bitcoin and BitTorrent – or more traditional client/server networks, where some devices are set up to ‘serve’ others, used by Amazon and for devices such as the Apple Watch.

Computer science specialists working to design and arrange intricate systems will also need to consider network topology: how various connections and nodes are arranged, both logically and physically, in a network. Examples of network topologies include bus, star, ring, mesh, tree and hybrid.

What are the components of network architectures?

Building and maintaining networks can be complex and challenging – especially in a world where expectations are ever-higher, and needs and requirements change over time. To offer solutions that help to manage modern network architectures, network architects have a variety of components at their disposal.

Controller-led set-ups are critical to scaling and securing networks. Controllers respond to evolving business needs and aim to drastically simplify operations; business intent is translated into device configurations and network functions are automated. Controller-led systems continuously monitor devices connected to the network to ensure that performance and security standards are met and maintained.

Multi-domain (or cross-network) integrations are designed to share and exchange relevant operating parameters, with multiple networks communicating via controllers. This helps to ensure that organisational outcomes which span networking domains are delivered.

Intent-based networking (IBN) focuses on setting up networks in order to achieve an organisation’s desired outcomes. It relies heavily on automation to integrate business processes, review network performance, identify issues and enable security measures.

What is the open systems interconnection (OSI) model?

The OSI model enables disparate and diverse systems to communicate using standard protocols. A conceptual model developed by the International Organization for Standardization, it’s best thought of as a single, universal language required for computer networking. It helps to identify and troubleshoot issues with networks.

There are seven layers to the OSI model, each responsible for a specific task and required to communicate with the layers both above and below itself:

  1. Physical layer
  2. Data link layer
  3. Network layer
  4. Transport layer
  5. Session layer
  6. Presentation layer
  7. Application layer.

What is the role of a network architect?

With the expansion of wireless and mobile networks – alongside more traditional versions – network architects are in increasing demand.

A network architect’s job is to create and implement layouts and plans for data communication networks. Their responsibilities are likely to include advising organisations on where they might need networks, how these will work in practice, and any benefits or drawbacks to using particular types of network – so having a keen understanding of organisational goals and wider plans is key. Essentially, they help businesses to create a cohesive framework with which their employees can communicate and share information, access systems and servers, and do their jobs. As a result, most network architects work closely with chief information officers to predict and plan for where new or different networks will be required. They often work within a wider team comprising computer systems engineers and other computer science-related roles.

As well as planning data communication networks and their logistics, further responsibilities of network architects can include:

  • researching new network technologies
  • analysing current data and network traffic to forecast future growth and its implications for networks and bandwidth requirements
  • planning network security measures, such as patches, authentication, back-ups and firewalls, and testing vulnerabilities
  • assessing what additional hardware is required, such as network drivers, cables, wifi capabilities, routers and adaptors, and how this will be implemented.

It can be a lucrative career: the recruitment specialist Reed states that the average salary for a network architect in the UK is £94,842 – a figure that can be far exceeded depending on factors such as individual experience, seniority of role, location and sector.

Discover how to design, implement and manage network infrastructure and architecture

Kickstart your career in computing, and join a fast-growing, exciting and in-demand sector, with the University of York’s online MSc Computer Science programme.

Our 100% online programme is designed for individuals without a computer science background. As well as developing your foundational and theoretical understanding of the discipline, you’ll gain the expertise to apply your learning and tools to solve real-world issues for organisations and service providers. Through flexible study to suit you, you’ll explore areas such as programming, data analytics and big data, artificial intelligence, network infrastructure and protocols, and cybersecurity.

What are the most important skills in software development?

Software development is the area of computer science focused on designing, building, implementing, and supporting computer software. Software is what enables systems and applications to operate and perform tasks, which means that software development is an essential role in the technology sector and other digital industries.

According to IBM, software development is typically done by programmers, software engineers, and software developers, with plenty of interaction and overlap between these roles.

Programmers, often referred to as coders, write code and typically receive instructions from software developers and engineers.

Software developers are more actively involved than programmers in steering software development. While they may assist in writing code, they can also assist in turning requirements into features, managing development processes, and testing and maintaining developed software.

Software engineers are solutions-focused and apply scientific engineering principles to build software and systems that solve problems. 

It’s worth noting, however, that the essential skills needed for successful software development remain constant regardless of who is doing the development work. So whether a person is a programmer, software developer, or software engineer, they will still need to develop a number of important technical and interpersonal abilities.

Technical knowledge needed for software development

Software development links to several important areas of computer science, so it’s important that software developers have technical skills or knowledge in these core areas.


Programming languages

Computer programming and coding skills are among the most vital for software development, because they are what allow developers to write the source code for software. Software development typically requires knowledge of programming or coding languages such as:

  • HTML
  • CSS
  • Java
  • JavaScript
  • C#
  • C++
  • Python
  • Ruby

Programmers aren’t expected to know all programming languages, but should definitely specialise in a few of them.

Operating systems

Operating system software controls computer hardware, and enables things such as applications and programs to run. The most commonly used desktop operating systems include:

  • Microsoft Windows
  • macOS
  • Linux.

Operating systems on mobile devices, meanwhile, are typically either iOS or Android.

Version control

Also known as source control or source control management (SCM), version control ensures that code revisions during software development – both front-end and back-end development, as well as other types of development work, such as web development – are tracked. This means that multiple developers can work on a project simultaneously, and that changes can be shared, merged, or rolled back as needed.

The most commonly used version control system is Git, a free and open-source tool used to manage and track source code history. Git repositories can be managed through cloud-based hosting services such as GitHub and GitLab; Apache Subversion (SVN) is an alternative version control system in its own right.
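
The core idea of version control can be sketched in a few lines. This toy example (nothing like Git’s real content-addressed storage) simply records full snapshots so that any revision can be recovered or rolled back:

```python
# A toy version-control sketch: each commit stores a message and a full
# snapshot of the contents, so earlier revisions can always be restored.
class TinyRepo:
    def __init__(self):
        self.history = []                 # list of (message, contents)

    def commit(self, message, contents):
        self.history.append((message, contents))
        return len(self.history) - 1      # revision number

    def checkout(self, revision):
        return self.history[revision][1]  # roll back to any snapshot

repo = TinyRepo()
r0 = repo.commit("initial draft", "hello")
r1 = repo.commit("add punctuation", "hello!")
assert repo.checkout(r0) == "hello"       # earlier work is never lost
assert repo.checkout(r1) == "hello!"
```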

Algorithms and data structures

Software is often optimised through algorithms and data structures. Data structures provide organisational frameworks for storing information, and algorithms are commonly used to sort data. Together, they can ensure software performs efficiently and effectively.
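
A small example of the pairing: keeping data in a sorted list (the data structure) lets binary search (the algorithm) find items in logarithmic rather than linear time. This sketch uses Python’s standard bisect module:

```python
import bisect

# A sorted list plus binary search: bisect_left halves the search range
# at each step, so lookups take O(log n) comparisons instead of O(n).
scores = [12, 35, 47, 58, 71, 88, 93]    # must be kept in sorted order

def contains(sorted_items, target):
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

assert contains(scores, 58)        # present
assert not contains(scores, 60)    # absent
```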

Database management

Software development will inevitably require interaction with an organisation’s database, which means software developers must be able to insert, alter, update, delete, secure, and retrieve data from within a database. This typically requires familiarity with structured query language (SQL) databases, such as MySQL or Oracle.
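
Python’s built-in sqlite3 module offers a convenient way to try the insert/update/retrieve/delete cycle without a database server; the table and data here are purely illustrative:

```python
import sqlite3

# An in-memory SQLite database: insert, update, retrieve, then delete.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
conn.execute("UPDATE users SET name = ? WHERE name = ?", ("Ada L.", "Ada"))

rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)                        # [('Ada L.',)]

conn.execute("DELETE FROM users WHERE name = ?", ("Ada L.",))
assert conn.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 0
conn.close()
```

The same SQL statements carry over to server-based systems such as MySQL or Oracle, with a different connection library.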

Integrated development environments (IDEs)

An integrated development environment, or IDE, is a user-friendly environment for software development. An IDE typically includes a source code editor, debug tool, and compiler, among other useful tools. Popular examples include Eclipse and Visual Studio.


Containers

Knowing how to use containers, such as those created with Docker, is becoming a fundamental software engineering skill. Containers package up software code and all of its dependencies so that an application can be deployed efficiently and reliably.

Current and emerging tech trends

As technology continues to rapidly evolve, so too must software engineers and developers. There are a number of growing areas within software development, such as the following.

  • Artificial intelligence (AI) and machine learning: AI and machine learning have grown significantly in the past decade, and this trend is expected to continue as organisations and businesses continue to expand their use in areas such as automation and personalisation. 
  • Cloud computing: Cloud-based platforms offer computer resources – particularly data storage – remotely. Commonly used cloud platforms include Amazon Web Services (AWS) and Microsoft Azure.
  • Blockchain: Blockchain is the technology that supports cryptocurrencies, but its applications across all sectors and industries are virtually limitless.

Interpersonal skills needed for software development

There are several non-technical but still essential software developer skills. These interpersonal skills can help ensure that business requirements are clearly understood and met, that project management runs smoothly and seamlessly, and that issues are quickly identified and resolved.


Teamwork

Professional software development is rarely, if ever, done in true isolation. The development process usually requires a development team to support everything from initial design ideas to testing and maintaining the software. Team members will be called upon to collaborate, so it’s important that developers understand how to interact respectfully and productively with one another, with a focus on cooperation and problem-solving.


Communication

Communication skills are among the most important interpersonal abilities or soft skills a software developer needs. Good communication ensures that requirements are fully understood, and that challenges can be clearly articulated and addressed. This is particularly important in fields such as software development, where ideas and information are often complex, and clear feedback is required at most stages of the process.

Attention to detail

A software development project typically has several moving pieces, so it’s important that software engineers and developers can keep a close eye on small details that could create large issues if not addressed early on.

Start your career in software development

Deepen your understanding of software development as well as computer science more widely with the 100% online MSc Computer Science from the University of York. This Masters degree includes a module in software engineering, allowing you to focus on designing and building software systems. You will look at principles and patterns in software design, where to apply them, and how they inform design choices, and learn techniques for ensuring the systems you build behave correctly.

You will also study other key areas required for software development, such as advanced programming, computer architecture and operating systems, algorithms and data structures, and artificial intelligence and machine learning.

This flexible degree has been designed for working professionals and graduates who may not currently have a computer science background but want to launch a career in the cutting-edge field. You can learn part-time around your current work and home commitments, and because the degree is taught exclusively online, you can study whenever and wherever you want.

What are mobile networks?

A mobile network, also known as a cellular network, enables wireless communication between many end users, and across vast distances, by transmitting signals using radio waves. 

Most portable communication devices – including mobile phone handsets, laptops, tablets, and so on – are equipped to connect to a mobile network and enable wireless communication through phone calls, electronic messages and mail, and data. 

How do mobile networks work?

Mobile networks are effectively a web of what are known as base stations. These base stations each cover a specific geographical area – called a cell – and are equipped with at least one fixed-location transceiver antenna that enables the cell to send and receive transmissions between devices using radio waves. 

When people experience poor reception or connection using their mobile devices, this is usually because they aren’t in close enough range to a base station. This is also why, in order to provide the best possible network coverage, many network providers and operators will employ as many base station transceivers as they can, and overlap their cell areas. 
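
As a highly simplified illustration (real cell selection also weighs signal strength, load and radio conditions), a handset associating with its nearest base station can be sketched as a distance comparison. The station names and coordinates here are invented:

```python
import math

# Hypothetical base stations with (x, y) positions, in arbitrary units.
stations = {"cell-A": (0.0, 0.0), "cell-B": (5.0, 0.0), "cell-C": (0.0, 5.0)}

def nearest_station(handset_pos, stations):
    # Pick the station with the smallest straight-line distance; the
    # further a handset is from every station, the worse its reception.
    return min(stations, key=lambda name: math.dist(handset_pos, stations[name]))

print(nearest_station((4.0, 1.0), stations))   # cell-B
```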

How mobile devices connect to mobile networks

In the past, mobile phones – or portable transceivers – used an analog technology called AMPS (Advanced Mobile Phone System) to connect to cellular networks. Today, however, portable communication devices such as the Apple iPhone or Samsung Galaxy Android phone use digital cellular technologies to send and receive transmissions.

These technologies can include:

  • global system for mobile communications (GSM)
  • code division multiple access (CDMA)
  • time division multiple access (TDMA).

What is the difference between GSM and CDMA?

Devices that use the global system for mobile communications (GSM):

  • can transmit data and voice at the same time
  • have weaker built-in encryption, and are typically considered less secure
  • store data on a subscriber identity module (SIM) card that can be transferred between devices

Devices that use code division multiple access (CDMA), on the other hand:

  • cannot transmit data and voice at the same time
  • have stronger built-in encryption and greater security
  • store data on the mobile device itself, rather than on a SIM

Another key difference is in terms of usage: GSM is the predominant technology used in Europe and other parts of the world, while CDMA is used in fewer countries.

What are the different types of mobile networks?

Mobile networks have become progressively faster and more advanced over the past few decades.


2G

2G dates back to the early 1990s and eventually enabled early SMS and MMS messaging on mobile phones. It is also noteworthy because it marked the move from the analog 1G to digital radio signals. Its use has been phased out in some areas of the world, such as Europe and North America, but 2G is still available in many developing regions.


3G

3G was introduced in the early 2000s, and is based on universal mobile telecommunication service (UMTS) standards. For the first time, mobile devices could use web browsers and stream music and videos. 3G is still widely in use around the world today. 


4G

4G was first introduced around 2010 and offered a significant step forward for mobile networks. Speeds increased significantly with 4G, enabling advanced streaming capabilities and better connectivity and performance for mobile games and other smartphone apps, even when not connected to WiFi.


5G

5G is the newest addition to the family of mobile networks, rolling out at the end of the 2010s and still being introduced in major centres around the world today. Through high-frequency radio waves, the 5G network offers significantly increased bandwidth and is approximately 100 times faster than the upper limit of 4G.

Different mobile networks providers in the UK

Mobile networks vary across the United Kingdom, but all are regulated by Ofcom, the regulator and competition authority for UK communications industries such as fixed-line telecoms, mobiles, and wireless device airwaves. It’s worth noting that mobile networks can also fall under the jurisdiction of the Financial Conduct Authority when offering services such as phone insurance.

What are the UK’s main mobile networks?

The UK has four main mobile network providers:

  1. Vodafone
  2. EE
  3. O2
  4. Three

Between them, these four mobile operators – known as the big four – own and manage the UK’s mobile network infrastructure. They’re also known as host mobile phone networks, supporting all other mobile service providers – called mobile virtual network operators (MVNOs) – in the UK.

Examples of mobile virtual network operators in the UK

  • ID Mobile, which uses the Three network
  • GiffGaff, which uses the O2 network
  • Tesco Mobile, which uses the O2 network
  • Virgin Mobile from Virgin Media, which uses the Vodafone and O2 networks
  • Sky Mobile, which uses the O2 network
  • BT Mobile, which uses the EE network
  • Plusnet Mobile, which uses the EE network
  • Asda Mobile, which uses the Vodafone network
  • VOXI, which uses the Vodafone network
  • SMARTY, which uses the Three network
  • Talkmobile, which uses the Vodafone network
  • Lebara, which uses the Vodafone network

Other mobile phone businesses, such as Carphone Warehouse, work with multiple providers to offer consumers several options in one place when looking for a new phone provider.

Competition between mobile providers

Regardless of which mobile provider UK customers choose, there are just four networks supporting the provider’s service. This means that having the UK’s fastest or most reliable network is a huge selling point, and many customers use a dedicated coverage checker to investigate their preferred option. It also means that providers offer a number of additional perks and mobile phone deals to help secure mobile phone contracts.

These benefits might include:

  • reduced tariffs for customers who sign up for a rolling monthly contract
  • data plans such as an unlimited data allowance or data rollover, which allows customers to roll over any unused data at the end of the month into the next month
  • deals and discounts for other services offered by the providers, such as household broadband deals or mobile broadband services
  • access to affiliated entertainment services, such as Netflix, Amazon Prime, or BT Sport
  • discounted SIM-only deals and plans such as a reduced one-month rolling SIM or a 12-month SIM

Explore mobile and computer networks

Discover more about mobile networks and advance your career in computer science with the 100% online MSc Computer Science from the University of York. This flexible Masters programme has been designed for working professionals and graduates who may not currently have a computer science background and want to launch their career in this cutting-edge and lucrative field.

One of the key modules on this programme covers computer and mobile networks, so you will examine internet architecture, protocols, and technologies – as well as their real-world applications. You will also discuss networks and the internet, network architecture, communication protocols and their design principles, wireless and mobile networks, network security issues, and networking standards, as well as related social, privacy, and copyright issues.

Internet protocols: making the world wide web possible

E-commerce, streaming platforms, work, social media, communication – whatever we’re using the internet for, we’re using it on a widespread, wide-ranging and constant basis.

DataReportal states that internet use across the connected, post-pandemic world continues to grow faster than it did previously. Its Digital 2022 Global Overview Report, published in partnership with Hootsuite and We Are Social, states:

  • There are 4.95 billion internet users, accounting for 62.5 per cent of the global population.
  • Internet users grew by 192 million over the past 12 months.
  • The typical global internet user spends almost seven hours per day using the internet across all devices, with 35 per cent of that total focused on social media.
  • The number of people who remain “unconnected” to the internet has dropped below 3 billion for the first time.

With faster mobile connections, more powerful devices set to become even more accessible, and more of our lives playing out digitally than ever before, greater convergence across digital activities is likely. Our reliance on the internet? Greater still.

But what of the structures and processes behind these billions of daily interactions? And just how many of us actually know how the internet works? Individuals with the skills and specialist expertise in the computer science space are in high demand across a huge range of industries – and there’s never been a better time to get involved.

What is internet protocol?

Cloudflare defines internet protocol (IP) as a set of rules for routing and addressing packets of data so that they can travel across networks and arrive at the correct destination. Essentially, it’s a communications protocol. Data packets – the smaller pieces into which larger quantities of data are divided as they traverse the internet – each have IP information attached to them. It’s this IP information that routers use to ensure packets are transferred to the right places.

Each device and each domain that can access the internet has a designated IP address, which is what makes internet communication work. Because packets are sent to IP addresses, the information and data arrive at their intended destination. IP is a host-to-host protocol, used to deliver a packet from a source host to a destination host.

There are two different versions of IP, providing unique identifiers for all connected devices: IPv4 and IPv6. IPv4 – a 32-bit addressing scheme supporting around 4.3 billion addresses – was originally thought to be sufficient to meet users’ needs; however, the explosion in both the number of devices and internet usage has meant it’s no longer enough. Enter IPv6 in 1998, a 128-bit addressing scheme supporting around 340 trillion trillion trillion addresses.
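
Python’s standard ipaddress module makes the two address formats easy to inspect; the addresses below come from documentation-reserved ranges:

```python
import ipaddress

# IPv4: 32-bit addresses (about 4.3 billion); IPv6: 128-bit addresses.
v4 = ipaddress.ip_address("192.0.2.1")       # dotted-quad IPv4
v6 = ipaddress.ip_address("2001:db8::1")     # colon-hex IPv6

assert v4.version == 4 and v6.version == 6
assert 2 ** 32 == 4_294_967_296              # IPv4 address space
assert 2 ** 128 == 340_282_366_920_938_463_463_374_607_431_768_211_456
```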

The OSI model

The Open Systems Interconnection (OSI) network model is a conceptual framework that divides telecommunications and networking into seven layers.

 Each of the seven layers is tasked with its own function:

  • Physical – transmits raw bits over the physical medium, covering the electrical and physical characteristics of the connection.
  • Data link – provides node-to-node data transfer and handles error correction from the physical layer.
  • Network – responsible for data packet forwarding and routing through different routers.
  • Transport – coordinates data transfer between end systems and hosts.
  • Session – manages the sessions that allow two devices to communicate with each other, covering set-up, coordination and termination.
  • Presentation – designated with the preparation, or translation, of application format into network format, or vice versa. For example, data encryption and decryption.
  • Application – closest to the end-user, the application layer receives information from users and displays incoming data to them. Web browsers, for example, rely on layer seven.

The OSI model is valuable in understanding technical and security risks and vulnerabilities as it identifies where data resides, offers an inventory of applications, and facilitates understanding of cloud infrastructure migrations.

Transport protocols and other types of protocol

After a packet arrives at its destination, it’s handled by the transport layer – the fourth layer in the OSI model – and the corresponding transport protocol working alongside IP. Transport layer protocols are port-to-port protocols that work on top of internet protocols, delivering the data packet from the origin port to the IP services, and then from the IP services to the destination port.

Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) represent the transport layer. TCP is a connection-oriented protocol that provides complete transport layer services to applications – the combination is often referred to as Transmission Control Protocol/Internet Protocol, TCP/IP, or the internet protocol suite. It features stream data transfer, reliability, flow control, multiplexing, logical connections, and full duplex operation. UDP, by contrast, is a connectionless protocol providing non-sequenced transport functionality. It’s valuable when speed and size can be prioritised over security and reliability. The packet it produces is an IP datagram containing source port address, destination port address, total length and checksum information.
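The practical difference shows up in code: a UDP exchange needs no handshake at all. Below is a minimal local round trip in Python, using the loopback address and an OS-assigned port chosen purely for this sketch:

```python
import socket

# A minimal UDP round trip on the local machine. UDP is connectionless:
# we simply address a datagram to a port, with no handshake beforehand.
# Binding to port 0 asks the OS for any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (host, port))   # fire off a single datagram

data, addr = server.recvfrom(1024)      # read one datagram
print(data)                             # b'hello'

client.close()
server.close()
```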

Other common types of protocol include:

  • File Transfer Protocol (FTP), where users transfer multimedia, text, programme and document files to each other.
  • Post Office Protocol (POP), for receiving incoming email communications.
  • Hypertext Transfer Protocol (HTTP) and Hypertext Transfer Protocol Secure (HTTPS), which transfer hypertext – the latter in encrypted form.
  • Telnet, which provides remote login to connect one system with another.
  • Gopher, used for searching, retrieving and displaying documents from remote sites.

There is also the Ethernet protocol. Ethernet is a method of connecting computers and other devices in a physical space – via packet-based communication – and is the technology most commonly used to build a Local Area Network (LAN). The Institute of Electrical and Electronics Engineers (IEEE) maintains IEEE 802.3, the working group of standard specifications for Ethernet.

A variety of other protocols function alongside these primary protocols. They include, for example, ARP, DHCP, IMAP4, SIP, SMTP, RLP, RAP, L2TP, and TFTP.

Understand the internet protocol requirements of your business

Gain sought-after skills to excel in a high-demand field with the University of York’s online MSc Computer Science programme.

Our flexible programme is designed to boost your employability, preparing you for a rewarding career in all manner of industries. You’ll gain a strong foundation in a wide range of computing areas, including programming, architecture and operating systems, AI and machine learning, big data analytics, software engineering, cybersecurity, and much more. 

Cybersecurity threats and how to avoid them

Issues of cybersecurity are issues for all of us, and exist at an individual, national and international level.

Our interconnected, digital world relies on technological and computer infrastructure. Almost every element of our lives – including how we bank, travel, communicate, work, shop and socialise – intersects with information technology and digital operating systems in some way. 

While such technological advances offer untold benefits and opportunities, they also carry with them the risk of presenting vulnerabilities to individuals and organisations who seek to benefit from disrupting these systems. 

We all know the importance of creating strong passwords, avoiding suspicious links and installing appropriate security software. However, good digital etiquette only gets us so far.

Cybercrime is increasing year on year. The following cyberthreat-related statistics demonstrate the scope and scale of the issue:

  • the global cost of online crime will reach $300 billion by 2024
  • a business falls victim to a ransomware attack every 14 seconds
  • phishing emails are used to launch 91% of cyber-attacks
  • small businesses are the target of 43% of all cyber-attacks
  • on average, it takes six months to detect a data breach.

What are cybersecurity threats?

A cybersecurity threat is any malicious, deliberate activity that targets computer systems, computer networks, information technology assets, intellectual property or sensitive data. The aims of such threats vary, but generally attackers seek some benefit from the attack, such as disrupting digital life, gaining unauthorised access, or damaging or stealing data. While many cybersecurity attacks originate from unknown individuals or organisations in remote locations, they can also originate from insiders within an organisation. All are labelled ‘threat actors’, with common types including:

  • Hostile nation-states, who engage in cyber warfare such as disruption of critical infrastructure, espionage, propaganda and website defacement.
  • Hackers, ranging from those who seek to steal data and confidential information, to those who gain access to systems as a challenge.
  • Hacktivists, who are pursuing a political agenda, generally through the sharing of propaganda.
  • Terrorist groups, who seek to damage national interests and national security.
  • Insiders and third-party vendors, who can deliberately expose sensitive data, or accidentally introduce malware that leads to a data breach.

It’s not just a pressing issue for large entities such as financial institutions, national governments and tech companies; small-to-medium-sized businesses, as well as individuals, are among the most vulnerable to cyberthreats and should take steps to defend themselves.

What are the most common threats to cyber security?

Common types of cyberthreats and cyber-attacks include:

  • Malware. Computer viruses, spyware, worms and ransomware are all forms of malicious software, known as malware. They target vulnerabilities in information systems and networks, typically via malicious links and email attachments that introduce dangerous software into the system. Malware can: render systems inoperable, install additional harmful software, obtain information covertly and block access to network components.
  • Phishing. Phishing attacks are an incredibly common cyberthreat. They use fraudulent communications (generally emails), that appear to come from a known or reputable sender to steal personal and sensitive data – such as credit card information, passwords and login information – or install malware onto a system. Spear phishing refers to a phishing attack that targets a specific individual or organisation.
  • Man-in-the-middle (MitM) attack. MitM attacks – also referred to as eavesdropping attacks – involve cybercriminals inserting themselves into a two-party transaction (becoming the ‘man in the middle’) to interrupt traffic, and filter or steal data.
  • Denial-of-service attack. These attacks flood computer networks, servers and systems with traffic in a bid to cripple bandwidth and resources so legitimate requests cannot be fulfilled. There is also a Distributed-Denial-of-Service (DDoS) attack; a DDoS attack involves the use of multiple devices to stage a cyber-attack.
  • Structured Query Language (SQL) injection. Malicious code is ‘injected’ into a database in order to gain access to sensitive information or data. It’s an example of a ‘backdoor’ cyberthreat.
  • Zero-day exploit. These attacks exploit a newly discovered vulnerability in the window before a solution or patch has been introduced.
  • DNS tunnelling. These attacks re-route DNS requests to a cybercriminal’s server, providing them with a command-and-control channel and a data extraction path. They are notoriously tricky to detect.

This list is not exhaustive. Other types of cyber-attacks include Trojans, XSS attacks, drive-by attacks, brute force attacks, whale-phishing attacks, ransomware, data breaches and URL interpretation.
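Of the attacks listed above, SQL injection is the most straightforward to demonstrate in code. The sketch below uses Python’s built-in `sqlite3` module and an invented in-memory `users` table to contrast an injectable query with a parameterised one:

```python
import sqlite3

# Hypothetical in-memory database, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# VULNERABLE: the input is pasted straight into the SQL string,
# so the injected OR '1'='1' clause matches every row.
unsafe = db.execute(
    f"SELECT secret FROM users WHERE name = '{attacker_input}'"
).fetchall()

# SAFE: a parameterised query treats the input as a plain value.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(unsafe)  # [('s3cret',)] -- data leaked
print(safe)    # [] -- nothing matches
```

Parameterised queries are the standard defence: the database driver never interprets user input as SQL.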

How can you protect networks from cyber security threats?

Every organisation should invest in protecting itself from cybercriminals and cyber protection should form part of any risk management plan. This can be achieved by implementing various security measures.

One is to ensure that all team members throughout the business are alert to the dangers of cyber-attacks; they should be trained to prevent breaches and detect potential threats.

As many issues of data security occur through accidental insider-user error, this is one of the most effective ways to combat digital vulnerability. Employees should be alert to malicious links, check sender information, maintain strong password etiquette – never share passwords and use two-factor authentication – and take care when handling sensitive information.
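Strong password handling can be sketched with Python’s standard library: passwords should be stored as salted hashes, never as plain text. This is a minimal illustration of the idea, not a production-ready scheme:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2 with a random salt."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    # compare_digest avoids leaking information through timing differences.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

Because only the salt and digest are stored, a stolen database does not directly reveal any passwords.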

From a systems perspective, it’s critical that all hardware and software is up to date and fit for purpose. This includes:

  • supporting patch management systems
  • ensuring networks are behind firewalls
  • implementing endpoint protection
  • backing up data in a secure way
  • controlling and monitoring user access to all systems
  • securing wifi networks
  • establishing personal accounts for all employees.

Protect your systems from potential cyber-attacks

Cybersecurity risks aren’t going away, so individuals and security teams with the specialist skills and expertise to safeguard businesses from these attacks are in high demand. People with these skills can often choose from a wide range of rewarding careers.

Whether you have a computing background or not, you can develop the knowledge and skills to succeed in the industry with the University of York’s online MSc Computer Science programme. You’ll gain in-depth understanding in each of the main areas of computer science, learning how to apply your new skills to real-work environments. Through flexible modules, you’ll explore programming, software development, data analysis, computer networks and much more.


What are the three categories of computer architecture?

Every computer, from the simple device to the most complex machine, operates according to its architecture. This architecture – the rules, methods, and procedures that tell the computer system what to do and how to work – can be broken into three main sub-categories.

Instruction set architecture

An instruction set architecture, or ISA, is a collection of instructions that a computer processor reads. It outlines how the central processing unit (CPU) is controlled by its software, effectively acting as the interface between the machine’s hardware components and its software. In fact, the instruction set architecture provides the only means of interacting with the hardware that is visible to assembly language programmers, compilers, and application programmers.

There are two main types of instruction classifications:

  • Reduced Instruction Set Computer (RISC), which implements only a small set of simple instruction formats that are frequently used in computer programmes. These include MIPS (microprocessor without interlocked pipelined stages), developed by John Hennessy at Stanford University in the 1980s.
  • Complex Instruction Set Computer (CISC), which can implement several specialised instructions.

The ISA also defines and supports a number of key elements within the CPU, such as:

Data types

Supported data types are defined by the instruction set architecture. This means that through the ISA, a computer will understand the type of data item, its values, the programming languages it uses, and what operations can be performed on or through it.


Registers

Registers are small, fast storage locations within processors, microprocessors, microcontrollers, and so on. They hold the short-term data, addresses, and instructions that the processor is currently decoding or executing, and act as its interface to the hardware’s main memory – the random access memory (RAM). Registers include:

  • the programme counter (PC), which indicates where a computer is in its programme sequence. The PC may also be referred to as the instruction pointer (IP), instruction address register (IAR), the instruction counter, or the instruction sequencer. 
  • the memory address register (MAR), which holds the address of an instruction’s related data.
  • the memory data register (MDR), which stores the data that will be sent to – or fetched from – memory.
  • the current instruction register (CIR), which stores the instructions that are currently being decoded and executed by the central processing unit.
  • the accumulator (ACC), which stores the results of calculations.
  • the interrupt control register (ICR), which generates interrupt signals to tell the central processing unit to pause its current task and start executing another.
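The interplay of these registers can be illustrated with a toy fetch-decode-execute loop. The three-instruction machine below is entirely invented for this sketch; it is not any real ISA:

```python
# Toy machine: each instruction is an (opcode, operand) pair.
# pc plays the role of the programme counter, cir the current
# instruction register, and acc the accumulator described above.
program = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]

pc, acc = 0, 0
while True:
    cir = program[pc]        # fetch into the current instruction register
    pc += 1                  # programme counter advances to the next instruction
    opcode, operand = cir    # decode
    if opcode == "LOAD":     # execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)  # 8
```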

Key features

The instruction set architecture outlines how the hardware will support fundamental features, such as:

  • Memory consistency models, which essentially guarantee that if a programmer follows set rules for operations on memory, then memory will be consistent, and the results of reading, writing, or updating memory will be predictable.
  • Memory addressing modes, which are the methods used for locating data and instructions from the RAM or the cache. Mode examples include immediate memory access, direct memory access, indirect memory access, and indexed memory access.
  • Virtual memory, also known as virtual storage, which utilises both hardware and software to allow a computer to temporarily transfer data from RAM to disk.


Microarchitecture

Also called computer organisation, microarchitecture is an important sub-category of computer architecture. The microarchitecture and the instruction set architecture are inherently interconnected, because the microarchitecture outlines how a processor implements its ISA.

Important aspects of microarchitecture include:

  • Instruction cycles. These are the steps required to run programmes: reading and decoding an instruction; finding data associated with the instruction; processing the instruction; and then writing out the results.
  • Multicycle architecture. Multicycle architectures are typically the smallest and simplest architectures because they re-use the minimum required number of logic design elements over successive cycles in order to operate. 
  • Instruction pipelining. Instruction pipelining is a tool for improving processor performance because it allows several instructions to be in progress at the same time, each at a different stage of the instruction cycle.

System design

System design incorporates all of a computer’s physical hardware elements, such as its data processors, multiprocessors, and graphic processing units (GPUs). It also defines how the machine will meet user requirements. For example, which interfaces are used, how data is managed, and so on. In fact, because of its link with meeting specified user requirements, system design is often mentioned alongside product development and marketing.

Other types of computer architecture

Von Neumann Architecture

Also known as Princeton architecture, the von Neumann model of computer architecture was developed by John von Neumann in the 1940s. It outlines a model of computer architecture with five elements:

  1. A processing unit containing both an arithmetic and logic unit (ALU) and processor registers.
  2. A control unit that can hold instructions in the programme counter or the instruction register.
  3. Memory that stores data and instructions, and communicates through connections called a data bus, address bus, and control bus.
  4. External mass storage, or secondary storage.
  5. Mechanisms for input/output devices.

Harvard Architecture

Harvard architecture uses separate memory storage for instructions and for data. This differs from, for example, von Neumann architecture, in which programme instructions and data share the same memory and pathways.

Single instruction, multiple data (SIMD) architecture

Single instruction, multiple data processing computers can process multiple data points simultaneously. This paved the way for supercomputers and other high-performance machines, at least until developers at organisations like Intel and IBM started moving into multiple instruction, multiple data (MIMD) models.

Multicore architecture

Multicore architecture incorporates the core logic of more than one processor onto a single physical processor, with the aim of creating systems that complete multiple tasks at the same time for better overall system performance.

Explore the concepts of modern computer architecture

Deepen your understanding of computer architecture with the 100% online MSc Computer Science from the University of York. This Masters degree includes a module in computer architecture and operating systems, so you’ll delve into how computer systems execute programmes, store information, and communicate. You will also learn the principles, design, and implementation of system software such as operating systems, in addition to developing skills and knowledge in wider computer science areas, such as algorithms, data processing, and artificial intelligence.

This flexible degree has been designed for working professionals and graduates who may not currently have a computer science background but want to launch a career in the cutting-edge field. You can learn part-time around your current work and home commitments, and because the degree is taught exclusively online, you can study whenever and wherever you want.

What is computational thinking?

Computational thinking (CT) is a problem-solving technique that imitates the process computer programmers go through when writing computer programmes and algorithms. This process requires programmers to break down complex problems and scenarios into bite-sized pieces that can be fully understood in order to then develop solutions that are clear to both computers and humans. So, like programmers, those who apply computational thinking techniques will break down problems into smaller, simpler fragments, and then outline solutions to address each problem in terms that any person can comprehend.

Computational thinking requires:

  • exploring and analysing problems thoroughly in order to fully understand them
  • using precise and detailed language to outline both problems and solutions
  • applying clear reasoning at every stage of the process

In short, computational thinking encourages people to approach any problem in a systematic manner, and to develop and articulate solutions in terms that are simple enough to be executed by a computer – or another person. 

What are the four parts of computational thinking?

Computational thinking has four foundational characteristics or techniques. These include:


Decomposition

Decomposition is the process of breaking down a problem or challenge – even a complex one – into small, manageable parts.


Abstraction

Also known as generalisation, abstraction requires computational thinkers to focus only on the most important information and elements of the problem, ignoring everything else, particularly irrelevant or unnecessary details.

Pattern recognition

Also known as data and information visualisation, pattern recognition involves sifting through information to find similar problems. Identifying patterns makes it easier to organise data, which in turn can help with problem solving.  

Algorithm design

Algorithm design is the culmination of all the previous stages. Like a computer programmer writing rules or a set of instructions for a computer algorithm, algorithmic thinking comes up with step-by-step solutions that can be followed in order to solve a problem.

Testing and debugging can also occur at this stage to ensure that solutions remain fit for purpose.
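As a worked illustration – an invented everyday example, not one from the text above – the four techniques can be applied to a small task such as finding the most frequent word in a passage: decompose into cleaning, counting, and choosing; abstract away punctuation and case; recognise that tallying repeats is a familiar pattern; then design the step-by-step algorithm:

```python
def most_frequent_word(text):
    # Decomposition: split the task into cleaning, counting, choosing.
    # Abstraction: ignore case and punctuation -- irrelevant detail here.
    words = [w.strip(".,!?").lower() for w in text.split()]
    # Pattern recognition: tallying repeats is a standard counting pattern.
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    # Algorithm design: the steps above combined into one clear procedure.
    return max(counts, key=counts.get)

print(most_frequent_word("To be, or not to be."))  # to
```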

Why is computational thinking important?

For computer scientists, computational thinking is important because it enables them to better work with data, understand systems, and create workable algorithms and computation models.

In terms of real-world applications outside of computer science, computational thinking is an effective tool that can help students and learners develop problem-solving strategies they can apply to both their studies as well as everyday life. In an increasingly complicated, digital world, computational thinking concepts can help people tackle a diverse array of challenges in an effective, manageable way. Because of this, it is increasingly being taught outside of a computer science education, from the United Kingdom’s national curriculum to the United States’ K-12 education system.

How can computational thinking be used?

Computational thinking competencies are a requirement for any computer programmer working on algorithms, whether they’re for automation projects, designing virtual reality simulations, or developing robotics programmes.

But this thinking process can also be taught as a template for any kind of problem, and used by any person, particularly within high schools, colleges, and other education settings.

Dr Shuchi Grover, for example, is a computer scientist and educator who has argued that the so-called “four Cs” of 21st century learning – communication, critical thinking, collaboration, and creativity – should be joined by a fifth: computational thinking. According to Grover, it can be beneficial within STEM subjects (science, technology, engineering and mathematics), but is also applicable to the social sciences and language and linguistics.

What are some examples of computational thinking?

The most obvious examples of computational thinking are the algorithms that computer programmers write when developing a new piece of software or programme. Outside of computer programming, though, computational thinking can also be found in everything from instructional manuals for building furniture to recipes for baking a chocolate cake – solutions are broken down into simple steps and communicated clearly and precisely.  

What is the difference between computational thinking and computer science?

Computer science is a large area of study and practice, and includes an array of different computer-related disciplines, such as computing, automation, and information technology. 

Computational thinking, meanwhile, is a problem-solving method created and used by computer scientists – but it also has applications outside the field of computer science.

How can we teach computational thinking?

Teaching computational thinking was popularised following the publication of an essay on the topic in the Communications of the ACM journal. Written by Jeannette Wing, a computer science researcher, the essay suggested that computational thinking is a fundamental skill for everyone and should be integrated into other subjects and lesson plans within schools. 

This idea has been adopted in a number of different ways around the world, with a growing number of resources available to educators online.

Become a computational thinker

Develop computational thinking skills with the online MSc Computer Science at the University of York. Through your taught modules, you will be able to apply computational thinking in multiple programming languages, such as Python and Java, and be equipped to engage in solution generation across a broad range of fields. Some of the modules you’ll study include algorithms and data structures, advanced programming, artificial intelligence and machine learning, cyber security threats, and computer architecture and operating systems.

This master’s degree has been designed for working professionals and graduates who may not have a computer science background, but who want to launch a career in the lucrative field. And because it’s studied 100% online, you can learn remotely – at different times and locations – part-time around your full-time work and personal commitments.

What are data structures?

Data is a core component of virtually every computer programme and software system – and data structures are what store, organise, and manage that data. Data structures ensure that different data types can be efficiently maintained and accessed, and effectively processed and used, in order to perform both basic operations and advanced tasks.

There are different data structure types – some basic and some more complex – that have been designed to meet different requirements, but all of them typically ensure that data can be understood by both machines and humans, and then used in specific ways for specific purposes.

But to understand the different types of data structures, it’s important to first understand the different types of data.

What are the different data types?

Data types are the foundation of data structures. They are what tell the computer compiler or interpreter – which translates programming languages such as Java, JavaScript and Python into machine code – how the programmer intends to use the data. They typically fall into one of three categories.

Primitive data types

Primitive data types are the most basic building blocks of data, and include:

  • Boolean, which has two possible values – true or false
  • characters, such as letters and numerals
  • integers and integer values, which are whole numbers that do not contain a fraction
  • references (also called a pointer or handle), which allow a computer programme to refer to data stored elsewhere, such as in the computer’s memory
  • floating-point numbers, which are numbers that include a decimal
  • fixed-point numbers, which are numbers that include a decimal up to a fixed number of digits
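Several of these primitives are built into Python, for example. Note one assumption in the sketch below: Python has no separate character or fixed-point primitive, so a one-character string and the standard-library `decimal` module stand in for those categories:

```python
from decimal import Decimal

flag = True              # Boolean: true or false
letter = "a"             # character (in Python, a one-character string)
count = 42               # integer: a whole number with no fraction
ratio = 3.14             # floating-point number: includes a decimal
price = Decimal("9.99")  # fixed-point-style number via the decimal module
alias = count            # a reference: both names point at the same object

print(type(flag).__name__, type(count).__name__, type(ratio).__name__)
print(alias is count)    # True -- two references to one value
```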

Composite data types

Also known as compound data types, composite data types combine different primitive data types. They include:

  • arrays, which represent a collection of elements, such as values or variables
  • records, which group several different pieces of data together as one unit, such as names and email addresses housed within a table
  • strings, which are ordered sequences of characters

What is an associative array?

An associative array – also called maps, symbol tables, or dictionaries – is an array that holds data in pairs. These pairs contain a key and a value associated with that key.
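Python’s dictionary is a ready-made associative array. A small sketch, with made-up entries, shows key-value pairs in action:

```python
# An associative array maps each key to an associated value.
emails = {
    "ada": "ada@example.com",
    "grace": "grace@example.com",
}

emails["alan"] = "alan@example.com"  # insert a new key-value pair

print(emails["grace"])               # look up a value by its key
print("ada" in emails)               # True -- membership tests use keys
```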

Abstract data types

Abstract data types are defined by their behaviour, and include:

  • queues, which order and update data using a first-in-first-out (FIFO) mechanism
  • stacks, which order and update data using a last-in-first-out (LIFO) mechanism
  • sets, which can store unique values without a particular order or sequence
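These behaviours map directly onto Python’s standard containers – a minimal sketch:

```python
from collections import deque

queue = deque()            # FIFO: first in, first out
queue.append("first")
queue.append("second")
served = queue.popleft()   # "first" leaves before "second"

stack = []                 # LIFO: last in, first out
stack.append("bottom")
stack.append("top")
popped = stack.pop()       # "top" leaves first

unique = {1, 2, 2, 3}      # sets keep unique values, without order

print(served, popped, sorted(unique))  # first top [1, 2, 3]
```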

What are the different types of data structures?

There are several different structures that store data items and process them, but they typically fall into one of two categories: linear data structures or non-linear data structures.

The data structure required for any given project will depend upon the operation of the programme or software, or the kinds of sorting algorithms that will be used. 

Examples of linear data structures

Array data structures

Like array data types, array data structures are made up of a collection of elements, and are among the most important and commonly used data structures. Data with an array structure is stored in adjoining memory locations, and each element is accessed with an index key.

Linked list data structures

Linked list data structures are similar to arrays in that they are a collection of data elements; however, the order of these elements is not determined by their place within the machine’s memory allocation. Instead, each element – or node – contains a data item and a pointer to the next item. 

Doubly linked list data structures

Doubly linked lists are more complex than singly linked lists – a node within the list contains a pointer to both the previous node and the next node. 
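A singly linked list can be sketched in a few lines: each node stores a data item plus a pointer (in Python, a reference) to the next node. A doubly linked node would simply add a `prev` reference alongside `next`:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # pointer to the next node (None at the tail)

def to_list(head):
    """Walk the chain of pointers and collect each node's data."""
    items = []
    while head is not None:
        items.append(head.data)
        head = head.next
    return items

# Build a three-node list: 1 -> 2 -> 3
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

print(to_list(head))  # [1, 2, 3]
```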

Stack data structures

Stacks structure data in a linear format, and elements can be inserted or removed from just one side of the list – the top – following the LIFO principle.

Queue data structures

Queues are similar to stacks, but elements are inserted at the rear of the list and removed from the front, following the FIFO principle. There are also priority queues, where values are removed on the basis of priority.

Examples of non-linear data structures

Tree data structures

Trees store elements hierarchically and in a more abstract fashion than linear structures. Each node within the structure has a key value, and a parent node links to its child nodes – like branches on a family tree. There are a number of different types of tree structures, including red-black trees, AVL trees, and B-trees.

What is a binary tree?

Binary trees are tree data structures where each node has a maximum of two child nodes – a left child and a right child.

They are not to be confused with binary search trees, which are binary trees ordered so that every node’s left subtree holds smaller keys and its right subtree holds larger keys. The time complexity of operations on a binary search tree is directly proportional to the height of the tree.
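A minimal binary search tree sketch: smaller keys go left, larger keys go right, so a search descends one level per comparison:

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None    # left child: smaller keys
        self.right = None   # right child: larger keys

def insert(node, key):
    """Insert a key, returning the (possibly new) subtree root."""
    if node is None:
        return BSTNode(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    return node

def contains(node, key):
    while node is not None:
        if key == node.key:
            return True
        node = node.left if key < node.key else node.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)

print(contains(root, 6))  # True
print(contains(root, 7))  # False
```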

Graph data structures

Graph structures are made up of a set of nodes – known as vertices – that can be visualised like points on a graph, and are connected by lines known as edges. Graph data structures follow the mathematical principles of graph theory

Hash data structures

Hash data structures include hash lists, hash tables, hash trees, and so on. The most commonly known is the hash table, also referred to as a hash map or dictionary, which can store large amounts of data, and maps keys to values through a hash function. Hash tables also employ a technique called chaining to handle collisions, which occur when two keys are hashed to the same index within the table.
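Chaining can be sketched directly: each bucket holds a list of key-value pairs, so two keys that hash to the same index simply share a bucket. The table below is deliberately tiny so that collisions actually happen:

```python
class ChainedHashTable:
    def __init__(self, size=4):
        # A deliberately small table so collisions are easy to see.
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already present: update it
                bucket[i] = (key, value)
                return
        bucket.append((key, value))   # collision-safe: extend the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
table.put("apple", 1)
table.put("pear", 2)
table.put("apple", 3)                 # update, not a duplicate
print(table.get("apple"), table.get("pear"))  # 3 2
```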

Dig deeper into data structures

Explore data structures in depth and prepare for a career in computer science with the online MSc Computer Science at the University of York. One of your key modules covers data structures, so you’ll learn techniques for using algorithms and associated data structures while also studying computer programming, computer architecture and operating systems, and software engineering.

This master’s degree is studied 100% online and has been designed for working professionals and graduates who may not have a computer science background but want to launch a career in the lucrative field.

What is advanced programming?

Advanced programming is shorthand for the advanced-level programming techniques and concepts found in computer science.

Computer programmers typically move through three stages of competency – beginner, intermediate, and advanced – with advanced programmers working on more complex projects and typically earning higher salaries than their more junior colleagues.

Advanced programming concepts

Object-oriented programming

Object-oriented programming, or OOP, is a programming model that all advanced programmers should understand. It’s more advanced than basic procedural programming, which is taught to beginner programmers.

There are four principles of object-oriented programming:

  1. Encapsulation. Encapsulation is effectively the first step of object-oriented programming. It groups related data variables (called properties) and functions (called methods) into single units (called objects) to reduce source code complexity and increase its reusability.
  2. Abstraction. Abstraction conceals the inner workings of an object behind a simpler interface, so that code using the object only needs to know what it does, not how it does it. 
  3. Inheritance. Inheritance is object-oriented programming’s mechanism for eliminating redundant code. It means that relevant properties and methods can be grouped into a single object that can then be reused repeatedly – without repeating the code again and again. 
  4. Polymorphism. Polymorphism, meaning ‘many forms’, is the technique used in object-oriented programming to allow variables, methods, and objects to take multiple forms.  
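The four principles can be illustrated together in a short Python sketch. The `Shape`, `Rectangle`, and `Circle` classes below are hypothetical examples, not part of any library.

```python
class Shape:
    """Encapsulation: data (the name property) and behaviour (the area method)
    are grouped into one unit. Abstraction: callers use area() without
    needing to know how each shape computes it."""

    def __init__(self, name):
        self.name = name

    def area(self):
        raise NotImplementedError

class Rectangle(Shape):  # Inheritance: reuses Shape's name handling.
    def __init__(self, width, height):
        super().__init__("rectangle")
        self.width = width
        self.height = height

    def area(self):  # Polymorphism: the same method takes a shape-specific form.
        return self.width * self.height

class Circle(Shape):
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

for shape in [Rectangle(3, 4), Circle(1)]:
    print(shape.name, shape.area())
```

Calling `shape.area()` in the loop works regardless of which subclass the object belongs to – that interchangeability is polymorphism in action.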

Event-driven programming

Event-driven programming is the programming model that allows events – such as a mouse click from a user – to determine a programme’s actions. 

Commonly used in graphical user interface (GUI) application or software development, event-driven programming typically relies on user-generated events, such as pressing a key – or series of keys – on a keyboard, clicking a mouse, or touching the screen of a touchscreen device. However, events can also include messages that are passed from one programme to another.
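The pattern can be sketched without a GUI framework: handlers are registered against named events, and incoming events determine which code runs. The event names and handlers below are hypothetical.

```python
# A minimal event dispatcher: handlers are registered per event name,
# and emitting an event runs every handler registered for it.
handlers = {}

def on(event_name, handler):
    """Register a handler function to run when `event_name` is emitted."""
    handlers.setdefault(event_name, []).append(handler)

def emit(event_name, data=None):
    """Fire an event, passing its data to every registered handler."""
    for handler in handlers.get(event_name, []):
        handler(data)

on("click", lambda data: print(f"clicked at {data}"))
on("keypress", lambda data: print(f"key pressed: {data}"))

emit("click", (10, 20))    # clicked at (10, 20)
emit("keypress", "Enter")  # key pressed: Enter
```

GUI toolkits and JavaScript’s `addEventListener` follow the same shape: the programme does nothing until an event arrives, then dispatches to whichever handler was registered for it.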

Multithreaded programming

Multithreaded programming is an important component within computer architecture. It’s what allows central processing units (CPUs) to execute multiple sets of instructions – called threads – concurrently as part of a single process.

Operating systems that feature multithreading can perform more quickly and efficiently, switching between the threads within their queues and only loading the new or relevant components. 
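Python’s standard `threading` module can sketch the idea: two threads run concurrently within one process and update shared state, with a lock keeping their updates safe. The worker function and its arguments are illustrative.

```python
import threading

# Shared state updated by both threads; the lock prevents unsafe interleaving.
results = []
lock = threading.Lock()

def worker(name, count):
    """Each thread appends `count` labelled entries to the shared list."""
    for i in range(count):
        with lock:
            results.append(f"{name}-{i}")

threads = [threading.Thread(target=worker, args=(n, 3)) for n in ("A", "B")]
for t in threads:
    t.start()   # both threads now run concurrently in one process
for t in threads:
    t.join()    # wait for both to finish

print(len(results))  # 6
```

The order in which `A-…` and `B-…` entries appear varies from run to run – that nondeterminism is exactly what the operating system’s thread scheduling introduces.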

Programming for data analysis

Businesses and governments at virtually every level are dependent on data analysis to operate and make informed decisions – and the tools they use for this work require advanced programming techniques and skills.

Through advanced programming, data analysts can:

  • search through large datasets and data types
  • find patterns and spot trends within data
  • build statistical models
  • create dashboards
  • produce useful visualisations to help illustrate data results and learning outcomes
  • efficiently extract data
  • carry out problem-solving tasks
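A few of those tasks – summarising a dataset and spotting a trend – can be sketched with Python’s standard library alone. The sales figures below are hypothetical data invented for illustration.

```python
import statistics

# Hypothetical monthly sales figures.
sales = [120, 135, 150, 148, 162, 175]

mean = statistics.mean(sales)

# A crude trend check: compare the average of each half of the series.
first, second = sales[:3], sales[3:]
trend = "rising" if statistics.mean(second) > statistics.mean(first) else "falling"

print(round(mean, 1))  # 148.3
print(trend)           # rising
```

Real analysis work typically reaches for libraries such as pandas and matplotlib, but the underlying tasks – aggregation, comparison, trend detection – are the same.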

Programming languages

A thorough understanding of programming language fundamentals, as well as expertise in some of the more challenging languages, is a prerequisite to moving into advanced programming. It also helps to have knowledge of more complex concepts, such as arrays and recursion, imperative versus functional programming, application programming interfaces (APIs), and programming language specifications.
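Recursion, one of the concepts mentioned above, can be shown with the classic factorial example: a function that calls itself on a smaller input until it reaches a base case.

```python
def factorial(n):
    """Compute n! recursively: the function calls itself on a smaller
    input until the base case stops the chain of calls."""
    if n <= 1:
        return 1                     # base case: 0! and 1! are both 1
    return n * factorial(n - 1)      # recursive step on a smaller problem

print(factorial(5))  # 120
```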

What are the different levels of programming languages?

Programming languages are typically split into two groups:

  1. High-level languages. These are the languages that people are most familiar with, and are written to be user-centric. High-level languages typically use English-like keywords and syntax, making them accessible to many people for writing and debugging; they include Python, Java, C, C++, and SQL.
  2. Low-level languages. These languages are machine-oriented – represented in binary 0s and 1s – and include machine-level language and assembly language.

What is the best programming language for beginners?

According to Codecademy, the best programming language to learn first will depend on what an individual wants to achieve. The most popular programming languages, however, include:

  • C++, an all-purpose language used to build applications. 
  • C#, Microsoft’s programming language, which runs on Windows, Linux, iOS, and Android, and has been adopted by huge numbers of game and mobile app developers.
  • JavaScript, a dynamic programming language that’s typically used for designing interactive websites.
  • Ruby, a general-purpose, dynamic programming language that’s one of the easiest scripting languages to learn.
  • Python, a general-purpose programming language commonly used in data science, machine learning, and web development. It can also support command-line interfaces.
  • SQL, a data-driven programming language commonly used for data analysis.

What are the top 10 programming languages?

TechnoJobs, a job site for IT and technical professionals, states that the top 10 programming languages for 2022 – based on requests from employers and average salaries – are:

  1. Python.
  2. JavaScript.
  3. C.
  4. PHP.
  5. Ruby.
  6. C++.
  7. C#.
  8. Java.
  9. TypeScript.
  10. Perl.

However, it’s worth noting that there are hundreds of programming languages, and the best one will vary depending on the advanced programming assignment, project, or purpose in question.

What’s the most advanced programming language?

Opinions vary on which programming language is the most advanced, challenging, or difficult, but Springboard, a mentoring platform for the tech industry, states the five hardest programming languages are:

  1. C++, because it has complex syntax, is highly permissive in what it allows, and is best learned with existing knowledge of C programming.
  2. Prolog, because of its unconventional language, uncommon data structures, and the highly capable compiler it requires.
  3. LISP, because it is a fragmented language with domain-specific solutions, and uses extensive parentheses.
  4. Haskell, because of its jargon.
  5. Malbolge, because it is a self-modifying language that can result in erratic behaviour.

Alternatively, Springboard states that the easiest programming languages to learn are HTML, JavaScript, C, Python, and Java.

Start your career in computer science

Explore advanced programming concepts in greater detail with the MSc Computer Science at the University of York. This flexible master’s programme is designed for working professionals and graduates who may not currently have a computer science background but want to launch their career in the field, and because it’s taught 100% online, you can study around full-time work and personal commitments at different times and locations.

Your coursework will include modules in advanced programming as well as algorithms, artificial intelligence and machine learning, software engineering, and cyber security. As part of this advanced programming course, you will also have the opportunity to explore the social context of computing, such as the social impact of the internet, software piracy, and codes of ethics and conduct.