Cybersecurity threats and how to avoid them

Issues of cybersecurity are issues for all of us, and exist at an individual, national and international level.

Our interconnected, digital world relies on technological and computer infrastructure. Almost every element of our lives – including how we bank, travel, communicate, work, shop and socialise – intersects with information technology and digital operating systems in some way. 

While such technological advances offer untold benefits and opportunities, they also carry with them the risk of presenting vulnerabilities to individuals and organisations who seek to benefit from disrupting these systems. 

We all know the importance of creating strong passwords, avoiding suspicious links and installing appropriate security software. However, good digital etiquette only gets us so far.

Cybercrime is increasing year on year. The following cyberthreat-related statistics demonstrate the scope and scale of the issue:

  • the global cost of online crime will reach $300 billion by 2024
  • a business falls victim to a ransomware attack every 14 seconds
  • phishing emails are used to launch 91% of cyber-attacks
  • small businesses are the target of 43% of all cyber-attacks
  • on average, it takes six months to detect a data breach.

What are cybersecurity threats?

A cybersecurity threat is any malicious, deliberate activity that targets computer systems, computer networks, information technology assets, intellectual property or sensitive data. The aims of such threats vary, but generally the attacker seeks to gain some benefit, such as disrupting digital life, gaining unauthorised access, or damaging or stealing data. While many cybersecurity attacks originate from unknown individuals or organisations in remote locations, they can also originate from insiders within an organisation. All are labelled ‘threat actors’, with common types including:

  • Hostile nation-states, who engage in cyber warfare such as disruption of critical infrastructure, espionage, propaganda and website defacement.
  • Hackers, ranging from those who seek to steal data and confidential information, to those who gain access to systems as a challenge.
  • Hacktivists, who are pursuing a political agenda, generally through the sharing of propaganda.
  • Terrorist groups, who seek to damage national interests and national security.
  • Insiders and third-party vendors, who can deliberately expose sensitive data, or accidentally introduce malware that leads to a data breach.

Cybersecurity is not just a pressing issue for large entities such as financial institutions, national governments and tech companies; small-to-medium-sized businesses, as well as individuals, are among the most vulnerable to cyberthreats and should take steps to defend themselves.

What are the most common threats to cybersecurity?

Common types of cyberthreats and cyber-attacks include:

  • Malware. Computer viruses, spyware, worms and ransomware are all forms of malicious software, known as malware. They target vulnerabilities in information systems and networks, typically via malicious links and email attachments that introduce dangerous software into the system. Malware can render systems inoperable, install additional harmful software, covertly obtain information and block access to network components.
  • Phishing. Phishing attacks are an incredibly common cyberthreat. They use fraudulent communications (generally emails), that appear to come from a known or reputable sender to steal personal and sensitive data – such as credit card information, passwords and login information – or install malware onto a system. Spear phishing refers to a phishing attack that targets a specific individual or organisation.
  • Man-in-the-middle (MitM) attack. MitM attacks – also referred to as eavesdropping attacks – involve cybercriminals inserting themselves into a two-party transaction (and so becoming the ‘man in the middle’) to interrupt traffic, and filter or steal data.
  • Denial-of-service attack. These attacks flood computer networks, servers and systems with traffic in a bid to cripple bandwidth and resources so legitimate requests cannot be fulfilled. A distributed denial-of-service (DDoS) attack stages the same flood using multiple devices at once.
  • Structured Query Language (SQL) injection. Malicious SQL code is ‘injected’ into a database query – typically via an unchecked input field – in order to gain access to sensitive information or data. It’s an example of a ‘backdoor’ cyberthreat (a minimal sketch follows this list).
  • Zero-day exploit. These attacks exploit a newly discovered vulnerability in the window after it becomes known to attackers but before a solution or patch is available – crucially, when defenders have no ready fix.
  • DNS tunnelling. These attacks re-route DNS requests to a cybercriminal’s server, providing them with a command-and-control channel and a path for extracting data. They are notoriously tricky to detect.
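To make the SQL injection entry above concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. The table, column names and payload are invented purely for illustration; the point is the difference between building a query from raw input and using a parameterised query.

    import sqlite3

    # Hypothetical users table, created only for this demonstration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"   # a classic injection payload

    # Vulnerable: the payload becomes part of the SQL statement itself,
    # so the WHERE clause is always true and every row is returned.
    unsafe = f"SELECT * FROM users WHERE username = '{user_input}'"
    print(conn.execute(unsafe).fetchall())               # leaks the whole table

    # Safer: a parameterised query treats the input purely as data.
    safe = "SELECT * FROM users WHERE username = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows

The same principle – never splice untrusted input directly into a query – applies whatever database or language is in use.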

This list is not exhaustive. Other types of cyber-attacks include Trojans, XSS attacks, drive-by attacks, brute force attacks, whale-phishing attacks, ransomware, data breaches and URL interpretation.

How can you protect networks from cybersecurity threats?

Every organisation should invest in protecting itself from cybercriminals and cyber protection should form part of any risk management plan. This can be achieved by implementing various security measures.

One is to ensure that all team members throughout the business are alert to the dangers of cyber-attacks; they should be trained to prevent breaches and detect potential threats.

Because many data security incidents occur through accidental insider error, this training is one of the most effective ways to combat digital vulnerability. Employees should be alert to malicious links, check sender information, maintain strong password etiquette – never share passwords and use two-factor authentication – and take care when handling sensitive information.

From a systems perspective, it’s critical that all hardware and software is up to date and fit for purpose. This includes:

  • supporting patch management systems
  • ensuring networks are behind firewalls
  • implementing endpoint protection
  • backing up data in a secure way
  • controlling and monitoring user access to all systems
  • securing wifi networks
  • establishing personal accounts for all employees.

Protect your systems from potential cyber-attacks

Cybersecurity risks aren’t going away, so individuals and security teams with the specialist skills and expertise to safeguard businesses from these attacks are in high demand. People with these skills can often choose from a wide range of rewarding careers.

Whether you have a computing background or not, you can develop the knowledge and skills to succeed in the industry with the University of York’s online MSc Computer Science programme. You’ll gain in-depth understanding in each of the main areas of computer science, learning how to apply your new skills to real-world environments. Through flexible modules, you’ll explore programming, software development, data analysis, computer networks and much more.


What are the three categories of computer architecture?

Every computer, from the simplest device to the most complex machine, operates according to its architecture. This architecture – the rules, methods, and procedures that tell the computer system what to do and how to work – can be broken into three main sub-categories.

Instruction set architecture

An instruction set architecture, or ISA, is a collection of instructions that a computer processor reads. It outlines how the central processing unit (CPU) is controlled by its software, and effectively acts as the interface between the machine’s hardware components and its software. In fact, the instruction set architecture is the only part of the machine visible to assembly language programmers, compilers, and applications, and so provides their sole means of interacting with the hardware.

There are two main types of instruction classifications:

  • Reduced Instruction Set Computer (RISC), which implements only a small set of simple instruction formats that are frequently used in computer programmes. These include what’s known as MIPS (microprocessor without interlocked pipelined stages), which was developed by John Hennessy at Stanford University in the 1980s.
  • Complex Instruction Set Computer (CISC), which can implement several specialised instructions.

The ISA also defines and supports a number of key elements within the CPU, such as:

Data types

Supported data types are defined by the instruction set architecture. This means that through the ISA, a computer will understand the type of data item, its values, the programming languages it uses, and what operations can be performed on or through it.

Registers

Registers store small amounts of data for immediate, short-term use and work alongside the hardware’s main memory – the random access memory (RAM). They are located within processors, microprocessors, microcontrollers, and so on, and hold the instructions and data currently being decoded or executed. Registers include:

  • the programme counter (PC), which indicates where a computer is in its programme sequence. The PC may also be referred to as the instruction pointer (IP), instruction address register (IAR), the instruction counter, or the instruction sequencer. 
  • the memory address register (MAR), which holds the address of an instruction’s related data.
  • the memory data register (MDR), which stores the data that will be sent to – or fetched from – memory.
  • the current instruction register (CIR), which stores the instructions that are currently being decoded and executed by the central processing unit.
  • the accumulator (ACC), which stores the results of calculations.
  • the interrupt control register (ICR), which generates interrupt signals to tell the central processing unit to pause its current task and start executing another.
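To see how several of these registers cooperate, here is a toy fetch-decode-execute loop written in Python. The two-instruction ‘machine’ (LOAD/ADD/HALT) and its memory contents are invented purely for illustration – real instruction sets are far richer – but the roles of the PC, MAR, MDR, CIR and ACC follow the descriptions above.

    # Toy simulation of the fetch-decode-execute cycle.
    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
              10: 7, 11: 35}        # addresses 10 and 11 hold data values

    pc, acc = 0, 0                  # programme counter and accumulator
    while True:
        mar = pc                    # MAR: address of the next instruction
        mdr = memory[mar]           # MDR: value fetched from that address
        cir = mdr                   # CIR: the instruction now being decoded
        pc += 1                     # PC moves on to the next instruction

        op, operand = cir
        if op == "LOAD":
            acc = memory[operand]   # copy a data value into the accumulator
        elif op == "ADD":
            acc += memory[operand]  # accumulate a result
        elif op == "HALT":
            break

    print(acc)                      # 42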

Key features

The instruction set architecture outlines how the hardware will support fundamental features, such as:

  • Memory consistency models, which essentially guarantee that if a programmer follows set rules for operations on memory, then memory will be consistent, and the results of reading, writing, or updating memory will be predictable.
  • Memory addressing modes, which are the methods used for locating data and instructions from the RAM or the cache. Mode examples include immediate memory access, direct memory access, indirect memory access, and indexed memory access.
  • Virtual memory, also known as virtual storage, which utilises both hardware and software to allow a computer to temporarily transfer data from RAM to disk.

Microarchitecture

Also called computer organisation, microarchitecture is an important sub-category of computer architecture. There is an inherent interconnection between the microarchitecture and the instruction set architecture, because the microarchitecture outlines how a processor implements its ISA.

Important aspects of microarchitecture include:

  • Instruction cycles. These are the steps required to run programmes: reading and decoding an instruction; finding data associated with the instruction; processing the instruction; and then writing out the results.
  • Multicycle architecture. Multicycle architectures are typically the smallest and simplest architectures because they recycle the minimum required number of logic design elements in order to operate. 
  • Instruction pipelining. Instruction pipelining is a tool for improving processor performance because it allows several instructions to be in progress at the same time.

System design

System design incorporates all of a computer’s physical hardware elements, such as its data processors, multiprocessors, and graphics processing units (GPUs). It also defines how the machine will meet user requirements – for example, which interfaces are used, how data is managed, and so on. In fact, because of its link with meeting specified user requirements, system design is often mentioned alongside product development and marketing.

Other types of computer architecture

Von Neumann Architecture

Also known as Princeton architecture, the von Neumann model of computer architecture was developed by John von Neumann in the 1940s. It outlines a model of computer architecture with five elements:

  1. A processing unit that contains an arithmetic and logic unit (ALU) and a set of processor registers.
  2. A control unit that can hold instructions in the programme counter or the instruction register.
  3. Memory that stores data and instructions, and communicates through connections called a data bus, address bus, and control bus.
  4. External mass storage, or secondary storage.
  5. Mechanisms for input/output devices.

Harvard Architecture

Harvard architecture uses separate memory storage for instructions and for data. This differs from, for example, von Neumann architecture, in which programme instructions and data share the same memory and pathways.

Single instruction, multiple data (SIMD) architecture

Single instruction, multiple data processing computers can process multiple data points simultaneously. This paved the way for supercomputers and other high-performance machines, at least until developers at organisations like Intel and IBM started moving into multiple instruction, multiple data (MIMD) models.

Multicore architecture

Multicore architecture uses a single physical processor to incorporate the core logic of more than one processor, with the aim of creating systems that can complete multiple tasks at the same time for better overall system performance.

Explore the concepts of modern computer architecture

Deepen your understanding of computer architecture with the 100% online MSc Computer Science from the University of York. This Masters degree includes a module in computer architecture and operating systems, so you’ll delve into how computer systems execute programmes, store information, and communicate. You will also learn the principles, design, and implementation of system software such as operating systems, in addition to developing skills and knowledge in wider computer science areas, such as algorithms, data processing, and artificial intelligence.

This flexible degree has been designed for working professionals and graduates who may not currently have a computer science background but want to launch a career in the cutting-edge field. You can learn part-time around your current work and home commitments, and because the degree is taught exclusively online, you can study whenever and wherever you want.

What is computational thinking?

Computational thinking (CT) is a problem-solving technique that imitates the process computer programmers go through when writing computer programmes and algorithms. This process requires programmers to break down complex problems and scenarios into bite-sized pieces that can be fully understood in order to then develop solutions that are clear to both computers and humans. So, like programmers, those who apply computational thinking techniques will break down problems into smaller, simpler fragments, and then outline solutions to address each problem in terms that any person can comprehend.

Computational thinking requires:

  • exploring and analysing problems thoroughly in order to fully understand them
  • using precise and detailed language to outline both problems and solutions
  • applying clear reasoning at every stage of the process

In short, computational thinking encourages people to approach any problem in a systematic manner, and to develop and articulate solutions in terms that are simple enough to be executed by a computer – or another person. 

What are the four parts of computational thinking?

Computational thinking has four foundational characteristics or techniques:

Decomposition

Decomposition is the process of breaking down a problem or challenge – even a complex one – into small, manageable parts.

Abstraction

Also known as generalisation, abstraction requires computational thinkers to focus only on the most important information and elements of the problem, and to ignore anything else, particularly irrelevant or unnecessary details.

Pattern recognition

Also known as data and information visualisation, pattern recognition involves sifting through information to find similar problems. Identifying patterns makes it easier to organise data, which in turn can help with problem solving.  

Algorithm design

Algorithm design is the culmination of all the previous stages. Like a computer programmer writing rules or a set of instructions for a computer algorithm, algorithmic thinking comes up with step-by-step solutions that can be followed in order to solve a problem.

Testing and debugging can also occur at this stage to ensure that solutions remain fit for purpose.
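As a small, concrete illustration of the four techniques working together, here is a Python sketch for an everyday problem – finding the most frequent word in a piece of text. The problem and function names are chosen purely for illustration.

    from collections import Counter
    import string

    def normalise(text):
        # Abstraction: strip away irrelevant detail (case and punctuation).
        cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
        return cleaned.split()

    def count_words(words):
        # Pattern recognition: group repeated words so they can be compared.
        return Counter(words)

    def most_common_word(text):
        # Decomposition and algorithm design: the problem is solved as a
        # clear sequence of smaller, testable steps.
        words = normalise(text)
        counts = count_words(words)
        word, _ = counts.most_common(1)[0]
        return word

    print(most_common_word("The cat sat on the mat. The mat was warm."))  # "the"

Each function can be tested and debugged on its own, which is exactly the kind of checking described above.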

Why is computational thinking important?

For computer scientists, computational thinking is important because it enables them to better work with data, understand systems, and create workable algorithms and computation models.

In terms of real-world applications outside of computer science, computational thinking is an effective tool that can help students and learners develop problem-solving strategies they can apply to both their studies as well as everyday life. In an increasingly complicated, digital world, computational thinking concepts can help people tackle a diverse array of challenges in an effective, manageable way. Because of this, it is increasingly being taught outside of a computer science education, from the United Kingdom’s national curriculum to the United States’ K-12 education system.

How can computational thinking be used?

Computational thinking competencies are a requirement for any computer programmer working on algorithms, whether they’re for automation projects, designing virtual reality simulations, or developing robotics programmes.

But this thinking process can also be taught as a template for any kind of problem, and used by any person, particularly within high schools, colleges, and other education settings.

Dr Shuchi Grover, for example, is a computer scientist and educator who has argued that the so-called “four Cs” of 21st century learning – communication, critical thinking, collaboration, and creativity – should be joined by a fifth: computational thinking. According to Grover, it can be beneficial within STEM subjects (science, technology, engineering and mathematics), but is also applicable to the social sciences and language and linguistics.

What are some examples of computational thinking?

The most obvious examples of computational thinking are the algorithms that computer programmers write when developing a new piece of software or programme. Outside of computer programming, though, computational thinking can also be found in everything from instructional manuals for building furniture to recipes for baking a chocolate cake – solutions are broken down into simple steps and communicated clearly and precisely.  

What is the difference between computational thinking and computer science?

Computer science is a large area of study and practice, and includes an array of different computer-related disciplines, such as computing, automation, and information technology. 

Computational thinking, meanwhile, is a problem-solving method created and used by computer scientists – but it also has applications outside the field of computer science.

How can we teach computational thinking?

Teaching computational thinking was popularised following the publication of an essay on the topic in the Communications of the ACM journal. Written by Jeannette Wing, a computer science researcher, the essay suggested that computational thinking is a fundamental skill for everyone and should be integrated into other subjects and lesson plans within schools. 

This idea has been adopted in a number of different ways around the world, and a growing number of resources are available to educators online.

Become a computational thinker

Develop computational thinking skills with the online MSc Computer Science at the University of York. Through your taught modules, you will be able to apply computational thinking in multiple programming languages, such as Python and Java, and be equipped to engage in solution generation across a broad range of fields. Some of the modules you’ll study include algorithms and data structures, advanced programming, artificial intelligence and machine learning, cyber security threats, and computer architecture and operating systems.

This master’s degree has been designed for working professionals and graduates who may not have a computer science background, but who want to launch a career in the lucrative field. And because it’s studied 100% online, you can learn remotely – at different times and locations – part-time around your full-time work and personal commitments.

What are data structures?

Data is a core component of virtually every computer programme and software system – and data structures are what store, organise, and manage that data. Data structures ensure that different data types can be efficiently maintained and accessed, and effectively processed and used, in order to perform both basic operations and advanced tasks.

There are different data structure types – some basic and some more complex – that have been designed to meet different requirements, but all of them typically ensure that data can be understood by both machines and humans, and then used in specific ways for specific purposes.

But to understand the different types of data structures, it’s important to first understand the different types of data.

What are the different data types?

Data types are the foundation of data structures. They tell the compiler or interpreter – which translates programming languages such as Java, JavaScript and Python into machine code – how the programmer intends to use the data. They typically fall into one of three categories.

Primitive data types

Primitive data types are the most basic building blocks of data, and include:

  • Booleans, which have two possible values – true or false
  • characters, such as letters and numerals
  • integers, which are whole numbers that do not contain a fraction
  • references (also called pointers or handles), which allow a computer programme to refer to data stored elsewhere, such as in the computer’s memory
  • floating-point numbers, which are numbers that include a decimal
  • fixed-point numbers, which are numbers that include a decimal up to a fixed number of digits

Composite data types

Also known as compound data types, composite data types combine different primitive data types. They include:

  • arrays, which represent a collection of elements, such as values or variables
  • records, which group several different pieces of data together as one unit, such as names and email addresses housed within a table
  • strings, which are ordered sequences of characters

What is an associative array?

An associative array – also called a map, symbol table, or dictionary – is an array that holds data in pairs. These pairs contain a key and a value associated with that key.
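In Python, the built-in dict is an associative array. A minimal sketch, with illustrative names and addresses:

    # Each entry pairs a key with the value associated with it.
    email_addresses = {
        "alice": "alice@example.com",
        "bob": "bob@example.com",
    }
    email_addresses["carol"] = "carol@example.com"   # add a new key/value pair
    print(email_addresses["bob"])                    # look up a value by its key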

Abstract data types

Abstract data types are defined by their behaviour, and include:

  • queues, which order and update data using a first-in-first-out (FIFO) mechanism
  • stacks, which order and update data using a last-in-first-out (LIFO) mechanism
  • sets, which can store unique values without a particular order or sequence
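These behaviours are easy to see in a few lines of Python. The values are illustrative; the point is the order in which items come back out.

    from collections import deque

    # Queue (FIFO): the first item added is the first item removed.
    queue = deque()
    queue.append("first")
    queue.append("second")
    print(queue.popleft())        # "first"

    # Stack (LIFO): the last item added is the first item removed.
    stack = []
    stack.append("first")
    stack.append("second")
    print(stack.pop())            # "second"

    # Set: stores unique values with no particular order.
    print({1, 2, 2, 3})           # {1, 2, 3}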

What are the different types of data structures?

There are several different structures that store data items and process them, but they typically fall into one of two categories: linear data structures or non-linear data structures.

The data structure required for any given project will depend upon the operation of the programme or software, or the kinds of sorting algorithms that will be used. 

Examples of linear data structures

Array data structures

Like array data types, array data structures are made up of a collection of elements, and are among the most important and commonly used data structures. Data with an array structure is stored in adjoining memory locations, and each element is accessed with an index key.

Linked list data structures

Linked list data structures are similar to arrays in that they are a collection of data elements; however, the order of these elements is not determined by their place within the machine’s memory allocation. Instead, each element – or node – contains a data item and a pointer to the next item.

Doubly linked list data structures

Doubly linked lists are more complex than singly linked lists – a node within the list contains a pointer to both the previous node and the next node. 
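A minimal Python sketch of a singly linked list (with a comment noting the extra pointer a doubly linked node would carry); the node values are illustrative.

    class Node:
        def __init__(self, data):
            self.data = data
            self.next = None     # a doubly linked node would also keep self.prev

    # Build the list a -> b -> c by hand.
    head = Node("a")
    head.next = Node("b")
    head.next.next = Node("c")

    # Traverse the list by following the pointers from node to node.
    node = head
    while node is not None:
        print(node.data)
        node = node.next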

Stack data structures

Stacks structure data in a linear format, and elements can be inserted or removed from just one side of the list – the top – following the LIFO principle.

Queue data structures

Queues are similar to stacks, but elements are inserted at the rear of the list and removed from the front, following the FIFO principle. There are also priority queues, where values are removed on the basis of priority.

Examples of non-linear data structures

Tree data structures

Trees store elements hierarchically and in a more abstract fashion than linear structures. Each node within the structure has a key value, and a parent node will link to child nodes – like branches on a family tree. There are a number of different types of tree structures, including red-black trees, AVL trees, and B-trees.

What is a binary tree?

Binary trees are tree data structures where each node has a maximum of two child nodes – a left child and a right child.

They are not to be confused with binary search trees, which keep their keys in sorted order – every key in a node’s left subtree is smaller than the node’s own key, and every key in its right subtree is larger – so the time taken to search or update the structure is proportional to the height of the tree.
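A minimal binary search tree sketch in Python, with illustrative keys: insertion sends smaller keys left and larger keys right, and a lookup only ever walks one root-to-leaf path.

    class BSTNode:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(node, key):
        if node is None:
            return BSTNode(key)
        if key < node.key:
            node.left = insert(node.left, key)
        else:
            node.right = insert(node.right, key)
        return node

    def contains(node, key):
        while node is not None:
            if key == node.key:
                return True
            node = node.left if key < node.key else node.right
        return False

    root = None
    for key in [8, 3, 10, 1, 6]:
        root = insert(root, key)
    print(contains(root, 6), contains(root, 7))   # True False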

Graph data structures

Graph structures are made up of a set of nodes – known as vertices – that can be visualised like points on a graph, and are connected by lines known as edges. Graph data structures follow the mathematical principles of graph theory.
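One common way to store a graph in Python is an adjacency list: a dictionary mapping each vertex to the set of vertices it shares an edge with. The vertices below are illustrative.

    # A small undirected graph with vertices A-D.
    graph = {
        "A": {"B", "C"},
        "B": {"A", "C"},
        "C": {"A", "B", "D"},
        "D": {"C"},
    }
    print(graph["C"])             # the vertices directly connected to C
    print("D" in graph["A"])      # False: there is no edge between A and D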

Hash data structures

Hash data structures include hash lists, hash tables, hash trees, and so on. The most commonly known is the hash table, also referred to as a hash map or dictionary, which can store large amounts of data, and maps keys to values through a hash function. Hash tables often employ a technique called chaining to resolve collisions, which occur when two keys are hashed to the same index within the hash table.
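A small Python sketch of a hash table that resolves collisions by chaining: each bucket holds a list of key/value pairs whose keys hash to the same index. The class and method names are illustrative.

    class HashTable:
        def __init__(self, size=8):
            self.buckets = [[] for _ in range(size)]

        def _index(self, key):
            return hash(key) % len(self.buckets)

        def put(self, key, value):
            bucket = self.buckets[self._index(key)]
            for i, (k, _) in enumerate(bucket):
                if k == key:              # key already present: overwrite it
                    bucket[i] = (key, value)
                    return
            bucket.append((key, value))   # otherwise chain onto the bucket

        def get(self, key):
            for k, v in self.buckets[self._index(key)]:
                if k == key:
                    return v
            raise KeyError(key)

    table = HashTable()
    table.put("alice", 31)
    table.put("bob", 42)
    print(table.get("bob"))               # 42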

Dig deeper into data structures

Explore data structures in depth and prepare for a career in computer science with the online MSc Computer Science at the University of York. One of your key modules covers data structures, so you’ll learn techniques for using algorithms and associated data structures while also studying computer programming, computer architecture and operating systems, and software engineering.

This master’s degree is studied 100% online and has been designed for working professionals and graduates who may not have a computer science background but want to launch a career in the lucrative field.

What is advanced programming?

Advanced programming is shorthand for the advanced-level programming techniques and concepts found in computer science.

Computer programmers typically move through three stages of competency – beginner, intermediate, and advanced – with advanced programmers working on more complex projects and typically earning higher salaries than their more junior colleagues.

Advanced programming concepts

Object-oriented programming

Object-oriented programming, or OOP, is a programming model that all advanced programmers should understand. It’s more advanced than basic procedural programming, which is taught to beginner programmers.

There are four principles of object-oriented programming:

  1. Encapsulation. Encapsulation is effectively the first step of object-oriented programming. It groups related data variables (called properties) and functions (called methods) into single units (called objects) to reduce source code complexity and increase its reusability.
  2. Abstraction. Abstraction essentially contains and conceals the inner-workings of object-oriented programming code to create simpler interfaces. 
  3. Inheritance. Inheritance is object-oriented programming’s mechanism for eliminating redundant code. It means that relevant properties and methods can be grouped into a single object that can then be reused repeatedly – without repeating the code again and again. 
  4. Polymorphism. Polymorphism, meaning many forms, is the technique used in object-oriented programming to render variables, methods, and objects in multiple forms.  
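A brief Python sketch of the four principles working together; the shape classes are invented purely for illustration.

    class Shape:
        def __init__(self, name):
            self._name = name         # encapsulation: data lives inside the object

        def area(self):               # abstraction: one simple interface for callers
            raise NotImplementedError

        def describe(self):
            return f"{self._name} with area {self.area():.2f}"

    class Circle(Shape):              # inheritance: reuses Shape's code
        def __init__(self, radius):
            super().__init__("circle")
            self._radius = radius

        def area(self):               # polymorphism: the same method, many forms
            return 3.14159 * self._radius ** 2

    class Square(Shape):
        def __init__(self, side):
            super().__init__("square")
            self._side = side

        def area(self):
            return self._side ** 2

    for shape in [Circle(1), Square(2)]:
        print(shape.describe())       # circle with area 3.14, square with area 4.00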

Event-driven programming

Event-driven programming is the programming model that allows for events – like a mouse-click from a user – to determine a programme’s actions. 

Commonly used in graphical user interface (GUI) application or software development, event-driven programming typically relies on user-generated events, such as pressing a key – or series of keys – on a keyboard, clicking a mouse, or touching the screen of a touchscreen device. However, events can also include messages that are passed from one programme to another.
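Here is a minimal, framework-free sketch of the event-driven model in Python: handlers are registered for named events, and the programme only acts when an event arrives. The event names and handlers are invented for illustration – in a real GUI application the events would come from the user via the windowing toolkit.

    handlers = {}

    def on(event_name, handler):
        # Register a handler (callback) for a named event.
        handlers.setdefault(event_name, []).append(handler)

    def emit(event_name, data=None):
        # Dispatch an event to every handler registered for it.
        for handler in handlers.get(event_name, []):
            handler(data)

    on("click", lambda data: print(f"button clicked at {data}"))
    on("key_press", lambda data: print(f"key pressed: {data}"))

    emit("click", (120, 45))      # simulate a mouse click
    emit("key_press", "Enter")    # simulate a key press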

Multithreaded programming

Multithreaded programming is an important component within computer architecture. It’s what allows central processing units (CPUs) to execute multiple sets of instructions – called threads – concurrently as part of a single process.

Operating systems that feature multithreading can perform more quickly and efficiently, switching between the threads within their queues and only loading the new or relevant components. 
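A minimal sketch of multithreading using Python’s standard threading module; the workload is invented for illustration, and the interleaved output shows two threads making progress concurrently within one process.

    import threading
    import time

    def worker(name, delay):
        for i in range(3):
            time.sleep(delay)
            print(f"{name}: step {i}")

    threads = [
        threading.Thread(target=worker, args=("thread-1", 0.10)),
        threading.Thread(target=worker, args=("thread-2", 0.15)),
    ]
    for t in threads:
        t.start()          # both threads now run concurrently
    for t in threads:
        t.join()           # wait for both to finish before exiting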

Programming for data analysis

Businesses and governments at virtually every level are dependent on data analysis to operate and make informed decisions – and the tools they use for this work require advanced programming techniques and skills.

Through advanced programming, data analysts can:

  • search through large datasets and data types
  • find patterns and spot trends within data
  • build statistical models
  • create dashboards
  • produce useful visualisations to help illustrate data results and learning outcomes
  • efficiently extract data
  • carry out problem-solving tasks
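A small taste of a few of the tasks in that list – searching a dataset, spotting a trend and summarising it – using only Python’s standard library. The sales figures are invented for illustration.

    from statistics import mean

    monthly_sales = {"Jan": 120, "Feb": 135, "Mar": 150, "Apr": 149, "May": 170}

    print(mean(monthly_sales.values()))               # summary statistic: 144.8
    print(max(monthly_sales, key=monthly_sales.get))  # best month: "May"

    # Spot a simple trend: months where sales rose on the previous month.
    pairs = list(monthly_sales.items())
    rises = [month for (month, value), (_, prev) in zip(pairs[1:], pairs) if value > prev]
    print(rises)                                      # ['Feb', 'Mar', 'May']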

Programming languages

A thorough understanding of programming language fundamentals, as well as expertise in some of the more challenging languages, is a prerequisite to moving into advanced programming. It also helps to have knowledge of more complex concepts, such as arrays and recursion, imperative versus functional programming, application programming interfaces (APIs), and programming language specifications.

What are the different levels of programming languages?

Programming languages are typically split into two groups:

  1. High-level languages. These are the languages that people are most familiar with, and are written to be user-centric. High-level languages typically use English-like keywords and syntax, so that they are accessible to many people for writing and debugging, and include languages such as Python, Java, C, C++, SQL, and so on.
  2. Low-level languages. These languages are machine-oriented – represented in binary as 0s and 1s – and include machine-level language and assembly language.

What is the best programming language for beginners?

According to Codecademy, the best programming language to learn first will depend on what an individual wants to achieve. The most popular programming languages, however, include:

  • C++, an all-purpose language used to build applications. 
  • C#, Microsoft’s programming language, which runs on Windows, Linux (derived from Unix), iOS, and Android, and is used by huge numbers of game and mobile app developers.
  • JavaScript, a dynamic programming language that’s typically used for designing interactive websites.
  • Ruby, a general-purpose, dynamic programming language that’s one of the easiest scripting languages to learn.
  • Python, a general-purpose programming language commonly used in data science, machine learning, and web development. It can also support command-line interfaces.
  • SQL, a data-driven programming language commonly used for data analysis.

What are the top 10 programming languages?

TechnoJobs, a job site for IT and technical professionals, states that the top 10 programming languages for 2022 – based on requests from employers and average salaries – are:

  1. Python.
  2. JavaScript.
  3. C.
  4. PHP.
  5. Ruby.
  6. C++.
  7. C#.
  8. Java.
  9. TypeScript.
  10. Perl.

However, it’s worth noting that there are hundreds of programming languages, and the best one will vary depending on the advanced programming assignment, project, or purpose in question.

What’s the most advanced programming language?

Opinions vary on which programming language is the most advanced, challenging, or difficult, but Springboard, a mentoring platform for the tech industry, states the five hardest programming languages are:

  1. C++, because it has complex syntax, is a highly permissive language, and ideally requires existing knowledge of C programming before learning the C++ programming language.
  2. Prolog, because of its unconventional language, uncommon data structures, and because it requires a significantly competent compiler.
  3. LISP, because it is a fragmented language with domain-specific solutions, and uses extensive parentheses.
  4. Haskell, because of its jargon.
  5. Malbolge, because it is a self-modifying language that can result in erratic behaviour.

Alternatively, Springboard states that the easiest programming languages to learn are HTML, JavaScript, C, Python, and Java.

Start your career in computer science

Explore advanced programming concepts in greater detail with the MSc Computer Science at the University of York. This flexible master’s programme is designed for working professionals and graduates who may not currently have a computer science background but want to launch their career in the field, and because it’s taught 100% online, you can study around full-time work and personal commitments at different times and locations.

Your coursework will include modules in advanced programming as well as algorithms, artificial intelligence and machine learning, software engineering, and cyber security. As part of this advanced programming course, you will also have the opportunity to explore the social context of computing, such as the social impact of the internet, software piracy, and codes of ethics and conduct.