Algorithms

Last updated Feb 15, 2026 · Published Aug 18, 2023

The content here is licensed under Attribution 4.0 International (CC BY 4.0).

Algorithms are the foundation of computer science and programming, providing a systematic approach to problem-solving. They are essential for processing data, executing operations, and implementing solutions in software development. One approach I have been exposed to compares an algorithm with a recipe (Bell, 2024): the ingredients are the data, and the steps are the instructions that manipulate that data. This analogy helps in understanding how algorithms work and why they matter in software engineering. For it to hold, though, the recipe has to be taken literally: followed step by step, exactly as a computer executes instructions.

A jump to the past

One of the first blog posts I wrote was about the importance of algorithms in software engineering. In 2015, while preparing for interviews, I discovered how central algorithms are to the process and wrote a post about it here.

Take, for example, the book Grokking Algorithms, which takes a different approach to teaching the most popular algorithms in use (Bhargava, 2016). Things start simple but quickly move into more complex areas such as performance and complexity analysis.

At university, students are exposed to a variety of these problems and are tested on them from the start, yet not everyone finds the problem-solving side of it straightforward. This section is dedicated to the study of algorithms, focusing on implementation and analysis from a practical angle, starting with easy problems and gradually moving to more complex ones.

Introduction

Algorithms in computer science are step-by-step procedures or formulas for solving problems. They are essential for tasks such as data processing, calculations, and automated reasoning. Understanding algorithms is crucial for software development, as they provide the foundation for efficient and effective problem-solving. The most challenging part is the analysis of algorithms: evaluating their efficiency and performance, which covers time complexity, space complexity, and how algorithms scale with input size.

When moving from the recipe analogy to a more practical approach, it is important to understand that algorithms are not just about following steps, but also about making decisions based on the data and the problem at hand. This involves understanding the data structures used, the operations performed, and how they interact with each other.

In this introduction, we will explore various algorithms, their implementations, and their applications in real-world scenarios. We will also delve into the analysis of algorithms, focusing on how to evaluate their performance and efficiency. Note that each algorithm is paired with a problem from a specific area of knowledge: sorting algorithms with the problem of ordering data, graph algorithms with the problem of traversing and analyzing graphs, and so on.

Sorting algorithms

These algorithms are used to arrange data in a specific order, such as ascending or descending. They are fundamental in computer science and are used in various applications, from databases to search engines. Sorting algorithms can be categorized into different types, such as comparison-based sorting (e.g., quicksort, mergesort) and non-comparison-based sorting (e.g., counting sort, radix sort). Each type has its own advantages and disadvantages, depending on the data and the specific use case.

The evaluation of sorting algorithms is often done based on their time complexity, which measures how the execution time of the algorithm grows with the size of the input data. Common time complexities for sorting algorithms include O(n log n) for efficient algorithms like mergesort and quicksort, and O(n^2) for less efficient algorithms like bubble sort and insertion sort.
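To make the gap between O(n^2) and O(n log n) concrete, here is a rough sketch comparing idealized operation counts. The function names are illustrative, and these are formulas, not measured timings:

```typescript
// Idealized operation counts for the two common complexity classes.
function quadraticOps(n: number): number {
    return n * n; // e.g. bubble sort's worst-case comparisons grow like n^2
}

function linearithmicOps(n: number): number {
    return n * Math.log2(n); // e.g. mergesort's comparisons grow like n log n
}

// At n = 1,000,000 the counts are ~10^12 vs ~2*10^7 -- a huge gap.
for (const n of [10, 1_000, 1_000_000]) {
    console.log(n, quadraticOps(n), Math.round(linearithmicOps(n)));
}
```

The numbers show why the difference barely matters for small inputs but dominates everything at scale.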

Before the sorting algorithms themselves, linear search makes a useful baseline. It checks each element in a list sequentially until the desired element is found or the end of the list is reached. It is straightforward but inefficient for large datasets, with a time complexity of O(n).

function linearSearch(arr: number[], target: number): number {
    for (let i = 0; i < arr.length; i++) {
        if (arr[i] === target) {
            return i;
        }
    }
    return -1;
}

Linear search returns the index of the first match, so it does not account for multiple occurrences of the same element. It is a good starting point for understanding the concept of searching in a list, and the same pattern appears constantly when iterating over results in APIs and data-processing tasks. However, with its O(n) time complexity, it is not efficient for large datasets.

Use cases for linear search include:

  • Finding an element in an unsorted list.
  • Checking for the existence of an element in a dataset.
  • Iterating through a list to perform operations on each element.

Bubble Sort

Bubble sort is a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The process is repeated until the list is sorted. It has a time complexity of O(n^2) in the worst case, making it inefficient for large datasets.

function bubbleSort(arr: number[]): number[] {
    const n = arr.length;
    for (let i = 0; i < n - 1; i++) {
        for (let j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                // Swap adjacent elements
                [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
            }
        }
    }
    return arr;
}

Bubble sort works by repeatedly comparing adjacent pairs and swapping them if they’re in the wrong order. The largest element “bubbles” to the end of the array with each pass.

Use cases:

  • Educational purposes for learning sorting concepts
  • Small datasets where simplicity matters more than efficiency
  • Nearly sorted data (can be optimized with early termination)
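The early-termination optimization mentioned above can be sketched as follows: track whether a pass performed any swap, and stop as soon as one does not, which makes the best case O(n) on already-sorted input (the `bubbleSortOptimized` name is illustrative):

```typescript
function bubbleSortOptimized(arr: number[]): number[] {
    const n = arr.length;
    for (let i = 0; i < n - 1; i++) {
        let swapped = false;
        for (let j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
                swapped = true;
            }
        }
        // A pass with no swaps means the array is already sorted.
        if (!swapped) break;
    }
    return arr;
}
```

On nearly sorted data this version finishes in very few passes, while the worst case remains O(n^2).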

Selection Sort

Selection sort is a simple sorting algorithm that divides the input list into two parts: a sorted part and an unsorted part. It repeatedly selects the smallest (or largest) element from the unsorted part and moves it to the end of the sorted part. The time complexity is O(n^2), making it inefficient for large datasets.

function selectionSort(arr: number[]): number[] {
    const n = arr.length;
    for (let i = 0; i < n - 1; i++) {
        let minIndex = i;
        // Find the minimum element in the remaining unsorted portion
        for (let j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        // Swap the found minimum with the element at position i
        if (minIndex !== i) {
            [arr[i], arr[minIndex]] = [arr[minIndex], arr[i]];
        }
    }
    return arr;
}

Why selection sort is not efficient for large datasets:

  • Time complexity of O(n^2) means comparisons grow quadratically with input size
  • Even with optimizations, it cannot compete with O(n log n) algorithms like quicksort or mergesort
  • Performs at most n-1 swaps, but still scans the entire unsorted portion on every pass

Use cases:

  • When memory writes are expensive (few swaps needed)
  • Educational purposes
  • Situations requiring predictable performance characteristics

Insertion Sort

Insertion sort builds the sorted array one item at a time by iterating through an array and inserting each element into its correct position within the sorted portion. It’s efficient for small datasets and nearly sorted data.

function insertionSort(arr: number[]): number[] {
    for (let i = 1; i < arr.length; i++) {
        const key = arr[i];
        let j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
    return arr;
}

Use cases: Real-time sorting, small arrays, online sorting scenarios.
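The "online" use case means elements arrive one at a time and the collection must stay sorted after each arrival; the inner loop of insertion sort does exactly that. A minimal sketch, with `insertSorted` being an illustrative name:

```typescript
// Insert a new value into an already-sorted array, keeping it sorted.
// This is a single step of insertion sort applied to a stream of values.
function insertSorted(sorted: number[], value: number): number[] {
    let i = sorted.length - 1;
    sorted.push(value); // grow the array by one slot
    // Shift larger elements right until the correct position is found.
    while (i >= 0 && sorted[i] > value) {
        sorted[i + 1] = sorted[i];
        i--;
    }
    sorted[i + 1] = value;
    return sorted;
}
```

Each insertion costs O(n) in the worst case, which is fine when the stream is short or mostly in order.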

Merge Sort

Merge sort is a divide-and-conquer algorithm that guarantees O(n log n) time complexity.

function mergeSort(arr: number[]): number[] {
    if (arr.length <= 1) return arr;
    const mid = Math.floor(arr.length / 2);
    const left = mergeSort(arr.slice(0, mid));
    const right = mergeSort(arr.slice(mid));
    return merge(left, right);
}

function merge(left: number[], right: number[]): number[] {
    const result: number[] = [];
    let i = 0, j = 0;
    while (i < left.length && j < right.length) {
        if (left[i] <= right[j]) {
            result.push(left[i++]);
        } else {
            result.push(right[j++]);
        }
    }
    return result.concat(left.slice(i)).concat(right.slice(j));
}

Use cases: Large datasets, stable sorting, external sorting.

Quick Sort

Quick sort is a divide-and-conquer algorithm with O(n log n) average time complexity.

function quickSort(arr: number[], low: number = 0, high: number = arr.length - 1): number[] {
    if (low < high) {
        const pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
    return arr;
}

function partition(arr: number[], low: number, high: number): number {
    const pivot = arr[high];
    let i = low - 1;
    for (let j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            // Move the smaller element into the "less than pivot" region
            [arr[i], arr[j]] = [arr[j], arr[i]];
        }
    }
    [arr[i + 1], arr[high]] = [arr[high], arr[i + 1]];
    return i + 1;
}

Use cases: General-purpose sorting, in-place sorting, most practical datasets.

Shell Sort

Shell sort generalizes insertion sort using a gap sequence, achieving better performance on medium-sized datasets.

function shellSort(arr: number[]): number[] {
    let gap = Math.floor(arr.length / 2);
    while (gap > 0) {
        for (let i = gap; i < arr.length; i++) {
            const temp = arr[i];
            let j = i;
            while (j >= gap && arr[j - gap] > temp) {
                arr[j] = arr[j - gap];
                j -= gap;
            }
            arr[j] = temp;
        }
        gap = Math.floor(gap / 2);
    }
    return arr;
}

Use cases: Embedded systems, memory-constrained environments.

Recursion

Recursion is a programming technique where a function calls itself to solve a problem. It requires a base case to stop the recursion and is used for tree traversals, factorial calculations, and divide-and-conquer algorithms.

// Factorial
function factorial(n: number): number {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

// Binary search recursively
function binarySearchRecursive(arr: number[], target: number, low: number = 0, high: number = arr.length - 1): number {
    if (low > high) return -1;
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) return binarySearchRecursive(arr, target, mid + 1, high);
    return binarySearchRecursive(arr, target, low, mid - 1);
}

Use cases: Tree/graph traversals, divide-and-conquer, mathematical computations.

Graph algorithms

Graph algorithms analyze and traverse node-edge data structures. Common approaches: Depth-First Search (DFS) explores deeply, Breadth-First Search (BFS) explores level-by-level.

// DFS
function dfs(graph: Map<number, number[]>, start: number, visited: Set<number> = new Set()): number[] {
    visited.add(start);
    const result: number[] = [start];
    const neighbors = graph.get(start) || [];
    for (const neighbor of neighbors) {
        if (!visited.has(neighbor)) {
            result.push(...dfs(graph, neighbor, visited));
        }
    }
    return result;
}

// BFS
function bfs(graph: Map<number, number[]>, start: number): number[] {
    const visited = new Set<number>();
    const queue: number[] = [start];
    const result: number[] = [];
    visited.add(start);
    while (queue.length > 0) {
        const node = queue.shift()!;
        result.push(node);
        const neighbors = graph.get(node) || [];
        for (const neighbor of neighbors) {
            if (!visited.has(neighbor)) {
                visited.add(neighbor);
                queue.push(neighbor);
            }
        }
    }
    return result;
}

Use cases: Social networks, route finding, web crawling, dependency resolution.

Search algorithms

Search algorithms find specific data within datasets. Linear search checks sequentially (O(n)). Binary search divides and conquers on sorted data (O(log n)).

// Binary search
function binarySearch(arr: number[], target: number): number {
    let low = 0, high = arr.length - 1;
    while (low <= high) {
        const mid = Math.floor((low + high) / 2);
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return -1;
}

Time complexity comparison:

  • Linear search: O(n)
  • Binary search: O(log n)
  • Hash table lookup: O(1) average

Use cases: Sorted database queries, range searches, system design optimization.
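The O(1) average lookup in the comparison above comes from trading memory for speed: build an index once, then answer each query in constant time. A sketch using the built-in Map (the `buildIndex` and `hashSearch` names are illustrative):

```typescript
// Build a value -> index map once in O(n); each lookup is then O(1) on average.
function buildIndex(arr: number[]): Map<number, number> {
    const index = new Map<number, number>();
    arr.forEach((value, i) => {
        // Keep the first occurrence, matching linear search's behavior.
        if (!index.has(value)) index.set(value, i);
    });
    return index;
}

function hashSearch(index: Map<number, number>, target: number): number {
    return index.get(target) ?? -1;
}
```

This only pays off when the same collection is queried many times; for a single lookup, linear search does less total work.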

Greedy algorithms

Greedy algorithms make locally optimal choices at each step, hoping to find a global optimum. They’re fast but don’t always guarantee the best solution.

// Activity selection - maximize number of non-overlapping activities
interface Activity {
    start: number;
    end: number;
}

function selectActivities(activities: Activity[]): Activity[] {
    if (activities.length === 0) return [];
    // Greedy choice: always pick the activity that finishes earliest
    activities.sort((a, b) => a.end - b.end);
    const selected: Activity[] = [activities[0]];
    let lastEnd = activities[0].end;
    
    for (let i = 1; i < activities.length; i++) {
        if (activities[i].start >= lastEnd) {
            selected.push(activities[i]);
            lastEnd = activities[i].end;
        }
    }
    return selected;
}

Use cases: Huffman coding, minimum spanning tree, activity scheduling, fractional knapsack.

Important: verify that the greedy choice property holds for the problem at hand - a locally optimal choice must lead to a globally optimal solution. It does for activity selection, but not for every optimization problem.

Divide and Conquer Algorithms

Divide and conquer breaks problems into smaller subproblems, solves them independently, and combines results. Mergesort and quicksort exemplify this approach.

Use cases:

  • Sorting (mergesort, quicksort)
  • Binary search
  • Matrix multiplication
  • Closest pair of points
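As a compact example beyond sorting, exponentiation by squaring follows the same pattern: split the exponent in half, solve one subproblem, and combine. A sketch (the `power` name is illustrative):

```typescript
// Compute base^exp in O(log exp) multiplications by halving the exponent.
function power(base: number, exp: number): number {
    if (exp === 0) return 1;                        // base case
    const half = power(base, Math.floor(exp / 2));  // divide: one half-size subproblem
    const combined = half * half;                   // combine the half results
    return exp % 2 === 0 ? combined : combined * base;
}
```

Compare this with the naive loop, which needs `exp` multiplications instead of about log2(exp).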

String Algorithms

String algorithms handle pattern matching and text processing.

function naivePatternMatch(text: string, pattern: string): number[] {
    const results: number[] = [];
    for (let i = 0; i <= text.length - pattern.length; i++) {
        if (text.substring(i, i + pattern.length) === pattern) {
            results.push(i);
        }
    }
    return results;
}

Use cases: Search engines, text editors, DNA matching, plagiarism detection.

Randomized Algorithms

Randomized algorithms use randomness to improve average-case performance. Random pivot selection in quicksort, for example, makes the O(n^2) worst case extremely unlikely on any particular input.

function randomInt(min: number, max: number): number {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
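Applied to quicksort, the helper can pick a random pivot and swap it into the last slot before a standard Lomuto partition. A self-contained sketch (`randomInt` is repeated so the block runs standalone; `randomizedQuickSort` is an illustrative name):

```typescript
function randomInt(min: number, max: number): number {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Quicksort with a randomly chosen pivot, so no fixed input order
// reliably triggers quadratic behavior.
function randomizedQuickSort(arr: number[], low: number = 0, high: number = arr.length - 1): number[] {
    if (low < high) {
        // Swap a random element into the pivot position.
        const pivotIndex = randomInt(low, high);
        [arr[pivotIndex], arr[high]] = [arr[high], arr[pivotIndex]];
        // Standard Lomuto partition around arr[high].
        const pivot = arr[high];
        let i = low - 1;
        for (let j = low; j < high; j++) {
            if (arr[j] < pivot) {
                i++;
                [arr[i], arr[j]] = [arr[j], arr[i]];
            }
        }
        [arr[i + 1], arr[high]] = [arr[high], arr[i + 1]];
        randomizedQuickSort(arr, low, i);       // left of the pivot
        randomizedQuickSort(arr, i + 2, high);  // right of the pivot
    }
    return arr;
}
```

The output is the same as deterministic quicksort; only the running-time distribution changes.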

Use cases: Monte Carlo simulations, load balancing, skip lists.

Backtracking Algorithms

Backtracking explores all solutions by building candidates incrementally and abandoning paths that violate constraints.

function solveNQueens(n: number): string[][] {
    const result: string[][] = [];
    const board = Array(n).fill(0).map(() => Array(n).fill('.'));
    
    function isSafe(row: number, col: number): boolean {
        for (let i = 0; i < row; i++) if (board[i][col] === 'Q') return false;
        for (let i = row, j = col; i >= 0 && j >= 0; i--, j--) if (board[i][j] === 'Q') return false;
        for (let i = row, j = col; i >= 0 && j < n; i--, j++) if (board[i][j] === 'Q') return false;
        return true;
    }
    
    function backtrack(row: number) {
        if (row === n) {
            result.push(board.map(r => r.join('')));
            return;
        }
        for (let col = 0; col < n; col++) {
            if (isSafe(row, col)) {
                board[row][col] = 'Q';
                backtrack(row + 1);
                board[row][col] = '.';
            }
        }
    }
    
    backtrack(0);
    return result;
}

Use cases: N-Queens, Sudoku solver, maze solving, constraint satisfaction.

Bit Manipulation Algorithms

Bit manipulation uses bitwise operations for efficient solutions.

function countSetBits(num: number): number {
    let count = 0;
    while (num) {
        count += num & 1;
        num >>= 1;
    }
    return count;
}

Use cases: Compression, subset generation, optimization problems, graphics.
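The subset-generation use case works by letting each bit of a counter decide whether the corresponding element is included; counting from 0 to 2^n - 1 enumerates every subset. A sketch (the `subsets` name is illustrative):

```typescript
// Enumerate all 2^n subsets of `items`: bit j of `mask`
// decides whether items[j] is included in that subset.
function subsets<T>(items: T[]): T[][] {
    const n = items.length;
    const result: T[][] = [];
    for (let mask = 0; mask < (1 << n); mask++) {
        const subset: T[] = [];
        for (let j = 0; j < n; j++) {
            if (mask & (1 << j)) subset.push(items[j]);
        }
        result.push(subset);
    }
    return result;
}
```

This only works for small n (the output size doubles with each element), which is exactly when brute-force subset enumeration is reasonable.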

Dynamic Programming

Dynamic programming solves optimization problems with overlapping subproblems using memoization to avoid redundant computation.

function fibonacciMemo(n: number, memo: Map<number, number> = new Map()): number {
    if (n <= 1) return n;
    if (memo.has(n)) return memo.get(n)!;
    const result = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
    memo.set(n, result);
    return result;
}

Use cases: Longest common subsequence, knapsack problem, shortest path, edit distance, coin change.
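For the coin change problem listed above, the bottom-up (tabulation) style complements the memoized style shown with Fibonacci: fill a table of answers from small amounts upward. A sketch (the `coinChange` name is illustrative):

```typescript
// Minimum number of coins needed to make `amount`, or -1 if impossible.
// dp[a] holds the fewest coins for amount a, built bottom-up.
function coinChange(coins: number[], amount: number): number {
    const dp: number[] = new Array(amount + 1).fill(Infinity);
    dp[0] = 0; // zero coins make amount 0
    for (let a = 1; a <= amount; a++) {
        for (const coin of coins) {
            // Using `coin` extends the best solution for the smaller amount.
            if (coin <= a && dp[a - coin] + 1 < dp[a]) {
                dp[a] = dp[a - coin] + 1;
            }
        }
    }
    return dp[amount] === Infinity ? -1 : dp[amount];
}
```

Each subproblem (each amount) is solved exactly once, giving O(amount × coins) time instead of the exponential naive recursion.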

Reference: Mastering Dynamic Programming - How to solve any interview problem (Part 1)


References

  1. Bell, A. (2024). Lecture 1: Introduction to CS and Programming Using Python. https://www.youtube.com/watch?v=xAcTmDO6NTI&list=PLUl4u3cNGP62A-ynp6v6-LGBCzeH3VAQB
  2. Bhargava, A. (2016). Grokking Algorithms: An Illustrated Guide for Programmers and Other Curious People. Manning Publications.
