Optimizing iOS App Performance: A Practical Guide to Big O Notation

Hemal Swarnasiri

Associate Technical Lead

Data structures and algorithms are fundamental concepts in programming. Data structures, such as arrays, linked lists, trees, and hash tables, organize and store data; algorithms, such as sorting, searching, and recursive procedures, operate on that data. Big O notation is a mathematical notation used to describe the efficiency of an algorithm in terms of time and space complexity, and it is something every skilled software engineer should be familiar with.

Swift is a powerful and intuitive programming language developed by Apple and the open-source community. It can be used to build applications across the Apple ecosystem, including iOS, macOS, watchOS, and tvOS. I'll use Swift for all the code examples needed to understand Big O notation.

In this blog post, I will explain what Big O notation is, cover the common types of operational complexity, show how to analyze the Big O notation of any simple algorithm, and provide tips for optimizing your app based on these complexities. By the end of this read, you will be able to leverage Big O notation effectively in your projects.

What is a Data Structure?

A data structure is an arrangement of data in a computer’s memory or disk storage.

Example: Arrays, Linked lists, Binary trees, Stacks, Queues, Graphs, Hash tables, and more.

Swift’s standard library provides optimized and prebuilt data structures like arrays, sets and dictionaries. We typically use these data structures quite heavily in our day-to-day programming.

What is an Algorithm?

An algorithm is a clearly specified set of steps, or a computational pattern, that transforms a given input into a specified output.

Example: Sorting Algorithms, String Matching Algorithms, Pathfinding Algorithms, and more.

So, when considering program development, we can express that:

Programs = Algorithms + Data Structures

When it comes to algorithms, the same output can often be achieved in various ways. For example, when sorting an array, you can use different algorithms like bubble sort, merge sort, or quicksort. Each follows its own set of steps or strategies. In such cases, it is essential to choose the right algorithm based on your specific requirements.

When utilizing data structures in an application, it is important to understand the time and memory consumption of their operations, like insertion, deletion, and lookup.

In both cases, familiarity with Big O notation is essential.

What is Big O Notation?

Big O notation is a way to describe the performance of an algorithm in computer science. It helps to analyze how the runtime (the time spent executing the program) or space requirements (the memory consumed by the program) of an algorithm change as the size of the input data increases.

Common Types of Operational Complexities:

We can categorize complexities into various types based on how running time grows with input size. In this section, we'll explore the common types of operational complexity.

1. Constant time — O(1):

Same execution time regardless of the input size.

Examples: 

  • Accessing elements in an array.
  • Accessing properties of an object.
  • Push and pop in a stack.
  • Enqueue and dequeue in a queue.
  • Accessing elements in a dictionary.
  • Swift Set operations such as checking value containment, insertion, and removal.

Note: Dictionaries and Sets are built on top of hash tables, which explains the O(1) complexity of the operations above. However, if the hash table suffers many collisions, these operations can degrade toward O(n).

Here is a sample Swift code with O(1) complexity. It prints the first element of a given array.

func printFirstElement(inputArray: [Int]) {
    if let firstElement = inputArray.first {
        print("First element: \(firstElement)")
    } else {
        print("Input array is empty.")
    }
}
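
To make the Set operations concrete, here is a minimal sketch of average-case O(1) operations on a hash-based Set (the visitedScreens name and values are hypothetical examples):

var visitedScreens: Set<String> = ["Home", "Profile"]

visitedScreens.insert("Settings")                 // Insertion: O(1) on average.
visitedScreens.remove("Profile")                  // Removal: O(1) on average.
let hasSeenHome = visitedScreens.contains("Home") // Containment check: O(1) on average.
print(hasSeenHome) // Prints: true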

2. Linear — O(n):

The performance is directly proportional to the size of the input.

Examples: 

  • Iteration through an array.
  • Linear search in an array.
  • Linear search in a linked list.
  • Copying elements between arrays.

Here is a sample Swift code with O(n) complexity. It loops through a given array and prints all elements.

func printAllElements(inputArray: [Int]) {
    for element in inputArray {
        print(element)
    }
}

3. Quadratic — O(n²):

Performance grows with the square of the input size.

Examples: 

  • Nested loops (comparing 2 arrays).
  • Bubble sort.
  • Insertion sort.
  • Selection sort.

Here is a sample Swift code with O(n²) complexity. It prints all possible pairs of elements from a given array.

func printAllPairs(inputArray: [Int]) {
    for i in inputArray {
        for j in inputArray {
            print("(\(i), \(j))")
        }
    }
}

4. Logarithmic — O(log n):

The performance is proportional to the logarithm of the input size. As the input grows, the execution time grows much more slowly: doubling the input adds only a constant amount of extra work.

Examples:

  • Binary search.
  • Searching in a binary search tree.

Here is an example Swift code with O(log n) complexity. This function searches for a target value in a sorted array and returns true if the target is found, and false otherwise.

func binarySearch(inputArray: [Int], target: Int) -> Bool {
    var low = 0
    var high = inputArray.count - 1

    while low <= high {
        let mid = low + (high - low) / 2
        if inputArray[mid] == target {
            return true
        } else if inputArray[mid] < target {
            low = mid + 1
        } else {
            high = mid - 1
        }
    }
    return false
}

5. Quasilinear — O(n log n):

Quasilinear time complexity grows faster than linear but slower than quadratic. It typically appears in efficient divide-and-conquer sorting algorithms, which perform O(log n) levels of O(n) work each.

Examples:

  • Merge sort.
  • Heap sort.

Here is an example Swift code with O(n log n) complexity. It uses a divide-and-conquer approach to efficiently sort an array of integers. The algorithm recursively divides the array into halves, sorts them, and then merges them back together in a sorted manner.

func mergeSort(array: [Int]) -> [Int] {
    guard array.count > 1 else {
        return array
    }

    let middle = array.count / 2
    let leftHalf = mergeSort(array: Array(array[..<middle]))
    let rightHalf = mergeSort(array: Array(array[middle...]))

    return merge(left: leftHalf, right: rightHalf)
}

func merge(left: [Int], right: [Int]) -> [Int] {
    var mergedArray = [Int]()
    var leftIndex = 0
    var rightIndex = 0

    while leftIndex < left.count && rightIndex < right.count {
        if left[leftIndex] < right[rightIndex] {
            mergedArray.append(left[leftIndex])
            leftIndex += 1
        } else {
            mergedArray.append(right[rightIndex])
            rightIndex += 1
        }
    }
    return mergedArray + Array(left[leftIndex...]) + Array(right[rightIndex...])
}

6. Exponential — O(2^n):

This complexity indicates that execution time roughly doubles with each additional element in the input. It is particularly common in naive recursive algorithms that repeatedly re-solve the same subproblems.

Example:

  • Fibonacci sequence (naive recursive implementation).

Here is the naive recursive Swift implementation. Each call spawns two more recursive calls, so the work roughly doubles with every increment of n.
func fibonacci(n: Int) -> Int {
    if n <= 1 {
        return n
    }
    return fibonacci(n: n - 1) + fibonacci(n: n - 2)
}

Order of time complexities

When considering the above complexities, we can express the order of complexity as follows.

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2^n)

Plotted against input size, these complexities diverge sharply: O(1) stays flat, O(log n) grows slowly, O(n) grows steadily, while O(n²) and O(2^n) climb steeply as the input grows.

Types of Asymptotic Notations in Complexity Analysis of Algorithms

The performance of an algorithm depends on its input. For some inputs the algorithm works efficiently, and for others it may not. Asymptotic notations are mathematical tools that allow you to analyze an algorithm's running time by describing its behavior as the input size grows. There are mainly three asymptotic notations; let's explore each of them.

1. Best Case (Ω(f(n))):

The best-case time complexity represents the minimum amount of time an algorithm takes for an input of size n; it is the scenario where the algorithm performs the fewest operations. For example, a best case of Ω(1) means that, in the most favorable scenario, the algorithm's execution time is constant regardless of input size. The best case is denoted with Ω (Big Omega).

2. Average Case (Θ(f(n))):

The average-case time complexity is the expected amount of time an algorithm takes over random inputs, which gives a more realistic view of performance under typical conditions. The notation Θ(f(n)) denotes a tight bound, where f(n) describes the average behavior. The average case is denoted with Θ (Big Theta).

3. Worst Case (O(g(n))):

The worst-case time complexity is the maximum amount of time an algorithm takes for any input of size n; it is the scenario where the algorithm performs the most operations. The notation O(g(n)) denotes an upper bound, where g(n) describes the worst-case behavior. The worst case is denoted with O (Big O).
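
To see all three notations on a single algorithm, here is a minimal linear search sketch (the linearSearch name is my own, for illustration). Its best case is Ω(1), when the target is the very first element; its average case is Θ(n); and its worst case is O(n), when the target is last or absent.

func linearSearch(_ inputArray: [Int], target: Int) -> Int? {
    for (index, element) in inputArray.enumerated() {
        if element == target {
            return index // Best case: found on the first comparison.
        }
    }
    return nil // Worst case: every element was examined.
}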

The following table summarizes the time complexity of common data structure operations in the average and worst cases, along with worst-case space complexity.

Figure 3: Common Data Structure Operations

Data Structure      | Access (Avg/Worst) | Search (Avg/Worst) | Insertion (Avg/Worst) | Deletion (Avg/Worst) | Space (Worst)
Array               | O(1) / O(1)        | O(n) / O(n)        | O(n) / O(n)           | O(n) / O(n)          | O(n)
Stack               | O(n) / O(n)        | O(n) / O(n)        | O(1) / O(1)           | O(1) / O(1)          | O(n)
Queue               | O(n) / O(n)        | O(n) / O(n)        | O(1) / O(1)           | O(1) / O(1)          | O(n)
Linked List         | O(n) / O(n)        | O(n) / O(n)        | O(1) / O(1)           | O(1) / O(1)          | O(n)
Hash Table          | —                  | O(1) / O(n)        | O(1) / O(n)           | O(1) / O(n)          | O(n)
Binary Search Tree  | O(log n) / O(n)    | O(log n) / O(n)    | O(log n) / O(n)       | O(log n) / O(n)      | O(n)

For common array sorting algorithms, the table below shows average- and worst-case time complexity, with space complexity given for the worst case.

Algorithm       | Time (Average) | Time (Worst) | Space (Worst)
Bubble Sort     | O(n²)          | O(n²)        | O(1)
Insertion Sort  | O(n²)          | O(n²)        | O(1)
Selection Sort  | O(n²)          | O(n²)        | O(1)
Merge Sort      | O(n log n)     | O(n log n)   | O(n)
Quicksort       | O(n log n)     | O(n²)        | O(log n)
Heap Sort       | O(n log n)     | O(n log n)   | O(1)

As programmers, we should be able to analyze and determine the complexity of a given algorithm, whether it’s one we’ve written ourselves or encountered elsewhere. Next, let’s explore how we can determine the Big O notation for any simple algorithm.

Analyze and determine the Big O notation of any simple algorithm (Reduced Big O)

Analyzing and determining the Big O notation of any simple algorithm involves applying several rules. In this section, we'll cover four of these rules, along with examples of how to apply them.

1. Drop the non-dominant terms

When expressing the complexity of an algorithm, you focus on the most significant part of the expression and omit less influential terms.

Example:

a). In the following example, the overall complexity is O(1) + O(n) + O(n²). However, you can omit O(1) and O(n) and keep O(n²), since it is the most significant term.

func someFunction(n: Int) {
    var a = 0
    a = 5    // O(1)
    a += 1
 
    for _ in 0..<n {
        // Simple statements - O(n)
    }

    for _ in 0..<n {
        // Simple statements
        for _ in 0..<n {
            // Simple statements - O(n^2)
        }
    }
}

b). In the following example, the 'if' branch takes O(n) and the 'else' branch takes O(n²). Since Big O describes the worst case, you take the 'else' branch and express the overall complexity as O(n²).

func someConditional(_ n: Int) {
    if n == 2 {
        for _ in 0..<n {
            // simple statements      O(n)
        }
    } else {
        for _ in 0..<n {
            // simple statements
            for _ in 0..<n {
                // simple statements  O(n²)
            }
        }
    }
}

2. Drop the constants

When analyzing the complexity of an algorithm, you disregard numerical coefficients. This means that if an algorithm has a time complexity expression like 5n², you simplify it to O(n²) by dropping the constant factor (in this case, 5). The focus is on capturing the growth rate of the function rather than precise numerical values.

Example:

In the following example, there are three sequential 'for' loops, so the overall complexity is O(3n). According to this rule, we drop the constant factor 3 and express the overall complexity as O(n).

func dropConstants(_ n: Int) {
    for _ in 0..<n {
        // simple statements     O(n)
    }

    for _ in 0..<n {
        // simple statements     O(n) 
    }

    for _ in 0..<n {
        // simple statements     O(n)
    }
}

3. Add sequential

When loops execute one after the other, we must add the complexity of each loop together.

Example:

In the following code example, there are two ‘for’ loops that run sequentially. The first one takes O(n), and the second one takes O(m). Therefore, we can express the overall time complexity of the function as O(n + m).

func addDominant(_ n: Int, _ m: Int) {
    for _ in 0..<n {
        // simple statements - O(n)
    }

    for _ in 0..<m {
        // simple statements - O(m)
    }
}

4. Multiply nested

When dealing with nested loops, we must multiply the complexity of each outer loop with the inner loop complexity.

Example:

In the following code example, the outer loop has a complexity of O(n), and the inner loop has a complexity of O(m). Therefore, the overall time complexity of the function is O(n * m).

func nested(_ n: Int, _ m: Int) {
    for _ in 0..<n {
        // simple statements - O(n)
        for _ in 0..<m {
            // simple statements - O(m)
        }
    }
}

In iOS application development, it’s crucial to choose the right and efficient data structures and algorithms. In the next section, we’ll discuss tips, along with real-world examples, to guide your selection of efficient data structures and algorithms for your program.

Tips for Choosing Efficient Data Structures and Algorithms 

Navigating the vast landscape of data structures and algorithms is a critical part of software development. In this section, I'll cover practical considerations for making these choices: the differences between arrays and sets, sorting in Swift, the efficiency trade-offs between arrays and linked lists, the benefits of dynamic programming, and the balance between algorithmic simplicity and efficiency for small datasets. Let's explore each of these in depth.

1. Array vs Set

Set and Dictionary are built on the hash table data structure, so their operation complexities match those of a hash table (see Figure 3: Common Data Structure Operations). Search, insertion, and deletion have lower complexity on a Set than on an Array. So, choose a Set if your dataset satisfies the following conditions:

a). There are no duplicates in the data.

b). You don't need to access elements by index or rely on their order.

In other words, if your dataset has duplicates, or you need ordered, index-based access, use an Array.
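
As a quick sketch of the difference (the values are arbitrary), compare membership checks on both structures:

let ids = Array(1...100_000)
let idSet = Set(ids)

// Array.contains scans elements one by one: O(n).
let foundInArray = ids.contains(99_999)

// Set.contains performs a hash lookup: O(1) on average.
let foundInSet = idSet.contains(99_999)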

2. Swift array sort method

Choosing the right sorting algorithm depends on factors like data volume, data type, and memory constraints. If you don't have specific requirements, the generally recommended approach is to use the sort() method provided by the Swift standard library. Apple does not commit to a specific algorithm in the documentation, and it may vary between Swift versions, potentially employing a hybrid mechanism chosen for the situation.
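
A minimal usage sketch (the scores values are arbitrary):

var scores = [42, 7, 19, 88, 3]

scores.sort()                         // Sorts in place, ascending: O(n log n).
let descending = scores.sorted(by: >) // Returns a new sorted array, leaving scores unchanged.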

3. Array vs linked list

Arrays have O(n) complexity for insertion and deletion at arbitrary positions, since existing elements must be shifted, whereas linked lists offer O(1) insertion and deletion once you hold a reference to the relevant node. Although the Swift standard library lacks a linked list class, if your dataset heavily involves insertion and deletion operations, consider implementing a linked list instead of using an array.
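
Since the standard library offers no linked list, here is a minimal singly linked list sketch (the type and method names are my own, for illustration). Inserting at the head is O(1) because, unlike inserting at the front of an Array, no existing elements need to be shifted:

final class ListNode<T> {
    var value: T
    var next: ListNode<T>?
    init(value: T) { self.value = value }
}

final class LinkedList<T> {
    private(set) var head: ListNode<T>?

    // O(1): create a node and repoint head; nothing is shifted or copied.
    func insertAtHead(_ value: T) {
        let node = ListNode(value: value)
        node.next = head
        head = node
    }

    // O(1) once you already hold a reference to the preceding node.
    func removeAfter(_ node: ListNode<T>) {
        node.next = node.next?.next
    }
}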

4. Dynamic programming

Dynamic Programming algorithms solve each subproblem only once and save their answers for future use. This avoids the need to recompute the answer every time the subproblem is encountered. Techniques like caching and memoization can be considered.

As an example, let’s consider the Fibonacci series and how it can be improved using memoization techniques. The Fibonacci series is defined by the recurrence relation:

F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1

A naive recursive implementation follows directly from the definition:

func fibonacciRecursive(_ n: Int) -> Int {
    if n <= 1 {
        return n
    } else {
        return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2)
    }
}

Complexity: O(2^n)

While this algorithm works for small values of n, for larger values, it takes a considerable amount of time to generate the sequence due to repeated calculations. Memoization can be used to optimize this by storing each F(n) value in a dictionary once it’s calculated and using it when needed, reducing the overall complexity.

Here is the optimized code for the Fibonacci series using the memoization technique:

var memo = [Int: Int]() // Cache of previously computed Fibonacci numbers.

func fib(n: Int) -> Int {
    if n == 0 { return 0 }
    else if n == 1 { return 1 }

    // Reuse the answer if this subproblem has already been solved.
    if let result = memo[n] { return result }

    memo[n] = fib(n - 1) + fib(n - 2)

    return memo[n]!
}

Complexity after applying the memoization technique: O(n)

5. Algorithm/Recursion for Small Amounts of Data:

Sometimes, inefficient algorithms or recursion may be easy to read and understand quickly. If you are working with a small dataset, using such algorithms or recursion might be acceptable for your program. However, it is crucial to document why you chose that specific algorithm and how you can improve it for larger datasets.

What is the Next Step?

In this article, we primarily discussed how to consider complexity when choosing data structures and algorithms and how to assess the efficiency of your own algorithms.

As mentioned earlier, linked lists, stacks, queues, and trees do not have dedicated classes in the standard Swift library. So, as the next step, you should learn how to implement them and add common functions using Swift, understanding how their complexities work in different scenarios.
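
For example, a stack can be sketched on top of Array in a few lines (the Stack type here is my own illustration, not a standard library type), with amortized O(1) push and pop:

struct Stack<Element> {
    private var storage: [Element] = []

    mutating func push(_ element: Element) { storage.append(element) } // Amortized O(1).
    mutating func pop() -> Element? { storage.popLast() }              // O(1).
    func peek() -> Element? { storage.last }                           // O(1).
    var isEmpty: Bool { storage.isEmpty }
}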

As a programmer, problem-solving skills are essential. Platforms like HackerRank, GeeksforGeeks, and Codility offer problem-solving challenges. Engaging in these exercises whenever possible will improve your analytical and problem-solving skills.

Conclusion

As we wrap up, always remember that in the world of mobile devices including iOS and others, where resources are limited, writing efficient code is not just a skill; it’s a superpower. Armed with a solid understanding of Big O notation, you are now equipped to navigate the challenges of iOS app development.

The art of complexity analysis helps you make appropriate choices, and it also aids in determining the complexity of your own implemented algorithms. Regular practice of these concepts is essential for your programming journey.

Thank you for reading, and happy coding!

Hemal Swarnasiri

Associate Technical Lead
Hemal Swarnasiri, an Associate Technical Lead - Mobile at Surge Global, brings over 12 years of expertise. He holds a Bachelor of Computer Science degree from the University of Colombo and demonstrates exceptional proficiency in iOS application development, architecture design, and optimization. His skill set extends to other mobile development technologies such as Android, React Native, Flutter, and Xamarin. Hemal's career highlights include contributing to diverse large-scale enterprise mobile applications and delivering high-quality solutions.
