Titas Dey

Been learning about asymptotic analysis.

Definition: Asymptotic analysis is the study of the growth of an algorithm’s running time or space requirement as a function of the input size n, for large values of n, while ignoring constant factors and lower-order terms.

In other words, we analyze how the resource consumption of an algorithm scales as the problem size becomes large.

Time Complexity != Time taken by the algorithm to run

Time complexity is a function T(n) that represents the number of elementary operations performed by an algorithm as a function of the input size n ∈ ℕ.

T(n) = number of primitive steps executed for input size n

  • Asymptotic analysis is independent of

    • Hardware specifications
    • Programming language
    • Operating system and runtime environment
    • Input distribution (for a fixed n)
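
To make "counting primitive steps" concrete, here is a tiny sketch of my own (the function name and what counts as a "step" are my choices for illustration): a summation loop instrumented with a counter. The exact count depends on which operations we decide to count, but the linear growth of T(n) looks the same on any machine, in any language.

def summation_steps(n):
    steps = 0
    total = 0
    steps += 1                    # initialising total
    for i in range(1, n + 1):
        total += i
        steps += 1                # one addition per iteration
    steps += 1                    # returning the result
    return total, steps

for n in (10, 100, 1000):
    _, t = summation_steps(n)
    print(n, t)                   # T(n) = n + 2, i.e. linear in n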

Big-Oh Notation (O-notation)

Definition: Let f(n) and g(n) be functions mapping positive integers to positive real numbers. We say that g(n) is O(f(n)) (read “g of n is big-oh of f of n”) if there exist positive constants c ∈ ℝ⁺ and n₀ ∈ ℕ such that:

g(n) ≤ c·f(n) ∀ n ≥ n₀

The function f(n) is called an asymptotic upper bound for g(n). Intuitively, this means that g(n) grows no faster than f(n), up to a constant multiplicative factor, for sufficiently large values of n.

Visually, if we plot both g(n) and c·f(n) and can find some point n₀ on the input axis beyond which c·f(n) always stays above g(n), then g(n) is O(f(n)). In practice, we use Big-Oh when we want an upper bound on growth, e.g. the worst-case performance of an algorithm.

eg:

Say f(n) = 3n² + 5n + 10 and g(n) = n². Then both f(n) = O(g(n)) and g(n) = O(f(n)) hold.

First, to show f(n) = O(g(n)):

We need to find constants c ∈ ℝ⁺ and n₀ ∈ ℕ such that:

f(n) ≤ c·g(n) ∀ n ≥ n₀

Substituting: 3n² + 5n + 10 ≤ c·n²

For n ≥ 1: 5n ≤ 5n² and 10 ≤ 10n²

Therefore: 3n² + 5n + 10 ≤ 3n² + 5n² + 10n² = 18n²

Conclusion: Choose c = 18 and n₀ = 1. Then f(n) ≤ 18·g(n) ∀ n ≥ 1.

Hence, f(n) = O(g(n)) or more specifically, f(n) = O(n²).

Next, to show g(n) = O(f(n)), we need constants such that: g(n) ≤ c·f(n) ∀ n ≥ n₀

Substituting: n² ≤ c(3n² + 5n + 10)

Since f(n) = 3n² + 5n + 10 ≥ 3n² ≥ n² for all n ≥ 1, we have:

n² ≤ 3n² + 5n + 10

This is clearly true for all n ≥ 1.

Conclusion: Choose c = 1 and n₀ = 1. Then g(n) ≤ 1·f(n) ∀ n ≥ 1.

Hence, g(n) = O(f(n)).

Big-Oh Represents a Set of Functions

Say some T(n) = n³. Then:

3n² + 5n + 10 = O(T(n)) ... (i)
5n + 7 = O(T(n)) ... (ii)
log(n) = O(T(n)) ... (iii)

If we check (i), (ii) and (iii) analytically, all three are true. But the left-hand sides are very different functions, i.e. (i) ≠ (ii) ≠ (iii).

When we write f(n) = O(g(n)), the "=" is not a true equality.

More Precisely: O(g(n)) represents an infinite set of all functions that grow no faster than g(n). So f(n) = O(g(n)) really means f(n) ∈ O(g(n))

Example: O(n²) is the set containing functions like:

  • n²
  • 3n² + 5n + 10
  • n² + 100n
  • 50n + 1000
  • 5 (a constant)
  • log n

All these functions belong to the set O(n²) because they all grow no faster than n².

When we write O(n²), we're saying that as the input size n becomes very large, the running time grows at most proportionally to n². The constant factors and lower-order terms become negligible in comparison to the dominant term, which is why we drop them in Big-Oh notation. For instance, an algorithm that takes 3n² + 5n + 10 operations is simply O(n²) because the n² term dominates as n approaches infinity.

i.e. we simply ignore the lower-order and constant terms when we're trying to find the time complexity in Big-Oh.
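
As a quick numeric sanity check of the bound derived above (c = 18, n₀ = 1), and of why the dropped terms stop mattering, the snippet below (my own illustration) verifies f(n) ≤ 18·n² and shows the ratio f(n)/n² settling towards the leading coefficient 3:

def f(n):
    return 3 * n**2 + 5 * n + 10

for n in (1, 10, 100, 10000):
    assert f(n) <= 18 * n**2      # the bound f(n) <= 18·g(n) for n >= 1
    print(n, f(n) / n**2)         # ratio approaches 3 as n grows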


Some Commonly Seen Cases

O(1) – Constant Time: These operations take the same amount of time regardless of input size. Accessing an array element by index, performing arithmetic operations, and returning a value are all O(1) operations. No matter if our array has 10 elements or 10 million elements, accessing the element at index 5 takes the same time.

O(log n) – Logarithmic Time: Algorithms that repeatedly divide the problem space in half exhibit logarithmic complexity. Binary search is the classic example, where we eliminate half of the remaining elements with each comparison. As n doubles, the running time increases by only a constant amount.

O(n) – Linear Time: When we must examine every element exactly once, we have linear complexity. A simple loop that processes each element in an array demonstrates O(n) behavior. If we double the input size, the running time doubles as well.

O(n log n) – Linearithmic Time: Efficient sorting algorithms like merge sort and heap sort operate in O(n log n) time. This complexity arises when we perform a logarithmic number of linear operations or divide-and-conquer with linear merging.
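
For reference, a minimal merge sort sketch (my own, not a library implementation): the list is halved O(log n) times, and each level of recursion does O(n) work merging, which is where the n log n comes from.

def merge_sort(arr):
    if len(arr) <= 1:                      # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])           # sort each half recursively
    right = merge_sort(arr[mid:])
    merged = []                            # linear-time merge of the two halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))      # [1, 2, 3, 5, 7, 9]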

O(n²) – Quadratic Time: Nested loops where each loop runs n times typically result in quadratic complexity. Simple sorting algorithms like bubble sort and selection sort fall into this category. Doubling the input size quadruples the running time.

O(n³) – Cubic Time: Triple-nested loops often produce cubic complexity. Matrix multiplication using the naive algorithm is a prime example.
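
A sketch of the naive n × n matrix multiplication: three nested loops over n give n·n·n multiply-add steps, hence O(n³).

def mat_mul(A, B):
    n = len(A)                            # assumes square n x n matrices
    C = [[0] * n for _ in range(n)]
    for i in range(n):                    # n iterations
        for j in range(n):                # x n iterations
            for k in range(n):            # x n iterations -> O(n³)
                C[i][j] += A[i][k] * B[k][j]
    return C

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]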

O(2ⁿ) – Exponential Time: The recursive Fibonacci function is a classic example, where each call branches into two more calls. The running time doubles with each increment of n, making it impractical for even moderately sized inputs.
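
The naive recursive Fibonacci mentioned above, for reference: every call (except the base cases) spawns two more calls, so the call tree has on the order of 2ⁿ nodes.

def fib(n):
    if n <= 1:                              # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)          # two recursive calls -> O(2^n)

print(fib(10))   # 55 -- already painfully slow well before n = 50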

O(n!) – Factorial Time: Generating all permutations of n elements produces factorial complexity. For n = 10, that's 3,628,800 permutations. Yes, I calculated that in my head, absolutely did not google it.
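
A quick check of that factorial claim using only the standard library (itertools.permutations and math.factorial):

import math
from itertools import permutations

n = 7                                          # keep n small: n! blows up fast
count = sum(1 for _ in permutations(range(n)))
print(count, math.factorial(n))                # 5040 5040
print(math.factorial(10))                      # 3628800, the number quoted above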

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
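
To get a feel for how far apart these classes are, we can evaluate each growth function at a single modest input size, say n = 20:

import math

n = 20
print("O(1)       :", 1)
print("O(log n)   :", round(math.log2(n), 2))      # 4.32
print("O(n)       :", n)                           # 20
print("O(n log n) :", round(n * math.log2(n), 2))  # 86.44
print("O(n^2)     :", n**2)                        # 400
print("O(n^3)     :", n**3)                        # 8000
print("O(2^n)     :", 2**n)                        # 1048576
print("O(n!)      :", math.factorial(n))           # 2432902008176640000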


Analyzing Some Algorithms

Example 1: Array Sum

def array_sum(arr):
    total = 0
    for element in arr:
        total += element
    return total

The loop runs once per element. Each iteration does constant work. Time complexity: O(n).


Example 2: Finding Maximum Element

def find_max(arr):
    if len(arr) == 0:
        return None
    max_val = arr[0]
    for i in range(1, len(arr)):
        if arr[i] > max_val:
            max_val = arr[i]
    return max_val

Single pass through the array with constant-time operations per iteration. Time complexity: O(n).


Example 3: Bubble Sort

def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

The nested loops perform (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, which is roughly n²/2. Time complexity: O(n²).


Analyzing a Slightly More Complex Program

def binary_search(arr, target):
    left = 0
    right = len(arr) - 1
    
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

Initially, the search space has size n. At the start of each iteration:

  • 1st iteration: n elements
  • 2nd iteration: n/2 elements
  • 3rd iteration: n/4 elements
  • kth iteration: n/2^(k-1) elements

The loop continues until the search space is reduced to 1 element or becomes empty. We need to find k such that:

n/2^(k-1) ≤ 1

Solving for k:

  • n ≤ 2^(k-1)
  • log₂(n) ≤ k - 1
  • k ≥ log₂(n) + 1

Therefore, the loop executes at most ⌈log₂(n)⌉ + 1 times.

Since each iteration performs O(1) operations (comparisons, assignments, arithmetic), the total time complexity is: T(n) = O(1) + O(log n)·O(1) + O(1) = O(log n)

i.e. T(n) = O(log n) for binary search.
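
As a sanity check of the ⌈log₂(n)⌉ + 1 bound, here is a counting variant of the binary_search above (same logic, with an iteration counter added; the missing target -1 forces a worst-case run):

import math

def binary_search_steps(arr, target):
    left, right = 0, len(arr) - 1
    iterations = 0
    while left <= right:
        iterations += 1
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid, iterations
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1, iterations

for n in (16, 1024, 1000000):
    _, steps = binary_search_steps(list(range(n)), -1)    # target absent
    print(n, steps, math.ceil(math.log2(n)) + 1)          # steps <= ceil(log2 n) + 1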

Conclusion:

  • Big-Oh gives an upper bound on how an algorithm scales as input size increases
  • We analyse for n → ∞
  • Constant terms and lower order terms are ignored
print("Titas,Signing Out!")

Well, the title is clickbait, but it's true in a limited sense.

If you've written some C code, you've probably used most of the features of C, like structs, functions, pointers, arrays, and perhaps even the preprocessor.

However, I will talk about one of the lesser-used features of C: the union.


The Union

A union allocates a single shared block of memory, large enough to hold its largest member (with some padding, depending on alignment). Unlike a struct, which allocates distinct memory for each member, a union allows multiple members to occupy the same memory space.

For example:

#include<stdio.h>
#include<string.h>

struct xyz {
    int x;
    float y;
    char z[11];
};

union tuv {
    int t;
    float u;
    char v[11];
};

int main(void) {
    struct xyz st_eg;
    union tuv un_eg;

    printf("%zu\n", sizeof(st_eg)); // O/P: 20 bytes (4 + 4 + 11 + 1 byte padding)
    printf("%zu\n", sizeof(un_eg)); // O/P: 12 bytes (11 bytes for v + 1 byte padding)

    strcpy(un_eg.v, "HelloWorld"); // 10 chars + '\0' fit in v[11]

    printf("%s\n", un_eg.v);  // O/P: HelloWorld 
    printf("%f\n", un_eg.u);  // O/P: 1143139122437582505939828736.000000

    return 0;
}

Here, the integer, the float, and the character array all occupy the same memory region. When "HelloWorld" is copied into the character array v, reading that same memory as a float reinterprets the first four bytes of the string as a float, which is why u prints the garbage-looking value above.


  • But why do we need a union?
  • Why allocate memory only for the largest member, instead of for each of them as a struct does?

A union is valuable when you want different interpretations of the same memory.


Example 1: Storing an IPv4 Address

#include<stdio.h>

typedef union {
    unsigned int ip_add;
    unsigned char bytes[4];
} ipv4_add;

int main(void) {
    ipv4_add my_address = {0};

    my_address.bytes[0] = 127;
    my_address.bytes[1] = 55;
    my_address.bytes[2] = 115;
    my_address.bytes[3] = 0;

    printf("%x\n", my_address.ip_add); // O/P: 73377f
    return 0;
}

Explanation

Using a union, we can store both the integer representation and the byte-wise representation of an IPv4 address within the same space. This approach eliminates the need for explicit bit-shifting or manual conversions. Keep in mind that how the four bytes map onto the integer depends on the machine's byte order; the output above (0x0073377f printed as 73377f) is what a little-endian machine produces.


Example 2: Unions in Embedded Programming

Unions are widely used in embedded systems to represent hardware registers that can be accessed both as a whole and as individual fields.

#include<stdio.h>

union HWRegister {
    struct { // anonymous struct (a C11 feature)
        unsigned char parity;
        unsigned char control;
        unsigned char stopbits;
        unsigned char direction;
    };
    unsigned int reg;
};

int main(void) {
    union HWRegister gpioa;

    gpioa.reg = 0x14424423;
    printf("%x\n", gpioa.direction); // O/P: 14 (the top byte of 0x14424423 on a little-endian machine)

    return 0;
}

In this example, the same memory can be accessed as a single 32-bit register or through specific bit fields. This design improves clarity while maintaining memory efficiency — a common requirement in low-level programming.


Example 3: A Glimpse of Polymorphism in C

Now, coming back to the title, we can do something similar to OOP-style polymorphism in C:

#include<stdio.h>

typedef enum {
    JSON_STR,
    JSON_BYTE,
    JSON_INT,
} json_type_t;

#define JSON_MAX_STR 64

typedef struct {
   json_type_t type;
   union {
       char str[JSON_MAX_STR];
       char byte;
       int number;
   };
} json_t;

void printJSON(json_t *json) {
    switch (json->type) {
        case JSON_STR:
            printf("%s\n", json->str);
            break;
        case JSON_BYTE:
            printf("%c\n", json->byte);
            break;
        case JSON_INT:
            printf("%d\n", json->number);
            break;
    }
}

int main(void) {
    json_t myJSON;
    myJSON.type = JSON_INT;
    myJSON.number = 97;

    printJSON(&myJSON);
    return 0;
}

Here, the structure json_t can hold one of several possible data types — a string, a single byte, or an integer. The active type is determined at runtime using the type field.

There is a catch: the compiler does not check that the type tag actually matches the union member we wrote to, so if we do

myJSON.type = JSON_STR; // instead of JSON_INT
myJSON.number = 97;
printJSON(&myJSON); // O/P: a

  • The output will be: a (the ASCII character with value 97, because the first byte of the int is read back as the start of the string on a little-endian machine)

And that's all.

print("Titas , signing out ")

Arrow Function vs Regular Function in JavaScript

Yeah, everyone already knows the syntax is different. No need to waste time on that.

Let’s look at what actually matters — how they behave differently.


1. arguments

Regular functions come with this built-in thing called the arguments object. Even if you don't define any parameters, you can still access whatever got passed when the function was called.

Arrow functions? Nope. They don't get their own arguments object. Try using it, and it'll just throw a ReferenceError (unless there's an enclosing regular function whose arguments the arrow function can see).

Regular function:

function test() {
  console.log(arguments);
}

test(1, "hello world", true); 
// o/p
// { '0': 1, '1': 'hello world', '2': true }

Arrow function:

const test = () => {
  console.log(arguments); 
};

test(1, "hello world", true); // Throws ReferenceError

2. return

Arrow functions support implicit return; regular functions don't. If an arrow function's body is a single expression (optionally wrapped in parentheses) instead of a braced block, its value is returned automatically. Regular functions always require the return keyword.

Regular function:

function add(a, b) {
 const c = a + b;
}

console.log(add(5, 10)); // o/p : undefined 

Arrow function:

const add = (a, b) => (a + b);

console.log(add(5, 10)); // o/p : 15

3. this

Arrow functions do not have their own this binding. Instead, they lexically inherit this from the surrounding (parent) scope at the time of definition. This means the value of this inside an arrow function is fixed and cannot be changed using .call(), .apply(), or .bind().

Regular functions, on the other hand, have dynamic this binding — it depends on how the function is invoked. When called as a method, this refers to the object; when called standalone, this can be undefined (in strict mode) or refer to the global object (in non-strict mode).

Because of this behavior, arrow functions are commonly used in cases where you want to preserve the outer this context, such as in callbacks or within class methods that rely on this from the class instance.

Regular function :

const obj = {
  name: "Titas",
  sayHi: function () {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p : Titas

Arrow function :

const obj = {
  name: "Titas",
  sayHi: () => {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p :  undefined

print("Titas signing out !")

The college is really gonna teach us Data Structures and Algorithms in C. And make me write 100-line programs on paper for lab manuals.