Data structures are organized formats for storing and managing data efficiently. Choosing the right data structure directly affects the time and space complexity of the algorithms that use it.
Array: a contiguous block of memory storing fixed-size elements. O(1) access by index; O(n) insertion/deletion except at the end. Use for static collections and when index-based access is essential.
// Example: array of ints
int a[5];      // fixed-size block of five ints
a[0] = 10;     // O(1) write via index
Linked list: nodes containing data and a pointer to the next node (singly) or to both neighbors (doubly). Dynamic size; O(1) insertion/deletion at a known position, O(n) random access. Used to implement stacks, queues, and adjacency lists.
// Example: singly linked node
struct Node { int data; struct Node* next; };
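A minimal sketch of the O(1) insertion noted above, splicing a new node in after a given one (uses the Node struct just shown; malloc failure handling omitted):
#include <stdlib.h>
// Splice a new node in after 'pos': only two pointers change, so O(1).
void insert_after(struct Node* pos, int value) {
    struct Node* n = malloc(sizeof(struct Node));
    n->data = value;
    n->next = pos->next;
    pos->next = n;
}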
Stack: LIFO, via push/pop. Useful for function-call management, DFS, and undo. Queue: FIFO, via enqueue/dequeue. Useful for BFS and scheduling. Variants: circular queue, deque, priority queue (heap-backed).
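A sketch of an array-backed stack showing the LIFO push/pop discipline (fixed capacity; names are illustrative):
#define CAP 100
int stack[CAP];
int top = -1;                          // -1 means empty

int push(int x) {                      // 0 on success, -1 if full
    if (top + 1 == CAP) return -1;
    stack[++top] = x;
    return 0;
}

int pop(int* out) {                    // 0 on success, -1 if empty
    if (top < 0) return -1;
    *out = stack[top--];
    return 0;
}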
Trees: hierarchical structures. Binary tree: each node has at most two children. Binary search tree (BST): left < node < right, giving O(log n) average search/insert/delete. Self-balancing trees (AVL, Red-Black) guarantee O(log n) worst case.
// Inorder traversal of BST prints sorted order
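A minimal sketch of that traversal, assuming a node with left/right pointers (TreeNode is an illustrative name):
#include <stdio.h>
struct TreeNode { int data; struct TreeNode *left, *right; };

void inorder(struct TreeNode* root) {
    if (root == NULL) return;
    inorder(root->left);               // visit left subtree first
    printf("%d ", root->data);         // then the node: sorted order for a BST
    inorder(root->right);              // then right subtree
}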
Heap: a complete binary tree represented as an array. Max-heap: parent >= children; min-heap: parent <= children. Useful for priority queues and heapsort (O(n log n)).
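A sketch of the sift-down step that restores the max-heap property, the core of both heap deletion and heapsort (0-based array indices assumed):
// Restore the max-heap property at index i of heap a[0..n-1].
void sift_down(int a[], int n, int i) {
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;   // children of i
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;                        // parent >= children: done
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;                                     // continue one level down
    }
}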
Graph: vertices connected by edges. Representations: adjacency list (sparse graphs), adjacency matrix (dense graphs). Traversals: BFS (shortest paths in unweighted graphs), DFS. Algorithms: Dijkstra (shortest paths), Kruskal/Prim (minimum spanning tree).
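A sketch of BFS over an adjacency list; for brevity each neighbor list is a fixed-size row rather than dynamically allocated (MAXN and names are illustrative):
#include <string.h>
#define MAXN 100
int adj[MAXN][MAXN], deg[MAXN];   // adjacency list: adj[u] holds deg[u] neighbors
int dist[MAXN], q[MAXN];

void add_edge(int u, int v) { adj[u][deg[u]++] = v; adj[v][deg[v]++] = u; }

// BFS from s over n vertices: dist[v] = fewest edges from s, or -1 if unreachable.
void bfs(int s, int n) {
    memset(dist, -1, n * sizeof(int));
    int head = 0, tail = 0;
    dist[s] = 0;
    q[tail++] = s;
    while (head < tail) {
        int u = q[head++];
        for (int i = 0; i < deg[u]; i++) {
            int v = adj[u][i];
            if (dist[v] == -1) { dist[v] = dist[u] + 1; q[tail++] = v; }
        }
    }
}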
Hash table: maps keys to indices via a hash function. Collisions are handled by chaining or open addressing (linear probing, quadratic probing, double hashing). Average O(1) lookup/insert/delete; worst case O(n) under poor hashing.
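A sketch of collision handling by chaining: each bucket holds a linked list of entries, so colliding keys coexist (bucket count and the modulo hash are deliberately simple):
#include <stdlib.h>
#define NBUCKETS 16
struct Entry { int key, value; struct Entry* next; };
struct Entry* buckets[NBUCKETS];

unsigned hash(int key) { return (unsigned)key % NBUCKETS; }

// Insert or update; average O(1) with a well-spread hash.
void put(int key, int value) {
    unsigned h = hash(key);
    for (struct Entry* e = buckets[h]; e; e = e->next)
        if (e->key == key) { e->value = value; return; }   // update in place
    struct Entry* e = malloc(sizeof *e);
    e->key = key; e->value = value;
    e->next = buckets[h];                                  // push onto the chain
    buckets[h] = e;
}

struct Entry* get(int key) {                               // NULL if absent
    for (struct Entry* e = buckets[hash(key)]; e; e = e->next)
        if (e->key == key) return e;
    return NULL;
}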
Sorting: bubble O(n^2), insertion O(n^2), merge O(n log n), quicksort O(n log n) average (O(n^2) worst), heapsort O(n log n). Stable sorts preserve the relative order of equal elements.
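A sketch of quicksort with a Lomuto partition; picking the last element as pivot is exactly what makes already-sorted input hit the O(n^2) worst case:
// Sort a[lo..hi] in place; average O(n log n), worst O(n^2).
void quicksort(int a[], int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], i = lo;                 // Lomuto: pivot is the last element
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;     // pivot lands at its final index i
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}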
Complexity analysis: time complexity describes how runtime grows with input size; space complexity measures memory usage. Big-O gives an upper bound, Big-Theta a tight bound, Big-Omega a lower bound. Amortized analysis is useful for dynamic arrays and hash tables.
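A sketch of the doubling dynamic array that motivates amortized analysis: an append is O(n) only when capacity doubles, so n appends cost O(n) total, i.e. amortized O(1) each (Vec and its fields are illustrative names; realloc failure handling omitted):
#include <stdlib.h>
struct Vec { int* data; int len, cap; };

// Append x; capacity doubles when full, so n appends cost O(n) total.
void vec_push(struct Vec* v, int x) {
    if (v->len == v->cap) {                    // rare case: grow and copy once
        v->cap = v->cap ? 2 * v->cap : 1;
        v->data = realloc(v->data, v->cap * sizeof(int));
    }
    v->data[v->len++] = x;                     // common case: plain O(1) write
}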
Specialized structures: tries (prefix trees) for string operations, segment trees and Fenwick trees for range queries, disjoint set union (DSU) for union-find problems, B-trees for databases, suffix arrays/tries for string matching.
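As one example from this list, a sketch of DSU with path halving (a simple form of path compression) and union by size, the combination that makes union-find near-constant amortized (MAXN is illustrative):
#define MAXN 100
int parent[MAXN], sz[MAXN];

void dsu_init(int n) { for (int i = 0; i < n; i++) { parent[i] = i; sz[i] = 1; } }

int find(int x) {                              // path halving compresses as it walks
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

void unite(int a, int b) {                     // union by size: attach smaller root
    a = find(a); b = find(b);
    if (a == b) return;
    if (sz[a] < sz[b]) { int t = a; a = b; b = t; }
    parent[b] = a;
    sz[a] += sz[b];
}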
Cache-friendly structures (arrays) often outperform pointer-heavy ones (linked lists) because of memory locality; also weigh the space overhead of pointers and per-node metadata.
1. Which data structure provides O(1) average time for lookup, insert and delete?
2. Stack follows which order?
3. Queue is best described as:
4. Which operation is O(1) in a singly linked list (given a pointer to the node)?
5. Average time complexity for search in a balanced BST is:
6. A max-heap ensures:
7. Which traversal finds shortest path in unweighted graph?
8. DFS uses which data structure for iterative implementation?
9. Adjacency list representation is preferred when graph is:
10. Which collision resolution uses linked lists at each bucket?
11. Trie is mainly used for:
12. DSU (Disjoint Set Union) supports which operations efficiently?
13. Segment tree is primarily used for:
14. Fenwick tree (BIT) is used for:
15. Time complexity of BFS on adjacency list is:
16. Average time complexity of quicksort is:
17. Which sort is stable and O(n log n)?
18. Which sort is in-place?
19. Topological sort applies to:
20. Kruskal's algorithm finds:
21. Bloom filter is used for:
22. Amortized analysis is useful for:
23. Reversing a singly linked list can be done in:
24. Which structure has better cache locality?
25. Which DS can help check palindrome efficiently?
26. Binary search requires:
27. Load factor in hash table is defined as:
28. Red-Black trees ensure:
29. Which traversal uses more memory in worst case for wide graphs?
30. Traveling Salesman Problem (TSP) is:
31. Suffix arrays are used for:
32. Bellman-Ford algorithm can handle:
33. Which data structure is best for implementing LRU cache?
34. Bloom filters can have:
35. Good hash functions aim for:
36. B-Trees are commonly used in databases because:
37. Persistent data structures preserve:
38. Doubling strategy for dynamic array gives amortized insert time:
39. Cuckoo hashing attempts to:
40. Linked lists are preferred over arrays when:
41. Comparator function for sorting should be:
42. Edge list representation stores:
43. Big-O focuses on:
44. Stability in sorting preserves:
45. Floyd's cycle-finding algorithm detects cycle in linked list using:
46. Articulation points in graphs are found using:
47. For very sparse graphs, preferred storage is:
48. Priority queue can be implemented with:
49. Strongly connected components in directed graphs can be found by:
50. Which data structure supports fast prefix queries for strings?