Friday, February 13, 2026

Special matrix - Determine the sum of all the elements of the matrix (Medium)

Index of "Algorithms: Design and Analysis"
<<< Previous    Next >>>

Note: I'd mark this problem as 'medium complexity', though it is rated as 'easy' on HackerEarth -- probably because it tests the student on some well-known tips, tricks and shortcuts.


Solve on Hacker Earth
Problem

You have an N x M matrix, where 1 to N are rows and 1 to M are columns.

Element at the intersection of the i-th row and the j-th column is equal to F(i + j), where F(x) is equal to the number of prime divisors of x.

Determine the sum of all the elements of the matrix.

Input

The first line contains an integer T denoting the number of test cases.
For each test case, the first line contains two space-separated integers N, M.
Output

For each test case, print the sum of all the elements of the matrix in a new line.

Constraints


Sample Input
1
3 3
Sample Output
10
Time Limit: 1 sec
Memory Limit: 256 MB
Source Limit:
Explanation
The matrix and the corresponding values are:

F(2)=1  F(3)=1  F(4)=1
F(3)=1  F(4)=1  F(5)=1
F(4)=1  F(5)=1  F(6)=2

Hence, the sum of all the elements is 10.

Code first (and explanation later)

import sys
input = sys.stdin.readline

def precompute_f(max_val):
    F = [0] * (max_val + 1)
    for i in range(2, max_val + 1):
        if F[i] == 0:  # prime
            for j in range(i, max_val + 1, i):
                F[j] += 1
    return F

def solve():
    T = int(input())
    queries = []
    max_sum = 0
    
    for _ in range(T):
        N, M = map(int, input().split())
        queries.append((N, M))
        max_sum = max(max_sum, N + M)
    
    F = precompute_f(max_sum)
    
    for N, M in queries:
        total = 0
        
        for s in range(2, N + M + 1):
            low = max(1, s - M)
            high = min(N, s - 1)
            
            if low <= high:
                count = high - low + 1
                total += F[s] * count
        
        print(total)

solve()

User Prompt

I am learning Data Structures and Algorithms.

I will provide a screenshot of a problem statement. 

Your task is NOT to directly give me the final code solution unless I explicitly ask for it.

Instead, please do the following:

1. First, carefully restate the problem in your own words to confirm correct understanding.
   - If anything is unclear in the screenshot, ask me clarification questions before proceeding.

2. Identify:
   - What category of problem this is (e.g., prefix sum, greedy, DP, two pointers, binary search, graph, etc.)
   - What core DSA concepts are being tested
   - What mathematical observations (if any) are involved
   - What constraints are important and how they affect the approach

3. Explain:
   - What naive/brute-force solution would look like
   - Why that may or may not work under the given constraints
   - What optimization insight is required

4. Clearly explain the key trick or insight needed to solve it.
   - Why does that trick work?
   - What pattern should I remember for future problems?

5. Provide:
   - The time complexity and space complexity of the optimal approach
   - Why it satisfies the constraints

6. If there are multiple valid approaches, compare them.

7. Only after all conceptual explanation, provide clean and correct Python code.

8. Avoid guessing unclear details.
   - If any part of the screenshot is ambiguous, ask me instead of assuming.

Focus on correctness over speed. Be explicit in reasoning.


Excellent — this is a very strong math + counting + precomputation problem. Let’s go step-by-step exactly in the structured way you requested.


1️⃣ Restating the Problem (Confirm Understanding)

We are given:

  • A matrix of size N × M

  • Each element at position (i, j) is:

A[i][j] = F(i + j)

Where:

  • F(x) = number of distinct prime divisors of x.

We must compute:

\sum_{i=1}^{N} \sum_{j=1}^{M} F(i+j)

We do this for T test cases.


Example: N = 3, M = 3

Matrix becomes:

i+j : F(i+j)
2 : 1
3 : 1
4 : 1
5 : 1
6 : 2

The diagonals contain 1, 2, 3, 2 and 1 cells respectively, so:

Final sum = 1·1 + 2·1 + 3·1 + 2·1 + 1·2 = 10.


2️⃣ Identify Category & Concepts

📌 Category

This is:

Number Theory + Frequency Counting + Prefix / Diagonal Counting + Sieve


📌 Core DSA Concepts Being Tested

  • Prime factorization

  • Sieve of Eratosthenes

  • Frequency counting trick

  • Observing symmetry in matrix

  • Avoiding O(NM) brute force


📌 Mathematical Observation

Important:

The matrix depends only on:

i + j

So instead of thinking in 2D:

We count how many times each value of (i + j) appears.


Range of (i + j)

Minimum:

1 + 1 = 2

Maximum:

N + M

So we only care about numbers in range:

[2, N + M]

3️⃣ Naive / Brute Force

Brute force approach:

for i in 1..N:
    for j in 1..M:
        compute F(i+j) and add to answer

Time complexity:

O(NM)

But:

N, M ≤ 10⁶

Worst case:

10^{12}

Impossible.

So brute force is dead.
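For tiny inputs, though, a brute-force reference is still useful for sanity-checking any optimized version. Here is a sketch; the helper names `F` and `matrix_sum_bruteforce`, and the trial-division factorization, are ours rather than from the problem statement:

```python
def F(x):
    # Count distinct prime divisors of x by trial division (fine for small x).
    count = 0
    d = 2
    while d * d <= x:
        if x % d == 0:
            count += 1
            while x % d == 0:
                x //= d
        d += 1
    if x > 1:          # leftover factor is a prime
        count += 1
    return count

def matrix_sum_bruteforce(N, M):
    # Visit every cell: O(N*M) work, only viable for small matrices.
    return sum(F(i + j) for i in range(1, N + 1) for j in range(1, M + 1))
```

For the sample, `matrix_sum_bruteforce(3, 3)` gives 10, matching the expected output.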


4️⃣ Optimization Insight

Instead of iterating cells, count diagonals.

Each diagonal corresponds to constant:

s = i + j

For each possible s, count:

How many (i, j) pairs produce that sum?

Then:

Answer = \sum_{s=2}^{N+M} F(s) \times count(s)

5️⃣ How Many Times Does Each s Appear?

We must count number of solutions of:

i + j = s

Where:

1 \le i \le N, \quad 1 \le j \le M

Rewrite:

j = s - i

So i must satisfy:

1 \le i \le N, \quad 1 \le s - i \le M

Which gives:

s - M \le i \le s - 1

Combine bounds:

max(1, s - M) \le i \le min(N, s - 1)

So:

count(s) = max(0, min(N, s-1) - max(1, s-M) + 1)

This is the core counting trick.
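The counting formula can be packaged as a tiny helper (the name `count_pairs` is ours, for illustration):

```python
def count_pairs(s, N, M):
    # Number of pairs (i, j) with i + j == s, 1 <= i <= N, 1 <= j <= M.
    low = max(1, s - M)
    high = min(N, s - 1)
    return max(0, high - low + 1)
```

For N = M = 3 the counts over s = 2..6 come out as 1, 2, 3, 2, 1, and they sum to the 9 cells of the matrix.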


6️⃣ Next Problem: Compute F(x)

F(x) = number of distinct prime divisors.

Example:

6 = 2 × 3 → F(6) = 2
12 = 2² × 3 → F(12) = 2
7 = prime → F(7) = 1

We need F(x) for all:

xN+Mx \le N + M

Since N, M ≤ 10⁶
Worst N+M = 2×10⁶

We can precompute using modified sieve.


7️⃣ Modified Sieve

Initialize array F of size N+M.

For each prime p:

For multiples of p:
increment F[multiple] by 1

Time complexity:

O(N \log \log N)

Very manageable.


8️⃣ Final Algorithm

The plan:

  1. Read all test cases and precompute F once, up to the maximum N+M across them.

  2. Then, for each test case, loop s = 2 to N+M:

    • Compute count(s)

    • Add F[s] × count(s) to the running total


9️⃣ Why This Trick Works

Because:

  • Matrix values depend only on i+j

  • i+j forms diagonals

  • Instead of iterating cells, iterate diagonals

This is a classic trick:

When value depends on i+j → count diagonals

Remember this pattern.


🔟 What This Problem Teaches

  • Transform 2D iteration → 1D frequency counting

  • Precompute number theory values

  • Sieve modification

  • Exploit symmetry

  • Avoid brute force by counting contributions


1️⃣1️⃣ Time & Space Complexity

Precomputation:

O((N+M) \log \log (N+M))

Per-test-case summation:

O(N + M)

The total across the given constraints is manageable.

Space:

O(N + M)

1️⃣2️⃣ Clean Python Code

Now that the reasoning is clear, here is the correct implementation:

import sys
input = sys.stdin.readline

def precompute_f(max_val):
    F = [0] * (max_val + 1)
    for i in range(2, max_val + 1):
        if F[i] == 0:  # prime
            for j in range(i, max_val + 1, i):
                F[j] += 1
    return F

def solve():
    T = int(input())
    queries = []
    max_sum = 0

    for _ in range(T):
        N, M = map(int, input().split())
        queries.append((N, M))
        max_sum = max(max_sum, N + M)

    F = precompute_f(max_sum)

    for N, M in queries:
        total = 0
        for s in range(2, N + M + 1):
            low = max(1, s - M)
            high = min(N, s - 1)
            if low <= high:
                count = high - low + 1
                total += F[s] * count
        print(total)

solve()

🔥 Pattern to Remember for Interviews

If matrix element depends on:

  • i + j → count diagonals

  • i - j → count anti-diagonals

  • i × j → factor pair counting

Always convert 2D iteration into counting frequencies.


If you’d like next:

  • 🔎 Optimization using prefix sums

  • ⚡ Reducing constant factor

  • 🧠 More advanced diagonal counting problems

  • 📊 Similar Codeforces level question

Tell me your goal (CP rating / interviews / placements).



can you please explain this more clearly, with more patience... 4️⃣ Optimization Insight

What is a diagonal

Perfect. Let’s slow down properly and build intuition from scratch.

You asked:

What is a diagonal?
And how does the optimization using diagonals work?

Let’s go very gently.


Step 1: Look at a Small Example

Take:

N = 3
M = 3

So indices look like this (we’ll write i+j in each cell):

(i,j) → i+j

(1,1)=2  (1,2)=3  (1,3)=4
(2,1)=3  (2,2)=4  (2,3)=5
(3,1)=4  (3,2)=5  (3,3)=6

Now observe something very important:

Cells that have the same value of i+j are grouped like this:

2 | 3 3 | 4 4 4 | 5 5 | 6

These groups form slanted lines.

Those slanted lines are called diagonals.


Step 2: What Is a Diagonal (Intuition)

In a matrix, a diagonal is a line of cells that slants across.

There are two common types:

  1. Main diagonal (top-left to bottom-right)

  2. Anti-diagonal (top-right to bottom-left)

In this problem, we care about:

👉 All cells where i + j is the same.

That forms anti-diagonals (slanted like /).

Example:

All cells where i+j = 4:

(1,3) (2,2) (3,1)

They form a slanted line.

That line is one diagonal.


Step 3: Why Diagonals Matter Here

The matrix value is:

A[i][j] = F(i + j)

Notice:

The value depends ONLY on i + j.

That means:

Every cell on the same diagonal has the SAME value.

So instead of computing:

for each cell (i,j):
add F(i+j)

We can instead say:

For each diagonal s:
count how many cells have i+j = s
multiply by F(s)

This avoids looping through all N × M cells.

That’s the optimization insight.


Step 4: Why Is This Powerful?

Without optimization:

You would loop over every cell.

Worst case:

N = 10⁶
M = 10⁶

Total cells = 10¹² ❌ impossible.

But number of diagonals?

Possible i+j values go from:

2 → N+M

Maximum around 2×10⁶.

That’s manageable.

So we reduce:

From O(NM)
To O(N+M)

That’s the entire trick.


Step 5: How Many Cells in One Diagonal?

Now the only remaining question:

For a fixed s, how many (i,j) satisfy:

i + j = s
1 ≤ i ≤ N
1 ≤ j ≤ M

Rewrite:

j = s − i

So i must satisfy:

1 ≤ i ≤ N
1 ≤ s − i ≤ M

From second condition:

1 ≤ s − i ≤ M
→ s − M ≤ i ≤ s − 1

Now combine:

i must be in:

max(1, s − M) ≤ i ≤ min(N, s − 1)

If this interval is valid:

count = high − low + 1

That’s how many cells in that diagonal.
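One way to convince yourself the interval formula is right is to compare it against literal enumeration on a small rectangle. A sketch; both function names are ours, illustrative only:

```python
def count_formula(s, N, M):
    # Closed-form size of the diagonal i + j == s.
    return max(0, min(N, s - 1) - max(1, s - M) + 1)

def count_enum(s, N, M):
    # Brute-force enumeration of the same diagonal.
    return sum(1 for i in range(1, N + 1)
                 for j in range(1, M + 1) if i + j == s)
```

For a 2×4 rectangle the two agree for every possible sum s = 2..6.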


Step 6: Visual Understanding of Count Shape

In a square matrix 3×3:

Diagonal sizes are:

1
2
3
2
1

They grow until middle, then shrink.

In rectangle N×M:

They grow until min(N,M), then stay flat, then shrink.

That’s the pattern.
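The grow/flat/shrink shape is easy to see by listing the diagonal sizes for a rectangle. A sketch using the counting formula derived above (the helper name `diagonal_sizes` is ours):

```python
def diagonal_sizes(N, M):
    # Size of each diagonal s = 2 .. N+M in an N x M matrix.
    return [max(0, min(N, s - 1) - max(1, s - M) + 1)
            for s in range(2, N + M + 1)]
```

For a 3×5 rectangle this gives 1, 2, 3, 3, 3, 2, 1: the sizes grow to min(N, M) = 3, stay flat, then shrink, and together they cover all 15 cells.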


Step 7: The Big Lesson

Whenever you see:

Matrix value depends on:

  • i + j → think diagonals

  • i − j → think anti-diagonals

  • only i → think row-wise

  • only j → think column-wise

This is a very common competitive programming trick.


Step 8: Why This Problem Is Actually Teaching You Something Deep

It’s teaching:

Instead of iterating all cells,

👉 Count how many times each value contributes.

This idea appears everywhere in advanced problems.

We are converting:

Double loop → frequency counting

That’s a huge skill jump.


If you'd like, I can next:

  • Draw a bigger visual example

  • Show how diagonal sizes look for rectangular case

  • Give you 2–3 similar practice problems

  • Or explain how to derive the counting formula more intuitively

Tell me what you’d like 😊



can you explain this: 7️⃣ Modified Sieve with a bit more patience, a bit more clearly? what is sieve? 
And how the complexity here being calculated?

Perfect. This is a very important concept in number theory problems.
Let’s go very slowly and build this from zero.


Step 1: What Is a Sieve?

A sieve is a method used to find prime numbers efficiently.

The most famous one is:

Sieve of Eratosthenes

The idea is simple:

Instead of checking each number one by one to see if it's prime,
we eliminate multiples of primes.

Think of it like filtering.


Step 2: Why Do We Need a Sieve Here?

We need:

F(x) = \text{number of distinct prime divisors of } x

Example:

6 = 2 × 3 → F(6) = 2
12 = 2² × 3 → F(12) = 2
7 = prime → F(7) = 1

We need F(x) for all x up to:

N + M

That can be up to 2,000,000.

If we factor each number individually:

  • Factor 1 number = O(√x)

  • Factor 2 million numbers → too slow

So we use a sieve-style precomputation.


Step 3: Classic Sieve of Eratosthenes (Prime Finding)

Let’s first understand the normal sieve.

Suppose we want primes up to 20.

We:

  1. Assume all numbers are prime.

  2. Start from 2.

  3. Mark all multiples of 2 as not prime.

  4. Move to next unmarked number (3).

  5. Mark all multiples of 3.

  6. Continue.

After finishing, unmarked numbers are primes.

Time complexity of classic sieve:

O(n \log \log n)

We’ll explain why soon.


Step 4: How Is Our Sieve Different?

Instead of marking primes,

We want:

For each number,
count how many distinct primes divide it.

So we slightly modify the sieve.


Step 5: The Modified Sieve Idea

Initialize:

F[x] = 0 for all x

Now:

For each number i from 2 to max_val:

If F[i] == 0 → it means i has not been marked yet → i is prime.

Then for all multiples of i:

F[j] += 1

Why?

Because i is a prime divisor of j.

So we increment count of distinct prime divisors.


Step 6: Example Walkthrough

Let’s compute F up to 10.

Initially:

F = [0,0,0,0,0,0,0,0,0,0,0]

i = 2

F[2] == 0 → prime.

Mark multiples of 2:

2,4,6,8,10

Increase F:

F[2]=1
F[4]=1
F[6]=1
F[8]=1
F[10]=1


i = 3

F[3] == 0 → prime.

Mark multiples:

3,6,9

F[3]=1
F[6]=2
F[9]=1


i = 4

F[4] != 0 → not prime → skip


i = 5

F[5] == 0 → prime.

Mark multiples:

5,10

F[5]=1
F[10]=2


Continue...

Final:

Number : F(x)
2  : 1
3  : 1
4  : 1
5  : 1
6  : 2
7  : 1
8  : 1
9  : 1
10 : 2

Correct.
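The walkthrough can be checked directly by running the same modified sieve used in the solution code with max_val = 10:

```python
def precompute_f(max_val):
    # F[x] = number of distinct prime divisors of x, for all x <= max_val.
    F = [0] * (max_val + 1)
    for i in range(2, max_val + 1):
        if F[i] == 0:  # never incremented by a smaller prime, so i is prime
            for j in range(i, max_val + 1, i):
                F[j] += 1
    return F
```

`precompute_f(10)[2:]` reproduces the table above: [1, 1, 1, 1, 2, 1, 1, 1, 2].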


Step 7: Why Does This Work?

Because:

Each prime p visits all its multiples.

Each number gets incremented once per distinct prime divisor.

That automatically counts distinct prime divisors.


Step 8: Now Let’s Talk Complexity (Very Carefully)

We have:

for i in range(2, max_val + 1):
    if F[i] == 0:
        for j in range(i, max_val + 1, i):
            F[j] += 1

Let’s count how many operations happen.

For prime 2:
We touch ~ n/2 numbers.

For prime 3:
We touch ~ n/3 numbers.

For prime 5:
~ n/5 numbers.

For prime p:
~ n/p numbers.

Total work ≈

n \left(\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \dots\right)

This sum over primes is known to be:

\log \log n

So total complexity:

O(n \log \log n)

Step 9: Why Is log log n So Small?

For n = 1,000,000:

log n ≈ 14
log log n ≈ 2.6

Very tiny.

So n log log n ≈ almost linear.

That’s why sieve is extremely efficient.
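You can check the prime harmonic sum numerically. This sketch sieves primes up to 10⁵ and sums their reciprocals; the expected magnitude (roughly log log n plus a constant) follows Mertens' theorem, which we state here as background rather than from the source:

```python
def prime_reciprocal_sum(n):
    # Sum of 1/p over all primes p <= n; grows like log log n + constant.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    total = 0.0
    for p in range(2, n + 1):
        if is_prime[p]:
            total += 1.0 / p
            # Mark composites; starting at p*p is safe because smaller
            # multiples were already marked by smaller primes.
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return total
```

For n = 10⁵ the sum is only about 2.7, which is why the sieve's total work, n times this sum, stays nearly linear.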


Step 10: Why This Is Better Than Factoring Each Number

Factoring each number individually:

Worst case:

O(n \sqrt{n})

Impossible for 2 million.

Sieve:

O(n \log \log n)

Very manageable.


Step 11: What This Teaches

This problem is secretly testing:

  • Precomputation thinking

  • Recognizing when multiple values are needed

  • Transforming repeated factorization into sieve

  • Understanding harmonic series behavior


Step 12: When Should You Think “Sieve”?

When:

  • You need prime info for many numbers

  • You need divisor counts for all numbers

  • Constraints go up to 10⁶ or 10⁷

  • You must answer many queries quickly


Final Mental Model

Instead of:

"Factor each number"

Think:

"Let primes update their multiples"

That shift is the core sieve idea.


If you'd like, I can now:

  • Derive formally why harmonic sum gives log log n

  • Compare classic sieve vs SPF sieve

  • Show how to get full prime factorization array

  • Or give 2–3 practice problems to solidify this

Tell me how deep you want to go 😊



Quantum Computing Just Got Real -- What Google’s 13,000× Breakthrough Actually Means (Feb 2026)

See All Articles
<<< Previous Post on Quantum Computing

Let’s start with the headline: Google says its quantum computer solved a problem 13,000 times faster than the world’s fastest supercomputer.

That sounds dramatic. It sounds futuristic. It sounds like something out of a sci‑fi movie.

But what does it actually mean?

In this post, I’ll break it down in plain English — no physics degree required.


First, What Is Quantum Computing (Really)?

Traditional computers — your laptop, your phone, even the biggest supercomputers — process information using bits. A bit is either 0 or 1.

Quantum computers use something called qubits. And qubits are weird.

Instead of being just 0 or 1, a qubit can be 0 and 1 at the same time. This is called superposition.

On top of that, qubits can become linked together in a strange way called entanglement, where changing one instantly affects another.

The result?

Quantum systems can represent and process a massive number of possibilities simultaneously — far more efficiently for certain problems than classical computers.

But here’s the catch.

For years, researchers have been claiming “quantum advantage” — saying quantum machines can beat classical ones — but critics kept asking:

“Can you prove it in a way that others can independently verify?”

That’s where this breakthrough matters.


The Big Problem Quantum Researchers Faced

Imagine trying to simulate a quantum system using a classical computer.

If you have 65 qubits, the number of possible states explodes exponentially. The math grows out of control very quickly.

To simulate certain quantum behaviors, even the world’s fastest supercomputer (Frontier) would need years to compute what a quantum processor can do directly.

But previous demonstrations of quantum advantage were criticized.

Some experiments were seen as artificial benchmarks — clever but not clearly useful. Others were difficult to independently verify.

So the scientific community wanted something stronger:

  1. A problem that classical computers genuinely struggle with.

  2. A quantum result that can be independently verified.

  3. A repeatable, measurable outcome.

Google’s new “Quantum Echoes” experiment claims to deliver exactly that.


What Did Google Actually Do?

Instead of just running a random quantum experiment, researchers used something called a time-reversal quantum echo.

Let’s simplify this.

Imagine scrambling an egg.

Now imagine reversing time and perfectly unscrambling it — except for one tiny change you introduced earlier.

That tiny change would reveal how the scrambling process spread information.

That’s basically what they did.

Step 1: Scramble the Quantum System

The quantum processor runs a sequence of operations that entangles over 100 qubits. Information spreads across the system in a complex way.

This is called forward evolution.

Step 2: Nudge One Qubit

They make a tiny change to just one qubit — like flicking one domino in a massive chain.

Step 3: Reverse the Entire Process

Then they run the whole system backwards.

If everything were perfectly reversible, the system would return to its original state.

But because of the tiny disturbance introduced earlier, something interesting happens.

The disturbance leaves a measurable fingerprint.

That measurable signal — called an “out-of-time-order correlator” (don’t worry about the name) — captures how information spread through the system.

And here’s the key point:

Classical computers struggle enormously to simulate this behavior.

But the quantum processor does it naturally.


The Speed Difference

According to the published results:

  • Classical simulation would take about 3.2 years on the fastest supercomputer.

  • The quantum processor completed it in 2.1 hours.

That’s roughly a 13,000× speedup.
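The quoted figures are roughly consistent with each other; a quick back-of-the-envelope check:

```python
# Convert 3.2 years to hours, then divide by the 2.1-hour quantum runtime.
hours_classical = 3.2 * 365.25 * 24   # about 28,000 hours
speedup = hours_classical / 2.1       # lands near 13,000x
```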

And this wasn’t a one-off experiment.

They ran over a trillion quantum measurements to demonstrate reliability and repeatability.

That matters.

Because quantum computers are fragile. Noise, instability, and error have always been major problems.

Showing consistent results at that scale is a big step forward.


Why This Is Different From Previous “Quantum Supremacy” Claims

Earlier quantum breakthroughs were often criticized because they solved problems that weren’t clearly useful.

This new method produces measurable quantities that different quantum systems can independently verify.

In simple terms:

It’s not just “trust us, we did something fast.”

It’s “here is a measurable physical quantity — go reproduce it yourself.”

That makes the claim much stronger scientifically.


So… Is This Useful Yet?

Not immediately for your daily life.

You won’t see quantum laptops next year.

But this type of quantum measurement is closely connected to understanding:

  • Molecular behavior

  • Quantum materials

  • Chemical reactions

  • Drug discovery simulations

In industries like pharmaceuticals, materials science, and energy, simulating quantum systems is extremely expensive using classical computers.

If quantum processors can accelerate certain calculations from months to weeks, that’s transformative.

Researchers suggest that by the end of this decade, we could see quantum-enhanced sensing and molecular simulation pipelines becoming commercially relevant.

That’s a big deal.


The Bigger Strategy: Hybrid Quantum + Classical

Here’s something important.

The future is not “quantum replaces classical.”

Instead, the real opportunity lies in hybrid systems.

Think of it like this:

Use classical computers for what they’re great at — data handling, optimization, scaling.

Use quantum processors for very specific sub-problems where interference and entanglement offer an advantage.

Then combine both.

This hybrid approach is what many researchers are experimenting with now.

For example:

  • Using quantum routines to extract complex features from high-dimensional data.

  • Using quantum optimization algorithms for certain combinatorial problems.

  • Feeding quantum-generated insights into classical machine learning pipelines.

Right now, results are mixed.

Quantum hardware is still noisy. It’s still limited.

But the direction is clear.


Meanwhile, The AI Race Continues

While quantum computing moves forward, the AI industry is accelerating in parallel.

Major investors are increasing stakes in advanced AI startups.

Companies are building their own AI chips to reduce dependence on dominant hardware providers.

European firms are investing billions in data center infrastructure to compete globally.

Why does this matter?

Because compute power — whether classical AI chips or quantum processors — is becoming strategic infrastructure.

The next decade isn’t just about smarter models.

It’s about who controls the hardware, the data centers, and the compute stack.

Quantum is slowly entering that conversation.


What This Means For You

If you’re a business leader:

You don’t need a quantum team tomorrow.

But you should start tracking developments and understanding where hybrid workflows might eventually fit.

If you’re in ML or data science:

The future skill set won’t just be “train bigger models.”

It will involve orchestrating different compute paradigms efficiently.

If you’re just curious about technology:

This is one of those moments where science fiction inches closer to reality.

Not with flashy consumer products — but with deep infrastructure shifts.


The Bottom Line

For decades, quantum computing was mostly theoretical hype.

This experiment doesn’t mean quantum has “won.”

But it does mark something important:

A verifiable, measurable demonstration that quantum systems can outperform classical machines on specific tasks.

Not by 2×.

Not by 10×.

But by 13,000×.

And when performance gaps become that large, industries pay attention.

We are still early.

But for the first time, the conversation has shifted from:

“Will quantum ever matter?”

To:

“Where will quantum matter first?”

That’s a very different question.

And the next few years will be fascinating to watch.

References

  • Google's own research blog explains how the Quantum Echoes algorithm achieves repeatable, verifiable results — something critics of earlier quantum supremacy claims have long asked for. Google Research

  • According to major tech coverage, the Willow processor completed a specialized physics simulation roughly 13,000× faster than the Frontier supercomputer — taking hours instead of years to compute. The Quantum Insider

  • The breakthrough was published in Nature, where researchers described measuring quantum correlators that are fundamentally hard for classical machines to simulate but naturally accessible on a quantum device. Nature

  • Wikipedia’s article on out-of-time-ordered correlators (OTOC) explains the physics behind why these measurements are so difficult for classical computers to perform — making them a strong candidate for demonstrating genuine quantum advantage. Wikipedia

Thursday, February 12, 2026

Quiz: Collaborative Writing and Coding with OpenAI Canvas

View Course on DeepLearning.AI    View Other Courses Audited By Us


Excursion - Calculate the number of rooms you need to book in the hotel (Super Easy)

Index of "Algorithms: Design and Analysis"
<<< Previous    Next >>>
Solve on HackerEarth
Problem
A group of students wants to go on a tour to some city. In total, the group has N boys and M girls. To do this, they need to stay in a hotel.

Calculate the number of rooms you need to book in the hotel, if each hotel room has K seats, provided that the boys cannot live with the girls in the same room.

Input format

The first line specifies a number T denoting the number of test cases.
Each of the next T lines contains three space-separated numbers, N, M and K.
Output format

For each test case, print one number denoting the number of rooms to be booked at the hotel.

Constraints





Sample Input
4
13 7 2
5 5 3
2 2 2
5 4 5
Sample Output
11
4
2
2
Time Limit: 1 sec
Memory Limit: 256 MB
Source Limit:
Explanation
For test 13 7 2 the answer is 11, because the boys need 7 rooms and the girls need 4 rooms.

For test 5 5 3 the answer is 4, because the boys need 2 rooms and the girls need 2 rooms.




Code

import math

def solve(N, M, K):
    out = math.ceil(N / K) + math.ceil(M / K)
    return out

T = int(input())
for _ in range(T):
    N, M, K = map(int, input().split())
    out = solve(N, M, K)
    print(out)
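An equivalent formulation avoids `math.ceil` entirely by using the integer identity ceil(a / b) == (a + b - 1) // b for positive b, which sidesteps any float concerns for very large inputs. A sketch (the name `rooms_needed` is ours):

```python
def rooms_needed(N, M, K):
    # Boys and girls are housed separately, so round up each group independently.
    return (N + K - 1) // K + (M + K - 1) // K
```

This matches all four sample cases, e.g. `rooms_needed(13, 7, 2)` is 7 + 4 = 11.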