So - picking up where we left off...

We have a method that will find the next prime number that occurs after any given number.

But I wanted to take the process a bit further and test it against larger inputs. Unfortunately, the previous code falls down pretty hard when looking at large numbers.

So I decided to look at some ways to speed up the process. The obvious sticking point was the [for] loop, particularly the way we test every possible divisor of each candidate number.

So the next step was to try to reduce the number of values we have to evaluate. If we work from the observation that no number greater than half of x can evenly divide x (other than x itself), we can use half the candidate as a cutoff point.

So the logic changes as follows:

- Take the current value, add 1, and call it x.
- For each number y between 2 and (x / 2) + 1:
- See if x is divisible by y.
- If it is, exit the loop, increment x by 1, and repeat the process.
- If not, continue through the loop.
- If we reach the end of the loop without finding a divisor, the number is prime.
- Return the value of x.

```csharp
public int nextPrime(int lastNo)
{
    int currentNo = lastNo;
    bool isNotPrime = true;

    while (isNotPrime)
    {
        isNotPrime = false;
        currentNo++;

        for (int i = 2; i < (currentNo / 2 + 1); i++)
        {
            if (currentNo % i == 0)
            {
                isNotPrime = true;
                break;
            }
        }
    }

    return currentNo;
}
```
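As an aside, the "Number of Records" timings below come from generating primes in sequence. A minimal harness along these lines would do it (this is my own sketch, not the original test code - the `Stopwatch` timing loop is assumed):

```csharp
using System;
using System.Diagnostics;

class PrimeBenchmark
{
    // Same approach as the method above.
    static int nextPrime(int lastNo)
    {
        int currentNo = lastNo;
        bool isNotPrime = true;

        while (isNotPrime)
        {
            isNotPrime = false;
            currentNo++;

            for (int i = 2; i < (currentNo / 2 + 1); i++)
            {
                if (currentNo % i == 0)
                {
                    isNotPrime = true;
                    break;
                }
            }
        }

        return currentNo;
    }

    static void Main()
    {
        // Time how long it takes to generate the first 100 primes.
        var sw = Stopwatch.StartNew();
        int prime = 1;
        for (int n = 0; n < 100; n++)
        {
            prime = nextPrime(prime);
        }
        sw.Stop();
        Console.WriteLine($"100th prime: {prime} in {sw.Elapsed.TotalSeconds} sec");
    }
}
```

Starting from 1, the 100th call lands on 541 (the 100th prime), and at this size the run is effectively instant.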

The previous method wasn't very efficient for large number sets, as we can see from the run chart below, but our update cuts the runtime down quite a bit:

| Number of Records | Initial Code | Optimized Code |
| --- | --- | --- |
| 100 | 0.0007 sec | 0.0005 sec |
| 1,000 | 0.057 sec | 0.031 sec |
| 10,000 | 3.47 sec | 1.59 sec |
| 100,000 | 411.75 sec | 209.99 sec |

But this is still not as optimized as it could be. I made a logical choice about how to reduce the number of checks needed to determine whether a number is prime: I assumed that any number greater than half of x could be disqualified as a candidate divisor. But could we narrow that down a bit more?

Turns out that we can. If x = a × b with both a and b greater than √x, then a × b would be greater than x - so any composite number must have at least one divisor no larger than its square root:

```csharp
public int nextPrime(int lastNo)
{
    int currentNo = lastNo;
    bool isNotPrime = true;

    while (isNotPrime)
    {
        isNotPrime = false;
        currentNo++;

        for (int i = 2; i < Math.Sqrt(currentNo + 1); i++)
        {
            if (currentNo % i == 0)
            {
                isNotPrime = true;
                break;
            }
        }
    }

    return currentNo;
}
```

So - what are we doing now?

- Take the current value, add 1, and call it x.
- For each number y between 2 and the **square root** of (x + 1):
- See if x is divisible by y.
- If it is, exit the loop, increment x by 1, and repeat the process.
- If not, continue through the loop.
- If we reach the end of the loop without finding a divisor, the number is prime.
- Return the value of x.
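To get a feel for how much work the new cutoff saves, we can count the trial divisions each bound performs on a single candidate. This is a quick illustrative sketch of my own, not part of the original benchmark:

```csharp
using System;

class DivisionCounter
{
    // Count the trial divisions the (x/2 + 1) cutoff performs
    // when confirming that a prime p has no divisors.
    static int countHalfBound(int p)
    {
        int count = 0;
        for (int i = 2; i < (p / 2 + 1); i++) count++;
        return count;
    }

    // Same count for the square-root cutoff.
    static int countSqrtBound(int p)
    {
        int count = 0;
        for (int i = 2; i < Math.Sqrt(p + 1); i++) count++;
        return count;
    }

    static void Main()
    {
        int p = 99991; // a prime near 100,000
        Console.WriteLine($"x/2 bound:  {countHalfBound(p)} divisions");
        Console.WriteLine($"sqrt bound: {countSqrtBound(p)} divisions");
    }
}
```

For a prime near 100,000, the half cutoff grinds through roughly 50,000 divisions while the square-root cutoff needs only a few hundred - and confirming a prime is the worst case, since the loop can't break early.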

This way, we've significantly reduced the number of steps taken in determining the next prime number, as the testing demonstrates:

| Number of Records | Initial Code | Optimized Code ((x/2) + 1) | Optimized Code (√(x+1)) |
| --- | --- | --- | --- |
| 100 | 0.0007 sec | 0.0005 sec | 0.0006 sec |
| 1,000 | 0.057 sec | 0.031 sec | 0.0016 sec |
| 10,000 | 3.47 sec | 1.59 sec | 0.034 sec |
| 100,000 | 411.75 sec | 209.99 sec | 1.02 sec |
| 1,000,000 | n/a | n/a | 24.90 sec |
| 5,000,000 | n/a | n/a | 240.06 sec |

Based on the last run of tests, it didn't seem practical to run the first two algorithms past 100,000 records. It's also clear that past a certain point the gains in speed taper off and the runtime starts climbing steeply again. Maybe I'll create a process to collect the data into a database by number of iterations as an excuse to play with Power BI.

Now - I'm running my tests on a Dell XPS 13 9730 (i7 4 core processor, 16GB), so I wasn't expecting the earth to actually move. But I am wondering:

- What updates could I make to optimize the process even further?
- At what input sizes is each of the algorithm variants optimal?
- Is there any practical use for being able to generate 10,000,000 prime numbers quickly?

(shrugs)