## Thursday, May 28, 2020

### New Probabilistic Approach to Factoring Big Numbers

The product of two large primes is at the core of many encryption algorithms, as factoring such a product is very hard for numbers with a few hundred digits. The two prime factors are associated with the encryption keys (the public and private keys). Here we describe a new approach to factoring a big number that is the product of two primes of roughly the same size. It is designed especially to handle this problem and to identify flaws in encryption algorithms.
*Figure: the Riemann zeta function in the complex plane.*
While at first glance it appears to substantially reduce the computational complexity of traditional factoring, at this stage a lot of progress is still needed to make the new algorithm efficient. An interesting feature is that success depends on the probability that two numbers are co-prime, given that they do not share any of the first few primes (say 2, 3, 5, 7, 11, 13) as common divisors. This probability can be computed explicitly and is about 99%.
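This conditional probability can be checked numerically. A minimal sketch: the probability that two random integers are co-prime is the product over all primes of (1 − 1/p²) = 6/π²; conditioning on the pair sharing none of the primes 2, 3, 5, 7, 11, 13 simply removes those six factors from the product. The truncation bound of 10⁵ below is an arbitrary choice for illustration.

```python
import math

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

# P(co-prime) = prod over all primes of (1 - 1/p^2) = 6/pi^2 ~ 0.6079.
# Conditioning on not sharing 2, 3, 5, 7, 11, 13 drops those six factors.
small = {2, 3, 5, 7, 11, 13}
cond = 1.0
for p in primes_up_to(10 ** 5):  # truncated product; tail contribution is negligible
    if p not in small:
        cond *= 1 - 1 / p ** 2

print(round(cond, 4))
```

The result is about 0.984, consistent with the article's "about 99%" (roughly, conditioning on the six smallest primes pushes 6/π² ≈ 0.61 up to ≈ 0.98).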
The methodology relies heavily on solving systems of congruences, the Chinese Remainder Theorem, and the modular multiplicative inverse of some carefully chosen integers. We also discuss computational complexity issues. Finally, the off-the-beaten-path material presented here leads to many original exercises or exam questions for students learning probability, computer science, or number theory: proving the various simple statements made in my article.
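The article's own five-step algorithm is not reproduced in this excerpt, but the two main ingredients it relies on are standard. A minimal sketch of solving a system of congruences with the Chinese Remainder Theorem and modular multiplicative inverses (the residues and moduli below are illustrative, not taken from the article):

```python
from math import prod

def crt(residues, moduli):
    # Solve x = r_i (mod m_i) for pairwise co-prime moduli m_i.
    # pow(Mi, -1, m) is the modular multiplicative inverse (Python 3.8+).
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)
print(crt([2, 3, 2], [3, 5, 7]))  # → 23
```

The returned value is the unique solution modulo the product of the moduli (here 105), which is why the moduli must be pairwise co-prime.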
Content

**Some Number Theory Explained in Simple English**

• Co-primes and pairwise co-primes
• Probability of being co-prime
• Modular multiplicative inverse
• Chinese remainder theorem, version A
• Chinese remainder theorem, version B

**The New Factoring Algorithm**

• Improving computational complexity
• Five-step algorithm
• Probabilistic optimization
• Compact formulation of the problem

**Other Math Articles by the Same Author**
Here is a selection of articles pertaining to experimental math and probabilistic number theory:

## Wednesday, May 6, 2020

### Simple Trick to Dramatically Improve Speed of Convergence

We discuss a simple trick to significantly accelerate the convergence of an algorithm when the error term decreases in absolute value over successive iterations while oscillating (not necessarily periodically) between positive and negative values.
We first illustrate the technique on a well-known and simple case: the computation of log 2 using its well-known, slowly converging series. We then discuss a very interesting and more complex case, before finally focusing on a more challenging example in the context of probabilistic number theory and experimental math.
The technique must be tested for each specific case to assess the improvement in convergence speed. There is no general, theoretical rule to measure the gain, and if the error term does not oscillate in a balanced way between positive and negative values, this technique does not produce any gain. However, in the examples below, the gain was dramatic.
Let's say you run an algorithm, for instance gradient descent. The input (the model parameters) is x, and the output is f(x), for instance a local optimum. We consider f(x) to be univariate, but the technique easily generalizes to the multivariate case by applying it separately to each component. At iteration k, you obtain an approximation f(k, x) of f(x), and the error is E(k, x) = f(x) - f(k, x). The total number of iterations is N, starting with the first iteration k = 1.
The idea consists of first running the algorithm as is, and then computing the "smoothed" approximations, using the following m steps.
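The m-step smoothing formula itself is not shown in this excerpt. As a minimal sketch of the underlying idea on the article's first example: the partial sums of the alternating series log 2 = 1 − 1/2 + 1/3 − … overshoot and undershoot the limit, so averaging two consecutive partial sums cancels most of the oscillating error term.

```python
import math

def log2_partial_sums(n):
    # Partial sums of the slowly converging series log 2 = 1 - 1/2 + 1/3 - ...
    s, sums = 0.0, []
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
        sums.append(s)
    return sums

sums = log2_partial_sums(1000)
raw_error = abs(sums[-1] - math.log(2))          # ~ 1/(2N), very slow
# Smoothed estimate: average of the last two partial sums; the error terms
# have opposite signs and nearly equal magnitudes, so they mostly cancel.
smoothed = (sums[-1] + sums[-2]) / 2
smoothed_error = abs(smoothed - math.log(2))     # ~ 1/(4N^2)

print(raw_error, smoothed_error)
```

With N = 1000, the raw error is about 5 × 10⁻⁴ while the smoothed error is about 2.5 × 10⁻⁷, a gain of three orders of magnitude from one extra averaging step; iterating the averaging (the m steps mentioned above) compounds the gain.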
Content
• General framework and simple illustration
• A strange function
• Even stranger functions

### Fuzzy Regression: A Generic, Model-free, Math-free Machine Learning Technique

A different way to do regression with prediction intervals. In Python and without math. No calculus, no matrix algebra, no statistical eng...