Solomonoff induction: what is it?

Solomonoff induction is a mathematically idealized form of induction that predicts future events based on previous experience. It is built on Bayes’ theorem and Kolmogorov complexity, but it is not computable. It has been proposed as an organizing principle in the evolution of animals, and it may inspire the construction of real artificial intelligence.

Solomonoff induction is a mathematically rigorous, idealized form of induction: predicting what will happen in the future based on previous experience. It is part of algorithmic information theory. The scheme is theoretically optimal in the sense that, given enough data, it assigns probabilities to future events at least as accurately as any computable alternative. Its one fundamental limitation is that it is not computable: carrying it out exactly would require a computer with unlimited processing power. However, every successful inductive scheme or machine, including animals and humans, can be viewed as an approximation of Solomonoff induction.
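
Concretely, the theory rests on Solomonoff’s universal prior, which weights every hypothesis by the lengths of the programs expressing it. In one standard formulation (notation varies across textbooks), the prior probability of a finite bit string x sums over every program p that makes a fixed universal Turing machine U print output beginning with x:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```

Here \ell(p) is the length of p in bits, so each additional bit halves a program’s weight and simple hypotheses dominate the sum. Evaluating the sum would mean running every possible program, which is the sense in which the scheme demands infinite processing power.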

To the extent that any verbal advice about how to induct better actually works, it works by causing the listener to shift his or her inductive strategy closer to what the theory prescribes. The idea that induction can be formalized mathematically in this way is quite profound; many generations of logicians and philosophers argued that it could not be done. The theory grew out of the work of Ray Solomonoff, Andrey Kolmogorov, and Gregory Chaitin in the 1960s. Their underlying motivation was to place probability theory and induction on an axiomatic footing, in the same way that algebra and geometry had been formalized. The theory is built on an inductive rule called Bayes’ theorem, which gives a precise mathematical way to update beliefs in light of incoming data.
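
To make the rule concrete, here is a minimal sketch in Python of a single Bayesian update over a toy hypothesis space; the hypotheses and all of the numbers are invented for illustration:

```python
def bayes_update(prior, likelihood):
    """Apply Bayes' theorem: posterior(H) = P(D | H) * P(H) / P(D)."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnormalized.values())  # P(D), the normalizing constant
    return {h: p / evidence for h, p in unnormalized.items()}

# Two toy hypotheses about a coin: it is fair, or it is biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}  # P(heads | each hypothesis)

posterior = bayes_update(prior, p_heads)  # update on observing one head
print(posterior)  # belief shifts toward "biased": roughly {'fair': 0.36, 'biased': 0.64}
```

Note that the answer depends entirely on the prior we start from, which is exactly the weakness discussed next.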

A weakness of Bayes’ theorem is that it depends on a prior probability for the event in question. For example, the probability of an asteroid hitting the Earth in the next 10 years can be estimated from historical data on asteroid impacts. But when the sample of previous events is small, such as the number of times a neutrino has been detected in a neutrino trap, it becomes very difficult to assign a probability to the event happening again based solely on past experience.

This is where Solomonoff induction comes into play. Using an objective measure of complexity called Kolmogorov complexity, the theory can make a principled guess about the likelihood of a future event. The Kolmogorov complexity of a bit string is the length of the shortest program capable of producing that string; the same idea underlies the minimum description length (MDL) principle used in statistics and machine learning. And while Kolmogorov complexity is defined for bit strings, any event or object that can be described can be encoded as a string, so the measure extends to the complexity of events and objects.
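
True Kolmogorov complexity is uncomputable, but compressed size is a common computable stand-in for description length. The sketch below, using Python’s standard zlib module, only illustrates the intuition that regular strings have short descriptions while random ones do not:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """A rough upper bound on description length: the zlib-compressed size."""
    return len(zlib.compress(data, 9))

regular = b"01" * 500     # 1,000 bytes with an obvious repeating pattern
noise = os.urandom(1000)  # 1,000 bytes of randomness

print(description_length(regular))  # small: a short "program" reproduces it
print(description_length(noise))    # near 1,000: no shorter description is found
```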

Solomonoff induction integrates Kolmogorov complexity into Bayesian reasoning, giving us justified priors for events that may never have happened. The prior probability of an arbitrary event is judged by its overall complexity and specificity. For example, the probability that two random raindrops in a storm will hit the same square meter is quite low, but much higher than the probability that ten or a hundred random raindrops will all hit that square meter.
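
Putting the two pieces together, a Solomonoff-style prior weights each hypothesis by 2 raised to the negative of its description length, then updates those weights with Bayes’ theorem as data arrives. In the sketch below, every hypothesis, its assigned “program length”, and its predictions are made up purely for illustration:

```python
# Toy hypotheses about the next bit of a sequence. The "length" field stands in
# for the bit-length of the shortest program implementing the hypothesis; the
# values are invented for this example.
hypotheses = {
    "always-0": {"length": 5, "p_one": 0.0},    # predicts 0 every time
    "always-1": {"length": 5, "p_one": 1.0},    # predicts 1 every time
    "coin-flip": {"length": 20, "p_one": 0.5},  # predicts 0 or 1 at random
}

# Complexity-based prior: each extra bit of program length halves the weight.
prior = {h: 2.0 ** -spec["length"] for h, spec in hypotheses.items()}
total = sum(prior.values())
prior = {h: w / total for h, w in prior.items()}

def update(posterior, bit):
    """One Bayesian update after observing a single bit."""
    new = {}
    for h, w in posterior.items():
        p_one = hypotheses[h]["p_one"]
        new[h] = w * (p_one if bit == 1 else 1.0 - p_one)
    total = sum(new.values())
    return {h: w / total for h, w in new.items()}

posterior = prior
for bit in (1, 1, 1, 1):  # observe four 1s in a row
    posterior = update(posterior, bit)
print(posterior)  # "always-1" dominates: simple and consistent with the data
```

After the updates, the simplest hypothesis consistent with the data carries nearly all of the probability mass, which is the sense in which complexity supplies justified priors for events never seen before.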

Some scientists have studied the theory in the context of neuroanatomy, arguing that optimal induction is an organizing principle in the evolution of animals, which need accurate induction to survive. If genuine artificial intelligence is ever created, these principles will likely inspire its construction.
