Siegel's theorem on integral points

In mathematics, Siegel's theorem on integral points states that for a smooth algebraic curve C of genus g defined over a number field K, presented in affine space in a given coordinate system, there are only finitely many points on C with coordinates in the ring of integers O of K, provided g > 0.
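
For example (an illustration added here, with a classically known solution set), the Mordell curve y² = x³ − 2 is a smooth affine curve of genus 1 over the rationals, so the theorem predicts only finitely many integral points; in this case the complete list is classical:

% The affine curve y^2 = x^3 - 2 is smooth of genus 1 over Q, so Siegel's
% theorem applies; the full set of integral points is classically known.
\[
  y^2 = x^3 - 2, \qquad (x, y) \in \mathbb{Z}^2
  \;\Longrightarrow\; (x, y) \in \{(3, 5),\ (3, -5)\}.
\]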

The theorem was first proved in 1929 by Carl Ludwig Siegel and was the first major result on Diophantine equations that depended only on the genus and not on any special algebraic form of the equations. For g > 1 it was superseded by Faltings's theorem in 1983.

History

In 1926, Siegel proved the theorem effectively in a special case, which established the general theorem conditionally, provided that Mordell's conjecture is true.

In 1929, Siegel proved the theorem unconditionally by combining a version of the Thue–Siegel–Roth theorem, from Diophantine approximation, with the Mordell–Weil theorem from Diophantine geometry (required in Weil's version, to apply to the Jacobian variety of C).

In 2002, Umberto Zannier and Pietro Corvaja gave a new proof using a method based on the subspace theorem.[1]

Effective versions

Siegel's result was ineffective (see effective results in number theory), since Thue's method in Diophantine approximation is also ineffective in describing possible very good rational approximations to almost all algebraic numbers of degree ≥ 3. Siegel proved the theorem effectively only in the special case treated in 1926. Effective results in some cases derive from Baker's method.
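
As an illustration of the shape such effective bounds take (not part of Siegel's result; the constant below is the one commonly quoted from Baker's 1968 treatment of the Mordell equation, and sharper bounds are known):

% Baker-type effective bound for integral points on the Mordell equation
% y^2 = x^3 + k with k a nonzero integer; constant as commonly quoted.
\[
  y^2 = x^3 + k, \quad k \in \mathbb{Z} \setminus \{0\}
  \;\Longrightarrow\;
  \max(|x|, |y|) \le \exp\!\bigl((10^{10}\,|k|)^{10^{4}}\bigr).
\]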

References

  1. ^ Corvaja, P.; Zannier, U. (2002). "A subspace theorem approach to integral points on curves". Comptes Rendus de l'Académie des Sciences, Série I. 334: 267–271. doi:10.1016/S1631-073X(02)02240-9.