Alexey Ivakhnenko

Born: Oleksiy Hryhorovych Ivakhnenko, 30 March 1913
Died: 16 October 2007 (aged 94)
Nationality: Ukrainian
Alma mater: Leningrad Electrotechnical Institute (M.Sc.)
Known for: Group method of data handling, deep learning, inductive modelling
Awards: Honorary Scientist of the USSR; two State Prizes of the USSR
Fields: Artificial intelligence, machine learning, computer science
Institutions: Glushkov Institute of Cybernetics, Kyiv Electrotechnical Institute, Kyiv Polytechnic Institute (D.Sc.)
Thesis: Theory of Combined Systems for Automatic Control of Electric Motors (1954)

Alexey Ivakhnenko (Ukrainian: Олексíй Григо́рович Іва́хненко; 30 March 1913 – 16 October 2007) was a Soviet and Ukrainian mathematician most famous for developing the group method of data handling (GMDH), a method of inductive statistical learning.

Early life and education


Ivakhnenko was born in Kobelyaky, Poltava Governorate, into a family of teachers.[1] In 1932 he graduated from the Electrotechnical College in Kyiv and worked for two years as an engineer on the construction of a large power plant in Berezniki. After graduating from the Leningrad Electrotechnical Institute in 1938, Ivakhnenko worked at the All-Union Electrotechnical Institute in Moscow during wartime, where he investigated problems of automatic control in a laboratory led by Sergey Lebedev.

After returning to Kyiv in 1944, he continued his research at other institutions in Ukraine. That year he received his Ph.D. degree, and in 1954 his D.Sc. degree. In 1964 he was appointed Head of the Department of Combined Control Systems at the Institute of Cybernetics, while simultaneously working at the Kyiv Polytechnic Institute, at first as a lecturer and, from 1961, as Professor of Automatic Control and Technical Cybernetics.

Research


Ivakhnenko is known as the founder of inductive modelling, a scientific approach used for pattern recognition and the forecasting of complex systems.[2] He used this approach in developing the group method of data handling (GMDH). In 1968 the journal "Avtomatika" published his article "Group Method of Data Handling – a rival of the method of stochastic approximation",[3] which marked the beginning of a new stage in his scientific work. He led the development of this approach with a team of mathematicians and engineers at the Institute of Cybernetics.

Group method of data handling


The GMDH method presents a distinctive approach to solving problems in artificial intelligence and even a new philosophy of scientific research, made possible by modern computers.[2] A researcher need not follow the traditional deductive way of building models "from a general theory to a particular model": observing an object, studying its structure, understanding the principles of its operation, developing a theory and testing a model of the object. Instead, the new approach proceeds "from specific data to a general model": after the data are entered, the researcher selects a class of models, the way model variants are generated, and the criterion for model selection. Since most of the routine work is transferred to the computer, human influence on the result is minimised. This approach can be considered one implementation of the artificial intelligence thesis that a computer can act as a powerful advisor to humans.

The development of GMDH synthesised ideas from different areas of science: the cybernetic concept of the "black box" and the principle of successive genetic selection of pairwise features, Gödel's incompleteness theorems and Gabor's principle of "freedom of decisions choice",[4] ill-posedness in the sense of Hadamard and Beer's principle of external additions.[5]

GMDH is the original method for solving problems of structural and parametric identification of models from experimental data under uncertainty.[6] Such a problem arises when constructing a mathematical model that approximates the unknown pattern of the investigated object or process,[7] using information that is implicitly contained in the data. GMDH differs from other methods of modelling in its active application of the following principles: automatic generation of models, inconclusive decisions, and consistent selection by external criteria to find models of optimal complexity. It uses an original multilayered procedure for the automatic generation of model structures, which imitates the evolutionary process of biological selection with pairwise consideration of successive features. Such a procedure is currently used in deep learning networks.[8] To compare and choose optimal models, two or more subsets of the data sample are used. This makes it possible to avoid preliminary assumptions, because the division of the sample implicitly accounts for different types of uncertainty during the automatic construction of the optimal model.
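
The multilayered procedure can be illustrated with a short sketch. The following Python fragment is only an illustrative approximation of one GMDH-style layer, not Ivakhnenko's published algorithm; the quadratic partial descriptions, the function names and the number of retained candidates are assumptions. It fits a small polynomial to every pair of inputs on a training subset, ranks the candidates by their error on a separate checking subset (the external criterion), and passes the outputs of the best candidates on as inputs to the next layer.

```python
# Illustrative sketch of one GMDH-style layer (assumed names and details).
import itertools
import numpy as np

def fit_pair(x1, x2, y):
    # Least-squares fit of the quadratic "partial description"
    # y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_pair(coef, x1, x2):
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_check, y_check, keep=4):
    # Generate a candidate model from every pair of inputs, rank the
    # candidates by the external criterion (error on the checking subset,
    # which was not used for fitting), and keep the best ones.
    candidates = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        coef = fit_pair(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((predict_pair(coef, X_check[:, i], X_check[:, j]) - y_check) ** 2)
        candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    # Outputs of the selected candidates become the inputs of the next layer.
    Z_train = np.column_stack([predict_pair(c, X_train[:, i], X_train[:, j])
                               for _, i, j, c in best])
    Z_check = np.column_stack([predict_pair(c, X_check[:, i], X_check[:, j])
                               for _, i, j, c in best])
    return best[0][0], Z_train, Z_check
```

Stacking such layers while the best checking error keeps improving, and stopping once it starts to grow, gives a model of near-optimal complexity; practical GMDH implementations add further criteria and stopping rules not shown here.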

In the early 1980s Ivakhnenko established an organic analogy between the problem of constructing models from noisy data and that of passing a signal through a noisy channel.[9] This made it possible to lay the foundations of the theory of noise-immune modelling.[6] The main result of this theory is that the complexity of the optimal predictive model depends on the level of uncertainty in the data: the higher this level (for example, due to noise), the simpler the optimal model must be (with fewer estimated parameters). This initiated the development of GMDH theory as an inductive method for automatically adapting optimal model complexity to the level of information in fuzzy data. GMDH is therefore often considered the original information technology for knowledge extraction from experimental data.
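
The noise-immunity result can be checked numerically: when the same underlying dependence is observed with increasing noise, the model selected by an external criterion on a separate checking subset tends to become simpler. The sketch below uses ordinary polynomial fitting rather than full GMDH, and the synthetic data, candidate degrees and sample split are arbitrary assumptions rather than anything from Ivakhnenko's work.

```python
# Numerical illustration of noise-immune model selection: as noise grows,
# the polynomial degree chosen on the checking subset tends to shrink.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 80)
true_y = 1.0 - 2.0 * x + 0.5 * x**3            # "unknown" underlying pattern

for noise in (0.02, 0.3, 1.0):
    y = true_y + rng.normal(0.0, noise, x.size)
    x_fit, y_fit = x[::2], y[::2]              # subset used for estimation
    x_chk, y_chk = x[1::2], y[1::2]            # subset used as external criterion
    errors = {}
    for degree in range(1, 10):
        coef = np.polyfit(x_fit, y_fit, degree)
        errors[degree] = np.mean((np.polyval(coef, x_chk) - y_chk) ** 2)
    best = min(errors, key=errors.get)
    print(f"noise level {noise:4.2f}: selected degree {best}")
```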

Results


Alongside GMDH, Ivakhnenko developed the following results:

  • Principle of constructing self-organising deep learning networks.[2]
  • Theory of model self-organisation from experimental data.[10]
  • Method of control with forecast optimization.[11]
  • New principles of automatic control of speed for AC and asynchronous electric motors.[12]
  • Theory of invariant systems for adaptive control with compensation of measured disturbances.[13] He developed the principle of indirect measurement of disturbances, called the "differential fork", which was later used in practice.
  • Principle of combined control (with negative feedback on the controlled variables and positive connections compensating the measured disturbances).[14] A number of such systems for the speed control of electric motors were implemented in practice, proving the practical feasibility of invariance conditions in combined control systems, which unite the advantages of closed-loop control by deviation (high precision) and open-loop systems (performance).
  • Non-searching extremum regulators based on situation recognition.[15]
  • Principle of self-learning pattern recognition, first demonstrated in the cognitive system "Alpha",[7] created under his leadership.
  • Basis for the construction of cybernetic prediction devices.[16]
  • Noise-immune principles of robust modelling for noisy data.[6]
  • Design of multilayered neural networks with active neurons, where each neuron is an algorithm.

Ivakhnenko is well known for his achievements in the theory of invariance and the theory of combined automatic control systems, which operate on the principle of compensating measured disturbances. He developed devices and methods for the adaptive control of systems with magnetic amplifiers and motors.

He is the author of the first Soviet monograph on engineering cybernetics,[15] which was published worldwide in seven languages.[17] In this work, further development of the principles of combined control was connected with the introduction of methods of evolutionary self-organisation, pattern recognition and forecasting into control systems.

In later years his main innovation, the GMDH method, was developed as a method of inductive modelling and of forecasting complex processes and systems. His ideas are now used in deep learning networks.[18] The effectiveness of the method was confirmed repeatedly in solving real complex problems in ecology, meteorology, economics and technology, which helped increase its popularity in the international scientific community.[19] In parallel, evolutionary self-organising algorithms were developed in a related field, the clustering problems of pattern recognition.[20] Advances in the modelling of environmental processes are reflected in monographs,[21][9] and of economic processes in books.[11][22] The results of research on recurrent multilayered GMDH algorithms are described in books.[10][2]

Scientific school


From 1963 to 1989 Ivakhnenko was the editor of the specialised scientific journal "Avtomatika" (later "Problems of Management and Computer Science"), which played a crucial role in the formation and development of the Ukrainian school of inductive modelling. Throughout these years the journal was translated and reprinted in the United States as "Soviet Automatic Control" (later "Journal of Automation and Information Sciences").

Alongside constant innovation in his field, from 1945 Ivakhnenko maintained an active teaching career, at first as an assistant professor in the Department of Theoretical Mechanics and then in the Control Systems faculty. From 1960, as a professor in the Department of Technical Cybernetics at the Kyiv Polytechnic Institute, he lectured to the university's students and supervised the work of many graduate students. In 1958-1964 he was an organiser of the All-Union Conferences on Invariance in Kyiv, where the development of the theory of invariant control systems was resumed after its earlier prohibition.[23]

Under his supervision at the KPI and the Institute of Cybernetics, more than 220 young scientists prepared and successfully defended their Ph.D. dissertations, and nearly 30 of his students defended post-doctoral dissertations. Ivakhnenko's scientific school was and remains a cradle of highly qualified scientific professionals. His students V.M. Kuntsevych, V.I. Kostyuk, V.I. Ivanenko, V.I. Vasiliev, A.A. Pavlov and others went on to create respected scientific schools of their own. Ivakhnenko was an exemplary scientist, with a keen sense for the new and remarkable scientific intuition. Until his last days he continued to work actively, generating original scientific ideas and results.

Awards and honours


Ivakhnenko was an Honorary Scientist of the USSR (1972) and a two-time winner of the State Prize (1991, 1997) for his work on the theory of invariant automatic systems and his set of publications on information technology in the field of artificial intelligence. He was the author of 40 books and over 500 scientific articles, an Honorary Doctor of the National Technical University "KPI" (2003) and of Lviv Polytechnic (2005), a Corresponding Member of the Academy of Sciences of the USSR (1961), and an Academician of the NAS of Ukraine (2003).

Selected works

  • Ivakhnenko, A.G. (1970). "Heuristic Self-Organization in Problems of Engineering Cybernetics". Automatica. 6: 207–219.
  • Ivakhnenko, A.G. (1971). "Polynomial Theory of Complex Systems". IEEE Transactions on Systems, Man and Cybernetics. 4: 364–378.
  • Ivakhnenko, A.G.; Ivakhnenko, G.A. (1995). "The Review of Problems Solvable by Algorithms of the Group Method of Data Handling (GMDH)" (PDF). Pattern Recognition and Image Analysis. 5 (4): 527–535. CiteSeerX 10.1.1.19.2971.
  • Ivakhnenko, A.G.; Müller, J.-A. (1997). "Recent Developments of Self-Organising Modeling in Prediction And Analysis of Stock Market" (PDF).

References

  1. ^ Бобрищев, К.В. (2002). Отчий Край (in Ukrainian). Полтава: Дивосвіт. pp. 284–293. ISBN 978-966-95846-9-4.
  2. ^ a b c d Madala, H.R.; Ivakhnenko, A.G. (1994). Inductive Learning Algorithms for Complex Systems Modeling (PDF). Boca Raton: CRC Press. ISBN 978-0849344381.
  3. ^ Ivakhnenko, A.G. (1968). "The Group Method of Data Handling - a Rival of the Method of Stochastic Approximation". Soviet Automatic Control. 13 (3): 43–55.
  4. ^ Gabor, D. (1971). Perspectives of Planning. Organization of Economic Cooperation and Development. London: Imp.Coll.
  5. ^ Beer, S. (1959). Cybernetics and Management. London: English Univ. Press.
  6. ^ a b c Ivakhnenko, A.G.; Stepashko, V.S. (1985). Pomekhoustojchivost' Modelirovanija (Noise Immunity of Modeling) (PDF). Kyiv: Naukova Dumka.
  7. ^ a b Ivakhnenko, A.G.; Lapa, V.G. (1967). Cybernetics and Forecasting Techniques (Modern Analytic and Computational Methods in Science and Mathematics, v.8 ed.). American Elsevier. ISBN 978-0444000200.
  8. ^ Takao, S.; Kondo, S.; Ueno, J.; Kondo, T. (2017). "Deep feedback GMDH-type neural network and its application to medical image analysis of MRI brain images". Artificial Life and Robotics. 23 (2): 161–172. doi:10.1007/s10015-017-0410-1. S2CID 44190434.
  9. ^ a b Ivahnenko, A.G. (1982). Inductive Method of Models Self-organisation for Complex Systems (PDF) (in Russian). Kyiv: Naukova Dumka.
  10. ^ a b Ivakhnenko, A.G.; Yurachkovsky, Yu.P. (1986). Modelirovanie Slozhnykh Sistem po Exsperimentalnym Dannym (Modelling of Complex Systems on Experimental Data). M: Radio i Svyaz.
  11. ^ a b Ivakhnenko, A.G.; Müller, J.A. (1985). Self-Organisation of Forecasting Models (PDF) (in Russian). Kyiv: Tehnika.
  12. ^ Ivakhnenko, A.G. (1953). Automatic Control of Velocity of Asynchronous Motors with Moderate Power. Kiev: Izd.AN USSR.
  13. ^ Ivakhnenko, A.G. (1954). Theory of Combined Automatic Control of Electric Motors. Kiev: Izd.KPI.
  14. ^ Ivakhnenko, A.G. (1954). Electroautomatika. Kiev: Gostekhizdat.
  15. ^ a b Ivakhnenko, A.G. (1959). Tekhnicheskaya Kibernetika. Kiev: Gostechizdat USSR.
  16. ^ Ivakhnenko, A.G. (1969). Self-learning systems of recognition and automatic control. Kyiv: Tehnika.
  17. ^ Ivachnenko, A.G. (1962). Technische Kybernetik. Berlin: Verlag Technik.
  18. ^ Schmidhuber, J. (January 2015). "Deep Learning in Neural Networks: An Overview" (PDF). Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. S2CID 11715509. Retrieved 2019-12-26.
  19. ^ Farlow, S.J., ed. (1984). Self-organizing Methods in Modeling: GMDH Type Algorithms (Statistics: Textbooks and Monographs, vol.54 ed.). Marcel Dekker Inc. ISBN 978-0824771614. Retrieved 2019-12-26.
  20. ^ Ivahnenko, A.G.; Zaychenko, Yu.P.; Dimitrov, V.D. (1976). Decision Making on the basis of Self-Organisation. M: Sov.Radio.
  21. ^ Ivahnenko, A.G. (1975). Long-term Forecasting and Complex Systems Control (PDF) (in Russian). Kyiv: Tehnika.
  22. ^ Ivachnenko, A.G.; Müller, J.A. (1984). Selbstorganisation von Vorhersagemodellen. Berlin: Veb Verlag Technik.
  23. ^ "Nauka i Promyshlennost' (Science and Industry)". Pravda. Communist Party of the USSR. 16 May 1941.