Machine Learning in a Non-Euclidean Space
“Is our comfortable and familiar Euclidean space and its linear structure always the right place for machine learning? Recent research argues otherwise: it is not always needed and sometimes harmful, as demonstrated by a wave of exciting work. Starting with the notion of hyperbolic representations for hierarchical data two years ago, a major push has resulted in new ideas for representations in non-Euclidean spaces, new algorithms and models with non-Euclidean data and operations, and new perspectives on the underlying functionality of non-Euclidean ML.” by Fred Sala, Ines Chami, Adva Wolf, Albert Gu, Beliz Gunel and Chris Ré, 2019
Before going further in this series on non-Euclidean geometry applied to Machine Learning (ML), I had to answer an important question: is it worth learning more about non-Euclidean ML?
To answer that question, I started researching non-Euclidean ML and quickly found a couple of resources. The very first one is from Stanford, and the citation above is extracted from it. The authors argue that Machine Learning was built on a particular geometry, namely Euclidean geometry, more by tradition and convenience than by deliberate choice.