Channel Coding as a Benchmark for Studying Task Distribution and Adaptation in Meta-Learning

Meta-learning is a popular and well-established family of methods for learning new tasks from few examples. However, several important aspects of meta-learning have so far remained hard to study, even though they matter for performance in practice. For example, performance degrades in realistic settings where the meta-learner must learn from a single wide and possibly multi-modal distribution of training tasks, and when a distribution shift exists between the meta-train tasks and the tasks presented to the learner at meta-test time. These issues are typically difficult to analyse because the shift between task distributions, and the properties of the task distributions themselves, are not easy to measure or control in standard benchmarks. We propose the channel coding problem as a benchmark for meta-learning. Channel decoding is a practical and important domain where task distributions arise naturally and where fast adaptation to novel tasks is of real practical value. We use this benchmark to investigate several facets of meta-learning, including how the breadth of and shift in task distributions affect meta-learner performance, both of which can be controlled in the coding problem. In this way, the benchmark offers the community a framework to assess the capabilities and limitations of meta-learning and to drive the development of practically robust and effective meta-learners.
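
To make the idea of a controllable task distribution concrete, the sketch below shows one hypothetical way channel-coding tasks could be parameterised. The names (ChannelTask, sample_tasks) and the choice of an AWGN channel whose SNR range sets the distribution's breadth and whose disjoint meta-test range creates a shift are illustrative assumptions, not the benchmark's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

class ChannelTask:
    """One task: decode noisy codewords sent over a fixed channel (assumed AWGN)."""

    def __init__(self, snr_db, block_len=16):
        self.snr_db = snr_db
        self.block_len = block_len
        # Unit signal power, so noise std follows from the SNR in dB.
        self.noise_std = 10 ** (-snr_db / 20.0)

    def sample_batch(self, batch_size):
        # Random message bits, BPSK-modulated, corrupted by Gaussian noise.
        bits = rng.integers(0, 2, size=(batch_size, self.block_len))
        symbols = 1.0 - 2.0 * bits          # bit 0 -> +1, bit 1 -> -1
        received = symbols + self.noise_std * rng.standard_normal(symbols.shape)
        return received, bits               # (decoder inputs, targets)

def sample_tasks(num_tasks, snr_low, snr_high):
    """Breadth of the task distribution is set by the SNR range;
    a shift is created by using a disjoint range at meta-test time."""
    return [ChannelTask(rng.uniform(snr_low, snr_high)) for _ in range(num_tasks)]

# Narrow vs. wide meta-training distributions, plus a shifted meta-test one.
meta_train_narrow = sample_tasks(1000, snr_low=0.0, snr_high=2.0)
meta_train_wide   = sample_tasks(1000, snr_low=-2.0, snr_high=8.0)
meta_test_shifted = sample_tasks(100,  snr_low=10.0, snr_high=12.0)

x, y = meta_test_shifted[0].sample_batch(batch_size=4)
print(x.shape, y.shape)  # (4, 16) (4, 16)
```

Because the task parameters (here, the SNR range) are explicit, both the breadth of the meta-training distribution and its shift relative to meta-test tasks can be varied systematically, which is the property the benchmark exploits.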
