Victor Kolev

Aspiring researcher in AI.

About me

I am a Stanford University student passionate about math and computer science.

I have been involved in research for three years, working on projects in machine learning, data science, and mathematics, and I have been awarded multiple international honors. My latest research, on the abstract reasoning capacity of neural networks, was published at the KR2ML (Knowledge Representation Meets Machine Learning) workshop at NeurIPS 2020.

During the summer of 2021, I was a machine learning intern at Efemarai, Inc., where I applied Deep Generative Models to large-scale unstructured image data to derive meaningful latent representations. In 2020, I participated in the Research Science Institute, a six-week research internship program organized by the Center for Excellence in Education in collaboration with the Massachusetts Institute of Technology, where I was further distinguished with the Top 5 paper and Top 10 presentation awards.

Research Works

ARC

Neural Abstract Reasoner

KR2ML Workshop @ NeurIPS 2020

Victor Kolev, Bogdan Georgiev, Svetlin Penkov

Abstract pdf
Abstract reasoning and logic inference are difficult problems for neural networks, yet essential to their applicability in highly structured domains. In this work we demonstrate that a well-known technique such as spectral regularization can significantly boost the capabilities of a neural learner. We introduce the Neural Abstract Reasoner (NAR), a memory-augmented architecture capable of learning and using abstract rules. We show that, when trained with spectral regularization, NAR achieves 78.8% accuracy on the Abstraction and Reasoning Corpus, improving performance 4 times over the best known human hand-crafted symbolic solvers. We provide some intuition for the effects of spectral regularization in the domain of abstract reasoning based on theoretical generalization bounds and Solomonoff's theory of inductive inference.
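The spectral regularization mentioned above penalizes the largest singular value (the spectral norm) of each weight matrix. As a rough illustrative sketch of such a penalty term (my own toy code using power iteration, not the implementation from the paper):

```python
import numpy as np

def spectral_norm(W, n_iters=50):
    """Estimate the largest singular value of W via power iteration."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return float(u @ W @ v)

def spectral_penalty(weight_matrices, lam=0.01):
    """Regularization term added to the training loss:
    lam * sum of squared spectral norms over all weight matrices."""
    return lam * sum(spectral_norm(W) ** 2 for W in weight_matrices)
```

In practice the penalty is added to the task loss during training, biasing the network toward low-complexity (smooth) functions.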

ARC

Towards Neural Abstract Logical Reasoning

Research Science Institute 2020

Victor Kolev
Supervisors: Dimitar Vasilev, Svetlin Penkov

Abstract pdf
The ability to learn abstract concepts and to quickly grasp logical rules are hallmarks of human intelligence, yet these traits are difficult for neural networks. In our work, we take a backward design approach and develop a neural framework built on abstract principles in order to learn abstract rules. Our model uses a Differentiable Neural Computer to capture algorithmic logic and allow true computation. We apply spectral norm regularization, which biases learning toward low-complexity functions, a property we use as a proxy for abstraction. Model-Agnostic Meta-Learning is employed to adapt parameters to a given task at test time. We show that our approach outperforms classical baselines on the Abstraction and Reasoning Corpus, a few-shot pattern manipulation benchmark.

ARC

Memory-augmented Neural Agents for Deep Model-free Reinforcement Learning

Victor Kolev
Supervisors: Rafael Rafailov, Svetlin Penkov

Abstract pdf
While deep reinforcement learning has achieved some impressive results in the past years, it suffers from poor sample efficiency and an inability to generalize, since agents learn to manipulate specific data distributions and cannot extract principles from them. Although at its core RL takes inspiration from the way humans learn and interact with the world, current algorithms do not reflect the human way of thinking. Memory-augmented neural networks can potentially solve these problems: the Differentiable Neural Computer represents a model of the hippocampus by accounting for the fact that humans have long- and short-term memory, as well as fixed routines. Furthermore, the DNC is designed to decouple learning from remembering data, which should substantially aid generalization. In this paper, we show preliminary results indicating that the DNC exhibits great potential in the field, improving both sample efficiency and robustness to noise. Moreover, the rigid memory structures of the architecture allow for extensive analysis of the data structures stored there, which would provide invaluable novel intuition into the processes occurring inside the black box that is the neural network.

Asset Return

The Influence of Exogenous Capital on Asset Return

Victor Kolev, Stefan Hadzhistoykov
Supervisor: Rafael Rafailov

Abstract pdf
The principal aim of our research is to extend the Capital Asset Pricing Model (CAPM) to account for exogenous capital by developing a mathematical model and scrutinizing its statistical validity. The effects of exogenous capital have not yet been thoroughly examined, despite its great influence on some assets, namely stocks with a large percentage of restricted shares. A mathematical model is formulated that introduces exogenous capital into CAPM, and from it follows that the exogenous factor is a source of asset return. Using linear regression analysis, it is confirmed that this anomaly is distinct from other known market factors and indeed increases return.
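For context, the baseline model being extended is the standard CAPM relation; one plausible form of the extension described above (an illustrative sketch only, not necessarily the paper's exact model) adds a loading on an exogenous-capital factor:

```latex
% Standard CAPM: expected excess return of asset i
\mathbb{E}[R_i] = R_f + \beta_i \left(\mathbb{E}[R_m] - R_f\right)

% Illustrative extension with an exogenous-capital factor X
% and a hypothetical asset-specific loading \gamma_i
\mathbb{E}[R_i] = R_f + \beta_i \left(\mathbb{E}[R_m] - R_f\right) + \gamma_i X
```

Here $R_f$ is the risk-free rate, $R_m$ the market return, and $\beta_i$ the asset's market beta; the $\gamma_i X$ term stands in for whatever functional form the paper derives.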

News

  • First Place, EUCYS 2021
    September 19, 2021

    I qualified for the European Union Contest for Young Scientists, the largest European science fair, as part of the Bulgarian delegation and was awarded the €7,000 first prize, as well as participation in the prestigious LIYSF science forum.

  • Research Mentor @ SRS
    August, 2021

    I attended the Summer Research School in Bulgaria for a second time, again mentoring three students on machine learning research projects.

  • Joined Efemarai, Inc.
    June 7, 2021

    Summer internship at Efemarai as a Machine Learning Researcher, focusing on applying Deep Generative Models to extract meaningful representations from large-scale unstructured image data.

  • Graduated high school
    May 24, 2021

    I graduated from the Sofia High School of Mathematics with a perfect GPA and honors for significant contributions to elevating the school's national and international prestige.

  • Award Winner, ISEF 2021
    May 18, 2021

    I was part of the Bulgarian delegation to Regeneron ISEF 2021, the largest student science fair. I was awarded a 4th Grand Award in one of the most competitive categories, Robotics and Intelligent Machines, as well as two special prizes: an Honorable Mention from AAAI and the Innopolis Award.

  • Invited speaker @ FutureMakers
    November 21, 2020

    I gave a talk at a TED-style event about career and personal development. My main message was that machine learning algorithms reflect many human traits, and that there is often something for us to learn from machine learning in turn.

  • Excellent Students of Bulgaria
    October 30, 2020

    I was distinguished by the Bulgarian Ministry of Education and Science as one of the most accomplished students in the country.

  • Research Mentor @ SRS
    August, 2020

    I mentored three high school students at the 2020 Summer Research School, a sister program of the Research Science Institute.

  • Encore Presentation @ RSI
    July 31, 2020

    In addition to being bestowed with the Top 5 Paper award, I was also selected to represent the Research Science Institute in front of a wide academic audience at the Encore Presentations.

  • ISEF Alumnus
    May 16, 2020

    I was a finalist at the 2020 International Science and Engineering Fair.