Workshop on Deep Reinforcement Learning for Knowledge Discovery

August 5, 2019, Anchorage, Alaska - USA

INTRO

While supervised and unsupervised learning have been used extensively for knowledge discovery for decades and have achieved immense success, reinforcement learning received far less attention in knowledge discovery until the recent emergence of deep reinforcement learning (DRL). By integrating deep learning into reinforcement learning, DRL is not only capable of continuous sensing and learning to act, but can also capture complex patterns with the power of deep learning. Recent years have witnessed the enormous success of DRL in numerous domains, such as the game of Go, video games, and robotics, spurring increasing advances of DRL for knowledge discovery. For instance, RL-based recommender systems have been developed to produce recommendations that maximize user utility (reward) in the long run for interactive systems, and RL-based traffic signal systems have been designed to control traffic lights in real time to improve traffic efficiency in urban computing. Similar excitement has been generated in other areas of knowledge discovery, such as graph optimization, interactive dialogue systems, and big data systems. While these successes show the promise of DRL, transferring lessons from game-based DRL to knowledge discovery raises unique challenges, including, but not limited to, extreme data sparsity, power-law distributed samples, and large state and action spaces. It is therefore timely and necessary to provide a venue that brings together academic researchers and industry practitioners (1) to discuss the principles, limitations, and applications of DRL for knowledge discovery; and (2) to foster research on innovative algorithms, novel techniques, and new applications of DRL to knowledge discovery.
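To make the recommendation example concrete, the long-horizon objective can be framed as a Markov decision process and solved with tabular Q-learning. The sketch below is purely illustrative: the item catalog, the click model (the user always prefers the item after the one just shown), and all hyperparameters are invented for the example, not taken from any system discussed at the workshop.

```python
import random

# Toy recommendation MDP: the state is the last item shown, an action is the
# next item to recommend, and the reward is 1 when the simulated user clicks.
N_ITEMS = 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

def reward(state, action):
    # Assumed click model: the user clicks item (state + 1) % N_ITEMS.
    return 1.0 if action == (state + 1) % N_ITEMS else 0.0

Q = [[0.0] * N_ITEMS for _ in range(N_ITEMS)]
random.seed(0)
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_ITEMS)
    else:
        action = max(range(N_ITEMS), key=lambda a: Q[state][a])
    r = reward(state, action)
    next_state = action  # the recommended item becomes the new context
    # one-step Q-learning update toward the discounted long-run return
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

# Greedy policy after training: which item to recommend in each state.
greedy = [max(range(N_ITEMS), key=lambda a: Q[s][a]) for s in range(N_ITEMS)]
print(greedy)
```

Because the agent optimizes the discounted sum of future clicks rather than the immediate click alone, the learned greedy policy recovers the user's preference pattern; real recommender settings replace the table with a deep network over large, sparse state and action spaces, which is exactly where the challenges above arise.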

CALL FOR PAPERS

We invite the submission of novel research papers (6–10 pages), demo papers (4–10 pages), and visionary papers (4–10 pages), as well as extended abstracts (1–4 pages). Submissions must be in PDF format, written in English, and formatted according to the latest double-column ACM Conference Proceedings Template. All papers will be peer reviewed in a single-blind process and assessed on novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. All papers must be submitted via the EasyChair system. For questions about the workshop and submissions, please email zhaoxi35@msu.edu.

We encourage submissions on a broad range of DRL for knowledge discovery in various domains. Topics of interest include but are not limited to theoretical aspects, algorithms, methods, applications, and systems, such as:

  • Foundation:
    - Reinforcement Learning and Planning
    - Decision and Control
    - Exploration
    - Hierarchical RL
    - Markov Decision Processes
    - Model-Based RL
    - Multi-Agent RL
    - Inverse RL
    - Contextual Bandits
    - Navigation
  • Business:
    - Advertising and E-commerce
    - Finance
    - Marketing
    - Markets and Crowds
    - Recommender Systems
  • Urban Computing:
    - Smart Transportation
    - Intelligent Environment
    - Urban Planning
    - Urban Economy
    - Urban Energy
  • Computational Linguistics:
    - Dialogue and Interactive Systems
    - Semantic Parsing
    - Summarization
    - Machine Translation
    - Question Answering
  • Graph Mining:
    - Social and Network Sciences
    - Graph Modeling and Embedding
    - Graph Generation and Optimization
    - Combinatorial Optimization and Planning
  • Big Data Systems:
    - Systems for Large-Scale RL
    - Environments for Testing RL
    - RL to Improve Systems
  • Further Target Application Areas:
    - Health Care
    - Computer Vision
    - Education
    - Security
    - Time Series
    - Multimedia

IMPORTANT DATES

May 15, 2019: Workshop paper submission due (23:59, Pacific Standard Time)

June 1, 2019: Workshop paper notifications

June 16, 2019: Camera-ready deadline for workshop papers

August 5, 2019: Workshop Date

PROGRAM

Opening & Welcome

Name Name Title Title

Keynote 1

Name Name Title Title

Paper Presentation

Invited Talk 1

Name Name Title Title

Coffee Break

Keynote 2

Name Name Title Title

Invited Talk 2

Name Name Title Title

Poster Session

Closing Remarks

Name Name Title Title

ORGANIZERS

Jiliang Tang Michigan State University

Dawei Yin JD.com

Long Xia JD.com

Alex Beutel Google Brain

Minmin Chen Google Brain

Shaili Jain Facebook

Xiangyu Zhao Michigan State University

ACCEPTED PAPERS

  • Rel4KC: A Reinforcement Learning Agent for Knowledge Graph Completion and Validation [PDF]
    Authors: Xiao Lin, Pero Subasic and Hongfeng Yin
  • From AlphaGo Zero to 2048 [PDF]
    Authors: Yulin Zhou
  • Horizon: Facebook's Open Source Applied Reinforcement Learning Platform [PDF]
    Authors: Edoardo Conti and Jason Gauci
  • Reinforcement Learning Driven Heuristic Optimization [PDF]
    Authors: Qingpeng Cai, Will Hang, Azalia Mirhoseini, George Tucker, Jingtao Wang and Wei Wei
  • Deep Reinforcement Learning for List-wise Recommendations [PDF]
    Authors: Xiangyu Zhao, Liang Zhang, Long Xia, Zhuoye Ding, Dawei Yin and Jiliang Tang
  • Deep Reinforcement Learning for Traffic Signal Control along Arterials [PDF]
    Authors: Hua Wei, Chacha Chen, Kan Wu, Guanjie Zheng, Zhengyao Yu, Vikash Gayah and Zhenhui Jessie Li
  • Fair and Explainable Heavy-tailed Solutions of Option Prices through Reinforcement, Deep, and EM Learnings [PDF]
    Authors: Chansoo Kim and Byoungseon Choi
    Presenters: Chansoo Kim and Byoungseon Choi
  • CuSH: Cognitive ScHeduler for Heterogeneous High Performance Computing System [PDF]
    Authors: Giacomo Domeniconi, Eun Kyung Lee and Alessandro Morari