Project List

Submission Instructions

  • Please use the PKU-IAI Technical Report Template on the LaTeX Template page.
  • After submission, the instructor and TAs will provide you with review feedback; use it to improve the quality of the paper. You will also receive a PKU-IAI technical report ID; update the ID in the LaTeX template.
  • After revision, upload your technical report to an OpenReview venue, together with the code, models, and all steps necessary to reproduce the results; these can be hosted on OpenReview or on GitHub/OSF/GDrive/WandB. Only when you complete this step will you receive proper credit.

Project List

Affordance and Functionality

  1. Reproduce the results from the GPNN paper.
  2. Implement Neural Parts and Unsupervised Learning for Cuboid Shape Abstraction via Joint Segmentation from Point Clouds on the chair category of PartNet. Evaluate and compare their performance on semantic segmentation qualitatively and quantitatively.
  3. Design a method for learning the concept of a daily object (e.g., a cup or a chair).
    • What are the key components of the concept of “a cup”?
    • What is the formal definition of your problem?
    • What is your method?
    • Collect your training data if necessary.
    • How would you evaluate your method? What would convince the most skeptical person that your method has really learned the concept of such a daily object (e.g., a cup)?

Intuitive Physics

  1. A computational model of visually grounded VoEs (violations of expectation)
  • Leverage SOTA computer vision algorithms, for instance Aloe.
  • Input: VoE videos from existing works, for instance ADEPT or IntPhys.
  • Output: the AI should register surprise when shown a VoE video (one way to score surprise is sketched below).
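A minimal sketch of one way to operationalize surprise, assuming a latent dynamics model trained only on physically plausible videos: the per-step prediction error then serves as a surprise score, and a VoE video is flagged when that score spikes. The architecture, dimensions, and thresholding strategy are illustrative assumptions, not the design of any particular paper.

```python
# Sketch: "surprise as prediction error". A latent dynamics model is trained
# on physically plausible videos; at test time, frames that it predicts
# poorly receive a high surprise score.
import torch
import torch.nn as nn

class LatentDynamicsModel(nn.Module):
    def __init__(self, frame_dim=64 * 64 * 3, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.transition = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                        nn.Linear(256, latent_dim))

    def surprise(self, frames):
        """frames: (T, frame_dim) flattened video frames.
        Returns a per-step prediction error, one notion of surprise."""
        z = self.encoder(frames)                     # (T, latent_dim)
        z_pred = self.transition(z[:-1])             # predict z_{t+1} from z_t
        return ((z_pred - z[1:]) ** 2).mean(dim=-1)  # (T-1,)

# Usage: after training on plausible videos only, flag a test video as a VoE
# if its maximum per-step surprise exceeds a threshold calibrated on held-out
# plausible videos.
model = LatentDynamicsModel()
video = torch.rand(20, 64 * 64 * 3)                  # stand-in for 20 frames
print(model.surprise(video).max().item())
```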
  2. A probabilistic simulation approach for the water-pouring task
  • Build a computational model for an AI to solve/understand the water-pouring task.
  • Input: a few dynamic scenes with a glass filled with water; you may need to write your own simulator on top of an existing engine, for instance Box2D or Taichi.
  • Output: the angle to which the glass must be tilted for the water to pour (a Monte Carlo sketch of this idea follows this list).
  • Refer to models from Tom Griffiths's and Josh Tenenbaum's groups.
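A minimal sketch of the probabilistic-simulation idea, assuming perceptual uncertainty over the glass geometry and fill level can be expressed as priors: each sample is pushed through a deliberately simplified geometric model (not Box2D/Taichi), yielding a distribution over the tilt angle at which pouring starts. All numbers are illustrative.

```python
# Sketch: probabilistic simulation for the pour angle. Perceptual uncertainty
# about the glass is encoded as priors; each sample goes through a simplified
# geometric model of a tilted cylinder.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000

# Priors over the perceived inner height (cm), inner diameter (cm), and fill fraction.
height = rng.normal(10.0, 0.5, n_samples)
diameter = rng.normal(6.0, 0.4, n_samples)
fill = np.clip(rng.normal(0.6, 0.05, n_samples), 0.05, 0.95)

# Tilting the glass by theta raises the water by (diameter / 2) * tan(theta)
# on the downhill side; pouring starts once that rise reaches the rim.
freeboard = height * (1.0 - fill)            # empty space above the water line
theta_crit = np.degrees(np.arctan(2.0 * freeboard / diameter))

print(f"mean pour angle: {theta_crit.mean():.1f} deg")
print(f"90% interval: [{np.percentile(theta_crit, 5):.1f}, "
      f"{np.percentile(theta_crit, 95):.1f}] deg")
```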

Causality

Model-based RL for causal transfer

  • Background:
    • Recent progress in model-free RL has renewed our perspective on task solving. But is “Reward is all you need” really true?
    • The OpenLock task is a virtual escape game. The experiment focuses on the ability of causal transfer, i.e., understanding the abstract causal structure and utilizing the implicit meta-rules. See a video demonstration here.
    • Prior work suggests that model-free RL fails to understand the abstract causal structure, even a simple one. Would model-based RL work?
  • Task:
    • Design a model-based RL method to solve the OpenLock task (a minimal tabular sketch of the learn-a-model-then-plan loop follows this list). Clearly state how you construct the model, and make sure your method is general even if it ultimately fails.
    • Compare the results of the proposed model-based RL method with a model-free RL baseline.
    • (Optional) What if OpenLock is probabilistic, i.e., pushing a lever succeeds only with some fixed probability rather than 100%? Could your MBRL method handle such a situation? Present the results.
    • (Optional) Are there other methods that can solve the deterministic and probabilistic OpenLock tasks?
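A minimal tabular sketch of the learn-a-model-then-plan loop. FrozenLake from gymnasium is used only as a stand-in discrete environment; connecting the same loop to an OpenLock wrapper with discrete states and actions, and explaining how the model transfers across lever configurations, is the actual project work.

```python
# Sketch: tabular model-based RL. Estimate a transition/reward model from
# exploration data, then plan on the learned model with value iteration.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
nS, nA = env.observation_space.n, env.action_space.n
gamma = 0.95

# 1) Explore with a random policy and record counts for the model.
counts = np.zeros((nS, nA, nS))
reward_sum = np.zeros((nS, nA))
for _ in range(2000):
    s, _ = env.reset()
    done = False
    while not done:
        a = env.action_space.sample()
        s2, r, terminated, truncated, _ = env.step(a)
        counts[s, a, s2] += 1
        reward_sum[s, a] += r
        s, done = s2, terminated or truncated

# 2) Build the maximum-likelihood model and plan with value iteration.
visits = np.maximum(counts.sum(axis=2), 1)
P = counts / visits[:, :, None]                 # P(s' | s, a)
R = reward_sum / visits                         # E[r | s, a]
Q = np.zeros((nS, nA))
for _ in range(100):
    Q = R + gamma * P @ Q.max(axis=1)

print("greedy policy (0=L, 1=D, 2=R, 3=U):")
print(Q.argmax(axis=1).reshape(4, 4))
```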

Tool, Mirroring, and Imitation

Virtual Tool Game

  • Play with the Virtual Tool Game and reproduce the baselines from the referenced paper.
  • Design a new scenario that leverages compositional concepts (at least two), e.g., Bridge + Catapult + …, to solve the problem.
  • Propose a model that can solve the above new compositional problem while learning each concept individually (a sample-and-simulate sketch for tool placement follows this list).
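A minimal sample-and-simulate sketch for tool placement using the cross-entropy method; `simulate_placement` is a hypothetical stand-in for a noisy physics rollout in the Virtual Tools environment, replaced here by a toy scoring function so the loop runs.

```python
# Sketch: sample-and-simulate tool placement with the cross-entropy method.
import numpy as np

rng = np.random.default_rng(0)

def simulate_placement(tool, x, y):
    """Stand-in for a physics rollout: returns a success score in [0, 1]."""
    sweet_spot = {"bridge": (2.0, 1.0), "catapult": (-1.0, 0.5)}[tool]
    return float(np.exp(-((x - sweet_spot[0]) ** 2 + (y - sweet_spot[1]) ** 2)))

tools = ["bridge", "catapult"]
mean, std = np.zeros(2), np.ones(2) * 3.0      # Gaussian proposal over (x, y)

for iteration in range(20):
    samples = rng.normal(mean, std, size=(64, 2))
    scores = np.array([max(simulate_placement(t, x, y) for t in tools)
                       for x, y in samples])
    elite = samples[np.argsort(scores)[-8:]]    # keep the best placements
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3

best_tool = max(tools, key=lambda t: simulate_placement(t, *mean))
print("proposed action:", best_tool, "at", np.round(mean, 2))
```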

Nonverbal Communication

  1. Cooperation
  • Gaze and pointing cues might be significant in a cooperation task.
  • This paper shows that chimpanzees have the necessary socio-cognitive skills to naturally develop a simple communicative strategy to ensure coordination in a collaborative task.
  • Build a simple simulated scenario to computationally reproduce the experimental results.
  • Requirements:
    • Include important nonverbal communication cues, such as gaze and pointing.
    • Learn a policy for such nonverbal communication under a shared goal in a cooperation setting.
    • You could extend the setting and model to a more general version if possible.
  2. Emergent Languages
  • Design your own task and environment for a single pair or a group of agents.
  • Test whether the agents can successfully solve your task through communication.
  • Requirements:
    • Get familiar with the simulator.
    • Design a valid and interesting task: What are the possible means for communication—verbal, nonverbal, or combined?
    • Train agents using the EGG toolkit (a dependency-light sketch of the underlying signaling game follows this list).
    • Design valid evaluation metrics.
    • Report the results.
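Rather than guessing at EGG's API, here is a dependency-light sketch of the underlying Lewis signaling game trained with REINFORCE in plain PyTorch; the toolkit builds games of this kind with longer messages and richer training options.

```python
# Sketch: a minimal Lewis signaling game trained with REINFORCE. A sender
# maps a target object to one discrete symbol, a receiver maps the symbol
# back to a guess, and both are rewarded when communication succeeds.
import torch
import torch.nn as nn

n_objects, vocab_size = 8, 8
sender = nn.Linear(n_objects, vocab_size)        # logits over symbols
receiver = nn.Embedding(vocab_size, n_objects)   # logits over objects
opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()),
                       lr=1e-2)

for step in range(3000):
    targets = torch.randint(0, n_objects, (32,))
    inputs = torch.eye(n_objects)[targets]

    msg_dist = torch.distributions.Categorical(logits=sender(inputs))
    messages = msg_dist.sample()
    guess_dist = torch.distributions.Categorical(logits=receiver(messages))
    guesses = guess_dist.sample()

    reward = (guesses == targets).float()        # 1 if communication succeeded
    baseline = reward.mean()                     # simple variance reduction
    log_prob = msg_dist.log_prob(messages) + guess_dist.log_prob(guesses)
    loss = -((reward - baseline) * log_prob).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()

print("accuracy on the last batch:", reward.mean().item())
```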

Intentionality

  1. Multi-agent Activity Parsing and Prediction on LEMMA
  • Try grammar parsing / planning methods to solve activity learning, parsing, and prediction in a neural-symbolic way.
  • Challenges:
    • How to represent activities in a multi-agent scenario?
    • How to plan for all agents according to some symbolic plan structure?
    • What are better ways of evaluation in addition to future activity prediction?
  2. The Watch-And-Help benchmark
  • Play with the Virtual Home environment
  • Reproduce baselines for the Watch-And-Help benchmark
  • Propose new algorithms to solve the human-robot teaming problem with goals/intents considered.
  • Challenges:
    • Getting familiar with the Virtual Home environment and properly setting up the training/testing process.
    • In human-robot teaming, how should goals and intents be represented and combined into the planning model for intelligent agents?
    • Beating powerful baselines.

Animacy

Generate animate and inanimate dot motion

  • Give your formulation of a unified animate and inanimate motion problem. Present the generative (synthesis) and discriminative versions of your model, respectively.
  • Generate diverse animate and inanimate dot motion stimuli (a simple generative sketch follows this list).
  • Perform a human study on your generated stimuli to verify your model.
  • Given the provided stimuli, can your model classify the motion as animate or inanimate?
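One possible generative sketch, assuming a simple operationalization: inanimate dots follow passive ballistic dynamics, while animate dots are self-propelled and steer toward intermittently re-sampled goals. This is a starting point for the synthesis half of the project, not a claim about the right formulation.

```python
# Sketch: generate labeled animate vs. inanimate dot trajectories.
import numpy as np

rng = np.random.default_rng(0)

def inanimate_trajectory(T=100, dt=0.1, gravity=(0.0, -9.8)):
    pos, vel, traj = rng.uniform(-1, 1, 2), rng.normal(0, 2, 2), []
    for _ in range(T):
        vel = vel + dt * np.asarray(gravity)      # passive, force-driven
        pos = pos + dt * vel
        traj.append(pos.copy())
    return np.array(traj)

def animate_trajectory(T=100, dt=0.1, speed=2.0, goal_switch_p=0.05):
    pos, goal, traj = rng.uniform(-1, 1, 2), rng.uniform(-5, 5, 2), []
    for _ in range(T):
        if rng.random() < goal_switch_p:          # occasionally change intention
            goal = rng.uniform(-5, 5, 2)
        direction = goal - pos
        direction /= np.linalg.norm(direction) + 1e-8
        pos = pos + dt * speed * direction        # self-propelled, goal-directed
        traj.append(pos.copy())
    return np.array(traj)

# Usage: build a labeled dataset of (trajectory, is_animate) pairs for the
# discriminative half of the project and for the human study.
dataset = [(inanimate_trajectory(), 0) for _ in range(50)] + \
          [(animate_trajectory(), 1) for _ in range(50)]
print(len(dataset), dataset[0][0].shape)
```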

Theory of Mind (ToM)

  1. Build a mini Theory of Mind system.
  • Include necessary modules and show the capability of your system in real or simulated scenarios.
  • Requirements:
    • Do NOT build on open-source projects.
    • Include the following mental components: desire, belief, and intent.
    • Include the inverse mental inference process (a minimal sketch of desire inference appears after this list).
    • Include at least two agents. Show your model’s capability in modeling the interaction based on ToM.
    • Include the forward planning process.
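A minimal sketch of the inverse mental inference ingredient only: Bayesian inference over a hidden desire (a goal location on a 1D corridor) from observed actions, assuming the observed agent is softmax-rational. Belief modeling, intent over time, a second agent, and forward planning would be layered on top; the corridor world and rationality parameter are illustrative assumptions.

```python
# Sketch: infer a hidden desire (goal) from observed actions by inverting a
# softmax-rational action model with Bayes' rule.
import numpy as np

positions = np.arange(10)            # a 1D corridor of cells 0..9
goals = np.arange(10)                # candidate desires: "wants to reach g"
beta = 3.0                           # rationality (inverse temperature)

def action_likelihood(state, action, goal):
    """P(action | state, goal); action is -1 (left) or +1 (right)."""
    utilities = {a: -abs((state + a) - goal) for a in (-1, +1)}
    exp_u = {a: np.exp(beta * u) for a, u in utilities.items()}
    return exp_u[action] / sum(exp_u.values())

def infer_desire(trajectory):
    """Posterior over goals given a sequence of (state, action) pairs."""
    log_post = np.zeros(len(goals))                   # uniform prior
    for state, action in trajectory:
        log_post += np.log([action_likelihood(state, action, g) for g in goals])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# The agent starts at cell 5 and steps right three times: the posterior
# should concentrate on goals to the right of its path.
observed = [(5, +1), (6, +1), (7, +1)]
print(np.round(infer_desire(observed), 2))
```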
  2. Build an AI agent that tackles the Hanabi challenge using the official environment.

Abstract Reasoning

  1. Implement the probabilistic PDDL solver and reproduce the block stacking experiments (see Huang, De-An, et al.).

  2. Design a minimal differentiable engine that supports implicit convex optimization and differentiation (see Differentiable Optimization above).

  3. Implement DreamCoder in Python / PyTorch and reproduce one experiment.

Utility

Learning human utility for object arrangement.
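One standard way to ground this project is to learn a utility function from pairwise human preferences between arrangements (a Bradley-Terry / logistic ranking model). The sketch below uses hypothetical arrangement features and a synthetic annotator in place of real human data.

```python
# Sketch: fit u(x) = w^T phi(x) so that P(A preferred over B) =
# sigmoid(u(A) - u(B)), from pairwise comparisons.
import torch
import torch.nn as nn

feature_dim = 4                                     # e.g., clutter, symmetry, ...
utility = nn.Linear(feature_dim, 1, bias=False)
opt = torch.optim.Adam(utility.parameters(), lr=1e-2)

true_w = torch.tensor([1.0, -0.5, 2.0, 0.0])        # synthetic ground truth

for step in range(2000):
    a = torch.randn(32, feature_dim)                # features of arrangement A
    b = torch.randn(32, feature_dim)                # features of arrangement B
    # Synthetic annotator: prefers the arrangement with higher true utility.
    prefer_a = ((a - b) @ true_w > 0).float()

    logits = (utility(a) - utility(b)).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, prefer_a)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned weights should point in roughly the same direction as true_w.
print("learned weights:", utility.weight.data.numpy().round(2))
```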

XAI and Communication

RSA (Rational Speech Acts) Model
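The Rational Speech Acts model has a standard recursion: a literal listener L0 interprets utterances by their truth conditions, a pragmatic speaker S1 soft-maximizes informativeness, and a pragmatic listener L1 inverts the speaker by Bayes' rule. The sketch below runs this recursion on the classic Frank & Goodman reference game; the lexicon and rationality parameter are the usual textbook choices, and a course project would substitute its own domain.

```python
# Sketch: the standard RSA recursion (L0 -> S1 -> L1) on a small reference game.
import numpy as np

objects = ["blue square", "blue circle", "green square"]
utterances = ["blue", "green", "square", "circle"]
# Truth-conditional lexicon: L[u, o] = 1 if utterance u is true of object o.
L = np.array([[1, 1, 0],      # "blue"
              [0, 0, 1],      # "green"
              [1, 0, 1],      # "square"
              [0, 1, 0]])     # "circle"
alpha = 1.0                   # speaker rationality

def normalize(M, axis):
    return M / M.sum(axis=axis, keepdims=True)

L0 = normalize(L.astype(float), axis=1)                       # P_L0(object | utterance)
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-12)), axis=1)  # P_S1(utterance | object)
L1 = normalize(S1.T, axis=1)                                  # P_L1(object | utterance)

print("L1('blue'):", dict(zip(objects, L1[utterances.index("blue")].round(2))))
# The pragmatic listener shifts probability toward the blue square, since a
# speaker who wanted the blue circle would more likely have said the
# unambiguous "circle".
```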
