Project Progress

LLMs

  • Setting up the LLM environment locally (llama2-7b-chat).
  • Working with Pascal to develop self-hosted LLMs (developing the API).
  • [WA Data & LLM Platform]
  • [LLM API]
  • [Bring LLM to WA Community]
    graph LR;
      Client-->|LLMTaskRequest|APIServer;
      APIServer-->|Response|Client;
      APIServer-->|QueueTasks|TaskQueue;
      LLMWorker-->|GetTask|TaskQueue;
      TaskQueue-->|PopTask|LLMWorker;
      LLMWorker-->|Response|APIServer;
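    The request flow in the diagram above can be sketched as a minimal task-queue loop. This is an illustrative assumption, not the actual API implementation: the function names and the use of Python's `queue.Queue` are placeholders, and the model call is stubbed out.

    ```python
    import queue
    import threading

    # Hypothetical sketch of the diagram above: the APIServer queues
    # LLMTaskRequests, and an LLMWorker pops tasks and produces responses.
    task_queue = queue.Queue()   # TaskQueue
    results = {}                 # stand-in for the APIServer's response store

    def api_server_submit(task_id, prompt):
        """Client -->|LLMTaskRequest| APIServer -->|QueueTasks| TaskQueue."""
        task_queue.put({"id": task_id, "prompt": prompt})

    def llm_worker():
        """LLMWorker -->|GetTask| TaskQueue; TaskQueue -->|PopTask| LLMWorker."""
        while True:
            task = task_queue.get()
            # Placeholder for a real model call (e.g. llama2-7b-chat).
            results[task["id"]] = f"echo: {task['prompt']}"
            task_queue.task_done()   # LLMWorker -->|Response| APIServer

    threading.Thread(target=llm_worker, daemon=True).start()
    api_server_submit(1, "hello")
    task_queue.join()            # block until the worker has responded
    print(results[1])            # prints "echo: hello"
    ```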
    

Simulator

  • Setting up basic environment for the project.
    • Dockerizing the simulator.
    • Preparing for integrations.

TODOs

  • [Paper].
    • Working with Xiangrui; the [paper] was accepted by the IEEE IV 2024 conference.
    • Submit to [ITSC] if possible (due 1 May). Suggestions?
    • Ideas? Is latency a good topic?
      • Metrics?
        • LLM compute time, vehicle response time, end-to-end response time.
      • What about other ideas, e.g. accuracy?
  • Research Proposal:
    • Construct a framework for integrating LLM with robotics in the context of navigation.
      • *Reasoning Accuracy:
        • Multi-modal LLMs vs text-only LLMs: the spatio-temporal reasoning issue.
        • Common approaches in LLMs: prompting, RAG, and fine-tuning.
      • Reasoning Latency:
        • On-premise open-source LLMs vs Cloud-based LLMs.
        • Scaling down LLM size (lightweight LLMs).
      • Reasoning Security:
        • Evaluation framework.
      • Method:
        • EyeSim, CARLA, ShuttleBus, indoor robot?
        • LLM optimisations:
          • Prompting: chain-of-thought, visual QA.
          • *RAG: relational DB, knowledge graph, vector store, attention.
          • Fine-tuning?
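
For the latency topic above, the three candidate metrics can be separated with simple timestamps. A minimal sketch, where `call_llm` and `vehicle_act` are hypothetical stubs standing in for the real model call and the vehicle/simulator:

```python
import time

def call_llm(prompt: str) -> str:
    """Stub standing in for the real model call (e.g. llama2-7b-chat)."""
    time.sleep(0.01)   # pretend compute
    return "turn left"

def vehicle_act(command: str) -> None:
    """Stub standing in for the vehicle/simulator executing a command."""
    time.sleep(0.005)

def measure(prompt: str) -> dict:
    t0 = time.perf_counter()
    command = call_llm(prompt)
    t1 = time.perf_counter()   # LLM compute time ends here
    vehicle_act(command)
    t2 = time.perf_counter()   # vehicle response time ends here
    return {
        "llm_compute_s": t1 - t0,
        "vehicle_response_s": t2 - t1,
        "end_to_end_s": t2 - t0,
    }

metrics = measure("obstacle ahead, what should the vehicle do?")
```

In a real experiment the stubs would be replaced by the deployed LLM API and the simulator, and `measure` would be run over many prompts to report distributions rather than single samples.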
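
One of the RAG variants listed above (vector store) can be sketched as a toy retriever. The bag-of-words "embedding" and cosine similarity here are simplifying assumptions; a real system would use a learned embedding model and a proper vector store.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system would use a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs most similar to the query (the vector-store RAG step)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "the corridor ahead is blocked by a chair",
    "battery level is low, return to dock",
    "turn left at the next junction to reach the lab",
]
print(retrieve("how do I get to the lab", docs))
```

The retrieved documents would then be injected into the LLM prompt as context before reasoning.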

Additional

  • Add name to the paper?
  • Kaya account for using GPUs.